diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..b1c524c8b167f6fd7c6212aee3f01d37826ac7c1
--- /dev/null
+++ b/README.md
@@ -0,0 +1,144 @@
+---
+license: cc-by-4.0
+configs:
+- config_name: livesqlbench
+  data_files:
+  - path: livesqlbench_data.jsonl
+    split: dev
+viewer: true
+tags:
+- text-to-sql
+- database
+---
+# 🚀 LiveSQLBench-Base-Full-v1
+*A dynamic, **contamination‑free** benchmark for evaluating LLMs on complex, real‑world **text‑to‑SQL** tasks.*
+
+[🌐 Website](https://livesqlbench.ai) • [📄 Paper (coming soon)](https://arxiv.org) • [💻 GitHub](https://github.com/bird-bench/livesqlbench) • [🗄️ LiveSQLBench-Base-Lite](https://huggingface.co/datasets/birdsql/livesqlbench-base-lite) • [🌐 BIRD-Interact](https://bird-interact.github.io)
+
+Maintained by the **🦜 [BIRD Team @ HKU](https://bird-bench.github.io)** & **☁️ [Google Cloud](https://cloud.google.com/)**
+
+
+## 📊 LiveSQLBench Overview
+
+**LiveSQLBench** (BIRD-SQL Pro v0.5) is a **contamination-free**, **continuously evolving** benchmark designed to evaluate LLMs on **complex, real-world text-to-SQL tasks**, featuring **diverse real-world user queries**, including **Business Intelligence (BI)**, **CRUD operations**, and more. Each release will include **around 20 new, fully open-source DBs** curated by the BIRD team through expert collaboration and continuous refinement. It covers a **wide range of database sizes**, from **end-user level** (around 127 columns) to **industrial level** (1340+ columns). Key features of LiveSQLBench:
+
+1. **🗄️ Live Databases:**
+Constructed dynamically from extensive, regularly updated CSV datasets, with both base (end-user level) and large (industrial level, 1340+ columns per DB) versions to test scalability.
+
+2. **💬 Live User Queries and SQL:**
+Each task pairs an unambiguous user query with an annotated, gold-standard SQL statement. The user queries are grounded in an external knowledge base, and the solution SQL statements range from medium to hard complexity.
+
+3. **🧠 Contextual Reasoning (HKB):**
+Every DB includes a hierarchical knowledge base (HKB) in which each knowledge entry may depend on other entries, requiring multi-hop reasoning. Two HKB formats are provided: (1) a structured JSON format and (2) an unstructured document format.
+
+4. **🔍 The First Full SQL Spectrum:**
+Supports not just SELECT (Business Intelligence) queries, but also CRUD queries (e.g., UPDATE, CREATE, and other database management operations).
+
+5. **⚡ Automated Evaluation:**
+Supports fast evaluation via PostgreSQL templates and Docker. Each question includes verifiable test cases for accurate, reproducible scoring. A soft EX metric is used to evaluate SELECT-only tasks; customized test cases are designed for DBA tasks such as CRUD (CREATE, READ, UPDATE, DELETE).
+
+6. **🔄 Truly Live & Hidden Test:**
+New databases and tasks are added over time. Each release features both an open development phase and a hidden test phase. The hidden test set from each release becomes the open development set for the next release, ensuring continuous evolution and fair evaluation.
+
+
+> 💡 LiveSQLBench's continuously updated databases, tasks, and HKBs support [BIRD-Interact](https://bird-interact.github.io)'s conversational and agentic evaluation. BIRD-Interact evaluates LLMs' text-to-SQL ability in dynamic, interactive settings with database and user simulation.
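+
+### 🔎 Quick Look at the Data
+
+Below is a minimal, illustrative sketch (not part of the official tooling) of loading the task file and one database's HKB after cloning this repository as described in *How to Use the Dataset* below. The local path `livesqlbench-base-full-v1` and the helper `expand` are assumptions made for the example; field and file names follow the documentation in this README.
+
+```python
+import json
+from pathlib import Path
+
+ROOT = Path("livesqlbench-base-full-v1")  # hypothetical local clone of this dataset repo
+
+# Each line of livesqlbench_data.jsonl is one task (ground-truth SQL and test cases are withheld here).
+tasks = [json.loads(line) for line in (ROOT / "livesqlbench_data.jsonl").open(encoding="utf-8")]
+print(len(tasks), "tasks; first:", tasks[0]["instance_id"], tasks[0]["selected_database"], tasks[0]["category"])
+
+# Every database ships its own HKB file: <db>/<db>_kb.jsonl.
+db = "archeology_scan"
+kb_file = ROOT / db / f"{db}_kb.jsonl"
+kb = {entry["id"]: entry for entry in map(json.loads, kb_file.open(encoding="utf-8"))}
+
+def expand(knowledge_id, seen=None):
+    """Collect a knowledge entry plus all entries it depends on.
+    children_knowledge is a list of prerequisite IDs, or -1 when there are none."""
+    seen = set() if seen is None else seen
+    if knowledge_id in seen or knowledge_id not in kb:
+        return seen
+    seen.add(knowledge_id)
+    children = kb[knowledge_id]["children_knowledge"]
+    if children != -1:
+        for child in children:
+            expand(child, seen)
+    return seen
+
+# Example: the Scan Quality Score (id 3) builds on SRI (id 0) and SCE (id 1).
+print(sorted(expand(3)))
+```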
+
+## Previous Releases
+- [LiveSQLBench-Base-Lite](https://huggingface.co/datasets/birdsql/livesqlbench-base-lite)
+
+## 🎯 Current Release: LiveSQLBench-Base-Full-v1
+We are pleased to release **LiveSQLBench-Base-Full-v1**, containing **22 NEW end-user-level databases** with **600 NEW tasks** (410 SELECT-only, 190 Management), plus the **HKB-JSON** format and **JSON operations in SQL**.
+
+Some **NEW features**:
+- **More Natural User Tasks**: User tasks are more colloquial and natural, so their mapping to the DB and HKB is more implicit. Some tasks are reasoning-intensive, requiring deeper, multi-hop reasoning from the model.
+- **More Realistic and Complex DBs**: Databases are more realistic and complex, with more N:M relationships and noisier schemas and data.
+
+
+## 💻 How to Use the Dataset
+### Get the Dataset and Ground Truth
+Download the dataset, which contains the data file `livesqlbench_data.jsonl` and the DB metafiles (schema, HKB, and column-meaning files), with:
+```bash
+git clone https://huggingface.co/datasets/birdsql/livesqlbench-base-full-v1
+```
+To prevent data leakage through automated crawling, please request access to the ground truth and test cases by emailing **[📧 bird.bench25@gmail.com](mailto:bird.bench25@gmail.com)** with the subject line `[livesqlbench-base-full-v1 GT&Test Cases]`. An automated response will provide these data fields.
+
+### Get the Database DDL Dumps and Building Scripts
+The complete PostgreSQL **database dumps** and **building scripts** (`init-databases_postgresql.sh`) can be downloaded from [Google Drive](https://drive.google.com/file/d/1V9SFIWebi27JtaDUAScG1xE9ELbYcWLR/view?usp=sharing).
+
+### Evaluation
+For details on usage and evaluation, see the [livesqlbench repo](https://github.com/bird-bench/livesqlbench).
+
+
+## 📁 Directory Structure
+Each database has its own directory:
+
+```
+.
+├── README.md
+├── database_name
+│   ├── database_name_column_meaning_base.json
+│   ├── database_name_kb.jsonl
+│   ├── database_name_schema.txt
+...
+├── livesqlbench_data.jsonl
+```
+
+### 📂 Directory Contents
+
+* `*_schema.txt`: Database schema.
+* `*_kb.jsonl`: Hierarchical knowledge base entries required to solve the user tasks.
+  * `id`: The unique identifier of the knowledge entry.
+  * `knowledge`: The name of the knowledge entry.
+  * `description`: A description of the knowledge entry.
+  * `definition`: The precise definition of the knowledge entry.
+  * `type`: The type of the knowledge entry.
+  * `children_knowledge`: A list of knowledge IDs that this entry depends on; -1 means no dependencies.
+* `*_column_meaning_base.json`: Explanations of the database columns.
+
+
+## 📋 Dataset Fields (`livesqlbench_data.jsonl`)
+
+* **instance\_id**: Unique task identifier.
+* **selected\_database**: Associated database name.
+* **query**: The more natural, colloquial user query (used in evaluation and on our leaderboard).
+* **normal\_query**: A more concise and direct phrasing of the query, provided for reference only.
+* **sol\_sql** 🔒: Ground-truth SQL solution.
+* **external\_knowledge** 🔒: IDs of the external knowledge required to solve the user task.
+* **preprocess\_sql**: SQL setup queries.
+* **clean\_up\_sql**: SQL queries to reset the database state.
+* **test\_cases** 🔒: Test cases used to validate the predicted SQL.
+* **category**: "Query" (SELECT-only) or "Management" (CRUD).
+* **high\_level**: Boolean indicating whether the user query contains high-level description. +* **conditions**: Indicates decimal/distinct conditions in the user query. +* **difficulty\_tier**: Task difficulty (Simple, Moderate, Challenging). + + +## 🔒 Accessing Complete Data + +To avoid data leakage by auto-crawling, certain fields (e.g., `sol_sql`, `test_cases`, `external_knowledge`) are excluded from the public dataset. For the full dataset, please email: **[📧 bird.bench25@gmail.com](mailto:bird.bench25@gmail.com)** with subject tag `[livesqlbench-base-full-v1 GT&Test Cases]`, which will be sent automatically. + +## 🏆 Model Performance on LiveSQLBench-Base-Full-v1 (2025-09-04) + +Please refer to our homepage: [🌐 LiveSQLBench](https://livesqlbench.ai) + +## 🔄 Stay Tuned! + +Upcoming releases: + +* **🔄 LiveSQLBench-Large-Lite:** Industrial-scale databases with 1340+ columns. +* **🔄 LiveSQLBench-Large-Full:** Comprehensive large-scale datasets. + +Want new dialects? Vote for new SQL dialects [🗳️ here](https://docs.google.com/forms/d/e/1FAIpQLSfEogmsA7LObI13KOoiojdnYfW28KEqvEVtC9hXaZJ8O9aCpQ/viewform?usp=header)! + + + + + +## 📄 License: + +cc-by-sa-4.0 \ No newline at end of file diff --git a/archeology_scan/archeology_scan_column_meaning_base.json b/archeology_scan/archeology_scan_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..baac83f617f73baaf3bb7a17a830dd192e624d7a --- /dev/null +++ b/archeology_scan/archeology_scan_column_meaning_base.json @@ -0,0 +1,217 @@ +{ + "archeology_scan|projects|arcregistry": "TEXT. Unique identifier for archeologys project registration. PK = Projects(ArcRegistry). Example: PR7509.", + "archeology_scan|projects|vesseltag": "TEXT. Project name or designation label. Example: Project Happy.", + "archeology_scan|projects|fundflux": "TEXT. Source of project funding or financial backing. Possible values: Government, Grant, Private, University.", + "archeology_scan|projects|authpin": "TEXT. Official permit or authorization number for excavation. Example: PMT4719.", + "archeology_scan|projects|authhalt": "TEXT. Permit expiration date or authorization termination date. Example: 05/12/2025.", + "archeology_scan|personnel|crewregistry": "TEXT. Unique identifier for operator personnel registration. PK = Personnel(CrewRegistry). Example: OP4641.", + "archeology_scan|personnel|crewlabel": "TEXT. Name or designation of the operator personnel. Example: Joel Wallace.", + "archeology_scan|personnel|leadregistry": "TEXT. Unique identifier for supervisor personnel registration. Example: SV7658.", + "archeology_scan|personnel|leadlabel": "TEXT. Name or designation of the supervisor personnel. Example: Michael Kaiser.", + "archeology_scan|sites|zoneregistry": "TEXT. Unique site code identifier for archeologys zone. PK = Sites(ZoneRegistry). Example: SC9016.", + "archeology_scan|sites|zonelabel": "TEXT. Descriptive name of the archeologys site. Example: Site-North Alexanderville.", + "archeology_scan|sites|digunit": "TEXT. Excavation unit designation within the site. Example: Unit-C9.", + "archeology_scan|sites|gridtrace": "TEXT. Grid reference system coordinates for site location. Example: S29-E8.", + "archeology_scan|sites|phasefactor": "TEXT. Cultural period or chronological phase classification. **NULL means cultural period classification is undetermined or analysis incomplete.**. Example: Iron Age.", + "archeology_scan|sites|guessdate": "TEXT. Estimated date or age of archeologys materials. 
Example: -2929 BCE.", + "archeology_scan|sites|typesite": "TEXT. Classification of archeologys site type. **NULL means site type classification is pending or cannot be determined.**. Example: Bur..", + "archeology_scan|sites|guardhint": "TEXT. Weather protection measures in place at site. **NULL means weather protection assessment not completed or not applicable.**. Possible values: Permanent, Temporary.", + "archeology_scan|equipment|equipregistry": "TEXT. Unique serial number identifier for scanning equipment. PK = Equipment(EquipRegistry). Example: SN20065.", + "archeology_scan|equipment|equipform": "TEXT. Type classification of scanning equipment. Example: LiDAR scanner, precise 3D measurement technology.", + "archeology_scan|equipment|equipdesign": "TEXT. Model designation of scanning equipment. Example: Model-669.", + "archeology_scan|equipment|equiptune": "DATE. Date of last equipment calibration or maintenance. Examprle: 2024-11-01.", + "archeology_scan|equipment|equipstatus": "TEXT. Current operational condition of equipment. Excellent condition, optimal performance guaranteed.", + "archeology_scan|equipment|powerlevel": "TEXT. Battery charge level as percentage. Example: 62% battery, extended operation available.", + "archeology_scan|equipment|transport_speed": "TEXT. Transportation speed for equipment delivery. Example: 85 km/h.", + "archeology_scan|equipment|coverage_rate": "TEXT. Area coverage rate during scanning. Example: 12 m²/hr.", + "archeology_scan|equipment|point_generation_rate": "TEXT. Point cloud generation efficiency. Example: 62000 pts/min.", + "archeology_scan|equipment|cost_per_area": "TEXT. Cost efficiency per unit area. Example: 45 USD/m².", + "archeology_scan|equipment|accuracy_per_time": "TEXT. Accuracy improvement over scanning time. Example: 3 mm/hr.", + "archeology_scan|equipment|power_consumption": "TEXT. Equipment power consumption rate. Example: 125 W/hr.", + "archeology_scan|equipment|battery_drain": "TEXT. Battery consumption rate during operation. Example: 8%/hr.", + "archeology_scan|equipment|storage_usage_rate": "TEXT. Storage space consumption during scanning. Example: 2.8 GB/hr.", + "archeology_scan|scans|questregistry": "TEXT. Unique identifier for scan session record. PK = Scans(QuestRegistry). Example: ASD409481.", + "archeology_scan|scans|chronotag": "TEXT. Precise timestamp when scan was performed. Example: 2024-09-03 07:20:28.479288 GMT+8.", + "archeology_scan|scans|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|scans|crewref": "TEXT. Reference to operator who performed the scan. FK to Personnel.", + "archeology_scan|scans|zoneref": "TEXT. Reference to site where scan was conducted. FK to Sites.", + "archeology_scan|scans|scancount": "TEXT. Total number of individual scans in session. Example: 5 scans, moderate coverage session.", + "archeology_scan|scans|climtune": "TEXT. Weather conditions during scanning session. **NULL means weather conditions not recorded or monitoring equipment unavailable.**. Example: Windy conditions, potential equipment stability issues.", + "archeology_scan|scans|huecatch": "TEXT. Color capture mode or settings used during scan. **NULL means color capture not enabled or settings not recorded.**. Possible values: Grayscale, RGB.", + "archeology_scan|scans|fmtfile": "TEXT. File format used for storing scan data. Example: PTS.", + "archeology_scan|scans|size": "TEXT. Total file size of scan data in gigabytes. 
Example: 24.71 GB.", + "archeology_scan|scans|scan_cost": "TEXT. Equipment cost per GB. Example: 3.22 USD/GB.", + "archeology_scan|scans|scanning_rate": "TEXT. Data acquisition rate during scanning. Example: 63 MB/min.", + "archeology_scan|environment|airregistry": "BIGSERIAL. Auto-generated unique identifier for environmental record. PK = Environment(AirRegistry).", + "archeology_scan|environment|zoneref": "TEXT. Reference to site where environmental data was recorded. FK to Sites.", + "archeology_scan|environment|equipref": "TEXT. Reference to equipment used for environmental monitoring. FK to Equipment.", + "archeology_scan|environment|photomap": "TEXT. Photogrammetry overlap percentage settings. Possible values: 60%, 70%, 80%, 90%.", + "archeology_scan|environment|imgcount": "TEXT. Number of images captured during session per minutes. Example: 248 imgs/mins.", + "archeology_scan|pointcloud|cloudregistry": "BIGSERIAL. Auto-generated unique identifier for point cloud record. PK = PointCloud(CloudRegistry).", + "archeology_scan|pointcloud|crewref": "TEXT. Reference to operator who generated point cloud. FK to Personnel.", + "archeology_scan|pointcloud|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|mesh|facetregistry": "BIGSERIAL. Auto-generated unique identifier for mesh record. PK = Mesh(FacetRegistry).", + "archeology_scan|mesh|zoneref": "TEXT. Reference to site where mesh was generated. FK to Sites.", + "archeology_scan|mesh|equipref": "TEXT. Reference to equipment used for mesh generation. FK to Equipment.", + "archeology_scan|spatial|domainregistry": "BIGSERIAL. Auto-generated unique identifier for spatial measurement record. PK = Spatial(DomainRegistry).", + "archeology_scan|spatial|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|spatial|crewref": "TEXT. Reference to operator who performed spatial measurements. FK to Personnel.", + "archeology_scan|features|traitregistry": "BIGSERIAL. Auto-generated unique identifier for feature analysis record. PK = Features(TraitRegistry).", + "archeology_scan|features|zoneref": "TEXT. Reference to site where features were analyzed. FK to Sites.", + "archeology_scan|features|equipref": "TEXT. Reference to equipment used for feature extraction. FK to Equipment.", + "archeology_scan|features|traitextract": "TEXT. Method used for feature extraction analysis. Possible values: Automatic, Manual, Semi-automatic.", + "archeology_scan|conservation|cureregistry": "BIGSERIAL. Auto-generated unique identifier for conservation record. PK = Conservation(CureRegistry).", + "archeology_scan|conservation|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|conservation|zoneref": "TEXT. Reference to site requiring conservation attention. FK to Sites.", + "archeology_scan|conservation|harmassess": "TEXT. Damage assessment evaluation results. **NULL means damage assessment not conducted or evaluation pending.**. Possible values: Minor, Moderate, Severe.", + "archeology_scan|conservation|curerank": "TEXT. Conservation priority ranking classification. Possible values: Critical priority, immediate conservation required.", + "archeology_scan|conservation|structstate": "TEXT. Structural stability assessment status. Possible values: Moderate condition, careful monitoring required.", + "archeology_scan|conservation|intervhistory": "TEXT. History of previous conservation interventions. 
**NULL means no prior conservation interventions recorded or documentation unavailable.**. Possible values: Major, Minor.", + "archeology_scan|conservation|priordocs": "TEXT. Previous documentation and records available. **NULL means no previous documentation exists or records not accessible.**. Possible values: Complete, Partial.", + "archeology_scan|registration|logregistry": "BIGSERIAL. Auto-generated unique identifier for registration record. PK = Registration(LogRegistry).", + "archeology_scan|registration|crewref": "TEXT. Reference to operator who performed registration. FK to Personnel.", + "archeology_scan|registration|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|processing|flowregistry": "BIGSERIAL. Auto-generated unique identifier for processing record. PK = Processing(FlowRegistry).", + "archeology_scan|processing|equipref": "TEXT. Reference to equipment used for data processing. FK to Equipment.", + "archeology_scan|processing|zoneref": "TEXT. Reference to site where data was processed. FK to Sites.", + "archeology_scan|processing|flowsoft": "TEXT. Software application used for data processing. **NULL means processing software not specified or custom processing pipeline used.**. Example: RealityCapture.", + "archeology_scan|processing|stashloc": "TEXT. Storage location for processed data files. Possible values: Cloud, Local, Network.", + "archeology_scan|processing|safebak": "TEXT. Backup status and redundancy measures. Possible values: Completed, In Progress, Pending.", + "archeology_scan|processing|datalevel": "TEXT. Data access level and security classification. Possible values: Confidential, Public, Restricted.", + "archeology_scan|processing|metabench": "TEXT. Metadata standard compliance and format. Possible values: CIDOC CRM, Custom, Dublin Core.", + "archeology_scan|processing|coordframe": "TEXT. Coordinate system used for spatial reference. Possible values: CUSTOM, Custom, LOCAL, Local, WGS84, Wgs84, custom, local, wgs84.", + "archeology_scan|processing|elevref": "TEXT. Elevation reference datum used for measurements. Possible values: Arbitrary, Local, Sea Level.", + "archeology_scan|processing|flowstage": "TEXT. Current processing stage or workflow step. Possible values: Aligned, Final, Meshed, Raw, Textured.", + "archeology_scan|processing|processing_rate": "TEXT. Data processing throughput rate. Example: 2.3 MB/s.", + "archeology_scan|qualitycontrol|qualregistry": "BIGSERIAL. Auto-generated unique identifier for quality control record. PK = QualityControl(QualRegistry).", + "archeology_scan|qualitycontrol|arcref": "TEXT. Reference to associated archeologys project. FK to Projects.", + "archeology_scan|qualitycontrol|crewref": "TEXT. Reference to operator who performed quality control. FK to Personnel.", + "archeology_scan|qualitycontrol|accucheck": "TEXT. Accuracy assessment evaluation results. Possible values: Accuracy check not required, standard quality sufficient.", + "archeology_scan|qualitycontrol|ctrlstate": "TEXT. Quality control status and validation state. Possible values: Failed, Passed, Pending.", + "archeology_scan|qualitycontrol|valimeth": "TEXT. Validation method used for quality assessment. Possible values: Automated, Hybrid, Visual.", + "archeology_scan|qualitycontrol|valistate": "TEXT. Validation status and completion state. Possible values: Validation rejected, data quality issues identified.", + "archeology_scan|qualitycontrol|archstat": "TEXT. Archival status for long-term data preservation. 
**NULL means archival process not initiated or status not determined.**. Example: Verified.", + "archeology_scan|qualitycontrol|pubstat": "TEXT. Publication status and dissemination readiness. Possible values: Draft status, preparation for submission.", + "archeology_scan|qualitycontrol|copystat": "TEXT. Copyright status and intellectual property rights. **NULL means copyright status not established or legal review pending.**. Example: Open Access.", + "archeology_scan|qualitycontrol|refmention": "TEXT. Data citation format and attribution requirements. Example: Citation-8447.", + "archeology_scan|qualitycontrol|remark": "TEXT. Additional notes and observations for quality control. **NULL means no additional notes recorded or observations not documented.**. Example: Sell shoulder understand serious degree particular game..", + "archeology_scan|sites|Geo_Position": { + "column_meaning": "JSONB column. Geographic positioning data including coordinates, elevation, and excavation depth measurements", + "fields_meaning": { + "Geo_X": "REAL. Latitude coordinate in decimal degrees. Example: -9.602135.", + "Geo_Y": "REAL. Longitude coordinate in decimal degrees. Example: -2.756411.", + "Height_M": "REAL. Altitude above sea level in meters. Example: 4391.4.", + "Depth_C": "REAL. Excavation depth below surface in centimeters. Example: 329.9." + } + }, + "archeology_scan|sites|Site_Status": { + "column_meaning": "JSONB column. Site condition and security assessment including preservation, access, safety, and risk evaluation status", + "fields_meaning": { + "Pres_Stat": "TEXT. Current preservation status of archeologys remains. Possible values: Critical, Excellent, Fair, Good, Poor.", + "Entry_Stat": "TEXT. Site accessibility status for personnel and equipment. Possible values: Closed, Open, Restricted.", + "Safe_Rank": "TEXT. Security level classification for site protection. **NULL means security assessment not completed or classification pending.**. Example: Minimal.", + "Insur_Stat": "TEXT. Insurance coverage status for site operations. Possible values: Active, Expired, Pending.", + "Risk_Eval": "TEXT. Risk assessment status for site safety evaluation. **NULL means risk assessment not conducted or evaluation incomplete.**. Example: Req..", + "Health_Eval": "TEXT. Health and safety evaluation status for site conditions. Possible values: Approved, Pending, Review.", + "Env_Haz": "TEXT. Environmental risk factors present at the site. Possible values: High, Low, Medium." + } + }, + "archeology_scan|environment|Ambient_Cond": { + "column_meaning": "JSONB column. Environmental conditions during scanning including temperature, humidity, lighting, and connectivity status", + "fields_meaning": { + "Ambic_Temp": "REAL. Ambient air temperature in degrees Celsius. Example: 25.3.", + "Hume_Pct": "REAL. Relative humidity percentage during scanning. Example: 60.4.", + "Illume_Lux": "BIGINT. Light intensity conditions in lux units. Example: 86054.", + "Geo_Signal": "TEXT. GPS signal quality and reception status. **NULL means GPS signal monitoring not available or receiver malfunction.**. Possible values: Excellent, Good, Poor.", + "Track_Status": "TEXT. RTK positioning system status and accuracy. **NULL means RTK system not operational or status monitoring unavailable.**. Possible values: Fixed, Float.", + "Link_Status": "TEXT. Network connectivity status during scanning. Possible values: Connected, Disconnected, Limited." + } + }, + "archeology_scan|pointcloud|Cloud_Metrics": { + "column_meaning": "JSONB column. 
Point cloud quality and density measurements including resolution, coverage, overlap, and noise analysis", + "fields_meaning": { + "Scan_Resol_Mm": "REAL. Scanning resolution in millimeters. Example: 2.4.", + "Point_Dense": "BIGINT. Point density per square meter. Example: 42812.", + "Cover_Pct": "REAL. Percentage of surface area covered by scan. Example: 91.2.", + "Total_Pts": "BIGINT. Total number of points in the point cloud. Example: 46562436.", + "Cloud_Dense": "BIGINT. Point cloud density measurement. Example: 9449.", + "Lap_Pct": "REAL. Overlap percentage between scan segments. Example: 31.3.", + "Noise_Db": "REAL. Noise level measurement in decibels. Example: 1.318.", + "Ref_Pct": "REAL. Surface reflectivity percentage measurement. Example: 11.0." + } + }, + "archeology_scan|mesh|Mesh_Specs": { + "column_meaning": "JSONB column. 3D mesh specifications including geometry, texture properties, and accuracy measurements", + "fields_meaning": { + "Facet_Verts": "BIGINT. Number of vertices in the 3D mesh. Example: 7234721.", + "Facet_Faces": "BIGINT. Number of faces in the 3D mesh. Example: 5997318.", + "Facet_Res_Mm": "REAL. Mesh resolution in millimeters. Example: 3.2.", + "Tex_Dist": "TEXT. Texture resolution classification or setting. **NULL means texture resolution not specified or texture mapping not applied.**. Possible values: 1K, 2K, 4K.", + "Tex_Pix": "BIGINT. Texture size in pixels. Possible values: 1024, 2048, 4096, 8192.", + "UV_Map_Qual": "TEXT. UV mapping quality assessment. Possible values: High, Low, Medium.", + "Geom_Delta_Mm": "REAL. Geometric accuracy measurement in millimeters. Example: 2.74." + } + }, + "archeology_scan|spatial|Spatial_Dims": { + "column_meaning": "JSONB column. Spatial dimensions and measurements including area, volume, bounding box coordinates, and orientation angles", + "fields_meaning": { + "Area_M2": "REAL. Surface area measurement in square meters. Example: 78.01.", + "Vol_M3": "REAL. Volume measurement in cubic meters. Example: 76.7.", + "Bounding_Box": { + "Box_X": "REAL. Bounding box X-dimension in meters. Example: 40.12.", + "Box_Y": "REAL. Bounding box Y-dimension in meters. Example: 1.06.", + "Box_Z": "REAL. Bounding box Z-dimension in meters. Example: 8.74." + }, + "Angles": { + "Angle_Az": "REAL. Azimuth orientation angle in degrees. Example: 342.4.", + "Angle_Tilt": "REAL. Tilt angle measurement in degrees. Example: 23.9." + }, + "Ground_Span": "REAL. Ground sampling distance in millimeters. Example: 4.13." + } + }, + "archeology_scan|features|Feature_Analysis": { + "column_meaning": "JSONB column. Archaeological feature analysis results including counts, material classification, and analysis completion status", + "fields_meaning": { + "Trait_Count": "BIGINT. Number of features identified in analysis. Example: 516.", + "Arti_Count": "BIGINT. Number of artifacts detected or counted. Example: 71.", + "Struct_Kind": "TEXT. Type of structure identified in analysis. Possible values: Artifact, Complex, Floor, Foundation, Wall.", + "Mat_Kind": "TEXT. Material type classification of analyzed features. Possible values: Ceramic, Metal, Mixed, Organic, Stone.", + "Analysis_Status": { + "Hue_Study": "TEXT. Color analysis results and classification. Possible values: Completed, Not Required, Partial.", + "Texture_Study": "TEXT. Texture analysis results and patterns. Possible values: Completed, Not Required, Partial.", + "Pattern_Note": "TEXT. Pattern recognition analysis and observations. Possible values: Completed, Not Required, Partial." 
+ } + } + }, + "archeology_scan|processing|System_Usage": { + "column_meaning": "JSONB column. Computing system resource utilization during data processing including CPU, memory, GPU usage and processing time", + "fields_meaning": { + "Flow_Hrs": "REAL. Processing time duration in hours. Example: 21.9.", + "Proc_CPU": "BIGINT. CPU usage percentage during processing. Example: 81.", + "Mem_Usage_Gb": "REAL. Memory usage in gigabytes during processing. Example: 70.3.", + "Proc_GPU": "BIGINT. GPU usage percentage during processing. Example: 84.", + "Remain_Gb": "REAL. Remaining storage space in gigabytes. Example: 983.5." + } + }, + "archeology_scan|processing|Calib_Status": { + "column_meaning": "JSONB column. Calibration and correction status for various processing components including camera, lens, and color adjustments", + "fields_meaning": { + "Station_Link": "TEXT. Total station integration status and connectivity. **NULL means total station not integrated or integration status not monitored.**. Example: Partial.", + "Cam_Cal": "TEXT. Camera calibration status and accuracy. Possible values: Calibrated, Invalid, Required.", + "Lens_Dist": "TEXT. Lens distortion correction status. Possible values: Corrected, Uncorrected, Unknown.", + "Color_Tune": "TEXT. Color balance and calibration status. **NULL means color calibration not performed or status not documented.**. Possible values: Adjusted, Required." + } + }, + "archeology_scan|registration|Reg_Accuracy": { + "column_meaning": "JSONB column. Registration accuracy metrics including reference points, error measurements, and transformation parameters", + "fields_meaning": { + "Log_Accu_Mm": "REAL. Registration accuracy measurement in millimeters. Example: 0.84.", + "Ref_Mark": "TEXT. Reference markers used for registration process. Example: 40.", + "Ctrl_Pts": "TEXT. Control points used for geometric registration. Example: 73.", + "Log_Method": "TEXT. Registration method or algorithm used. Possible values: Feature-based, Hybrid, Target-based.", + "Transform": "TEXT. Transformation matrix applied during registration. Example: Matrix-47.", + "Err_Scale": "TEXT. Error metrics and measurement standards. Possible values: Cloud-to-Cloud, Cloud-to-Mesh, RMSE.", + "Err_Val_Mm": "REAL. Error value measurement in millimeters. Example: 6.962." 
+ } + } +} \ No newline at end of file diff --git a/archeology_scan/archeology_scan_kb.jsonl b/archeology_scan/archeology_scan_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..14af77907119f06d1f42173ed1bd690b93cd2f62 --- /dev/null +++ b/archeology_scan/archeology_scan_kb.jsonl @@ -0,0 +1,54 @@ +{"id": 0, "knowledge": "Scan Resolution Index (SRI)", "description": "A sophisticated compound index measuring the overall resolution quality of a scan based on resolution and point density.", "definition": "SRI = \\frac{\\log_{10}(\\text{Scan Resolution (mm)} \\times 10^3)}{\\log_{10}(\\text{Point Density (points/m²)})} \\times 5, \\text{ where lower values indicate higher quality resolution and more balanced scanning parameters.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Scan Coverage Effectiveness (SCE)", "description": "Measures how effectively a scan covers its target area considering both coverage percentage and overlap redundancy.", "definition": "SCE = \\text{Coverage (%)} \\times \\left(1 + \\frac{\\text{Overlap (%)}}{100} \\times \\left(1 - \\frac{\\text{Coverage (%)}}{100}\\right)\\right), \\text{ where higher values indicate more effective coverage with appropriate overlap.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Point Cloud Density Ratio (PCDR)", "description": "Evaluates the relationship between total points and cloud density, used to assess scan efficiency and data distribution.", "definition": "PCDR = \\frac{\\text{Total Points}}{\\text{Point-Cloud Density Code} \\times \\text{Surface Area (m²)}}, \\text{ where higher values suggest more efficient and spatially consistent scanning techniques.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Scan Quality Score (SQS)", "description": "Comprehensive quality metric combining resolution, coverage, and noise factors with weighted importance.", "definition": "SQS = \\left(\\frac{10}{\\text{SRI}}\\right)^{1.5} \\times \\left(\\frac{\\text{SCE}}{100}\\right) \\times \\left(1 - \\frac{\\text{Noise Level (dB)}}{30}\\right)^2, \\text{ where higher values indicate exponentially better overall scan quality with emphasis on resolution.}", "type": "calculation_knowledge", "children_knowledge": [0, 1]} +{"id": 4, "knowledge": "Mesh Complexity Ratio (MCR)", "description": "Measures the topological complexity of a mesh relative to its resolution, helping identify overly complex or simplified archaeological models.", "definition": "MCR = \\frac{\\text{Mesh Faces}}{\\text{Mesh Vertices} \\times \\text{Mesh Resolution (mm)}^2} \\times 10^3, \\text{ where higher values indicate more complex meshes for a given resolution, capturing finer archaeological details.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Texture Density Index (TDI)", "description": "Evaluates the pixel density of textures relative to mesh resolution for assessing surface detail preservation.", "definition": "TDI = \\frac{\\text{Texture Size (px)}}{\\sqrt{\\text{Mesh Faces}} \\times \\text{Mesh Resolution (mm)}} \\times 10^{-2}, \\text{ where higher values indicate more detailed textures relative to geometric complexity.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Model Fidelity Score (MFS)", "description": "Combines mesh complexity, texture quality, and geometric accuracy to assess overall 3D model fidelity for archaeological analysis.", "definition": 
"MFS = \\text{MCR} \\times \\left(\\frac{\\text{TDI}}{10}\\right) \\times \\left(1 + \\exp\\left(-\\text{Geometric Accuracy (mm)}\\right)\\right), \\text{ where higher values indicate more accurate and detailed models with appropriate complexity.}", "type": "calculation_knowledge", "children_knowledge": [4, 5]} +{"id": 7, "knowledge": "Environmental Suitability Index (ESI)", "description": "Evaluates how suitable environmental conditions were for scanning operations using weighted parameters.", "definition": "ESI = 100 - 2.5 \\times \\left|\\text{Ambient Temperature (C)} - 20\\right| - \\left|\\frac{\\text{Relative Humidity (%)} - 50}{2}\\right|^{1.5} - \\frac{600}{\\text{Light Conditions (lux)} + 100}, \\text{ where higher values indicate more ideal scanning conditions adjusted for relative importance.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Processing Efficiency Ratio (PER)", "description": "Measures the efficiency of scan processing by comparing processing time to data complexity and size.", "definition": "PER = \\frac{\\text{File Size (GB)} \\times \\log_{10}(\\text{Total Points})}{\\text{Processing Time (hours)} \\times (\\text{CPU Usage (%)} + \\text{GPU Usage (%)})/200}, \\text{ where higher values indicate more efficient processing relative to data complexity.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Archaeological Documentation Completeness (ADC)", "description": "Comprehensive score for how completely a site has been documented through scanning with weighted importance factors.", "definition": "ADC = \\left(\\text{SQS} \\times 0.4\\right) + \\left(\\text{MFS} \\times 0.4\\right) + \\left(\\text{SCE} \\times 0.2\\right) - 5 \\times \\sqrt{\\frac{\\text{Noise Level (dB)}}{10}}, \\text{ where higher values indicate more complete documentation with multiple quality factors.}", "type": "calculation_knowledge", "children_knowledge": [3, 6, 1]} +{"id": 10, "knowledge": "High Resolution Scan", "description": "Defines what constitutes a high-resolution archaeological scan based on quantitative parameters.", "definition": "A scan with \\text{Scan Resolution (mm)} \\leq 1.0 and \\text{Point Density (points/m²)} \\geq 1000, allowing for sub-millimeter precision in archaeological documentation and feature detection.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Comprehensive Coverage", "description": "Defines the standard for comprehensive scan coverage of an archaeological site or artifact with statistical confidence.", "definition": "A scan with \\text{Coverage (%)} \\geq 95 and \\text{Overlap (%)} \\geq 30, ensuring minimal data gaps and sufficient overlap for accurate registration with 95% confidence interval for spatial measurements.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Premium Quality Scan", "description": "Defines the criteria for a premium quality archaeological scan suitable for conservation planning and scholarly publication.", "definition": "A scan that is both a High Resolution Scan and has Comprehensive Coverage with \\text{SQS} > 7.5, where SQS is the Scan Quality Score, producing data suitable for detailed analysis and conservation planning.", "type": "domain_knowledge", "children_knowledge": [10, 11, 3]} +{"id": 13, "knowledge": "High Fidelity Mesh", "description": "Defines criteria for high-fidelity 3D mesh models in archaeological documentation suitable for analytical studies.", "definition": "A mesh 
with \\text{MCR} > 5.0, \\text{Mesh Resolution (mm)} < 1.0, and \\text{Geometric Accuracy (mm)} < 0.5, where MCR is the Mesh Complexity Ratio, capable of representing fine archaeological details and surface morphology.", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 14, "knowledge": "Degradation Risk Zone", "description": "Identifies archaeological sites at risk of degradation requiring urgent conservation intervention based on multiple factors.", "definition": "A site with \\text{Preservation Status} containing 'Poor' or 'Critical' and \\text{Structural Stability} not containing 'Stable', signaling immediate conservation needs due to active deterioration processes.", "type": "domain_knowledge", "children_knowledge": [26]} +{"id": 15, "knowledge": "Optimal Scanning Conditions", "description": "Defines the environmental conditions considered optimal for archaeological scanning based on instrument sensitivity profiles.", "definition": "Conditions with \\text{ESI} > 85, where ESI is the Environmental Suitability Index (knowledge #7), characterized by moderate temperature, humidity around 50%, and good illumination, minimizing environmental interference with scanning accuracy.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 16, "knowledge": "Digital Conservation Priority", "description": "Classification system for prioritizing digital conservation efforts based on site conditions, historical significance, and preservation status.", "definition": "A scoring system where sites in Degradation Risk Zones with \\text{Estimated Date} older than 1000 BCE or with \\text{Site Type} = 'Rare' or 'Unique' receive highest priority for digital preservation through Premium Quality Scans, requiring immediate allocation of scanning resources.", "type": "domain_knowledge", "children_knowledge": [12, 14]} +{"id": 17, "knowledge": "Processing Bottleneck", "description": "Identifies processing workflows that are experiencing resource constraints using performance metrics.", "definition": "A processing record with \\text{PER} < 0.5, where PER is the Processing Efficiency Ratio, indicating potential hardware limitations affecting processing speed and output quality, requiring workflow optimization.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 18, "knowledge": "Registration Quality Threshold", "description": "Defines the quality threshold for scan registration in archaeological documentation based on error propagation analysis.", "definition": "A registration with \\text{Registration Accuracy (mm)} < 1.0 and \\text{Error Value (mm)} < 2.0, ensuring sufficient accuracy for reliable spatial analysis with maximum tolerable error below the significant feature size threshold.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Full Archaeological Digital Twin", "description": "Defines the comprehensive digital representation of an archaeological site meeting all quality standards for research and preservation.", "definition": "A site with Premium Quality Scans, High Fidelity Mesh, Registration Quality Threshold met, and \\text{ADC} > 85, where ADC is Archaeological Documentation Completeness, representing a complete digital twin suitable for research, conservation, and visualization purposes.", "type": "domain_knowledge", "children_knowledge": [12, 13, 18, 9]} +{"id": 20, "knowledge": "Scan Resolution", "description": "Illustrates the significance of scan resolution measurements in archaeological scanning for feature detection.", 
"definition": "Measured in millimeters, representing the smallest feature that can be distinguished in the scan. Values like 0.5mm enable documentation of fine tool marks on artifacts, while 2.0mm might only capture general shape and macroscopic features.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "Point Density", "description": "Illustrates the significance of point density in archaeological point clouds for information richness.", "definition": "Measured as points per square meter. Values around 100 capture basic site topography, 1,000 can document structural details, while 10,000+ enables analysis of surface textures and fine engravings across multiple scales of inquiry.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Noise Level", "description": "Illustrates the impact of noise levels in point cloud data on feature recognition accuracy.", "definition": "Measured in decibels, representing signal-to-noise ratio in scan data. Values below 1.0 indicate clean data suitable for detailed analysis, while values above 3.0 suggest significant noise that may obscure small features and introduce measurement uncertainty.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Coverage Percentage", "description": "Illustrates the significance of coverage percentage in archaeological scans for site completeness assessment.", "definition": "Percentage of target area successfully captured in scan data. Values above 95% indicate near-complete documentation, while 80% might have significant gaps requiring additional scanning or interpolation methods for comprehensive site analysis.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Geometric Accuracy", "description": "Illustrates the significance of geometric accuracy in 3D models for measurement reliability.", "definition": "Measured in millimeters, representing the average deviation between the scan data and final 3D model. Values below 0.1mm indicate museum-quality accuracy, while values around 1.0mm are suitable for general documentation but introduce uncertainty in fine feature analysis.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Cultural Period", "description": "Illustrates the significance of cultural period classification in archaeological sites.", "definition": "Classifies archaeological sites into standardized chronological/cultural periods. 
Values like 'Neolithic' (10,000-4,500 BCE), 'Bronze Age' (3,300-1,200 BCE), 'Roman' (27 BCE-476 CE), or 'Medieval' (476-1453 CE) determine applicable research methodologies, conservation approaches, and contextual interpretation frameworks.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Structural Stability", "description": "Illustrates structural state classifications in archaeological conservation.", "definition": "A categorical assessment with specific values: 'Stable' indicates structures that maintain integrity under normal conditions, 'Unstable' indicates structures showing signs of deterioration requiring intervention, and 'Critical' indicates structures at imminent risk of collapse requiring emergency stabilization.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Processing Stage", "description": "Illustrates the progression of data processing in archaeological scanning workflows.", "definition": "A sequential classification system with defined stages: 'Raw' (unprocessed scan data), 'Aligned' (multiple scans registered together), 'Cleaned' (noise and artifacts removed), 'Meshed' (point cloud converted to polygon mesh), and 'Textured' (surface textures applied to mesh). Each stage represents a discrete processing milestone.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Registration Method", "description": "Illustrates different scan registration methodologies in archaeological documentation.", "definition": "A categorization of alignment techniques with specific methodologies: 'ICP' (Iterative Closest Point algorithm for point cloud alignment), 'Target-based' (alignment using physical reference markers), 'Hybrid' (combination of automatic and manual alignment), and 'SLAM' (Simultaneous Localization and Mapping for real-time registration). Each method has distinct accuracy characteristics and use cases.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Estimated Dating", "description": "Illustrates dating conventions in archaeological classification for chronological placement.", "definition": "Values like '3500-3000 BCE', '1st c. CE', or 'ca. 1450 CE' represent estimated chronological placement based on excavation findings. Precision varies from specific years to century-level estimates depending on available evidence and dating methodologies employed.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Scan Time Efficiency (STE)", "description": "Measures how efficiently scanning time was used relative to data quality and completeness metrics.", "definition": "STE = \\frac{\\text{SQS} \\times \\sqrt{\\text{Coverage (%)}}}{\\text{Scan Duration (min)} \\times \\sqrt{\\text{Number of Scans}}}, \\text{ where SQS is the Scan Quality Score and higher STE values indicate more efficient use of scanning time relative to coverage achieved.}", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 31, "knowledge": "Environmental Impact Factor (EIF)", "description": "Quantifies how environmental conditions affected scan quality using statistical correlation analysis.", "definition": "EIF = \\frac{\\text{SQS}}{\\text{ESI} + 10} \\times 100, \\text{ where SQS is the Scan Quality Score and ESI is the Environmental Suitability Index. 
Values closer to 100 indicate minimal environmental interference with data acquisition.}", "type": "calculation_knowledge", "children_knowledge": [3, 7]} +{"id": 32, "knowledge": "Feature Extraction Efficiency (FEE)", "description": "Measures the efficiency of feature identification in scan data relative to point cloud density and complexity.", "definition": "FEE = \\frac{\\text{Number of Detected Features} + \\text{Artifact Count}}{\\text{PCDR} \\times \\sqrt{\\text{Point-Cloud Density Code}}} \\times 10^3, \\text{ where PCDR is the Point Cloud Density Ratio and higher values indicate more effective feature extraction from point cloud data relative to spatial distribution efficiency.}", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 33, "knowledge": "Registration Accuracy Ratio (RAR)", "description": "Evaluates registration accuracy relative to scan resolution using propagation of uncertainty principles.", "definition": "RAR = \\frac{\\text{Scan Resolution (mm)}}{\\text{Registration Accuracy (mm)} \\times \\sqrt{1 + \\frac{\\text{Error Value (mm)}}{\\text{Registration Accuracy (mm)}}}}, \\text{ where values > 1 indicate registration accuracy exceeds scan resolution, a desirable outcome for precise spatial analysis.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 34, "knowledge": "Spatial Density Index (SDI)", "description": "Assesses point cloud density relative to site dimensions for spatial sampling adequacy.", "definition": "SDI = \\frac{\\text{Total Points}}{\\text{Surface Area (m²)} \\times 10^4} \\times \\left(\\frac{\\text{Point Density (points/m²)}}{\\text{Point-Cloud Density Code}}\\right)^{0.5}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 35, "knowledge": "Conservation Priority Index (CPI)", "description": "Quantifies the urgency of conservation efforts based on site condition, historical significance and structural stability.", "definition": "CPI = \\begin{cases} 100 - PS + AF \\times \\left(1 + \\frac{TS}{10}\\right), & \\text{if in a Degradation Risk Zone} \\\\ 50 - PS + AF \\times \\left(1 + \\frac{TS}{20}\\right), & \\text{otherwise} \\end{cases}, \\text{ where PS is 0-100 based on \\text{Preservation Status} condition ('Excellent'=10, 'Good'=30, 'Fair'=50, 'Poor'=70, 'Critical'=90), AF is approximate age in millennia derived from \\text{Estimated Date}, and TS is 0-10 based on \\text{Site Type} rarity.}", "type": "calculation_knowledge", "children_knowledge": [14, 29]} +{"id": 36, "knowledge": "Mesh-to-Point Ratio (MPR)", "description": "Evaluates the efficiency of mesh generation from point cloud data for optimal decimation determination.", "definition": "MPR = \\frac{\\text{Mesh Vertices}}{\\text{Total Points}} \\times 100 \\times \\left(\\frac{\\text{MCR}}{10}\\right)^{0.3}, \\text{ where MCR is the Mesh Complexity Ratio and values around 25-30 indicate optimal decimation for archaeological purposes with appropriate feature preservation.}", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 37, "knowledge": "Processing Resource Utilization (PRU)", "description": "Measures the efficiency of computing resource utilization during scan processing relative to data complexity.", "definition": "PRU = \\frac{\\text{Processing Time (hours)} \\times (\\text{CPU Usage (%)} + \\text{GPU Usage (%)}) / 2}{\\text{File Size (GB)} \\times 10 \\times \\log_{10}(\\text{Mesh Vertices} + 10^4)}, \\text{ where lower values indicate more efficient use of computing resources relative to mesh complexity.}", 
"type": "calculation_knowledge", "children_knowledge": -1} +{"id": 38, "knowledge": "Digital Preservation Quality (DPQ)", "description": "Comprehensive metric for evaluating digital preservation quality for archaeological sites with weighted quality factors.", "definition": "DPQ = (0.3 \\times \\text{ADC}) + (0.3 \\times \\text{MFS}) + (0.2 \\times \\text{RAR}) + (0.2 \\times \\text{SCE}) - 2 \\times \\sqrt{\\frac{\\text{Error Value (mm)}}{\\text{Scan Resolution (mm)}}}, \\text{ where ADC is Archaeological Documentation Completeness, MFS is Model Fidelity Score, RAR is Registration Accuracy Ratio, and SCE is Scan Coverage Effectiveness.}", "type": "calculation_knowledge", "children_knowledge": [9, 6, 33, 1]} +{"id": 39, "knowledge": "Equipment Effectiveness Ratio (EER)", "description": "Evaluates how effectively equipment was utilized based on power consumption and scan quality relative to equipment capability.", "definition": "EER = \\frac{\\text{SQS} \\times EquipStatus\\_value}{\\text{Battery Level (%)} \\times (101 - EquipAge\\_days) / 365} \\times 25, \\text{ where SQS is the Scan Quality Score, EquipStatus_value is 1.0 for 'Excellent' to 0.2 for 'Poor', and EquipAge_days is days since \\text{Calibration Date}, with higher values indicating more efficient use of equipment relative to condition.}", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 40, "knowledge": "Spatially Complex Site", "description": "Defines sites with complex spatial characteristics requiring specialized scanning approaches based on dimensional analysis.", "definition": "A site with \\text{Surface Area (m²)} > 100 and \\text{SDI} > 50, where SDI is the Spatial Density Index, requiring strategic planning for comprehensive documentation with multiple scanning stations and methodologies to capture complex spatial relationships.", "type": "domain_knowledge", "children_knowledge": [34]} +{"id": 41, "knowledge": "Texture-Critical Artifact", "description": "Identifies artifacts where texture documentation is critical for analysis based on surface morphology characteristics.", "definition": "Features with \\text{Texture Analysis} containing 'Detailed' or 'Critical' and \\text{TDI} > 8.0, where TDI is the Texture Density Index, requiring specialized imaging techniques such as photometric stereo or multi-spectral imaging for complete surface characterization.", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 42, "knowledge": "Conservation Emergency", "description": "Identifies sites requiring immediate conservation intervention based on multiple risk factors and structural assessment.", "definition": "A site that is in a Degradation Risk Zone with \\text{CPI} > 75, where CPI is the Conservation Priority Index, requiring immediate protective measures and priority documentation with at least Premium Quality Scans before any intervention to establish baseline condition.", "type": "domain_knowledge", "children_knowledge": [14, 35]} +{"id": 43, "knowledge": "Processing Optimized Workflow", "description": "Defines optimized processing workflows balancing quality and resource use through benchmarked performance metrics.", "definition": "A processing workflow with \\text{PRU} < 5.0 while maintaining \\text{MFS} > 7.0, where PRU is Processing Resource Utilization and MFS is Model Fidelity Score, representing an efficient balance of resource use and output quality through optimized algorithm selection and hardware allocation.", "type": "domain_knowledge", "children_knowledge": [37, 6]} +{"id": 44, 
"knowledge": "Registration Confidence Level", "description": "Classification system for registration confidence based on multiple factors and error propagation analysis.", "definition": "A classification where 'High Confidence' registrations have \\text{RAR} > 1.5 and \\text{Registration Method} containing 'Target', where RAR is Registration Accuracy Ratio, 'Medium Confidence' have RAR between 1.0-1.5, and 'Low Confidence' otherwise, determining appropriate use cases for spatial analysis and interpretive visualization.", "type": "domain_knowledge", "children_knowledge": [33]} +{"id": 45, "knowledge": "Environmental Challenge Scan", "description": "Identifies scans conducted under challenging environmental conditions requiring expertise and specialized equipment adaptation.", "definition": "A scan with \\text{EIF} > 120, where EIF is Environmental Impact Factor, indicating successful data capture despite suboptimal environmental conditions through adaptive scanning methodologies and operator expertise in field condition compensation.", "type": "domain_knowledge", "children_knowledge": [31]} +{"id": 46, "knowledge": "High Temporal Value Site", "description": "Identifies sites with exceptional historical significance based on age and context for prioritized research attention.", "definition": "A site with \\text{Estimated Date} containing dates before 500 CE and \\text{CPI} > 60, where CPI is Conservation Priority Index, representing locations of exceptional chronological significance requiring specialized documentation protocols to capture temporally significant features.", "type": "domain_knowledge", "children_knowledge": [35, 29]} +{"id": 47, "knowledge": "Resource-Intensive Model", "description": "Identifies 3D models requiring substantial computing resources for visualization and analysis based on complexity metrics.", "definition": "A model with \\text{Mesh Faces} > 2,000,000 and \\text{MPR} < 15, where MPR is Mesh-to-Point Ratio, requiring specialized hardware for effective interaction and analytical software optimized for large-scale geometric processing with hierarchical level-of-detail implementation.", "type": "domain_knowledge", "children_knowledge": [36]} +{"id": 48, "knowledge": "Multi-Phase Documentation Project", "description": "Defines complex archaeological projects requiring multiple scanning phases with integrated documentation strategy.", "definition": "A project with multiple scans where the total \\text{ADC} < 70 for individual scans but \\text{DPQ} > 80 when combined, where ADC is Archaeological Documentation Completeness and DPQ is Digital Preservation Quality, indicating comprehensive documentation achieved through multiple phases with coherent registration strategy for holistic interpretation.", "type": "domain_knowledge", "children_knowledge": [9, 38]} +{"id": 49, "knowledge": "Equipment Optimization Opportunity", "description": "Identifies scenarios where equipment settings could be optimized for better results based on performance analysis.", "definition": "Scanning scenarios where \\text{EER} < 30 but \\text{ESI} > 80, where EER is Equipment Effectiveness Ratio and ESI is Environmental Suitability Index, indicating potential for improved equipment utilization in favorable conditions through calibration adjustment and scanning parameter optimization.", "type": "domain_knowledge", "children_knowledge": [39, 7]} +{"id": 50, "knowledge": "Environmental Condition Classification System (ECCS)", "description": "A comprehensive classification system for archaeological 
site environments based on their suitability for scanning operations.", "definition": "A four-tier classification where 'Optimal Scanning Conditions' have \\text{ESI} > 85, 'Good Scanning Conditions' have ESI between 70-85, 'Acceptable Scanning Conditions' have ESI between 50-70, and 'Challenging Scanning Conditions' have ESI < 50. This classification guides scanning schedule planning and equipment selection to maximize data quality.", "type": "domain_knowledge", "children_knowledge": [7, 15]} +{"id": 51, "knowledge": "Workflow Efficiency Classification", "description": "A standardized categorization system for assessing processing workflow efficiency based on Processing Resource Utilization (PRU) values.", "definition": "A three-tier classification where 'Optimized' workflows have \\text{PRU} < 5.0 (highly efficient resource usage), 'Acceptable' workflows have PRU between 5.0-10.0 (reasonable efficiency), and 'Needs Optimization' workflows have PRU > 10.0 (inefficient resource usage requiring intervention). This classification guides processing workflow improvements and resource allocation decisions.", "type": "domain_knowledge", "children_knowledge": [37]} +{"id": 52, "knowledge": "Risk Zone Category", "description": "Classification system that evaluates archaeological sites for degradation risk based on preservation status and structural condition.", "definition": "Categorizes archaeological sites into two main groups: 'Degradation Risk Zone' and 'Not in Risk Zone'. 'Not in Risk Zone' means that the site is not in a Degradation Risk Zone.", "type": "domain_knowledge", "children_knowledge": [14, 26]} +{"id": 53, "knowledge": "Mesh Quality Classification", "description": "A standardized system for categorizing archaeological site documentation based on the presence and quality of 3D mesh models.", "definition": "A three-tier classification where 'Has High-Fidelity Meshes' indicates sites with at least one mesh meeting high-fidelity criteria, 'Standard Mesh Quality' indicates sites with meshes that don't meet high-fidelity standards, and 'No Mesh Data' indicates sites lacking 3D mesh documentation entirely. This classification helps prioritize additional documentation efforts and determines appropriate analytical approaches for different sites.", "type": "domain_knowledge", "children_knowledge": [13]} \ No newline at end of file diff --git a/archeology_scan/archeology_scan_schema.txt b/archeology_scan/archeology_scan_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..358d0f8637d842932253cfa9f4b9830db38f989b --- /dev/null +++ b/archeology_scan/archeology_scan_schema.txt @@ -0,0 +1,307 @@ +CREATE TABLE "projects" ( +arcregistry text NOT NULL, +vesseltag text NULL, +fundflux text NULL, +authpin text NULL, +authhalt text NULL, + PRIMARY KEY (arcregistry) +); + +First 3 rows: +arcregistry vesseltag fundflux authpin authhalt +------------- --------------- ---------- --------- ---------- +PR7509 Project Happy Government PMT4719 05/12/2025 +PR8078 Project Off Government PMT4944 20/09/2025 +PR9973 Project Central University PMT5400 18/03/2025 +... 
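The `projects` table above keeps its authorization dates (`authhalt`) as plain text. A small illustrative sketch, assuming the day-first `DD/MM/YYYY` pattern suggested by sample values such as `20/09/2025` holds for every row, of converting the column before a date comparison:

```sql
-- Hypothetical query: cast the text authorization-end column to a date.
-- The DD/MM/YYYY format is inferred from the sample rows, not guaranteed.
SELECT arcregistry,
       vesseltag,
       to_date(authhalt, 'DD/MM/YYYY') AS auth_end_date
FROM projects
WHERE to_date(authhalt, 'DD/MM/YYYY') > CURRENT_DATE;
```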
+ + +CREATE TABLE "pointcloud" ( +cloudregistry bigint NOT NULL DEFAULT nextval('pointcloud_cloudregistry_seq'::regclass), +crewref text NOT NULL, +arcref text NOT NULL, +cloud_metrics jsonb NULL, + PRIMARY KEY (cloudregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (crewref) REFERENCES personnel(crewregistry) +); + +First 3 rows: + cloudregistry crewref arcref cloud_metrics +--------------- --------- -------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 OP4641 PR7509 {'Lap_Pct': 31.3, 'Ref_Pct': 11, 'Noise_Db': 1.318, 'Cover_Pct': 91.2, 'Total_Pts': 46562436, 'Cloud_Dense': 9449, 'Point_Dense': 42812, 'Scan_Resol_Mm': 2.4} + 2 OP7199 PR9973 {'Lap_Pct': 31.7, 'Ref_Pct': 53.8, 'Noise_Db': 1.79, 'Cover_Pct': 98.1, 'Total_Pts': 87734478, 'Cloud_Dense': 1746, 'Point_Dense': 934361, 'Scan_Resol_Mm': 4.9} + 3 OP5563 PR8865 {'Lap_Pct': 39.9, 'Ref_Pct': 67.4, 'Noise_Db': 1.041, 'Cover_Pct': 88.3, 'Total_Pts': 40047207, 'Cloud_Dense': 7553, 'Point_Dense': 411433, 'Scan_Resol_Mm': 1.41} +... + + +CREATE TABLE "registration" ( +logregistry bigint NOT NULL DEFAULT nextval('registration_logregistry_seq'::regclass), +crewref text NOT NULL, +arcref text NOT NULL, +reg_accuracy jsonb NULL, + PRIMARY KEY (logregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (crewref) REFERENCES personnel(crewregistry) +); + +First 3 rows: + logregistry crewref arcref reg_accuracy +------------- --------- -------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 OP4641 PR7509 {'Ctrl_Pts': '73', 'Ref_Mark': '40', 'Err_Scale': 'Cloud-to-Mesh', 'Transform': 'Matrix-47', 'Err_Val_Mm': 6.962, 'Log_Method': 'Hybrid', 'Log_Accu_Mm': 0.84} + 2 OP8435 PR8078 {'Ctrl_Pts': '6', 'Ref_Mark': '21', 'Err_Scale': 'Cloud-to-Mesh', 'Transform': 'Matrix-712', 'Err_Val_Mm': 4.442, 'Log_Method': 'Target-based', 'Log_Accu_Mm': 3.44} + 3 OP5563 PR8865 {'Ctrl_Pts': '99', 'Ref_Mark': '31', 'Err_Scale': 'RMSE', 'Transform': 'Matrix-543', 'Err_Val_Mm': 2.963, 'Log_Method': 'Hybrid', 'Log_Accu_Mm': 0.17} +... + + +CREATE TABLE "mesh" ( +facetregistry bigint NOT NULL DEFAULT nextval('mesh_facetregistry_seq'::regclass), +zoneref text NOT NULL, +equipref text NOT NULL, +mesh_specs jsonb NULL, + PRIMARY KEY (facetregistry), + FOREIGN KEY (equipref) REFERENCES equipment(equipregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: + facetregistry zoneref equipref mesh_specs +--------------- --------- ---------- --------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 SC9081 SN29799 {'Tex_Pix': 8192, 'Tex_Dist': '2K', 'Facet_Faces': 7708278, 'Facet_Verts': 2361491, 'UV_Map_Qual': 'High', 'Facet_Res_Mm': 9.79, 'Geom_Delta_Mm': 3.79} + 2 SC4817 SN83019 {'Tex_Pix': 4096, 'Tex_Dist': '1K', 'Facet_Faces': 1973487, 'Facet_Verts': 542100, 'UV_Map_Qual': 'Low', 'Facet_Res_Mm': 2.33, 'Geom_Delta_Mm': 0.48} + 3 SC4082 SN60801 {'Tex_Pix': 4096, 'Tex_Dist': '4K', 'Facet_Faces': 8715696, 'Facet_Verts': 5250157, 'UV_Map_Qual': 'Medium', 'Facet_Res_Mm': 3.76, 'Geom_Delta_Mm': 4.27} +... 
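The `pointcloud`, `registration`, and `mesh` tables above store their quantitative metrics inside JSONB columns. A minimal sketch of the access pattern, assuming the key names shown in the sample rows (e.g. `Scan_Resol_Mm`, `Total_Pts`, `Err_Val_Mm`) appear in every row; the join on `arcref` is purely illustrative:

```sql
-- ->> returns text, so values are cast before filtering or sorting.
SELECT
    pc.cloudregistry,
    (pc.cloud_metrics ->> 'Scan_Resol_Mm')::numeric AS scan_resolution_mm,
    (pc.cloud_metrics ->> 'Total_Pts')::bigint      AS total_points,
    (r.reg_accuracy  ->> 'Err_Val_Mm')::numeric     AS registration_error_mm
FROM pointcloud pc
JOIN registration r
  ON r.arcref = pc.arcref   -- both columns reference projects(arcregistry)
ORDER BY scan_resolution_mm;
```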
+ + +CREATE TABLE "personnel" ( +crewregistry text NOT NULL, +crewlabel text NULL, +leadregistry text NULL, +leadlabel text NULL, + PRIMARY KEY (crewregistry) +); + +First 3 rows: +crewregistry crewlabel leadregistry leadlabel +-------------- ------------- -------------- ----------------- +OP4641 Joel Wallace SV7658 Michael Kaiser +OP8435 Latoya Abbott SV2189 Stephanie Marquez +OP7199 Aaron Knight SV6920 Victoria George +... + + +CREATE TABLE "processing" ( +flowregistry bigint NOT NULL DEFAULT nextval('processing_flowregistry_seq'::regclass), +equipref text NOT NULL, +zoneref text NOT NULL, +flowsoft text NULL, +stashloc text NULL, +safebak text NULL, +datalevel text NULL, +metabench text NULL, +coordframe text NULL, +elevref text NULL, +flowstage text NULL, +processing_rate text NULL, +system_usage jsonb NULL, +calib_status jsonb NULL, + PRIMARY KEY (flowregistry), + FOREIGN KEY (equipref) REFERENCES equipment(equipregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: + flowregistry equipref zoneref flowsoft stashloc safebak datalevel metabench coordframe elevref flowstage processing_rate system_usage calib_status +-------------- ---------- --------- -------------- ---------- ----------- ------------ ----------- ------------ --------- ----------- ----------------- -------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------- + 1 SN20065 SC9016 RealityCapture Local In Progress Confidential Dublin Core Local Arbitrary Aligned 2.3 MB/s {'Flow_Hrs': 21.9, 'Proc_CPU': 81, 'Proc_GPU': 84, 'Remain_Gb': 983.5, 'Mem_Usage_Gb': 70.3} {'Cam_Cal': 'Invalid', 'Lens_Dist': 'Corrected', 'Color_Tune': 'Required', 'Station_Link': 'Partial'} + 2 SN83019 SC4817 Network Pending Confidential CIDOC CRM Custom Sea Level Meshed 4.0 MB/s {'Flow_Hrs': 25.7, 'Proc_CPU': 67, 'Proc_GPU': 66, 'Remain_Gb': 306.1, 'Mem_Usage_Gb': 51.7} {'Cam_Cal': 'Required', 'Lens_Dist': 'Unknown', 'Color_Tune': 'Adjusted', 'Station_Link': 'Partial'} + 3 SN60801 SC4082 RealityCapture Cloud Completed Restricted Custom Local Arbitrary Final 5.7 MB/s {'Flow_Hrs': 16.5, 'Proc_CPU': 80, 'Proc_GPU': 78, 'Remain_Gb': 487.3, 'Mem_Usage_Gb': 79.5} {'Cam_Cal': 'Invalid', 'Lens_Dist': 'Corrected', 'Color_Tune': 'Required', 'Station_Link': 'Partial'} +... + + +CREATE TABLE "sites" ( +zoneregistry text NOT NULL, +zonelabel text NULL, +digunit text NULL, +gridtrace text NULL, +phasefactor text NULL, +guessdate text NULL, +typesite text NULL, +guardhint text NULL, +geo_position jsonb NULL, +site_status jsonb NULL, + PRIMARY KEY (zoneregistry) +); + +First 3 rows: +zoneregistry zonelabel digunit gridtrace phasefactor guessdate typesite guardhint geo_position site_status +-------------- ------------------------- --------- ----------- ------------- ----------- ---------- ----------- ------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- +SC9016 Site-North Alexanderville Unit-C9 S29-E8 Iron Age -2929 BCE Bur. 
{'Geo_X': -9.602135, 'Geo_Y': -2.756411, 'Depth_C': 329.9, 'Height_M': 4391.4} {'Env_Haz': 'Low', 'Pres_Stat': 'Excellent', 'Risk_Eval': 'Req.', 'Safe_Rank': 'Minimal', 'Entry_Stat': 'Closed', 'Insur_Stat': 'Expired', 'Health_Eval': 'Approved'} +SC9081 Site-Grahammouth Unit-A14 N44-W27 Middle Ages 1335 BCE Industrial Temporary {'Geo_X': 57.10752, 'Geo_Y': 70.03605, 'Depth_C': 97.5, 'Height_M': 429.28} {'Env_Haz': 'Low', 'Pres_Stat': 'Fair', 'Risk_Eval': 'Pending', 'Safe_Rank': 'Standard', 'Entry_Stat': 'Restricted', 'Insur_Stat': 'Pending', 'Health_Eval': 'Pending'} +SC4817 Site-Port Brianside Unit-D19 S48-W26 IRON AGE -4985 BCE Burial {'Geo_X': 73.605545, 'Geo_Y': 141.71112, 'Depth_C': 499.9, 'Height_M': 4934.58} {'Env_Haz': 'High', 'Pres_Stat': 'Critical', 'Risk_Eval': 'Completed', 'Safe_Rank': 'High', 'Entry_Stat': 'Closed', 'Insur_Stat': 'Expired', 'Health_Eval': 'Review'} +... + + +CREATE TABLE "spatial" ( +domainregistry bigint NOT NULL DEFAULT nextval('spatial_domainregistry_seq'::regclass), +arcref text NOT NULL, +crewref text NOT NULL, +spatial_dims jsonb NULL, + PRIMARY KEY (domainregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (crewref) REFERENCES personnel(crewregistry) +); + +First 3 rows: + domainregistry arcref crewref spatial_dims +---------------- -------- --------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 PR7509 OP4641 {'Angles': {'Angle_Az': 342.4, 'Angle_Tilt': 23.9}, 'Vol_M3': 76.7, 'Area_M2': 78.01, 'Ground_Span': 4.13, 'Bounding_Box': {'Box_X': 40.12, 'Box_Y': 1.06, 'Box_Z': 8.74}} + 2 PR8865 OP5563 {'Angles': {'Angle_Az': 91.3, 'Angle_Tilt': -14.5}, 'Vol_M3': 49.88, 'Area_M2': 286.85, 'Ground_Span': 9.52, 'Bounding_Box': {'Box_X': 32.93, 'Box_Y': 5.25, 'Box_Z': 18.09}} + 3 PR5905 OP2517 {'Angles': {'Angle_Az': 147.9, 'Angle_Tilt': 19.6}, 'Vol_M3': 41.68, 'Area_M2': 993.96, 'Ground_Span': 1.83, 'Bounding_Box': {'Box_X': 32.52, 'Box_Y': 47.08, 'Box_Z': 16.6}} +... 
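Both `sites` and `spatial` nest their measurements inside JSONB, and `spatial_dims` additionally nests sub-objects (`Angles`, `Bounding_Box`). A minimal sketch of the nested access pattern, assuming the key names in the sample rows are consistent across rows:

```sql
-- -> selects an inner JSONB object, ->> extracts a scalar as text.
SELECT
    domainregistry,
    arcref,
    (spatial_dims ->> 'Area_M2')::numeric                 AS area_m2,
    (spatial_dims -> 'Bounding_Box' ->> 'Box_Z')::numeric AS box_z_m,
    (spatial_dims -> 'Angles' ->> 'Angle_Tilt')::numeric  AS tilt_deg
FROM spatial
WHERE (spatial_dims ->> 'Area_M2')::numeric > 100;

-- The same pattern applies to sites.geo_position and sites.site_status.
SELECT zoneregistry,
       site_status ->> 'Pres_Stat' AS preservation_status
FROM sites
WHERE geo_position ->> 'Geo_X' IS NOT NULL;
```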
+ + +CREATE TABLE "qualitycontrol" ( +qualregistry bigint NOT NULL DEFAULT nextval('qualitycontrol_qualregistry_seq'::regclass), +arcref text NOT NULL, +crewref text NOT NULL, +accucheck text NULL, +ctrlstate text NULL, +valimeth text NULL, +valistate text NULL, +archstat text NULL, +pubstat text NULL, +copystat text NULL, +refmention text NULL, +remark text NULL, + PRIMARY KEY (qualregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (crewref) REFERENCES personnel(crewregistry) +); + +First 3 rows: + qualregistry arcref crewref accucheck ctrlstate valimeth valistate archstat pubstat copystat refmention remark +-------------- -------- --------- -------------------------------------------------------- ----------- ---------- --------------------------------------------------- ---------- ---------------------------------------- ----------- ------------- ----------------------------------------- + 1 PR7509 OP4641 Accuracy check not required, standard quality sufficient Pending Automated Validation rejected, data quality issues identified Verified Draft status, preparation for submission Citation-8447 + 2 PR8078 OP8435 Accuracy check completed, validation successful Pending Visual Validation rejected, data quality issues identified Verified Submitted status, under editorial review Open Access Citation-6197 + 3 PR8865 OP5563 Accuracy check pending, validation in progress Pending Automated Validation approved, data quality meets standards Verified Submitted status, under editorial review restricted Citation-2238 Bed something performance leader realize. +... + + +CREATE TABLE "equipment" ( +equipregistry text NOT NULL, +equipform text NULL, +equipdesign text NULL, +equiptune date NULL, +equipstatus text NULL, +powerlevel text NULL, +transport_speed text NULL, +coverage_rate text NULL, +point_generation_rate text NULL, +cost_per_area text NULL, +accuracy_per_time text NULL, +power_consumption text NULL, +battery_drain text NULL, +storage_usage_rate text NULL, + PRIMARY KEY (equipregistry) +); + +First 3 rows: +equipregistry equipform equipdesign equiptune equipstatus powerlevel transport_speed coverage_rate point_generation_rate cost_per_area accuracy_per_time power_consumption battery_drain storage_usage_rate +--------------- --------------------------------------------------------- ------------- ----------- --------------------------------------------------- ----------------------------------------- ----------------- --------------- ----------------------- --------------- ------------------- ------------------- --------------- -------------------- +SN20065 LiDAR scanner, precise 3D measurement technology Model-669 2024-11-01 Excellent condition, optimal performance guaranteed 62% battery, extended operation available 82 km/h 15 m²/hr 62000 pts/min 58 USD/m² 8 mm/hr 124 W/hr 13%/hr 6 GB/hr +SN29799 Structured light scanner, high-resolution surface capture Model-835 2024-09-09 Good condition, reliable operation expected 21% battery, limited operation time 41 km/h 5 m²/hr 21000 pts/min 99 USD/m² 3 mm/hr 42 W/hr 21%/hr 2 GB/hr +SN83019 Photogrammetry system, image-based 3D reconstruction Model-566 2025-02-08 Good condition, reliable operation expected 46% battery, moderate operation time 66 km/h 11 m²/hr 46000 pts/min 74 USD/m² 6 mm/hr 92 W/hr 16%/hr 4 GB/hr +... 
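The `equipment` table stores measurements as unit-suffixed text (e.g. `82 km/h`, `6 GB/hr`, `62% battery, ...`). A hedged sketch, assuming the `<number> <unit>` pattern seen in the sample rows, that recovers the numeric part with a regular expression before comparison:

```sql
-- regexp_match returns a text array; element [1] is the captured number.
SELECT
    equipregistry,
    equipform,
    (regexp_match(coverage_rate, '([0-9]+(?:\.[0-9]+)?)'))[1]::numeric AS coverage_m2_per_hr,
    (regexp_match(powerlevel,    '([0-9]+(?:\.[0-9]+)?)'))[1]::numeric AS battery_pct
FROM equipment
WHERE equiptune >= DATE '2024-01-01'   -- equiptune is a proper date column
ORDER BY coverage_m2_per_hr DESC NULLS LAST;
```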
+ + +CREATE TABLE "features" ( +traitregistry bigint NOT NULL DEFAULT nextval('features_traitregistry_seq'::regclass), +zoneref text NOT NULL, +equipref text NOT NULL, +traitextract text NULL, +feature_analysis jsonb NULL, + PRIMARY KEY (traitregistry), + FOREIGN KEY (equipref) REFERENCES equipment(equipregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: + traitregistry zoneref equipref traitextract feature_analysis +--------------- --------- ---------- -------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 SC9016 SN20065 Manual {'Mat_Kind': 'Organic', 'Arti_Count': 71, 'Struct_Kind': 'Artifact', 'Trait_Count': 516, 'Analysis_Status': {'Hue_Study': 'Partial', 'Pattern_Note': 'Not Required', 'Texture_Study': 'Partial'}} + 2 SC4817 SN83019 Manual {'Mat_Kind': 'Ceramic', 'Arti_Count': 69, 'Struct_Kind': 'Complex', 'Trait_Count': 103, 'Analysis_Status': {'Hue_Study': 'Completed', 'Pattern_Note': 'Not Required', 'Texture_Study': 'Partial'}} + 3 SC4082 SN60801 Automatic {'Mat_Kind': 'Mixed', 'Arti_Count': 8, 'Struct_Kind': 'Wall', 'Trait_Count': 820, 'Analysis_Status': {'Hue_Study': 'Not Required', 'Pattern_Note': 'Completed', 'Texture_Study': 'Completed'}} +... + + +CREATE TABLE "scans" ( +questregistry text NOT NULL, +chronotag text NULL, +arcref text NOT NULL, +crewref text NOT NULL, +zoneref text NOT NULL, +scancount text NULL, +climtune text NULL, +huecatch text NULL, +fmtfile text NULL, +size text NULL, +scan_cost text NULL, +scanning_rate text NULL, + PRIMARY KEY (questregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (crewref) REFERENCES personnel(crewregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: +questregistry chronotag arcref crewref zoneref scancount climtune huecatch fmtfile size scan_cost scanning_rate +--------------- -------------------------------- -------- --------- --------- --------------------------------------- ------------------------------------------------------ ---------- --------- -------- ----------- --------------- +ASD409481 2024-09-03 07:20:28.479288 GMT+8 PR7509 OP4641 SC9016 5 scans, moderate coverage session Windy conditions, potential equipment stability issues RGB PTS 24.71 GB 3.22 USD/GB 63 MB/min +ASD648638 2024-07-27 08:52:12.479479 GMT+8 PR8078 OP8435 SC9081 2 scans, moderate coverage session Rainy weather, equipment protection required RGB PLY 21.63 GB 6.86 USD/GB 240 MB/min +ASD535327 2025-01-24 12:45:10.479479 GMT+8 PR9973 OP7199 SC4817 7 scans, comprehensive coverage session Windy conditions, potential equipment stability issues RGB PLY 41.48 GB 4.2 USD/GB 37 MB/min +... 
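Scan sessions link back to `projects`, `personnel`, and `sites` through the foreign keys shown above, while quantities such as `size` are unit-suffixed text. A minimal per-site rollup sketch, assuming every `size` value follows the `<number> GB` pattern in the sample rows:

```sql
-- split_part takes the numeric prefix of '24.71 GB' before casting and summing.
SELECT
    st.zoneregistry,
    st.zonelabel,
    COUNT(*)                                  AS scan_sessions,
    SUM(split_part(sc.size, ' ', 1)::numeric) AS total_size_gb
FROM scans sc
JOIN sites st ON st.zoneregistry = sc.zoneref
GROUP BY st.zoneregistry, st.zonelabel
ORDER BY total_size_gb DESC;
```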
+ + +CREATE TABLE "conservation" ( +cureregistry bigint NOT NULL DEFAULT nextval('conservation_cureregistry_seq'::regclass), +arcref text NOT NULL, +zoneref text NOT NULL, +harmassess text NULL, +curerank text NULL, +structstate text NULL, +intervhistory text NULL, +priordocs text NULL, + PRIMARY KEY (cureregistry), + FOREIGN KEY (arcref) REFERENCES projects(arcregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: + cureregistry arcref zoneref harmassess curerank structstate intervhistory priordocs +-------------- -------- --------- ------------ --------------------------------------------------- ----------------------------------------------- --------------- ----------- + 1 PR7509 SC9016 Critical priority, immediate conservation required Moderate condition, careful monitoring required Minor + 2 PR8078 SC9081 Severe Low priority, routine maintenance sufficient Moderate condition, careful monitoring required Major Partial + 3 PR3991 SC4460 Minor High priority, urgent conservation attention needed Moderate condition, careful monitoring required Major +... + + +CREATE TABLE "environment" ( +airregistry bigint NOT NULL DEFAULT nextval('environment_airregistry_seq'::regclass), +zoneref text NOT NULL, +equipref text NOT NULL, +photomap text NULL, +imgcount text NULL, +ambient_cond jsonb NULL, + PRIMARY KEY (airregistry), + FOREIGN KEY (equipref) REFERENCES equipment(equipregistry), + FOREIGN KEY (zoneref) REFERENCES sites(zoneregistry) +); + +First 3 rows: + airregistry zoneref equipref photomap imgcount ambient_cond +------------- --------- ---------- ---------- ------------- ------------------------------------------------------------------------------------------------------------------------------------ + 523 SC1896 SN93781 60% 241 imgs/mins {'Hume_Pct': 52.3, 'Ambic_Temp': 22.3, 'Geo_Signal': 'Good', 'Illume_Lux': 99474, 'Link_Status': 'Limited', 'Track_Status': 'Fixed'} + 606 SC1426 SN63826 60% 240 imgs/mins {'Hume_Pct': 71.2, 'Ambic_Temp': 4.6, 'Geo_Signal': 'Poor', 'Illume_Lux': 10046, 'Link_Status': 'Connected', 'Track_Status': None} + 1 SC9016 SN20065 80% 248 imgs/mins {'Hume_Pct': 60.4, 'Ambic_Temp': 25.3, 'Geo_Signal': None, 'Illume_Lux': 86054, 'Link_Status': 'Disconnected', 'Track_Status': None} +... diff --git a/cold_chain_pharma_compliance/cold_chain_pharma_compliance_column_meaning_base.json b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..6caf89711a09a29fd24076d3a88e9cf70af78c02 --- /dev/null +++ b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_column_meaning_base.json @@ -0,0 +1,199 @@ +{ + "cold_chain_pharma_compliance|Shipments|RecKey": "text. Unique shipment record key. PK. Example: CC381686.", + "cold_chain_pharma_compliance|Shipments|LOG_TS": "timestamp. Log-creation timestamp for this record. **NULL means record creation time not captured.**. Example: 2025-01-29T16:01:15.", + "cold_chain_pharma_compliance|Shipments|ShipTok": "text. Human-friendly shipment token (unique within system). Example: SH88875.", + "cold_chain_pharma_compliance|Products|ProdCode": "text. Unique product code. PK. Example: PH75271.", + "cold_chain_pharma_compliance|Products|ProdLabel": "text. Commercial product label. Example: strategize value-added deliverables.", + "cold_chain_pharma_compliance|Products|ProdCat": "text. Product category or therapeutic class. 
Possible values: Biologics, Blood Products, Insulin, Vaccines.", + "cold_chain_pharma_compliance|Products|Maker": "text. Manufacturer name. **NULL means manufacturer not specified.**. Example: York Ltd.", + "cold_chain_pharma_compliance|ProductBatches|BatchTag": "text. Unique production batch identifier. PK. Example: BT909380.", + "cold_chain_pharma_compliance|ProductBatches|ProdLink": "text. Product code this batch belongs to. FK to Products.", + "cold_chain_pharma_compliance|ProductBatches|MFG_TS": "timestamp. Manufacturing timestamp for the batch. Example: 2-Aug-24.", + "cold_chain_pharma_compliance|ProductBatches|EXP_TS": "timestamp. Expiry timestamp for the batch. Example: 09/30/2026.", + "cold_chain_pharma_compliance|ProductBatches|store_cond": "text. Textual description of required storage conditions. Possible values: -20°C, -70°C, 15-25°C, 2-8°C.", + "cold_chain_pharma_compliance|ProductBatches|TempMin": "real. Minimum allowable temperature (°C). Possible values: -70, -20, 2, 15.", + "cold_chain_pharma_compliance|ProductBatches|TempMax": "real. Maximum allowable temperature (°C). Example: 12.", + "cold_chain_pharma_compliance|ProductBatches|TempSense": "text. Temperature-sensing method or device type. Possible values: High, Low, Medium.", + "cold_chain_pharma_compliance|ProductBatches|pack_type": "text. Packaging type for the shipment units. Possible values: Ampoule, Container, Syringe, Vial.", + "cold_chain_pharma_compliance|ProductBatches|PACK_CNT": "bigint. Number of saleable packs in the batch. Example: 936.", + "cold_chain_pharma_compliance|ProductBatches|ValUSD": "text. Declared commercial value in USD. Example: $57,421.85 .", + "cold_chain_pharma_compliance|ProductBatches|InsUSD": "real. Insurance coverage value in USD. **NULL means no insurance value set.**. Example: 226483.74.", + "cold_chain_pharma_compliance|Carriers|CarrierTag": "text. Unique carrier identifier. PK. Example: Rodriguez, Mcintyre and Richards.", + "cold_chain_pharma_compliance|Carriers|CarrierCert": "text. Certification credentials or licence ID. **NULL means certification information unavailable.**. Possible values: Both, CEIV Pharma, GDP.", + "cold_chain_pharma_compliance|Vehicles|VehRef": "text. Unique vehicle reference identifier. PK. Example: VH6122.", + "cold_chain_pharma_compliance|Vehicles|CarrierBond": "text. Carrier identifier to which the vehicle belongs. FK to Carriers.", + "cold_chain_pharma_compliance|Vehicles|VehType": "text. Vehicle type (e.g., van, truck, reefer). Possible values: Aircraft, Container, Reefer Truck, Van.", + "cold_chain_pharma_compliance|Vehicles|veh_qual": "text. Qualitative vehicle qualification/status. Possible values: Qualified, Under Review, Validated.", + "cold_chain_pharma_compliance|Vehicles|TEMP_MON_SYS": "text. Temperature-monitoring system installed. Possible values: Continuous, Interval, Manual.", + "cold_chain_pharma_compliance|MonitoringDevices|MonDevRef": "text. Unique monitoring-device reference. PK. Example: MD9886.", + "cold_chain_pharma_compliance|MonitoringDevices|CalibTS": "timestamp. Timestamp of last calibration. Example: 2024/10/6.", + "cold_chain_pharma_compliance|MonitoringDevices|DevAcc": "text. Device accuracy specification. Example: 0.31.", + "cold_chain_pharma_compliance|MonitoringDevices|RecIntMin": "bigint. Recording interval in minutes. **NULL means interval not configured.**. Possible values: 5.0, 10.0, 15.0, 30.0.", + "cold_chain_pharma_compliance|MonitoringDevices|TempPts": "bigint. Number of temperature points that can be stored. 
Example: 891.", + "cold_chain_pharma_compliance|EnvironmentalMonitoring|RecKeyLink": "text. Shipment record key this monitoring pertains to. PK. FK to Shipments.", + "cold_chain_pharma_compliance|EnvironmentalMonitoring|DevLink": "text. Monitoring-device reference. FK to MonitoringDevices.", + "cold_chain_pharma_compliance|QualityCompliance|RecKeyQC": "text. Shipment record key for QC. PK. FK to Shipments.", + "cold_chain_pharma_compliance|IncidentAndRiskManagement|RecKeyRisk": "text. Shipment record key for risk log. PK. FK to Shipments.", + "cold_chain_pharma_compliance|InsuranceClaims|RecKeyClaim": "text. Shipment record key for insurance. PK. FK to Shipments.", + "cold_chain_pharma_compliance|InsuranceClaims|ClaimNeed": "text. Indicator whether a claim is required. Possible values: No, Under Review, Yes.", + "cold_chain_pharma_compliance|InsuranceClaims|ClaimStat": "text. Current claim status. **NULL means claim not initiated.**. Possible values: Approved, Filed, Rejected.", + "cold_chain_pharma_compliance|InsuranceClaims|ClaimUSD": "real. Amount claimed in USD. Example: 79419.22.", + "cold_chain_pharma_compliance|InsuranceClaims|CostImpactUSD": "real. Total cost impact in USD. Example: 47835.32.", + "cold_chain_pharma_compliance|InsuranceClaims|RespParty": "text. Responsible party for loss. **NULL means not yet assigned.**. Possible values: Carrier, Receiver, Shipper, Unknown.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|RecKeyRev": "text. Shipment record key for review. PK. FK to Shipments.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|ProcImprove": "text. Process improvement actions. Possible values: In Progress, No, Yes.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|TrainNeeds": "text. Training needs identified. Possible values: No, Under Review, Yes.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|SOP_Update": "text. SOP updates required. Possible values: No, Under Review, Yes.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|VendorImpact": "text. Impact on vendor performance. **NULL means none recorded.**. Possible values: Disqualification, Warning.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|NextShipChg": "text. Changes for next shipment. **NULL means none planned.**. Possible values: Major, Minor.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|MonFreqChg": "text. Monitoring-frequency changes. Possible values: Decreased, Increased, No Change.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|RouteRiskReassess": "text. Route risk reassessment outcome. Possible values: Completed, Not Required, Required.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|PackSpecRev": "text. Packaging specification revision. Possible values: Completed, Not Required, Required.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|LaneQualStat": "text. Transport lane quality status. **NULL means status not set.**. Possible values: Invalid, Review Required, Valid.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|TechUpgrade": "text. Planned technology upgrades. Possible values: No, Under Evaluation, Yes.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|CostOptPot": "text. Cost-optimisation opportunities. **NULL means none identified.**. Possible values: High, Low, Medium.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|SustainImpact": "text. Sustainability impact assessment. **NULL means not assessed.**. 
Possible values: Negative, Neutral, Positive.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|CarbonKG": "real. Estimated carbon footprint in kg CO₂-e. **NULL means footprint not calculated.**. Example: 2617.3.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|EnergyScore": "text. Energy efficiency score. **NULL means score not set.**. Example: 81.0.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|DocFormat": "text. Review document format. Possible values: Electronic, Hybrid, Paper.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|DataIntegrity": "text. Data integrity verification status. Possible values: Compromised, Under Review, Verified.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|AuditTrailComplete": "text. Audit-trail completeness status. Example: 91.0.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|E_SigStat": "text. Electronic-signature compliance status. Possible values: Invalid, Not Required, Valid.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|SysAccessCtrl": "text. System access-control status. Possible values: Adequate, Compromised, Limited.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|DataBackupStat": "text. Data backup status. **NULL means backup not confirmed.**. Possible values: Current, Failed, Pending.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|ReportGenStat": "text. Report generation status. Possible values: Completed, In Progress, Pending.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|DistList": "text. Distribution list details. Possible values: Extended, Limited, Standard.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|ArchiveStat": "text. Archiving status for review records. Possible values: Completed, Not Required, Pending.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|ReviewNotes": "text. Free-text review notes. Example: Agent usually ten food focus. Throughout return mean..", + "cold_chain_pharma_compliance|ReviewsAndImprovements|NextRevTS": "timestamp. Scheduled timestamp for next review. Example: 2025/3/13.", + "cold_chain_pharma_compliance|ReviewsAndImprovements|CloseStat": "text. Review closure status. Possible values: Closed, Open, Under Review.", + "cold_chain_pharma_compliance|ShipSensorLink|ShpNode": "text. Shipment record key in link. PK. FK to Shipments.", + "cold_chain_pharma_compliance|ShipSensorLink|DevNode": "text. Monitoring-device reference in link. PK. FK to MonitoringDevices.", + "cold_chain_pharma_compliance|Shipments|shipment_overview": { + "column_meaning": "JSONB column. Groups routing, timing and risk-performance metadata of a single cold-chain shipment so that dashboards can fetch a complete lane summary from one JSONB field.", + "fields_meaning": { + "route": { + "origin": { + "hub": "text. Code of the originating logistics hub. Example: Gonzales-Lopez Facility.", + "address": "text. Street-level origin address. Example: 0131 Johnson Station\nNew Tinatown, FL 65757.", + "nation": "text. ISO country code of origin. **NULL means country not provided.**. Example: Fiji." + }, + "destination": { + "hub": "text. Code of the destination logistics hub. Example: Williams-Aguilar Hospital.", + "address": "text. Street-level destination address. Example: 7209 West Flats\nEast Tonya, OR 13658.", + "nation": "text. ISO country code of destination. **NULL means country not provided.**. Example: Bouvet Island (Bouvetoya)." + }, + "route_string": "text. Encoded string of planned route waypoints. 
Example: Fiji -> Bouvet Island (Bouvetoya).", + "risk_note": "text. Qualitative notes on route-specific risks. Possible values: High, Low, Medium." + }, + "timing_performance": { + "go_time_ts": "timestamp. Planned departure timestamp. Example: 2025/2/17 19:29.", + "planned_eta_hrs": "bigint. Estimated transit time in hours. Example: 27.6.", + "actual_duration_hrs": "bigint. Actual transit time in hours. **NULL means actual hours not yet recorded.**. Example: 10.2.", + "end_time_ts": "timestamp. Actual arrival timestamp. Possible values: 2025/2/19 08:29.", + "distance_km": "bigint. Total kilometres travelled for the shipment. Example: 2608." + } + } + }, + "cold_chain_pharma_compliance|EnvironmentalMonitoring|env_metrics": { + "column_meaning": "JSONB column. Consolidates all temperature, humidity, shock, light and tracking telemetry collected for a shipment into one JSONB object for faster rule-engine evaluation.", + "fields_meaning": { + "temperature": { + "avg_c": "real. Average temperature recorded (°C). Example: 3.1.", + "min_c": "real. Minimum temperature recorded (°C). Example: 0.1.", + "max_c": "real. Maximum temperature recorded (°C). Example: 10.0.", + "excursion_count": "bigint. Count of temperature deviations beyond limits. Example: 4.", + "excursion_duration_min": "bigint. Total minutes of temperature deviation. Example: 48.", + "alarm_count": "bigint. Number of temperature alarm events. Possible values: 0, 1, 2, 3, 4, 5." + }, + "humidity": { + "humidity_monitor": "text. Humidity monitoring flag. **NULL means humidity was not monitored.**. Possible values: No, Partial, Yes.", + "avg_pct": "real. Average relative humidity percentage. Example: 42.3.", + "excursion_count": "bigint. Count of humidity deviations. Possible values: 0, 1, 2, 3, 4, 5." + }, + "light_and_shock": { + "light_monitor_mode": "text. Light-exposure monitoring flag. **NULL means light was not monitored.**. Possible values: Continuous, Periodic.", + "shock_monitor": "text. Shock-event monitoring flag. **NULL means shock was not monitored.**. Possible values: Active, Passive.", + "shock_event_count": "bigint. Count of shock events detected. Example: 5." + }, + "tracking": { + "location_tracking_state": "text. Location-tracking status indicator. Possible values: Active, Failed, Intermittent.", + "gps_completeness_pct": "text. GPS completeness/coverage descriptor. Example: 97.4.", + "route_deviation_incidents": "bigint. Number of route deviation incidents. Possible values: 0, 1, 2, 3.", + "unscheduled_stops": "bigint. Count of unplanned stops. Possible values: 0, 1, 2, 3, 4, 5.", + "stop_duration_min": "bigint. Total minutes spent in unplanned stops. Example: 55." + } + } + }, + "cold_chain_pharma_compliance|QualityCompliance|qc_checklist": { + "column_meaning": "JSONB column. Bundles seal integrity, product/pack condition, documentation and regulatory checks plus release status into a single JSONB field for one-click quality-decision review.", + "fields_meaning": { + "security": { + "seal_status": "text. Security seal status. **NULL means seal status not captured.**. Possible values: Broken, Intact, Suspicious.", + "security_incident": "text. Security incident notes. Possible values: 0, 1, 2." + }, + "product_integrity": { + "product_intact_check": "text. Product-integrity check result. Possible values: Conditional, Failed, Passed.", + "pack_condition": "text. Packaging condition assessment. **NULL means packaging condition not assessed.**. 
Possible values: Compromised, Damaged, Good.", + "label_condition": "text. Label condition assessment. Possible values: Clear, Damaged, Illegible." + }, + "documentation": { + "documentation_complete": "text. Documentation completeness status. Possible values: Complete, Incomplete, Partial.", + "certification_status": "text. Certification status at reception. Possible values: Complete, Expired, Missing." + }, + "customs_and_regulatory": { + "custom_clearance_status": "text. Customs-clearance status. Possible values: Cleared, Delayed, Pending.", + "import_permit_status": "text. Import-permit compliance status. Possible values: Not Required, Pending, Valid.", + "regulatory_compliance_status": "text. Regulatory compliance status. Possible values: Compliant, Non-compliant, Under Review." + }, + "gdp_quality": { + "sop_compliance": "text. Standard operating procedure compliance status. Possible values: Full, Non-compliant, Partial.", + "gdp_compliance": "text. Good distribution practice compliance status. Possible values: Full, Non-compliant, Partial.", + "quality_agreement_status": "text. Quality agreement status with partners. Possible values: Active, Expired, Pending.", + "quality_review_status": "text. Quality review status. Possible values: Approved, Pending, Rejected.", + "responsible_person": "text. Responsible quality person. Example: Scott Alexander.", + "quality_approval_ts": "timestamp. Timestamp of quality approval. **NULL means approval pending.**. Example: 2025/1/23." + }, + "release": { + "product_release_status": "text. Product release status. Possible values: Quarantined, Rejected, Released.", + "release_ts": "timestamp. Product release timestamp. Possible values: 2025/2/19.", + "quarantine_reason": "text. Reason for quarantine if applicable. **NULL means product not quarantined.**. Possible values: Damage, Documentation, Temperature Deviation." + } + } + }, + "cold_chain_pharma_compliance|IncidentAndRiskManagement|incident_risk_record": { + "column_meaning": "JSONB column. Combines stability review, deviation investigation, risk assessment, reporting and impact analysis for a shipment into one JSONB structure to support CAPA and pharmacovigilance workflows.", + "fields_meaning": { + "stability_and_quality": { + "stability_data_review": "text. Stability-data review summary. Possible values: Completed, Not Required, Pending.", + "stability_impact_assessment": "text. Assessment of stability impact. **NULL means impact not assessed.**. Possible values: Major, Minor.", + "product_quality_impact": "text. Product-quality impact summary. **NULL means impact not assessed.**. Possible values: Confirmed, Possible." + }, + "batch_decision": { + "batch_release_decision": "text. Decision on batch release. Possible values: Approved, Pending, Rejected." + }, + "deviation_investigation": { + "investigation_status": "text. Deviation investigation status. **NULL means investigation not started.**. Possible values: Completed, Ongoing.", + "corrective_actions": "text. Corrective actions taken. **NULL means actions not defined.**. Possible values: Implemented, Pending.", + "preventive_actions": "text. Preventive actions planned. **NULL means actions not defined.**. Possible values: Implemented, Planned." + }, + "risk": { + "risk_assessment_status": "text. Overall risk-assessment status. Possible values: Completed, Ongoing, Required.", + "risk_level": "text. Assigned risk level. **NULL means level not set.**. Possible values: High, Low, Medium." 
+ }, + "reporting_notification": { + "incident_report_status": "text. Incident-reporting status. **NULL means report not filed.**. Possible values: Draft, Reviewed, Submitted.", + "health_authority_notification": "text. Health-authority notification status. Possible values: Completed, Not Required, Pending.", + "authority_response": "text. Authority response summary. **NULL means no response yet.**. Possible values: Acknowledged, Investigation.", + "customer_notification": "text. Customer notification status. Possible values: Completed, Not Required, Pending.", + "customer_response": "text. Customer response summary. **NULL means no response yet.**. Possible values: Accepted, Rejected." + }, + "impact": { + "patient_impact": "text. Assessment of patient impact. **NULL means impact not assessed.**. Possible values: Confirmed, Possible.", + "market_impact": "text. Assessment of market impact. **NULL means impact not assessed.**. Possible values: Limited, Significant.", + "reputation_impact": "text. Reputation impact statement. **NULL means impact not assessed.**. Possible values: Major, Minor." + }, + "lessons_learned": "text. Lessons learned documentation reference. Possible values: In Progress, No, Yes." + } + } +} \ No newline at end of file diff --git a/cold_chain_pharma_compliance/cold_chain_pharma_compliance_kb.jsonl b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..dd4f54b1c836616e670f25c145665da06636bcac --- /dev/null +++ b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_kb.jsonl @@ -0,0 +1,60 @@ +{"id": 0, "knowledge": "Temperature Excursion Duration (TED)", "description": "The total time a shipment spends outside its required temperature range during transit.", "definition": "TED = \\sum_{i=1}^{n} t_i, \\text{where } t_i \\text{ is the duration in minutes of each individual temperature excursion event.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Temperature Excursion Severity Index (TESI)", "description": "Measures the severity of temperature excursions relative to product temperature requirements.", "definition": "TESI = TED \\times \\frac{|T_{max} - T_{allowed}| + |T_{min} - T_{allowed}|}{2}, \\text{where } T_{max} \\text{ and } T_{min} \\text{ are the maximum and minimum temperatures recorded during excursions, and } T_{allowed} \\text{ is the allowed temperature range midpoint.}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 2, "knowledge": "Critical Temperature Exposure", "description": "Indicates when a product has been exposed to temperatures that may significantly impact product quality.", "definition": "An exposure event where the product temperature deviates more than 5°C from its specified temperature range for more than 60 consecutive minutes.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Route Risk Classification", "description": "Categorizes shipping routes based on their risk level for temperature excursions.", "definition": "Routes are classified as: 'Low Risk' (0-1 historical excursion events per 10 shipments), 'Medium Risk' (2-4 historical excursion events per 10 shipments), 'High Risk' (5+ historical excursion events per 10 shipments).", "type": "value_illustration", "children_knowledge": -1} +{"id": 4, "knowledge": "GDP Certification Status", "description": "Illustrates the certification levels for carriers in Good Distribution Practice.", "definition": "Carriers can 
have one of the following certification statuses: 'None' (no certification), 'GDP' (Good Distribution Practice certified), 'CEIV Pharma' (IATA Center of Excellence for Independent Validators certification), or 'Both' (having both GDP and CEIV Pharma certifications). Null values indicate pending certification status verification.", "type": "value_illustration", "children_knowledge": -1} +{"id": 5, "knowledge": "Cold Chain Compliance Rate (CCCR)", "description": "Measures the percentage of shipments that maintained required temperature conditions throughout transit.", "definition": "CCCR = \\frac{N_{compliant}}{N_{total}} \\times 100\\%, \\text{where } N_{compliant} \\text{ is the number of shipments with zero temperature excursions, and } N_{total} \\text{ is the total number of shipments.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Temperature Sensitivity Tiers", "description": "Classifies pharmaceutical products based on their sensitivity to temperature variations.", "definition": "Products are classified into three sensitivity tiers: 'Low' (can tolerate up to 24 hours of minor temperature deviations), 'Medium' (can tolerate up to 8 hours of minor temperature deviations), 'High' (cannot tolerate temperature deviations beyond 2 hours).", "type": "value_illustration", "children_knowledge": -1} +{"id": 7, "knowledge": "Tier 1 Cold Chain Products", "description": "Identifies the highest value and most temperature-sensitive pharmaceutical products.", "definition": "Products with both a 'High' temperature sensitivity and a value exceeding $100,000 USD.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 8, "knowledge": "Temperature Monitoring Gap", "description": "Indicates periods where temperature data is missing during transit.", "definition": "A period of over 15 minutes where no temperature data points were recorded in a shipment that should have continuous monitoring.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Product Storage Classifications", "description": "Standardized temperature ranges for pharmaceutical product storage.", "definition": "Four standard storage classifications: '2-8°C' (refrigerated), '-20°C' (frozen), '-70°C' (ultra-low temperature), and '15-25°C' (controlled room temperature).", "type": "value_illustration", "children_knowledge": -1} +{"id": 10, "knowledge": "Cold Chain Monitoring Compliance Score (CCMCS)", "description": "Evaluates the completeness and quality of monitoring data for a shipment.", "definition": "CCMCS = 0.4 \\times GPS\\% + 0.4 \\times Temp\\% + 0.2 \\times (100 - ER), \\text{where GPS\\% is the GPS completeness percentage, Temp\\% is the percentage of expected temperature readings received, and ER is the error rate of readings.}", "type": "calculation_knowledge", "children_knowledge": [8]} +{"id": 11, "knowledge": "Critical Monitoring Failure", "description": "Indicates when monitoring systems fail to provide sufficient data to assess product quality.", "definition": "Occurs when either GPS completeness falls below 80% or when temperature monitoring has gaps exceeding 30 minutes during transit.", "type": "domain_knowledge", "children_knowledge": [8, 10]} +{"id": 12, "knowledge": "On-Time Delivery Performance (OTDP)", "description": "Measures how accurately shipments meet their planned delivery times.", "definition": "OTDP = \\frac{actual\\_duration}{planned\\_duration} \\times 100\\%, \\text{where a value of 100\\% indicates exact on-time delivery, <100\\% 
is early, and >100\\% is late delivery.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Delivery Performance Classification", "description": "Categorizes delivery performance based on variance from expected delivery time.", "definition": "Shipments are classified as: 'Early' (>2 hours before scheduled time), 'On-Time' (within ±2 hours of scheduled time), 'Delayed' (2-24 hours after scheduled time), 'Severely Delayed' (>24 hours after scheduled time).", "type": "value_illustration", "children_knowledge": [12]} +{"id": 14, "knowledge": "Quality Agreement Status", "description": "Illustrates the current state of quality agreements between parties in the cold chain.", "definition": "Quality agreements can have the following statuses: 'Active' (current agreement in force), 'Expired' (agreement has lapsed), 'Pending' (agreement under review), or null (no formal agreement exists).", "type": "value_illustration", "children_knowledge": -1} +{"id": 15, "knowledge": "Carrier Performance Index (CPI)", "description": "Measures the overall performance of a carrier across multiple shipments.", "definition": "CPI = 0.4 \\times CCCR + 0.3 \\times (100 - ATNR) + 0.2 \\times (100 - ASDI) + 0.1 \\times DPR, \\text{where CCCR is Cold Chain Compliance Rate, ATNR is Average Temperature Non-conformance Rate, ASDI is Average Shock and Damage Incidents, and DPR is Documentation Problem Rate.}", "type": "calculation_knowledge", "children_knowledge": [5]} +{"id": 16, "knowledge": "Major Pharmaceutical Markets", "description": "Identifies the primary global pharmaceutical markets for cold chain distribution.", "definition": "Major pharmaceutical markets include: United States, European Union (specifically Germany, France, Italy, Spain, and United Kingdom), Japan, China, Brazil, India, and Russia.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 17, "knowledge": "High-Risk Shipping Origin-Destination Pairs", "description": "Identifies shipping routes that historically show elevated risks for cold chain integrity.", "definition": "Shipping lane combinations that either: 1) cross more than three climate zones, 2) involve more than two carrier transfers, 3) include countries with historical customs delays exceeding 48 hours, or 4) have previously documented temperature excursion rates above 15%.", "type": "domain_knowledge", "children_knowledge": [3, 16]} +{"id": 18, "knowledge": "Location Tracking States", "description": "Illustrates the possible states of location tracking systems during transit.", "definition": "Location tracking can be in one of these states: 'Active' (providing regular updates), 'Intermittent' (occasional gaps in tracking), 'Failed' (no longer providing location data), or null (tracking was not implemented for this shipment).", "type": "value_illustration", "children_knowledge": -1} +{"id": 19, "knowledge": "Packing Integrity Risk Factor (PIRF)", "description": "Quantifies the risk of packaging failure based on product, route, and handling conditions.", "definition": "PIRF = \\frac{S \\times D \\times T}{P}, \\text{where S is the shock event count, D is the distance in thousands of km, T is a temperature factor (1 for 15-25°C, 1.5 for 2-8°C, 2 for -20°C, 2.5 for -70°C), and P is the packaging robustness factor (1-10, with 10 being most robust).}", "type": "calculation_knowledge", "children_knowledge": [9]} +{"id": 20, "knowledge": "Shipment Risk Score (SRS)", "description": "A comprehensive risk assessment score for cold chain shipments.", 
"definition": "SRS = 0.3 \\times TESI + 0.25 \\times PIRF + 0.25 \\times (100 - CCMCS) + 0.2 \\times RRF, \\text{where TESI is Temperature Excursion Severity Index, PIRF is Packing Integrity Risk Factor, CCMCS is Cold Chain Monitoring Compliance Score, and RRF is Route Risk Factor (1-10 based on Route Risk Classification).}", "type": "calculation_knowledge", "children_knowledge": [1, 10, 19, 3]} +{"id": 21, "knowledge": "Regulatory Compliance Status Definitions", "description": "Explains the different compliance status classifications used in quality assessment.", "definition": "Compliance statuses are defined as: 'Compliant' (meets all regulatory requirements), 'Non-compliant' (fails to meet one or more critical requirements), 'Under Review' (compliance being assessed), or null (compliance status not yet determined).", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Temperature-Sensitive Product Categories", "description": "The main categories of pharmaceutical products requiring temperature-controlled transport.", "definition": "Temperature-sensitive product categories include: Vaccines, Biologics (including monoclonal antibodies and recombinant proteins), Blood Products (including plasma and platelets), and Insulin products.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Shock Event Significance Levels", "description": "Classifies impact events based on their potential to damage pharmaceutical products.", "definition": "Shock events are classified as: 'Minor' (unlikely to cause damage), 'Moderate' (potential to damage if repeated), 'Severe' (likely to cause damage to product or packaging), with null values indicating no shock monitoring was implemented.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Cold Chain Vehicle Qualification Status", "description": "Illustrates the validation levels of vehicles used in pharmaceutical transport.", "definition": "Vehicles can be classified as: 'Validated' (fully validated through performance and temperature mapping studies), 'Qualified' (basic qualification but not fully validated), 'Unqualified' (not formally qualified for pharmaceutical transport).", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Time In Range Percentage (TIRP)", "description": "Calculates the percentage of time a shipment's temperature stayed within required parameters.", "definition": "TIRP = \\frac{T_{total} - TED}{T_{total}} \\times 100\\%, \\text{where } T_{total} \\text{ is the total transit time and TED is the Temperature Excursion Duration.}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 26, "knowledge": "Mean Kinetic Temperature (MKT)", "description": "A calculated temperature that expresses the overall effect of temperature fluctuations during storage or transit.", "definition": "MKT = \\frac{-\\Delta H/R}{\\ln\\left(\\frac{\\sum_{i=1}^{n} e^{-\\Delta H/RT_i}}{n}\\right)}, \\text{where } \\Delta H \\text{ is the activation energy (usually 83.144 kJ/mol for pharmaceuticals), R is the gas constant (8.3144 J/mol/K), and } T_i \\text{ is each temperature point in Kelvin.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 27, "knowledge": "Quality Risk Zones", "description": "Identifies risk levels for temperature-sensitive products based on excursion metrics.", "definition": "Quality risk zones are: 'Green Zone' (TIRP > 98% and no excursions > 30 min), 'Yellow Zone' (95% ≤ TIRP ≤ 98% or any 
excursion 30-60 min), 'Red Zone' (TIRP < 95% or any excursion > 60 min).", "type": "domain_knowledge", "children_knowledge": [25, 0]} +{"id": 28, "knowledge": "Humidity Sensitivity Categories", "description": "Classifies pharmaceutical products based on their sensitivity to humidity variations.", "definition": "Humidity sensitivity categories are: 'Not Sensitive' (can tolerate wide humidity variations), 'Moderately Sensitive' (can tolerate limited humidity variations), 'Highly Sensitive' (requires strict humidity control), with null values indicating humidity sensitivity not specified.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Premium Transport Container Types", "description": "Identifies specialized containers used for high-value pharmaceutical transport.", "definition": "Premium transport containers include: Envirotainer RAP e2, Envirotainer RKN e1, va-Q-tainer USx, CSafe RKN, DoKaSch Opticooler, and Sonoco ThermoSafe PharmaPort 360.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 30, "knowledge": "Supply Chain Resilience Score (SCRS)", "description": "Measures a cold chain's ability to maintain integrity despite disruptions.", "definition": "SCRS = 0.4 \\times ART + 0.3 \\times RRD + 0.2 \\times SBP + 0.1 \\times SMC, \\text{where ART is Alternative Route Availability (0-10), RRD is Redundant Resources Depth (0-10), SBP is Supplier Backup Presence (0-10), and SMC is Stock Management Capability (0-10).}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 31, "knowledge": "Pharmaceutical Stability Budget", "description": "The total allowable time a product can spend outside of ideal conditions before quality is compromised.", "definition": "The maximum cumulative duration (typically specified in hours or days) that a pharmaceutical product can experience conditions outside its labeled storage requirements without significant impact to its quality, safety, or efficacy. 
Null values indicate stability budget has not been determined for the product.", "type": "domain_knowledge", "children_knowledge": [6, 9]} +{"id": 32, "knowledge": "Stability Budget Consumption Rate (SBCR)", "description": "Measures how quickly a shipment is consuming its allocated stability budget during transit.", "definition": "SBCR = \\frac{TED}{SB_{total}} \\times 100\\%, \\text{where TED is the Temperature Excursion Duration and } SB_{total} \\text{ is the total stability budget allocated for the product.}", "type": "calculation_knowledge", "children_knowledge": [0, 31]} +{"id": 33, "knowledge": "Package Integrity Monitoring Systems", "description": "Identifies technologies used to monitor physical integrity of packages during transit.", "definition": "Package integrity monitoring systems include: Shock Indicators (mechanical devices that show when a package has received an impact), Tilt Indicators (show if a package was tilted beyond acceptable angles), Electronic Impact Recorders (provide detailed shock measurements), and Pressure Indicators (monitor pressure changes in sealed containers).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 34, "knowledge": "Product Release Decision Framework", "description": "Structured approach for determining if temperature-sensitive products can be released for use after transport.", "definition": "A decision framework considering: 1) Presence of temperature excursions, 2) Stability data for specific excursion profiles, 3) Package integrity, 4) Product appearance, and 5) Analytical testing results if required. Products can be 'Released' (meets all criteria), 'Released with CAPA' (meets essential criteria but requires corrective action), 'Quarantined' (requires further investigation), or 'Rejected' (failed critical criteria).", "type": "domain_knowledge", "children_knowledge": [27, 31, 33]} +{"id": 35, "knowledge": "Qualification Status of Temperature Monitoring Devices", "description": "Indicates the validation level of monitoring devices used in pharmaceutical transport.", "definition": "Monitoring devices can be: 'Fully Qualified' (calibrated with NIST-traceable standards and validated for pharmaceutical use), 'Partially Qualified' (calibrated but not fully validated), 'Unqualified' (not formally qualified), with null values indicating qualification status is unknown.", "type": "value_illustration", "children_knowledge": -1} +{"id": 36, "knowledge": "Lane Risk Potential (LRP)", "description": "Quantifies the risk associated with a specific shipping route based on historical performance.", "definition": "LRP = \\frac{TE_{total} + SD_{total} + CD_{total}}{N_{shipments}}, \\text{where } TE_{total} \\text{ is the total count of temperature excursions, } SD_{total} \\text{ is the total count of shipping delays, } CD_{total} \\text{ is the total count of customs delays, and } N_{shipments} \\text{ is the number of shipments on that lane.}", "type": "calculation_knowledge", "children_knowledge": [0, 13]} +{"id": 37, "knowledge": "Storage Temperature Requirements for Biologics", "description": "Specifies the standard temperature storage requirements for biological pharmaceutical products.", "definition": "Most biologics require storage at either '2-8°C' (refrigerated) or '-20°C' (frozen), with certain specialized biologic products requiring '-70°C' (ultra-low temperature) storage. 
Room temperature (15-25°C) storage is rarely suitable for biologics unless specifically formulated for stability at those temperatures.", "type": "domain_knowledge", "children_knowledge": [9, 22]} +{"id": 38, "knowledge": "Last Mile Delivery Risk", "description": "Specific risks associated with the final stage of pharmaceutical product delivery.", "definition": "Last mile delivery risks include: Lack of temperature-controlled vehicles, Multiple stops increasing door-opening frequency, Variable driver training in handling procedures, Inconsistent receiving procedures at destinations, and Limited monitoring capabilities compared to main transport segments.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 39, "knowledge": "Security Incident Severity Scale", "description": "Categorizes security events during pharmaceutical transport by their severity.", "definition": "Security incidents are classified on a scale of 0-4: '0' (No security concerns), '1' (Minor procedural deviation), '2' (Moderate security concern without evidence of tampering), '3' (Clear evidence of attempted tampering), '4' (Confirmed breach with product access), with null values indicating security assessment was not performed.", "type": "value_illustration", "children_knowledge": -1} +{"id": 40, "knowledge": "Temperature Profile Categorization", "description": "Classifies the pattern of temperature readings during a shipment.", "definition": "Temperature profiles are categorized as: 'Stable' (minimal variations within range), 'Cyclic' (regular patterns of variation within range), 'Trend' (gradual increase or decrease over time), 'Excursion' (periods outside acceptable range), or 'Erratic' (unpredictable variations suggesting monitoring issues).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 41, "knowledge": "Excursion Impact Assessment (EIA)", "description": "Evaluates the potential quality impact of temperature excursions on product.", "definition": "EIA = \\sum_{i=1}^{n} (|T_i - T_{limit}| \\times t_i), \\text{where } T_i \\text{ is the temperature during excursion } i, T_{limit} \\text{ is the nearest temperature limit (upper or lower), and } t_i \\text{ is the duration of excursion } i \\text{ in hours.}", "type": "calculation_knowledge", "children_knowledge": [0, 9]} +{"id": 42, "knowledge": "Primary Cold Chain Monitoring Authorities", "description": "Key regulatory bodies that oversee cold chain management for pharmaceuticals.", "definition": "Primary regulatory authorities for pharmaceutical cold chains include: FDA (US Food and Drug Administration), EMA (European Medicines Agency), MHRA (UK Medicines and Healthcare products Regulatory Agency), Health Canada, TGA (Australian Therapeutic Goods Administration), PMDA (Japanese Pharmaceuticals and Medical Devices Agency), and WHO (World Health Organization).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 43, "knowledge": "Package Thermal Efficiency Rating", "description": "Measures how effectively a packaging system maintains internal temperature despite external conditions.", "definition": "Thermal efficiency rating is classified as: 'Basic' (<24 hours of protection), 'Standard' (24-48 hours of protection), 'Enhanced' (48-96 hours of protection), 'Extended' (>96 hours of protection).", "type": "value_illustration", "children_knowledge": -1} +{"id": 44, "knowledge": "Risk-Based Monitoring Intensity", "description": "Determines appropriate monitoring frequency based on product risk profile.", "definition": "High-risk 
products (biologics, vaccines) require continuous temperature monitoring with 5-minute intervals, medium-risk products should have 15-minute intervals, and low-risk products can use 30-minute intervals, with null values in monitoring frequency indicating unplanned or non-standardized monitoring approaches.", "type": "domain_knowledge", "children_knowledge": [6, 22]} +{"id": 45, "knowledge": "Cold Chain Cost Efficiency Ratio (CCER)", "description": "Measures the relationship between cold chain protection costs and product value.", "definition": "CCER = \\frac{C_{monitoring} + C_{packaging} + C_{transport}}{V_{product}} \\times 100\\%, \\text{where } C_{monitoring}, C_{packaging}, C_{transport} \\text{ are the respective costs, and } V_{product} \\text{ is the product value.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 46, "knowledge": "Quality Management System Maturity Model", "description": "Framework for assessing the sophistication of a cold chain quality management system.", "definition": "QMS maturity levels range from Level 1 (Basic: minimal documentation and reactive approaches) to Level 5 (Optimized: fully integrated systems with continuous improvement). Intermediate levels include Level 2 (Developing: standard procedures established), Level 3 (Defined: processes are well-documented and followed), and Level 4 (Managed: processes are measured and controlled).", "type": "domain_knowledge", "children_knowledge": [14]} +{"id": 47, "knowledge": "Thermodynamic Stability Class", "description": "Categorizes pharmaceutical products based on their thermal stability characteristics.", "definition": "Products are classified as: Class A (highly stable, can tolerate brief temperature excursions with minimal degradation), Class B (moderately stable), Class C (limited stability, requires strict temperature control), and Class D (highly unstable, no temperature excursions permitted).", "type": "value_illustration", "children_knowledge": [6, 31]} +{"id": 48, "knowledge": "Data Logger Reliability Score (DLRS)", "description": "Assesses the reliability of temperature monitoring devices based on historical performance.", "definition": "DLRS = 100 - (10 \\times F_r + 5 \\times F_d + 3 \\times F_c + 2 \\times F_b), \\text{where } F_r \\text{ is the rate of reading failures, } F_d \\text{ is the rate of download failures, } F_c \\text{ is the calibration drift rate, and } F_b \\text{ is the battery failure rate. 
All rates are per 100 deployments.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 49, "knowledge": "Acceptable Temperature Deviation Limits", "description": "Industry standards for tolerable temperature deviations during pharmaceutical transport.", "definition": "For refrigerated products (2-8°C): brief excursions (< 30 min) to 0-12°C may be acceptable; For frozen products (-20°C): brief excursions to -15°C may be acceptable; For ultra-frozen products (-70°C): brief excursions to -60°C may be acceptable; For controlled room temperature products (15-25°C): brief excursions to 10-30°C may be acceptable.", "type": "domain_knowledge", "children_knowledge": [9, 31]} +{"id": 50, "knowledge": "Last Mile Delivery Metrics", "description": "Key performance indicators specific to final stage pharmaceutical delivery.", "definition": "Key last mile metrics include: First Attempt Delivery Success Rate, Temperature Deviation Frequency, Receiver Wait Time, Documentation Completion Rate, and Handler Qualification Status.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 51, "knowledge": "Batch Release Critical Path Elements", "description": "The essential components required before a temperature-sensitive product batch can be released.", "definition": "Critical path elements include: Complete temperature history with no unexplained gaps, Confirmation that any excursions were within stability budgets, Intact security seals or acceptable explanation for compromised seals, Complete chain of custody documentation, and Acceptable visual inspection results.", "type": "domain_knowledge", "children_knowledge": [31, 39, 8]} +{"id": 52, "knowledge": "Electronic Monitoring System Tiers", "description": "Classification of monitoring systems based on their capabilities and features.", "definition": "Electronic monitoring systems are classified as: Tier 1 (basic data loggers with manual download), Tier 2 (enhanced loggers with USB or Bluetooth download), Tier 3 (network-connected devices with real-time alerts), and Tier 4 (fully integrated IoT systems with predictive capabilities and automated interventions).", "type": "value_illustration", "children_knowledge": [35, 48]} +{"id": 53, "knowledge": "Validation Documentation Requirements", "description": "The essential documentation needed to validate cold chain processes and equipment.", "definition": "Required validation documents include: Validation Master Plan, Risk Assessment, User Requirements Specification, Design Qualification, Installation Qualification, Operational Qualification, Performance Qualification, Temperature Mapping Studies, and Final Validation Report.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 54, "knowledge": "Serialization Maturity Model", "description": "Framework for assessing pharmaceutical tracking and tracing capabilities.", "definition": "Serialization maturity levels are: Level 0 (No serialization), Level 1 (Batch-level tracking only), Level 2 (Package-level serialization without systematic verification), Level 3 (Verified package-level serialization), and Level 4 (End-to-end serialized supply chain with real-time visibility).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 55, "knowledge": "Monitoring Device Calibration Status", "description": "Indicates if temperature monitoring devices are properly calibrated for accurate readings.", "definition": "Calibration status includes: 'Current' (calibrated within required timeframe, typically 12 months), 'Expired' 
(calibration period has elapsed), 'Exempt' (single-use devices with factory calibration), with null values indicating calibration status is unknown or not documented.", "type": "value_illustration", "children_knowledge": -1} +{"id": 56, "knowledge": "GDP Corrective Action Categories", "description": "Standard categories for corrective actions in Good Distribution Practice.", "definition": "GDP corrective action categories include: Training Enhancement, Documentation Update, Process Modification, Equipment Upgrade, Supplier Management, Transport Adjustment, Monitoring Improvement, and Regulatory Compliance Actions.", "type": "domain_knowledge", "children_knowledge": [4, 46]} +{"id": 57, "knowledge": "Temperature Accuracy Impact Factor (TAIF)", "description": "Quantifies the potential error in product status assessment due to monitoring device accuracy limitations.", "definition": "TAIF = \\frac{|T_{max} - T_{upper}|}{A} + \\frac{|T_{min} - T_{lower}|}{A}, \\text{where } T_{max} \\text{ and } T_{min} \\text{ are the maximum and minimum recorded temperatures, } T_{upper} \\text{ and } T_{lower} \\text{ are the upper and lower temperature limits, and } A \\text{ is the stated accuracy of the monitoring device in °C.}", "type": "calculation_knowledge", "children_knowledge": [48, 55]} +{"id": 58, "knowledge": "Vehicle Temperature Monitoring Types", "description": "Methods used to monitor temperatures in transport vehicles.", "definition": "Vehicle temperature monitoring types include: 'Continuous' (uninterrupted monitoring throughout transport), 'Interval' (periodic checks at predetermined times), 'None' (no active monitoring), with null values indicating monitoring type was not specified in documentation.", "type": "value_illustration", "children_knowledge": -1} +{"id": 59, "knowledge": "Data Integrity Components", "description": "Essential elements that ensure cold chain data is reliable for decision-making.", "definition": "Data integrity components include the ALCOA+ principles: Attributable (traceable to individual/system), Legible (readable and permanent), Contemporaneous (recorded at time of activity), Original (source data or certified copy), Accurate (error-free), plus Complete (all data included), Consistent (expected sequence followed), Enduring (preserved for required period), and Available (accessible when needed).", "type": "domain_knowledge", "children_knowledge": -1} \ No newline at end of file diff --git a/cold_chain_pharma_compliance/cold_chain_pharma_compliance_schema.txt b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..45fbfbd7f829b9f5310b0f5994c44924c8a0cf42 --- /dev/null +++ b/cold_chain_pharma_compliance/cold_chain_pharma_compliance_schema.txt @@ -0,0 +1,238 @@ +CREATE TABLE "shipments" ( +reckey text NOT NULL, +log_ts timestamp without time zone NULL, +shiptok text NULL, +shipment_overview jsonb NULL, + PRIMARY KEY (reckey) +); + +First 3 rows: +reckey log_ts shiptok shipment_overview +-------- ------------------- --------- 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CC299394 NaT SH84615 {'route': {'origin': {'hub': 'Nielsen, Hogan and Morgan Facility', 'nation': 'Ireland', 'address': '07026 Bell Trail\nGonzalezland, CT 04965'}, 'risk_note': 'Low', 'destination': {'hub': 'Garcia-Mann Hospital', 'nation': None, 'address': '704 Valentine Parkways\nNew Andrew, PW 63414'}, 'route_string': 'Ireland -> Rwanda'}, 'timing_performance': {'go_time_ts': '2025-02-18T21:29:00', 'distance_km': 1710, 'end_time_ts': '2025-02-19T08:29:00', 'planned_eta_hrs': 18, 'actual_duration_hrs': 72}} +CC122014 2025-02-10 23:41:55 SH95068 {'route': {'origin': {'hub': 'Bowers-Hurley Facility', 'nation': 'Pitcairn Islands', 'address': '7757 Victoria Walk Apt. 470\nAlexishaven, GU 69290'}, 'risk_note': 'Low', 'destination': {'hub': 'Mcgee-Gonzales Hospital', 'nation': 'Libyan Arab Jamahiriya', 'address': '75571 Cline Causeway Suite 713\nNorth Christopher, RI 33695'}, 'route_string': 'Pitcairn Islands -> Libyan Arab Jamahiriya'}, 'timing_performance': {'go_time_ts': '2025-02-18T17:29:00', 'distance_km': 303, 'end_time_ts': '2025-02-19T08:29:00', 'planned_eta_hrs': 8, 'actual_duration_hrs': 55}} +CC892358 2025-02-05 08:30:29 SH68318 {'route': {'origin': {'hub': 'Smith-Carter Facility', 'nation': 'Benin', 'address': '646 Wolf Village\nSouth Natalieburgh, NM 29188'}, 'risk_note': 'High', 'destination': {'hub': 'Webb, Mendez and Davis Hospital', 'nation': 'Iran', 'address': '43648 Jackson Plaza\nLake Ambertown, LA 02999'}, 'route_string': 'Benin -> Iran'}, 'timing_performance': {'go_time_ts': '2025-02-19T02:29:00', 'distance_km': 3371, 'end_time_ts': '2025-02-19T08:29:00', 'planned_eta_hrs': 58, 'actual_duration_hrs': 48}} +... 
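The `shipment_overview` column keeps routing and timing details in one nested JSONB document. A minimal sketch of how the nested keys shown in the sample rows can be extracted in PostgreSQL (illustrative only; not part of the benchmark's gold SQL or test cases), here flagging shipments whose actual transit time exceeded the planned ETA:

```sql
-- Illustrative sketch: flag shipments whose actual transit time exceeded the planned ETA,
-- extracting both figures from the nested shipment_overview JSONB document.
SELECT
    reckey,
    shipment_overview #>> '{route,route_string}'                                AS lane,
    (shipment_overview #>> '{timing_performance,planned_eta_hrs}')::numeric     AS planned_eta_hrs,
    (shipment_overview #>> '{timing_performance,actual_duration_hrs}')::numeric AS actual_duration_hrs
FROM shipments
WHERE (shipment_overview #>> '{timing_performance,actual_duration_hrs}')::numeric
      > (shipment_overview #>> '{timing_performance,planned_eta_hrs}')::numeric;
```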
+ + +CREATE TABLE "environmentalmonitoring" ( +reckeylink text NOT NULL, +devlink text NULL, +env_metrics jsonb NULL, + PRIMARY KEY (reckeylink), + FOREIGN KEY (reckeylink) REFERENCES shipments(reckey), + FOREIGN KEY (devlink) REFERENCES monitoringdevices(mondevref) +); + +First 3 rows: +reckeylink devlink env_metrics +------------ --------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CC892358 MD8154 {'humidity': {'avg_pct': 65.8, 'excursion_count': 4, 'humidity_monitor': 'No'}, 'tracking': {'stop_duration_min': 77, 'unscheduled_stops': 5, 'gps_completeness_pct': '96.3', 'location_tracking_state': 'Active', 'route_deviation_incidents': 2}, 'temperature': {'avg_c': 3, 'max_c': 10, 'min_c': 4, 'alarm_count': 5, 'excursion_count': 0, 'excursion_duration_min': 31}, 'light_and_shock': {'shock_monitor': None, 'shock_event_count': 5, 'light_monitor_mode': None}} +CC837054 MD2483 {'humidity': {'avg_pct': 69.6, 'excursion_count': 0, 'humidity_monitor': None}, 'tracking': {'stop_duration_min': 78, 'unscheduled_stops': 5, 'gps_completeness_pct': '91.2', 'location_tracking_state': 'Failed', 'route_deviation_incidents': 2}, 'temperature': {'avg_c': 7.6, 'max_c': 9.5, 'min_c': 2.2, 'alarm_count': 4, 'excursion_count': 6, 'excursion_duration_min': 111}, 'light_and_shock': {'shock_monitor': 'Passive', 'shock_event_count': 5, 'light_monitor_mode': None}} +CC324346 MD3745 {'humidity': {'avg_pct': 44.9, 'excursion_count': 5, 'humidity_monitor': 'Yes'}, 'tracking': {'stop_duration_min': 171, 'unscheduled_stops': 5, 'gps_completeness_pct': '97.3', 'location_tracking_state': 'Intermittent', 'route_deviation_incidents': 2}, 'temperature': {'avg_c': 5.7, 'max_c': 11.8, 'min_c': 0.6, 'alarm_count': 5, 'excursion_count': 4, 'excursion_duration_min': 58}, 'light_and_shock': {'shock_monitor': None, 'shock_event_count': 6, 'light_monitor_mode': None}} +... + + +CREATE TABLE "products" ( +prodcode text NOT NULL, +prodlabel text NULL, +prodcat text NULL, +maker text NULL, + PRIMARY KEY (prodcode) +); + +First 3 rows: +prodcode prodlabel prodcat maker +---------- ----------------------------------- -------------- -------------- +PH75271 strategize value-added deliverables Vaccines York Ltd +PH70163 maximize enterprise platforms Biologics Davis and Sons +PH42851 target dot-com partnerships Blood Products +... 
+ + +CREATE TABLE "productbatches" ( +batchtag text NOT NULL, +prodlink text NULL, +mfg_ts timestamp without time zone NULL, +exp_ts timestamp without time zone NULL, +store_cond text NULL, +tempmin real NULL, +tempmax real NULL, +tempsense text NULL, +pack_type text NULL, +pack_cnt bigint NULL, +valusd text NULL, +insusd real NULL, + PRIMARY KEY (batchtag), + FOREIGN KEY (prodlink) REFERENCES products(prodcode) +); + +First 3 rows: +batchtag prodlink mfg_ts exp_ts store_cond tempmin tempmax tempsense pack_type pack_cnt valusd insusd +---------- ---------- ------------------- ------------------- ------------ --------- --------- ----------- ----------- ---------- ----------- -------- +BT909380 PH75271 2024-08-02 00:00:00 2026-09-30 00:00:00 2-8°C 2 12 Medium Ampoule 936 $57,421.85 nan +BT468883 PH70163 2024-07-22 00:00:00 2025-11-29 00:00:00 -20°C -70 -55 High Container 899 $188,736.45 226484 +BT980454 PH42851 2024-08-05 00:00:00 2026-03-01 00:00:00 15-25°C 15 30 Medium Vial 778 $680,991.64 817190 +... + + +CREATE TABLE "carriers" ( +carriertag text NOT NULL, +carriercert text NULL, + PRIMARY KEY (carriertag) +); + +First 3 rows: +carriertag carriercert +-------------------------------- ------------- +Rodriguez, Mcintyre and Richards +Lawson PLC GDP +Howard PLC GDP +... + + +CREATE TABLE "vehicles" ( +vehref text NOT NULL, +carrierbond text NULL, +vehtype text NULL, +veh_qual text NULL, +temp_mon_sys text NULL, + PRIMARY KEY (vehref), + FOREIGN KEY (carrierbond) REFERENCES carriers(carriertag) +); + +First 3 rows: +vehref carrierbond vehtype veh_qual temp_mon_sys +-------- -------------------------------- --------- ---------- -------------- +VH6122 Rodriguez, Mcintyre and Richards Aircraft Validated Interval +VH3281 Lawson PLC Aircraft Qualified Continuous +VH6131 Howard PLC Container Qualified Interval +... + + +CREATE TABLE "monitoringdevices" ( +mondevref text NOT NULL, +calibts timestamp without time zone NULL, +devacc text NULL, +recintmin bigint NULL, +temppts bigint NULL, + PRIMARY KEY (mondevref) +); + +First 3 rows: +mondevref calibts devacc recintmin temppts +----------- ------------------- -------- ----------- --------- +MD9886 2024-10-06 00:00:00 0.31 nan 891 +MD8695 2024-09-04 00:00:00 0.19 5 812 +MD3440 2024-11-12 00:00:00 0.39 15 650 +... 
+ + +CREATE TABLE "qualitycompliance" ( +reckeyqc text NOT NULL, +qc_checklist jsonb NULL, + PRIMARY KEY (reckeyqc), + FOREIGN KEY (reckeyqc) REFERENCES shipments(reckey) +); + +First 3 rows: +reckeyqc qc_checklist +---------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CC880430 {'release': {'release_ts': '2025-02-19T00:00:00', 'quarantine_reason': None, 'product_release_status': 'Rejected'}, 'security': {'seal_status': 'Broken', 'security_incident': '2'}, 'gdp_quality': {'gdp_compliance': 'Full', 'sop_compliance': 'Full', 'responsible_person': 'Sarah Sharp', 'quality_approval_ts': '2025-01-25T00:00:00', 'quality_review_status': 'Pending', 'quality_agreement_status': 'Active'}, 'documentation': {'certification_status': 'Complete', 'documentation_complete': 'Complete'}, 'product_integrity': {'pack_condition': 'Good', 'label_condition': 'Clear', 'product_intact_check': 'Conditional'}, 'customs_and_regulatory': {'import_permit_status': 'Pending', 'custom_clearance_status': 'Cleared', 'regulatory_compliance_status': 'Non-compliant'}} +CC808096 {'release': {'release_ts': '2025-02-19T00:00:00', 'quarantine_reason': 'Damage', 'product_release_status': 'Rejected'}, 'security': {'seal_status': None, 'security_incident': '1'}, 'gdp_quality': {'gdp_compliance': 'Partial', 'sop_compliance': 'Full', 'responsible_person': 'Donna Day', 'quality_approval_ts': None, 'quality_review_status': 'Approved', 'quality_agreement_status': 'Pending'}, 'documentation': {'certification_status': 'Complete', 'documentation_complete': 'Partial'}, 'product_integrity': {'pack_condition': 'Damaged', 'label_condition': 'Illegible', 'product_intact_check': 'Failed'}, 'customs_and_regulatory': {'import_permit_status': 'Not Required', 'custom_clearance_status': 'Pending', 'regulatory_compliance_status': 'Under Review'}} +CC299394 {'release': {'release_ts': '2025-02-19T00:00:00', 'quarantine_reason': 'Damage', 'product_release_status': 'Released'}, 'security': {'seal_status': 'Broken', 'security_incident': '0'}, 'gdp_quality': {'gdp_compliance': 'Non-compliant', 'sop_compliance': 'Full', 'responsible_person': 'James Chan', 'quality_approval_ts': '2025-01-26T00:00:00', 'quality_review_status': 'Pending', 'quality_agreement_status': 'Expired'}, 'documentation': {'certification_status': 'Expired', 'documentation_complete': 'Partial'}, 'product_integrity': {'pack_condition': 'Damaged', 'label_condition': 'Illegible', 'product_intact_check': 'Passed'}, 'customs_and_regulatory': {'import_permit_status': 'Valid', 'custom_clearance_status': 'Pending', 'regulatory_compliance_status': 'Under Review'}} +... 
+ + +CREATE TABLE "incidentandriskmanagement" ( +reckeyrisk text NOT NULL, +incident_risk_record jsonb NULL, + PRIMARY KEY (reckeyrisk), + FOREIGN KEY (reckeyrisk) REFERENCES shipments(reckey) +); + +First 3 rows: +reckeyrisk incident_risk_record +------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CC808096 {'risk': {'risk_level': 'Medium', 'risk_assessment_status': 'Ongoing'}, 'impact': {'market_impact': 'Limited', 'patient_impact': None, 'reputation_impact': 'Major'}, 'batch_decision': {'batch_release_decision': 'Rejected'}, 'lessons_learned': 'No', 'stability_and_quality': {'stability_data_review': 'Not Required', 'product_quality_impact': 'Confirmed', 'stability_impact_assessment': None}, 'reporting_notification': {'customer_response': None, 'authority_response': 'Investigation', 'customer_notification': 'Completed', 'incident_report_status': 'Reviewed', 'health_authority_notification': 'Pending'}, 'deviation_investigation': {'corrective_actions': 'Pending', 'preventive_actions': 'Implemented', 'investigation_status': None}} +CC299394 {'risk': {'risk_level': None, 'risk_assessment_status': 'Completed'}, 'impact': {'market_impact': 'Limited', 'patient_impact': 'Confirmed', 'reputation_impact': None}, 'batch_decision': {'batch_release_decision': 'Approved'}, 'lessons_learned': 'No', 'stability_and_quality': {'stability_data_review': 'Pending', 'product_quality_impact': None, 'stability_impact_assessment': None}, 'reporting_notification': {'customer_response': 'Accepted', 'authority_response': 'Investigation', 'customer_notification': 'Pending', 'incident_report_status': 'Draft', 'health_authority_notification': 'Completed'}, 'deviation_investigation': {'corrective_actions': 'Implemented', 'preventive_actions': 'Implemented', 'investigation_status': 'Completed'}} +CC122014 {'risk': {'risk_level': 'Low', 'risk_assessment_status': 'Required'}, 'impact': {'market_impact': None, 'patient_impact': 'Possible', 'reputation_impact': None}, 'batch_decision': {'batch_release_decision': 'Approved'}, 'lessons_learned': 'In Progress', 'stability_and_quality': {'stability_data_review': 'Not Required', 'product_quality_impact': 'Confirmed', 'stability_impact_assessment': None}, 'reporting_notification': {'customer_response': None, 'authority_response': None, 'customer_notification': 'Pending', 'incident_report_status': 'Draft', 'health_authority_notification': 'Completed'}, 'deviation_investigation': {'corrective_actions': None, 'preventive_actions': 'Implemented', 'investigation_status': None}} +... 
+ + +CREATE TABLE "insuranceclaims" ( +reckeyclaim text NOT NULL, +claimneed text NULL, +claimstat text NULL, +claimusd real NULL, +costimpactusd real NULL, +respparty text NULL, + PRIMARY KEY (reckeyclaim), + FOREIGN KEY (reckeyclaim) REFERENCES shipments(reckey) +); + +First 3 rows: +reckeyclaim claimneed claimstat claimusd costimpactusd respparty +------------- ----------- ----------- ---------- --------------- ----------- +CC381686 No 79419.2 47835.3 Unknown +CC880430 Yes Approved 41425.4 9683.92 Shipper +CC808096 Yes Rejected 93706.9 321.82 Receiver +... + + +CREATE TABLE "reviewsandimprovements" ( +reckeyrev text NOT NULL, +procimprove text NULL, +trainneeds text NULL, +sop_update text NULL, +vendorimpact text NULL, +nextshipchg text NULL, +monfreqchg text NULL, +routeriskreassess text NULL, +packspecrev text NULL, +lanequalstat text NULL, +techupgrade text NULL, +costoptpot text NULL, +sustainimpact text NULL, +carbonkg real NULL, +energyscore text NULL, +docformat text NULL, +dataintegrity text NULL, +audittrailcomplete text NULL, +e_sigstat text NULL, +sysaccessctrl text NULL, +databackupstat text NULL, +reportgenstat text NULL, +distlist text NULL, +archivestat text NULL, +reviewnotes text NULL, +nextrevts timestamp without time zone NULL, +closestat text NULL, + PRIMARY KEY (reckeyrev), + FOREIGN KEY (reckeyrev) REFERENCES shipments(reckey) +); + +First 3 rows: +reckeyrev procimprove trainneeds sop_update vendorimpact nextshipchg monfreqchg routeriskreassess packspecrev lanequalstat techupgrade costoptpot sustainimpact carbonkg energyscore docformat dataintegrity audittrailcomplete e_sigstat sysaccessctrl databackupstat reportgenstat distlist archivestat reviewnotes nextrevts closestat +----------- ------------- ------------ ------------ ---------------- ------------- ------------ ------------------- ------------- -------------- ---------------- ------------ --------------- ---------- ------------- ----------- --------------- -------------------- ------------ --------------- ---------------- --------------- ---------- ------------- --------------------------------------------------------------------------------------------- ------------------- ------------ +CC381686 No Under Review Under Review Warning Increased Not Required Required Invalid Under Evaluation Medium Positive 2617.3 81 Hybrid Under Review 91 Invalid Adequate Pending Pending Extended Pending Agent usually ten food focus. Throughout return mean. 2025-03-13 00:00:00 Under Review +CC880430 No Yes Under Review Disqualification Major No Change Required Required Invalid No Low Positive 1825.7 86.1 Paper Verified 93.2 Invalid Compromised Current Pending Extended Completed Protect reason ask child month. President stuff back point kitchen. 2025-03-13 00:00:00 Open +CC808096 No Yes Under Review Disqualification Decreased Required Completed Valid No High Positive nan 75.5 Hybrid Under Review 93.8 Not Required Adequate Current Completed Extended Not Required Tend third child discuss draw message rock. Source development offer sing person stage night. 2025-03-07 00:00:00 Closed +... + + +CREATE TABLE "shipsensorlink" ( +shpnode text NOT NULL, +devnode text NOT NULL, + PRIMARY KEY (shpnode, devnode), + FOREIGN KEY (shpnode) REFERENCES shipments(reckey), + FOREIGN KEY (devnode) REFERENCES monitoringdevices(mondevref) +); + +First 3 rows: +shpnode devnode +--------- --------- +CC381686 MD9886 +CC880430 MD8695 +CC808096 MD3440 +... 
diff --git a/cross_border/cross_border_column_meaning_base.json b/cross_border/cross_border_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..fba0247884a272a6c79ec46919ceece8f32b5c53 --- /dev/null +++ b/cross_border/cross_border_column_meaning_base.json @@ -0,0 +1,160 @@ +{ + "cross_border|DataFlow|RecordRegistry": "TEXT. Unique data-flow record identifier. PK. Example: CB932064.", + "cross_border|DataFlow|FLOWSTAMP": "TEXT. Timestamp string representing when the flow was logged. Example: 2024-03-11T11:43:31.", + "cross_border|RiskManagement|RISKTRACE": "BIGINT. Unique risk-management record identifier. PK.", + "cross_border|RiskManagement|flow_link": "TEXT. Identifier of the related data-flow record. FK to DataFlow.", + "cross_border|RiskManagement|recordRegistry": "TEXT. Redundant record registry reference string.", + "cross_border|DataProfile|profileTrace": "BIGINT. Unique data-profile identifier. PK.", + "cross_border|DataProfile|Flow_Sign": "TEXT. Associated data-flow identifier for this profile. FK to DataFlow.", + "cross_border|DataProfile|riskJoin": "BIGINT. Linked risk-management identifier. FK to RiskManagement.", + "cross_border|DataProfile|Record_Registry": "TEXT. Record registry reference string.", + "cross_border|DataProfile|DATATYPE": "TEXT. Primary data-type description. Possible values: Commercial, Financial, Industrial, Medical, Personal.", + "cross_border|DataProfile|dataSense": "TEXT. Data-sensitivity description. Possible values: Critical, High, Low, Medium.", + "cross_border|DataProfile|vol_gb": "REAL. Volume of data in gigabytes. Example: 1093.6.", + "cross_border|DataProfile|REC_TALLY": "BIGINT. Total number of data records. Example: 2629296.", + "cross_border|DataProfile|subjectTally": "BIGINT. Count of data subjects represented. Example: 754585.", + "cross_border|DataProfile|ret_days": "BIGINT. Data-retention period in days. Example: 2208.", + "cross_border|DataProfile|FormatType": "TEXT. Data-format type (e.g., CSV, Parquet). Possible values: Mixed, Structured, Unstructured.", + "cross_border|DataProfile|qlty_score": "REAL. Data quality score. Example: 52.45.", + "cross_border|DataProfile|Int_check": "TEXT. Data-integrity check result. **NULL means integrity check not performed.**. Possible values: Failed, Partial, Passed.", + "cross_border|DataProfile|CSUMVERIFY": "TEXT. Checksum-verification status. Possible values: Failed, Pending, Success.", + "cross_border|DataProfile|srcValState": "TEXT. Source-validation state. **NULL means source validation not performed.**. Possible values: Failed, Pending, Verified.", + "cross_border|DataProfile|DEST_VAL_state": "TEXT. Destination-validation state. Possible values: Failed, Pending, Verified.", + "cross_border|SecurityProfile|SECURITY_TRACE": "BIGINT. Unique security-profile identifier. PK.", + "cross_border|SecurityProfile|flowKey": "TEXT. Associated data-flow identifier. FK to DataFlow.", + "cross_border|SecurityProfile|RISKKEY": "BIGINT. Linked risk-management identifier. FK to RiskManagement.", + "cross_border|SecurityProfile|profile_key": "BIGINT. Linked data-profile identifier.
FK to DataProfile.", + "cross_border|SecurityProfile|RecordRegistry": "TEXT. Record registry reference string. Example: CB932064.", + "cross_border|SecurityProfile|enc_state": "TEXT. Overall encryption state descriptor. Possible values: Full, Partial.", + "cross_border|SecurityProfile|ENCMETH": "TEXT. Encryption method applied. **NULL means encryption method not specified.**. Possible values: AES-256, Custom, RSA-2048, SM4.", + "cross_border|SecurityProfile|keyManState": "TEXT. Key-management state descriptor. **NULL means key management state not recorded.**. Possible values: Centralized, Distributed, Hybrid.", + "cross_border|SecurityProfile|MASK_LEVEL": "TEXT. Data-masking level applied. **NULL means masking level not specified.**. Possible values: Full, Partial.", + "cross_border|SecurityProfile|anonMeth": "TEXT. Anonymisation method employed. Possible values: K-Anonymity, L-Diversity, T-Closeness.", + "cross_border|SecurityProfile|PSYMSTATE": "TEXT. Pseudonymisation state. **NULL means pseudonymisation not applied.**. Possible values: Applied, Partial.", + "cross_border|SecurityProfile|authMeth": "TEXT. Authentication method used. Possible values: Basic, MFA, SSO.", + "cross_border|SecurityProfile|AUTHZ_FRAME": "TEXT. Authorisation framework employed. Possible values: ABAC, Custom, RBAC.", + "cross_border|SecurityProfile|acl_state": "TEXT. Access-control-list state descriptor. Possible values: Adequate, Strong, Weak.", + "cross_border|SecurityProfile|APISECSTATE": "TEXT. API security state descriptor. **NULL means API security not assessed.**. Possible values: Review Required, Secure, Vulnerable.", + "cross_border|SecurityProfile|logIntCheck": "TEXT. Logging integrity-check status. Possible values: Failed, Passed, Pending.", + "cross_border|SecurityProfile|LogRetDays": "BIGINT. Log-retention period in days. Example: 905.", + "cross_border|SecurityProfile|bkp_state": "TEXT. Backup state descriptor. **NULL means backup state not documented.**. Possible values: Current, Failed, Outdated.", + "cross_border|SecurityProfile|DRECSTATE": "TEXT. Data-recovery state descriptor. Possible values: Missing, Tested, Untested.", + "cross_border|SecurityProfile|bc_state": "TEXT. Business-continuity or blockchain state descriptor. Possible values: Active, Outdated, Review Required.", + "cross_border|VendorManagement|Vendor_Trace": "BIGINT. Unique vendor-management record identifier. PK.", + "cross_border|VendorManagement|SEC_JOIN": "BIGINT. Associated security-profile identifier. FK to SecurityProfile.", + "cross_border|VendorManagement|riskassoc": "BIGINT. Linked risk-management identifier. FK to RiskManagement.", + "cross_border|VendorManagement|recordregistry": "TEXT. Record registry reference string.", + "cross_border|VendorManagement|VENDASSESS": "TEXT. Vendor security assessment result. **NULL means vendor assessment not completed.**. Possible values: Completed, Due, In Progress.
", + "cross_border|VendorManagement|vendSecRate": "TEXT. Vendor security rating. Possible values: A, B, C, D.", + "cross_border|VendorManagement|VEND_AUD_DATE": "DATE. Date of the most recent vendor audit. Example: 2024-05-30.", + "cross_border|VendorManagement|contrState": "TEXT. Contract state descriptor. **NULL means contract state unknown.**. Possible values: Active, Expired, Under Review.", + "cross_border|VendorManagement|CONTR_EXPIRE": "DATE. Contract expiration date. Example: 2027-01-12.", + "cross_border|VendorManagement|dpa_state": "TEXT. Data-processing-agreement state. Possible values: Pending, Required, Signed.", + "cross_border|VendorManagement|SCCSTATE": "TEXT. Standard Contractual Clauses state. Possible values: Implemented, Not Required, Partial.", + "cross_border|VendorManagement|bcr_state": "TEXT. Binding Corporate Rules state. Possible values: Approved, Not Applicable, Pending.", + "cross_border|VendorManagement|docuState": "TEXT. Documentation status descriptor. Possible values: Complete, Incomplete, Partial.", + "cross_border|VendorManagement|pol_comp": "TEXT. Policy-compliance status. Possible values: Full, Non-compliant, Partial.", + "cross_border|VendorManagement|procComp": "TEXT. Process-compliance status. **NULL means process compliance not assessed.**. Possible values: Full, Non-compliant, Partial.", + "cross_border|VendorManagement|train_state": "TEXT. Training compliance state. Possible values: Current, Due, Overdue.", + "cross_border|VendorManagement|certState": "TEXT. Certification state descriptor. **NULL means certification state not recorded.**. Possible values: Expired, Pending, Valid.", + "cross_border|VendorManagement|MONSTATE": "TEXT. Ongoing monitoring state. Possible values: Active, Inactive, Partial.", + "cross_border|VendorManagement|rep_state": "TEXT. Reporting state descriptor. Possible values: Current, Delayed, Overdue.", + "cross_border|VendorManagement|stakeComm": "TEXT. Stakeholder communication status. Possible values: Limited, Poor, Regular.", + "cross_border|Compliance|complianceTrace": "BIGINT. Unique compliance record identifier. PK.", + "cross_border|Compliance|risk_tie": "BIGINT. Linked risk-management identifier. FK to RiskManagement.", + "cross_border|Compliance|vendorTie": "BIGINT. Linked vendor-management identifier. FK to VendorManagement.", + "cross_border|Compliance|recordRegistry": "TEXT. Record registry reference string.", + "cross_border|Compliance|LEGALBASE": "TEXT. Legal basis for processing. **NULL means legal basis not determined.**. Possible values: Consent, Contract, Legal Obligation, Legitimate Interest.", + "cross_border|Compliance|consent_state": "TEXT. Consent state descriptor. Possible values: Expired, Not Required, Pending, Valid.
", + "cross_border|Compliance|ConsentColl": "DATE. Date when consent was collected. Example: 13 Sep 2024.", + "cross_border|Compliance|consent_exp": "DATE. Consent expiration date. Example: 05/17/2026.", + "cross_border|Compliance|purp_limit": "TEXT. Purpose-limitation indicator. Possible values: General, Multiple, Specific.", + "cross_border|Compliance|PURP_DESC": "TEXT. Description of data-processing purpose. Possible values: Business Operations, Compliance, Marketing, Research.", + "cross_border|Compliance|gdprComp": "TEXT. GDPR compliance indicator. Possible values: Compliant, Non-compliant, Partial.", + "cross_border|Compliance|CCPA_COMP": "TEXT. CCPA compliance indicator. Possible values: Compliant, Non-compliant, Partial.", + "cross_border|Compliance|PIPLcomp": "TEXT. PIPL compliance indicator. Possible values: Compliant, Non-compliant, Partial.", + "cross_border|Compliance|loc_law_comp": "TEXT. Local-law compliance indicator. Possible values: Compliant, Non-compliant, Partial.", + "cross_border|Compliance|RegApprovals": "TEXT. Regulatory approvals information. Possible values: Not Required, Obtained, Pending.", + "cross_border|Compliance|priv_imp_assess": "TEXT. Privacy-impact assessment status. Possible values: Completed, In Progress, Required.", + "cross_border|Compliance|Datasubjright": "TEXT. Data-subject rights fulfilment status. Possible values: Fully Supported, Limited, Partial.", + "cross_border|AuditAndCompliance|AUDIT_TRACE": "BIGINT. Unique audit record identifier. PK.", + "cross_border|AuditAndCompliance|profjoin": "BIGINT. Linked data-profile identifier. FK to DataProfile.", + "cross_border|AuditAndCompliance|COMP_JOIN": "BIGINT. Linked compliance identifier. FK to Compliance.", + "cross_border|AuditAndCompliance|VendJoin": "BIGINT. Linked vendor-management identifier. FK to VendorManagement.", + "cross_border|AuditAndCompliance|record_registry": "TEXT. Record registry reference string.", + "cross_border|AuditAndCompliance|AudtrailState": "TEXT. Audit-trail state descriptor. Possible values: Complete, Missing, Partial.", + "cross_border|AuditAndCompliance|FINDTALLY": "BIGINT. Total findings tally. Example: 3.", + "cross_border|AuditAndCompliance|critFindNum": "BIGINT. Number of critical findings. **NULL means no critical findings recorded.**. Example: 6.0.", + "cross_border|AuditAndCompliance|remed_state": "TEXT. Remediation state descriptor. Possible values: Completed, In Progress, Not Started.", + "cross_border|AuditAndCompliance|REMED_DUE": "DATE. Remediation due date. Example: 2025-03-16.", + "cross_border|AuditAndCompliance|authNotify": "TEXT. Authority-notification status. **NULL means authority not notified.**. Possible values: Not Required, Required, Submitted.
", + "cross_border|AuditAndCompliance|border_mech": "TEXT. Cross-border mechanism descriptor. Possible values: Adequacy Decision, BCRs, Derogations, SCCs.", + "cross_border|AuditAndCompliance|TRANSIMPASSESS": "TEXT. Transfer impact assessment status. **NULL means assessment not performed.**. Possible values: Completed, In Progress, Required.", + "cross_border|AuditAndCompliance|localReqs": "TEXT. Local requirements descriptor. Possible values: Met, Not Met, Partial.", + "cross_border|AuditAndCompliance|data_map_state": "TEXT. Data-mapping state. Possible values: Complete, Outdated, Partial.", + "cross_border|AuditAndCompliance|SYSINTSTATE": "TEXT. System-integration state. Possible values: Fully Integrated, Manual, Partial.", + "cross_border|AuditAndCompliance|AccReqNum": "BIGINT. Data-access request count. Example: 959.", + "cross_border|AuditAndCompliance|DEL_REQ_NUM": "BIGINT. Deletion request count. Example: 409.", + "cross_border|AuditAndCompliance|rect_req_num": "BIGINT. Rectification request count. Example: 76.", + "cross_border|AuditAndCompliance|PORTREQNUM": "BIGINT. Portability request count. Example: 53.", + "cross_border|AuditAndCompliance|resp_time_day": "REAL. Average response time in days. Example: 4.5.", + "cross_border|DataFlow|flow_overview": { + "column_meaning": "JSONB column. Encapsulates the end-to-end characteristics of a single data-transfer stream—routing details, classification, security flagging, and real-time performance statistics—so dashboards and analytics engines can retrieve everything from one JSONB document.", + "fields_meaning": { + "routing": { + "origin_country": "TEXT. Origin location or system name. Example: Niue.", + "destination_country": "TEXT. Destination nation or region code. Example: Djibouti.", + "origin_actor": "TEXT. Actor or system initiating the transfer. Example: Hill Ltd.", + "destination_actor": "TEXT. Destination actor or receiving system. **NULL means destination actor not recorded.**. Example: Davis, Harper and Weber.", + "protocol": "TEXT. Communication-channel protocol (e.g., HTTPS, SFTP). Possible values: Blockchain, HTTPS, Private Network, SFTP.", + "transfer_frequency": "TEXT. Frequency or schedule of the data channel. Possible values: Daily, Hourly, Real-time, Weekly." + }, + "classification": { + "flow_tag": "TEXT. Human-readable tag assigned to the flow. Example: DF7811.", + "data_category": "TEXT. Logical category of the data (e.g., HR, Finance). Possible values: Commercial, Financial, Industrial, Medical, Personal.", + "sensitivity_level": "TEXT. Data-sensitivity classification level. Possible values: Critical, High, Low, Medium.", + "encryption_status": "TEXT. Encryption status of the flow. **NULL means encryption status not specified.**. Possible values: Full, Partial." + }, + "performance": { + "data_size_mb": "REAL. Amount of data transferred in megabytes. **NULL means data size not measured.**. Example: 42668.42.", + "duration_min": "BIGINT.
Transfer duration in minutes. Example: 1068.03.", + "bandwidth_util_pct": "REAL. Percentage of channel bandwidth utilised. Example: 68.81.", + "success_pct": "REAL. Successful-transfer percentage. Example: 99.93.", + "error_count": "BIGINT. Count of transfer error events. Example: 39.", + "retry_count": "BIGINT. Count of transfer retries. Example: 1." + } + } + }, + "cross_border|RiskManagement|risk_management_profile": { + "column_meaning": "JSONB column. Bundles quantitative risk scores, mitigation progress, incident history, cost exposure and compliance-maturity indicators into one JSONB structure for quick reporting and comparative risk analytics.", + "fields_meaning": { + "assessment": { + "risk_score": "REAL. Quantitative risk-assessment score. Example: 75.89.", + "residual_risk_level": "TEXT. Residual risk-level descriptor. Possible values: High, Low, Medium.", + "control_effectiveness_pct": "REAL. Control effectiveness score. **NULL means control effectiveness not evaluated.**. Example: 30.51.", + "compliance_score": "REAL. Compliance score. Example: 76.41.", + "maturity_level": "TEXT. Process-maturity level descriptor. Possible values: Initial, Managed, Optimized." + }, + "mitigation": { + "mitigation_state": "TEXT. Current risk-mitigation state. Possible values: Implemented, Partial, Pending.", + "secure_action": "TEXT. Security action plan description. Possible values: Adequate, Insufficient, Strong.", + "breach_notification": "TEXT. Breach-notification status. Possible values: Established, Missing, Partial.", + "incident_plan_status": "TEXT. Incident-response plan descriptor. Possible values: Active, Missing, Outdated.", + "plan_state": "TEXT. Risk-management plan state. **NULL means plan state not set.**. Possible values: Delayed, Not Started, On Track.", + "next_review_date": "DATE. Next scheduled risk review date. Example: 2025-06-02." + }, + "incident_statistics": { + "incident_count": "BIGINT. Total number of recorded incidents. Example: 8.", + "breach_count": "BIGINT. Number of data breaches. **NULL means no breaches recorded.**. Example: 1.0.", + "near_miss_count": "BIGINT. Near-miss incident count. **NULL means near-miss count unavailable.**. Example: 1.0.", + "avg_resolution_hrs": "REAL. Average incident resolution time in hours. Example: 42.7.", + "sla_compliance_pct": "REAL. Service-level agreement compliance percentage. Example: 97.28." + }, + "financial_impact": { + "cost_usd": "REAL. Financial cost incurred (USD). Example: 62143.31.", + "penalties_usd": "REAL. Penalty amount incurred (USD). Example: 1035760.69.", + "insurance_coverage_state": "TEXT. Insurance or coverage state descriptor. **NULL means coverage state not set.**. Possible values: Adequate, Limited."
+ } + } + } +} \ No newline at end of file diff --git a/cross_border/cross_border_kb.jsonl b/cross_border/cross_border_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..87e7515800ba749521eb028979f7dbea5b7a150f --- /dev/null +++ b/cross_border/cross_border_kb.jsonl @@ -0,0 +1,79 @@ +{"id": 0, "knowledge": "Data Transfer Efficiency (DTE)", "description": "Measures how efficiently information is transferred, considering success rate and error volume.", "definition": "DTE = (successful transfers rate) divided by (number of transfer errors plus one)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Bandwidth Saturation Index (BSI)", "description": "Shows how close a transfer is to using up all available network resources.", "definition": "BSI = (percentage of network used) multiplied by (amount of data sent divided by time spent sending)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Risk Exposure Score (RES)", "description": "Represents the overall level of risk by combining risk estimation and the strength of controls in place.", "definition": "RES = (risk level) multiplied by (one divided by control effectiveness)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Compliance Cost Ratio (CCR)", "description": "Compares the cost spent on following regulations to the potential penalties.", "definition": "CCR = (total cost for compliance) divided by (potential penalty amount plus one)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Data Sensitivity Index (DSI)", "description": "Shows how sensitive a batch of information is, based on volume and sensitivity rating.", "definition": "DSI = (amount of data) multiplied by (3 if sensitivity is high, 2 if medium, 1 if low)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Security Robustness Score (SRS)", "description": "Reflects how strong the protections are for keeping data safe.", "definition": "SRS = 3 if all data is encrypted and only trusted users can access; 2 if either full encryption or strong access control is present; 1 otherwise", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Vendor Reliability Index (VRI)", "description": "Evaluates how trustworthy a partner organization is, considering both their security and contract status.", "definition": "VRI = (security rating value) multiplied by (1 if contract is active, 0.5 otherwise)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Audit Finding Severity (AFS)", "description": "Indicates how serious audit findings are, especially critical issues.", "definition": "AFS = (number of critical issues) divided by (total findings plus one)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Data Subject Request Load (DSRL)", "description": "Shows how many requests people made about their own data.", "definition": "DSRL = (number of access requests) + (number of deletion requests) + (number of correction requests) + (number of data transfer requests)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Cross-Border Risk Factor (CBRF)", "description": "Measures the extra risk when sending data between different countries.", "definition": "CBRF = (overall risk score) multiplied by (2 if source country and target country are different, 1 if the same)", "type": 
"calculation_knowledge", "children_knowledge": [2]} +{"id": 10, "knowledge": "High-Risk Data Flow", "description": "Labels a transfer as high-risk if both risk and sensitivity are above the threshold.", "definition": "A transfer where risk score is above 0.7 and data sensitivity index is above 100", "type": "domain_knowledge", "children_knowledge": [2, 4]} +{"id": 11, "knowledge": "Secure Data Flow", "description": "Marks a transfer as secure if the protections are at the highest level.", "definition": "A transfer where security robustness score equals 3", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 12, "knowledge": "Non-Compliant Vendor", "description": "Marks a partner as non-compliant if they fail key requirements.", "definition": "A partner marked as non-compliant in policy or process", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Critical Audit Issue", "description": "Flags an audit when the severity of findings is high.", "definition": "An audit where audit finding severity is greater than 0.5", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 14, "knowledge": "Sensitive Data Exposure", "description": "Highlights situations where sensitive information is not well-protected.", "definition": "A case where data sensitivity index is above 100 and security robustness score is less than 2", "type": "domain_knowledge", "children_knowledge": [4, 5]} +{"id": 15, "knowledge": "Cross-Border Compliance Gap", "description": "Labels situations where compliance issues occur in cross-country transfers.", "definition": "A compliance record where compliance with regulations is missing in at least one country and the transfer is international", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Vendor Risk Tier", "description": "Sorts partners into risk levels based on reliability.", "definition": "High risk if vendor reliability index < 2, medium if ≥ 2 and < 3, low if ≥ 3", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 17, "knowledge": "Data Integrity Failure", "description": "Shows data batches that failed integrity or checksum checks.", "definition": "A case where integrity check or verification fails", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Overloaded Data Flow", "description": "Flags transfers that use too much bandwidth and are inefficient.", "definition": "A transfer where bandwidth saturation index > 50 and data transfer efficiency < 1.0", "type": "domain_knowledge", "children_knowledge": [0, 1]} +{"id": 19, "knowledge": "Regulatory Risk Exposure", "description": "Flags data flows with high risk due to cross-border rules and compliance gaps.", "definition": "A transfer where cross-border risk factor > 1.5 and there is a compliance gap", "type": "domain_knowledge", "children_knowledge": [9, 15]} +{"id": 20, "knowledge": "Success Percentage", "description": "Shows how often data transfers succeed.", "definition": "Ranges from 0 to 100. Higher means more reliable transfers.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "Risk Assessment Score", "description": "Shows the overall risk level determined in an assessment.", "definition": "Ranges from 0 to 100. 
Higher means greater risk.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Data Volume", "description": "Shows the size of the data batch, in gigabytes.", "definition": "Ranges from small amounts (like 0.1) to large ones (like 1000).", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Encryption Status", "description": "Shows whether all data is fully encrypted or only partially encrypted.", "definition": "Options: full encryption or partial encryption.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Vendor Security Rating", "description": "Shows a partner's security score, where a higher value means better protection.", "definition": "Possible values: 4 for top tier, 3 for good, 2 for average, 1 for weak.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "GDPR Compliance Status", "description": "Shows how well the requirements for data protection are met.", "definition": "Values: compliant, non-compliant, or partial.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Response Time", "description": "Shows how many days it takes to reply to a request.", "definition": "Lower means faster response; higher means delays.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Error Count", "description": "Shows how many errors happened during data transfers.", "definition": "A higher value means more problems.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Control Effectiveness Score", "description": "Shows how well controls are working to reduce risk.", "definition": "Higher score means stronger controls.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Audit Log Retention", "description": "Shows for how many days logs are kept.", "definition": "Higher means longer history is available.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Data Flow Reliability Score (DFRS)", "description": "Measures how reliable a transfer is, considering both efficiency and retry attempts.", "definition": "DFRS = (transfer efficiency) multiplied by (one minus number of retries divided by errors plus one)", "type": "calculation_knowledge", "children_knowledge": [0, 27]} +{"id": 31, "knowledge": "Security Control Cost Ratio (SCCR)", "description": "Shows if security protections are cost-effective.", "definition": "SCCR = (security strength score) divided by (compliance cost plus one)", "type": "calculation_knowledge", "children_knowledge": [5]} +{"id": 32, "knowledge": "Vendor Compliance Burden (VCB)", "description": "Shows how much burden a partner faces from compliance, considering findings and security.", "definition": "VCB = (audit severity) multiplied by (5 minus security rating)", "type": "calculation_knowledge", "children_knowledge": [7, 24]} +{"id": 33, "knowledge": "Cross-Border Data Volume Risk (CDVR)", "description": "Measures risk from sending large amounts of data between countries.", "definition": "CDVR = (cross-border risk factor) multiplied by (data volume)", "type": "calculation_knowledge", "children_knowledge": [9, 22]} +{"id": 34, "knowledge": "Data Subject Request Pressure (DSRP)", "description": "Measures how much pressure comes from responding to data subject requests.", "definition": "DSRP = (total number of data subject requests) multiplied by (response time)", "type": 
"calculation_knowledge", "children_knowledge": [8, 26]} +{"id": 35, "knowledge": "Encryption Coverage Ratio (ECR)", "description": "Measures how much sensitive information is actually encrypted.", "definition": "ECR = (security robustness score) multiplied by (sensitivity index)", "type": "calculation_knowledge", "children_knowledge": [4, 5]} +{"id": 36, "knowledge": "Audit Remediation Load (ARL)", "description": "Shows how much work is needed to fix audit issues, based on findings and request pressure.", "definition": "ARL = (audit severity) multiplied by (number of data subject requests)", "type": "calculation_knowledge", "children_knowledge": [7, 8, 25]} +{"id": 37, "knowledge": "Bandwidth Risk Factor (BRF)", "description": "Measures risk from overusing network resources with sensitive data.", "definition": "BRF = (bandwidth saturation index) multiplied by (data sensitivity index)", "type": "calculation_knowledge", "children_knowledge": [1, 4]} +{"id": 38, "knowledge": "Vendor Risk Amplification (VRA)", "description": "Shows how much risk grows due to vendor issues.", "definition": "VRA = (vendor reliability index) multiplied by (risk exposure score)", "type": "calculation_knowledge", "children_knowledge": [2, 6]} +{"id": 39, "knowledge": "Critical Data Flow Risk", "description": "Labels a transfer as highly risky and unreliable if both risk and reliability scores are out of bounds.", "definition": "A transfer where risk exposure score is above 0.7 and reliability score is below 0.5", "type": "domain_knowledge", "children_knowledge": [2, 30]} +{"id": 40, "knowledge": "Overburdened Compliance Flow", "description": "Highlights transfers that are expensive to keep compliant and need a lot of remediation work.", "definition": "A transfer where compliance cost ratio is above 0.8 and audit remediation load is above 10", "type": "domain_knowledge", "children_knowledge": [3, 36]} +{"id": 41, "knowledge": "Unprotected Sensitive Data", "description": "Marks cases where highly sensitive data is not well encrypted.", "definition": "A case where data sensitivity index is above 100 and encryption coverage ratio is below 2", "type": "domain_knowledge", "children_knowledge": [4, 35]} +{"id": 42, "knowledge": "High-Pressure Data Flow", "description": "Flags data transfers under both heavy request pressure and high bandwidth usage.", "definition": "A transfer where data subject request pressure is above 50 and bandwidth saturation index is above 50", "type": "domain_knowledge", "children_knowledge": [1, 34]} +{"id": 43, "knowledge": "Vendor-Driven Risk Flow", "description": "Highlights data flows with extra risk due to partner issues.", "definition": "A transfer where vendor risk amplification is above 3 and vendor compliance burden is above 2", "type": "domain_knowledge", "children_knowledge": [32, 38]} +{"id": 44, "knowledge": "Cross-Border Audit Risk", "description": "Flags international transfers with large volume and audit issues.", "definition": "A transfer where cross-border data volume risk is above 1000 and audit finding severity is above 0.5", "type": "domain_knowledge", "children_knowledge": [7, 33]} +{"id": 45, "knowledge": "Insecure High-Volume Flow", "description": "Highlights large data batches with weak protections.", "definition": "A case where data volume is above 500 and security robustness score is below 2", "type": "domain_knowledge", "children_knowledge": [5, 22]} +{"id": 46, "knowledge": "Regulatory Overload Flow", "description": "Labels cases where both regulatory risk and 
compliance failure exist together.", "definition": "A transfer where there is regulatory risk exposure and compliance is marked non-compliant", "type": "domain_knowledge", "children_knowledge": [19, 25]} +{"id": 47, "knowledge": "Bandwidth-Constrained Risk", "description": "Flags cases where risk is worsened by bandwidth overuse.", "definition": "A transfer where bandwidth risk factor is above 100 and risk exposure score is above 0.7", "type": "domain_knowledge", "children_knowledge": [2, 37]} +{"id": 48, "knowledge": "Incident Resolution Efficiency (IRE)", "description": "Shows how quickly incidents get resolved compared to expectations.", "definition": "IRE = (percentage of incidents resolved on time) divided by (average resolution time plus one)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 49, "knowledge": "Incident-Prone Data Flow", "description": "Flags data flows with both slow incident resolution and high risk.", "definition": "A transfer where incident resolution efficiency is below 0.5 and high-risk data flow is true", "type": "domain_knowledge", "children_knowledge": [10, 30]} +{"id": 50, "knowledge": "Data Flow Stability Index (DFSI)", "description": "Shows how stable a transfer is, considering reliability and errors.", "definition": "DFSI = (data flow reliability score) multiplied by (success percentage divided by error count plus one)", "type": "calculation_knowledge", "children_knowledge": [20, 27, 30]} +{"id": 51, "knowledge": "Compliance Overhead Ratio (COR)", "description": "Measures the extra burden compliance puts on operations.", "definition": "COR = (data subject request pressure) divided by (compliance cost plus one)", "type": "calculation_knowledge", "children_knowledge": [34]} +{"id": 52, "knowledge": "Security Posture Maturity (SPM)", "description": "Shows how mature security protections are, combining encryption coverage and log retention.", "definition": "SPM = (encryption coverage ratio) multiplied by (number of days logs are kept divided by 365)", "type": "calculation_knowledge", "children_knowledge": [29, 35]} +{"id": 53, "knowledge": "Vendor Risk Concentration (VRC)", "description": "Measures how much a single partner's risk issues affect the whole system.", "definition": "VRC = (vendor risk amplification) multiplied by (one minus vendor reliability index)", "type": "calculation_knowledge", "children_knowledge": [6, 38]} +{"id": 54, "knowledge": "Cross-Border Compliance Exposure (CBCE)", "description": "Measures compliance risk for international transfers based on volume and regulatory gaps.", "definition": "CBCE = (cross-border data volume risk) multiplied by (2 if compliance is non-compliant, 1 otherwise)", "type": "calculation_knowledge", "children_knowledge": [25, 33]} +{"id": 55, "knowledge": "Incident Impact Factor (IIF)", "description": "Measures the potential impact of an incident, combining risk and how well it is resolved.", "definition": "IIF = (risk exposure score) multiplied by (one minus incident resolution efficiency)", "type": "calculation_knowledge", "children_knowledge": [2, 48]} +{"id": 56, "knowledge": "Data Retention Risk Score (DRRS)", "description": "Measures risk from keeping sensitive data too long.", "definition": "DRRS = (data sensitivity index) multiplied by (days data is kept divided by 365)", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 57, "knowledge": "Audit Compliance Pressure (ACP)", "description": "Shows how much pressure audit findings and compliance put on operations.", 
"definition": "ACP = (audit remediation load) multiplied by (audit finding severity)", "type": "calculation_knowledge", "children_knowledge": [7, 36]} +{"id": 58, "knowledge": "Bandwidth Compliance Risk (BCR)", "description": "Shows compliance risk when bandwidth is overused.", "definition": "BCR = (bandwidth risk factor) multiplied by (1.5 if compliance is partial, 2 if non-compliant, 1 otherwise)", "type": "calculation_knowledge", "children_knowledge": [25, 37]} +{"id": 59, "knowledge": "Vendor Security Cost Index (VSCI)", "description": "Compares the compliance burden from a partner to the cost of protections.", "definition": "VSCI = (vendor compliance burden) divided by (security control cost ratio plus one)", "type": "calculation_knowledge", "children_knowledge": [31, 32]} +{"id": 60, "knowledge": "Unstable High-Risk Flow", "description": "Flags highly risky data transfers that are unstable.", "definition": "A transfer where data flow stability index is below 0.5 and critical data flow risk exists", "type": "domain_knowledge", "children_knowledge": [39, 50]} +{"id": 61, "knowledge": "Overloaded Security Flow", "description": "Highlights cases where security protections are weak and compliance risk is high.", "definition": "A transfer where security posture maturity is below 1 and cross-border compliance exposure is above 100", "type": "domain_knowledge", "children_knowledge": [52, 54]} +{"id": 62, "knowledge": "Excessive Retention Risk", "description": "Flags cases where very sensitive data is kept for too long.", "definition": "A case where data retention risk score is above 50 and data sensitivity index is above 100", "type": "domain_knowledge", "children_knowledge": [4, 56]} +{"id": 63, "knowledge": "Vendor Compliance Risk Cluster", "description": "Flags partners who create clusters of compliance risk.", "definition": "A partner where vendor risk concentration is above 2 and vendor compliance burden is above 2", "type": "domain_knowledge", "children_knowledge": [32, 53]} +{"id": 64, "knowledge": "Incident-Prone Compliance Flow", "description": "Flags data flows with high incident impact and compliance problems.", "definition": "A transfer where incident impact factor is above 0.8 and compliance is non-compliant", "type": "domain_knowledge", "children_knowledge": [25, 55]} +{"id": 65, "knowledge": "Audit-Stressed Data Flow", "description": "Shows transfers under stress from both audit findings and compliance.", "definition": "A transfer where audit compliance pressure is above 5 and compliance overhead ratio is above 0.5", "type": "domain_knowledge", "children_knowledge": [51, 57]} +{"id": 66, "knowledge": "Bandwidth-Limited Compliance Risk", "description": "Flags cases where bandwidth issues make compliance worse.", "definition": "A transfer where bandwidth compliance risk is above 50 and cross-border compliance gap exists", "type": "domain_knowledge", "children_knowledge": [15, 58]} +{"id": 67, "knowledge": "Costly Vendor Risk Flow", "description": "Flags flows where partner risk and cost are both high.", "definition": "A transfer where vendor security cost index is above 1 and vendor risk amplification is above 3", "type": "domain_knowledge", "children_knowledge": [38, 59]} +{"id": 68, "knowledge": "Sensitive Unstable Flow", "description": "Flags unstable transfers involving highly sensitive data.", "definition": "A transfer where data flow stability index is below 0.5 and sensitive data exposure exists", "type": "domain_knowledge", "children_knowledge": [14, 50]} +{"id": 69, 
"knowledge": "High-Impact Audit Risk Flow", "description": "Flags flows with severe audit and regulatory risk.", "definition": "A transfer where regulatory risk exposure exists and audit compliance pressure is above 5", "type": "domain_knowledge", "children_knowledge": [19, 57]} +{"id": 70, "knowledge": "Transfer Path", "description": "Describes the direction of the data transfer, from source to target.", "definition": "A string in the format: source country, an arrow, and destination country", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 71, "knowledge": "Request Breakdown", "description": "Lists the types and numbers of requests individuals made about their data.", "definition": "A list that includes: access requests, deletion requests, correction requests, and transfer requests, with their respective counts", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 72, "knowledge": "Integrity Failure Count (IFC)", "description": "Counts failed checks on a batch of data.", "definition": "IFC = 1 if the integrity check fails, 0 otherwise, plus 1 if verification fails, 0 otherwise", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 73, "knowledge": "Failure Types List", "description": "Lists the reasons for data batch check failures.", "definition": "A comma-separated list: 'integrity check' if integrity fails, 'verification' if checksum fails", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 74, "knowledge": "High Audit Compliance Pressure", "description": "Flags data transfers with heavy audit-related workload.", "definition": "A transfer where audit compliance pressure is above 5", "type": "domain_knowledge", "children_knowledge": [57]} +{"id": 75, "knowledge": "Cross-Border Data Flow", "description": "Labels data transfers where source and target countries are not the same.", "definition": "A transfer where the countries for sending and receiving data are different", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 76, "knowledge": "Slow Remediation Timeline", "description": "Labels cases where the deadline to fix an issue has already passed.", "definition": "A case where the current date minus the fix deadline is greater than zero", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 77, "knowledge": "Nearing Remediation Deadline", "description": "Highlights transfers close to their fix deadline.", "definition": "A case where the time until the fix deadline is between minus five and zero", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 78, "knowledge": "High Vendor Risk Concentration", "description": "Flags partners with many risk issues over time.", "definition": "A case where the current date minus the fix deadline is greater than zero", "type": "domain_knowledge", "children_knowledge": -1} diff --git a/cross_border/cross_border_schema.txt b/cross_border/cross_border_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..4b986ea450f491d3cd6c28e6b9862812f5a5fe77 --- /dev/null +++ b/cross_border/cross_border_schema.txt @@ -0,0 +1,217 @@ +"CREATE" TABLE "DataFlow" ( +"RecordRegistry" text NOT NULL, +"FLOWSTAMP" text NULL, +flow_overview jsonb NULL, + "PRIMARY" KEY (RecordRegistry) +); + + + +"First" 3 rows: +RecordRegistry FLOWSTAMP flow_overview +---------------- ------------------- 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CB932064 2024-03-11T11:43:31 {'routing': {'protocol': 'Blockchain', 'origin_actor': 'Hill Ltd', 'origin_country': 'Niue', 'destination_actor': 'Davis, Harper and Weber', 'transfer_frequency': 'Weekly', 'destination_country': 'Djibouti'}, 'performance': {'error_count': 39, 'retry_count': 1, 'success_pct': 99.93, 'data_size_mb': 42668.42, 'duration_min': 1068, 'bandwidth_util_pct': 68.81}, 'classification': {'flow_tag': 'DF7811', 'data_category': 'Commercial', 'encryption_status': 'Full', 'sensitivity_level': 'High'}} +CB339111 2024-05-01T07:58:45 {'routing': {'protocol': 'SFTP', 'origin_actor': 'Boyer-Mcdonald', 'origin_country': 'Israel', 'destination_actor': None, 'transfer_frequency': 'Hourly', 'destination_country': 'Monaco'}, 'performance': {'error_count': 68, 'retry_count': 48, 'success_pct': 91.77, 'data_size_mb': 32803.96, 'duration_min': 995, 'bandwidth_util_pct': 7.52}, 'classification': {'flow_tag': 'DF9309', 'data_category': 'Personal', 'encryption_status': 'Partial', 'sensitivity_level': 'Low'}} +CB899685 2024-05-07T04:39:04 {'routing': {'protocol': 'Private Network', 'origin_actor': 'Curtis Inc', 'origin_country': 'United States Virgin Islands', 'destination_actor': 'Horton LLC', 'transfer_frequency': 'Real-time', 'destination_country': 'Germany'}, 'performance': {'error_count': 80, 'retry_count': 45, 'success_pct': 93.76, 'data_size_mb': 93843.18, 'duration_min': 1325, 'bandwidth_util_pct': 62.66}, 'classification': {'flow_tag': 'DF8105', 'data_category': 'Financial', 'encryption_status': 'Partial', 'sensitivity_level': 'Low'}} +... 
+ + +"CREATE" TABLE "RiskManagement" ( +"RISKTRACE" bigint NOT NULL, +flow_link text NOT NULL, +"recordRegistry" text NULL, +risk_management_profile jsonb NULL, + "PRIMARY" KEY (RISKTRACE), + "FOREIGN" KEY (flow_link) REFERENCES DataFlow(RecordRegistry) +); + + + +"First" 3 rows: + RISKTRACE flow_link recordRegistry risk_management_profile +----------- ----------- ---------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 CB932064 CB932064 {'assessment': {'risk_score': 75.89, 'maturity_level': 'Optimized', 'compliance_score': 76.41, 'residual_risk_level': 'Medium', 'control_effectiveness_pct': 30.51}, 'mitigation': {'plan_state': 'On Track', 'secure_action': 'Adequate', 'mitigation_state': 'Pending', 'next_review_date': '2025-06-02', 'breach_notification': 'Partial', 'incident_plan_status': 'Missing'}, 'financial_impact': {'cost_usd': 62143.31, 'penalties_usd': 1035760.7, 'insurance_coverage_state': None}, 'incident_statistics': {'breach_count': 1, 'incident_count': 8, 'near_miss_count': 1, 'avg_resolution_hrs': 42.7, 'sla_compliance_pct': 97.28}} + 2 CB339111 CB339111 {'assessment': {'risk_score': 67.11, 'maturity_level': 'Optimized', 'compliance_score': 75.01, 'residual_risk_level': 'High', 'control_effectiveness_pct': 25.93}, 'mitigation': {'plan_state': 'On Track', 'secure_action': 'Strong', 'mitigation_state': 'Pending', 'next_review_date': '2025-10-14', 'breach_notification': 'Established', 'incident_plan_status': 'Active'}, 'financial_impact': {'cost_usd': 697139.75, 'penalties_usd': 1901980.2, 'insurance_coverage_state': 'Limited'}, 'incident_statistics': {'breach_count': 2, 'incident_count': 54, 'near_miss_count': 7, 'avg_resolution_hrs': 149.6, 'sla_compliance_pct': 92.63}} + 3 CB899685 CB899685 {'assessment': {'risk_score': 17.4, 'maturity_level': 'Managed', 'compliance_score': 51.95, 'residual_risk_level': 'Medium', 'control_effectiveness_pct': None}, 'mitigation': {'plan_state': 'Not Started', 'secure_action': 'Insufficient', 'mitigation_state': 'Pending', 'next_review_date': '2025-10-04', 'breach_notification': 'Established', 'incident_plan_status': 'Outdated'}, 'financial_impact': {'cost_usd': 81412.06, 'penalties_usd': 110601.04, 'insurance_coverage_state': None}, 'incident_statistics': {'breach_count': 6, 'incident_count': 40, 'near_miss_count': 3, 'avg_resolution_hrs': 92.3, 'sla_compliance_pct': 99.54}} +... 
+ + +"CREATE" TABLE "DataProfile" ( +"profileTrace" bigint NOT NULL, +"Flow_Sign" text NOT NULL, +"riskJoin" bigint NOT NULL, +"Record_Registry" text NULL, +"DATATYPE" text NULL, +"dataSense" text NULL, +vol_gb real NULL, +"REC_TALLY" bigint NULL, +"subjectTally" bigint NULL, +ret_days bigint NULL, +"FormatType" text NULL, +qlty_score real NULL, +"Int_check" text NULL, +"CSUMVERIFY" text NULL, +"srcValState" text NULL, +"DEST_VAL_state" text NULL, + "PRIMARY" KEY (profileTrace), + "FOREIGN" KEY ("Flow_Sign") REFERENCES DataFlow(RecordRegistry), + "FOREIGN" KEY ("riskJoin") REFERENCES RiskManagement(RISKTRACE) +); + + + +"First" 3 rows: + profileTrace Flow_Sign riskJoin Record_Registry DATATYPE dataSense vol_gb REC_TALLY subjectTally ret_days FormatType qlty_score Int_check CSUMVERIFY srcValState DEST_VAL_state +-------------- ----------- ---------- ----------------- ---------- ----------- -------- ----------- -------------- ---------- ------------ ------------ ----------- ------------ ------------- ---------------- + 1 CB932064 1 CB932064 Commercial High 1093.6 2629296 754585 2208 Mixed 52.45 Passed Failed Pending Pending + 2 CB339111 2 CB339111 Personal Low 9970.36 921745 797722 3456 Unstructured 81.09 Passed Success Verified Verified + 3 CB899685 3 CB899685 Financial Low 7306.78 751112 384363 1728 Mixed 25.2 Pending Failed Failed +... + + +"CREATE" TABLE "SecurityProfile" ( +"SECURITY_TRACE" bigint NOT NULL, +"flowKey" text NOT NULL, +"RISKKEY" bigint NOT NULL, +profile_key bigint NOT NULL, +"RecordRegistry" text NULL, +enc_state text NULL, +"ENCMETH" text NULL, +"keyManState" text NULL, +"MASK_LEVEL" text NULL, +"anonMeth" text NULL, +"PSYMSTATE" text NULL, +"authMeth" text NULL, +"AUTHZ_FRAME" text NULL, +acl_state text NULL, +"APISECSTATE" text NULL, +"logIntCheck" text NULL, +"LogRetDays" bigint NULL, +bkp_state text NULL, +"DRECSTATE" text NULL, +bc_state text NULL, + "PRIMARY" KEY (SECURITY_TRACE), + "FOREIGN" KEY ("flowKey") REFERENCES DataFlow(RecordRegistry), + "FOREIGN" KEY ("RISKKEY") REFERENCES RiskManagement(RISKTRACE), + "FOREIGN" KEY (profile_key) REFERENCES DataProfile(profileTrace) +); + + + +"First" 3 rows: + SECURITY_TRACE flowKey RISKKEY profile_key RecordRegistry enc_state ENCMETH keyManState MASK_LEVEL anonMeth PSYMSTATE authMeth AUTHZ_FRAME acl_state APISECSTATE logIntCheck LogRetDays bkp_state DRECSTATE bc_state +---------------- --------- --------- ------------- ---------------- ----------- --------- ------------- ------------ ----------- ----------- ---------- ------------- ----------- ------------- ------------- ------------ ----------- ----------- ---------- + 1 CB932064 1 1 CB932064 Full T-Closeness Partial Basic ABAC Adequate Vulnerable Pending 905 Current Untested Active + 2 CB339111 2 2 CB339111 Partial Hybrid T-Closeness SSO ABAC Adequate Vulnerable Pending 439 Failed Tested Outdated + 3 CB899685 3 3 CB899685 Partial Custom Distributed Partial T-Closeness Partial SSO Custom Strong Vulnerable Passed 621 Failed Missing Outdated +... 
+ + +"CREATE" TABLE "VendorManagement" ( +"Vendor_Trace" bigint NOT NULL, +"SEC_JOIN" bigint NOT NULL, +riskassoc bigint NOT NULL, +recordregistry text NULL, +"VENDASSESS" text NULL, +"vendSecRate" text NULL, +"VEND_AUD_DATE" date NULL, +"contrState" text NULL, +"CONTR_EXPIRE" date NULL, +dpa_state text NULL, +"SCCSTATE" text NULL, +bcr_state text NULL, +"docuState" text NULL, +pol_comp text NULL, +"procComp" text NULL, +train_state text NULL, +"certState" text NULL, +"MONSTATE" text NULL, +rep_state text NULL, +"stakeComm" text NULL, + "PRIMARY" KEY (Vendor_Trace), + "FOREIGN" KEY ("SEC_JOIN") REFERENCES SecurityProfile(SECURITY_TRACE), + "FOREIGN" KEY (riskassoc) REFERENCES RiskManagement(RISKTRACE) +); + + + +"First" 3 rows: + Vendor_Trace SEC_JOIN riskassoc recordregistry VENDASSESS vendSecRate VEND_AUD_DATE contrState CONTR_EXPIRE dpa_state SCCSTATE bcr_state docuState pol_comp procComp train_state certState MONSTATE rep_state stakeComm +-------------- ---------- ----------- ---------------- ------------ ------------- --------------- ------------ -------------- ----------- ----------- ----------- ----------- ---------- ------------- ------------- ----------- ---------- ----------- ----------- + 1 1 1 CB932064 Completed A 2024-05-30 Active 2027-01-12 Required Implemented Approved Complete Partial Non-compliant Due Pending Inactive Delayed Limited + 2 2 2 CB339111 Completed A 2024-06-30 Under Review 2026-08-16 Required Implemented Pending Incomplete Full Non-compliant Overdue Inactive Delayed Poor + 3 3 3 CB899685 B 2024-11-14 Expired 2026-04-26 Signed Partial Pending Incomplete Full Non-compliant Current Valid Partial Delayed Limited +... + + +"CREATE" TABLE "Compliance" ( +"complianceTrace" bigint NOT NULL, +risk_tie bigint NOT NULL, +"vendorTie" bigint NOT NULL, +"recordRegistry" text NULL, +"LEGALBASE" text NULL, +consent_state text NULL, +"ConsentColl" date NULL, +consent_exp date NULL, +purp_limit text NULL, +"PURP_DESC" text NULL, +"gdprComp" text NULL, +"CCPA_COMP" text NULL, +"PIPLcomp" text NULL, +loc_law_comp text NULL, +"RegApprovals" text NULL, +priv_imp_assess text NULL, +"Datasubjright" text NULL, + "PRIMARY" KEY (complianceTrace), + "FOREIGN" KEY (risk_tie) REFERENCES RiskManagement(RISKTRACE), + "FOREIGN" KEY ("vendorTie") REFERENCES VendorManagement(Vendor_Trace) +); + + + +"First" 3 rows: + complianceTrace risk_tie vendorTie recordRegistry LEGALBASE consent_state ConsentColl consent_exp purp_limit PURP_DESC gdprComp CCPA_COMP PIPLcomp loc_law_comp RegApprovals priv_imp_assess Datasubjright +----------------- ---------- ----------- ---------------- ---------------- --------------- ------------- ------------- ------------ ------------------- ---------- ------------- ------------- -------------- -------------- ----------------- --------------- + 1 1 1 CB932064 Legal Obligation Not Required 2026-05-17 General Business Operations Partial Compliant Non-compliant Non-compliant Obtained Completed Partial + 2 2 2 CB339111 Legal Obligation Valid 2025-02-25 Multiple Research Partial Non-compliant Partial Compliant Not Required In Progress Fully Supported + 3 3 3 CB899685 Expired 2025-03-30 Multiple Research Partial Partial Non-compliant Non-compliant Pending In Progress Limited +... 
+ + +"CREATE" TABLE "AuditAndCompliance" ( +"AUDIT_TRACE" bigint NOT NULL, +profjoin bigint NOT NULL, +"COMP_JOIN" bigint NOT NULL, +"VendJoin" bigint NOT NULL, +record_registry text NULL, +"AudtrailState" text NULL, +"FINDTALLY" bigint NULL, +"critFindNum" bigint NULL, +remed_state text NULL, +"REMED_DUE" date NULL, +"authNotify" text NULL, +border_mech text NULL, +"TRANSIMPASSESS" text NULL, +"localReqs" text NULL, +data_map_state text NULL, +"SYSINTSTATE" text NULL, +"AccReqNum" bigint NULL, +"DEL_REQ_NUM" bigint NULL, +rect_req_num bigint NULL, +"PORTREQNUM" bigint NULL, +resp_time_day real NULL, + "PRIMARY" KEY (AUDIT_TRACE), + "FOREIGN" KEY (profjoin) REFERENCES DataProfile(profileTrace), + "FOREIGN" KEY ("COMP_JOIN") REFERENCES Compliance(complianceTrace), + "FOREIGN" KEY ("VendJoin") REFERENCES VendorManagement(Vendor_Trace) +); + + + +"First" 3 rows: + AUDIT_TRACE profjoin COMP_JOIN VendJoin record_registry AudtrailState FINDTALLY critFindNum remed_state REMED_DUE authNotify border_mech TRANSIMPASSESS localReqs data_map_state SYSINTSTATE AccReqNum DEL_REQ_NUM rect_req_num PORTREQNUM resp_time_day +------------- ---------- ----------- ---------- ----------------- --------------- ----------- ------------- ------------- ----------- ------------ ----------------- ---------------- ----------- ---------------- ---------------- ----------- ------------- -------------- ------------ --------------- + 1 1 1 1 CB932064 Complete 3 6 In Progress 2025-03-16 Not Required SCCs Required Not Met Partial Fully Integrated 959 409 76 53 4.5 + 2 2 2 2 CB339111 Complete 49 8 In Progress 2025-04-14 Required Adequacy Decision Completed Met Partial Manual 36 326 196 21 13.4 + 3 3 3 3 CB899685 Missing 6 5 Not Started 2025-04-10 Required Derogations In Progress Partial Partial Fully Integrated 650 122 0 89 23.2 +... diff --git a/crypto_exchange/crypto_exchange_column_meaning_base.json b/crypto_exchange/crypto_exchange_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..24e2f270f1112f78d4754990633f23af4b89b571 --- /dev/null +++ b/crypto_exchange/crypto_exchange_column_meaning_base.json @@ -0,0 +1,177 @@ +{ + "crypto_exchange|users|USERSTAMP": "TEXT. Unique user identifier. PK. Example: U583322.", + "crypto_exchange|users|acctScope": "TEXT. Scope or permission tier of the user account. **NULL means account scope not specified.**. Possible values: Futures, Margin, Options, Spot.", + "crypto_exchange|orders|ORD_STAMP": "TEXT. Unique order identifier. PK. Example: OR6015391.", + "crypto_exchange|orders|TimeCode": "TIMESTAMP. Timestamp when the order was submitted. Possible values: 2025/2/19. Possible values: 2025/2/19.", + "crypto_exchange|orders|exchSpot": "TEXT. Exchange trading pair or market symbol. Example: EX203.", + "crypto_exchange|orders|market_note": "TEXT. Descriptive note about the market context. Example: ETH-USDT.", + "crypto_exchange|orders|UserRef": "TEXT. User identifier who placed the order. FK to users.", + "crypto_exchange|orders|created_at": "TIMESTAMP. Record-creation timestamp. Example: 2025-02-18T09:54:51+09:00.", + "crypto_exchange|orders|UPDATED_AT": "TIMESTAMP. Last update timestamp for the order record. Possible values: 2025/2/19 08:29.", + "crypto_exchange|orderExecutions|RecordVault": "TEXT. Unique execution record identifier. PK. Example: OB333576. Example: OB333576.", + "crypto_exchange|orderExecutions|ExecMARK": "BIGSERIAL. Auto-increment execution index.", + "crypto_exchange|orderExecutions|FillAmt": "REAL. Executed fill amount. 
Example: 1.447931.", + "crypto_exchange|orderExecutions|remain_amt": "REAL. Remaining order amount. **NULL means remaining amount not recorded.**. Example: 0.545024.", + "crypto_exchange|orderExecutions|fillQuote": "REAL. Execution price quote. Example: 26244.44.", + "crypto_exchange|orderExecutions|FILL_SUM": "REAL. Cumulative fill value. Example: 38000.14.", + "crypto_exchange|orderExecutions|expireSpot": "TIMESTAMP. Expiration timestamp for the execution record. Example: 02/22/2025 08:29.", + "crypto_exchange|orderExecutions|CancelNote": "TEXT. Cancellation reason or note. **NULL means no cancellation note.**. Possible values: Expired, InsufficientFunds, UserRequested. Possible values: Expired, InsufficientFunds, UserRequested.", + "crypto_exchange|orderExecutions|EXECtune": "TEXT. Execution tuning parameter. **NULL means execution tune not specified.**. Possible values: Maker, Taker.", + "crypto_exchange|orderExecutions|Ord_Link": "TEXT. Identifier of the parent order. FK to orders.", + "crypto_exchange|fees|fee_range": "TEXT. Fee-tier or range descriptor. **NULL means fee range not provided.**. Possible values: Tier1, Tier2, Tier3, Tier4.", + "crypto_exchange|fees|FeeRate": "REAL. Applied fee rate (e.g., 0.001 = 0.1 %). Example: 0.0007. Example: 0.0007. Example: 0.0007.", + "crypto_exchange|fees|FEE_TOTAL": "TEXT. Total fee amount as text. Example: $26.60 .", + "crypto_exchange|fees|FeeCoin": "TEXT. Currency in which the fee is charged. **NULL means fee currency not specified.**. Possible values: USD, USDC, USDT. Possible values: USD, USDC, USDT.", + "crypto_exchange|fees|rebRate": "TEXT. Rebate rate if applicable. Example: 0.09%.", + "crypto_exchange|fees|REB_TOTAL": "TEXT. Total rebate amount. Example: 34.200126.", + "crypto_exchange|fees|Order_Link": "TEXT. Related order identifier. FK to orders.", + "crypto_exchange|marketdata|EXCH_SPOT": "TEXT. Exchange spot market symbol. PK.", + "crypto_exchange|marketdata|TimeTrack": "TIMESTAMP. Timestamp when the snapshot was taken.", + "crypto_exchange|marketdata|exchNote": "TEXT. Exchange-specific commentary.", + "crypto_exchange|marketdata|mktCombo": "TEXT. Market combination or instrument group.", + "crypto_exchange|marketstats|MKT_STATS_MARK": "BIGSERIAL. Unique market-statistics identifier. PK.", + "crypto_exchange|marketstats|fundRate": "REAL. Funding rate applicable to perpetual swaps. Example: 0.0004.", + "crypto_exchange|marketstats|FundSpot": "TIMESTAMP. Timestamp of the funding rate. Possible values: 2025/2/19 09:29, 2025/2/19 10:29, 2025/2/19 11:29, 2025/2/19 12:29, 2025/2/19 13:29, 2025/2/19 14:29, 2025/2/19 15:29, 2025/2/19 16:29. Possible values: 2025/2/19 09:29, 2025/2/19 10:29, 2025/2/19 11:29, 2025/2/19 12:29, 2025/2/19 13:29, 2025/2/19 14:29, 2025/2/19 15:29, 2025/2/19 16:29.", + "crypto_exchange|marketstats|openStake": "REAL. Open interest or stake. Example: 808922.74.", + "crypto_exchange|marketstats|VOLday": "REAL. Trading volume of the day. Example: 3045613.27.", + "crypto_exchange|marketstats|TRADE_DAY": "BIGINT. Trading-day counter. **NULL means trade-day not recorded.**. Example: 73628.0.", + "crypto_exchange|marketstats|turnoverDay": "REAL. Turnover for the day. Example: 9406054.28.", + "crypto_exchange|marketstats|priceShiftDay": "TEXT. Price shift over the day. **NULL means price shift not recorded.**. Example: 7.96%.", + "crypto_exchange|marketstats|HIGH_SPOT_DAY": "REAL. Day’s high price. Example: 27823.67.", + "crypto_exchange|marketstats|low_spot_day": "REAL. Day’s low price. 
Example: 25912.08.", + "crypto_exchange|marketstats|VwapDay": "REAL. Volume-weighted average price of the day. Example: 26269.45. Example: 26269.45.", + "crypto_exchange|marketstats|mktSIZE": "REAL. Market capitalisation size. Example: 438986638.7.", + "crypto_exchange|marketstats|Circ_Total": "REAL. Circulating supply total. Example: 79417014.51.", + "crypto_exchange|marketstats|TOTAL_SUPPLY": "REAL. Total supply figure. **NULL means total supply not available.**. Example: 96226091.91.", + "crypto_exchange|marketstats|MAXsupPLY": "TEXT. Maximum supply cap. Example: 188,391,541 tokens.", + "crypto_exchange|marketstats|MktHold": "REAL. Market value held by top addresses. Example: 0.026. Example: 0.026.", + "crypto_exchange|marketstats|tradeRank": "BIGINT. Trading-volume rank across markets. Example: 52.", + "crypto_exchange|marketstats|LIQUIDscore": "REAL. Liquidity score indicator. Example: 0.903.", + "crypto_exchange|marketstats|vol_meter": "TEXT. Volume-intensity meter reading. **NULL means volume meter not set.**. Example: 55.65.", + "crypto_exchange|marketstats|md_link": "TEXT. Reference to the underlying market data. FK to marketdata.", + "crypto_exchange|analyticsindicators|ANALYTICS_NODE": "BIGSERIAL. Unique analytics indicator identifier. PK.", + "crypto_exchange|analyticsindicators|md_ref": "TEXT. Reference to market data snapshot. FK to marketdata.", + "crypto_exchange|riskandmargin|MARG_FORM": "TEXT. Unique margin-form identifier. PK. Possible values: Cross, Isolated.", + "crypto_exchange|riskandmargin|ordStamp": "TEXT. Associated order identifier. FK to orders.", + "crypto_exchange|accountbalances|ACCTBAL_NODE": "BIGSERIAL. Unique account-balance record identifier. PK.", + "crypto_exchange|accountbalances|walletSum": "TEXT. Total wallet balance as text. Example: $316,482.99 .", + "crypto_exchange|accountbalances|AVAIL_SUM": "REAL. Available balance amount. Example: 250957.88.", + "crypto_exchange|accountbalances|frozenSum": "REAL. Frozen balance amount. Example: 65525.11.", + "crypto_exchange|accountbalances|marg_sum": "REAL. Margin allocated sum. Example: 901343.58.", + "crypto_exchange|accountbalances|unrealLINE": "REAL. Unrealised P/L line. Example: 3545.06.", + "crypto_exchange|accountbalances|REAL_LINE": "REAL. Realised P/L line. Example: -38455.08.", + "crypto_exchange|accountbalances|userTAG": "TEXT. User identifier for the balance. FK to users.", + "crypto_exchange|systemmonitoring|SYS_MON_PIVOT": "BIGSERIAL. Unique system-monitoring record identifier. PK.", + "crypto_exchange|systemmonitoring|APIRQTotal": "BIGINT. Total API requests processed. **NULL means API request total not recorded.**. Example: 7728.0.", + "crypto_exchange|systemmonitoring|apiErrTOTAL": "BIGINT. Total API error count. Example: 4.", + "crypto_exchange|systemmonitoring|ApiLatMark": "REAL. API latency marker in milliseconds. **NULL means latency not measured.**. Example: 547.0.", + "crypto_exchange|systemmonitoring|WS_STATE": "TEXT. WebSocket subsystem state. Possible values: Connected, Disconnected.", + "crypto_exchange|systemmonitoring|RateRemain": "BIGINT. Remaining rate-limit quota. Example: 939. Example: 939.", + "crypto_exchange|systemmonitoring|lastUpdNote": "TEXT. Notes for the last system update. Example: 9340653.", + "crypto_exchange|systemmonitoring|SeqCode": "TEXT. Sequence code for deployments. **NULL means sequence code not set.**. Example: 6559236.0. Example: 6559236.0.", + "crypto_exchange|systemmonitoring|SLIP_ratio": "REAL. Slippage ratio indicator. 
Example: -0.0077.", + "crypto_exchange|systemmonitoring|ExecTimeSpan": "REAL. Execution time span metric. **NULL means execution time span not captured.**. Example: 203.0. Example: 203.0.", + "crypto_exchange|systemmonitoring|queue_LINE": "BIGINT. Queue length indicator. Example: 810.", + "crypto_exchange|systemmonitoring|mktEffect": "REAL. Market-impact metric. Example: 0.0014.", + "crypto_exchange|systemmonitoring|Price_Effect": "REAL. Price-impact metric. Example: 0.0039.", + "crypto_exchange|Exchange_OrderType_Map|exchSpot": "TEXT. Exchange market symbol. PK. Example: EX203.", + "crypto_exchange|Exchange_OrderType_Map|ORDERtune": "TEXT. Order-type tuning parameter. PK.", + "crypto_exchange|orders|order_attributes": { + "column_meaning": "JSONB column. Aggregates the core descriptive parameters of an order (type, side, price, size, status, etc.) into one JSONB field for simpler retrieval and analytics.", + "fields_meaning": { + "type": "TEXT. Order-type or tuning parameter. Possible values: Limit, Market, Stop, StopLimit.", + "side": "TEXT. Deal-edge or strategy tag. Possible values: Buy, Sell.", + "limit_price": "REAL. Executed deal price quote. Example: 27080.39.", + "quantity": "REAL. Quantity executed in base units. Example: 1.992955. Example: 1.992955.", + "notional_value": "TEXT. Notional trade value. **NULL means notional value not recorded.**. Example: $81,019.90 .", + "status": "TEXT. Order-flow direction or category. Possible values: Cancelled, Filled, New, PartiallyFilled.", + "time_in_force": "TEXT. Intended order lifespan or validity period. **NULL means lifespan not specified.**. Possible values: FOK, GTC, GTD, IOC.", + "source": "TEXT. Base currency origin or funding source. Possible values: API, Bot, Mobile, Web.", + "client_order_id": "TEXT. Client-side marker or reference label. Example: CL5311016." + } + }, + "crypto_exchange|marketdata|orderbook_metrics": { + "column_meaning": "JSONB column. Captures the complete snapshot of best bid/ask quotes, depths, spreads and reference prices in a single JSONB column for fast order-book analytics.", + "fields_meaning": { + "best_bid": "REAL. Best bid price quote. Example: 27069.78.", + "best_ask": "REAL. Best ask price quote. Example: 27102.24.", + "bid_size": "REAL. Bid depth volume in units. Example: 64.091241.", + "ask_size": "REAL. Ask depth volume in units. Example: 87.620549.", + "bid_depth": "REAL. Aggregate bid depth. Example: 202.", + "ask_depth": "REAL. Aggregate ask depth. Example: 807.", + "spread_abs": "REAL. Absolute spread band. Example: 32.46.", + "spread_pct": "TEXT. Relative spread rate. **NULL means spread rate not calculated.**. Example: 11.99%.", + "mid_price": "REAL. Mid-price quote. Example: 27086.01. Example: 27086.01.", + "mark_price": "REAL. Mark price quote. Example: 27088.44.", + "index_price": "TEXT. Index price reference. Example: $27,096.65 ." + } + }, + "crypto_exchange|analyticsindicators|indicator_bundle": { + "column_meaning": "JSONB column. Groups together real-time technical, order-flow and sentiment indicators so that trading models can access a single JSONB payload instead of many separate columns.", + "fields_meaning": { + "order_flow": { + "buy_wall_pct": "REAL. Buy wall band measurement. Example: 0.017.", + "sell_wall_pct": "REAL. Sell wall band measurement. Example: 0.0891.", + "buy_pressure": "REAL. Aggregate buy-side force. Example: 0.926. Example: 0.926.", + "sell_pressure": "REAL. Aggregate sell-side force. Example: 0.536.", + "order_flow_imbalance": "REAL. Order-flow imbalance. 
Example: -0.599.", + "trade_flow_imbalance": "REAL. Trade imbalance indicator. **NULL means trade imbalance not computed.**. Example: 0.102. Example: 0.102.", + "large_order_ratio": "REAL. Rate of large-size trades. Example: 0.683. Example: 0.683." + }, + "technical": { + "rsi_14": "REAL. 14-period RSI value. Example: 15.28.", + "macd_hist": "REAL. MACD trailing value. Example: 4.77.", + "bollinger_width_pct": "REAL. Bollinger-band span. Example: 31.69." + }, + "sentiment": { + "market_sentiment": "TEXT. Market sentiment descriptor. Possible values: Bearish, Bullish, Neutral.", + "signal": "TEXT. Technical meter indicator. Possible values: Buy, Hold, Sell. Possible values: Buy, Hold, Sell." + }, + "participant_flow": { + "smart_money": "REAL. Smart-money force indicator. Example: 0.449.", + "retail_flow": "REAL. Retail-flow share. **NULL means retail flow not measured.**. Example: -0.893.", + "institutional_flow": "REAL. Institutional-flow share. Example: -0.04. Example: -0.04.", + "whale_activity": "TEXT. Whale-wallet activity descriptor. **NULL means whale motion not assessed.**. Possible values: High, Low, Medium.", + "market_maker_note": "TEXT. Maker-order flow descriptor. Possible values: High, Low, Medium." + }, + "cross_venue": { + "arbitrage_pct": "REAL. Arbitrage-potential score. Example: 0.0088.", + "cross_exchange_spread_pct": "REAL. Cross-exchange band width. **NULL means band not computed.**. Example: 0.0077.", + "funding_gap_pct": "REAL. Funding-rate gap across exchanges. Example: 0.0011.", + "basis_gap_pct": "REAL. Basis gap between spot and futures. Example: 0.0009." + } + } + }, + "crypto_exchange|riskandmargin|margin_risk_profile": { + "column_meaning": "JSONB column. Consolidates leverage, margin requirements, liquidation levels and qualitative risk flags into one JSONB column for quicker position-risk assessment and auditing.", + "fields_meaning": { + "leverage": "TEXT. Leverage scale descriptor. Possible values: 1, 2, 3, 5, 10, 20, 50, 100. Possible values: 1, 2, 3, 5, 10, 20, 50, 100.", + "initial_margin_pct": "REAL. Initial margin requirement. Example: 10794.0.", + "maintenance_margin_pct": "REAL. Maintenance margin requirement. Example: 5397.0. Example: 5397.0.", + "liquidation_price": "TEXT. Liquidation price quote. **NULL means liquidation quote not set.**. Example: $28,043.66 .", + "stop_loss_price": "REAL. Stop-price quote. Example: 27952.84.", + "stop_trigger_price": "REAL. Trigger price quote. Example: 29018.82. Example: 29018.82.", + "trailing_delta": "REAL. Trailing difference for stop orders. Example: 0.029.", + "iceberg_qty": "REAL. Iceberg order count. Example: 0.778158.", + "visible_qty": "REAL. Visible order count. Example: 1.214797.", + "liquidation_risk": "TEXT. Liquidation-risk factor. Example: 0.124.", + "counterparty_risk": "TEXT. Counter-party-risk factor. Example: 0.191.", + "settlement_risk": "TEXT. Settlement-risk factor. Example: 0.387.", + "custody_risk": "TEXT. Customer-specific factor. Example: 0.076.", + "network_risk": "TEXT. Net-position factor. Example: 0.937. Example: 0.937.", + "regulatory_risk": "TEXT. Regulatory factor. Example: 0.426.", + "position_size": "REAL. Position count. Example: 52.028722. Example: 52.028722.", + "position_notional": "REAL. Aggregate position size. Example: 1408958.08.", + "position_side": "TEXT. Position edge strategy. **NULL means position edge not defined.**. Possible values: Long, Short.", + "leverage_multiplier": "TEXT. Position magnitude description. Possible values: 1, 2, 3, 5, 10, 20, 50, 100. 
+ "position_risk_pct": "REAL. Position-specific risk rate. Example: 0.762.", + "margin_ratio_pct": "TEXT. Margin rate tier. **NULL means margin rate not specified.**. Example: 65.90%.", + "margin_call_price": "REAL. Margin call quote. Example: 38557.58.", + "bankruptcy_price": "REAL. Break-point quote for liquidation. **NULL means break-point quote not set.**. Example: 20254.06.", + "collateral_ratio_pct": "TEXT. Collateral rate. **NULL means collateral rate not specified.**. Example: 48.00%.", + "collateral_amount": "TEXT. Collateral sum amount. Example: 750,485.81 USDT.", + "collateral_currency": "TEXT. Coin used as collateral. **NULL means collateral coin not specified.**. Possible values: BTC, ETH, USDC, USDT.", + "insurance_fund_share": "TEXT. Insurance fund share. Example: $28.51 ." + } + } +} \ No newline at end of file diff --git a/crypto_exchange/crypto_exchange_kb.jsonl b/crypto_exchange/crypto_exchange_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..588bf230d582342566049112404f12849527d2a7 --- /dev/null +++ b/crypto_exchange/crypto_exchange_kb.jsonl @@ -0,0 +1,53 @@ +{"id":0,"knowledge":"Spread Percentage","description":"Calculates the spread as a percentage of the midpoint price.","definition":"Spread Percentage = \\frac{\\text{best ask price} - \\text{best bid price}}{\\text{midpoint price}} \\times 100, \\text{where best ask price is the best ask price, best bid price is the best bid price, and midpoint price is the midpoint price.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":1,"knowledge":"Slippage Impact","description":"Calculates the expected price slippage impact for a given order size.","definition":"Slippage Impact = \\frac{\\text{order quantity}}{\\text{quantity available at best bid or ask}} \\times \\text{raw spread}, \\text{where order quantity is the order quantity, quantity available at best bid/ask is the quantity available at best bid/ask, and raw spread is the raw spread.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":2,"knowledge":"Position Value at Risk (PVaR)","description":"Calculates the value at risk for a position based on current market conditions.","definition":"PVaR = \\text{notional value of position} \\times \\text{volatility rating} \\times 0.01, \\text{where notional value of position is the notional value of position and volatility rating is the volatility or fluctuation rating.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":3,"knowledge":"Arbitrage Potential Score (APS)","description":"Quantifies the potential arbitrage opportunity considering multiple factors.","definition":"APS = \\text{arbitrage potential} + \\text{cross-exchange spread} + (\\text{funding-rate gap} \\times 2) + \\text{basis gap}, \\text{where these components represent different types of arbitrage opportunities.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":4,"knowledge":"Market Impact Cost (MIC)","description":"Estimates the market impact cost of executing a large order.","definition":"MIC = \\text{order quantity} \\times \\text{limit price} \\times \\text{large order ratio} \\times 0.01, \\text{where order quantity is the order quantity from the order's attributes}, \\text{limit price is the order's specified limit price}, \\text{and large order ratio is the rate of large-size trades found in the analytics indicators}.","type":"calculation_knowledge","children_knowledge":-1}
+{"id":5,"knowledge":"Liquidity Ratio","description":"Measures the ratio of available liquidity to total market volume.","definition":"Liquidity Ratio = \\frac{(\\text{quantity at best bid} + \\text{quantity at best ask}) \\times \\text{midpoint price}}{\\text{24h volume}}, \\text{where quantity at best bid and quantity at best ask are quantities at best bid and ask, midpoint price is the midpoint price, and 24h volume is the 24h volume.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":6,"knowledge":"Realized Risk Ratio (RRR)","description":"Calculates the ratio of realized PnL to position value at risk.","definition":"RRR = \\frac{\\text{realized PnL}}{\\text{Position Value at Risk}}, \\text{where realized PnL is the realized PnL and Position Value at Risk is the Position Value at Risk.}","type":"calculation_knowledge","children_knowledge":[2]} +{"id":7,"knowledge":"Margin Utilization","description":"Calculates the percentage of margin being utilized.","definition":"Margin Utilization = \\frac{\\text{initial margin required}}{\\text{margin account balance}} \\times 100, \\text{where initial margin required is the initial margin required and margin account balance is the margin account balance.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":8,"knowledge":"Order Fill Rate","description":"Calculates the percentage of an order that has been filled.","definition":"Order Fill Rate = \\frac{\\text{order quantity} - \\text{unfilled units}}{\\text{order quantity}} \\times 100, \\text{where order quantity is the order quantity and unfilled units is how many units remain unfilled.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":9,"knowledge":"Market Efficiency Ratio (MER)","description":"Measures how efficiently orders are executed compared to expected slippage.","definition":"MER = \\frac{\\text{Slippage Impact}}{\\text{average slippage measure}}, \\text{where Slippage Impact is the calculated expected slippage and average slippage measure is the average slippage measure.}","type":"calculation_knowledge","children_knowledge":[1]} +{"id":10,"knowledge":"Whale Order","description":"Identifies large orders that could significantly impact market prices.","definition":"An order where the order quantity exceeds 10% of the available liquidity (quantity at best bid or ask) at the current best bid or ask price.","type":"domain_knowledge","children_knowledge":-1} +{"id":11,"knowledge":"Liquidation Risk Level","description":"Categorizes positions based on their proximity to liquidation.","definition":"Positions are categorized as 'Safe' or 'High Risk' based on how close the current market price is to the liquidation price. 
A position is 'High Risk' when liquidation price >= market price * 0.95.","type":"domain_knowledge","children_knowledge":-1} +{"id":12,"knowledge":"Arbitrage Window","description":"Identifies time periods with significant arbitrage opportunities across markets.","definition":"A market condition where the Arbitrage-Potential Score exceeds 0.05, indicating substantial price discrepancies that can be exploited.","type":"domain_knowledge","children_knowledge":[3]} +{"id":13,"knowledge":"Over-Leveraged Position","description":"Identifies positions with excessive leverage relative to market volatility.","definition":"A position where the leverage multiplied by the volatility measure exceeds 500, indicating high risk exposure.","type":"domain_knowledge","children_knowledge":-1} +{"id":14,"knowledge":"Market Maker Activity","description":"Identifies periods of high market maker participation.","definition":"Market conditions where execution tuning parameter is predominantly 'Maker' and maker-order flow is 'High', indicating strong liquidity provision by market makers.","type":"domain_knowledge","children_knowledge":-1} +{"id":15,"knowledge":"Smart Money Flow","description":"Identifies directional bias of sophisticated traders.","definition":"Market conditions where smart-money force exceeds both retail-flow share and institutional-flow share by at least 20%, indicating strong directional bias from sophisticated traders.","type":"domain_knowledge","children_knowledge":-1} +{"id":16,"knowledge":"Liquidity Crisis","description":"Identifies periods of severely reduced market liquidity.","definition":"Market conditions where the Liquidity Ratio falls below 0.01, indicating insufficient market depth relative to typical trading volume.","type":"domain_knowledge","children_knowledge":[5]} +{"id":17,"knowledge":"Momentum Divergence","description":"Identifies when price action diverges from momentum indicators.","definition":"Market condition where price makes new highs/lows while momentum indicators (aggregate buy-side force, aggregate sell-side force) move in the opposite direction.","type":"domain_knowledge","children_knowledge":-1} +{"id":18,"knowledge":"Margin Call Risk","description":"Identifies accounts at risk of receiving a margin call.","definition":"Accounts where the Margin Utilization exceeds 80%, putting them at risk of margin calls if market prices move adversely.","type":"domain_knowledge","children_knowledge":[7]} +{"id":19,"knowledge":"Technical Breakout","description":"Identifies when price breaks significant technical levels with volume.","definition":"Market condition where price exceeds the day’s high price or falls below the day’s low price with volume (trading volume of the day) at least 50% above the 30-day average.","type":"domain_knowledge","children_knowledge":-1} +{"id":20,"knowledge":"dealedge","description":"Illustrates the meaning of different values for the dealedge enum.","definition":"An enum with values 'Buy' or 'Sell'. 'Buy' indicates the order is to purchase the base asset using the quote asset (e.g., buying BTC with USDT). 'Sell' indicates the order is to sell the base asset for the quote asset (e.g., selling BTC for USDT).","type":"value_illustration","children_knowledge":-1} +{"id":21,"knowledge":"orderflow","description":"Illustrates the meaning of different values for the orderflow enum.","definition":"An enum with values 'New', 'PartiallyFilled', 'Cancelled', or 'Filled'. 'New' indicates a newly placed order that hasn't been matched. 
'PartiallyFilled' means some portion has been executed but not all. 'Cancelled' means the order was cancelled before full execution. 'Filled' means the order has been completely executed.","type":"value_illustration","children_knowledge":-1} +{"id":22,"knowledge":"timespan","description":"Illustrates the meaning of different values for the timespan enum.","definition":"An enum with values 'IOC', 'GTC', 'GTD', or 'FOK'. 'IOC' (Immediate-or-Cancel) means execute immediately available portion or cancel. 'GTC' (Good-Till-Cancelled) means the order remains active until explicitly cancelled. 'GTD' (Good-Till-Date) means the order remains active until a specified date. 'FOK' (Fill-or-Kill) means execute completely immediately or cancel entirely.","type":"value_illustration","children_knowledge":-1} +{"id":23,"knowledge":"posedge","description":"Illustrates the meaning of different values for the posedge enum.","definition":"An enum with values 'Long' or 'Short'. 'Long' indicates a position that profits from price increases of the underlying asset. 'Short' indicates a position that profits from price decreases of the underlying asset.","type":"value_illustration","children_knowledge":-1} +{"id":24,"knowledge":"posmagn","description":"Illustrates the meaning of different values for the posmagn enum.","definition":"An enum with values '1', '2', '3', '5', '10', '20', '50', or '100', representing leverage multipliers. For example, '10' means the position uses 10x leverage, amplifying both potential profits and losses by a factor of 10 compared to an unleveraged position.","type":"value_illustration","children_knowledge":-1} +{"id":25,"knowledge":"mktfeel","description":"Illustrates the meaning of different values for the mktfeel enum.","definition":"An enum with values 'Bearish', 'Bullish', or 'Neutral'. 'Bearish' indicates negative market sentiment with expectations of price decreases. 'Bullish' indicates positive market sentiment with expectations of price increases. 'Neutral' indicates balanced market sentiment with no strong directional bias.","type":"value_illustration","children_knowledge":-1} +{"id":26,"knowledge":"techmeter","description":"Illustrates the meaning of different values for the techmeter enum.","definition":"An enum with values 'Buy', 'Sell', or 'Hold'. 'Buy' indicates technical indicators suggest purchasing the asset. 'Sell' indicates technical indicators suggest selling the asset. 'Hold' indicates technical indicators suggest maintaining current positions without new trades.","type":"value_illustration","children_knowledge":-1} +{"id":27,"knowledge":"whalemotion","description":"Illustrates the meaning of different values for the whalemotion enum.","definition":"An enum with values 'Low', 'Medium', or 'High'. 'Low' indicates minimal activity from large traders. 'Medium' indicates moderate activity from large traders. 'High' indicates significant activity from large traders, potentially signaling important market movements.","type":"value_illustration","children_knowledge":-1} +{"id":28,"knowledge":"makermotion","description":"Illustrates the meaning of different values for the makermotion enum.","definition":"An enum with values 'Low', 'Medium', or 'High'. 'Low' indicates minimal market maker activity with potentially wider spreads. 'Medium' indicates normal market maker activity. 
'High' indicates substantial market maker activity, typically resulting in tighter spreads and higher liquidity.","type":"value_illustration","children_knowledge":-1} +{"id":29,"knowledge":"exectune","description":"Illustrates the meaning of different values for the exectune enum.","definition":"An enum with values 'Maker' or 'Taker'. 'Maker' indicates the order added liquidity to the order book by not matching immediately. 'Taker' indicates the order removed liquidity from the order book by matching with existing orders immediately upon placement.","type":"value_illustration","children_knowledge":-1} +{"id":30,"knowledge":"Risk-Adjusted Return","description":"Calculates return on position adjusted for risk exposure.","definition":"Risk-Adjusted Return = \\frac{\\text{realized PnL}}{\\text{Position Value at Risk} \\times \\text{position risk ratio}}, \\text{where realized PnL is the realized PnL, Position Value at Risk is the Position Value at Risk, and position risk ratio is the position risk ratio.}","type":"calculation_knowledge","children_knowledge":[2,6]} +{"id":31,"knowledge":"True Cost of Execution","description":"Calculates the total cost of order execution including fees and slippage.","definition":"True Cost of Execution = \\text{total fee charged} + (\\text{order quantity} \\times \\text{limit or stop price} \\times \\text{Slippage Impact} \\times 0.01), \\text{where total fee charged is the total fee charged and Slippage Impact is the expected price slippage impact for the order size.}","type":"calculation_knowledge","children_knowledge":[1]} +{"id":32,"knowledge":"Order Book Imbalance Ratio","description":"Quantifies the imbalance between bid and ask sides of the order book.","definition":"Order Book Imbalance Ratio = \\frac{\\text{deeper bid liquidity} - \\text{deeper ask liquidity}}{\\text{deeper bid liquidity} + \\text{deeper ask liquidity}}, \\text{where deeper bid liquidity is the deeper bid liquidity and deeper ask liquidity is the deeper ask liquidity. 
A positive Imbalance Ratio indicates stronger buying pressure, while negative indicates stronger selling pressure.}","type":"calculation_knowledge","children_knowledge":[5]} +{"id":33,"knowledge":"Effective Leverage","description":"Calculates the actual leverage considering both explicit leverage setting and position size relative to account balance.","definition":"Effective Leverage = \\text{position leverage} \\times \\frac{\\text{notional value of position}}{\\text{total wallet balance}}, \\text{where position leverage is the position leverage, notional value of position is the notional value of position, and total wallet balance is the total wallet balance.}","type":"calculation_knowledge","children_knowledge":-1} +{"id":34,"knowledge":"Profit Factor","description":"Measures the ratio of profitable trades to losing trades adjusted for their values.","definition":"Profit Factor = \\frac{\\sum \\text{positive realized PnL}}{|\\sum \\text{negative realized PnL}|}, \\text{where realized PnL is the realized PnL, calculated separately for positive and negative values.}","type":"calculation_knowledge","children_knowledge":[6]} +{"id":35,"knowledge":"Arbitrage ROI","description":"Calculates the potential return on investment for an arbitrage opportunity.","definition":"Arbitrage ROI = \\frac{\\text{Arbitrage Opportunity Score} \\times \\text{limit or stop price}}{\\text{total fee charged} \\times 2}, \\text{where Arbitrage Opportunity Score is the Arbitrage Opportunity Score and total fee charged is multiplied by 2 to account for fees on both transactions involved in arbitrage.}","type":"calculation_knowledge","children_knowledge":[3,31]} +{"id":36,"knowledge":"Market Depth Ratio","description":"Measures the ratio of order book depth to position size to assess market liquidity for position exit.","definition":"Market Depth Ratio = \\frac{\\text{deeper bid or ask liquidity}}{\\text{order quantity}} \\times \\text{Liquidity Ratio}, \\text{where deeper bid/ask liquidity is used depending on position direction (position side), order quantity is the order quantity, and Liquidity Ratio measures available liquidity to total market volume.}","type":"calculation_knowledge","children_knowledge":[5,8]} +{"id":37,"knowledge":"Volatility-Adjusted Spread","description":"Normalizes the spread by the market volatility to determine if spread is wide relative to expected price movement.","definition":"Volatility-Adjusted Spread = \\frac{\\text{Spread Percentage}}{\\text{volatility rating} \\times 0.1}, \\text{where Spread Percentage is the spread as percentage of midpoint price and volatility rating is the volatility or fluctuation rating.}","type":"calculation_knowledge","children_knowledge":[0]} +{"id":38,"knowledge":"Risk-to-Reward Ratio","description":"Calculates the ratio of potential risk to potential reward for a position.","definition":"Risk-to-Reward Ratio = \\frac{|\\text{entry price} - (\\text{position side} == 'Long' ? \\text{stop price} : \\text{advanced trigger price})|}{|\\text{entry price} - (\\text{position side} == 'Long' ? 
\\text{advanced trigger price} : \\text{stop price})|}, \\text{where entry price is the entry price, stop price is the stop price, advanced trigger price is the advanced trigger price, and position side determines position direction (Long or Short).}","type":"calculation_knowledge","children_knowledge":[9,23]} +{"id":39,"knowledge":"Technical Signal Strength","description":"Quantifies the strength of technical signals based on multiple indicators.","definition":"Technical Signal Strength = \\frac{|\\text{RSI indicator} - 50| + |\\text{MACD line}| + (\\text{Bollinger Band width} \\times 0.01)}{3} \\times (\\text{technical meter} == 'Buy' ? 1 : \\text{technical meter} == 'Sell' ? -1 : 0), \\text{where RSI indicator is the RSI indicator, MACD line is the MACD line, Bollinger Band width is the Bollinger Band width, and technical meter determines direction (Buy, Sell, Hold).}","type":"calculation_knowledge","children_knowledge":[17,26]} +{"id":40,"knowledge":"Critically Over-Leveraged Position","description":"Identifies positions with extremely dangerous leverage levels requiring immediate risk management.","definition":"A position that qualifies as an Over-Leveraged Position where additionally the Effective Leverage exceeds 20 and the Margin Utilization exceeds 90%, creating extreme liquidation risk.","type":"domain_knowledge","children_knowledge":[7, 33]} +{"id":41,"knowledge":"High-Quality Arbitrage Opportunity","description":"Identifies particularly favorable arbitrage opportunities with minimal execution risk.","definition":"An Arbitrage Window where the Arbitrage ROI exceeds 0.5% and the Market Efficiency Ratio is less than 1.2, indicating high potential return with low execution risk.","type":"domain_knowledge","children_knowledge":[12,35,9]} +{"id":42,"knowledge":"Technical Reversal Signal","description":"Identifies strong indications of potential market direction reversal.","definition":"A market condition where Technical Signal Strength exceeds 8 in absolute value while simultaneously showing Momentum Divergence, providing reinforcing signals of a potential trend reversal.","type":"domain_knowledge","children_knowledge":[17,39]} +{"id":43,"knowledge":"Liquidity Constrained Position","description":"Identifies positions that may be difficult to exit due to insufficient market liquidity.","definition":"A position where the Market Depth Ratio is less than 2.0, indicating that the position size is large relative to available market depth, potentially leading to significant slippage upon exit.","type":"domain_knowledge","children_knowledge":[36]} +{"id":44,"knowledge":"Optimal Trading Window","description":"Identifies periods with ideal conditions for order execution.","definition":"Market conditions where the Volatility-Adjusted Spread is less than 1.0 and the Market Maker Activity indicates 'High', suggesting tight spreads relative to volatility and strong liquidity provision.","type":"domain_knowledge","children_knowledge":[14,37]} +{"id":45,"knowledge":"Risk-Efficient Position","description":"Identifies positions with favorable risk-adjusted characteristics.","definition":"A position where the Risk-Adjusted Return exceeds 1.5 and the Risk-to-Reward Ratio is less than 0.5, indicating strong returns relative to risk exposure and favorable potential profit compared to potential loss.","type":"domain_knowledge","children_knowledge":[30,38]} +{"id":46,"knowledge":"Whale-Driven Market","description":"Identifies periods where large traders significantly influence price 
direction.","definition":"Market conditions where whale-wallet activity is 'High' and there is at least one Whale Order in the same direction as the Smart Money Flow, indicating coordinated activity among large market participants.","type":"domain_knowledge","children_knowledge":[10,15,27]} +{"id":47,"knowledge":"Liquidation Cascade Risk","description":"Identifies market conditions prone to cascading liquidations.","definition":"Market conditions where more than 15% of open positions are classified as Liquidation Risk Level 'High Risk' and the Order Book Imbalance Ratio exceeds 0.3 in absolute value, indicating concentrated risk and imbalanced liquidity.","type":"domain_knowledge","children_knowledge":[11,32]} +{"id":48,"knowledge":"Perfect Technical Setup","description":"Identifies ideal conditions for technical trading strategies.","definition":"Market conditions where Technical Signal Strength exceeds 7, the technical meter direction matches market sentiment direction, and no Momentum Divergence is present, indicating strong, consistent technical signals.","type":"domain_knowledge","children_knowledge":[17,25,26,39]} +{"id":49,"knowledge":"Flash Crash Vulnerability","description":"Identifies conditions where markets are susceptible to sudden, severe price drops.","definition":"Market conditions where Liquidation Cascade Risk is present, more than 30% of positions qualify as Over-Leveraged Position, and a Liquidity Crisis is developing, creating perfect conditions for a potential flash crash.","type":"domain_knowledge","children_knowledge":[13,16,47]} +{"id":50,"knowledge":"Flow Dominance","description":"Categorizes market flow based on which group (smart money, retail, or institutional) has the highest trading volume.","definition":"Categorized as 'Smart Money Dominant' when smart-money force > retail-flow share * 1.2 AND smart-money force > institutional-flow share * 1.2; 'Retail Dominant' when retail-flow share > smart-money force * 1.2 AND retail-flow share > institutional-flow share * 1.2; 'Institutional Dominant' when institutional-flow share > smart-money force * 1.2 AND institutional-flow share > retail-flow share * 1.2; otherwise 'Mixed'.","type":"domain_knowledge","children_knowledge":-1} +{"id":51,"knowledge":"Smart Money Accuracy","description":"Measures the success rate of smart money flow in predicting the 4-hour price movement direction.","definition":"The proportion of times the Smart Money Flow direction matches the 4-hour price movement direction, calculated as: $$ \\frac{\\text{COUNT(CASE WHEN (smart money force > retail flow AND smart money force > institutional flow AND next\\_price\\_4h > mid\\_price) OR (smart money force < retail flow AND smart money force < institutional flow AND next\\_price\\_4h < mid\\_price) THEN 1 ELSE 0 END)}}{\\text{COUNT(*)}} $$","type":"calculation_knowledge","children_knowledge":[15]} +{"id":52,"knowledge":"Effective Leverage Risk Classification","description":"Categorizes positions based on their effective leverage to determine risk exposure.","definition":"A position is labeled as 'High Risk' if its Effective Leverage exceeds 20, otherwise as 'Normal'.","type":"domain_knowledge","children_knowledge":[33]} \ No newline at end of file diff --git a/crypto_exchange/crypto_exchange_schema.txt b/crypto_exchange/crypto_exchange_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..07ecbc70f4bacc7132dff4cc07c2d55357c7d316 --- /dev/null +++ b/crypto_exchange/crypto_exchange_schema.txt @@ -0,0 +1,249 @@ +"CREATE" TABLE 
"orders" ( +"ORD_STAMP" text NOT NULL, +"TimeCode" timestamp without time zone NOT NULL, +"exchSpot" text NULL, +market_note text NULL, +"UserRef" text NULL, +created_at timestamp without time zone NULL, +"UPDATED_AT" timestamp without time zone NULL, +order_attributes jsonb NULL, + "PRIMARY" KEY (ORD_STAMP), + "FOREIGN" KEY ("UserRef") REFERENCES users(USERSTAMP) +); + + + +"First" 3 rows: +ORD_STAMP TimeCode exchSpot market_note UserRef created_at UPDATED_AT order_attributes +----------- ------------------- ---------- ------------- --------- ------------------- ------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +OR6015391 2025-02-19 00:00:00 EX203 ETH-USDT U583322 2025-02-18 08:54:51 2025-02-19 08:29:00 {'side': 'Sell', 'type': 'Stop', 'source': 'API', 'status': 'New', 'quantity': 1.992955, 'limit_price': 27080.39, 'time_in_force': 'IOC', 'notional_value': None, 'client_order_id': 'CL5311016'} +OR9929123 2025-02-19 00:00:00 EX506 ADA-USDC U810391 2025-02-18 18:01:42 2025-02-19 08:29:00 {'side': 'Sell', 'type': 'Market', 'source': 'Web', 'status': 'PartiallyFilled', 'quantity': 8.040975, 'limit_price': 10075.88, 'time_in_force': None, 'notional_value': '$81,019.90 ', 'client_order_id': 'CL4886815'} +OR8906157 2025-02-19 00:00:00 EX497 BTC-USDT U485932 2025-02-18 19:34:55 2025-02-19 08:29:00 {'side': 'Sell', 'type': 'Limit', 'source': 'Mobile', 'status': 'Cancelled', 'quantity': 7.975719, 'limit_price': 10665.39, 'time_in_force': 'GTD', 'notional_value': '$85,064.15 ', 'client_order_id': 'CL8161496'} +... + + +"CREATE" TABLE "marketdata" ( +"EXCH_SPOT" text NOT NULL, +"TimeTrack" timestamp without time zone NULL, +"exchNote" text NULL, +"mktCombo" text NULL, +orderbook_metrics jsonb NULL, + "PRIMARY" KEY (EXCH_SPOT) +); + + + +"First" 3 rows: +EXCH_SPOT TimeTrack exchNote mktCombo orderbook_metrics +----------- ------------------- ---------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +EX203 2025-02-19 00:00:00 EX203 ETH-USDT {'ask_size': 87.62055, 'best_ask': 27102.24, 'best_bid': 27069.78, 'bid_size': 64.09124, 'ask_depth': 807, 'bid_depth': 202, 'mid_price': 27086.01, 'mark_price': 27088.44, 'spread_abs': 32.46, 'spread_pct': '11.99%', 'index_price': '$27,096.65 '} +EX506 2025-02-19 00:00:00 EX506 ADA-USDC {'ask_size': 35.842125, 'best_ask': 10081.74, 'best_bid': 10067.6, 'bid_size': 13.783624, 'ask_depth': 821, 'bid_depth': 30, 'mid_price': 10074.67, 'mark_price': 10072.04, 'spread_abs': 14.14, 'spread_pct': '14.03%', 'index_price': '$10,071.24 '} +EX497 2025-02-19 00:00:00 EX497 BTC-USDT {'ask_size': 88.55482, 'best_ask': 10670.09, 'best_bid': 10660.45, 'bid_size': 32.7479, 'ask_depth': 124, 'bid_depth': 370, 'mid_price': 10665.27, 'mark_price': 10659.73, 'spread_abs': 9.64, 'spread_pct': '9.04%', 'index_price': '$10,670.97 '} +... + + +"CREATE" TABLE "users" ( +"USERSTAMP" text NOT NULL, +"acctScope" text NULL, + "PRIMARY" KEY (USERSTAMP) +); + + + +"First" 3 rows: +USERSTAMP acctScope +----------- ----------- +U583322 Margin +U810391 Spot +U485932 Options +... 
+ + +"CREATE" TABLE "orderExecutions" ( +"RecordVault" text NOT NULL, +"ExecMARK" bigint NOT NULL DEFAULT nextval('"orderExecutions_ExecMARK_seq"'::regclass), +"FillAmt" real NULL, +remain_amt real NULL, +"fillQuote" real NULL, +"FILL_SUM" real NULL, +"expireSpot" timestamp without time zone NULL, +"CancelNote" text NULL, +"EXECtune" text NULL, +"Ord_Link" text NULL, + "PRIMARY" KEY (RecordVault), + "FOREIGN" KEY ("Ord_Link") REFERENCES orders(ORD_STAMP) +); + + + +"First" 3 rows: +RecordVault ExecMARK FillAmt remain_amt fillQuote FILL_SUM expireSpot CancelNote EXECtune Ord_Link +------------- ---------- --------- ------------ ----------- ---------- ------------------- ----------------- ---------- ---------- +OB333576 1 1.44793 0.545024 26244.4 38000.1 2025-02-22 08:29:00 Expired OR6015391 +OB798737 2 2.09815 5.94283 10230.1 21464.3 2025-02-26 08:29:00 OR9929123 +OB179652 3 3.58802 4.38769 10911.7 39151.5 2025-03-19 08:29:00 InsufficientFunds OR8906157 +... + + +"CREATE" TABLE "fees" ( +fee_range text NULL, +"FeeRate" real NULL, +"FEE_TOTAL" text NULL, +"FeeCoin" text NULL, +"rebRate" text NULL, +"REB_TOTAL" text NULL, +"Order_Link" text NULL, + "FOREIGN" KEY ("Order_Link") REFERENCES orders(ORD_STAMP) +); + + + +"First" 3 rows: +fee_range FeeRate FEE_TOTAL FeeCoin rebRate REB_TOTAL Order_Link +----------- --------- ----------- --------- --------- ----------- ------------ +Tier4 0.0007 $26.60 USDC 0.09% 34.2001 OR6015391 +Tier1 0.0015 $32.20 USDC 0.03% 6.4393 OR9929123 +Tier3 0.0017 $66.56 USD 0.03% 11.7454 OR8906157 +... + + +"CREATE" TABLE "marketstats" ( +"MKT_STATS_MARK" bigint NOT NULL DEFAULT nextval('"marketstats_MKT_STATS_MARK_seq"'::regclass), +"fundRate" real NULL, +"FundSpot" timestamp without time zone NULL, +"openStake" real NULL, +"VOLday" real NULL, +"TRADE_DAY" bigint NULL, +"turnoverDay" real NULL, +"priceShiftDay" text NULL, +"HIGH_SPOT_DAY" real NULL, +low_spot_day real NULL, +"VwapDay" real NULL, +"mktSIZE" real NULL, +"Circ_Total" real NULL, +"TOTAL_SUPPLY" real NULL, +"MAXsupPLY" text NULL, +"MktHold" real NULL, +"tradeRank" bigint NULL, +"LIQUIDscore" real NULL, +vol_meter text NULL, +md_link text NULL, + "PRIMARY" KEY (MKT_STATS_MARK), + "FOREIGN" KEY (md_link) REFERENCES marketdata(EXCH_SPOT) +); + + + +"First" 3 rows: + MKT_STATS_MARK fundRate FundSpot openStake VOLday TRADE_DAY turnoverDay priceShiftDay HIGH_SPOT_DAY low_spot_day VwapDay mktSIZE Circ_Total TOTAL_SUPPLY MAXsupPLY MktHold tradeRank LIQUIDscore vol_meter md_link +---------------- ---------- ------------------- ----------- ----------- ----------- ------------- --------------- --------------- -------------- --------- ----------- ------------ -------------- ------------------ --------- ----------- ------------- ----------- --------- + 1 0.0004 2025-02-19 14:29:00 808923 3.04561e+06 73628 9.40605e+06 7.96% 27823.7 25912.1 26269.5 4.38987e+08 7.9417e+07 9.62261e+07 188,391,541 tokens 0.026 52 0.903 55.65 EX203 + 2 0.001 2025-02-19 09:29:00 809954 7.63342e+06 96633 7.83748e+06 -13.74% 10741.9 9151.16 9857.69 9.33923e+08 4.95459e+07 6.13324e+07 67,917,061 tokens 0.3111 76 0.832 96.53 EX506 + 3 -0.0001 2025-02-19 12:29:00 508323 8.59084e+06 16878 3.6968e+06 11080.1 10040.8 11107.2 9.62119e+08 7.88948e+07 8.30716e+07 110,069,663 tokens 0.4038 29 0.916 36.88 EX497 +... 
+ + +"CREATE" TABLE "analyticsindicators" ( +"ANALYTICS_NODE" bigint NOT NULL DEFAULT nextval('"analyticsindicators_ANALYTICS_NODE_seq"'::regclass), +md_ref text NULL, +indicator_bundle jsonb NULL, + "PRIMARY" KEY (ANALYTICS_NODE), + "FOREIGN" KEY (md_ref) REFERENCES marketdata(EXCH_SPOT) +); + + + +"First" 3 rows: + ANALYTICS_NODE md_ref indicator_bundle +---------------- -------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 EX203 {'sentiment': {'signal': 'Buy', 'market_sentiment': 'Bearish'}, 'technical': {'rsi_14': 15.28, 'macd_hist': 4.77, 'bollinger_width_pct': 31.69}, 'order_flow': {'buy_pressure': 0.926, 'buy_wall_pct': 0.017, 'sell_pressure': 0.536, 'sell_wall_pct': 0.0891, 'large_order_ratio': 0.683, 'order_flow_imbalance': -0.599, 'trade_flow_imbalance': 0.102}, 'cross_venue': {'arbitrage_pct': 0.0088, 'basis_gap_pct': 0.0009, 'funding_gap_pct': 0.0011, 'cross_exchange_spread_pct': 0.0077}, 'participant_flow': {'retail_flow': -0.893, 'smart_money': 0.449, 'whale_activity': 'Low', 'market_maker_note': 'Medium', 'institutional_flow': -0.04}} + 2 EX506 {'sentiment': {'signal': 'Sell', 'market_sentiment': 'Bearish'}, 'technical': {'rsi_14': 85.88, 'macd_hist': 7.27, 'bollinger_width_pct': 98.83}, 'order_flow': {'buy_pressure': 0.253, 'buy_wall_pct': 0.0503, 'sell_pressure': 0.515, 'sell_wall_pct': 0.0868, 'large_order_ratio': 0.659, 'order_flow_imbalance': -0.266, 'trade_flow_imbalance': 0.011}, 'cross_venue': {'arbitrage_pct': 0.0095, 'basis_gap_pct': 0.0036, 'funding_gap_pct': 0.0015, 'cross_exchange_spread_pct': 0.0079}, 'participant_flow': {'retail_flow': -0.229, 'smart_money': -0.076, 'whale_activity': 'Medium', 'market_maker_note': 'High', 'institutional_flow': 0.382}} + 465 EX275 {'sentiment': {'signal': 'Sell', 'market_sentiment': 'Neutral'}, 'technical': {'rsi_14': 54.25, 'macd_hist': 7.19, 'bollinger_width_pct': 72.93}, 'order_flow': {'buy_pressure': 0.546, 'buy_wall_pct': 0.0153, 'sell_pressure': 0.262, 'sell_wall_pct': 0.0745, 'large_order_ratio': 0.982, 'order_flow_imbalance': -0.899, 'trade_flow_imbalance': None}, 'cross_venue': {'arbitrage_pct': 0.0029, 'basis_gap_pct': 0.0034, 'funding_gap_pct': 0.0087, 'cross_exchange_spread_pct': 0.0018}, 'participant_flow': {'retail_flow': None, 'smart_money': 0.539, 'whale_activity': None, 'market_maker_note': 'Low', 'institutional_flow': -0.212}} +... 
+ + +"CREATE" TABLE "riskandmargin" ( +"MARG_FORM" text NOT NULL, +"ordStamp" text NULL, +margin_risk_profile jsonb NULL, + "PRIMARY" KEY (MARG_FORM), + "FOREIGN" KEY ("ordStamp") REFERENCES orders(ORD_STAMP) +); + + + +"First" 3 rows: +MARG_FORM ordStamp margin_risk_profile +----------- ---------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Isolated OR6015391 {'leverage': '5', 'iceberg_qty': 0.778158, 'visible_qty': 1.214797, 'custody_risk': '0.076', 'network_risk': '0.937', 'position_side': 'Short', 'position_size': 52.02872, 'trailing_delta': 0.029, 'regulatory_risk': '0.426', 'settlement_risk': '0.387', 'stop_loss_price': 27952.84, 'bankruptcy_price': 20254.06, 'liquidation_risk': '0.124', 'margin_ratio_pct': '65.90%', 'collateral_amount': None, 'counterparty_risk': '0.191', 'liquidation_price': '$28,043.66 ', 'margin_call_price': 38557.58, 'position_notional': 1408958.1, 'position_risk_pct': 0.762, 'initial_margin_pct': 10794, 'stop_trigger_price': 29018.82, 'collateral_currency': 'USDT', 'leverage_multiplier': '3', 'collateral_ratio_pct': None, 'insurance_fund_share': '$28.51 ', 'maintenance_margin_pct': 5397} +Cross OR8906157 {'leverage': '2', 'iceberg_qty': 2.641267, 'visible_qty': 5.334452, 'custody_risk': '0.728', 'network_risk': '0.803', 'position_side': 'Short', 'position_size': 92.888084, 'trailing_delta': 0.03, 'regulatory_risk': '0.416', 'settlement_risk': '0.538', 'stop_loss_price': 10558.32, 'bankruptcy_price': 11913.55, 'liquidation_risk': '0.639', 'margin_ratio_pct': '82.60%', 'collateral_amount': None, 'counterparty_risk': '0.286', 'liquidation_price': '$11,661.77 ', 'margin_call_price': 6818.72, 'position_notional': 990687.6, 'position_risk_pct': 0.553, 'initial_margin_pct': 42532.07, 'stop_trigger_price': 11263.9, 'collateral_currency': 'USDT', 'leverage_multiplier': '2', 'collateral_ratio_pct': '43.80%', 'insurance_fund_share': '$4.53 ', 'maintenance_margin_pct': 21266.03} +... + + +"CREATE" TABLE "accountbalances" ( +"ACCTBAL_NODE" bigint NOT NULL DEFAULT nextval('"accountbalances_ACCTBAL_NODE_seq"'::regclass), +"walletSum" text NULL, +"AVAIL_SUM" real NULL, +"frozenSum" real NULL, +marg_sum real NULL, +"unrealLINE" real NULL, +"REAL_LINE" real NULL, +"userTAG" text NULL, + "PRIMARY" KEY (ACCTBAL_NODE), + "FOREIGN" KEY ("userTAG") REFERENCES users(USERSTAMP) +); + + + +"First" 3 rows: + ACCTBAL_NODE walletSum AVAIL_SUM frozenSum marg_sum unrealLINE REAL_LINE userTAG +-------------- ----------- ----------- ----------- ---------- ------------ ----------- --------- + 1 $316,482.99 250958 65525.1 901344 3545.06 -38455.1 U583322 + 2 $506,236.34 91692.6 414544 572884 52010.2 9741.09 U810391 + 3 $729,963.07 545563 184400 321804 52597.6 -81686.6 U485932 +... 
+ + +"CREATE" TABLE "systemmonitoring" ( +"SYS_MON_PIVOT" bigint NOT NULL DEFAULT nextval('"systemmonitoring_SYS_MON_PIVOT_seq"'::regclass), +"APIRQTotal" bigint NULL, +"apiErrTOTAL" bigint NULL, +"ApiLatMark" real NULL, +"WS_STATE" text NULL, +"RateRemain" bigint NULL, +"lastUpdNote" text NULL, +"SeqCode" text NULL, +"SLIP_ratio" real NULL, +"ExecTimeSpan" real NULL, +"queue_LINE" bigint NULL, +"mktEffect" real NULL, +"Price_Effect" real NULL, + "PRIMARY" KEY (SYS_MON_PIVOT) +); + + + +"First" 3 rows: + SYS_MON_PIVOT APIRQTotal apiErrTOTAL ApiLatMark WS_STATE RateRemain lastUpdNote SeqCode SLIP_ratio ExecTimeSpan queue_LINE mktEffect Price_Effect +--------------- ------------ ------------- ------------ ------------ ------------ ------------- ----------- ------------ -------------- ------------ ----------- -------------- + 1 nan 4 547 Connected 939 9340653 6.55924e+06 -0.0077 203 810 0.0014 0.0039 + 2 7728 48 199 Connected 408 1943398 5.03344e+06 0.0052 nan 985 -0.0074 0.0011 + 3 nan 41 441 Disconnected 981 5199723 8.93482e+06 0.003 431 649 -0.0046 0.0037 +... + + +"CREATE" TABLE "Exchange_OrderType_Map" ( +"exchSpot" text NOT NULL, +"ORDERtune" text NOT NULL, + "PRIMARY" KEY (exchSpot, ORDERtune) +); + + + +"First" 3 rows: +exchSpot ORDERtune +---------- ----------- +EX203 Stop +EX506 Market +EX497 Limit +... diff --git a/cybermarket_pattern/cybermarket_pattern_column_meaning_base.json b/cybermarket_pattern/cybermarket_pattern_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..ed7ede2c45731852070f14a0d227c9278641ce75 --- /dev/null +++ b/cybermarket_pattern/cybermarket_pattern_column_meaning_base.json @@ -0,0 +1,213 @@ +{ + "cybermarket_pattern|markets|PlatCode": "TEXT. Unique identifier code for the marketplace platform. PK.", + "cybermarket_pattern|markets|PlatName": "TEXT. Human-readable name of the marketplace platform. Example: Market_84.", + "cybermarket_pattern|markets|PlatformType": "TEXT. Classification of the platform (e.g., forum, service, marketplace).", + "cybermarket_pattern|markets|AgeDays": "BIGINT. Number of days since the platform’s first observed appearance. Example: 168.", + "cybermarket_pattern|markets|OperStatus": "TEXT. Current operational status of the platform (active, offline, seized, etc.). Possible values: Active, Closed, Suspended, Under Investigation.", + "cybermarket_pattern|markets|RepScore": "TEXT. Reputation score assigned to the platform. **NULL means reputation has not yet been assessed or is unavailable.**. Example: $65.20 .", + "cybermarket_pattern|markets|ConfidenceLevel": "TEXT. Analyst-assigned confidence in the accuracy of platform metrics. Possible values: High, Low, Medium, Unknown.", + "cybermarket_pattern|markets|SizeCat": "TEXT. Categorical size bucket of the platform (small, medium, large). Possible values: Large, Medium, Mega, Small.", + "cybermarket_pattern|markets|DayTxnVol": "REAL. Average daily transaction volume on the platform. Example: 5388.", + "cybermarket_pattern|markets|ActiveUsersMo": "TEXT. Estimated number of active users per month. Example: $38,933 .", + "cybermarket_pattern|markets|SellerCount": "BIGINT. Number of distinct sellers currently active on the platform. **NULL means seller count could not be determined at snapshot time.**. Example: 985.0.", + "cybermarket_pattern|markets|AcqCount": "BIGINT. Number of distinct buyer (acquirer) accounts on the platform. Example: 6009.", + "cybermarket_pattern|markets|ItemListings": "BIGINT. 
Current number of active product listings on the platform. Example: 34834.", + "cybermarket_pattern|markets|lastUpdated": "TIMESTAMP. Timestamp when this platform record was last refreshed. Example: 2025/2/16 15:29.", + "cybermarket_pattern|markets|RefreshHrs": "BIGINT. Scheduled refresh interval for this record, in hours. Possible values: 1, 4, 8, 24.", + "cybermarket_pattern|vendors|SellerKey": "TEXT. Unique identifier for a vendor (seller) account. PK.", + "cybermarket_pattern|vendors|DaysActive": "BIGINT. Total number of days the vendor has been active across all platforms. Example: 319.", + "cybermarket_pattern|vendors|PerformanceRating": "REAL. Aggregated performance rating derived from vendor transactions. Example: 4.5.", + "cybermarket_pattern|vendors|TotalTxns": "TEXT. Total number of transactions associated with the vendor. Example: $917 .", + "cybermarket_pattern|vendors|CompletedTxns": "BIGINT. Number of transactions successfully completed by the vendor. Example: 572.", + "cybermarket_pattern|vendors|DisputedEvents": "BIGINT. Count of transactions that resulted in disputes. Example: 33.", + "cybermarket_pattern|vendors|VerTier": "TEXT. Verification-tier level assigned to the vendor. **NULL means verification tier not yet assigned or unavailable.**. Possible values: Advanced, Basic, Premium.", + "cybermarket_pattern|vendors|LastActiveDt": "TIMESTAMP. Datetime when the vendor was last observed active. **NULL means no last-active information is available.**. Example: 2025/2/5.", + "cybermarket_pattern|vendors|AccessLevel": "TEXT. Access-privilege level granted to the vendor account. **NULL means access level has not been set.**. Possible values: Full, Partial.", + "cybermarket_pattern|vendors|InvestigationFlag": "TEXT. Indicator that the vendor is under investigation. **NULL means investigation status is not recorded.**. Possible values: Active, Closed, Monitoring.", + "cybermarket_pattern|vendors|LE_Interest": "TEXT. Level of law-enforcement interest in the vendor. **NULL means no law-enforcement interest has been documented.**. Possible values: High, Low, Medium.", + "cybermarket_pattern|vendors|ComplianceRisk": "TEXT. Overall compliance-risk categorisation for the vendor. Possible values: High, Low, Medium.", + "cybermarket_pattern|buyers|AcqCode": "TEXT. Unique identifier for a buyer (acquirer) account. PK.", + "cybermarket_pattern|buyers|ProfileAge": "BIGINT. Age of the buyer’s profile in days. Example: 326.", + "cybermarket_pattern|buyers|PurchaseCount": "BIGINT. Number of purchases made by the buyer. Example: 10.", + "cybermarket_pattern|buyers|AuthLevel": "TEXT. Authentication level attained by the buyer. **NULL means authentication level has not been determined.**. Possible values: Advanced, Basic.", + "cybermarket_pattern|products|ProdCat": "TEXT. High-level product category. PK (composite). Possible values: Data, Digital, Physical, Service.", + "cybermarket_pattern|products|Subcategory": "TEXT. Specific product subcategory. PK (composite). Possible values: Type_A, Type_B, Type_C, Type_D.", + "cybermarket_pattern|products|ListingAge": "BIGINT. Age of the listing in days. PK (composite). Example: 155.", + "cybermarket_pattern|products|SellerPointer": "TEXT. Identifier for the vendor offering the listing. PK (composite). FK to vendors(SellerKey).", + "cybermarket_pattern|transactions|EventCode": "TEXT. Unique identifier for the commercial event (transaction). PK.", + "cybermarket_pattern|transactions|RecordTag": "TEXT. Secondary unique tag for cross-system correlation. 
Example: DN541412.", + "cybermarket_pattern|transactions|EventTimestamp": "TIMESTAMP. Datetime when the transaction event was recorded. Example: 2024/4/2.", + "cybermarket_pattern|transactions|PlatformKey": "TEXT. Code of the platform where the transaction occurred. FK to markets(PlatCode).", + "cybermarket_pattern|transactions|VendorLink": "TEXT. Identifier of the vendor involved in the transaction. FK to vendors(SellerKey). Example: V63085.", + "cybermarket_pattern|transactions|AcqLink": "TEXT. Identifier of the buyer involved in the transaction. FK to buyers(AcqCode). Example: B41538.", + "cybermarket_pattern|transactions|OriginRegion": "TEXT. Geographical origin region of the shipment. Possible values: Region_A, Region_B, Region_C, Unknown.", + "cybermarket_pattern|transactions|DestRegion": "TEXT. Destination region of the shipment. Possible values: Region_X, Region_Y, Region_Z, Unknown.", + "cybermarket_pattern|transactions|CrossBorder": "BOOLEAN. Indicates whether the shipment crosses national borders. Possible values: No, Yes.", + "cybermarket_pattern|transactions|RouteComplex": "TEXT. Complexity classification of the shipping route. Possible values: Complex, Medium, Simple.", + "cybermarket_pattern|transaction_products|EventLink": "TEXT. Identifier of the transaction that sold the product. PK (composite). FK to transactions(EventCode).", + "cybermarket_pattern|transaction_products|ProdCat": "TEXT. Product category for the sold item. PK (composite). FK to products. Possible values: Data, Digital, Physical, Service.", + "cybermarket_pattern|transaction_products|Subcategory": "TEXT. Product subcategory for the sold item. PK (composite). FK to products. Possible values: Type_A, Type_B, Type_C, Type_D.", + "cybermarket_pattern|transaction_products|ListingAge": "BIGINT. Age of the listing at the time of sale. PK (composite). FK to products. Example: 155.", + "cybermarket_pattern|transaction_products|SellerPointer": "TEXT. Vendor identifier for the sold listing. PK (composite). FK to products.", + "cybermarket_pattern|transaction_products|PriceAmt": "REAL. Price at which the item was sold. Example: 1166.46.", + "cybermarket_pattern|transaction_products|QtySold": "BIGINT. Quantity of items sold in the transaction. Example: 92.", + "cybermarket_pattern|vendor_markets|VendorKey": "TEXT. Identifier of the vendor. PK (composite). FK to vendors(SellerKey).", + "cybermarket_pattern|vendor_markets|PlatformID": "TEXT. Identifier of the platform. PK (composite). FK to markets(PlatCode).", + "cybermarket_pattern|vendor_countries|SellerKey": "TEXT. Identifier of the vendor. PK (composite). FK to vendors(SellerKey).", + "cybermarket_pattern|vendor_countries|OpRegions": "TEXT. Operational region or country for the vendor. PK (composite). Possible values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.", + "cybermarket_pattern|vendor_payment_methods|VendorLink": "TEXT. Identifier of the vendor. PK (composite). FK to vendors(SellerKey). Example: V63085.", + "cybermarket_pattern|vendor_payment_methods|AcceptedPmtTypes": "TEXT. Payment type accepted by the vendor. PK (composite). Possible values: 1, 2, 3, 4, 5.", + "cybermarket_pattern|communications|EventLink": "TEXT. Identifier of the transaction related to this communication record. PK. FK to transactions(EventCode).", + "cybermarket_pattern|connection_security|TxnPointer": "TEXT. Identifier of the transaction under security review. PK. FK to transactions(EventCode).", + "cybermarket_pattern|connection_security|OpSecMetric": "REAL. 
Operational-security metric for the transaction session. Example: 95.7.", + "cybermarket_pattern|connection_security|ThreatIntelIndex": "REAL. Threat-intelligence index derived from external feeds. Example: 79.2.", + "cybermarket_pattern|connection_security|DetectionAvoidance": "REAL. Score indicating detection-avoidance behaviour. Example: 94.65125815301414.", + "cybermarket_pattern|connection_security|AnonLevel": "TEXT. Level of anonymity observed. Possible values: High, Low, Medium.", + "cybermarket_pattern|connection_security|TraceScore": "REAL. Score indicating how easily the actor can be traced. Example: 14.4.", + "cybermarket_pattern|connection_security|CorrelationValue": "REAL. Correlation strength between session artefacts. Example: 69.6.", + "cybermarket_pattern|connection_security|PatternMatchScore": "REAL. Pattern-match score against known malicious signatures. Example: 36.5.", + "cybermarket_pattern|connection_security|BehaviorScore": "REAL. Behavioural risk score for the session. Example: 15.8.", + "cybermarket_pattern|connection_security|ML_Confidence": "REAL. Machine-learning confidence level for the classification. Example: 97.2.", + "cybermarket_pattern|connection_security|AnomalyValue": "REAL. Aggregate anomaly score for the session. **NULL means anomaly metric has not been computed.**. Example: 38.9.", + "cybermarket_pattern|connection_security|FalsePosProb": "REAL. Probability that the detection is a false positive. Example: 10.4.", + "cybermarket_pattern|risk_analytics|TxnLink": "TEXT. Identifier of the transaction under risk analysis. PK. FK to transactions(EventCode).", + "cybermarket_pattern|risk_analytics|RiskIndicatorCount": "BIGINT. Number of risk indicators triggered for the transaction. Example: 9.", + "cybermarket_pattern|risk_analytics|FraudProb": "REAL. Model-computed probability that the transaction is fraudulent. Example: 67.3.", + "cybermarket_pattern|risk_analytics|ML_Risk": "TEXT. Machine-learning-derived risk category. Possible values: High, Low, Medium.", + "cybermarket_pattern|risk_analytics|LinkedEvents": "BIGINT. Number of linked suspicious events. Example: 29.", + "cybermarket_pattern|risk_analytics|ChainLength": "BIGINT. Length of the associated blockchain transaction chain. **NULL means chain length could not be determined.**. Possible values: 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0.", + "cybermarket_pattern|alerts|EventTag": "TEXT. Identifier of the transaction for which the alert was generated. PK. FK to transactions(EventCode). Example: TX4833222.", + "cybermarket_pattern|alerts|ReviewFreq": "TEXT. Recommended frequency for reviewing the alert or case. Possible values: Daily, Monthly, Weekly.", + "cybermarket_pattern|alerts|NextReviewDt": "DATE. Date of the next scheduled review. Example: 2025/3/17.", + "cybermarket_pattern|alerts|AnnotationCount": "BIGINT. Number of analyst annotations attached to the alert record. Example: 3.", + "cybermarket_pattern|transactions|Transaction_Velocity": "text. The rate at which transactions are completed, measured in USD/hour. Example: 45.67 USD/hour.", + "cybermarket_pattern|vendors|RegStandeff": "text. The compliance efficiency score of a vendor, calculated as the ratio of compliance score to rule violation count. Example: 14.95 Score/violation.", + "cybermarket_pattern|transactions|GeoDistScore": "text. The geographical distance involved in the transaction, measured in km. Example: 137.66 USD/border-crossing.", + "cybermarket_pattern|transactions|Border_cross_pre": "text. 
The premium associated with cross-border transactions, calculated based on payment amount and cross-border flag. Example: 78.04 USD/border-crossing.", + "cybermarket_pattern|connection_security|Data_proctecteff": "text. A measure of data protection efficiency, calculated as the ratio of vulnerability count to data protection level, measured in Vulnerabilities/GB. Example: 1.82 Vulnerabilities/GB.", + "cybermarket_pattern|connection_security|encrytion_cost": "text. The cost of encryption required for the transaction, measured in USD/encryption-bit. Example: 8.21 USD/encryption-bit.", + "cybermarket_pattern|connection_security|anonymity_cost": "text. A score reflecting the level of traceability in a transaction, associated with the anonymity level, measured in USD/anonymity-point. Example: 51.71 USD/anonymity-point.", + "cybermarket_pattern|connection_security|Bot_act_index": "text. A metric representing bot activity, calculated as actions per IP per hour, measured in Actions/IP/hour. Example: 40.87 Actions/IP/hour.", + "cybermarket_pattern|connection_security|Connection_duration": "text. The diversity of connection durations in a given session, measured in hours. Example: 8.92 hours.", + "cybermarket_pattern|connection_security|Threat_handle_rate": "text. The rate at which threats are handled in the system, measured in Threats/hour. Example: 0.12 Threats/hour.", + "cybermarket_pattern|markets|platform_compliance": { + "column_meaning": "JSONB column. Combines compliance and security-related metrics for the platform, including audit results, vulnerability instances, and security measures.", + "fields_meaning": { + "sec_audit_stat": "TEXT. Status or date of the most recent security audit. Possible values: Fail, Pass, Warning.", + "vuln_inst_count": "BIGINT. Count of discovered security vulnerabilities on the platform. Example: 20.", + "sec_event_count": "BIGINT. Total number of recorded security events/incidents. Example: 0.", + "protection_meas_count": "BIGINT. Count of active security protection or mitigation measures. Example: 17.", + "data_ret_policy_stat": "TEXT. Declared data-retention policy of the platform. Possible values: Active, Archived, Deleted.", + "geo_dist_scr": "text. A score representing the geographical distribution of market activity, calculated based on transaction volume and active users, measured in km. Example: 2140.32 km.", + "laund_ve_score": "text. The probability of fraudulent activity in a market, based on laundering velocity, measured in USD/risk/day. Example: 0.62 USD/risk/day." + } + }, + "cybermarket_pattern|vendors|vendor_compliance_ratings": { + "column_meaning": "JSONB column. Stores the vendor's compliance-related performance metrics, including verification scores, trust metrics, and conflict resolution scores.", + "fields_meaning": { + "prof_complete_pct": "REAL. Percentage completeness of the vendor’s profile information. Example: 82.4.", + "id_ver_score_val": "REAL. Score indicating strength of identity-verification evidence. Example: 50.0.", + "feedback_integrity_scr": "REAL. Metric indicating trustworthiness of feedback left for the vendor. Example: 87.8.", + "platform_engage_scr": "REAL. Measure of the vendor’s engagement across supported platforms. Example: 76.3.", + "comm_trust_scr": "REAL. Communication trust metric derived from message analysis. **NULL means this metric has not been computed.**. Example: 35.8.", + "escrow_adher_score": "REAL. Percentage of the vendor’s transactions that adhere to escrow requirements. 
Example: 12.8.", + "conflict_res_scr": "REAL. Score reflecting the vendor’s conflict-resolution performance. **NULL means no conflict-resolution score is available.**. Example: 33.6.", + "viol_count": "BIGINT. Total number of policy violations attributed to the vendor. Example: 10.", + "penalty_event_count": "BIGINT. Number of penalty or sanction events applied to the vendor. Possible values: 0, 1, 2, 3, 4, 5, 6, 7.", + "warn_count": "BIGINT. Count of formal warnings issued to the vendor. Example: 6.", + "reg_compliance_scr": "REAL. Score measuring the vendor’s adherence to regulatory standards. Example: 20.0.", + "liq_rate": "text. The liquidity rate of the vendor, calculated based on successful transactions and product prices, measured in USD/day. Example: 350.45 USD/day." + } + }, + "cybermarket_pattern|buyers|buyer_risk_profile": { + "column_meaning": "JSONB column. Contains the buyer’s risk-related metrics, including risk score, behavior consistency, and purchase pattern.", + "fields_meaning": { + "risk_metric_scr": "REAL. Risk metric assigned to the buyer based on behaviour. **NULL means risk metric has not been calculated.**. Example: 29.5.", + "behavior_consistency_scr": "REAL. Behaviour-consistency score across sessions. **NULL means behavioural-consistency metric not available.**. Example: 83.4.", + "spend_pattern": "TEXT. Categorised spending pattern for the buyer. Possible values: High, Low, Medium, Variable.", + "purchase_freq": "TEXT. Buying-frequency classification (e.g., low, medium, high). Possible values: Heavy, Occasional, One-time, Regular.", + "risk_dollar_ratio": "text. The risk indicator associated with each buyer, calculated as the ratio of buyer risk score to payment amount in USD. Example: 0.1079 RiskScore/USD." + } + }, + "cybermarket_pattern|products|product_availability": { + "column_meaning": "JSONB column. Stores product-related metrics, including availability, price, and listing age.", + "fields_meaning": { + "price_amt": "REAL. Listing price amount. Example: 1166.46.", + "qty_avail": "BIGINT. Quantity available for sale in this listing." + } + }, + "cybermarket_pattern|transactions|transaction_financials": { + "column_meaning": "JSONB column. Contains financial details related to the transaction, such as value, fees, escrow usage, and shipping costs.", + "fields_meaning": { + "pmt_method_type": "TEXT. Payment method used for the transaction. **NULL means payment method information is not available.**. Possible values: Crypto_A, Crypto_B, Crypto_C, Token.", + "value_amt_usd": "TEXT. Total monetary value of the transaction. Example: $16,635.50 .", + "fee_amt_usd": "TEXT. Platform or escrow fee charged for the transaction. Example: $637.83 .", + "escrow_used_stat": "TEXT. Indicator describing escrow usage for the transaction. **NULL means escrow usage detail is unavailable.**. Possible values: No, Yes.", + "escrow_duration_hrs": "BIGINT. Number of hours funds were held in escrow. **NULL means escrow duration could not be determined.**. Example: 72.0.", + "multisig_flag_stat": "BOOLEAN. Indicates whether multisignature payments were employed. **NULL means multisignature information is missing.**. Possible values: No, Yes.", + "completion_state_stat": "TEXT. Final completion state of the transaction (e.g., completed, disputed). Possible values: Completed, Disputed, Failed, In Progress.", + "process_time_hrs": "BIGINT. Processing time from order to completion, in hours. **NULL means processing time has not been calculated.**. 
Example: 114.9.", + "shipping_method_type": "TEXT. Shipping method chosen for physical goods. Possible values: Custom, Digital, Express, Standard.", + "shipping_cost_density": "text. The density of shipping cost per unit of geographical distance, measured in USD/km. Example: 1.95 USD/km." + } + }, + "cybermarket_pattern|communications|communication_details": { + "column_meaning": "JSONB column. Stores metadata about communications related to transactions, including sentiment and message frequency.", + "fields_meaning": { + "encryption_type": "TEXT. Type of encryption used in the communications. Possible values: Custom, Enhanced, Standard.", + "comm_channel_type": "TEXT. Communication channel (e.g., forum PM, email). Possible values: External, Internal, Mixed.", + "msg_count_total": "BIGINT. Number of messages exchanged in the communication thread. Example: 36.", + "comm_freq_type": "TEXT. Frequency classification of communications. Possible values: High, Low, Medium.", + "lang_pattern_type": "TEXT. Language pattern or grammar classification inferred from messages. Possible values: Consistent, Suspicious, Variable.", + "sentiment_val": "REAL. Sentiment-analysis score of the message corpus. Example: -0.69.", + "keyword_match_count": "BIGINT. Count of significant keyword matches detected. Example: 20.", + "suspic_index_scr": "REAL. Suspicion score derived from content analysis. Example: 14.7." + } + }, + "cybermarket_pattern|connection_security|connection_security_metrics": { + "column_meaning": "JSONB column. Captures the security metrics for the connection during the transaction, such as IP count, VPN usage, and encryption strength.", + "fields_meaning": { + "ip_count_total": "BIGINT. Number of unique IP addresses observed for the transaction. **NULL means IP address data was not collected.**. Possible values: 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0.", + "tor_node_count": "BIGINT. Count of Tor nodes traversed during the connection. **NULL means Tor-usage information is unavailable.**. Example: 4.0.", + "vpn_detect_status": "TEXT. Indicator whether a VPN was detected in the connection. Possible values: No, Suspected, Yes.", + "browser_fp_unique_scr": "REAL. Uniqueness score of the browser fingerprint. Example: 6.4.", + "device_fp_risk_score": "REAL. Risk score of the device fingerprint. **NULL means device-fingerprint data is unavailable.**. Example: 63.7.", + "conn_pattern_metric_scr": "REAL. Metric describing anomalies in connection patterns. Example: 48.1.", + "encryption_strength_scr": "TEXT. Strength classification of the TLS/SSL encryption. Possible values: Military-grade, Standard, Strong.", + "auth_protocol_type": "TEXT. Authentication protocol used during the session. Possible values: 2FA, Basic, Multi-factor.", + "session_sec_rating_scr": "REAL. Overall security rating of the session. **NULL means session-security rating not assessed.**. Example: 45.3.", + "data_protection_class": "TEXT. Data-protection classification assigned to the session. Possible values: Basic, Enhanced, Maximum.", + "privacy_score_val": "REAL. Score measuring privacy preservation during the session. Example: 62.2." + } + }, + "cybermarket_pattern|risk_analytics|wallet_risk_assessment": { + "column_meaning": "JSONB column. Contains risk assessment metrics related to cryptocurrency wallets, such as wallet score, transaction chain length, and turnover rate.", + "fields_meaning": { + "wallet_score_val": "REAL. Risk score of the cryptocurrency wallet involved. 
Example: 34.4.", + "wallet_age_days": "BIGINT. Age of the cryptocurrency wallet in days. Example: 722.", + "wallet_value_usd": "REAL. Current value held in the wallet. Example: 98937.33.", + "turnover_rate_val": "REAL. Financial turnover rate of the wallet. **NULL means turnover rate has not been calculated.**. Example: 4.29.", + "pattern_classification": "TEXT. Behaviour pattern classification for the transaction group. Possible values: High-risk, Normal, Suspicious.", + "cluster_coeff_val": "REAL. Network cluster coefficient of the wallet node. Example: 0.695.", + "network_centrality_val": "REAL. Network-centrality measure within the blockchain graph. Example: 40.9.", + "conn_diversity_score": "REAL. Diversity of counterparties connected to the wallet. **NULL means diversity metric has not been calculated.**. Example: 63.6.", + "temporal_pattern_score": "REAL. Temporal-pattern score for transaction activity. Example: 5.8.", + "geo_dist_score_val": "REAL. Geographical distribution score of connected entities. Example: 27.0." + } + }, + "cybermarket_pattern|alerts|alert_case_management": { + "column_meaning": "JSONB column. Stores details about the alert case, including severity level, response time, and actions taken.", + "fields_meaning": { + "severity_level_stat": "TEXT. Severity level assigned to the alert. Possible values: Critical, High, Low, Medium.", + "alert_category": "TEXT. Categorical class of the alert. **NULL means alert has not yet been classified.**. Possible values: Behavior, Pattern, Security, Transaction.", + "confidence_metric_val": "REAL. Confidence score associated with the alert classification. Example: 59.6.", + "invest_priority_stat": "TEXT. Investigation-priority level for the alert. Possible values: High, Low, Medium.", + "resp_time_min": "BIGINT. Time taken to respond to the alert, in minutes. Example: 1411.", + "escalation_tier_stat": "TEXT. Escalation tier applied to the alert. **NULL means escalation tier has not been assigned.**. Possible values: Level1, Level2, Level3.", + "case_state_stat": "TEXT. Current state of the incident case. Possible values: Ongoing, Open, Resolved.", + "resolve_hours": "BIGINT. Hours taken to resolve the alert. Example: 122.", + "action_taken_stat": "TEXT. Action undertaken in response to the alert. **NULL means no action has yet been recorded.**. Possible values: Restriction, Termination, Warning.", + "needs_followup_stat": "BOOLEAN. Indicates whether further follow-up is required. Possible values: No, Yes." 
+ } + } +} \ No newline at end of file diff --git a/cybermarket_pattern/cybermarket_pattern_kb.jsonl b/cybermarket_pattern/cybermarket_pattern_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..0b6bdc0e6251945cd651c82a2c1d20474dbfd7a5 --- /dev/null +++ b/cybermarket_pattern/cybermarket_pattern_kb.jsonl @@ -0,0 +1,30 @@ +{"id": 0, "knowledge": "Marketplace Risk Score (MRS)", "description": "Evaluates the overall risk of a platform.", "definition": "MRS = \\frac{0.4 \\times \\text{vuln\\_count} + 0.3 \\times \\text{event\\_count} + 0.3 \\times \\text{reputation\\_score}}{100}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Transaction Velocity Rate", "description": "Measures transaction completion speed.", "definition": "TVR = \\frac{\\text{total\\_transactions}}{\\text{active\\_time\\_in\\_hours}}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 2, "knowledge": "Compliance Efficiency Index (CEI)", "description": "Represents vendor's compliance performance.", "definition": "CEI = \\frac{\\text{compliance\\_score}}{\\text{violation\\_count}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Data Protection Efficiency", "description": "Efficiency of data protection per vulnerability.", "definition": "DPE = \\frac{\\text{protection\\_measures}}{\\text{vulnerability\\_instances}}", "type": "calculation_knowledge", "children_knowledge": [1, 0]} +{"id": 4, "knowledge": "Anonymity Cost Index", "description": "Traceability score per anonymity level.", "definition": "ACI = \\frac{\\text{traceability\\_cost}}{\\text{anonymity\\_score}}", "type": "calculation_knowledge", "children_knowledge": [3, 2]} +{"id": 5, "knowledge": "Buyer Risk Dollar Ratio", "description": "Risk per dollar spent by a buyer.", "definition": "BRDR = \\frac{\\text{risk\\_score}}{\\text{total\\_spending\\_usd}}", "type": "calculation_knowledge", "children_knowledge": [1, 4]} +{"id": 6, "knowledge": "Platform Liquidity Rate", "description": "Liquidity as a function of transaction flow and price.", "definition": "PLR = \\frac{\\text{successful\\_txns} \\times \\text{average\\_price}}{\\text{days\\_active}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Threat Handling Rate", "description": "How efficiently threats are managed over time.", "definition": "THR = \\frac{\\text{threats\\_handled}}{\\text{total\\_hours}}", "type": "calculation_knowledge", "children_knowledge": [6, 3]} +{"id": 8, "knowledge": "Suspicion Signal Density", "description": "Keyword hits per message volume in communication.", "definition": "SSD = \\frac{\\text{keyword\\_matches}}{\\text{total\\_messages}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Wallet Turnover Rate", "description": "Turnover of funds in a crypto wallet.", "definition": "WTR = \\frac{\\text{value\\_moved}}{\\text{wallet\\_age\\_in\\_days}}", "type": "calculation_knowledge", "children_knowledge": [1, 2]} +{"id": 10, "knowledge": "High Risk Vendor", "description": "A vendor flagged with active investigation or high law-enforcement interest.", "definition": "A vendor is deemed High Risk if there is an active regulatory or law-enforcement investigation in progress, or formal records indicate high interest from authorities.", "type": "domain_knowledge", "children_knowledge": [3, 4]} +{"id": 11, "knowledge": "Cross-Border Transaction", "description": "Indicates the shipment moves across national 
boundaries.", "definition": "Any transaction where the shipment passes from one country to another, triggering customs or import controls.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Advanced Verification Tier", "description": "Vendors with strong identity validation.", "definition": "Vendors who have completed multi factor or document backed identity validation are classified as Advanced tier.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Escrow Compliance", "description": "Measures vendor behavior towards using escrow.", "definition": "A vendor is Escrow Compliant when they consistently route payments through an escrow mechanism until delivery is confirmed.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Premium Authentication", "description": "Sessions with multi-factor authentication or 2FA.", "definition": "Sessions secured by two factor or multi factor authentication are considered to have Premium authentication.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Traceable Communication", "description": "Communications with high traceability signals.", "definition": "Message threads exhibiting a high volume of flagged keywords and linguistic anomalies are marked as Traceable for further review.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Suspicious Buyer", "description": "Buyers with low behavior consistency or high risk ratio.", "definition": "A buyer is labeled Suspicious when their behavioural consistency score is low and their risk per dollar metric is notably high.", "type": "domain_knowledge", "children_knowledge": [7, 1]} +{"id": 17, "knowledge": "Secure Platform", "description": "Platforms with low vulnerabilities and high protection count.", "definition": "A platform with fewer than ten unresolved security vulnerabilities and more than fifteen active protection measures qualifies as Secure.", "type": "domain_knowledge", "children_knowledge": [2, 14]} +{"id": 18, "knowledge": "Fraud-Flagged Transaction", "description": "Transaction flagged by fraud model probability.", "definition": "Transactions with a machine learning fraud probability exceeding 70 percent are designated as Fraud Flagged.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Tier-3 Escalation Case", "description": "Alert cases escalated to the highest priority tier.", "definition": "Alert cases escalated to Tier-3 represent the highest priority and warrant immediate incident response action.", "type": "domain_knowledge", "children_knowledge": [7, 12]} +{"id": 20, "knowledge": "Platform Operational Status", "description": "Indicates current state of platform functionality.", "definition": "Values: Active, Closed, Suspended, Under Investigation. Label descriptions: - Active: The marketplace is online, accepting new listings and processing transactions normally.- Closed: Operators have permanently shut down the marketplace; no further log ins or transactions are possible. - Suspended: The marketplace is temporarily offline, often due to policy violations or maintenance, and may return to service. - Under Investigation: Law enforcement or compliance review is in progress; user activity is restricted or frozen.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "Vendor Access Levels", "description": "Privileges granted to vendor accounts.", "definition": "Values: Full, Partial. 
Label descriptions: - Full: Vendor can create listings, modify inventory, withdraw funds, and communicate without restriction. - Partial: Vendor functionality is limited, typically view only or blocked from withdrawals until verification is complete.", "type": "value_illustration", "children_knowledge": [5, 9]} +{"id": 22, "knowledge": "Buyer Authentication Levels", "description": "Levels of login verification for buyers.", "definition": "Values: Advanced, Basic. Label descriptions: - Advanced: Account protected by multi factor authentication or hardware-backed credentials, offering high assurance. - Basic: Account relies on single factor authentication such as a password, with no secondary verification.", "type": "value_illustration", "children_knowledge": [4, 18]} +{"id": 23, "knowledge": "Product Categories", "description": "The general type of products listed.", "definition": "Values: Data, Digital, Physical, Service. Label descriptions: - Data: Digital information assets such as credential dumps, personal records, or proprietary databases. - Digital: Intangible goods like software licenses, media subscriptions, or downloadable files. - Physical: Tangible merchandise shipped to the buyer, including hardware devices or printed documents. - Service: Intangible labor or expertise, e.g., penetration testing, content creation, or laundering assistance.", "type": "value_illustration", "children_knowledge": [16, 22]} +{"id": 24, "knowledge": "Shipping Route Complexity", "description": "Describes the complexity of delivery routes.", "definition": "Values: Simple, Medium, Complex.", "type": "value_illustration", "children_knowledge": [16, 3]} +{"id": 25, "knowledge": "Session Anonymity Levels", "description": "How anonymous a session appears.", "definition": "Values: High, Medium, Low.", "type": "value_illustration", "children_knowledge": [5, 16]} +{"id": 26, "knowledge": "Alert Severity Levels", "description": "Importance level of alerts.", "definition": "Values: Critical, High, Medium, Low.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Escrow Usage States", "description": "Whether escrow was used in a transaction.", "definition": "Values: Yes, No.\nLabel descriptions:\n- Yes: Funds were held in escrow pending delivery confirmation or dispute resolution, reducing counterparty risk.\n- No: Payment was released directly without escrow protection, increasing the chance of fraud.", "type": "value_illustration", "children_knowledge": [6, 2]} +{"id": 28, "knowledge": "Language Patterns", "description": "Communication grammar detected in messages.", "definition": "Values: Consistent, Suspicious, Variable.\nLabel descriptions:\n- Consistent: Uniform grammar, vocabulary, and tone across messages, suggesting a single genuine author.\n- Suspicious: Irregular phrasing, sudden language switches, or machine translated text indicating possible deception.\n- Variable: Mixed linguistic styles from multiple senders or intentionally altered patterns to hinder profiling.", "type": "value_illustration", "children_knowledge": [26, 24]} +{"id": 29, "knowledge": "Spend Pattern Categories", "description": "Buyer spending trends.", "definition": "Values: High, Low, Medium, Variable.\nLabel descriptions:\n- High: Frequent, high value purchases indicative of heavy marketplace engagement.\n- Low: Infrequent, low value purchases typical of casual or opportunistic buyers.\n- Medium: Steady purchasing cadence with moderate transaction amounts.\n- Variable: Irregular bursts of spending 
with no predictable pattern.", "type": "value_illustration", "children_knowledge": [13, 4]} diff --git a/cybermarket_pattern/cybermarket_pattern_schema.txt b/cybermarket_pattern/cybermarket_pattern_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..48d78915b2648ba29ddb75d96a1b43f41ba13d8a --- /dev/null +++ b/cybermarket_pattern/cybermarket_pattern_schema.txt @@ -0,0 +1,322 @@ +"CREATE" TABLE "markets" ( +"PlatCode" text NOT NULL, +"PlatName" text NULL, +"PlatformType" text NULL, +"AgeDays" bigint NULL, +"OperStatus" text NULL, +"RepScore" text NULL, +"ConfidenceLevel" text NULL, +"SizeCat" text NULL, +"DayTxnVol" real NULL, +"ActiveUsersMo" text NULL, +"SellerCount" bigint NULL, +"AcqCount" bigint NULL, +"ItemListings" bigint NULL, +"lastUpdated" timestamp without time zone NULL, +"RefreshHrs" bigint NULL, +platform_compliance jsonb NULL, + "PRIMARY" KEY (PlatCode) +); + + + +"First" 3 rows: +PlatCode PlatName PlatformType AgeDays OperStatus RepScore ConfidenceLevel SizeCat DayTxnVol ActiveUsersMo SellerCount AcqCount ItemListings lastUpdated RefreshHrs platform_compliance +---------- ---------- -------------- --------- ------------------- ---------- ----------------- --------- ----------- --------------- ------------- ---------- -------------- ------------------- ------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +MK7747 Market_84 Forum 59 Under Investigation Medium Mega 990 4178 985 6009 34834 2025-02-16 15:29:00 8 {'geo_dist_scr': '9.8 Transactions/user/day', 'laund_ve_score': '0.16 USD/risk/day', 'sec_audit_stat': 'Warning', 'sec_event_count': 0, 'vuln_inst_count': 28, 'data_ret_policy_stat': 'Deleted', 'protection_meas_count': 17} +MK9078 Market_35 Service 98 Active High Medium 615 4761 249 2226 34208 2025-02-17 08:29:00 4 {'geo_dist_scr': '2.47 Transactions/user/day', 'laund_ve_score': '0.49 USD/risk/day', 'sec_audit_stat': 'Pass', 'sec_event_count': 1, 'vuln_inst_count': 1, 'data_ret_policy_stat': 'Deleted', 'protection_meas_count': 2} +MK5795 Market_91 Forum 536 Suspended Low Medium 670 1467 446 2678 38741 2025-02-14 22:29:00 4 {'geo_dist_scr': '9.48 Transactions/user/day', 'laund_ve_score': '0.48 USD/risk/day', 'sec_audit_stat': 'Warning', 'sec_event_count': 5, 'vuln_inst_count': 21, 'data_ret_policy_stat': 'Active', 'protection_meas_count': 12} +... 
+ + +"CREATE" TABLE "vendors" ( +"SellerKey" text NOT NULL, +"DaysActive" bigint NULL, +"PerformanceRating" real NULL, +"TotalTxns" text NULL, +"CompletedTxns" bigint NULL, +"DisputedEvents" bigint NULL, +"VerTier" text NULL, +"LastActiveDt" timestamp without time zone NULL, +"AccessLevel" text NULL, +"InvestigationFlag" text NULL, +"LE_Interest" text NULL, +"ComplianceRisk" text NULL, +"RegStandeff" text NULL, +vendor_compliance_ratings jsonb NULL, + "PRIMARY" KEY (SellerKey) +); + + + +"First" 3 rows: +SellerKey DaysActive PerformanceRating TotalTxns CompletedTxns DisputedEvents VerTier LastActiveDt AccessLevel InvestigationFlag LE_Interest ComplianceRisk RegStandeff vendor_compliance_ratings +----------- ------------ ------------------- ----------- --------------- ---------------- --------- ------------------- ------------- ------------------- ------------- ---------------- --------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +V63085 319 4.5 $917 28 33 NaT Low 11.61 Score/violation {'liq_rate': '783.97 USD/day', 'viol_count': 4, 'warn_count': 6, 'comm_trust_scr': 35.8, 'conflict_res_scr': None, 'id_ver_score_val': 50, 'prof_complete_pct': 82.4, 'escrow_adher_score': 12.8, 'reg_compliance_scr': 59.7873, 'penalty_event_count': 3, 'platform_engage_scr': 76.3, 'feedback_integrity_scr': 87.8} +V99720 426 3.4 $82 38 81 NaT Active High 63.88 Score/violation {'liq_rate': '218.16 USD/day', 'viol_count': 3, 'warn_count': 5, 'comm_trust_scr': 95.6, 'conflict_res_scr': None, 'id_ver_score_val': 50, 'prof_complete_pct': 35.9, 'escrow_adher_score': 55.5, 'reg_compliance_scr': 76.36598, 'penalty_event_count': 5, 'platform_engage_scr': 43, 'feedback_integrity_scr': 46.2} +V25559 26 4.1 $165 7 34 2025-02-05 00:00:00 Monitoring High 14.7 Score/violation {'liq_rate': '289.98 USD/day', 'viol_count': 5, 'warn_count': 4, 'comm_trust_scr': 64, 'conflict_res_scr': None, 'id_ver_score_val': 50, 'prof_complete_pct': 67, 'escrow_adher_score': 97.6, 'reg_compliance_scr': 91.33712, 'penalty_event_count': 0, 'platform_engage_scr': 27.3, 'feedback_integrity_scr': 91.9} +... + + +"CREATE" TABLE "buyers" ( +"AcqCode" text NOT NULL, +"ProfileAge" bigint NULL, +"PurchaseCount" bigint NULL, +"AuthLevel" text NULL, +buyer_risk_profile jsonb NULL, + "PRIMARY" KEY (AcqCode) +); + + + +"First" 3 rows: +AcqCode ProfileAge PurchaseCount AuthLevel buyer_risk_profile +--------- ------------ --------------- ----------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------- +B41538 326 10 Advanced {'purchase_freq': 'Heavy', 'spend_pattern': 'Variable', 'risk_metric_scr': 8.421068, 'risk_dollar_ratio': '0.0644 RiskScore/USD', 'behavior_consistency_scr': 83.4} +B57052 166 40 Basic {'purchase_freq': 'Regular', 'spend_pattern': 'High', 'risk_metric_scr': 6.1413317, 'risk_dollar_ratio': '0.1648 RiskScore/USD', 'behavior_consistency_scr': 28.8} +B79369 81 96 Basic {'purchase_freq': 'One-time', 'spend_pattern': 'Variable', 'risk_metric_scr': 3.258246, 'risk_dollar_ratio': '0.184 RiskScore/USD', 'behavior_consistency_scr': 53.8} +... 
+ + +"CREATE" TABLE "products" ( +"ProdCat" text NOT NULL, +"Subcategory" text NOT NULL, +"ListingAge" bigint NOT NULL, +"SellerPointer" text NOT NULL, +product_availability jsonb NULL, + "PRIMARY" KEY (ProdCat, Subcategory, ListingAge, SellerPointer), + "FOREIGN" KEY ("SellerPointer") REFERENCES vendors(SellerKey) +); + + + +"First" 3 rows: +ProdCat Subcategory ListingAge SellerPointer product_availability +--------- ------------- ------------ --------------- ----------------------------------------- +Digital Type_B 155 V63085 {'price_amt': 232.59462, 'qty_avail': 92} +Data Type_C 105 V25559 {'price_amt': 130.78629, 'qty_avail': 48} +Digital Type_B 116 V61030 {'price_amt': 248.43217, 'qty_avail': 59} +... + + +"CREATE" TABLE "transactions" ( +"EventCode" text NOT NULL, +"RecordTag" text NULL, +"EventTimestamp" timestamp without time zone NULL, +"PlatformKey" text NULL, +"VendorLink" text NULL, +"AcqLink" text NULL, +"OriginRegion" text NULL, +"DestRegion" text NULL, +"CrossBorder" bigint NULL, +"RouteComplex" text NULL, +"Transaction_Velocity" text NULL, +"Border_cross_pre" text NULL, +"GeoDistScore" text NULL, +transaction_financials jsonb NULL, + "PRIMARY" KEY (EventCode), + "FOREIGN" KEY ("PlatformKey") REFERENCES markets(PlatCode), + "FOREIGN" KEY ("VendorLink") REFERENCES vendors(SellerKey), + "FOREIGN" KEY ("AcqLink") REFERENCES buyers(AcqCode) +); + + + +"First" 3 rows: +EventCode RecordTag EventTimestamp PlatformKey VendorLink AcqLink OriginRegion DestRegion CrossBorder RouteComplex Transaction_Velocity Border_cross_pre GeoDistScore transaction_financials +----------- ----------- ------------------- ------------- ------------ --------- -------------- ------------ ------------- -------------- ---------------------- -------------------------- -------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +TX4833222 DN541412 2024-04-02 00:00:00 MK7747 V63085 B41538 Region_B Region_Y 1 Complex 30.8 USD/hour 137.66 USD/border-crossing 2140.32 km {'fee_amt_usd': '$637.83 ', 'value_amt_usd': '78.03532317523569', 'pmt_method_type': 'Crypto_A', 'escrow_used_stat': 'No', 'process_time_hrs': 4, 'multisig_flag_stat': True, 'escrow_duration_hrs': None, 'shipping_method_type': 'Express', 'completion_state_stat': 'In Progress', 'shipping_cost_density': '1.95 USD/km'} +TX7875482 DN772007 2024-05-16 00:00:00 MK9078 V25559 B57052 Region_A Region_Y 1 Medium 68.45 USD/hour 194.23 USD/border-crossing 4340.25 km {'fee_amt_usd': '$318.27 ', 'value_amt_usd': '190.38928974993365', 'pmt_method_type': None, 'escrow_used_stat': 'No', 'process_time_hrs': 11, 'multisig_flag_stat': False, 'escrow_duration_hrs': None, 'shipping_method_type': 'Standard', 'completion_state_stat': 'In Progress', 'shipping_cost_density': '1.51 USD/km'} +TX9295302 DN873987 2024-08-23 00:00:00 MK5795 V61030 B79369 Region_A Unknown 0 Medium 31.85 USD/hour 34.12 USD/border-crossing 3264.72 km {'fee_amt_usd': '$193.46 ', 'value_amt_usd': '147.738818653224', 'pmt_method_type': None, 'escrow_used_stat': None, 'process_time_hrs': 18, 'multisig_flag_stat': None, 'escrow_duration_hrs': None, 'shipping_method_type': 'Express', 'completion_state_stat': 'Failed', 'shipping_cost_density': '0.2 USD/km'} +... 
+ + +"CREATE" TABLE "transaction_products" ( +"EventLink" text NOT NULL, +"ProdCat" text NOT NULL, +"Subcategory" text NOT NULL, +"ListingAge" bigint NOT NULL, +"SellerPointer" text NOT NULL, +"PriceAmt" real NULL, +"QtySold" bigint NULL, + "PRIMARY" KEY (EventLink, ProdCat, Subcategory, ListingAge, SellerPointer), + "FOREIGN" KEY ("EventLink") REFERENCES transactions(EventCode), + "FOREIGN" KEY ("ProdCat") REFERENCES products("ProdCat"), + "FOREIGN" KEY ("ProdCat") REFERENCES products(Subcategory), + "FOREIGN" KEY ("ProdCat") REFERENCES products(ListingAge), + "FOREIGN" KEY ("ProdCat") REFERENCES products(SellerPointer), + "FOREIGN" KEY ("Subcategory") REFERENCES products(ProdCat), + "FOREIGN" KEY ("Subcategory") REFERENCES products("Subcategory"), + "FOREIGN" KEY ("Subcategory") REFERENCES products(ListingAge), + "FOREIGN" KEY ("Subcategory") REFERENCES products(SellerPointer), + "FOREIGN" KEY ("ListingAge") REFERENCES products(ProdCat), + "FOREIGN" KEY ("ListingAge") REFERENCES products(Subcategory), + "FOREIGN" KEY ("ListingAge") REFERENCES products("ListingAge"), + "FOREIGN" KEY ("ListingAge") REFERENCES products(SellerPointer), + "FOREIGN" KEY ("SellerPointer") REFERENCES products(ProdCat), + "FOREIGN" KEY ("SellerPointer") REFERENCES products(Subcategory), + "FOREIGN" KEY ("SellerPointer") REFERENCES products(ListingAge), + "FOREIGN" KEY ("SellerPointer") REFERENCES products("SellerPointer") +); + + + +"First" 3 rows: +EventLink ProdCat Subcategory ListingAge SellerPointer PriceAmt QtySold +----------- --------- ------------- ------------ --------------- ---------- --------- +TX4833222 Digital Type_B 155 V63085 232.595 92 +TX7875482 Data Type_C 105 V25559 130.786 48 +TX9295302 Digital Type_B 116 V61030 248.432 59 +... + + +"CREATE" TABLE "vendor_markets" ( +"VendorKey" text NOT NULL, +"PlatformID" text NOT NULL, + "PRIMARY" KEY (VendorKey, PlatformID), + "FOREIGN" KEY ("VendorKey") REFERENCES vendors(SellerKey), + "FOREIGN" KEY ("PlatformID") REFERENCES markets(PlatCode) +); + + + +"First" 3 rows: +VendorKey PlatformID +----------- ------------ +V63085 MK7747 +V25559 MK9078 +V61030 MK5795 +... + + +"CREATE" TABLE "vendor_countries" ( +"SellerKey" text NOT NULL, +"OpRegions" text NOT NULL, + "PRIMARY" KEY (SellerKey, OpRegions), + "FOREIGN" KEY ("SellerKey") REFERENCES vendors("SellerKey") +); + + + +"First" 3 rows: +SellerKey OpRegions +----------- ----------- +V63085 5 +V25559 5 +V61030 10 +... + + +"CREATE" TABLE "vendor_payment_methods" ( +"VendorLink" text NOT NULL, +"AcceptedPmtTypes" text NOT NULL, + "PRIMARY" KEY (VendorLink, AcceptedPmtTypes), + "FOREIGN" KEY ("VendorLink") REFERENCES vendors(SellerKey) +); + + + +"First" 3 rows: +VendorLink AcceptedPmtTypes +------------ ------------------ +V63085 4 +V25559 5 +V61030 5 +... 
+ + +"CREATE" TABLE "risk_analytics" ( +"TxnLink" text NOT NULL, +"RiskIndicatorCount" bigint NULL, +"FraudProb" real NULL, +"ML_Risk" text NULL, +"LinkedEvents" bigint NULL, +"ChainLength" bigint NULL, +wallet_risk_assessment jsonb NULL, + "PRIMARY" KEY (TxnLink), + "FOREIGN" KEY ("TxnLink") REFERENCES transactions(EventCode) +); + + + +"First" 3 rows: +TxnLink RiskIndicatorCount FraudProb ML_Risk LinkedEvents ChainLength wallet_risk_assessment +--------- -------------------- ----------- --------- -------------- ------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +TX4833222 9 67.3 2.10192 29 4 {'wallet_age_days': 722, 'wallet_score_val': 34.4, 'wallet_value_usd': 98937.33, 'cluster_coeff_val': 0.695, 'turnover_rate_val': None, 'geo_dist_score_val': 27, 'conn_diversity_score': 63.6, 'network_centrality_val': 40.9, 'pattern_classification': 'High-risk', 'temporal_pattern_score': 5.8} +TX7875482 3 78.8 3.23443 8 9 {'wallet_age_days': 307, 'wallet_score_val': 29.8, 'wallet_value_usd': 16240.76, 'cluster_coeff_val': 0, 'turnover_rate_val': 4.29, 'geo_dist_score_val': 87, 'conn_diversity_score': 85.4, 'network_centrality_val': 3.2, 'pattern_classification': 'High-risk', 'temporal_pattern_score': 73.9} +TX9295302 10 76.1 8.19234 47 nan {'wallet_age_days': 879, 'wallet_score_val': 47.4, 'wallet_value_usd': 26348.08, 'cluster_coeff_val': 0.364, 'turnover_rate_val': None, 'geo_dist_score_val': 97.6, 'conn_diversity_score': None, 'network_centrality_val': 58.5, 'pattern_classification': 'High-risk', 'temporal_pattern_score': 19.8} +... + + +"CREATE" TABLE "communications" ( +"EventLink" text NOT NULL, +communication_details jsonb NULL, + "PRIMARY" KEY (EventLink), + "FOREIGN" KEY ("EventLink") REFERENCES transactions(EventCode) +); + + + +"First" 3 rows: +EventLink communication_details +----------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +TX4833222 {'sentiment_val': -0.69, 'comm_freq_type': 'Medium', 'encryption_type': 'Custom', 'msg_count_total': 59, 'suspic_index_scr': 14.7, 'comm_channel_type': 'Mixed', 'lang_pattern_type': 'Variable', 'keyword_match_count': 20} +TX7875482 {'sentiment_val': 0.27, 'comm_freq_type': 'High', 'encryption_type': 'Standard', 'msg_count_total': 271, 'suspic_index_scr': 46, 'comm_channel_type': 'External', 'lang_pattern_type': 'Suspicious', 'keyword_match_count': 15} +TX9295302 {'sentiment_val': 0.3, 'comm_freq_type': 'Low', 'encryption_type': 'Enhanced', 'msg_count_total': 170, 'suspic_index_scr': 81.5, 'comm_channel_type': 'External', 'lang_pattern_type': 'Consistent', 'keyword_match_count': 16} +... 
+ + +"CREATE" TABLE "connection_security" ( +"TxnPointer" text NOT NULL, +"OpSecMetric" real NULL, +"ThreatIntelIndex" real NULL, +"DetectionAvoidance" real NULL, +"AnonLevel" text NULL, +"TraceScore" real NULL, +"CorrelationValue" real NULL, +"PatternMatchScore" real NULL, +"BehaviorScore" real NULL, +"ML_Confidence" real NULL, +"AnomalyValue" real NULL, +"FalsePosProb" real NULL, +"Threat_handle_rate" text NULL, +"Data_proctecteff" text NULL, +encrytion_cost text NULL, +anonymity_cost text NULL, +"Bot_act_index" text NULL, +"Connection_duration" text NULL, +connection_security_metrics jsonb NULL, + "PRIMARY" KEY (TxnPointer), + "FOREIGN" KEY ("TxnPointer") REFERENCES transactions(EventCode) +); + + + +"First" 3 rows: +TxnPointer OpSecMetric ThreatIntelIndex DetectionAvoidance AnonLevel TraceScore CorrelationValue PatternMatchScore BehaviorScore ML_Confidence AnomalyValue FalsePosProb Threat_handle_rate Data_proctecteff encrytion_cost anonymity_cost Bot_act_index Connection_duration connection_security_metrics +------------ ------------- ------------------ -------------------- ----------- ------------ ------------------ ------------------- --------------- --------------- -------------- -------------- -------------------- ----------------------- ----------------------- ------------------------- --------------------- --------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +TX4833222 95.7 14.4363 94.6513 1.00297 14.4 69.6 36.5 15.8 97.2 38.9 10.4 0.12 Threats/hour 1.82 Vulnerabilities/GB 8.21 USD/encryption-bit 51.71 USD/anonymity-point 40.87 Actions/IP/hour 8.92 hours {'ip_count_total': 2, 'tor_node_count': 4, 'privacy_score_val': 62.2, 'vpn_detect_status': 'No', 'auth_protocol_type': 'Basic', 'device_fp_risk_score': 63.7, 'browser_fp_unique_scr': 6.4, 'data_protection_class': '754.0646111300096', 'session_sec_rating_scr': 12.701034, 'conn_pattern_metric_scr': 48.1, 'encryption_strength_scr': '95'} +TX7875482 58.2 72.492 45.5951 5.25037 35.6 95.1 42.6 83.3 38.9 nan 84.1 0.14 Threats/hour 0.95 Vulnerabilities/GB 8.19 USD/encryption-bit 30.98 USD/anonymity-point 32.55 Actions/IP/hour 11.34 hours {'ip_count_total': 9, 'tor_node_count': 4, 'privacy_score_val': 50.8, 'vpn_detect_status': 'No', 'auth_protocol_type': 'Basic', 'device_fp_risk_score': None, 'browser_fp_unique_scr': 61.5, 'data_protection_class': '337.3957605153179', 'session_sec_rating_scr': 6.5542607, 'conn_pattern_metric_scr': 10.8, 'encryption_strength_scr': '81'} +TX9295302 24.1 84.9003 82.1625 1.26364 74.2 3 8.5 10.1 95.2 85.2 22.2 18.73 Threats/hour 0.36 Vulnerabilities/GB 5.99 USD/encryption-bit 82.7 USD/anonymity-point 42.24 Actions/IP/hour 7.87 hours {'ip_count_total': 4, 'tor_node_count': None, 'privacy_score_val': 73.7, 'vpn_detect_status': 'Suspected', 'auth_protocol_type': '2FA', 'device_fp_risk_score': 17.9, 'browser_fp_unique_scr': 79, 'data_protection_class': '24.80841464750114', 'session_sec_rating_scr': 8.994444, 'conn_pattern_metric_scr': 86.9, 'encryption_strength_scr': '239'} +... 
+ + +"CREATE" TABLE "alerts" ( +"EventTag" text NOT NULL, +"ReviewFreq" text NULL, +"NextReviewDt" date NULL, +"AnnotationCount" bigint NULL, +alert_case_management jsonb NULL, + "PRIMARY" KEY (EventTag), + "FOREIGN" KEY ("EventTag") REFERENCES transactions(EventCode) +); + + + +"First" 3 rows: +EventTag ReviewFreq NextReviewDt AnnotationCount alert_case_management +---------- ------------ -------------- ----------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +TX4833222 Weekly 2025-03-17 3 {'resolve_hours': 122, 'resp_time_min': 1411, 'alert_category': None, 'case_state_stat': 'Resolved', 'action_taken_stat': 'Termination', 'needs_followup_stat': False, 'severity_level_stat': 'High', 'escalation_tier_stat': 'Level3', 'invest_priority_stat': 'High', 'confidence_metric_val': 59.6} +TX7875482 Monthly 2025-03-13 1 {'resolve_hours': 43, 'resp_time_min': 686, 'alert_category': 'Pattern', 'case_state_stat': 'Resolved', 'action_taken_stat': 'Warning', 'needs_followup_stat': False, 'severity_level_stat': 'Low', 'escalation_tier_stat': 'Level2', 'invest_priority_stat': 'Low', 'confidence_metric_val': 89.1} +TX9295302 Weekly 2025-03-19 9 {'resolve_hours': 57, 'resp_time_min': 1068, 'alert_category': 'Transaction', 'case_state_stat': 'Resolved', 'action_taken_stat': 'Termination', 'needs_followup_stat': True, 'severity_level_stat': 'Low', 'escalation_tier_stat': 'Level2', 'invest_priority_stat': 'High', 'confidence_metric_val': 76.7} +... diff --git a/disaster_relief/disaster_relief_column_meaning_base.json b/disaster_relief/disaster_relief_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..76b8dda0c5c25472897f507a36781418d60cd9a2 --- /dev/null +++ b/disaster_relief/disaster_relief_column_meaning_base.json @@ -0,0 +1,192 @@ +{ + "disaster_relief|DisasterEvents|DistRegistry": "VARCHAR(20). Unique disaster event identifier. PK.", + "disaster_relief|DisasterEvents|TIMEMARK": "TIMESTAMP. Timestamp when the disaster event was recorded. Example: 2024-12-21 19:49.", + "disaster_relief|DisasterEvents|HazType": "HazType_enum. Type of the disaster (e.g., Wildfire, Earthquake). Possible values: Earthquake, Flood, Hurricane, Tsunami, Wildfire.", + "disaster_relief|DisasterEvents|haz_level": "TEXT. Severity level of the disaster event. Possible values: Severity-1, Severity-2, Severity-3, Severity-4, Severity-5.", + "disaster_relief|DisasterEvents|affectedArea": "TEXT. Geographic area impacted by the disaster. Example: East Jeremy.", + "disaster_relief|DisasterEvents|REGION_TAG": "TEXT. Regional tag for categorization. Example: RC7250.", + "disaster_relief|DisasterEvents|latcoord": "REAL. Latitude coordinate of the disaster center. Example: 51.203609.", + "disaster_relief|DisasterEvents|LON_COORD": "REAL. Longitude coordinate of the disaster center. Example: 36.221141.", + "disaster_relief|DisasterEvents|damageReport": "TEXT. Detailed report of the damage caused by the disaster. Possible values: Catastrophic, Minor, Moderate, Severe.", + "disaster_relief|DistributionHubs|HubRegistry": "VARCHAR(20). Unique distribution hub identifier. PK.", + "disaster_relief|DistributionHubs|HUB_CAP_TONS": "REAL. Hub's capacity in tons. Example: 5101.", + "disaster_relief|DistributionHubs|hubUtilPct": "REAL. 
Hub's utilization percentage. Example: 9.4.", + "disaster_relief|DistributionHubs|store_cap_m3": "REAL. Hub's storage capacity in cubic meters. Example: 93293.", + "disaster_relief|DistributionHubs|STOREAVAILM3": "REAL. Available storage capacity in cubic meters. Example: 7279.", + "disaster_relief|DistributionHubs|coldStoreCapM3": "REAL. Cold storage capacity in cubic meters. Example: 249.", + "disaster_relief|DistributionHubs|COLD_STORE_TEMP_C": "REAL. Cold storage temperature in Celsius. **NULL means cold storage temperature not recorded.**. Example: 6.6.", + "disaster_relief|DistributionHubs|warehouse_state": "TEXT. Warehouse operational state. Possible values: Excellent, Fair, Good, Poor.", + "disaster_relief|DistributionHubs|INVACCPCT": "REAL. Inventory acceptance percentage. Example: 91.3.", + "disaster_relief|DistributionHubs|stockTurnRate": "REAL. Stock turnover rate. Example: 3.31.", + "disaster_relief|Operations|OpsRegistry": "VARCHAR(20). Unique operation registry identifier. PK.", + "disaster_relief|Operations|emergLevel": "EmergLevel_enum. Emergency level of the operation. Possible values: Black, Orange, Red, Yellow.", + "disaster_relief|Operations|RESP_PHASE": "TEXT. Response phase during the operation. Possible values: Emergency, Initial, Reconstruction, Recovery.", + "disaster_relief|Operations|OpsStatus": "TEXT. Operational status description. Possible values: Active, Completed, Planning, Scaling Down.", + "disaster_relief|Operations|COORDCENTER": "TEXT. Coordination center for the operation. Example: CC7649.", + "disaster_relief|Operations|ops_start_date": "DATE. Date when the operation started. Example: 2025-01-26.", + "disaster_relief|Operations|ESTDURATIONDAYS": "BIGINT. Estimated duration of the operation in days. Example: 12.", + "disaster_relief|Operations|Priority_rank": "PriorityRank_enum. Priority rank of the operation. Possible values: Critical, High, Low, Medium.", + "disaster_relief|Operations|resourceAllocState": "ResourceAllocState_enum. Resource allocation status. Possible values: Critical, Limited, Sufficient.", + "disaster_relief|Operations|SUPPLY_FLOW_STATE": "TEXT. Supply flow status during the operation. Possible values: Disrupted, Stable, Strained.", + "disaster_relief|Supplies|SupplyRegistry": "VARCHAR(20). Unique supply registry identifier. PK.", + "disaster_relief|Supplies|supply_dist_ref": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|Supplies|SUPPLYHUBREF": "VARCHAR(20). Distribution hub reference. FK to DistributionHubs.", + "disaster_relief|Supplies|FoodTons": "REAL. Amount of food supplies in tons. Example: 479.4.", + "disaster_relief|Supplies|water_liters": "REAL. Amount of water supplies in liters. Example: 283596.", + "disaster_relief|Supplies|MEDUNITS": "BIGINT. Number of medical units. Example: 15593.0.", + "disaster_relief|Supplies|shelter_units": "BIGINT. Number of shelter units. Example: 7534.", + "disaster_relief|Supplies|blanketUnits": "BIGINT. Number of blankets. Example: 26671.0.", + "disaster_relief|Supplies|HYGIENE_UNITS": "BIGINT. Number of hygiene units. Example: 46981.", + "disaster_relief|Supplies|powergenunits": "BIGINT. Number of power generators. Example: 568.", + "disaster_relief|Supplies|FUEL_RESERVE_LITERS": "REAL. Amount of fuel reserves in liters. Example: 86011.", + "disaster_relief|Transportation|TransportRegistry": "VARCHAR(20). Unique transportation registry identifier. PK.", + "disaster_relief|Transportation|transportDistRef": "VARCHAR(20). Disaster event reference. 
FK to DisasterEvents.", + "disaster_relief|Transportation|TRANSPORT_HUB_REF": "VARCHAR(20). Distribution hub reference. FK to DistributionHubs.", + "disaster_relief|Transportation|transportsupref": "VARCHAR(20). Supply reference. FK to Supplies.", + "disaster_relief|Transportation|VEHICLECOUNT": "BIGINT. Number of vehicles available. **NULL means vehicle count not recorded.**. Example: 141.0.", + "disaster_relief|Transportation|trucks_available": "BIGINT. Number of available trucks. Example: 88.", + "disaster_relief|Transportation|HelosAvailable": "BIGINT. Number of available helicopters. Example: 7.", + "disaster_relief|Transportation|BOATSAVAILABLE": "BIGINT. Number of available boats. Example: 7.", + "disaster_relief|Transportation|lastmileStatus": "TEXT. Status of the last-mile delivery. Possible values: Delayed, On Track, Suspended.", + "disaster_relief|Transportation|distribution_points": "BIGINT. Number of distribution points. Example: 35.", + "disaster_relief|Transportation|deliveryStatus": "TEXT. Overall delivery status.", + "disaster_relief|HumanResources|HRRegistry": "VARCHAR(20). Unique human resources registry identifier. PK.", + "disaster_relief|HumanResources|hrdistref": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|HumanResources|HR_Ops_Ref": "VARCHAR(20). Operations reference. FK to Operations.", + "disaster_relief|Financials|FinanceRegistry": "VARCHAR(20). Unique financials registry identifier. PK.", + "disaster_relief|Financials|finDistRef": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|Financials|FIN_OPS_REF": "VARCHAR(20). Operations reference. FK to Operations.", + "disaster_relief|Financials|BUDGETALLOTUSD": "TEXT. Allotted budget in USD. Example: $4,227,090.00.", + "disaster_relief|Financials|funds_util_pct": "TEXT. Percentage of funds utilized. Example: 9.8%.", + "disaster_relief|Financials|costBeneUSD": "TEXT. Cost-benefit ratio in USD. Example: $844.12.", + "disaster_relief|Financials|OPS_COSTS_USD": "REAL. Operational costs in USD. Example: 88256.", + "disaster_relief|Financials|transportcostsUSD": "BIGINT. Transportation costs in USD. Example: 976202.", + "disaster_relief|Financials|STORAGE_COSTS_USD": "REAL. Storage costs in USD. Example: 111548.", + "disaster_relief|Financials|personnelcostsUsd": "REAL. Personnel costs in USD. Example: 364821.", + "disaster_relief|Financials|funding_state": "TEXT. Funding state. Possible values: Adequate, Critical, Limited.", + "disaster_relief|Financials|DONOR_COMMITMENTS_USD": "REAL. Donor commitments in USD. **NULL means no donor commitments.**. Example: 4068978.0.", + "disaster_relief|Financials|resource_gaps_usd": "BIGINT. Resource gaps in USD. Example: 95367.", + "disaster_relief|BeneficiariesAndAssessments|BeneRegistry": "VARCHAR(20). Unique beneficiary registry identifier. PK.", + "disaster_relief|BeneficiariesAndAssessments|bene_dist_ref": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|BeneficiariesAndAssessments|BENEOpsRef": "VARCHAR(20). Operations reference. FK to Operations.", + "disaster_relief|BeneficiariesAndAssessments|beneRegister": "BeneRegister_enum. Beneficiary registration status. **NULL means registration not completed.**. Possible values: Complete, Partial, Pending.", + "disaster_relief|BeneficiariesAndAssessments|VULNERABILITY_REVIEW": "VulnerabilityReview_enum. Vulnerability review status. 
Possible values: Complete, In Progress, Pending.", + "disaster_relief|BeneficiariesAndAssessments|needs_assess_status": "TEXT. Status of needs assessment. Possible values: Due, Overdue, Updated.", + "disaster_relief|BeneficiariesAndAssessments|DISTEQUITYIDX": "REAL. Disaster equity index. Example: 0.54.", + "disaster_relief|BeneficiariesAndAssessments|bene_feedbackScore": "REAL. Beneficiary feedback score. **NULL means feedback not recorded.**. Example: 1.3.", + "disaster_relief|BeneficiariesAndAssessments|COMMENGAGELVL": "TEXT. Community engagement level. Possible values: High, Low, Medium.", + "disaster_relief|BeneficiariesAndAssessments|local_capacity_growth": "TEXT. Local capacity growth description. **NULL means no growth recorded.**. Possible values: Active, Limited.", + "disaster_relief|EnvironmentAndHealth|EnvHealthRegistry": "VARCHAR(20). Unique environment and health registry identifier. PK.", + "disaster_relief|EnvironmentAndHealth|env_dist_ref": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|CoordinationAndEvaluation|CoordEvalRegistry": "VARCHAR(20). Unique coordination and evaluation registry identifier. PK.", + "disaster_relief|CoordinationAndEvaluation|coord_dist_ref": "VARCHAR(20). Disaster event reference. FK to DisasterEvents.", + "disaster_relief|CoordinationAndEvaluation|coordOpsRef": "VARCHAR(20). Operations reference. FK to Operations.", + "disaster_relief|CoordinationAndEvaluation|SECINCIDENTCOUNT": "BIGINT. Security incident count. Example: 83.", + "disaster_relief|CoordinationAndEvaluation|safety_ranking": "SafetyRanking_enum. Safety ranking level. **NULL means safety ranking not assessed.**. Possible values: High Risk, Moderate, Safe.", + "disaster_relief|CoordinationAndEvaluation|ACCESSLIMITATION": "TEXT. Access limitation description. **NULL means no access limitations.**. Possible values: Partial, Severe.", + "disaster_relief|CoordinationAndEvaluation|coord_effect_lvl": "CoordEffectLvl_enum. Coordination effect level. Possible values: High, Low, Medium.", + "disaster_relief|CoordinationAndEvaluation|PARTNERORGS": "TEXT. Partner organizations involved. Example: 9.0.", + "disaster_relief|CoordinationAndEvaluation|infoSharingState": "InfoSharingState_enum. Information-sharing effectiveness. Possible values: Effective, Limited, Poor.", + "disaster_relief|CoordinationAndEvaluation|report_compliance": "REAL. Report compliance percentage. Example: 91.0.", + "disaster_relief|CoordinationAndEvaluation|DATAQUALITYVALUE": "BIGINT. Data quality score. Example: 3.8.", + "disaster_relief|CoordinationAndEvaluation|monitoring_freq": "TEXT. Monitoring frequency description. Possible values: Daily, Monthly, Weekly.", + "disaster_relief|CoordinationAndEvaluation|evaluationStage": "EvaluationStage_enum. Evaluation stage status. **NULL means evaluation stage not set.**. Possible values: Current, Due, Overdue.", + "disaster_relief|CoordinationAndEvaluation|LESSONS_LEARNED_STAGE": "LessonsLearnedStage_enum. Lessons learned stage. Possible values: Documented, In Progress, Pending.", + "disaster_relief|CoordinationAndEvaluation|contingencyplanstage": "ContingencyPlanStage_enum. Contingency plan stage. Possible values: Due, Overdue, Updated.", + "disaster_relief|CoordinationAndEvaluation|risk_mitigation_steps": "RiskMitigationSteps_enum. Risk mitigation steps status. Possible values: Adequate, Insufficient, Partial.", + "disaster_relief|CoordinationAndEvaluation|INSURANCE_SCOPE": "TEXT. Insurance coverage scope. 
Possible values: Full, Partial.", + "disaster_relief|CoordinationAndEvaluation|complianceState": "ComplianceState_enum. Compliance status. Possible values: Compliant, Non-Compliant, Partial.", + "disaster_relief|CoordinationAndEvaluation|audit_state": "AuditState_enum. Audit state. Possible values: Completed, Due, Overdue.", + "disaster_relief|CoordinationAndEvaluation|QUALITYCONTROLSTEPS": "QualityControlSteps_enum. Quality control steps evaluation. Possible values: Moderate, Strong, Weak.", + "disaster_relief|CoordinationAndEvaluation|stakeholder_satisf": "REAL. Stakeholder satisfaction score. Example: 4.0.", + "disaster_relief|CoordinationAndEvaluation|mediacoversentiment": "MediaCoverSentiment_enum. Media coverage sentiment. Possible values: Negative, Neutral, Positive.", + "disaster_relief|CoordinationAndEvaluation|PUBLIC_PERCEPTION": "TEXT. Public perception of the response. Example: 3.5★.", + "disaster_relief|CoordinationAndEvaluation|documentation_state": "TEXT. Documentation state of the coordination efforts. Possible values: Complete, Incomplete, Partial.", + "disaster_relief|CoordinationAndEvaluation|LESSONSRECORDED": "TEXT. Status of lessons learned recording. Example: 5.", + "disaster_relief|CoordinationAndEvaluation|bestPracticesListed": "TEXT. Best practices listing status. Example: 2.", + "disaster_relief|CoordinationAndEvaluation|IMPROVEMENT_RECS": "TEXT. Improvement recommendations. Example: 25.", + "disaster_relief|CoordinationAndEvaluation|next_review_date": "DATE. Next scheduled review date. Example: 2025-04-22.", + "disaster_relief|CoordinationAndEvaluation|notes": "TEXT. Additional notes. **NULL means no notes recorded.**. Possible values: Delayed due to weather, Delivered without incident, Handle with care, Priority shipment, Rerouted to alternate hub.", + "disaster_relief|Operation_Hub_Map|OpsRegistry": "VARCHAR(20). Operations registry identifier. PK. FK to Operations.", + "disaster_relief|Operation_Hub_Map|HubRegistry": "VARCHAR(20). Distribution hub registry identifier. PK. FK to DistributionHubs.", + "disaster_relief|Operation_Hub_Map|hub_role": "TEXT. Role or function of the hub in the operation. Possible values: Backup Hub, Forward Depot, Primary Hub.", + "disaster_relief|Operation_Hub_Map|ALLOCATED_CAP_TONS": "REAL. Allocated capacity in tons. Example: 202.6.", + "disaster_relief|DisasterEvents|impact_summary": { + "column_meaning": "JSONB column. Summarizes the impact of the disaster including population effects and damage levels.", + "fields_meaning": { + "population_impact": { + "affected": "BIGINT. Number of affected population. Example: 228943.", + "displaced": "BIGINT. Number of displaced individuals. **NULL means displacement data not available.**. Example: 31578.0.", + "casualties": "BIGINT. Number of casualties due to the disaster. Example: 174.", + "injured": "BIGINT. Number of injuries caused by the disaster. **NULL means injury data not available.**. Example: 4524.0.", + "missing": "BIGINT. Number of missing persons. **NULL means missing persons data not recorded.**. Example: 212.0." + }, + "infrastructure_damage": { + "infra_damage_pct": "REAL. Percentage of infrastructure damaged. Example: 19.0.", + "power_outage_pct": "REAL. Percentage of power outages. **NULL means power outage data not recorded.**. Example: 93.2.", + "water_damage_pct": "BIGINT. Percentage of water infrastructure damage. **NULL means water damage data not available.**. Example: 77.4." + }, + "communication_and_transport": { + "communication_state": "TEXT. 
Communication network status during the disaster. Possible values: Down, Limited, OP.", + "transport_access": "TEXT. Access to transportation routes during the disaster. Possible values: Full, Limited, Minimal." + } + } + }, + "disaster_relief|HumanResources|staffing_details": { + "column_meaning": "JSONB column. Details about staffing levels and composition across different roles.", + "fields_meaning": { + "staff_counts": { + "total": "BIGINT. Total number of staff. **NULL means staff count not recorded.**. Example: 234.0.", + "medical": "BIGINT. Number of medical staff. Example: 52.", + "logistics": "BIGINT. Number of logistics staff. Example: 99.", + "security": "BIGINT. Number of security staff. **NULL means security staff not recorded.**. Example: 46.0.", + "volunteers": "BIGINT. Total number of volunteers. Example: 435.0." + }, + "availability_and_equipment": { + "availability_pct": "REAL. Staff availability percentage. Example: 94.1.", + "training_state": "TEXT. Training state of staff. **NULL means training state not recorded.**. Possible values: Complete, In Progress, Required.", + "ppe_status": "TEXT. Personal protective equipment status. Possible values: Critical, LTD, ✓.", + "comm_equipment": "TEXT. Communication equipment status. Possible values: Insufficient, Limited, Sufficient." + } + } + }, + "disaster_relief|Transportation|delivery_metrics": { + "column_meaning": "JSONB column. Captures delivery statistics including capacity, performance, and vehicle metrics.", + "fields_meaning": { + "delivery_capacity": { + "total_tons": "REAL. Total delivery tonnage. Example: 368.", + "daily_tons": "REAL. Daily delivery tonnage. Example: 227." + }, + "delivery_performance": { + "avg_hours": "REAL. Average delivery hours. **NULL means delivery time not recorded.**. Example: 20.7.", + "success_rate": "REAL. Delivery success rate percentage. Example: 79.4.", + "route_optimization_status": "RouteOptStatus_enum. Route optimization status. Possible values: In Progress, Optimized, Required." + }, + "vehicle_metrics": { + "fuel_efficiency_lpk": "REAL. Fuel efficiency in liters per kilometer. Example: 10.6.", + "maintenance_state": "TEXT. Vehicle maintenance state. Possible values: Due, Overdue, Up to Date.", + "break_rate": "REAL. Vehicle breakdown rate. Example: 9.17." + } + } + }, + "disaster_relief|EnvironmentAndHealth|health_environment_profile": { + "column_meaning": "JSONB column. Summarizes environmental and health conditions during the disaster response.", + "fields_meaning": { + "environment": { + "impact_rate": "TEXT. Environmental impact rate. Possible values: High, Low, Medium.", + "waste_management": "TEXT. Waste management status. Possible values: Adequate, Critical, Limited.", + "recycling_pct": "REAL. Recycling percentage. Example: 68.4.", + "carbon_tons": "REAL. Carbon emissions in tons. **NULL means carbon data not recorded.**. Example: 793.5.", + "renewable_energy_pct": "REAL. Renewable energy percentage. Example: 43.3.", + "water_quality_index": "TEXT. Water quality index. Example: 85.6 WQI.", + "sanitation_coverage": "REAL. Sanitation coverage percentage. Example: 53.5." + }, + "health": { + "disease_risk": "DiseaseRisk_enum. Disease risk level. Possible values: High, Low, Medium.", + "medical_capacity": "MedicalEmergencyCapacity_enum. Medical emergency capacity level. Possible values: Adequate, Critical, Limited.", + "vaccination_coverage": "REAL. Vaccination coverage percentage. **NULL means vaccination coverage not recorded.**. 
Example: 14.8.", + "mental_health_aid": "TEXT. Mental health aid status. Possible values: Available, Limited." + } + } + } +} \ No newline at end of file diff --git a/disaster_relief/disaster_relief_kb.jsonl b/disaster_relief/disaster_relief_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..ae882fdf6770dd6b5bdb615f4c76feadd388543c --- /dev/null +++ b/disaster_relief/disaster_relief_kb.jsonl @@ -0,0 +1,58 @@ +{"id": 0, "knowledge": "Disaster Mortality Rate (DMR)", "description": "Calculates the percentage of the affected population that resulted in fatalities.", "definition": "The percentage of a disaster's total affected population that resulted in a confirmed death.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Population Displacement Rate (PDR)", "description": "Measures the percentage of the affected population that has been displaced from their homes.", "definition": "The percentage of a disaster's total affected population that has been forced to leave their homes.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Infrastructure Damage Index (IDI)", "description": "A composite score from 0 to 100 that quantifies the overall damage to critical infrastructure.", "definition": "A composite score calculated as the average of infrastructure, power, and water damage percentages. Formula: (COALESCE(infra_damage_pct, 0) + COALESCE(power_outage_pct, 0) + COALESCE(water_damage_pct, 0)) / 3.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Supply Sufficiency (Days)", "description": "Estimates for how many days the current supplies can last. This is determined by the most limiting resource (either food or water).", "definition": "Calculates the minimum number of days supplies can last, assuming a daily consumption of 2kg of food and 3 liters of water per person. Formula: LEAST((food_tons * 1000) / (affected_population * 2), water_liters / (affected_population * 3)).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Financial Burn Rate (FBR)", "description": "Calculates the average daily operational spending from the allocated budget.", "definition": "The average daily expenditure of an operation. Formula: total_operational_costs / (current_date - operation_start_date).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Cost Per Person Affected (CPPA)", "description": "Calculates the total operational cost incurred per person in the affected population.", "definition": "The total cost of an operation divided by the affected population. Total cost is the sum of ops_costs_usd, transportcostsUSD, storage_costs_usd, and personnelcostsUsd.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Logistical Throughput per Vehicle (LTV)", "description": "Measures the average daily delivery capacity in tons per available transport vehicle.", "definition": "The average daily tonnage delivered per vehicle. Formula: daily_tons / vehicle_count.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Hub Strain Index (HSI)", "description": "A score indicating the operational pressure on a distribution hub, considering both its internal metrics and the severity of the disasters it serves.", "definition": "A score blending internal stress and external demand. 
Formula: (0.5 * ((hubUtilPct / 100) + (stockTurnRate / 10))) + (0.5 * avg_dsi_served).", "type": "calculation_knowledge", "children_knowledge": [16]} +{"id": 8, "knowledge": "Population Impact Score (PIS)", "description": "A weighted score quantifying the overall human impact of a disaster.", "definition": "A weighted score measuring human suffering, calculated with the formula: (10 * COALESCE(casualties, 0)) + (3 * COALESCE(injured, 0)) + (2 * COALESCE(missing, 0)) + (1 * COALESCE(displaced, 0)).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Medical Support Ratio (MSR)", "description": "Calculates the ratio of medical staff to the total number of casualties and injured, indicating direct medical response capacity.", "definition": "The ratio of available medical personnel to the total number of casualties and injured. Formula: medical_personnel / (casualties + injured).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Aid Delivery Cost per Ton (ADC)", "description": "Calculates the average transportation cost to deliver one ton of supplies.", "definition": "The total transport costs for a given transport ID, divided by the total tons delivered by that same transport ID.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Budget Runway (Days)", "description": "Estimates the number of days until the allocated budget is depleted, based on the current burn rate.", "definition": "The estimated number of days an operation can continue before its budget is exhausted. Formula: (allocated_budget - total_operational_costs) / Financial Burn Rate.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 12, "knowledge": "Shelter Gap", "description": "Calculates the shortfall in shelter units relative to the displaced population.", "definition": "The difference between the number of displaced individuals and the number of available shelter units. Formula: GREATEST(0, displaced_population - shelter_units).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Operational Readiness Score (ORS)", "description": "A score from 0 to 100 assessing the readiness of the response team and their equipment.", "definition": "A composite score indicating a team's preparedness, based on personnel availability, equipment status, and maintenance levels.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Response Effectiveness Score (RES)", "description": "A composite score measuring the overall effectiveness of the response operation.", "definition": "A weighted score of an operation's success. Formula: (0.4 * success_rate) + (0.3 * bene_feedbackscore * 10) + (0.3 * distequityidx * 100).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Coordination Quality Index (CQI)", "description": "An index measuring the quality of inter-agency coordination and information management.", "definition": "A composite score assessing collaboration effectiveness. 
Formula: (report_compliance + (stakeholder_satisf * 20) + (dataqualityvalue * 20)) / 3.0.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Overall Disaster Severity Index (DSI)", "description": "A comprehensive index combining human and infrastructural impact into a single severity score.", "definition": "A composite index calculated by the formula: (0.6 * Population Impact Score) + (0.4 * Infrastructure Damage Index).", "type": "calculation_knowledge", "children_knowledge": [2, 8]} +{"id": 17, "knowledge": "Total Humanitarian Footprint (THF)", "description": "A measure of the total personnel involved in the response, including staff and volunteers.", "definition": "The total number of personnel, including both paid staff and volunteers, dedicated to a response operation.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Supply Chain Fragility Index (SCFI)", "description": "A score indicating the vulnerability of a supply chain, considering both hub strain and vehicle reliability.", "definition": "A score assessing supply chain vulnerability. Formula: (0.5 * ((hubutilpct / 100) + (stockturnrate / 10))) + (0.5 * break_rate).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Public Health Risk Score (PHRS)", "description": "A score assessing the overall public health risk, calculated from medical support, disease risk, sanitation, and water quality.", "definition": "A composite risk score based on multiple factors. It is calculated by first deriving component scores: disease_risk_score (High=3, Medium=2, else 1), sanitation_risk (100 - sanitation_coverage), and water_quality_risk (100 - water_quality_index). The final formula is: ((1 - MSR) * 40) + (disease_risk_score * 10) + (sanitation_risk * 0.15) + (water_quality_risk * 0.15).", "type": "calculation_knowledge", "children_knowledge": [9]} +{"id": 20, "knowledge": "Financial Health Score (FHS)", "description": "An index assessing the financial stability of an operation based on budget and spending.", "definition": "A score assessing financial stability, calculated by averaging the budget remaining percentage and funding sufficiency. 
Formula: 0.5 * (100 - funds_util_pct) + 0.5 * ((budgetallotusd - COALESCE(resource_gaps_usd, 0)) / NULLIF(budgetallotusd, 0)) * 100.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Mass Casualty Incident (MCI)", "description": "An event with a number of casualties that overwhelms or significantly challenges the available medical resources.", "definition": "An event is classified as an 'MCI' if the total number of fatalities and injuries is greater than 100, OR if the Medical Support Ratio (MSR) is less than 0.01.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 23, "knowledge": "Imminent Supply Depletion Alert", "description": "A critical warning that essential life-sustaining supplies will run out within 48 hours.", "definition": "An 'Imminent Supply Depletion Alert' is triggered when the calculated Supply Sufficiency (Days) is less than 2.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 24, "knowledge": "Funding Crisis", "description": "An operational state where an operation is at immediate risk of halting due to lack of funds, characterized by a 'Critical' funding status and a very short budget runway.", "definition": "A 'Funding Crisis' is active when the operation's funding state is 'Critical' and its estimated Budget Runway is less than 7 days.", "type": "domain_knowledge", "children_knowledge": [11, 47]} +{"id": 25, "knowledge": "Logistical Gridlock", "description": "A severe breakdown in the supply chain, rendering aid delivery ineffective.", "definition": "A 'Logistical Gridlock' is identified if the supply flow state is 'Disrupted' AND transportation access is 'Minimal'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "Public Health Emergency", "description": "A state where the risk of disease is high and the capacity to respond is critically low.", "definition": "A 'Public Health Emergency' exists if the Public Health Risk Score (PHRS) exceeds a critical threshold of 70.", "type": "domain_knowledge", "children_knowledge": [19]} +{"id": 27, "knowledge": "High-Risk Operation", "description": "An operation facing severe security threats or access constraints that endanger personnel and mission objectives.", "definition": "An operation is 'High-Risk' if it has a 'High Risk' safety ranking OR if the number of security incidents exceeds 5 per week.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Highly Effective Operation", "description": "An operation that demonstrates excellence across coordination, logistics, and beneficiary satisfaction.", "definition": "An operation is 'Highly Effective' if its Response Effectiveness Score (RES) is greater than 75 AND its Coordination Quality Index (CQI) is greater than 75.", "type": "domain_knowledge", "children_knowledge": [14, 15]} +{"id": 29, "knowledge": "Severe Shelter Crisis", "description": "A critical lack of adequate shelter for the displaced population.", "definition": "A 'Severe Shelter Crisis' is active if the calculated Shelter Gap is greater than 1,000 units.", "type": "domain_knowledge", "children_knowledge": [12]} +{"id": 30, "knowledge": "Major Disaster", "description": "Classifies a disaster as 'Major' based on the comprehensive severity index, indicating a need for national-level response.", "definition": "A 'Major Disaster' is any event where the Overall Disaster Severity Index (DSI) is between 500,000 and 2,000,000.", "type": "domain_knowledge", "children_knowledge": [16]} +{"id": 31, 
"knowledge": "Overwhelmed Logistical Hub", "description": "Flags a distribution hub that is under extreme pressure and likely a bottleneck in the supply chain.", "definition": "A hub is 'Overwhelmed' if its Hub Strain Index (HSI) is greater than 1,000,000.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 32, "knowledge": "Failing Operation", "description": "An operation characterized by poor logistical performance and low effectiveness.", "definition": "An operation is 'Failing' if its Logistical Throughput per Vehicle (LTV) is below 1.0 OR its Response Effectiveness Score (RES) is below 40.", "type": "domain_knowledge", "children_knowledge": [6, 14]} +{"id": 33, "knowledge": "Full-Scale Emergency", "description": "Defines the criteria for a full-scale emergency response.", "definition": "A 'Full-Scale Emergency' is any operation where the emergency level is 'Red' or 'Black'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 34, "knowledge": "Successful Community Partnership", "description": "Identifies operations with excellent community relations and equitable aid distribution.", "definition": "A 'Successful Community Partnership' exists when community engagement is 'High' and the distribution equity index is greater than 0.9.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 35, "knowledge": "High Cost, Low Impact Operation", "description": "An operation that is financially expensive but is having little positive effect on the ground.", "definition": "An operation is 'High Cost, Low Impact' if its Cost Per Person Affected (CPPA) is above average AND its Response Effectiveness Score (RES) is below average.", "type": "domain_knowledge", "children_knowledge": [5, 14]} +{"id": 36, "knowledge": "Operation with Fragile Supply Chain", "description": "Identifies an operation whose logistics are at high risk of failure.", "definition": "An operation has a 'Fragile Supply Chain' if its Supply Chain Fragility Index (SCFI) is greater than 5.0.", "type": "domain_knowledge", "children_knowledge": [18]} +{"id": 37, "knowledge": "Financially Unstable Operation", "description": "An operation with poor financial health, indicating a risk of budget overruns or failure to meet objectives.", "definition": "An operation is 'Financially Unstable' if its Financial Health Score (FHS) is below 50.", "type": "domain_knowledge", "children_knowledge": [20]} +{"id": 38, "knowledge": "High-Accountability Response", "description": "A response characterized by strong compliance, documentation, and quality control.", "definition": "A 'High-Accountability Response' is one where the compliance state is 'Compliant', the audit state is 'Completed', and quality control steps are 'Strong'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 39, "knowledge": "Negative Media Narrative", "description": "A situation where public and media perception of the response effort is poor.", "definition": "A 'Negative Media Narrative' is present if the media coverage sentiment is 'Negative' and the public perception score is below 60.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 40, "knowledge": "Gridlock Severity Index (GSI)", "description": "A weighted score to prioritize operations in logistical gridlock, based on mission priority, personnel count, and transport access restrictions.", "definition": "A weighted sum calculated after converting categorical data to scores: priority_rank ('critical'=1 to 'low'=4) and transport_access ('minimal'=1.0, 'limited'=0.5). 
The final formula is: ((10 - priority_score) * 0.5) + (total_personnel * 0.2) + (transport_restriction_score * 30).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 41, "knowledge": "Hazard Severity Levels", "description": "A classification of a disaster's intensity, guiding the scale of the required response.", "definition": "'Level 1' indicates a minor, localized event. 'Level 3' indicates a moderate event with regional impact. 'Level 5' indicates a catastrophic event with widespread, severe impact requiring international assistance.", "type": "value_illustration", "children_knowledge": -1} +{"id": 42, "knowledge": "Emergency Alert Levels", "description": "Color-coded levels that define the urgency and command structure for an ongoing operation.", "definition": "'Yellow': Monitoring required. 'Orange': Active response, regional command. 'Red': Full-scale emergency, national command. 'Black': Catastrophic event, international-level coordination.", "type": "value_illustration", "children_knowledge": -1} +{"id": 43, "knowledge": "Response Phases", "description": "The distinct phases of a disaster management cycle, from immediate action to long-term rebuilding.", "definition": "'Initial': First 24-72 hours, focus on life-saving. 'Emergency': Period of active, large-scale relief. 'Recovery': Transition to restoring services. 'Reconstruction': Long-term rebuilding of infrastructure and society.", "type": "value_illustration", "children_knowledge": -1} +{"id": 44, "knowledge": "Distribution Equity Index", "description": "An index from 0 to 1 that measures the fairness and impartiality of aid distribution among affected populations.", "definition": "A score of 1.0 represents perfect equity. A score below 0.7 may indicate significant disparities. A score below 0.5 indicates severe inequity.", "type": "value_illustration", "children_knowledge": -1} +{"id": 45, "knowledge": "Damage Report Categories", "description": "A qualitative assessment of the physical destruction caused by a disaster.", "definition": "'Minor': Limited damage. 'Moderate': Visible damage to some structures. 'Severe': Widespread destruction. 'Catastrophic': Near-total destruction of the built environment in the affected area.", "type": "value_illustration", "children_knowledge": -1} +{"id": 46, "knowledge": "Coordination Effectiveness Levels", "description": "A rating of how well different agencies, government bodies, and NGOs are working together.", "definition": "'Low': Poor communication, duplicated efforts. 'Medium': Functional coordination with some gaps. 'High': Seamless, efficient, and collaborative partnership among all stakeholders.", "type": "value_illustration", "children_knowledge": -1} +{"id": 47, "knowledge": "Funding States", "description": "Describes the financial health of an operation in relation to its needs.", "definition": "'Adequate': Sufficient funds are available. 'Limited': Funding is available but does not cover all needs, requiring prioritization. 'Critical': A severe funding gap threatens essential activities.", "type": "value_illustration", "children_knowledge": -1} +{"id": 48, "knowledge": "Data Quality Value", "description": "A score from 0 to 100 representing the reliability, completeness, and timeliness of the data being reported.", "definition": "A score above 90 is 'Excellent'. 70-89 is 'Good'. 50-69 is 'Fair'. 
Below 50 is 'Poor' and indicates that operational decisions are based on unreliable information.", "type": "value_illustration", "children_knowledge": -1} +{"id": 49, "knowledge": "Supply Flow States", "description": "Describes the overall health and functionality of the supply chain for a relief operation.", "definition": "'Stable': Goods are moving predictably and efficiently. 'Strained': The supply chain is functioning but experiencing delays or difficulties. 'Disrupted': The supply chain is broken, and goods are not reaching their destinations.", "type": "value_illustration", "children_knowledge": -1} +{"id": 50, "knowledge": "Last-Mile Delivery Status", "description": "The status of the final and most critical stage of delivery, from a local hub to the final distribution points.", "definition": "'On Track': Deliveries are proceeding as planned. 'Delayed': Deliveries are facing obstacles and are behind schedule. 'Suspended': Deliveries have been halted due to safety, access, or other critical issues.", "type": "value_illustration", "children_knowledge": -1} +{"id": 51, "knowledge": "Catastrophic Disaster", "description": "Classifies a disaster as 'Catastrophic' based on the DSI, indicating a need for maximum-level international response.", "definition": "A 'Catastrophic Disaster' is any event where the Overall Disaster Severity Index (DSI) is greater than 15,000.", "type": "domain_knowledge", "children_knowledge": [16]} +{"id": 52, "knowledge": "Cost Per Beneficiary (CPB)", "description": "Calculates the overall average cost to assist a single beneficiary, based on the total costs of completed operations divided by the total number of people affected.", "definition": "Calculated by dividing the total costs by the total affected population. Total costs are the sum of ops_costs_usd, transportcostsUSD, storage_costs_usd, and personnelcostsUsd.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 53, "knowledge": "Average Vehicle Breakdown Rate", "description": "Calculates the mean vehicle breakdown rate for a specified cohort of transportation units.", "definition": "The mean rate at which transport vehicles become non-operational, calculated for a specific group of vehicles. The underlying 'break_rate' data is stored as a percentage.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 54, "knowledge": "High Risk Operation", "description": "An active operation with significant safety or security concerns.", "definition": "An operation is considered 'High Risk' if its status is 'Active' AND its safety ranking is 'High Risk' OR it has more than 5 security incidents.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 55, "knowledge": "Medical Staff Ratio", "description": "The number of medical staff available per 1000 affected people.", "definition": "A ratio indicating medical staff availability. Formula: (medical_staff_count / affected_population) * 1000.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 56, "knowledge": "Hub Overload", "description": "A state where a distribution hub's operational metrics indicate it is under excessive strain.", "definition": "A hub is considered overloaded if its combined utilization and turnover score is greater than 1.5. 
Formula: (hubutilpct / 100) + (stockturnrate / 10) > 1.5.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 57, "knowledge": "High Burn Rate Alert", "description": "An alert for operations that have consumed a significant portion of their budget.", "definition": "A 'High Burn Rate' alert is triggered for an operation if its total costs exceed 20% of its total allocated budget.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 58, "knowledge": "Fragile Supply Chain", "description": "Identifies operations whose logistical infrastructure is considered unstable or at risk.", "definition": "An operation is considered to have a 'Fragile Supply Chain' if its Supply Chain Fragility Index (SCFI) is greater than 5.0.", "type": "domain_knowledge", "children_knowledge": [18]} diff --git a/disaster_relief/disaster_relief_schema.txt b/disaster_relief/disaster_relief_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..7004d0eee79fb176fb053721a352aafb7d4b957e --- /dev/null +++ b/disaster_relief/disaster_relief_schema.txt @@ -0,0 +1,273 @@ +CREATE TABLE "disasterevents" ( +distregistry character varying NOT NULL, +timemark timestamp without time zone NOT NULL, +haztype USER-DEFINED NOT NULL, +haz_level text NULL, +affectedarea text NULL, +region_tag text NULL, +latcoord real NULL, +lon_coord real NULL, +damagereport text NULL, +impact_summary jsonb NULL, + PRIMARY KEY (distregistry) +); + +First 3 rows: +distregistry timemark haztype haz_level affectedarea region_tag latcoord lon_coord damagereport impact_summary +-------------- ------------------- ---------- ----------- ---------------- ------------ ---------- ----------- -------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +DIST_BI3UF 2024-12-21 19:49:00 Wildfire Severity-3 East Jeremy RC7250 51.2036 36.2211 Severe {'population_impact': {'injured': 4524, 'missing': 212, 'affected': 228943, 'displaced': None, 'casualties': 174}, 'infrastructure_damage': {'infra_damage_pct': 19, 'power_outage_pct': 93.2, 'water_damage_pct': 77}, 'communication_and_transport': {'transport_access': 'Full', 'communication_state': 'Limited'}} +DIST_G6W29 2024-03-13 16:56:00 Earthquake Severity-5 Lake Mariah RC2170 -89.8906 62.0815 Moderate {'population_impact': {'injured': None, 'missing': 363, 'affected': 241273, 'displaced': 31578, 'casualties': 8}, 'infrastructure_damage': {'infra_damage_pct': 68.9, 'power_outage_pct': None, 'water_damage_pct': None}, 'communication_and_transport': {'transport_access': 'Full', 'communication_state': 'OP'}} +DIST_STZJD 2024-12-08 06:09:00 Earthquake Severity-5 New Kellychester RC8678 80.0269 -146.007 Minor {'population_impact': {'injured': None, 'missing': 222, 'affected': 389569, 'displaced': None, 'casualties': 355}, 'infrastructure_damage': {'infra_damage_pct': 79.7, 'power_outage_pct': 83.8, 'water_damage_pct': 27}, 'communication_and_transport': {'transport_access': 'Limited', 'communication_state': 'Limited'}} +... 
+ + +CREATE TABLE "humanresources" ( +hrregistry character varying NOT NULL, +hrdistref character varying NULL, +hr_ops_ref character varying NULL, +staffing_details jsonb NULL, + PRIMARY KEY (hrregistry), + FOREIGN KEY (hrdistref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (hr_ops_ref) REFERENCES operations(opsregistry) +); + +First 3 rows: +hrregistry hrdistref hr_ops_ref staffing_details +------------ ----------- ------------ ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +HR_IMV6 DIST_BI3UF OPS_WKU4V {'staff_counts': {'total': None, 'medical': 52, 'security': None, 'logistics': 99, 'volunteers': 435}, 'availability_and_equipment': {'ppe_status': 'LTD', 'comm_equipment': 'Sufficient', 'training_state': 'Complete', 'availability_pct': 94.1}} +HR_S52O DIST_G6W29 OPS_UCBKX {'staff_counts': {'total': None, 'medical': 30, 'security': None, 'logistics': 197, 'volunteers': None}, 'availability_and_equipment': {'ppe_status': 'Critical', 'comm_equipment': None, 'training_state': 'In Progress', 'availability_pct': 87.7}} +HR_XW6X DIST_STZJD OPS_4OUKN {'staff_counts': {'total': 234, 'medical': 38, 'security': 46, 'logistics': 93, 'volunteers': 781}, 'availability_and_equipment': {'ppe_status': '✓', 'comm_equipment': 'Sufficient', 'training_state': 'In Progress', 'availability_pct': 82.5}} +... + + +CREATE TABLE "transportation" ( +transportregistry character varying NOT NULL, +transportdistref character varying NULL, +transport_hub_ref character varying NULL, +transportsupref character varying NULL, +vehiclecount bigint NULL, +trucks_available bigint NULL, +helosavailable bigint NULL, +boatsavailable bigint NULL, +lastmilestatus text NULL, +distribution_points bigint NULL, +deliverystatus text NULL, +delivery_metrics jsonb NULL, + PRIMARY KEY (transportregistry), + FOREIGN KEY (transportdistref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (transport_hub_ref) REFERENCES distributionhubs(hubregistry), + FOREIGN KEY (transportsupref) REFERENCES supplies(supplyregistry) +); + +First 3 rows: +transportregistry transportdistref transport_hub_ref transportsupref vehiclecount trucks_available helosavailable boatsavailable lastmilestatus distribution_points deliverystatus delivery_metrics +------------------- ------------------ ------------------- ----------------- -------------- ------------------ ---------------- ---------------- ---------------- --------------------- ---------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +TRANS_PK3RF DIST_BI3UF HUB_98QR SUP_T9G1R 141 88 7 7 On Track 35 In Transit {'vehicle_metrics': {'break_rate': 9.17, 'maintenance_state': 'Overdue', 'fuel_efficiency_lpk': 10.6}, 'delivery_capacity': {'daily_tons': 227, 'total_tons': 368}, 'delivery_performance': {'avg_hours': None, 'success_rate': 79.4, 'route_optimization_status': 'In Progress'}} +TRANS_E4XKN DIST_DJGG7 HUB_R7WU SUP_7IX64 nan 13 10 11 Delayed 40 Delivered {'vehicle_metrics': {'break_rate': 3.13, 'maintenance_state': 'Due', 'fuel_efficiency_lpk': 16.7}, 'delivery_capacity': {'daily_tons': 66, 'total_tons': 1686}, 'delivery_performance': {'avg_hours': 12, 
'success_rate': 98.1, 'route_optimization_status': 'Required'}} +TRANS_2547J DIST_G6W29 HUB_VC24 SUP_49AM8 nan 87 8 2 On Track 14 Delivered {'vehicle_metrics': {'break_rate': 6.85, 'maintenance_state': 'Overdue', 'fuel_efficiency_lpk': 17.7}, 'delivery_capacity': {'daily_tons': 364, 'total_tons': 2771}, 'delivery_performance': {'avg_hours': 20.7, 'success_rate': 92.6, 'route_optimization_status': 'Optimized'}} +... + + +CREATE TABLE "supplies" ( +supplyregistry character varying NOT NULL, +supply_dist_ref character varying NULL, +supplyhubref character varying NULL, +foodtons real NULL, +water_liters real NULL, +medunits bigint NULL, +shelter_units bigint NULL, +blanketunits bigint NULL, +hygiene_units bigint NULL, +powergenunits bigint NULL, +fuel_reserve_liters real NULL, + PRIMARY KEY (supplyregistry), + FOREIGN KEY (supply_dist_ref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (supplyhubref) REFERENCES distributionhubs(hubregistry) +); + +First 3 rows: +supplyregistry supply_dist_ref supplyhubref foodtons water_liters medunits shelter_units blanketunits hygiene_units powergenunits fuel_reserve_liters +---------------- ----------------- -------------- ---------- -------------- ---------- --------------- -------------- --------------- --------------- --------------------- +SUP_T9G1R DIST_BI3UF HUB_98QR 479.4 283596 nan 7534 nan 46981 568 86011 +SUP_49AM8 DIST_G6W29 HUB_VC24 187.1 139603 nan 1968 26671 99277 733 59156 +SUP_2COVR DIST_STZJD HUB_0TQE 640.8 878396 15593 6390 78690 31061 881 46825 +... + + +CREATE TABLE "distributionhubs" ( +hubregistry character varying NOT NULL, +hub_cap_tons real NULL, +hubutilpct real NULL, +store_cap_m3 real NULL, +storeavailm3 real NULL, +coldstorecapm3 real NULL, +cold_store_temp_c real NULL, +warehouse_state text NULL, +invaccpct real NULL, +stockturnrate real NULL, + PRIMARY KEY (hubregistry) +); + +First 3 rows: +hubregistry hub_cap_tons hubutilpct store_cap_m3 storeavailm3 coldstorecapm3 cold_store_temp_c warehouse_state invaccpct stockturnrate +------------- -------------- ------------ -------------- -------------- ---------------- ------------------- ----------------- ----------- --------------- +HUB_98QR 5101 9.4 93293 7279 249 6.6 Fair 91.3 3.31 +HUB_VC24 1825 52.3 45603 9050 151 nan Excellent 98.4 0.63 +HUB_0TQE 7553 79.7 2908 9396 395 nan Good 92.9 1.14 +... + + +CREATE TABLE "operations" ( +opsregistry character varying NOT NULL, +emerglevel USER-DEFINED NULL, +resp_phase text NULL, +opsstatus text NULL, +coordcenter text NULL, +ops_start_date date NULL, +estdurationdays bigint NULL, +priority_rank USER-DEFINED NULL, +resourceallocstate USER-DEFINED NULL, +supply_flow_state text NULL, + PRIMARY KEY (opsregistry) +); + +First 3 rows: +opsregistry emerglevel resp_phase opsstatus coordcenter ops_start_date estdurationdays priority_rank resourceallocstate supply_flow_state +------------- ------------ -------------- ------------ ------------- ---------------- ----------------- --------------- -------------------- ------------------- +OPS_WKU4V Black Reconstruction Completed CC7649 2025-01-26 12 High Limited Disrupted +OPS_UCBKX Black Recovery Completed CC6010 2025-02-08 362 Medium Limited Stable +OPS_4OUKN Orange Emergency Scaling Down CC3314 2025-02-17 291 Medium Critical Strained +... 
+ + +CREATE TABLE "financials" ( +financeregistry character varying NOT NULL, +findistref character varying NULL, +fin_ops_ref character varying NULL, +budgetallotusd text NULL, +funds_util_pct text NULL, +costbeneusd text NULL, +ops_costs_usd real NULL, +transportcostsusd bigint NULL, +storage_costs_usd real NULL, +personnelcostsusd real NULL, +funding_state text NULL, +donor_commitments_usd real NULL, +resource_gaps_usd bigint NULL, + PRIMARY KEY (financeregistry), + FOREIGN KEY (findistref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (fin_ops_ref) REFERENCES operations(opsregistry) +); + +First 3 rows: +financeregistry findistref fin_ops_ref budgetallotusd funds_util_pct costbeneusd ops_costs_usd transportcostsusd storage_costs_usd personnelcostsusd funding_state donor_commitments_usd resource_gaps_usd +----------------- ------------ ------------- ---------------- ---------------- ------------- --------------- ------------------- ------------------- ------------------- --------------- ----------------------- ------------------- +FIN_UJTF DIST_BI3UF OPS_WKU4V $4,227,090.00 9.8% $844.12 88256 976202 111548 364821 Critical nan 95367 +FIN_HB92 DIST_G6W29 OPS_UCBKX $3,625,344.00 35.0% $18.76 919777 77922 272650 470856 Adequate 4.06898e+06 442493 +FIN_YM8Z DIST_STZJD OPS_4OUKN $7,987,244.00 42.5% $837.72 594338 811935 492222 906025 Adequate 6.77819e+06 426146 +... + + +CREATE TABLE "beneficiariesandassessments" ( +beneregistry character varying NOT NULL, +bene_dist_ref character varying NULL, +beneopsref character varying NULL, +beneregister USER-DEFINED NULL, +vulnerability_review USER-DEFINED NULL, +needs_assess_status text NULL, +distequityidx real NULL, +bene_feedbackscore real NULL, +commengagelvl text NULL, +local_capacity_growth text NULL, + PRIMARY KEY (beneregistry), + FOREIGN KEY (bene_dist_ref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (beneopsref) REFERENCES operations(opsregistry) +); + +First 3 rows: +beneregistry bene_dist_ref beneopsref beneregister vulnerability_review needs_assess_status distequityidx bene_feedbackscore commengagelvl local_capacity_growth +-------------- --------------- ------------ -------------- ---------------------- --------------------- --------------- -------------------- --------------- ----------------------- +BENE_ZVHK DIST_BI3UF OPS_WKU4V Complete Due 0.54 nan High +BENE_UCVM DIST_G6W29 OPS_UCBKX Pending Complete Overdue 0.87 1.3 Low Limited +BENE_FG6D DIST_STZJD OPS_4OUKN Complete Pending Overdue 0.88 1.5 Medium Active +... 
+ + +CREATE TABLE "environmentandhealth" ( +envhealthregistry character varying NOT NULL, +env_dist_ref character varying NULL, +health_environment_profile jsonb NULL, + PRIMARY KEY (envhealthregistry), + FOREIGN KEY (env_dist_ref) REFERENCES disasterevents(distregistry) +); + +First 3 rows: +envhealthregistry env_dist_ref health_environment_profile +------------------- -------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +ENV_FUP5 DIST_BI3UF {'health': {'disease_risk': 'High', 'medical_capacity': 'Adequate', 'mental_health_aid': 'Limited', 'vaccination_coverage': 14.8}, 'environment': {'carbon_tons': 793.5, 'impact_rate': 'Low', 'recycling_pct': 68.4, 'waste_management': 'Adequate', 'sanitation_coverage': 53.5, 'water_quality_index': '85.6 WQI', 'renewable_energy_pct': 43.3}} +ENV_RNQP DIST_G6W29 {'health': {'disease_risk': 'High', 'medical_capacity': 'Critical', 'mental_health_aid': None, 'vaccination_coverage': 77.7}, 'environment': {'carbon_tons': None, 'impact_rate': 'Low', 'recycling_pct': 62.4, 'waste_management': 'Adequate', 'sanitation_coverage': 65.5, 'water_quality_index': '90.9 WQI', 'renewable_energy_pct': 27.5}} +ENV_D95Y DIST_STZJD {'health': {'disease_risk': 'Medium', 'medical_capacity': 'Adequate', 'mental_health_aid': 'Limited', 'vaccination_coverage': 24.6}, 'environment': {'carbon_tons': 270.2, 'impact_rate': 'High', 'recycling_pct': 18.1, 'waste_management': 'Adequate', 'sanitation_coverage': 6.6, 'water_quality_index': '11.7 WQI', 'renewable_energy_pct': 27.7}} +... 
+ + +CREATE TABLE "coordinationandevaluation" ( +coordevalregistry character varying NOT NULL, +coord_dist_ref character varying NULL, +coordopsref character varying NULL, +secincidentcount bigint NULL, +safety_ranking USER-DEFINED NULL, +accesslimitation text NULL, +coord_effect_lvl USER-DEFINED NULL, +partnerorgs text NULL, +infosharingstate USER-DEFINED NULL, +report_compliance real NULL, +dataqualityvalue bigint NULL, +monitoring_freq text NULL, +evaluationstage USER-DEFINED NULL, +lessons_learned_stage USER-DEFINED NULL, +contingencyplanstage USER-DEFINED NULL, +risk_mitigation_steps USER-DEFINED NULL, +insurance_scope text NULL, +compliancestate USER-DEFINED NULL, +audit_state USER-DEFINED NULL, +qualitycontrolsteps USER-DEFINED NULL, +stakeholder_satisf real NULL, +mediacoversentiment USER-DEFINED NULL, +public_perception text NULL, +documentation_state text NULL, +lessonsrecorded text NULL, +bestpracticeslisted text NULL, +improvement_recs text NULL, +next_review_date date NULL, +notes text NULL, + PRIMARY KEY (coordevalregistry), + FOREIGN KEY (coord_dist_ref) REFERENCES disasterevents(distregistry), + FOREIGN KEY (coordopsref) REFERENCES operations(opsregistry) +); + +First 3 rows: +coordevalregistry coord_dist_ref coordopsref secincidentcount safety_ranking accesslimitation coord_effect_lvl partnerorgs infosharingstate report_compliance dataqualityvalue monitoring_freq evaluationstage lessons_learned_stage contingencyplanstage risk_mitigation_steps insurance_scope compliancestate audit_state qualitycontrolsteps stakeholder_satisf mediacoversentiment public_perception documentation_state lessonsrecorded bestpracticeslisted improvement_recs next_review_date notes +------------------- ---------------- ------------- ------------------ ---------------- ------------------ ------------------ ------------- ------------------ ------------------- ------------------ ----------------- ----------------- ----------------------- ---------------------- ----------------------- ----------------- ----------------- ------------- --------------------- -------------------- --------------------- ------------------- --------------------- ----------------- --------------------- ------------------ ------------------ ------------------------- +COORD_LOM8 DIST_BI3UF OPS_WKU4V 83 Medium 9 Poor 91 4 Monthly Overdue In Progress Overdue Insufficient Full Partial Completed Moderate 4 Positive 3.5★ Partial 5 2 25 2025-04-22 +COORD_EAPL DIST_G6W29 OPS_UCBKX 45 Safe Low Limited 77.4 3 Daily Overdue Documented Due Insufficient Partial Partial Due Moderate 2.1 Positive 1.5★ Partial 22 4 16 2025-03-09 Rerouted to alternate hub +COORD_CB54 DIST_STZJD OPS_4OUKN 49 Moderate Severe Low 21 Limited 77.1 4 Monthly Due Pending Overdue Partial Full Partial Completed Strong 3.8 Neutral 2.1★ Incomplete 19 4 26 2025-07-05 Rerouted to alternate hub +... + + +CREATE TABLE "operation_hub_map" ( +opsregistry character varying NOT NULL, +hubregistry character varying NOT NULL, +hub_role text NULL, +allocated_cap_tons real NULL, + PRIMARY KEY (opsregistry, hubregistry), + FOREIGN KEY (opsregistry) REFERENCES operations(opsregistry), + FOREIGN KEY (hubregistry) REFERENCES distributionhubs(hubregistry) +); + +First 3 rows: +opsregistry hubregistry hub_role allocated_cap_tons +------------- ------------- ---------- -------------------- +OPS_19HN9 HUB_DRD0 overflow 51.65 +OPS_FCK43 HUB_G46G overflow 21.94 +OPS_1CVYJ HUB_N2JE backup 21.68 +... 
diff --git a/exchange_traded_funds/exchange_traded_funds_column_meaning_base.json b/exchange_traded_funds/exchange_traded_funds_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..e4f7422d3165171a380cc5dcda6bf2e6f1579245 --- /dev/null +++ b/exchange_traded_funds/exchange_traded_funds_column_meaning_base.json @@ -0,0 +1,205 @@ +{ + "exchange_traded_funds|families|famcode": "A SERIAL primary key uniquely identifying each fund family or asset management company in the database.", + "exchange_traded_funds|families|groupname": "Unique name of the fund family or asset management company (e.g., 'Vanguard', 'BlackRock', 'Fidelity'). Must be unique across all records.", + "exchange_traded_funds|exchanges|xchgnum": "A SERIAL primary key uniquely identifying each stock exchange or trading venue in the database.", + "exchange_traded_funds|exchanges|marketcode": "Unique abbreviated code for the exchange (e.g., 'NYSE', 'NASDAQ', 'LSE'). Must be unique across all records.", + "exchange_traded_funds|exchanges|tradingvenue": "Full official name of the stock exchange or trading venue where funds are listed and traded.", + "exchange_traded_funds|exchanges|localtime": "Local timezone identifier for the exchange's trading hours and time zone (e.g., 'EST', 'GMT', 'JST').", + "exchange_traded_funds|categories|catref": "A SERIAL primary key uniquely identifying each fund category or investment classification in the database.", + "exchange_traded_funds|categories|classtype": "Unique fund category classification (e.g., 'Large Cap Growth', 'International Equity', 'Bond Index', 'Sector Equity', 'Commodity'). Must be unique across all records.", + "exchange_traded_funds|sectors|secid": "A SERIAL primary key uniquely identifying each economic sector or industry classification in the database.", + "exchange_traded_funds|sectors|industrytag": "Unique sector or industry name (e.g., 'Technology', 'Healthcare', 'Financial Services', 'Energy', 'Consumer Defensive'). Must be unique across all records.", + "exchange_traded_funds|bond_ratings|ratekey": "A SERIAL primary key uniquely identifying each credit rating level for bond classifications in the database.", + "exchange_traded_funds|bond_ratings|creditmark": "Unique credit rating designation (e.g., 'AAA', 'AA', 'A', 'BBB', 'BB', 'B', 'Below B', 'US Government'). Must be unique across all records.", + "exchange_traded_funds|securities|securityref": "A SERIAL primary key uniquely identifying each individual security or holding instrument in the database.", + "exchange_traded_funds|securities|instrumentcode": "Unique ticker symbol or identifier for the security (e.g., 'AAPL', 'MSFT', 'GOOGL'). Must be unique across all records.", + "exchange_traded_funds|securities|securitylabel": "Full company name or description of the security (e.g., 'Apple Inc.', 'Microsoft Corporation').", + "exchange_traded_funds|funds|productnum": "A SERIAL primary key uniquely identifying each fund product in the database.", + "exchange_traded_funds|funds|tickersym": "Unique ticker symbol for the fund as traded on exchanges (e.g., 'SPY', 'VTI', 'QQQ'). Must be unique across all records.", + "exchange_traded_funds|funds|shortlabel": "Abbreviated display name of the fund for user interfaces and brief references. Contains NULL when short label is not available or fund uses only full description.", + "exchange_traded_funds|funds|fulldescription": "Complete official name and description of the fund including investment objective and strategy details. 
Contains NULL when detailed description is not available or pending.", + "exchange_traded_funds|funds|parentgroup": "Foreign key referencing families.GroupName, indicating which fund family or asset management company manages this fund.", + "exchange_traded_funds|funds|listingvenue": "Foreign key referencing exchanges.MarketCode, indicating the primary exchange where the fund is listed and traded.", + "exchange_traded_funds|funds|productclass": "Foreign key referencing categories.ClassType, indicating the fund's investment category classification.", + "exchange_traded_funds|funds|launchdate": "Date when the fund was first established and began operations (YYYY-MM-DD). Contains NULL when launch date is not available or fund is in pre-launch phase.", + "exchange_traded_funds|funds|strategynotes": "Detailed text description of the fund's investment strategy, objectives, and methodology. Contains NULL when strategy notes are not available or not provided by fund management.", + "exchange_traded_funds|family_categories|linkid": "A SERIAL primary key uniquely identifying each relationship between fund families and categories they offer.", + "exchange_traded_funds|family_categories|familylink": "Foreign key referencing families.GroupName, indicating which fund family offers products in this category.", + "exchange_traded_funds|family_categories|categorylink": "Foreign key referencing categories.ClassType, indicating which category the fund family operates in.", + "exchange_traded_funds|family_exchanges|connectref": "A SERIAL primary key uniquely identifying each relationship between fund families and exchanges they list on.", + "exchange_traded_funds|family_exchanges|familyref": "Foreign key referencing families.GroupName, indicating which fund family has listings on the exchange.", + "exchange_traded_funds|family_exchanges|exchangeref": "Foreign key referencing exchanges.MarketCode, indicating which exchange the fund family lists products on.", + "exchange_traded_funds|sector_allocations|allockey": "A SERIAL primary key uniquely identifying each sector allocation record for funds.", + "exchange_traded_funds|sector_allocations|productlink": "Foreign key referencing funds.TickerSym, indicating which fund has this sector allocation.", + "exchange_traded_funds|sector_allocations|sectorlink": "Foreign key referencing sectors.SecID, indicating which economic sector the allocation applies to.", + "exchange_traded_funds|sector_allocations|weightpct": "Percentage (0-1) of the fund's assets allocated to this specific sector, must sum to 1.0 across all sectors for each fund.", + "exchange_traded_funds|bond_allocations|bondallocid": "A SERIAL primary key uniquely identifying each bond credit rating allocation record for funds.", + "exchange_traded_funds|bond_allocations|fundlink": "Foreign key referencing funds.TickerSym, indicating which fund has this bond allocation.", + "exchange_traded_funds|bond_allocations|ratinglink": "Foreign key referencing bond_ratings.RateKey, indicating which credit rating category the allocation applies to.", + "exchange_traded_funds|bond_allocations|allocationpct": "Percentage (0-1) of the fund's bond holdings in this credit rating category.", + "exchange_traded_funds|holdings|holdref": "A SERIAL primary key uniquely identifying each individual security holding record within funds.", + "exchange_traded_funds|holdings|instrumentref": "Foreign key referencing funds.TickerSym, indicating which fund holds this security.", + "exchange_traded_funds|holdings|securityref": "Foreign 
key referencing securities.InstrumentCode, indicating which specific security is held.", + "exchange_traded_funds|holdings|holdingpct": "Percentage (0-1) of the fund's total assets represented by this individual security holding.", + "exchange_traded_funds|holdings|positionrank": "Ranking of this holding within the fund's portfolio (1 = largest holding, 2 = second largest, etc.). Contains NULL when position ranking is not available or not tracked.", + "exchange_traded_funds|performance|perfid": "A SERIAL primary key uniquely identifying each fund performance record.", + "exchange_traded_funds|performance|productref": "Foreign key referencing funds.TickerSym, ensuring 1:1 relationship for performance data per fund.", + "exchange_traded_funds|performance|reportdate": "Date when the performance data was calculated or reported (YYYY-MM-DD). Contains NULL when report date is not available.", + "exchange_traded_funds|annual_returns|yearlyid": "A SERIAL primary key uniquely identifying each annual return record for funds by year.", + "exchange_traded_funds|annual_returns|portfolioref": "Foreign key referencing funds.TickerSym, indicating which fund the annual return data belongs to.", + "exchange_traded_funds|annual_returns|calendaryear": "Calendar year (YYYY) for which the return performance is recorded.", + "exchange_traded_funds|annual_returns|fundperf": "Fund's return performance for the specific calendar year as decimal (-1 to positive). Contains NULL when fund performance data is not available for that year (e.g., fund not yet launched).", + "exchange_traded_funds|annual_returns|categoryperf": "Category or benchmark average return performance for the same calendar year for comparison. Contains NULL when category performance data is not available for comparison.", + "exchange_traded_funds|risk_metrics|riskid": "A SERIAL primary key uniquely identifying each risk metrics record for funds.", + "exchange_traded_funds|risk_metrics|investmentref": "Foreign key referencing funds.TickerSym, ensuring 1:1 relationship for risk analysis per fund.", + "exchange_traded_funds|funds|fundclass": { + "column_meaning": "JSONB column. Consolidates fund classification and strategy information including geographic focus, investment strategy, market cap focus, and basic fund characteristics.", + "fields_meaning": { + "GeoZone_Class": "Geographic region or market focus of the fund (e.g., 'US', 'International', 'Emerging Markets', 'Global'). Contains NULL when geographic focus is not specified or fund has global diversification without specific regional focus.", + "Strategy_Type": "Investment strategy classification (e.g., 'Index', 'Active', 'Passive', 'Smart Beta') describing the fund's management approach. Contains NULL when strategy type is not classified or is proprietary/unique.", + "Cap_Size": "Market capitalization focus (e.g., 'Large Cap', 'Mid Cap', 'Small Cap', 'Multi Cap') indicating the size of companies the fund invests in. Contains NULL when fund does not focus on specific market cap sizes or invests in non-equity assets.", + "Quote_Mode": "Trading quote type or mechanism (e.g., 'NAV', 'Market', 'Real-time') indicating how the fund is priced and quoted. Contains NULL when quote mode is not specified or uses non-standard pricing mechanisms.", + "Currency_Base": "Base currency in which the fund is denominated and reports net asset value (e.g., 'USD', 'EUR', 'GBP'). Contains NULL when currency information is not available." 
+ } + }, + "exchange_traded_funds|funds|fundmetrics": { + "column_meaning": "JSONB column. Groups fundamental financial metrics and operational data including assets under management, yield rates, expense ratios, and turnover statistics.", + "fields_meaning": { + "Net_Worth": "Total net assets under management (AUM) for the fund in the base currency, representing the fund's total market value. Contains NULL when AUM data is not available or not reported.", + "Yield_Rate": "Current dividend yield percentage (0-1) that the fund distributes to shareholders annually. Contains NULL when fund does not pay dividends or yield data is not available.", + "Turnover_Ratio": "Annual portfolio turnover ratio (0-1) indicating how frequently the fund's holdings are bought and sold within a year. Contains NULL when turnover data is not available or not applicable (e.g., for newly launched funds).", + "Expense_Net": "Net expense ratio (0-1) representing the annual fee charged to investors as a percentage of assets under management. Contains NULL when expense ratio is not yet determined or not available.", + "Benchmark_Exp": "Benchmark or category average expense ratio for comparison with similar funds in the same category. Contains NULL when benchmark data is not available for comparison." + } + }, + "exchange_traded_funds|funds|tradingdata": { + "column_meaning": "JSONB column. Aggregates trading and market data including volume metrics and moving averages for technical analysis and liquidity assessment.", + "fields_meaning": { + "volume_metrics": { + "Vol_3M": "Average daily trading volume over the past 3 months, indicating liquidity and investor interest. Contains NULL when volume data is not available or fund has insufficient trading history.", + "Vol_Recent": "Recent average daily trading volume over a shorter time period (typically 10 days). Contains NULL when recent volume data is not available." + }, + "moving_averages": { + "MA_50": "50-day moving average price of the fund, used for technical analysis and trend identification. Contains NULL when insufficient price history exists for calculation.", + "MA_200": "200-day moving average price of the fund, used for long-term trend analysis. Contains NULL when fund has less than 200 days of trading history." + } + } + }, + "exchange_traded_funds|funds|allocweights": { + "column_meaning": "JSONB column. Contains asset allocation percentages and portfolio composition data including equity, bond weights and bond characteristics for portfolio analysis.", + "fields_meaning": { + "asset_allocation": { + "Equity_Weight": "Percentage (0-1) of the fund's assets allocated to equity securities (stocks). Contains NULL when equity allocation data is not available or not applicable for fund type.", + "Bond_Weight": "Percentage (0-1) of the fund's assets allocated to fixed-income securities (bonds). Contains NULL when bond allocation data is not available or fund does not invest in bonds." + }, + "bond_characteristics": { + "Avg_Maturity": "Average maturity in years of the fund's bond holdings, applicable for fixed-income funds. Contains NULL when fund does not hold bonds or maturity data is not available.", + "Duration_Yrs": "Average duration in years of the fund's bond holdings, measuring interest rate sensitivity. Contains NULL when fund does not hold bonds or duration data is not available." + } + } + }, + "exchange_traded_funds|funds|valuationratios": { + "column_meaning": "JSONB column. 
Consolidates fundamental valuation ratios for equity holdings including price-to-book, price-to-earnings, price-to-cash-flow, and price-to-sales ratios.", + "fields_meaning": { + "valuation_metrics": { + "PB_Ratio": "Weighted average price-to-book ratio of the fund's equity holdings, indicating valuation characteristics. Contains NULL when P/B ratio is not available or not applicable for fund's holdings.", + "PCF_Ratio": "Weighted average price-to-cash-flow ratio of the fund's equity holdings. Contains NULL when P/CF ratio is not available or not applicable.", + "PE_Ratio": "Weighted average price-to-earnings ratio of the fund's equity holdings. Contains NULL when P/E ratio is not available or holdings have negative earnings.", + "PS_Ratio": "Weighted average price-to-sales ratio of the fund's equity holdings. Contains NULL when P/S ratio is not available or not applicable." + } + } + }, + "exchange_traded_funds|performance|pricerange52w": { + "column_meaning": "JSONB column. Groups 52-week price range data including highs, lows, deltas, and percentage changes for price movement analysis and volatility assessment.", + "fields_meaning": { + "high_metrics": { + "High_52W": "Highest price reached by the fund during the past 52 weeks. Contains NULL when fund has less than 52 weeks of trading history.", + "High_Delta": "Absolute price difference between current price and 52-week high. Contains NULL when 52-week high is not available.", + "High_Delta_Pct": "Percentage difference between current price and 52-week high, expressed as decimal (-1 to 0). Contains NULL when 52-week high is not available." + }, + "low_metrics": { + "Low_52W": "Lowest price reached by the fund during the past 52 weeks. Contains NULL when fund has less than 52 weeks of trading history.", + "Low_Delta": "Absolute price difference between current price and 52-week low. Contains NULL when 52-week low is not available.", + "Low_Delta_Pct": "Percentage difference between current price and 52-week low, expressed as decimal (0 to positive). Contains NULL when 52-week low is not available." + }, + "range_metrics": { + "Range_Move": "Absolute price movement range between 52-week high and low. Contains NULL when 52-week range data is not available.", + "Range_Move_Pct": "Percentage movement range between 52-week high and low positions. Contains NULL when 52-week range data is not available." + } + } + }, + "exchange_traded_funds|performance|returnmetrics": { + "column_meaning": "JSONB column. Aggregates fund performance returns across different time periods including fund and benchmark returns for comprehensive performance comparison.", + "fields_meaning": { + "fund_returns": { + "Return_YTD": "Fund's year-to-date return performance as decimal (-1 to positive), calculated from January 1st to report date. Contains NULL when YTD data is not available.", + "Return_1M": "Fund's 1-month return performance as decimal, measuring short-term performance. Contains NULL when fund has less than 1 month of history.", + "Return_3M": "Fund's 3-month return performance as decimal, measuring quarterly performance. Contains NULL when fund has less than 3 months of history.", + "Return_1Y": "Fund's 1-year return performance as decimal, measuring annual performance. Contains NULL when fund has less than 1 year of history.", + "Return_3Y": "Fund's annualized 3-year return performance as decimal, measuring medium-term performance. 
Contains NULL when fund has less than 3 years of history.", + "Return_5Y": "Fund's annualized 5-year return performance as decimal, measuring long-term performance. Contains NULL when fund has less than 5 years of history.", + "Return_10Y": "Fund's annualized 10-year return performance as decimal, measuring very long-term performance. Contains NULL when fund has less than 10 years of history." + }, + "benchmark_returns": { + "Bench_Return_YTD": "Benchmark or category average year-to-date return for comparison with fund performance. Contains NULL when benchmark data is not available.", + "Bench_Return_1M": "Benchmark or category average 1-month return for performance comparison. Contains NULL when benchmark data is not available.", + "Bench_Return_3M": "Benchmark or category average 3-month return for performance comparison. Contains NULL when benchmark data is not available.", + "Bench_Return_1Y": "Benchmark or category average 1-year return for performance comparison. Contains NULL when benchmark data is not available.", + "Bench_Return_3Y": "Benchmark or category average annualized 3-year return for performance comparison. Contains NULL when benchmark data is not available.", + "Bench_Return_5Y": "Benchmark or category average annualized 5-year return for performance comparison. Contains NULL when benchmark data is not available.", + "Bench_Return_10Y": "Benchmark or category average annualized 10-year return for performance comparison. Contains NULL when benchmark data is not available." + } + } + }, + "exchange_traded_funds|performance|histstats": { + "column_meaning": "JSONB column. Consolidates historical performance statistics including positive/negative year counts and top holdings information for fund analysis.", + "fields_meaning": { + "Positive_Years": "Number of calendar years with positive returns in the fund's history, indicating consistency. Contains NULL when fund has insufficient history or annual return data is not available.", + "Negative_Years": "Number of calendar years with negative returns in the fund's history, indicating volatility periods. Contains NULL when fund has insufficient history or annual return data is not available.", + "Top_Holdings": "Comma-separated list or description of the fund's largest security holdings for transparency. Contains NULL when holdings data is not available or not disclosed.", + "Top_Weight": "Percentage (0-1) of total assets represented by the single largest holding in the fund. Contains NULL when holdings data is not available or not disclosed." + } + }, + "exchange_traded_funds|risk_metrics|risk3y": { + "column_meaning": "JSONB column. Groups 3-year risk and performance metrics including alpha, beta, returns, volatility, and risk-adjusted ratios for short-term risk analysis.", + "fields_meaning": { + "risk_measures_3y": { + "Alpha_3Y": "3-year alpha coefficient measuring the fund's excess return compared to its benchmark, indicating manager skill. Contains NULL when fund has less than 3 years of history or benchmark data is not available.", + "Beta_3Y": "3-year beta coefficient measuring the fund's sensitivity to market movements (1.0 = same as market). Contains NULL when fund has less than 3 years of history or market correlation cannot be calculated.", + "Avg_Return_3Y": "Average annualized return over 3 years as decimal, used for risk-adjusted performance calculations. 
Contains NULL when fund has less than 3 years of history.", + "R_Squared_3Y": "3-year R-squared statistic (0-1) measuring how closely the fund's performance correlates with its benchmark. Contains NULL when fund has less than 3 years of history or benchmark data is not available.", + "Volatility_3Y": "3-year standard deviation of returns measuring the fund's price volatility and risk level. Contains NULL when fund has less than 3 years of history.", + "Sharpe_Ratio_3Y": "3-year Sharpe ratio measuring risk-adjusted return per unit of volatility (higher is better). Contains NULL when fund has less than 3 years of history or risk-free rate data is not available.", + "Treynor_Ratio_3Y": "3-year Treynor ratio measuring risk-adjusted return per unit of systematic risk (beta). Contains NULL when fund has less than 3 years of history or beta cannot be calculated." + } + } + }, + "exchange_traded_funds|risk_metrics|risk5y": { + "column_meaning": "JSONB column. Groups 5-year risk and performance metrics including alpha, beta, returns, volatility, and risk-adjusted ratios for medium-term risk analysis.", + "fields_meaning": { + "risk_measures_5y": { + "Alpha_5Y": "5-year alpha coefficient measuring the fund's excess return compared to its benchmark over medium term. Contains NULL when fund has less than 5 years of history or benchmark data is not available.", + "Beta_5Y": "5-year beta coefficient measuring the fund's market sensitivity over medium term. Contains NULL when fund has less than 5 years of history or market correlation cannot be calculated.", + "Avg_Return_5Y": "Average annualized return over 5 years as decimal for medium-term risk analysis. Contains NULL when fund has less than 5 years of history.", + "R_Squared_5Y": "5-year R-squared statistic measuring medium-term correlation with benchmark. Contains NULL when fund has less than 5 years of history or benchmark data is not available.", + "Volatility_5Y": "5-year standard deviation measuring medium-term volatility and risk characteristics. Contains NULL when fund has less than 5 years of history.", + "Sharpe_Ratio_5Y": "5-year Sharpe ratio for medium-term risk-adjusted performance evaluation. Contains NULL when fund has less than 5 years of history or risk-free rate data is not available.", + "Treynor_Ratio_5Y": "5-year Treynor ratio for medium-term systematic risk-adjusted performance. Contains NULL when fund has less than 5 years of history or beta cannot be calculated." + } + } + }, + "exchange_traded_funds|risk_metrics|risk10y": { + "column_meaning": "JSONB column. Groups 10-year risk and performance metrics including alpha, beta, returns, volatility, and risk-adjusted ratios for long-term risk analysis.", + "fields_meaning": { + "risk_measures_10y": { + "Alpha_10Y": "10-year alpha coefficient measuring long-term excess return and manager performance. Contains NULL when fund has less than 10 years of history or benchmark data is not available.", + "Beta_10Y": "10-year beta coefficient measuring long-term market sensitivity and systematic risk. Contains NULL when fund has less than 10 years of history or market correlation cannot be calculated.", + "Avg_Return_10Y": "Average annualized return over 10 years as decimal for long-term risk analysis. Contains NULL when fund has less than 10 years of history.", + "R_Squared_10Y": "10-year R-squared statistic measuring long-term correlation with benchmark performance. 
Contains NULL when fund has less than 10 years of history or benchmark data is not available.", + "Volatility_10Y": "10-year standard deviation measuring long-term volatility and risk profile. Contains NULL when fund has less than 10 years of history.", + "Sharpe_Ratio_10Y": "10-year Sharpe ratio for comprehensive long-term risk-adjusted performance evaluation. Contains NULL when fund has less than 10 years of history or risk-free rate data is not available.", + "Treynor_Ratio_10Y": "10-year Treynor ratio for long-term systematic risk-adjusted return measurement. Contains NULL when fund has less than 10 years of history or beta cannot be calculated." + } + } + } +} \ No newline at end of file diff --git a/exchange_traded_funds/exchange_traded_funds_kb.jsonl b/exchange_traded_funds/exchange_traded_funds_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..5e25d8b93c73f1f13ffce1be4aaa5baf081901a7 --- /dev/null +++ b/exchange_traded_funds/exchange_traded_funds_kb.jsonl @@ -0,0 +1,89 @@ +{"id": 0, "knowledge": "Annual Fund Outperformance", "description": "Calculates the excess return of a fund compared to its category benchmark for a given year.", "definition": "\text{Annual Fund Outperformance} = \text{Fund's annual return} - \text{Category's average annual return}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Relative Expense Ratio", "description": "Measures the difference between a fund's net expense ratio and its benchmark's expense ratio.", "definition": "\text{Relative Expense} = \text{Fund's net expense ratio} - \text{Benchmark's expense ratio}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Return on Cost (ROC)", "description": "Assesses the fund's one-year performance relative to its cost, indicating how much return is generated per unit of expense.", "definition": "ROC = \frac{\text{Fund's 1-year return}}{\text{Net expense ratio}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Yield-to-Expense Ratio (YTER)", "description": "Evaluates an income-generating fund's efficiency by comparing its dividend yield to its net expense ratio.", "definition": "YTER = \frac{\text{Fund's yield rate}}{\text{Net expense ratio}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Price Position in 52-Week Range", "description": "Calculates the current price's position within its 52-week high-low range as a percentage.", "definition": "\text{Position} = \frac{\text{Recent price} - \text{52-week low price}}{\text{52-week high price} - \text{52-week low price}} \times 100", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Short-Term Momentum Indicator", "description": "A technical indicator that signals short-term trend strength by comparing the 50-day and 200-day moving averages.", "definition": "\text{Momentum} = \text{50-day moving average} - \text{200-day moving average}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Information Ratio (Simplified)", "description": "Measures a fund's risk-adjusted excess return over its benchmark, using volatility as the measure of risk.", "definition": "IR = \frac{\text{Fund's 3-year average return} - \text{Benchmark's 3-year average return}}{\text{Fund's 3-year volatility}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Positive Return Consistency", "description": "Calculates the 
percentage of a fund's historical years that have yielded positive returns.", "definition": "\text{Consistency} = \frac{\text{Number of years with positive returns}}{\text{Number of years with positive returns} + \text{Number of years with negative returns}} \times 100", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Cost-Adjusted Annual Outperformance", "description": "Evaluates a fund's performance by considering both its excess return over its category and its cost relative to its benchmark.", "definition": "A fund's performance is adjusted by combining its 'Annual Fund Outperformance' with its 'Relative Expense Ratio'. A higher positive outperformance and a lower (negative) relative expense are desirable.", "type": "calculation_knowledge", "children_knowledge": [0, 1]} +{"id": 9, "knowledge": "Total Return Fund", "description": "Defines a fund that aims to provide both capital appreciation and income through dividends.", "definition": "A fund is classified as a Total Return Fund if it has both a non-zero allocation to stocks and a non-zero dividend yield.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Alpha Generator", "description": "A fund that has demonstrated an ability to outperform its benchmark on a risk-adjusted basis.", "definition": "A fund is considered an Alpha Generator if its 5-year alpha is greater than 0.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Market-Tracking Fund", "description": "A fund whose performance is highly correlated with its market benchmark.", "definition": "A fund is considered a Market-Tracking Fund if its 3-year R-squared value is greater than 90, indicating a strong correlation with its benchmark.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Low-Turnover Strategy", "description": "An investment strategy characterized by infrequent trading of portfolio holdings, often associated with long-term, passive, or buy-and-hold approaches.", "definition": "A fund employs a Low-Turnover Strategy if its annual portfolio turnover is less than 30%.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "High-Conviction Portfolio", "description": "A portfolio where the manager allocates a significant portion of assets to a small number of their best ideas.", "definition": "A fund is considered to have a High-Conviction Portfolio if the weight of its single largest holding is greater than 8%.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Efficient Income Generator", "description": "A fund that provides a high dividend yield relative to its management cost.", "definition": "A fund is classified as an Efficient Income Generator if its 'Yield-to-Expense Ratio (YTER)' is greater than 15.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 15, "knowledge": "Consistent Outperformer", "description": "A fund that not only generates alpha but also consistently delivers positive annual returns.", "definition": "A fund is a Consistent Outperformer if it is an 'Alpha Generator' and has a 'Positive Return Consistency' score greater than 80.", "type": "domain_knowledge", "children_knowledge": [10, 7]} +{"id": 16, "knowledge": "Passive Alpha Generator", "description": "A rare type of fund that closely tracks a market benchmark but still manages to produce positive alpha.", "definition": "A fund is a Passive Alpha Generator if it is both a 
'Market-Tracking Fund' and an 'Alpha Generator'.", "type": "domain_knowledge", "children_knowledge": [11, 10]} +{"id": 17, "knowledge": "Golden Cross Signal", "description": "A bullish technical signal indicating potential for a major rally, based on moving average trends.", "definition": "A Golden Cross Signal occurs for a fund when its 'Short-Term Momentum Indicator' is positive, suggesting its short-term average price has crossed above its long-term average price.", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 18, "knowledge": "Geographic Focus", "description": "Classification of funds based on their primary geographic area of investment.", "definition": "Funds are categorized by their geographic scope, such as 'UNITED_STATES', 'International', 'Global', 'Pacific/Asia ex-Japan Stk', and 'Emerging Markets'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "High-Quality Credit Portfolio", "description": "A bond fund that primarily holds securities with very low credit risk.", "definition": "A fund is defined as having a High-Quality Credit Portfolio if the sum of its allocations to government, AAA, and AA rated bonds exceeds 60% of its total bond holdings.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "Yield Rate", "description": "Illustrates the fund's annual dividend yield.", "definition": "Represents the annual dividend per share as a percentage of the share's price. A value of 0.02 signifies a 2% yield. This is a crucial metric for income-focused investors.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "Turnover Ratio", "description": "Illustrates the fund's portfolio turnover.", "definition": "Measures how frequently assets within a fund are bought and sold. A ratio of 1.0 (100%) means the fund replaces its entire portfolio once per year. A low value (<0.3) suggests a buy-and-hold strategy, while a high value (>1.0) indicates active trading.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Beta", "description": "Illustrates the fund's market sensitivity (Beta).", "definition": "Measures a fund's volatility in relation to the overall market. Beta > 1 indicates the fund is more volatile than the market. Beta < 1 indicates it is less volatile. Beta = 1 implies its movement matches the market.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "R-Squared", "description": "Illustrates the fund's correlation to its benchmark (R-squared).", "definition": "Represents the percentage of a fund's movements that can be explained by movements in its benchmark index. A value of 95 means that 95% of the fund's performance is attributable to the benchmark's performance.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Sharpe Ratio", "description": "Illustrates the fund's risk-adjusted return (Sharpe Ratio).", "definition": "Measures the fund's excess return per unit of total risk (volatility). A higher Sharpe Ratio is better. A ratio > 1 is generally considered good, > 2 is very good, and > 3 is excellent.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Duration", "description": "Illustrates a bond fund's interest rate sensitivity (Duration).", "definition": "Measures how much a bond fund's price is likely to change for every 1% change in interest rates. 
A duration of 7 years means the fund's price will likely fall by about 7% if interest rates rise by 1%.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Net Expense Ratio", "description": "Illustrates the fund's net annual cost (Expense Ratio).", "definition": "The annual fee charged to investors as a percentage of assets. A value of 0.005 corresponds to a 0.5% annual fee. Lower is generally better, as costs directly reduce returns.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Equity Weight", "description": "Illustrates the fund's allocation to stocks.", "definition": "The percentage of the fund's assets invested in equities (stocks). A value of 0.9 indicates that 90% of the fund is invested in stocks, suggesting a growth-oriented strategy.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Sector Weight", "description": "Illustrates the fund's concentration in a specific economic sector.", "definition": "The percentage of a fund's assets invested in a particular sector, like 'technology' or 'healthcare'. A high value, such as 0.4 (40%), indicates a significant bet on that sector's performance.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Credit Quality", "description": "Illustrates the credit quality of bond holdings.", "definition": "Indicates the creditworthiness of the bonds a fund holds. 'Government' is the highest quality. 'AAA' and 'AA' are considered high-grade investment quality. Ratings below 'BBB' are considered speculative or high-yield.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Performance-Cost Efficiency Score", "description": "Calculates a fund's efficiency by measuring its annual outperformance relative to its cost compared to peers.", "definition": "\text{PCES} = \frac{\text{Annual Fund Outperformance}}{\text{Relative Expense Ratio}}", "type": "calculation_knowledge", "children_knowledge": [0, 1]} +{"id": 31, "knowledge": "Momentum-Weighted Price Strength", "description": "A composite technical indicator that scores a fund based on its current price position, giving more weight to funds with stronger upward momentum.", "definition": "\text{MWPS} = \text{Price Position} \times (1 + \frac{\text{Momentum}}{\text{200-Day Moving Average}})", "type": "calculation_knowledge", "children_knowledge": [4, 5]} +{"id": 32, "knowledge": "Consistency-Adjusted Information Ratio", "description": "Refines the Information Ratio by factoring in the historical consistency of a fund's positive returns.", "definition": "\text{CAIR} = \text{Information Ratio} \times \frac{\text{Positive Return Consistency}}{100}", "type": "calculation_knowledge", "children_knowledge": [6, 7]} +{"id": 33, "knowledge": "Total Value Score", "description": "A holistic performance score combining risk-adjusted returns, cost-efficiency, and historical consistency.", "definition": "\text{TVS} = \\sqrt[3]{\text{Return on Cost} \times \text{Information Ratio} \times \text{Positive Return Consistency}}", "type": "calculation_knowledge", "children_knowledge": [2, 6, 7]} +{"id": 34, "knowledge": "Active Manager Value", "description": "Quantifies the net value an active manager provides by subtracting the fund's relative cost from its demonstrated ability to generate risk-adjusted excess returns.", "definition": "\text{AMV} = \text{Information Ratio} - (\text{Relative Expense Ratio} \times 10)", "type": "calculation_knowledge", 
"children_knowledge": [6, 1]} +{"id": 35, "knowledge": "Quality-Income Score", "description": "A metric that assesses income-generating funds on both the efficiency and the credit quality of their yield.", "definition": "This score is the 'Yield-to-Expense Ratio' for a fund, but only applies if the fund also qualifies as a 'High-Quality Credit Portfolio'.", "type": "calculation_knowledge", "children_knowledge": [3, 19]} +{"id": 36, "knowledge": "Composite Momentum Strength", "description": "A score that confirms a bullish technical trend by combining a positive momentum signal with the fund's price strength.", "definition": "\text{CMS} = \text{Short-Term Momentum Indicator} \times \text{Price Position in 52-Week Range}", "type": "calculation_knowledge", "children_knowledge": [5, 4]} +{"id": 37, "knowledge": "Net Yield Advantage", "description": "Calculates a fund's final yield advantage or disadvantage after accounting for its cost relative to peers.", "definition": "\text{NYA} = \text{Yield Rate} - \text{Relative Expense Ratio}", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 38, "knowledge": "Holistic Outperformance Metric", "description": "A comprehensive metric that blends a fund's raw outperformance with its risk-adjusted, cost-adjusted performance.", "definition": "\text{HOM} = \frac{\text{Annual Fund Outperformance} + \text{Cost-Adjusted Annual Outperformance}}{2}", "type": "calculation_knowledge", "children_knowledge": [0, 8]} +{"id": 39, "knowledge": "Risk-Return Efficiency Index", "description": "An index that evaluates how effectively a fund translates risk (volatility) into returns, adjusted for costs.", "definition": "\text{RREI} = \frac{\text{Return on Cost}}{\text{3-Year Volatility}}", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 40, "knowledge": "Elite Active Manager", "description": "Identifies a top-tier fund manager who generates consistent, risk-adjusted outperformance through a high-conviction, concentrated portfolio.", "definition": "A fund is run by an Elite Active Manager if it qualifies as a 'Consistent Outperformer' and maintains a 'High-Conviction Portfolio'.", "type": "domain_knowledge", "children_knowledge": [15, 13]} +{"id": 41, "knowledge": "Ideal Index Fund", "description": "Defines a fund that perfectly embodies the principles of passive investing: low cost, low trading, and tight benchmark tracking.", "definition": "A fund is an Ideal Index Fund if it is a 'Market-Tracking Fund', employs a 'Low-Turnover Strategy', and has a negative 'Relative Expense Ratio'.", "type": "domain_knowledge", "children_knowledge": [11, 12, 1]} +{"id": 42, "knowledge": "Premier Income Fund", "description": "Identifies a superior income-focused fund that is not only efficient in generating yield but also prioritizes the safety of its underlying bond holdings.", "definition": "A fund is a Premier Income Fund if it is both an 'Efficient Income Generator' and has a 'High-Quality Credit Portfolio'.", "type": "domain_knowledge", "children_knowledge": [14, 19]} +{"id": 43, "knowledge": "Rebound Prospect", "description": "A fund that has recently underperformed but is showing strong technical signs of a potential turnaround in its price trend.", "definition": "A fund is a Rebound Prospect if its 'Annual Fund Outperformance' for the last reported year is negative, but it is currently showing a 'Golden Cross Signal'.", "type": "domain_knowledge", "children_knowledge": [0, 17]} +{"id": 44, "knowledge": "High-Fee Strategic Bet", "description": 
"Classifies a fund as an expensive, actively-managed portfolio that deviates significantly from market benchmarks, representing a pure play on manager skill.", "definition": "A fund is a High-Fee Strategic Bet if its 'Relative Expense Ratio' is positive and it does not qualify as a 'Market-Tracking Fund'.", "type": "domain_knowledge", "children_knowledge": [1, 11]} +{"id": 45, "knowledge": "Focused Alpha Leader", "description": "An actively managed fund that successfully generates excess returns by taking concentrated bets on its best ideas.", "definition": "A fund is a Focused Alpha Leader if it is classified as both an 'Alpha Generator' and a 'High-Conviction Portfolio'.", "type": "domain_knowledge", "children_knowledge": [10, 13]} +{"id": 46, "knowledge": "Reliable Core Holding", "description": "Identifies a fund suitable as a core portfolio holding due to its history of steady returns and its balanced approach to providing both growth and income.", "definition": "A fund is a Reliable Core Holding if it is a 'Consistent Outperformer' and is also structured as a 'Total Return Fund'.", "type": "domain_knowledge", "children_knowledge": [15, 9]} +{"id": 47, "knowledge": "Contrarian Value Play", "description": "A fund that is currently out of favor with the market but is managed with a patient, low-cost, long-term strategy, making it a potential value investment.", "definition": "A fund is a Contrarian Value Play if its Price Position in 52-Week Range is below 25%, it follows a Low-Turnover Strategy, and its Relative Expense Ratio is negative.", "type": "domain_knowledge", "children_knowledge": [4, 12, 1]} +{"id": 48, "knowledge": "Global Alpha Specialist", "description": "An investment fund that specializes in a non-US market and has demonstrated a skillful ability to outperform its relevant benchmark.", "definition": "A fund is a Global Alpha Specialist if it is an 'Alpha Generator' and its 'Geographic Focus' is anything other than 'UNITED_STATES' or 'US'.", "type": "domain_knowledge", "children_knowledge": [10, 18]} +{"id": 49, "knowledge": "High-Cost Underperformer", "description": "Flags a fund that is both more expensive than its peers and has failed to outperform its category, representing poor value for investors.", "definition": "A fund is a High-Cost Underperformer if it has a positive 'Relative Expense Ratio' and a negative 'Annual Fund Outperformance'.", "type": "domain_knowledge", "children_knowledge": [1, 0]} +{"id": 50, "knowledge": "Appraisal Ratio", "description": "Measures a fund manager's skill in stock selection by calculating the alpha generated per unit of specific, unsystematic risk taken.", "definition": "\text{Appraisal Ratio} = \frac{\text{3-Year Alpha}}{\text{Unsystematic Risk}}, \text{ where Unsystematic Risk} = \text{3-Year Volatility} \times \\sqrt{1 - \text{3-Year R-Squared}}", "type": "calculation_knowledge", "children_knowledge": [10]} +{"id": 51, "knowledge": "Composite Valuation Score", "description": "Calculates a single score representing a fund's valuation attractiveness by averaging the inverted values of its key price-to-metric ratios.", "definition": "CVS = \frac{1}{4} \times (\frac{1}{\text{P/E Ratio}} + \frac{1}{\text{P/S Ratio}} + \frac{1}{\text{P/B Ratio}} + \frac{1}{\text{P/CF Ratio}})", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 52, "knowledge": "Active Management Footprint", "description": "Quantifies the degree of a fund's active management by measuring its deviation from the benchmark relative to its trading 
activity.", "definition": "AMF = \frac{1 - \text{3-Year R-Squared}}{\text{Turnover Ratio}}", "type": "calculation_knowledge", "children_knowledge": [11, 12]} +{"id": 53, "knowledge": "Cost-Adjusted Alpha", "description": "Calculates the fund's alpha after penalizing it for having higher expenses than its benchmark, revealing the true value added by the manager.", "definition": "CAA = \text{3-Year Alpha} - \text{Relative Expense Ratio}", "type": "calculation_knowledge", "children_knowledge": [1, 10]} +{"id": 54, "knowledge": "Momentum-Adjusted Information Ratio", "description": "A dynamic version of the Information Ratio that is enhanced by the fund's current price momentum, rewarding funds that are outperforming on a risk-adjusted basis and are also in a strong uptrend.", "definition": "MAIR = \text{Information Ratio (Simplified)} \times (1 + \frac{\text{Short-Term Momentum Indicator}}{\text{200-Day Moving Average}})", "type": "calculation_knowledge", "children_knowledge": [5, 6]} +{"id": 55, "knowledge": "Secure Income Efficiency Score", "description": "A composite score for income funds that measures yield-generating efficiency while heavily weighting for the credit safety of the underlying assets.", "definition": "SIES = \text{Yield-to-Expense Ratio} \times (\text{Allocation to High-Quality Credit})", "type": "calculation_knowledge", "children_knowledge": [3, 19]} +{"id": 56, "knowledge": "Manager Skill Ratio", "description": "Measures the amount of alpha a fund manager generates for each dollar of fee charged to investors.", "definition": "MSR = \frac{\text{3-Year Alpha}}{\text{Net Expense Ratio}}", "type": "calculation_knowledge", "children_knowledge": [10]} +{"id": 57, "knowledge": "Capital Preservation Index", "description": "Scores a fund on its ability to protect capital by combining its history of avoiding down years with its ability to stay above its 52-week lows.", "definition": "CPI = (\frac{1}{1 + \text{Negative Years}}) \times (1 - \frac{\text{52-Week Low} - \text{Current Price}}{\text{52-Week Low}})", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 58, "knowledge": "Portfolio Liquidity Pressure", "description": "Estimates the potential market impact of a fund's trading activity by comparing its annual turnover to its average daily trading volume.", "definition": "PLP = \frac{\text{Net Worth} \times \text{Turnover Ratio}}{\text{Average Daily Volume (3M) } \times 252}", "type": "calculation_knowledge", "children_knowledge": [12]} +{"id": 59, "knowledge": "Growth-Value Spectrum Score", "description": "A quantitative factor score that places a fund on a spectrum from deep value to high growth based on the interplay of its Price-to-Earnings and Price-to-Book ratios.", "definition": "\text{GVS Score} = \\ln(\frac{\text{P/E Ratio}}{\text{P/B Ratio}})", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 60, "knowledge": "Closet Indexer", "description": "A fund that is marketed as an active fund and charges higher fees, but its portfolio holdings and performance closely track a benchmark index.", "definition": "A fund is identified as a Closet Indexer if it is a 'Market-Tracking Fund' but has a positive 'Relative Expense Ratio'.", "type": "domain_knowledge", "children_knowledge": [1, 11]} +{"id": 61, "knowledge": "Strategic Beta Fund", "description": "An investment fund that occupies the middle ground between passive and active management, using a rules-based system to target specific factors or market segments beyond simple market-cap 
weighting.", "definition": "A fund is classified as Strategic Beta if it employs a 'Low-Turnover Strategy' but is explicitly not a 'Market-Tracking Fund'.", "type": "domain_knowledge", "children_knowledge": [11, 12]} +{"id": 62, "knowledge": "Fallen Angel", "description": "A once highly-regarded fund that has seen a significant decline in its performance and ability to generate alpha.", "definition": "A fund is a Fallen Angel if it was previously considered a 'Consistent Outperformer' but its most recent 3-year alpha is now negative.", "type": "domain_knowledge", "children_knowledge": [15]} +{"id": 63, "knowledge": "Quality-Growth at a Reasonable Price (Q-GARP)", "description": "An investment style that seeks to own high-quality, growing companies without overpaying. These funds blend quality, growth, and value characteristics.", "definition": "A fund follows a Q-GARP strategy if it is a 'Consistent Outperformer' and has a 'Composite Valuation Score' in the top 50th percentile of its category.", "type": "domain_knowledge", "children_knowledge": [15, 51]} +{"id": 64, "knowledge": "High-Conviction Value Investor", "description": "A fund manager who adheres to a strict value discipline, evidenced by attractive valuation metrics, while taking large, concentrated positions in their best ideas.", "definition": "A fund is a High-Conviction Value Investor if it is a 'High-Conviction Portfolio' and has a 'Composite Valuation Score' in the top 25th percentile of its category.", "type": "domain_knowledge", "children_knowledge": [13, 51]} +{"id": 65, "knowledge": "Efficient Core Holding", "description": "An ideal fund for the core of a portfolio, characterized by extremely low costs, tight benchmark tracking, and proven efficiency in translating assets into returns.", "definition": "A fund is an Efficient Core Holding if it qualifies as an 'Ideal Index Fund' and also exhibits a high 'Return on Cost (ROC)'.", "type": "domain_knowledge", "children_knowledge": [41, 2]} +{"id": 66, "knowledge": "Momentum-Driven Growth Fund", "description": "A fund that specifically targets high-growth companies that are also exhibiting strong, positive price momentum in the market.", "definition": "A fund is a Momentum-Driven Growth Fund if its 'Growth-Value Spectrum Score' indicates a growth tilt and it is also currently signaling a 'Golden Cross Signal'.", "type": "domain_knowledge", "children_knowledge": [17, 59]} +{"id": 67, "knowledge": "Defensive Anchor", "description": "A fund suitable for mitigating portfolio volatility, characterized by low market sensitivity and a strong track record of preserving capital.", "definition": "A fund is a Defensive Anchor if its 3-year Beta is less than 0.75 and it has a high 'Capital Preservation Index' score.", "type": "domain_knowledge", "children_knowledge": [57]} +{"id": 68, "knowledge": "True Active Differentiator", "description": "A fund that demonstrates genuine active management through significant deviation from its benchmark, skilled stock selection, and a high-conviction approach.", "definition": "A fund is a True Active Differentiator if it has 'Active Management Footprint' > 0.5, 'Appraisal Ratio' > 0.2, and is also a 'High-Conviction Portfolio'.", "type": "domain_knowledge", "children_knowledge": [13, 50, 52]} +{"id": 69, "knowledge": "Speculative Turnaround Play", "description": "A high-risk, high-reward fund that has been performing poorly and is costly, but is showing technical signs of a potential, albeit uncertain, recovery.", "definition": "A fund is a 
Speculative Turnaround Play if it is a 'High-Cost Underperformer' but has recently triggered a 'Golden Cross Signal'.", "type": "domain_knowledge", "children_knowledge": [17, 49]} +{"id": 70, "knowledge": "Year-over-Year Performance Trend", "description": "Calculates the change in a fund's performance relative to its category from one year to the next.", "definition": "\\Delta_{YoY} = \text{Outperformance}_{\text{current year}} - \text{Outperformance}_{\text{previous year}}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 71, "knowledge": "Category Average Duration", "description": "Calculates the average duration for all funds within the same investment category.", "definition": "\bar{D}_c = \frac{\\sum \text{Fund Durations}}{\text{Total number of funds}} \text{ for a given category.}", "type": "calculation_knowledge", "children_knowledge": [25]} +{"id": 72, "knowledge": "Duration Advantage", "description": "Measures how much lower a fund's duration is compared to its category average.", "definition": "D_{\text{adv}} = \text{Category Average Duration} - \text{Fund's Duration}", "type": "calculation_knowledge", "children_knowledge": [25, 71]} +{"id": 73, "knowledge": "Average Upside Outperformance", "description": "Measures a fund's average outperformance during years when its category had positive returns.", "definition": "\bar{O}_{\text{up}} = \text{Average 'Annual Fund Outperformance' during years with positive category returns.}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 74, "knowledge": "Average Downside Outperformance", "description": "Measures a fund's average outperformance during years when its category had negative returns. A smaller negative number indicates better downside protection.", "definition": "\bar{O}_{\text{down}} = \text{Average 'Annual Fund Outperformance' during years with non-positive category returns.}", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 75, "knowledge": "Capture Differential", "description": "Calculates the difference between a fund's average outperformance in up markets versus down markets, indicating its overall adaptability.", "definition": "C_{\text{diff}} = \text{Average Upside Outperformance} - \text{Average Downside Outperformance}", "type": "calculation_knowledge", "children_knowledge": [73, 74]} +{"id": 76, "knowledge": "Average Daily Value Traded (3M)", "description": "Calculates the average monetary value of a fund's shares traded daily over the last 3 months.", "definition": "\text{ADVT}_{3M} = \text{Average daily volume (3M)} \times \text{Average share price (e.g., 200-day MA)}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 77, "knowledge": "Beta Drift", "description": "Measures the change in a fund's sensitivity to market movements over time.", "definition": "\\Delta_{\beta} = \text{Beta}_{\text{1st Period}} - \text{Beta}_{\text{2nd Period}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 78, "knowledge": "R-Squared Drift", "description": "Measures the change in a fund's performance correlation with its benchmark over time.", "definition": "\\Delta_{R^2} = R^2_{\text{1st Period}} - R^2_{\text{2nd Period}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 79, "knowledge": "Style Drift", "description": "A classification of how a fund's investment style has changed by comparing its recent risk characteristics (Beta, R-Squared) to its long-term history.", "definition": "A fund is considered to have drifted if its absolute 
Beta change exceeds 0.15 or its absolute R-Squared change exceeds 10.", "type": "domain_knowledge", "children_knowledge": [77, 78]} +{"id": 80, "knowledge": "Composite Score", "description": "A normalized score that averages a fund's percentile rank across multiple key performance and cost metrics, allowing for peer-group comparison.", "definition": "\text{Score} = \frac{\text{Percentile Rank}(\text{5Y Alpha}) + \text{Percentile Rank}(\text{3Y Sharpe}) + \text{Percentile Rank}(\text{Inverse Net Expense})}{3}", "type": "calculation_knowledge", "children_knowledge": [10, 24, 26]} +{"id": 81, "knowledge": "Category Dominator", "description": "Identifies the single best-performing fund within an investment category based on a multi-factor composite score.", "definition": "The fund with the highest 'Composite Score' within its investment category, provided the category contains at least 10 funds.", "type": "domain_knowledge", "children_knowledge": [80]} +{"id": 82, "knowledge": "Alpha-Turnover Slope", "description": "Calculates the slope of the linear regression line between a fund's alpha (dependent variable, Y) and its turnover ratio (independent variable, X).", "definition": "\beta_{\\alpha, T} = \text{Slope of regression}(\text{3-Year Alpha}, \text{Turnover Ratio})", "type": "calculation_knowledge", "children_knowledge": [10, 12]} +{"id": 83, "knowledge": "Fit Quality", "description": "Measures how well the turnover ratio explains the variation in alpha in the regression model.", "definition": "R^2 = \text{R-squared of regression}(\text{3-Year Alpha}, \text{Turnover Ratio})", "type": "calculation_knowledge", "children_knowledge": [10, 12]} +{"id": 84, "knowledge": "Valuation Data Availability", "description": "A classification that categorizes funds based on whether they disclose key valuation metrics.", "definition": "A fund is classified as 'Transparent' if it provides numeric values for both its P/E and P/B ratios; otherwise, it is 'Opaque'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 85, "knowledge": "Median 1-Year Return", "description": "Calculates the median (50th percentile) of the 1-year returns for a group of funds.", "definition": "M = \text{The 50th percentile of 1-year returns for a specified group of funds.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 86, "knowledge": "Wasted Fee Amount", "description": "The total dollar amount of fees paid by investors in a Closet Indexer fund that are in excess of the benchmark's expense ratio.", "definition": "WFA = (\text{Net Expense Ratio} - \text{Benchmark Expense Ratio}) \times \text{Net Worth}", "type": "calculation_knowledge", "children_knowledge": [1, 60]} +{"id": 87, "knowledge": "Family Sector Concentration Profile", "description": "An analytical profile that identifies the single economic sector a fund family has the highest average allocation to, across all its funds.", "definition": "For each fund family, this profile is the economic sector with the highest average weight across all of the family's funds.", "type": "domain_knowledge", "children_knowledge": [28]} +{"id": 88, "knowledge": "Top-Tier Family", "description": "A classification for a fund family that exhibits desirable risk-return characteristics, specifically low market risk and a proven ability to generate alpha.", "definition": "A fund family is 'Top-Tier' if its average 3-year beta is less than 1.0 and it manages 5 or more alpha-generating funds.", "type": "domain_knowledge", "children_knowledge": [10, 22]} \ No newline 
at end of file diff --git a/exchange_traded_funds/exchange_traded_funds_schema.txt b/exchange_traded_funds/exchange_traded_funds_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc15571ec1e2bbbec39bcae80124ba1d05eda62f --- /dev/null +++ b/exchange_traded_funds/exchange_traded_funds_schema.txt @@ -0,0 +1,272 @@ +CREATE TABLE "families" ( +famcode bigint NOT NULL DEFAULT nextval('families_famcode_seq'::regclass), +groupname text NOT NULL, + PRIMARY KEY (famcode) +); + +First 3 rows: + famcode groupname +--------- ---------------------------- + 1 DWS + 2 Virtus + 3 American Century Investments +... + + +CREATE TABLE "funds" ( +productnum bigint NOT NULL DEFAULT nextval('funds_productnum_seq'::regclass), +tickersym character varying NOT NULL, +shortlabel character varying NULL, +fulldescription text NULL, +parentgroup character varying NULL, +listingvenue character varying NULL, +productclass character varying NULL, +launchdate date NULL, +strategynotes text NULL, +fundclass jsonb NULL, +fundmetrics jsonb NULL, +tradingdata jsonb NULL, +allocweights jsonb NULL, +valuationratios jsonb NULL, + PRIMARY KEY (productnum), + FOREIGN KEY (listingvenue) REFERENCES exchanges(marketcode), + FOREIGN KEY (parentgroup) REFERENCES families(groupname), + FOREIGN KEY (productclass) REFERENCES categories(classtype) +); + +First 3 rows: + productnum tickersym shortlabel fulldescription parentgroup listingvenue productclass launchdate strategynotes fundclass fundmetrics tradingdata allocweights valuationratios +------------ ----------- ------------------------------- ---------------------------------------------------------------- ---------------------------- -------------- ------------------------ ------------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------- + 2 AADR AllianzGI Health Sciences Fund Virtus AllianzGI Health Sciences Fund Class P Virtus NGM Foreign Large Growth 2010-07-20 The investment seeks long-term capital appreciation. The fund seeks to achieve its objective by normally investing at least 80% of its net assets (plus borrowings made for investment purposes) in health sciences-related companies. 
The portfolio manager considers health sciences-related companies to include companies that design, manufacture or sell products or services used for or in connection with healthcare, medicine or life sciences. The fund will invest primarily in common stocks and other equity securities. {'Cap_Size': 'Large', 'Quote_Mode': 'ETF', 'Currency_Base': 'DOLLAR', 'GeoZone_Class': 'US', 'Strategy_Type': 'Blend'} {'Net_Worth': 88836160, 'Yield_Rate': 0.0031, 'Expense_Net': 0.011, 'Benchmark_Exp': 0.0066, 'Turnover_Ratio': None} {'volume_metrics': {'Vol_3M': 2596, 'Vol_Recent': 3170}, 'moving_averages': {'MA_50': 64.555, 'MA_200': 65.297}} {'asset_allocation': {'Bond_Weight': None, 'Equity_Weight': None}, 'bond_characteristics': {'Avg_Maturity': None, 'Duration_Yrs': None}} {'valuation_metrics': {'PB_Ratio': 1.71, 'PE_Ratio': 13.34, 'PS_Ratio': 1.2, 'PCF_Ratio': 7.46}} + 276 COMB American Century Focused Dynami American Century Investments Focused Dynamic Growth Fund I Class American Century Investments PCX Commodities Broad Basket 2017-05-19 The investment seeks long-term capital growth. The portfolio managers look for stocks of early and rapid stage growth companies they believe will increase in value over time. The portfolio managers make their investment decisions based primarily on their analysis of individual companies, rather than on broad economic forecasts. The portfolio managers use a variety of analytical research tools and techniques to identify the stocks of companies that meet their investment criteria. Under normal market conditions, the portfolio managers seek securities of companies whose earnings or revenues are not only growing, but growing at an accelerated pace. {'Cap_Size': None, 'Quote_Mode': 'ETF', 'Currency_Base': 'Dollar', 'GeoZone_Class': 'us', 'Strategy_Type': None} {'Net_Worth': 221823312, 'Yield_Rate': 0.0006, 'Expense_Net': 0.0025, 'Benchmark_Exp': 0.0081, 'Turnover_Ratio': None} {'volume_metrics': {'Vol_3M': 56798, 'Vol_Recent': 47410}, 'moving_averages': {'MA_50': 30.55, 'MA_200': 28.137}} {'asset_allocation': {'Bond_Weight': None, 'Equity_Weight': None}, 'bond_characteristics': {'Avg_Maturity': None, 'Duration_Yrs': None}} {'valuation_metrics': {'PB_Ratio': None, 'PE_Ratio': None, 'PS_Ratio': None, 'PCF_Ratio': None}} + 384 DMRL 361 Domestic Long/Short Equity 361 Domestic Long/Short Equity Fund Class Y 361 Funds PCX Large Blend 2017-07-31 The investment seeks to achieve long-term capital appreciation; the fund also seeks to preserve capital in down markets. In pursuing its investment objectives, the fund seeks to invest at least 80% of the value of its net assets (which include borrowings for investment purposes) in equity securities such as common stocks, warrants and rights of issuers that are organized in the United States and the securities of which are principally traded on a major U.S. exchange. It employs a strategy of taking long and short positions in equity securities publicly traded in the U.S. 
{'Cap_Size': 'Large', 'Quote_Mode': 'ETF', 'Currency_Base': 'usd', 'GeoZone_Class': 'usa', 'Strategy_Type': 'Blend'} {'Net_Worth': 414012928, 'Yield_Rate': 0.0082, 'Expense_Net': 0.0035, 'Benchmark_Exp': 0.0036, 'Turnover_Ratio': 6.89} {'volume_metrics': {'Vol_3M': 2519, 'Vol_Recent': 1200}, 'moving_averages': {'MA_50': 76.871, 'MA_200': 72.836}} {'asset_allocation': {'Bond_Weight': 0, 'Equity_Weight': 0.9984}, 'bond_characteristics': {'Avg_Maturity': None, 'Duration_Yrs': None}} {'valuation_metrics': {'PB_Ratio': 4.42, 'PE_Ratio': 26.46, 'PS_Ratio': 2.96, 'PCF_Ratio': 17.56}} +... + + +CREATE TABLE "performance" ( +perfid bigint NOT NULL DEFAULT nextval('performance_perfid_seq'::regclass), +productref character varying NOT NULL, +reportdate date NULL, +pricerange52w jsonb NULL, +returnmetrics jsonb NULL, +histstats jsonb NULL, + PRIMARY KEY (perfid), + FOREIGN KEY (productref) REFERENCES funds(tickersym) +); + +First 3 rows: + perfid productref reportdate pricerange52w returnmetrics histstats +-------- ------------ ------------ ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ + 1672 SILX 1970-01-01 {'low_metrics': {'Low_52W': 4.37, 'Low_Delta': 0.36, 'Low_Delta_Pct': 0.08238}, 'high_metrics': {'High_52W': 10.18, 'High_Delta': -5.45, 'High_Delta_Pct': -0.53536}, 'range_metrics': {'Range_Move': 5.81, 'Range_Move_Pct': 0.57073}} {'fund_returns': {'Return_1M': None, 'Return_1Y': None, 'Return_3M': None, 'Return_3Y': None, 'Return_5Y': None, 'Return_10Y': None, 'Return_YTD': None}, 'benchmark_returns': {'Bench_Return_1M': None, 'Bench_Return_1Y': None, 'Bench_Return_3M': None, 'Bench_Return_3Y': None, 'Bench_Return_5Y': None, 'Bench_Return_10Y': None, 'Bench_Return_YTD': None}} {'Top_Weight': None, 'Top_Holdings': None, 'Negative_Years': None, 'Positive_Years': None} + 501 ENZL 2021-06-30 {'low_metrics': {'Low_52W': 57.86, 'Low_Delta': 0.19, 'Low_Delta_Pct': 0.00328}, 'high_metrics': {'High_52W': 71.72, 'High_Delta': -13.67, 'High_Delta_Pct': -0.1906}, 'range_metrics': {'Range_Move': 13.86, 'Range_Move_Pct': 0.19325}} {'fund_returns': {'Return_1M': -0.0058, 'Return_1Y': 0.0936, 'Return_3M': -0.0159, 'Return_3Y': 0.1085, 'Return_5Y': 0.1082, 'Return_10Y': 0.1097, 'Return_YTD': -0.0999}, 'benchmark_returns': {'Bench_Return_1M': None, 'Bench_Return_1Y': None, 'Bench_Return_3M': None, 'Bench_Return_3Y': None, 'Bench_Return_5Y': None, 'Bench_Return_10Y': None, 'Bench_Return_YTD': None}} {'Top_Weight': None, 'Top_Holdings': None, 'Negative_Years': 2, 'Positive_Years': 8} + 6 ACIO 2021-06-30 {'low_metrics': {'Low_52W': 26.86, 'Low_Delta': 4.46, 'Low_Delta_Pct': 0.16605}, 'high_metrics': {'High_52W': 32.71, 'High_Delta': -1.39, 'High_Delta_Pct': -0.04249}, 'range_metrics': {'Range_Move': 5.85, 'Range_Move_Pct': 0.17884}} {'fund_returns': {'Return_1M': 0.013, 'Return_1Y': 0.2055, 'Return_3M': 0.0655, 'Return_3Y': None, 'Return_5Y': None, 'Return_10Y': None, 
'Return_YTD': 0.0876}, 'benchmark_returns': {'Bench_Return_1M': None, 'Bench_Return_1Y': None, 'Bench_Return_3M': None, 'Bench_Return_3Y': None, 'Bench_Return_5Y': None, 'Bench_Return_10Y': None, 'Bench_Return_YTD': None}} {'Top_Weight': None, 'Top_Holdings': None, 'Negative_Years': 0, 'Positive_Years': 1} +... + + +CREATE TABLE "family_categories" ( +linkid bigint NOT NULL DEFAULT nextval('family_categories_linkid_seq'::regclass), +familylink character varying NULL, +categorylink character varying NULL, + PRIMARY KEY (linkid), + FOREIGN KEY (categorylink) REFERENCES categories(classtype), + FOREIGN KEY (familylink) REFERENCES families(groupname) +); + +First 3 rows: + linkid familylink categorylink +-------- ---------------------------- ------------------------- + 1 Virtus Foreign Large Growth + 2 American Century Investments Pacific/Asia ex-Japan Stk + 3 Thrivent Funds Large Value +... + + +CREATE TABLE "exchanges" ( +xchgnum bigint NOT NULL DEFAULT nextval('exchanges_xchgnum_seq'::regclass), +marketcode character varying NOT NULL, +tradingvenue text NOT NULL, +exchangetime text NOT NULL, + PRIMARY KEY (xchgnum) +); + +First 3 rows: + xchgnum marketcode tradingvenue exchangetime +--------- ------------ -------------- -------------- + 1 PCX NYSEArca ny + 2 NGM NasdaqGM New York + 6 BTS BATS America/NYC +... + + +CREATE TABLE "annual_returns" ( +yearlyid bigint NOT NULL DEFAULT nextval('annual_returns_yearlyid_seq'::regclass), +portfolioref character varying NULL, +calendaryear bigint NULL, +fundperf real NULL, +categoryperf real NULL, + PRIMARY KEY (yearlyid), + FOREIGN KEY (portfolioref) REFERENCES funds(tickersym) +); + +First 3 rows: + yearlyid portfolioref calendaryear fundperf categoryperf +---------- -------------- -------------- ---------- -------------- + 1 AAAU 2019 0.18579 nan + 2 AAAU 2020 0.23963 nan + 3 AADR 2006 nan 0.21884 +... + + +CREATE TABLE "categories" ( +catref bigint NOT NULL DEFAULT nextval('categories_catref_seq'::regclass), +classtype text NOT NULL, + PRIMARY KEY (catref) +); + +First 3 rows: + catref classtype +-------- ------------------------- + 1 Foreign Large Growth + 2 Pacific/Asia ex-Japan Stk + 3 Large Value +... + + +CREATE TABLE "family_exchanges" ( +connectref bigint NOT NULL DEFAULT nextval('family_exchanges_connectref_seq'::regclass), +familyref character varying NULL, +exchangeref character varying NULL, + PRIMARY KEY (connectref), + FOREIGN KEY (exchangeref) REFERENCES exchanges(marketcode), + FOREIGN KEY (familyref) REFERENCES families(groupname) +); + +First 3 rows: + connectref familyref exchangeref +------------ ---------------------------- ------------- + 1 DWS PCX + 2 Virtus NGM + 3 American Century Investments NGM +... 
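-- Illustrative example (not part of the schema dump): a minimal sketch of how the
-- 'Annual Fund Outperformance' knowledge entry could be read against this schema,
-- assuming it maps to fundperf - categoryperf per calendar year in annual_returns;
-- the gold solutions may define or normalize it differently, and the 'nan' placeholders
-- shown above may be stored as NULL or NaN rather than filtered out by IS NOT NULL.
SELECT ar.portfolioref,
       ar.calendaryear,
       ar.fundperf - ar.categoryperf AS annual_outperformance
FROM annual_returns ar
WHERE ar.fundperf IS NOT NULL
  AND ar.categoryperf IS NOT NULL
ORDER BY ar.portfolioref, ar.calendaryear;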
+ + +CREATE TABLE "risk_metrics" ( +riskid bigint NOT NULL DEFAULT nextval('risk_metrics_riskid_seq'::regclass), +investmentref character varying NOT NULL, +risk3y jsonb NULL, +risk5y jsonb NULL, +risk10y jsonb NULL, + PRIMARY KEY (riskid), + FOREIGN KEY (investmentref) REFERENCES funds(tickersym) +); + +First 3 rows: + riskid investmentref risk3y risk5y risk10y +-------- --------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 AAAU {'risk_measures_3y': {'Beta_3Y': 0.07, 'Alpha_3Y': 13.18, 'R_Squared_3Y': 0.54, 'Avg_Return_3Y': 1.23, 'Volatility_3Y': 14.93, 'Sharpe_Ratio_3Y': 0.91, 'Treynor_Ratio_3Y': 187.1}} {'risk_measures_5y': {'Beta_5Y': None, 'Alpha_5Y': None, 'R_Squared_5Y': None, 'Avg_Return_5Y': None, 'Volatility_5Y': None, 'Sharpe_Ratio_5Y': None, 'Treynor_Ratio_5Y': None}} {'risk_measures_10y': {'Beta_10Y': None, 'Alpha_10Y': None, 'R_Squared_10Y': None, 'Avg_Return_10Y': None, 'Volatility_10Y': None, 'Sharpe_Ratio_10Y': None, 'Treynor_Ratio_10Y': None}} + 2 AADR {'risk_measures_3y': {'Beta_3Y': 1.11, 'Alpha_3Y': -1.3, 'R_Squared_3Y': 75.96, 'Avg_Return_3Y': 0.85, 'Volatility_3Y': 22.42, 'Sharpe_Ratio_3Y': 0.4, 'Treynor_Ratio_3Y': 6.11}} {'risk_measures_5y': {'Beta_5Y': 1.11, 'Alpha_5Y': 0.38, 'R_Squared_5Y': 70.49, 'Avg_Return_5Y': 1.1, 'Volatility_5Y': 19.3, 'Sharpe_Ratio_5Y': 0.62, 'Treynor_Ratio_5Y': 9.66}} {'risk_measures_10y': {'Beta_10Y': 0.96, 'Alpha_10Y': 3.32, 'R_Squared_10Y': 73.64, 'Avg_Return_10Y': 0.79, 'Volatility_10Y': 16.78, 'Sharpe_Ratio_10Y': 0.53, 'Treynor_Ratio_10Y': 8.15}} + 3 AAXJ {'risk_measures_3y': {'Beta_3Y': 0.9, 'Alpha_3Y': 1.2, 'R_Squared_3Y': 74.34, 'Avg_Return_3Y': 0.8, 'Volatility_3Y': 18.48, 'Sharpe_Ratio_3Y': 0.46, 'Treynor_Ratio_3Y': 7.8}} {'risk_measures_5y': {'Beta_5Y': 0.94, 'Alpha_5Y': 1.89, 'R_Squared_5Y': 73.28, 'Avg_Return_5Y': 0.97, 'Volatility_5Y': 15.91, 'Sharpe_Ratio_5Y': 0.66, 'Treynor_Ratio_5Y': 10.37}} {'risk_measures_10y': {'Beta_10Y': 0.99, 'Alpha_10Y': 0.3, 'R_Squared_10Y': 78.24, 'Avg_Return_10Y': 0.55, 'Volatility_10Y': 16.83, 'Sharpe_Ratio_10Y': 0.36, 'Treynor_Ratio_10Y': 4.81}} +... + + +CREATE TABLE "sector_allocations" ( +allockey bigint NOT NULL DEFAULT nextval('sector_allocations_allockey_seq'::regclass), +productlink character varying NULL, +sectorlink bigint NULL, +weightpct real NOT NULL, + PRIMARY KEY (allockey), + FOREIGN KEY (productlink) REFERENCES funds(tickersym), + FOREIGN KEY (sectorlink) REFERENCES sectors(secid) +); + +First 3 rows: + allockey productlink sectorlink weightpct +---------- ------------- ------------ ----------- + 1 AADR 1 0.2536 + 2 AADR 2 0.0736 + 3 AADR 3 0.1164 +... + + +CREATE TABLE "sectors" ( +secid bigint NOT NULL DEFAULT nextval('sectors_secid_seq'::regclass), +industrytag text NOT NULL, + PRIMARY KEY (secid) +); + +First 3 rows: + secid industrytag +------- ---------------------- + 1 basic_materials + 2 communication_services + 3 consumer_cyclical +... 
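-- Illustrative example (not part of the schema dump): a minimal sketch of the JSONB access
-- implied by HKB entries such as 'Market-Tracking Fund' (3-year R-squared > 90) and
-- 'Alpha Generator' (5-year alpha > 0), whose conjunction defines 'Passive Alpha Generator'.
-- Key names follow the sample rows above; value scaling (e.g., R-squared stored as 0-100
-- vs. 0-1) varies in the raw data, so gold solutions may normalize before comparing.
SELECT rm.investmentref
FROM risk_metrics rm
WHERE (rm.risk3y -> 'risk_measures_3y' ->> 'R_Squared_3Y')::numeric > 90
  AND (rm.risk5y -> 'risk_measures_5y' ->> 'Alpha_5Y')::numeric > 0;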
+ + +CREATE TABLE "bond_allocations" ( +bondallocid bigint NOT NULL DEFAULT nextval('bond_allocations_bondallocid_seq'::regclass), +fundlink character varying NULL, +ratinglink bigint NULL, +allocationpct real NOT NULL, + PRIMARY KEY (bondallocid), + FOREIGN KEY (fundlink) REFERENCES funds(tickersym), + FOREIGN KEY (ratinglink) REFERENCES bond_ratings(ratekey) +); + +First 3 rows: + bondallocid fundlink ratinglink allocationpct +------------- ---------- ------------ --------------- + 1 ADFI 1 0 + 2 ADFI 2 0.377 + 3 ADFI 3 0.0279 +... + + +CREATE TABLE "bond_ratings" ( +ratekey bigint NOT NULL DEFAULT nextval('bond_ratings_ratekey_seq'::regclass), +creditmark text NOT NULL, + PRIMARY KEY (ratekey) +); + +First 3 rows: + ratekey creditmark +--------- ------------- + 1 us_government + 2 aaa + 3 aa +... + + +CREATE TABLE "holdings" ( +holdref bigint NOT NULL DEFAULT nextval('holdings_holdref_seq'::regclass), +instrumentref character varying NULL, +securitykey bigint NULL, +holdingpct real NOT NULL, +positionrank bigint NULL, + PRIMARY KEY (holdref), + FOREIGN KEY (instrumentref) REFERENCES funds(tickersym), + FOREIGN KEY (securitykey) REFERENCES securities(securityref) +); + +First 3 rows: + holdref instrumentref securitykey holdingpct positionrank +--------- --------------- ------------- ------------ -------------- + 1 AAAU 1 0.1098 1 + 2 AAAU 2 0.0258 2 + 3 AAAU 3 0.0241 3 +... + + +CREATE TABLE "securities" ( +securityref bigint NOT NULL DEFAULT nextval('securities_securityref_seq'::regclass), +securitylabel text NOT NULL, + PRIMARY KEY (securityref) +); + +First 3 rows: + securityref securitylabel +------------- ------------------------------------- + 1 Cayman Real Assets Fund Ltd. + 2 CCI - Crown Castle International Corp + 3 LNG - Cheniere Energy Inc +... diff --git a/fake_account/fake_account_column_meaning_base.json b/fake_account/fake_account_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..90f6120691ee08e30e128a2336f84c9e8f4b065c --- /dev/null +++ b/fake_account/fake_account_column_meaning_base.json @@ -0,0 +1,186 @@ +{ + "fake_account|platforms|PLT_CODE": "TEXT. Unique platform code. PK. Example: PL331.", + "fake_account|platforms|PLT_KIND": "TEXT. Type of the platform. Not NULL. Possible values: Forum, Microblog, Social Network, Video Platform.", + "fake_account|accounts|acct_ref": "TEXT. Unique account reference. PK. Example: ACC7210284.", + "fake_account|accounts|plt_key": "TEXT. Platform code reference. FK to platforms.", + "fake_account|accounts|OrigStamp": "DATE. Original account creation timestamp. **NULL means account creation timestamp not provided.**. Example: 26-Dec-23.", + "fake_account|accounts|AGE_D": "BIGINT. Age of the account in days. Example: 393.", + "fake_account|accounts|StateFlag": "TEXT. Current state of the account. Possible values: Active, Deleted, Dormant, Suspended.", + "fake_account|accounts|acct_form": "TEXT. Account form or type. Possible values: Bot, Business, Hybrid, Personal.", + "fake_account|accounts|VerifyMark": "TEXT. Verification status of the account. Possible values: Failed, Pending, Suspicious, Unverified.", + "fake_account|accounts|ProfileScore": "REAL. Profile score of the account. Example: 0.167.", + "fake_account|profiles|acct_anchor": "TEXT. Account reference. PK. FK to accounts.", + "fake_account|profiles|HandleMask": "TEXT. Masked handle of the user. Possible values: Natural, Random, Sequential, Template.", + "fake_account|profiles|usrn_Ent": "REAL. User entry value. 
Example: 0.835.", + "fake_account|profiles|USR_LEN": "BIGINT. Length of the username. Example: 13.", + "fake_account|profiles|UsrPtn": "TEXT. User pattern description. Possible values: AlphaNum, Generated, Meaningful, Random.", + "fake_account|profiles|DispChg": "BIGINT. Displacement change value. **NULL means displacement change not recorded.**. Example: 8.0.", + "fake_account|profiles|pic_form": "TEXT. Picture format. **NULL means picture format not specified.**. Possible values: AI Generated, Celebrity, Real, Stock.", + "fake_account|profiles|PicScore": "REAL. Profile picture score. Example: 0.772.", + "fake_account|profiles|BIO_L": "BIGINT. Length of bio information. Example: 118.0.", + "fake_account|profiles|BioLang": "TEXT. Bio language type. Possible values: en, mixed, multiple, unknown.", + "fake_account|profiles|BioLinks": "BIGINT. Number of bio links. Possible values: 0, 1, 2, 3, 4, 5.", + "fake_account|profiles|BioKwHit": "TEXT. Bio keyword hits. **NULL means bio keyword hits not counted.**. Possible values: Normal, Promo, Spam, Suspicious.", + "fake_account|profiles|LocFlag": "TEXT. Location flag status. Possible values: Fake, Multiple, No, Yes.", + "fake_account|profiles|LOC_MOV": "BIGINT. Location movement count. Example: 2.", + "fake_account|profiles|mail_dom": "TEXT. Email domain. Possible values: Custom, Disposable, Free, Unknown.", + "fake_account|profiles|TelState": "TEXT. Telephone state. **NULL means telephone state not recorded.**. Possible values: Invalid, VOIP, Valid.", + "fake_account|security_sessions|acct_gate": "TEXT. Account reference for session. PK. FK to accounts.", + "fake_account|content_activity|acct_slot": "TEXT. Account reference for content activity. PK. FK to accounts.", + "fake_account|network_metrics|acct_node": "TEXT. Account reference for network metrics. PK. FK to accounts.", + "fake_account|network_metrics|FollowNum": "BIGINT. Number of followers. Example: 32353.", + "fake_account|network_metrics|FollowgNum": "BIGINT. Number of followees. Example: 53330.", + "fake_account|network_metrics|FollGrow": "REAL. Follower growth rate. Example: 0.697.", + "fake_account|network_metrics|FingGrow": "REAL. Following growth rate. Example: 0.899.", + "fake_account|network_metrics|FollRatio": "REAL. Follower-followee ratio. Example: 5.162.", + "fake_account|network_metrics|MutConn": "REAL. Mutual connections count. Example: 0.964.", + "fake_account|network_metrics|ConnGrowPtn": "TEXT. Connection growth pattern. Possible values: Bot-like, Burst, Organic, Suspicious.", + "fake_account|network_metrics|ConnQual": "REAL. Connection quality score. Example: 0.819.", + "fake_account|network_metrics|EngRate": "REAL. Engagement rate. Example: 0.132.", + "fake_account|network_metrics|EngAuth": "REAL. Engagement authority score. Example: 0.954.", + "fake_account|network_metrics|LikeRt": "REAL. Like rate score. Example: 0.738.", + "fake_account|network_metrics|ComRt": "REAL. Comment rate score. **NULL means comment rate not recorded.**. Example: 0.282.", + "fake_account|network_metrics|ShareRt": "REAL. Share rate score. Example: 0.696.", + "fake_account|network_metrics|IntRecip": "REAL. Interaction reciprocity score. Example: 0.817.", + "fake_account|network_metrics|IntDiv": "REAL. Interaction diversity score. Example: 0.681.", + "fake_account|network_metrics|TempIntPtn": "TEXT. Temporary interaction pattern. Possible values: Automated, Natural, Periodic, Random.", + "fake_account|interaction_metrics|acct_dm": "TEXT. Account reference for direct messaging metrics. PK. 
FK to accounts.", + "fake_account|interaction_metrics|MsgSim": "REAL. Message similarity score. Example: 0.041.", + "fake_account|interaction_metrics|MsgFreq": "REAL. Message frequency. Example: 60.1.", + "fake_account|interaction_metrics|MsgTargetDiv": "REAL. Message target diversity. Example: 0.498.", + "fake_account|interaction_metrics|RespTimePtn": "TEXT. Response time pattern. **NULL means response time pattern not recorded.**. Possible values: Delayed, Instant, Natural, Random.", + "fake_account|interaction_metrics|ConvNat": "REAL. Conversation nature score. Example: 0.825.", + "fake_account|interaction_metrics|SentVar": "REAL. Sentiment variance score. Example: 0.005.", + "fake_account|interaction_metrics|LangSoph": "REAL. Language sophistication score. Example: 0.03.", + "fake_account|interaction_metrics|TxtUniq": "REAL. Text uniqueness score. Example: 0.44.", + "fake_account|interaction_metrics|KeyPtnHit": "REAL. Key pattern hit score. **NULL means key pattern hit not recorded.**. Example: 0.589.", + "fake_account|interaction_metrics|TopCoh": "REAL. Top coherence score. Example: 0.856.", + "fake_account|behavioral_scores|acct_beh": "TEXT. Account reference for behavioral scores. PK. FK to accounts.", + "fake_account|risk_and_moderation|acct_risk": "TEXT. Account reference for risk and moderation data. PK. FK to accounts.", + "fake_account|cluster_analysis|CLSTR_PIN": "TEXT. Unique cluster identifier. PK. Example: CL0029.", + "fake_account|cluster_analysis|ClusterQty": "BIGINT. Number of accounts in the cluster. Example: 5.", + "fake_account|cluster_analysis|CluRole": "TEXT. Role or function of the cluster. Possible values: Botnet, Community, InfluenceNetwork, SocialGroup, SpamRing.", + "fake_account|cluster_analysis|NetInfl": "REAL. Network influence score. Example: 0.362.", + "fake_account|cluster_analysis|CoordScore": "REAL. Coordination score. Example: 0.363.", + "fake_account|account_clusters|acct_bridge": "TEXT. Account reference for the cluster. PK. FK to accounts.", + "fake_account|account_clusters|clu_ref": "TEXT. Cluster reference identifier. PK. FK to cluster_analysis.", + "fake_account|monitoring|RecKey": "TEXT. Unique monitoring record key. PK. Example: FA410087.", + "fake_account|monitoring|snap_ts": "TIMESTAMPTZ. Snapshot timestamp. **NULL means snapshot timestamp not recorded.**. Example: 2024-08-21T08:30:21.", + "fake_account|monitoring|acct_mon": "TEXT. Account reference for monitoring. FK to accounts.", + "fake_account|monitoring|DetectSrc": "TEXT. Source of detection. Possible values: Algorithm, Manual Review, Pattern Match, User Report.", + "fake_account|monitoring|DetectConf": "REAL. Detection confidence score. **NULL means detection confidence not recorded.**. Example: 0.132.", + "fake_account|monitoring|MonPrio": "TEXT. Monitoring priority. Possible values: High, Low, Medium, Urgent.", + "fake_account|monitoring|InvestState": "TEXT. Investigation state. Possible values: Active, Completed, Pending.", + "fake_account|monitoring|ActionDone": "TEXT. Action taken. **NULL means no action recorded.**. Possible values: Restriction, Suspension, Warning.", + "fake_account|monitoring|RevFreq": "TEXT. Review frequency. Possible values: Daily, Monthly, Quarterly, Weekly.", + "fake_account|monitoring|LastRev": "DATE. Last review date. Example: 2025/1/1.", + "fake_account|monitoring|NextRev": "DATE. Next review date. Example: 2025/4/10.", + "fake_account|monitoring|ConfScore": "REAL. Confidence score. **NULL means confidence score not assigned.**. 
Example: 0.253.", + "fake_account|monitoring|FPP": "REAL. False positive probability. Example: 0.692.", + "fake_account|monitoring|MethRel": "REAL. Method reliability score. Example: 0.927.", + "fake_account|monitoring|ModelVer": "TEXT. Model version used. Example: v1.5.", + "fake_account|monitoring|FeatVer": "TEXT. Feature version used. Example: f1.7.", + "fake_account|monitoring|LastUp": "TIMESTAMPTZ. Last update timestamp. Example: 2025/2/18 12:00.", + "fake_account|monitoring|UpFreqH": "BIGINT. Update frequency in hours. Example: 3.", + "fake_account|security_sessions|session_telemetry": { + "column_meaning": "JSONB column. Aggregates IP reputation, device-mix, VPN / proxy usage and log-in behaviour so threat-detection jobs can retrieve the full session context from one JSONB column.", + "fields_meaning": { + "ip_reputation": { + "registration_ip": "INET. Registered IP address. **NULL means IP not recorded.**. Example: 186.221.8.216.", + "ip_reputation_score": "REAL. IP reputation score. Example: 0.729.", + "country_count": "BIGINT. Country of the IP address. Example: 14.", + "proxy_hits": "BIGINT. Proxy hits during session. **NULL means proxy hits not recorded.**. Example: 98.0.", + "tor_flag": "TEXT. Tor usage flag. Possible values: No, Suspected, Yes." + }, + "vpn_usage_pct": "TEXT. VPN usage percentage. Example: 0.00%.", + "device_profile": { + "device_count": "BIGINT. Device number. Example: 20.", + "device_mix_json": "JSONB. Device mix used in session. Example: {'Mobile': 0.8966897433246256, 'Desktop': 0.5115558690500416, 'Tablet': 0.3299024746948329}.", + "browser_diversity_idx": "REAL. Browser mix score. Example: 0.016.", + "ua_consistency": "REAL. User agent consistency score. Example: 0.639." + }, + "login_behavior": { + "login_chronology": "TEXT. Login chronology or history. **NULL means login chronology not available.**. Possible values: Bot-like, Burst, Random, Regular.", + "login_freq_per_day": "TEXT. Login frequency descriptor. Possible values: High, Low, Medium, Suspicious.", + "location_variability": "REAL. Location variance during session. Example: 0.311." + }, + "session_stats": { + "avg_session_duration_min": "REAL. Session duration. Example: 2766.0.", + "session_count": "BIGINT. Number of sessions during the account lifecycle. Example: 419." + }, + "activity_pattern": { + "activity_regularity": "REAL. Activity registration score. Example: 0.313.", + "activity_spread_code": "TEXT. Activity spread during the session. Example: {'Morning': 0.08864600070960105, 'Afternoon': 0.7421617693224358, 'Night': 0.3718840943461962}." + } + } + }, + "fake_account|content_activity|content_metrics": { + "column_meaning": "JSONB column. Packs post-rate, linguistic diversity, hashtag / mention patterns and media-sharing ratios into one JSONB blob for real-time content-quality scoring.", + "fields_meaning": { + "posting": { + "total_posts": "BIGINT. Total number of posts. Example: 3713.", + "posts_per_day": "REAL. Posting frequency. Example: 28.6.", + "post_gap_variability": "REAL. Variation in post gaps. Example: 0.835." + }, + "content_quality": { + "content_similarity": "REAL. Content similarity score. Example: 0.789.", + "content_uniqueness": "REAL. Content uniqueness score. Example: 0.858.", + "content_diversity": "REAL. Content diversity score. Example: 0.153.", + "topic_entropy": "REAL. Top entity identification score. Example: 0.859." + }, + "language_tags": { + "language_count": "BIGINT. Number of languages used. Possible values: 1, 2, 3, 4, 5.", + "hashtag_pattern": "TEXT. 
Tag pattern description. Possible values: Normal, Random, Spam, Trending.", + "hashtag_ratio": "REAL. Tag relevance score. Example: 0.829.", + "mention_pattern": "TEXT. Mention pattern. Possible values: Normal, Random, Spam, Targeted.", + "mention_ratio": "TEXT. Mention response rate. Example: 0.40%." + }, + "link_media": { + "url_freq": "REAL. URL frequency in content. **NULL means URL frequency not recorded.**. Example: 0.734.", + "url_diversity": "REAL. URL diversity score. Example: 0.028.", + "media_upload_rate": "REAL. Media upload rate. Example: 0.733.", + "media_reuse_rate": "REAL. Media response rate. **NULL means media response rate not recorded.**. Example: 0.109." + } + } + }, + "fake_account|behavioral_scores|behavioral_anomaly_scores": { + "column_meaning": "JSONB column. Consolidates all automated-behaviour, spam, pattern-anomaly and commercial-intent scores so model pipelines can ingest a single JSONB document per account.", + "fields_meaning": { + "automation_spam": { + "automated_behavior": "REAL. Automatic behavior score. Example: 0.789.", + "bot_likelihood": "REAL. Bot likelihood score. **NULL means bot likelihood not calculated.**. Example: 0.203.", + "spam_score": "REAL. Spam score. Example: 0.315." + }, + "commercial_intent_score": "REAL. Communication intensity score. **NULL means communication intensity not recorded.**. Example: 0.558.", + "pattern_scores": { + "behavior_pattern": "REAL. Behavior pattern score. Example: 0.087.", + "temporal_pattern": "REAL. Temporary behavior pattern score. **NULL means temporary pattern not recorded.**. Example: 0.883.", + "network_pattern": "REAL. Network behavior pattern score. Example: 0.307.", + "content_pattern": "REAL. Content pattern score. Example: 0.178.", + "profile_pattern": "REAL. Profile behavior pattern score. Example: 0.869.", + "technical_pattern": "REAL. Technical behavior pattern score. Example: 0.246." + } + } + }, + "fake_account|risk_and_moderation|risk_profile": { + "column_meaning": "JSONB column. Bundles risk, authenticity / credibility, violation history and overall reputation into one JSONB field for faster trust-and-safety look-ups and enforcement rules.", + "fields_meaning": { + "risk_scores": { + "risk_value": "REAL. Risk value score. **NULL means risk value not assigned.**. Example: 0.155.", + "threat_level": "TEXT. Threat level description. Possible values: Critical, High, Low, Medium.", + "authenticity_score": "REAL. Authorization score. Example: 0.15.", + "credibility_score": "REAL. Credit score. Example: 0.441.", + "reputation_score": "REAL. Reputation score. Example: 0.23.", + "trust_score": "REAL. Trust value score. Example: 0.353.", + "impact_score": "REAL. Impact value score. Example: 0.544." + }, + "violation_history": { + "abuse_count": "BIGINT. Abuse count against the account. Example: 24.", + "violation_distribution_json": "JSONB. Violations distribution details. Example: {'Spam': 0.9353273606557855, 'Fake': 0.4490233249279981, 'Abuse': 0.08219689664928109}.", + "suspension_history_json": "JSONB. Suspension history data. Possible values: 0, 1, 2, 3, 4, 5.", + "warning_count": "BIGINT. Warning count. Example: 0.", + "appeal_count": "BIGINT. Appeal count. Possible values: 0, 1, 2, 3, 4, 5." 
+ } + } + } +} \ No newline at end of file diff --git a/fake_account/fake_account_kb.jsonl b/fake_account/fake_account_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..e55d48cbfb41796e89630a300fe8041027cb1f6f --- /dev/null +++ b/fake_account/fake_account_kb.jsonl @@ -0,0 +1,87 @@ +{"id": 0, "knowledge": "Account Activity Frequency (AAF)", "description": "Measures how frequently an account engages in platform activities relative to its age.", "definition": "It is defined as: AAF = session count divided by the age of an account.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Content Authenticity Score (CAS)", "description": "Aggregates multiple authenticity indicators into a single score.", "definition": "It is computed as 0.3 * authenticity score + 0.3 * content uniqueness score + 0.4 * conversation naturalness value; all three components are first scaled to the 0 to 1 range.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Network Growth Velocity (NGV)", "description": "Measures the rate of network growth considering both followers and following.", "definition": "It is the Euclidean length of an account's growth vector, calculated as the square root of (follower growth rate² + following growth rate²).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Bot Behavior Index (BBI)", "description": "Combines multiple bot-detection metrics into a single score.", "definition": "BBI = 0.4 * bot likelihood score + 0.3 * automated behavior score + 0.3 * (1 - conversation naturalness value)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Security Risk Score (SRS)", "description": "Calculates overall security risk based on multiple factors.", "definition": "SRS = 0.4 * risk value + 0.3 * (1 - trust value) + 0.3 * impact value", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Profile Credibility Index (PCI)", "description": "Evaluates overall profile credibility.", "definition": "PCI = 0.3 * credibility score + 0.3 * reputation score + 0.4 * profile completeness", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Coordinated Activity Score (CAS)", "description": "Measures the likelihood that a set of accounts are acting in a coordinated manner by combining structural coordination and influence.", "definition": "CAS = 0.5 * coordination score + 0.3 * network influence centrality + 0.2 * cluster size", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Technical Evasion Index (TEI)", "description": "Quantifies attempts to evade detection.", "definition": "TEI = 0.4 * VPN ratio + 0.3 * proxy count divided by 10 + 0.3 * login country count divided by 20", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Content Manipulation Score (CMS)", "description": "Evaluates content manipulation patterns.", "definition": "CMS = 0.4 * (1 - content uniqueness score) + 0.3 * media reuse ratio + 0.3 * (1 - text uniqueness)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Moderation Priority Score (MPS)", "description": "Calculates priority for moderation review.", "definition": "MPS = 0.3 * abuse report count divided by 1000 + 0.4 * impact value + 0.3 * risk value", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "High-Risk Account", "description": 
"Identifies accounts requiring immediate attention.", "definition": "An account with SRS > 0.8 and at least one active security detection with threatlvl = 'Critical'", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 11, "knowledge": "Bot Network", "description": "Identifies coordinated bot activity.", "definition": "A cluster where clustsize > 10 and average BBI > 0.7 for all accounts in cluster", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 12, "knowledge": "Trusted Account", "description": "Identifies highly trustworthy accounts.", "definition": "An account with PCI > 0.8 and no security detections in the past 180 days", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 13, "knowledge": "Content Farm", "description": "Identifies accounts mass-producing similar content.", "definition": "An account with CMS > 0.7 and post over 50 posts per day", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 14, "knowledge": "Sockpuppet Network", "description": "Identifies related accounts used for manipulation.", "definition": "A group of accounts where the number of linked accounts is greater than 5 and the coordinated activity score is above 0.8", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 15, "knowledge": "Dormant Bot", "description": "Identifies inactive bot accounts.", "definition": "An account with account status = 'Dormant'.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 16, "knowledge": "VPN Abuser", "description": "Identifies accounts systematically using VPNs.", "definition": "An account with TEI > 0.8 and at least 3 different countries in login locations", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 17, "knowledge": "Engagement Manipulator", "description": "Identifies artificial engagement patterns.", "definition": "An account where engagement authenticity < 0.3 and temporal interaction pattern = 'Automated'", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Serial Violator", "description": "Identifies repeat policy violators.", "definition": "An account with suspicious history count > 2 and warning count > 5", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Amplification Network", "description": "Identifies coordinated content amplification.", "definition": "A cluster where cluster role = 'Amplifier' and coordination score > 0.8", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "detection score", "description": "Illustrates confidence value in detection scores.", "definition": "Ranges from 0 to 1. Values above 0.8 indicate high-confidence detections, while values below 0.3 suggest uncertain results requiring manual review.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "coordination score", "description": "Illustrates coordination score meaning.", "definition": "Ranges from 0 to 1. Scores above 0.7 strongly indicate coordinated behavior, while scores below 0.2 suggest independent actions.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "engagement authenticity score", "description": "Illustrates engagement authenticity score.", "definition": "Ranges from 0 to 1. 
Scores above 0.9 indicate highly authentic engagement, while scores below 0.4 suggest artificial or automated engagement.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "content uniqueness scoring", "description": "Illustrates content uniqueness scoring.", "definition": "Ranges from 0 to 1. Scores above 0.8 indicate highly unique content, while scores below 0.3 suggest duplicate or templated content.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "conversation naturalness value", "description": "Illustrates conversation naturalness value.", "definition": "Ranges from 0 to 1. Values above 0.7 indicate natural human conversation, while values below 0.3 suggest automated or scripted responses.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "reputation scoring", "description": "Illustrates IP reputation scoring.", "definition": "Ranges from 0 to 1. Scores above 0.8 indicate trusted IPs, while scores below 0.3 suggest potentially malicious or compromised IPs.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "profile completeness degree", "description": "Illustrates profile completeness scoring.", "definition": "Ranges from 0 to 1. Values above 0.8 indicate well-maintained profiles, while values below 0.4 suggest placeholder or abandoned profiles.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "trust value", "description": "Illustrates trust value meaning.", "definition": "Ranges from 0 to 1. Values above 0.7 indicate highly trusted accounts, while values below 0.3 suggest untrusted or suspicious accounts.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "impact value", "description": "Illustrates impact value meaning.", "definition": "Ranges from 0 to 1. Values above 0.7 indicate high-impact violations requiring immediate attention, while values below 0.3 suggest low-priority issues.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "bot-likely score", "description": "Illustrates bot likelihood scoring.", "definition": "Ranges from 0 to 100. 
Scores above 70 strongly indicate bot behavior, while scores below 20 suggest human-like behavior patterns.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Cross-Platform Risk Index (CPRI)", "description": "Evaluates risk across multiple platform types for the same account.", "definition": "It is defined as: CPRI = SRS * (1 + 0.2 * login country count).", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 31, "knowledge": "Network Manipulation Index (NMI)", "description": "Measures the extent of network manipulation considering bot behavior and coordination.", "definition": "It is defined as: NMI = 0.6 * BBI + 0.4 * CAS.", "type": "calculation_knowledge", "children_knowledge": [3, 6]} +{"id": 32, "knowledge": "Enhanced Trust Score (ETS)", "description": "Calculates trust score considering both profile credibility and content authenticity.", "definition": "It is defined as: ETS = 0.5 * PCI + 0.5 * CAS.", "type": "calculation_knowledge", "children_knowledge": [5, 1]} +{"id": 33, "knowledge": "Coordinated Bot Risk (CBR)", "description": "Assesses risk from coordinated bot networks.", "definition": "It is defined as: CBR = BBI * CAS * cluster size.", "type": "calculation_knowledge", "children_knowledge": [3, 6]} +{"id": 34, "knowledge": "Content Security Index (CSI)", "description": "Evaluates content security considering manipulation and authenticity.", "definition": "It is defined as: CSI = 0.7 * (1 - CMS) + 0.3 * CAS.", "type": "calculation_knowledge", "children_knowledge": [8, 1]} +{"id": 35, "knowledge": "Automated Behavior Score (ABS)", "description": "Measures degree of automation in account behavior.", "definition": "It is defined as: ABS = 0.4 * BBI + 0.3 * TEI + 0.3 * (1 - CAS).", "type": "calculation_knowledge", "children_knowledge": [3, 7, 1]} +{"id": 36, "knowledge": "Network Trust Score (NTS)", "description": "Evaluates trustworthiness of account's network connections.", "definition": "It is defined as: NTS = PCI * (1 - NGV) * (1 - CBR).", "type": "calculation_knowledge", "children_knowledge": [5, 2, 33]} +{"id": 37, "knowledge": "Content Impact Score (CIS)", "description": "Estimates the potential impact of manipulated content by blending manipulation intensity, moderation priority, and account influence.", "definition": "It is defined as: CIS = cms * mps * network influence centrality nic.", "type": "calculation_knowledge", "children_knowledge": [8, 9]} +{"id": 38, "knowledge": "Authentication Risk Score (ARS)", "description": "Assesses authentication-related risks.", "definition": "It is defined as: ARS = 0.5 * TEI + 0.3 * (1 - PCI) + 0.2 * SRS.", "type": "calculation_knowledge", "children_knowledge": [7, 5, 4]} +{"id": 39, "knowledge": "Behavioral Anomaly Score (BAS)", "description": "Quantifies unusual behavior patterns.", "definition": "It is defined as: BAS = 0.4 * BBI + 0.4 * AAF + 0.2 * NGV.", "type": "calculation_knowledge", "children_knowledge": [3, 0, 2]} +{"id": 40, "knowledge": "High-Risk Bot Network", "description": "Identifies dangerous coordinated bot networks.", "definition": "A Bot Network with CBR > 0.8 and SRS > 0.7", "type": "domain_knowledge", "children_knowledge": [33, 4]} +{"id": 41, "knowledge": "Trusted Content Creator", "description": "Identifies reliable content creators.", "definition": "An account with ETS > 0.8 and CIS < 0.2", "type": "domain_knowledge", "children_knowledge": [32, 37]} +{"id": 42, "knowledge": "Authentication Risk Account", "description": "Identifies accounts with suspicious 
authentication patterns.", "definition": "An account with ARS > 0.7 and at least one VPN Abuser detection", "type": "domain_knowledge", "children_knowledge": [38, 16]} +{"id": 43, "knowledge": "Network Security Threat", "description": "Identifies accounts posing network-level security risks.", "definition": "An account with NTS < 0.3 and is part of a Bot Network", "type": "domain_knowledge", "children_knowledge": [36, 11]} +{"id": 44, "knowledge": "Content Manipulation Ring", "description": "Identifies coordinated content manipulation groups.", "definition": "A Sockpuppet Network where all accounts have CMS > 0.7", "type": "domain_knowledge", "children_knowledge": [14, 8]} +{"id": 45, "knowledge": "Automated Spam Network", "description": "Identifies automated spam distribution networks.", "definition": "A Bot Network where average ABS > 0.8 and all accounts are Content Farms", "type": "domain_knowledge", "children_knowledge": [11, 35, 13]} +{"id": 46, "knowledge": "Cross-Platform Threat", "description": "Identifies threats operating across multiple platforms.", "definition": "A High-Risk Account with CPRI > 0.9 and is part of a Sockpuppet Network", "type": "domain_knowledge", "children_knowledge": [10, 30, 14]} +{"id": 47, "knowledge": "Behavioral Anomaly Cluster", "description": "Identifies groups showing unusual behavior patterns.", "definition": "A cluster where average BAS > 0.8 and contains at least one Bot Network", "type": "domain_knowledge", "children_knowledge": [39, 11]} +{"id": 48, "knowledge": "Mass Manipulation Campaign", "description": "Identifies large-scale manipulation efforts.", "definition": "A Content Manipulation Ring where CIS > 0.8 for all accounts", "type": "domain_knowledge", "children_knowledge": [44, 37]} +{"id": 49, "knowledge": "Advanced Persistent Threat", "description": "Identifies sophisticated, persistent security threats.", "definition": "A High-Risk Bot Network with NMI > 0.9 and TEI > 0.8", "type": "domain_knowledge", "children_knowledge": [40, 31, 7]} +{"id": 50, "knowledge": "Temporal Pattern Deviation Score (TPDS)", "description": "Measures deviation from established temporal activity patterns.", "definition": "It is calculated by comparing the observed hourly activity of an account with its expected pattern: for each of the 24 hours, take the percentage deviation between observed and expected frequency, square these deviations, sum them for all hours, and finally take the square root of that total.", "type": "calculation_knowledge", "children_knowledge": [0, 39]} +{"id": 51, "knowledge": "Network Influence Centrality (NIC)", "description": "Quantifies account's position and influence in interaction network.", "definition": "It is defined as: NIC = 0.4 * connection quality score + 0.3 * network influence score + 0.3 * interaction diversity.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 52, "knowledge": "Multi-Account Correlation Index (MACI)", "description": "Measures behavioral correlation across linked accounts.", "definition": "It is the average behavioural correlation across every pair of linked accounts, computed as the mean Pearson correlation coefficient for the chosen activity metric.", "type": "calculation_knowledge", "children_knowledge": [35, 31]} +{"id": 53, "knowledge": "Reputation Volatility Index (RVI)", "description": "Quantifies stability of account reputation over time.", "definition": "It is the coefficient of variation (standard deviation divided by mean) of the account's reputation score over time, scaled by 
the relative rate of change during the measurement window.", "type": "calculation_knowledge", "children_knowledge": [5, 4]} +{"id": 54, "knowledge": "Content Distribution Pattern Score (CDPS)", "description": "Analyzes patterns in content posting and sharing.", "definition": "It is defined as: CDPS = 0.4 * entropy of post times + 0.3 * burstiness + 0.3 * (1 - periodicity).", "type": "calculation_knowledge", "children_knowledge": [8, 37]} +{"id": 55, "knowledge": "Behavioral Consistency Score (BCS)", "description": "Measures consistency of account behavior patterns.", "definition": "It is defined as: BCS = (1 - TPDS) * (1 - RVI) * (1 − behavior pattern deviation divided by 100).", "type": "calculation_knowledge", "children_knowledge": [50, 53]} +{"id": 56, "knowledge": "Network Synchronization Index (NSI)", "description": "Quantifies synchronized activities across account clusters.", "definition": "It is the mean time synchronisation score for all unique account pairs in the cluster, multiplied by the multi-account correlation index, thereby emphasising both pair-wise synchronicity and overall behavioural alignment.", "type": "calculation_knowledge", "children_knowledge": [52, 31]} +{"id": 57, "knowledge": "Content Amplification Effect (CAE)", "description": "Measures the cascade effect of content sharing.", "definition": "It is defined as: CAE = cis * NIC * log(1 + reshare count).", "type": "calculation_knowledge", "children_knowledge": [37, 51]} +{"id": 58, "knowledge": "Authentication Pattern Score (APS)", "description": "Evaluates consistency of authentication behaviors.", "definition": "It is defined as: APS = (1 - TEI) * BCS * (1 - authentication anomaly count divided by 100).", "type": "calculation_knowledge", "children_knowledge": [7, 55]} +{"id": 59, "knowledge": "Cross-Platform Correlation Score (CPCS)", "description": "Measures behavioral correlation across platforms.", "definition": "It is defined as: CPCS = CPRI * MACI * (1 + platform link count divided by 10).", "type": "calculation_knowledge", "children_knowledge": [30, 52]} +{"id": 60, "knowledge": "Coordinated Influence Operation", "description": "Identifies sophisticated influence campaigns.", "definition": "A network where NSI > 0.8 and CAE > 0.7 and contains at least one Content Manipulation Ring", "type": "domain_knowledge", "children_knowledge": [56, 57, 44]} +{"id": 61, "knowledge": "Behavioral Pattern Anomaly", "description": "Identifies accounts with inconsistent behavioral patterns.", "definition": "An account with BCS < 0.3 and TPDS > 0.7 and is not a Trusted Account", "type": "domain_knowledge", "children_knowledge": [55, 50, 12]} +{"id": 62, "knowledge": "Cross-Platform Bot Network", "description": "Identifies coordinated bot activity across platforms.", "definition": "A Bot Network where CPCS > 0.8 and all accounts have similar MACI patterns", "type": "domain_knowledge", "children_knowledge": [11, 59, 52]} +{"id": 63, "knowledge": "Authentication Anomaly Cluster", "description": "Identifies groups with suspicious authentication patterns.", "definition": "A cluster where average APS < 0.3 and contains at least one Authentication Risk Account", "type": "domain_knowledge", "children_knowledge": [58, 42]} +{"id": 64, "knowledge": "Network Influence Hub", "description": "Identifies accounts with unusual influence patterns.", "definition": "An account with NIC > 0.8 and CAE > 0.7 that is part of a Coordinated Influence Operation", "type": "domain_knowledge", "children_knowledge": [51, 57, 60]} +{"id": 65, 
"knowledge": "Reputation Manipulation Ring", "description": "Identifies coordinated reputation manipulation.", "definition": "A Content Manipulation Ring where all accounts have RVI > 0.7 and similar CDPS patterns", "type": "domain_knowledge", "children_knowledge": [44, 53, 54]} +{"id": 66, "knowledge": "Synchronized Behavior Cluster", "description": "Identifies groups with highly synchronized activities.", "definition": "A cluster where NSI > 0.9 and all accounts have similar BCS patterns", "type": "domain_knowledge", "children_knowledge": [56, 55]} +{"id": 67, "knowledge": "Multi-Platform Threat Network", "description": "Identifies sophisticated cross-platform threats.", "definition": "A Cross-Platform Threat where CPCS > 0.8 and all accounts are part of a Synchronized Behavior Cluster", "type": "domain_knowledge", "children_knowledge": [46, 59, 66]} +{"id": 68, "knowledge": "Advanced Influence Campaign", "description": "Identifies sophisticated influence operations.", "definition": "A Mass Manipulation Campaign containing at least one Network Influence Hub and high NSI", "type": "domain_knowledge", "children_knowledge": [48, 64, 56]} +{"id": 69, "knowledge": "Persistent Pattern Anomaly", "description": "Identifies sustained abnormal behavior patterns.", "definition": "A Behavioral Pattern Anomaly that persists for over 30 days and maintains high TPDS", "type": "domain_knowledge", "children_knowledge": [61, 50]} +{"id": 70, "knowledge": "TEI quartile", "description": "Categorizes accounts into four groups based on their TEI values.", "definition": "Assign each account to one of four quartiles based on its TEI distribution: quartile-1 for the lowest 25 percent of TEI values, quartile-2 for the next 25 percent, quartile-3 for the third quartile, and quartile-4 for the top 25 percent.", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 71, "knowledge": "Latest Bot Likelihood Score (LBS)", "description": "The most recent bot likelihood score for an account based on security detection timestamps.", "definition": "For an account, use the bot-likelihood score that was recorded at its most recent detection timestamp.", "type": "calculation_knowledge", "children_knowledge": [29]} +{"id": 72, "knowledge": "Reputational Risk", "description": "Measures the potential risk to an account's reputation based on past moderation actions and low reputation scores.", "definition": "An account with reputscore < 30 and high abuserepnum, prioritized by the top quartile of abuse reports.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 73, "knowledge": "High-Impact Amplifier", "description": "Identifies accounts that exert strong influence while posting at a high daily rate, acting as key amplifiers in coordinated networks.", "definition": "An account with nic > 0.8 and posts per day > 30.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 74, "knowledge": "High-Activity Account", "description": "Identifies accounts with elevated engagement levels based on the number of sessions or total posting frequency.", "definition": "An account with session_count > 1000 or total_post_frequency > 50.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 75, "knowledge": "Session Count (SC)", "description": "Measures the total number of session records associated with an account.", "definition": "Session count equals the number of session entries where session profile reference matches the profile key and profile account reference equals the target account.", "type": 
"calculation_knowledge", "children_knowledge": -1} +{"id": 76, "knowledge": "Total Post Frequency (TPF)", "description": "Measures the total posting frequency across all sessions for an account.", "definition": "Total post frequency equals the sum of post frequencies from all content entries where content session reference matches session reference, session profile reference equals the profile key, and profile account reference equals the target account.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 77, "knowledge": "High-Activity Account", "description": "Identifies accounts with elevated engagement levels based on the number of sessions or total posting frequency.", "definition": "An account with session count > 1000 or total post frequency > 50.", "type": "domain_knowledge", "children_knowledge": [75, 76]} +{"id": 78, "knowledge": "influence ranking by NIC", "description": "A ranking system that orders accounts based on their NIC scores from highest to lowest.", "definition": "The rank of an account is equal to the number of other accounts with higher network influence centrality score, plus one.", "type": "calculation_knowledge", "children_knowledge": [51]} +{"id": 79, "knowledge": "TEI Risk Category", "description": "A categorical risk level assigned based on an account's TEI Quartile.", "definition": "A category assigned as 'Low Risk' (Quartile 1), 'Moderate Risk' (Quartile 2), 'High Risk' (Quartile 3), or 'Very High Risk' (Quartile 4) based on the account's calculated TEI Quartile.", "type": "domain_knowledge", "children_knowledge": [70]} +{"id": 80, "knowledge": "cluster identifier", "description": "A key used to group related accounts identified as part of the same network or cluster.", "definition": "In this context, the platform identifier (`platident`) associated with the accounts in the potential Amplification Network is the cluster identifier.", "type": "domain_knowledge", "children_knowledge": [19]} +{"id": 81, "knowledge": "member count", "description": "The total number of unique accounts within an identified cluster.", "definition": "It is defined as: Calculated using COUNT(DISTINCT account_index) for accounts grouped by the cluster identifier.", "type": "calculation_knowledge", "children_knowledge": [80]} +{"id": 82, "knowledge": "maximum coordination score", "description": "The highest coordination score observed among the members of an identified cluster.", "definition": "It refers to the maximum coordination score value among all accounts within the given cluster.", "type": "calculation_knowledge", "children_knowledge": [80]} +{"id": 83, "knowledge": "member account IDs", "description": "A collection (array) of the unique account indexes belonging to an identified cluster.", "definition": "It refers to the list of account identifiers for all members included in a specific cluster.", "type": "calculation_knowledge", "children_knowledge": [80]} +{"id": 84, "knowledge": "last activity proxy time", "description": "An estimated timestamp of the last known activity for an account, used when direct session timestamps are unavailable or insufficient.", "definition": "It is defined as: Derived as max(detect time) from associated securitydetection records for an account.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 85, "knowledge": "review priority", "description": "A flag or status assigned to an account to indicate the need or priority level for manual review.", "definition": "A field in the `account` table set to a specific 
value like 'Review_Inactive_Trusted' to signal that an otherwise trusted account requires review due to prolonged inactivity.", "type": "value_illustration", "children_knowledge": [12, 84]} +{"id": 86, "knowledge": "Account Inactivity", "description": "A condition indicating that an account has not demonstrated recent activity based on available data proxies.", "definition": "Condition met when: last activity proxy time < (current date - 90 days)", "type": "domain_knowledge", "children_knowledge": [84]} diff --git a/fake_account/fake_account_schema.txt b/fake_account/fake_account_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..9f6ef363c82a7de9d8efd77b17c6236dc1086c53 --- /dev/null +++ b/fake_account/fake_account_schema.txt @@ -0,0 +1,252 @@ +CREATE TABLE "security_sessions" ( +acct_gate text NOT NULL, +session_telemetry jsonb NULL, + PRIMARY KEY (acct_gate), + FOREIGN KEY (acct_gate) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_gate session_telemetry +----------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +ACC7210284 {'ip_reputation': {'tor_flag': 'Yes', 'proxy_hits': 98, 'country_count': 14, 'registration_ip': '186.221.8.216', 'ip_reputation_score': 0.729}, 'session_stats': {'session_count': 419, 'avg_session_duration_min': 2766}, 'vpn_usage_pct': '0.00%', 'device_profile': {'device_count': 20, 'ua_consistency': 0.639, 'device_mix_json': {'Mobile': 0.8966897433246256, 'Tablet': 0.3299024746948329, 'Desktop': 0.5115558690500416}, 'browser_diversity_idx': 0.016}, 'login_behavior': {'login_chronology': 'Burst', 'login_freq_per_day': 'Medium', 'location_variability': 0.311}, 'activity_pattern': {'activity_regularity': 0.313, 'activity_spread_code': "{'Morning': 0.08864600070960105, 'Afternoon': 0.7421617693224358, 'Night': 0.3718840943461962}"}} +ACC2686094 {'ip_reputation': {'tor_flag': 'Suspected', 'proxy_hits': 95, 'country_count': 8, 'registration_ip': '92.98.237.121', 'ip_reputation_score': 0.387}, 'session_stats': {'session_count': 78, 'avg_session_duration_min': 946.4}, 'vpn_usage_pct': '0.30%', 'device_profile': {'device_count': 19, 'ua_consistency': 0.655, 'device_mix_json': {'Mobile': 0.9878046413997664, 'Tablet': 0.9124561678632256, 'Desktop': 0.15058935932480988}, 'browser_diversity_idx': 0.435}, 'login_behavior': {'login_chronology': 'Burst', 'login_freq_per_day': 'High', 'location_variability': 0.868}, 'activity_pattern': {'activity_regularity': 0.783, 'activity_spread_code': "{'Morning': 0.08875266288036487, 'Afternoon': 0.0331719695193855, 'Night': 0.9836352647098076}"}} +ACC7106934 {'ip_reputation': {'tor_flag': 'No', 'proxy_hits': 16, 'country_count': 15, 'registration_ip': '187.186.211.81', 'ip_reputation_score': 0.576}, 'session_stats': {'session_count': 220, 'avg_session_duration_min': 544.7}, 'vpn_usage_pct': '0.20%', 'device_profile': 
{'device_count': 3, 'ua_consistency': 0.548, 'device_mix_json': {'Mobile': 0.022321611427355448, 'Tablet': 0.714207407707902, 'Desktop': 0.9057269667332837}, 'browser_diversity_idx': 0.063}, 'login_behavior': {'login_chronology': None, 'login_freq_per_day': 'High', 'location_variability': 0.686}, 'activity_pattern': {'activity_regularity': 0.609, 'activity_spread_code': "{'Morning': 0.027120493994065686, 'Afternoon': 0.8886273105427136, 'Night': 0.9271582067840369}"}} +... + + +CREATE TABLE "content_activity" ( +acct_slot text NOT NULL, +content_metrics jsonb NULL, + PRIMARY KEY (acct_slot), + FOREIGN KEY (acct_slot) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_slot content_metrics +----------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +ACC7210284 {'posting': {'total_posts': 3713, 'posts_per_day': None, 'post_gap_variability': 0.835}, 'link_media': {'url_freq': 0.734, 'url_diversity': 0.028, 'media_reuse_rate': 0.109, 'media_upload_rate': 0.733}, 'language_tags': {'hashtag_ratio': 0.829, 'mention_ratio': '0.40%', 'language_count': 4, 'hashtag_pattern': 'Trending', 'mention_pattern': 'Normal'}, 'content_quality': {'topic_entropy': 0.859, 'content_diversity': 0.153, 'content_similarity': 0.789, 'content_uniqueness': 0.858}} +ACC2686094 {'posting': {'total_posts': 1436, 'posts_per_day': 12.2, 'post_gap_variability': 0.832}, 'link_media': {'url_freq': 0.322, 'url_diversity': 0.872, 'media_reuse_rate': 0.993, 'media_upload_rate': 0.355}, 'language_tags': {'hashtag_ratio': 0.124, 'mention_ratio': '0.50%', 'language_count': 4, 'hashtag_pattern': 'Normal', 'mention_pattern': 'Random'}, 'content_quality': {'topic_entropy': 0.699, 'content_diversity': 0.546, 'content_similarity': 0.137, 'content_uniqueness': 0.365}} +ACC7106934 {'posting': {'total_posts': 789, 'posts_per_day': 92.3, 'post_gap_variability': 0.736}, 'link_media': {'url_freq': 0.715, 'url_diversity': 0.479, 'media_reuse_rate': None, 'media_upload_rate': 0.286}, 'language_tags': {'hashtag_ratio': 0.823, 'mention_ratio': '0.70%', 'language_count': 1, 'hashtag_pattern': 'Random', 'mention_pattern': 'Normal'}, 'content_quality': {'topic_entropy': 0.341, 'content_diversity': 0.907, 'content_similarity': 0.557, 'content_uniqueness': 0.98}} +... + + +CREATE TABLE "platforms" ( +PLT_CODE text NOT NULL, +PLT_KIND text NOT NULL, + PRIMARY KEY (PLT_CODE) +); + +First 3 rows: +PLT_CODE PLT_KIND +---------- -------------- +PL331 Microblog +PL784 Social Network +PL235 Social Network +... 
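Example query (illustrative sketch, not a gold solution): one way to unpack the session_telemetry JSONB of security_sessions, approximating the knowledge-base Technical Evasion Index (TEI) under the assumption that vpn_usage_pct, ip_reputation.proxy_hits and ip_reputation.country_count are the VPN ratio, proxy count and login country count that formula refers to.

-- TEI = 0.4 * VPN ratio + 0.3 * proxy count / 10 + 0.3 * login country count / 20 (assumed field mapping)
SELECT
  acct_gate,
  0.4 * replace(session_telemetry ->> 'vpn_usage_pct', '%', '')::numeric / 100   -- '0.00%' -> 0.0000
  + 0.3 * (session_telemetry -> 'ip_reputation' ->> 'proxy_hits')::numeric / 10
  + 0.3 * (session_telemetry -> 'ip_reputation' ->> 'country_count')::numeric / 20 AS tei
FROM security_sessions;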
+ + +CREATE TABLE "accounts" ( +acct_ref text NOT NULL, +plt_key text NULL, +OrigStamp date NULL, +AGE_D bigint NULL, +StateFlag text NULL, +acct_form text NULL, +VerifyMark text NULL, +ProfileScore real NULL, + PRIMARY KEY (acct_ref), + FOREIGN KEY (plt_key) REFERENCES platforms(PLT_CODE) +); + +First 3 rows: +acct_ref plt_key OrigStamp AGE_D StateFlag acct_form VerifyMark ProfileScore +---------- --------- ----------- ------- ----------- ----------- ------------ -------------- +ACC7210284 PL331 2023-12-26 393 Active Personal Unverified 0.167 +ACC2686094 PL784 2023-03-20 353 Deleted Bot Unverified 0.32 +ACC7106934 PL235 2023-07-12 244 Active Hybrid Pending 0.963 +... + + +CREATE TABLE "profiles" ( +acct_anchor text NOT NULL, +HandleMask text NULL, +usrn_Ent real NULL, +USR_LEN bigint NULL, +UsrPtn text NULL, +DispChg bigint NULL, +pic_form text NULL, +PicScore real NULL, +BIO_L bigint NULL, +BioLang text NULL, +BioLinks bigint NULL, +BioKwHit text NULL, +LocFlag text NULL, +LOC_MOV bigint NULL, +mail_dom text NULL, +TelState text NULL, + PRIMARY KEY (acct_anchor), + FOREIGN KEY (acct_anchor) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_anchor HandleMask usrn_Ent USR_LEN UsrPtn DispChg pic_form PicScore BIO_L BioLang BioLinks BioKwHit LocFlag LOC_MOV mail_dom TelState +------------- ------------ ---------- --------- ---------- --------- ---------- ---------- ------- --------- ---------- ---------- --------- --------- ---------- ---------- +ACC7210284 Sequential 0.835 13 Random 8 Stock 0.772 118 en 3 Suspicious Fake 2 Free Invalid +ACC2686094 Template 0.721 5 Generated nan 0.762 72 en 2 Suspicious Fake 0 Free +ACC7106934 Template 0.221 11 Meaningful 0 Stock 0.237 nan multiple 2 Normal No 0 Unknown +... + + +CREATE TABLE "network_metrics" ( +acct_node text NOT NULL, +FollowNum bigint NULL, +FollowgNum bigint NULL, +FollGrow real NULL, +FingGrow real NULL, +FollRatio real NULL, +MutConn real NULL, +ConnGrowPtn text NULL, +ConnQual real NULL, +EngRate real NULL, +EngAuth real NULL, +LikeRt real NULL, +ComRt real NULL, +ShareRt real NULL, +IntRecip real NULL, +IntDiv real NULL, +TempIntPtn text NULL, + PRIMARY KEY (acct_node), + FOREIGN KEY (acct_node) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_node FollowNum FollowgNum FollGrow FingGrow FollRatio MutConn ConnGrowPtn ConnQual EngRate EngAuth LikeRt ComRt ShareRt IntRecip IntDiv TempIntPtn +----------- ----------- ------------ ---------- ---------- ----------- --------- ------------- ---------- --------- --------- -------- ------- --------- ---------- -------- ------------ +ACC7210284 32353 53330 0.697 0.899 5.162 0.964 Suspicious 0.819 0.132 0.954 0.738 0.282 0.696 0.817 0.681 Natural +ACC2686094 70241 97273 0.018 0.45 7.752 0.444 Burst 0.729 0.525 0.241 0.335 0.917 0.729 0.665 0.905 Periodic +ACC7106934 47575 75481 0.621 0.89 0.887 0.241 Suspicious 0.153 0.77 0.675 0.504 0.618 0.431 0.033 0.626 Periodic +... 
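Example query (illustrative sketch, not a gold solution): the knowledge-base Network Growth Velocity (NGV), assuming FollGrow and FingGrow hold the follower and following growth rates.

-- NGV = sqrt(follower growth rate^2 + following growth rate^2)
SELECT
  acct_node,
  sqrt(power(FollGrow, 2) + power(FingGrow, 2)) AS ngv
FROM network_metrics;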
+ + +CREATE TABLE "interaction_metrics" ( +acct_dm text NOT NULL, +MsgSim real NULL, +MsgFreq real NULL, +MsgTargetDiv real NULL, +RespTimePtn text NULL, +ConvNat real NULL, +SentVar real NULL, +LangSoph real NULL, +TxtUniq real NULL, +KeyPtnHit real NULL, +TopCoh real NULL, + PRIMARY KEY (acct_dm), + FOREIGN KEY (acct_dm) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_dm MsgSim MsgFreq MsgTargetDiv RespTimePtn ConvNat SentVar LangSoph TxtUniq KeyPtnHit TopCoh +---------- -------- --------- -------------- ------------- --------- --------- ---------- --------- ----------- -------- +ACC7210284 0.041 60.1 0.498 Natural 0.825 0.005 0.03 0.44 0.589 0.856 +ACC2686094 0.428 14.3 0.78 Delayed 0.359 0.974 0.949 0.62 0.488 0.686 +ACC7106934 0.73 74.7 0.944 Random 0.697 0.381 0.218 0.518 0.458 0.021 +... + + +CREATE TABLE "behavioral_scores" ( +acct_beh text NOT NULL, +behavioral_anomaly_scores jsonb NULL, + PRIMARY KEY (acct_beh), + FOREIGN KEY (acct_beh) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_beh behavioral_anomaly_scores +---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +ACC7210284 {'pattern_scores': {'content_pattern': 0.178, 'network_pattern': 0.307, 'profile_pattern': 0.869, 'behavior_pattern': 0.087, 'temporal_pattern': 0.883, 'technical_pattern': 0.246}, 'automation_spam': {'spam_score': 0.315, 'bot_likelihood': 0.203, 'automated_behavior': 0.789}, 'commercial_intent_score': 0.558} +ACC2686094 {'pattern_scores': {'content_pattern': 0.924, 'network_pattern': 0.106, 'profile_pattern': 0.114, 'behavior_pattern': 0.993, 'temporal_pattern': None, 'technical_pattern': 0.827}, 'automation_spam': {'spam_score': 0.093, 'bot_likelihood': None, 'automated_behavior': 0.826}, 'commercial_intent_score': 0.461} +ACC7106934 {'pattern_scores': {'content_pattern': None, 'network_pattern': 0.217, 'profile_pattern': 0.412, 'behavior_pattern': 0.079, 'temporal_pattern': 0.187, 'technical_pattern': 0.618}, 'automation_spam': {'spam_score': 0.672, 'bot_likelihood': 0.204, 'automated_behavior': 0.164}, 'commercial_intent_score': 0.68} +... 
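Example query (illustrative sketch, not a gold solution): the knowledge-base Bot Behavior Index (BBI), assuming automation_spam.bot_likelihood and automation_spam.automated_behavior inside behavioral_anomaly_scores, together with interaction_metrics.ConvNat, are the three component scores.

-- BBI = 0.4 * bot likelihood + 0.3 * automated behavior + 0.3 * (1 - conversation naturalness)
-- A JSON null bot_likelihood propagates to a NULL BBI for that account.
SELECT
  b.acct_beh,
  0.4 * (b.behavioral_anomaly_scores -> 'automation_spam' ->> 'bot_likelihood')::numeric
  + 0.3 * (b.behavioral_anomaly_scores -> 'automation_spam' ->> 'automated_behavior')::numeric
  + 0.3 * (1 - i.ConvNat) AS bbi
FROM behavioral_scores b
JOIN interaction_metrics i ON i.acct_dm = b.acct_beh;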
+ + +CREATE TABLE "risk_and_moderation" ( +acct_risk text NOT NULL, +risk_profile jsonb NULL, + PRIMARY KEY (acct_risk), + FOREIGN KEY (acct_risk) REFERENCES accounts(acct_ref) +); + +First 3 rows: +acct_risk risk_profile +----------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +ACC7210284 {'risk_scores': {'risk_value': 0.155, 'trust_score': None, 'impact_score': 0.544, 'threat_level': 'Critical', 'reputation_score': None, 'credibility_score': 0.441, 'authenticity_score': 0.15}, 'violation_history': {'abuse_count': 24, 'appeal_count': 3, 'warning_count': 0, 'suspension_history_json': 5, 'violation_distribution_json': {'Fake': 0.4490233249279981, 'Spam': 0.9353273606557855, 'Abuse': 0.08219689664928109}}} +ACC2686094 {'risk_scores': {'risk_value': 0.969, 'trust_score': 0.767, 'impact_score': 0.431, 'threat_level': 'High', 'reputation_score': 0.609, 'credibility_score': 0.045, 'authenticity_score': 0.187}, 'violation_history': {'abuse_count': 31, 'appeal_count': 0, 'warning_count': 6, 'suspension_history_json': 5, 'violation_distribution_json': {'Fake': 0.17330889342459832, 'Spam': 0.19905311146545834, 'Abuse': 0.5063087283261145}}} +ACC7106934 {'risk_scores': {'risk_value': None, 'trust_score': 0.703, 'impact_score': 0.017, 'threat_level': 'Low', 'reputation_score': 0.741, 'credibility_score': 0.08, 'authenticity_score': None}, 'violation_history': {'abuse_count': 81, 'appeal_count': 5, 'warning_count': 5, 'suspension_history_json': 5, 'violation_distribution_json': {'Fake': 0.5419172233957493, 'Spam': 0.7767880747042554, 'Abuse': 0.27787861651690926}}} +... + + +CREATE TABLE "account_clusters" ( +acct_bridge text NOT NULL, +clu_ref text NOT NULL, + PRIMARY KEY (acct_bridge, clu_ref), + FOREIGN KEY (acct_bridge) REFERENCES accounts(acct_ref), + FOREIGN KEY (clu_ref) REFERENCES cluster_analysis(CLSTR_PIN) +); + +First 3 rows: +acct_bridge clu_ref +------------- --------- +ACC7210284 CL0029 +ACC7210284 CL0007 +ACC7210284 CL0190 +... + + +CREATE TABLE "cluster_analysis" ( +CLSTR_PIN text NOT NULL, +ClusterQty bigint NULL, +CluRole text NULL, +NetInfl real NULL, +CoordScore real NULL, + PRIMARY KEY (CLSTR_PIN) +); + +First 3 rows: +CLSTR_PIN ClusterQty CluRole NetInfl CoordScore +----------- ------------ ---------------- --------- ------------ +CL0029 5 SocialGroup 0.362 0.363 +CL0007 8 InfluenceNetwork 0.951 0.342 +CL0190 7 Community 0.192 1 +... 
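Example query (illustrative sketch, not a gold solution): the knowledge-base Security Risk Score (SRS), assuming risk_scores.risk_value, risk_scores.trust_score and risk_scores.impact_score inside risk_profile are the risk, trust and impact values the formula uses.

-- SRS = 0.4 * risk value + 0.3 * (1 - trust value) + 0.3 * impact value
SELECT
  acct_risk,
  0.4 * (risk_profile -> 'risk_scores' ->> 'risk_value')::numeric
  + 0.3 * (1 - (risk_profile -> 'risk_scores' ->> 'trust_score')::numeric)
  + 0.3 * (risk_profile -> 'risk_scores' ->> 'impact_score')::numeric AS srs
FROM risk_and_moderation;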
+ + +CREATE TABLE "monitoring" ( +RecKey text NOT NULL, +snap_ts timestamp with time zone NULL, +acct_mon text NULL, +DetectSrc text NULL, +DetectConf real NULL, +MonPrio text NULL, +InvestState text NULL, +ActionDone text NULL, +RevFreq text NULL, +LastRev date NULL, +NextRev date NULL, +ConfScore real NULL, +FPP real NULL, +MethRel real NULL, +ModelVer text NULL, +FeatVer text NULL, +LastUp timestamp with time zone NULL, +UpFreqH bigint NULL, + PRIMARY KEY (RecKey), + FOREIGN KEY (acct_mon) REFERENCES accounts(acct_ref) +); + +First 3 rows: +RecKey snap_ts acct_mon DetectSrc DetectConf MonPrio InvestState ActionDone RevFreq LastRev NextRev ConfScore FPP MethRel ModelVer FeatVer LastUp UpFreqH +-------- ------------------------- ---------- ------------- ------------ --------- ------------- ------------ --------- ---------- ---------- ----------- ----- --------- ---------- --------- ------------------------- --------- +FA410087 2024-08-21 08:30:21+08:00 ACC7210284 Manual Review nan Low Pending Monthly 2025-01-01 2025-04-10 0.253 0.692 0.927 v1.5 f1.7 2025-02-18 12:00:00+08:00 3 +FA122676 2025-02-02 08:30:21+08:00 ACC2686094 User Report 0.851 Medium Active Suspension Quarterly 2025-01-15 2025-05-05 0.754 0.885 0.342 v4.6 f3.2 2025-02-19 01:18:00+08:00 93 +FA731882 2024-12-24 08:30:21+08:00 ACC7106934 User Report 0.978 Urgent Active Warning Quarterly 2024-11-29 2025-03-28 0.275 0.906 0.279 v1.6 f5.1 2025-02-18 18:57:00+08:00 57 +... diff --git a/households/households_column_meaning_base.json b/households/households_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..412f1f2a116d6789dd3c778e333284de2288e148 --- /dev/null +++ b/households/households_column_meaning_base.json @@ -0,0 +1,51 @@ +{ + "households|locations|regioncode": "Administrative region code or identifier representing the geographic administrative division where households are located. Forms part of composite primary key with ZoneNum. EX. Taguatinga,Samambaia ", + "households|locations|zonenum": "Macrozone numerical identifier representing specific geographic zones within administrative regions for detailed location classification. Forms part of composite primary key with RegionCode. EX.315,222,332", + "households|infrastructure|infraref": "A SERIAL primary key uniquely identifying each infrastructure configuration record in the database. EX. 1,2,3", + "households|infrastructure|wateraccess": "Piped water supply access type indicating the household's water infrastructure connectivity. Part of unique constraint combination. ex.Yes, available at least in one room", + "households|infrastructure|roadsurface": "Street pavement condition and type describing the road infrastructure quality around the household location. Part of unique constraint combination. EX. Asphalt, concrete;Gravel surface", + "households|infrastructure|parkavail": "Private parking space availability indicating whether households have dedicated parking facilities. Part of unique constraint combination. EX.Available, not available", + "households|households|housenum": "A BIGINT primary key uniquely identifying each household unit in the database system. EX.3,4,7", + "households|households|residentcount": "Number of people currently residing in the household, representing the total household size including all family members and occupants. EX. 1,2 3", + "households|households|locregion": "Foreign key referencing locations.RegionCode, indicating the administrative region where this household is located. 
Part of composite foreign key constraint. EX.Taguatinga,Samambaia", + "households|households|loczone": "Foreign key referencing locations.ZoneNum, indicating the specific macrozone within the administrative region. Part of composite foreign key constraint. EX.315,222,332", + "households|households|serviceplan": "Foreign key referencing service_types.ServiceRef, indicating which social service package or plan the household is enrolled in or eligible for. Contains NULL when household is not enrolled in any social service programs or eligibility has not been determined. EX. 1", + "households|properties|propref": "A SERIAL primary key uniquely identifying each residential property record in the database. EX. 1,2,3", + "households|properties|houselink": "Foreign key referencing households.HouseNum with unique constraint, ensuring 1:1 relationship between properties and households. EX.3,4,7", + "households|properties|infralink": "Foreign key referencing infrastructure.InfraRef, indicating which infrastructure configuration applies to this property's location and services. EX.1,2", + "households|transportation_assets|transref": "A SERIAL primary key uniquely identifying each transportation asset record for households. EX. 1,2,3", + "households|transportation_assets|housetag": "Foreign key referencing households.HouseNum with unique constraint, ensuring 1:1 relationship for transportation assets per household. EX.3,4,7", + "households|service_types|serviceref": "A SERIAL primary key uniquely identifying each social service type or package configuration available to households. Ex. 1,2,3", + "households|service_types|domestichelp": "Domestic worker service availability indicating whether households have access to or utilize domestic help services. Part of unique constraint combination. EX. No domestic workers, Yes, occasional", + "households|service_types|socsupport": "Social assistance program participation indicating the type of government or community support services available. Part of unique constraint combination. Ex. Yes, No", + "households|amenities|amenityref": "A SERIAL primary key uniquely identifying each household amenities and utilities record. EX. 1,2,3", + "households|amenities|houseid": "Foreign key referencing households.HouseNum with unique constraint, ensuring 1:1 relationship for amenities configuration per household. Ex.3,4,7", + "households|amenities|cablestatus": "Cable television service availability and subscription status indicating the household's access to cable TV services. Ex. avail, available,yes", + "households|households|socioeconomic": { + "column_meaning": "JSONB column. Groups socioeconomic characteristics of the household including tenure status, income classification, and expenditure patterns for demographic analysis.", + "fields_meaning": { + "Tenure_Type": "Household tenure classification indicating the ownership or occupancy status. Contains NULL when tenure status is unknown, transitional, or under legal dispute. Ex. OWNED, RENTED, OCCUPIED", + "Income_Bracket": "Income classification level representing the household's economic status and earning capacity. Contains NULL when income information is not disclosed, not available, or household income is irregular/informal. Ex.More than R$ 1,760 and less than R$ 2,640 ", + "Expend_Coeff": "Household expenditure coefficient as a real number representing the spending pattern or consumption multiplier factor for economic analysis. 
Contains NULL when expenditure data is not available or household spending patterns cannot be reliably calculated. EX.60.6315" + } + }, + "households|properties|dwelling_specs": { + "column_meaning": "JSONB column. Combines dwelling characteristics including structural type and room specifications for property classification and capacity assessment.", + "fields_meaning": { + "Dwelling_Class": "Dwelling type classification describing the structural and architectural category of the residential unit. Contains NULL when dwelling type is non-standard, mixed-use, or classification is pending assessment. Ex. Brickwork house, Apartment", + "Bath_Count": "Total number of bathrooms in the residential property, including full bathrooms and half-bathrooms. Contains NULL when bathroom count is not available or property has shared/communal bathroom facilities that cannot be counted per household. Ex. 1, 2, 3", + "Room_Count": "Total number of bedrooms in the residential property, representing the sleeping accommodation capacity. Contains NULL when room count is not available or property has non-standard room configurations that cannot be classified as bedrooms. Ex. 1, 2, 3" + } + }, + "households|transportation_assets|vehicleinventory": { + "column_meaning": "JSONB column. Aggregates transportation assets owned by household including counts of different vehicle types and age information for mobility analysis.", + "fields_meaning": { + "vehicle_counts": { + "Auto_Count": "Number of passenger vehicles owned by the household, defaulting to 0 if no vehicles are owned. Contains NULL when vehicle ownership status is unknown or verification is pending. ex.0", + "Bike_Count": "Number of bicycles owned by the household for transportation and recreation, defaulting to 0 if none are owned. Contains NULL when bicycle ownership information is not available or not tracked. ex.0,1", + "Motor_Count": "Number of motorcycles, scooters, or motorized two-wheelers owned by the household, defaulting to 0 if none are owned. Contains NULL when motorcycle ownership information is not available or not applicable. EX.0,1,2" + }, + "Newest_Year": "Year of manufacture for the newest vehicle in the household's transportation fleet, stored as text to accommodate various date formats and null values. Contains NULL when no vehicles are owned, vehicle age information is not available, or vehicles are too old to have reliable manufacture date records. EX.2012 To 2013, Not applicable, 2014 or newer" + } + } +} \ No newline at end of file diff --git a/households/households_kb.jsonl b/households/households_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..146a4249674e6bfd6ca1d144626f32dbdf2d8eaf --- /dev/null +++ b/households/households_kb.jsonl @@ -0,0 +1,45 @@ +{"id": 1, "knowledge": "Household Tenure Status", "description": "Illustrates the types of household tenure based on ownership or occupancy.", "definition": "Values based on schema include 'OWNED', 'RENTED', 'OCCUPIED'. The 'OWNED' status corresponds to owner-occupied properties.", "type": "value_illustration", "children_knowledge": -1} +{"id": 2, "knowledge": "Income Classification", "description": "Illustrates the income brackets for household economic status.", "definition": "Ranges from 'Low Income' to 'Very High Income'. Null indicates undisclosed or irregular income.", "type": "value_illustration", "children_knowledge": -1} +{"id": 3, "knowledge": "Water Access Type", "description": "Illustrates water supply types. 
For scoring, 'Yes' (piped access) is assigned 4 points, while other statuses are assigned 1 point.", "definition": "Values based on schema include 'Yes' and other non-piped statuses.", "type": "value_illustration", "children_knowledge": -1} +{"id": 4, "knowledge": "Road Surface Quality", "description": "Illustrates road surface types. For scoring, 'Asphalt' and 'Concrete' surfaces are assigned 4 points, while others are 1 point.", "definition": "Values based on schema include 'Asphalt', 'Concrete', 'Gravel', etc.", "type": "value_illustration", "children_knowledge": -1} +{"id": 5, "knowledge": "Parking Availability", "description": "Illustrates parking options. For scoring, 'Available' status is assigned 4 points, while 'not available' is 1 point.", "definition": "Values based on schema include 'Available' and 'not available'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 6, "knowledge": "Dwelling Type", "description": "Illustrates dwelling categories. For scoring, 'Brickwork house' and 'Condominium' are 4 points, 'Apartment' is 3 points, and all other types are 1 point.", "definition": "Values based on schema include 'Brickwork house', 'Apartment', 'Condominium', etc.", "type": "value_illustration", "children_knowledge": -1} +{"id": 7, "knowledge": "Cable TV Status", "description": "Illustrates the availability of cable television.", "definition": "Values indicating availability, based on schema, are 'avail', 'available', and 'yes'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 8, "knowledge": "Domestic Help Availability", "description": "Illustrates the types of domestic worker services for households.", "definition": "Includes 'Full-time' (daily), 'Part-time' (periodic), 'Occasional' (as-needed), 'None' (no help), and 'Live-in' (resident worker). Null indicates informal or undisclosed arrangements.", "type": "value_illustration", "children_knowledge": -1} +{"id": 9, "knowledge": "Social Support Status", "description": "Indicates whether a household participates in social assistance programs.", "definition": "Binary indicator where 'Yes' means the household accepts social assistance and 'No' means it does not. Part of unique constraint combination.", "type": "value_illustration", "children_knowledge": [2]} +{"id": 10, "knowledge": "Vehicle Year Range", "description": "Illustrates the year ranges for the newest vehicle owned by a household.", "definition": "Text ranges like '1995 to 1999', '2005 to 2009', or '2010 to 2013'. 
Null indicates no vehicles or unknown age.", "type": "value_illustration", "children_knowledge": -1} +{"id": 11, "knowledge": "Household Density", "description": "Calculates the average number of residents per bedroom in a household.", "definition": "Calculated as the number of residents divided by the number of bedrooms.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Expenditure Ratio", "description": "Calculates the household’s expenditure coefficient relative to its income bracket.", "definition": "Calculated as the expenditure coefficient divided by a numeric mapping of income bracket.", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 13, "knowledge": "Infrastructure Quality Score", "description": "Calculates a composite score for infrastructure quality.", "definition": "Calculated as the average of the individual scores for Water Access, Road Surface, and Parking Availability.", "type": "calculation_knowledge", "children_knowledge": [3, 4, 5]} +{"id": 14, "knowledge": "Vehicle Ownership Index", "description": "Calculates the total number of vehicles owned by a household.", "definition": "Calculated as the sum of car, bicycle, and motorcycle counts.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Bathroom Ratio", "description": "Calculates the number of bathrooms per resident in a household.", "definition": "Calculated as the number of bathrooms divided by the number of residents.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Service Support Score", "description": "Calculates a score for social service support based on domestic help and social assistance status.", "definition": "A weighted score combining domestic help availability and social assistance participation status (Yes/No).", "type": "calculation_knowledge", "children_knowledge": [8, 9]} +{"id": 17, "knowledge": "Dwelling Capacity", "description": "Calculates the potential capacity of a dwelling based on bedrooms and bathrooms.", "definition": "Calculated as twice the number of bedrooms plus the number of bathrooms.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Mobility Score", "description": "Calculates a household’s mobility based on vehicle ownership and newest vehicle age.", "definition": "The product of the vehicle count and a numeric mapping of the newest vehicle year.", "type": "calculation_knowledge", "children_knowledge": [10, 14]} +{"id": 19, "knowledge": "Socioeconomic Index", "description": "Calculates a composite index for household socioeconomic status.", "definition": "Calculated as a weighted sum of income score, expenditure ratio, and tenure score.", "type": "calculation_knowledge", "children_knowledge": [1, 12]} +{"id": 20, "knowledge": "Living Condition Score", "description": "Calculates a composite score for a household's living conditions.", "definition": "Calculated as a 50/50 weighted average of the Dwelling Type score and the Infrastructure Quality Score.", "type": "calculation_knowledge", "children_knowledge": [6, 13]} +{"id": 21, "knowledge": "Affluent Household", "description": "Defines a household with high socioeconomic status.", "definition": "A household with a 'Tenure_Type' of 'OWNED' and an 'Income_Bracket' of either 'High Income' or 'Very High Income'.", "type": "domain_knowledge", "children_knowledge": [1, 2]} +{"id": 22, "knowledge": "Urban Household", "description": "Defines a household located in an urban area 
based on infrastructure.", "definition": "A household with 'Municipal Piped' Water Access Type and high-quality Road Surface Quality.", "type": "domain_knowledge", "children_knowledge": [3, 4]} +{"id": 23, "knowledge": "Mobile Household", "description": "Defines a household with high mobility based on vehicle ownership.", "definition": "A household with a high Vehicle Ownership Index and a recent Vehicle Year Range.", "type": "domain_knowledge", "children_knowledge": [10, 14]} +{"id": 24, "knowledge": "Supported Household", "description": "Defines a household receiving social assistance.", "definition": "A household with social support status marked as 'Yes'.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 25, "knowledge": "Crowded Household", "description": "Defines a household with high occupancy relative to its capacity.", "definition": "A household with Household Density greater than a threshold.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 26, "knowledge": "Modern Dwelling", "description": "Defines a dwelling with modern amenities and structure.", "definition": "A dwelling with specific Dwelling Type and active Cable TV Status.", "type": "domain_knowledge", "children_knowledge": [6, 7]} +{"id": 27, "knowledge": "Well-Equipped Household", "description": "Defines a household with high infrastructure and service support.", "definition": "A household with a high Infrastructure Quality Score and a high Service Support Score.", "type": "domain_knowledge", "children_knowledge": [13, 16]} +{"id": 28, "knowledge": "Economically Stable Household", "description": "Defines a household with balanced socioeconomic metrics.", "definition": "A household with a high Socioeconomic Index and a low Expenditure Ratio.", "type": "domain_knowledge", "children_knowledge": [12, 19]} +{"id": 29, "knowledge": "Comfortable Living Household", "description": "Defines a household with a high standard of living conditions.", "definition": "A household is considered 'Comfortable' if its Living Condition Score is greater than 3 AND its Bathroom Ratio is greater than 0.5.", "type": "domain_knowledge", "children_knowledge": [15, 20]} +{"id": 30, "knowledge": "Self-Sufficient Household", "description": "Defines a household with minimal reliance on external support.", "definition": "A household with limited Domestic Help Availability, social support status of 'No', and high Vehicle Ownership Index.", "type": "domain_knowledge", "children_knowledge": [8, 9, 14]} +{"id": 31, "knowledge": "Purge Incomplete Transport Data", "description": "An action to remove transportation asset records that are linked to households with missing or incomplete core economic data.", "definition": "Delete records from the transportation assets data for any household where the income classification is NULL.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 32, "knowledge": "Register New Household", "description": "An action to add a new household’s primary record into the database system.", "definition": "Insert a new record into the household data with all required information.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 33, "knowledge": "Update Vehicle Inventory", "description": "An action to modify a household’s vehicle records, typically after acquiring or selling a vehicle.", "definition": "Update the vehicle inventory data for a specific household, modifying fields such as newest vehicle year or vehicle counts.", "type": "domain_knowledge", "children_knowledge": [10, 
14]} +{"id": 34, "knowledge": "Residential Zone Types", "description": "Illustrates the types of residential zones based on geographic classification.", "definition": "Includes 'Urban' (densely populated city areas), 'Suburban' (residential outskirts), 'Rural' (sparsely populated countryside), and 'Mixed' (transitional areas). Null indicates unclassified or pending zoning.", "type": "value_illustration", "children_knowledge": -1} +{"id": 35, "knowledge": "Utility Access Level", "description": "Illustrates the level of utility connectivity available to households.", "definition": "Includes 'Full' (all utilities like water and cable available), 'Partial' (some utilities available), 'Basic' (only essential utilities like water), and 'None' (no utility access). Null indicates unassessed connectivity.", "type": "value_illustration", "children_knowledge": [3, 7]} +{"id": 36, "knowledge": "Vehicle Type Distribution", "description": "Illustrates the structure of vehicle ownership by type for a household.", "definition": "An array of counts representing vehicle types. Null indicates unknown or unverified ownership.", "type": "value_illustration", "children_knowledge": -1} +{"id": 37, "knowledge": "Social Assistance Participation", "description": "Indicates household participation in social assistance programs.", "definition": "Simple Yes/No indicator showing whether the household accepts social assistance, part of a unique constraint combination.", "type": "value_illustration", "children_knowledge": [9]} +{"id": 38, "knowledge": "Dwelling Condition Status", "description": "Illustrates the maintenance and condition categories of residential properties.", "definition": "Includes 'Excellent' (well-maintained), 'Good' (minor repairs needed), 'Fair' (moderate repairs needed), 'Poor' (significant repairs needed). 
Null indicates unassessed condition.", "type": "value_illustration", "children_knowledge": -1} +{"id": 39, "knowledge": "Compact Household", "description": "Defines a household with minimal space requirements and high efficiency.", "definition": "A household with specific Dwelling Type and a small resident count.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 40, "knowledge": "High-Mobility Urban Household", "description": "Defines a household in an urban area with significant transportation assets.", "definition": "A household with specific Residential Zone Type and Vehicle Type Distribution.", "type": "domain_knowledge", "children_knowledge": [34, 36]} +{"id": 41, "knowledge": "Stable Infrastructure Household", "description": "Defines a household with reliable and high-quality infrastructure.", "definition": "A household with a specific Utility Access Level and Road Surface Quality.", "type": "domain_knowledge", "children_knowledge": [4, 35]} +{"id": 42, "knowledge": "Economically Independent Household", "description": "Defines a household with minimal reliance on external financial support.", "definition": "A household with high Income Classification and social support status of 'No'.", "type": "domain_knowledge", "children_knowledge": [2, 9]} +{"id": 43, "knowledge": "Well-Maintained Dwelling", "description": "Defines a residential unit in excellent or good condition with modern amenities.", "definition": "A dwelling with specific Dwelling Condition Status and Cable TV Status.", "type": "domain_knowledge", "children_knowledge": [7, 38]} +{"id": 44, "knowledge": "Dwelling Type Score", "description": "Assigns a numerical score to different dwelling types based on a predefined quality ranking.", "definition": "A scoring system where 'Brickwork house' receives 4 points, 'Apartment' receives 3 points, and all other types receive 1 point. This score is used in broader calculations like the Living Condition Score.", "type": "calculation_knowledge", "children_knowledge": [6]} +{"id": 45, "knowledge": "Urban Zone", "description": "Defines which zones are considered urban.", "definition": "A zone is considered urban if its `loczone` is 1.", "type": "domain_knowledge", "children_knowledge": -1} diff --git a/households/households_schema.txt b/households/households_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..974f025317db5571c1ee4de862af206ce5b5bf65 --- /dev/null +++ b/households/households_schema.txt @@ -0,0 +1,123 @@ +CREATE TABLE "locations" ( +regioncode text NOT NULL, +zonenum bigint NOT NULL, + PRIMARY KEY (regioncode, zonenum) +); + +First 3 rows: +regioncode zonenum +------------ --------- +Taguatinga 315 +Taguatinga 315 +Guará 222 +... + + +CREATE TABLE "amenities" ( +amenityref bigint NOT NULL DEFAULT nextval('amenities_amenityref_seq'::regclass), +houseid bigint NOT NULL, +cablestatus text NOT NULL, + PRIMARY KEY (amenityref), + FOREIGN KEY (houseid) REFERENCES households(housenum) +); + +First 3 rows: + amenityref houseid cablestatus +------------ --------- ------------- + 1 3 avail + 2 4 available + 3 7 Available +... 
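+
+-- Illustrative query (not part of the original dump): cablestatus above is deliberately
+-- noisy ('avail', 'available', 'Available'), so counting households with an active cable
+-- subscription likely needs to normalise the text first. Treating every value that starts
+-- with 'avail' as active is an assumption to verify against the HKB and the full data.
+SELECT COUNT(DISTINCT a.houseid) AS households_with_cable
+FROM amenities a
+WHERE LOWER(a.cablestatus) LIKE 'avail%';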
+ + +CREATE TABLE "infrastructure" ( +infraref bigint NOT NULL DEFAULT nextval('infrastructure_infraref_seq'::regclass), +wateraccess text NOT NULL, +roadsurface text NOT NULL, +parkavail text NOT NULL, + PRIMARY KEY (infraref) +); + +First 3 rows: + infraref wateraccess roadsurface parkavail +---------- ------------------------------------ ----------------- ------------- + 1 Yes, available at least in one room Asphalt, concrete Available + 2 Yes, available at least in one room Asphalt, concrete Available + 6 Yes, available at least in one room Asphalt, concrete Not available +... + + +CREATE TABLE "service_types" ( +serviceref bigint NOT NULL DEFAULT nextval('service_types_serviceref_seq'::regclass), +domestichelp text NOT NULL, +socsupport text NOT NULL, + PRIMARY KEY (serviceref) +); + +First 3 rows: + serviceref domestichelp socsupport +------------ ------------------- ------------ + 1 No domestic workers No + 14 No domestic workers No + 21 No domestic workers Yes +... + + +CREATE TABLE "households" ( +housenum bigint NOT NULL, +residentcount bigint NOT NULL, +locregion text NOT NULL, +loczone bigint NOT NULL, +serviceplan bigint NULL, +socioeconomic jsonb NULL, + PRIMARY KEY (housenum), + FOREIGN KEY (locregion) REFERENCES locations(regioncode), + FOREIGN KEY (locregion) REFERENCES locations(zonenum), + FOREIGN KEY (loczone) REFERENCES locations(regioncode), + FOREIGN KEY (loczone) REFERENCES locations(zonenum), + FOREIGN KEY (serviceplan) REFERENCES service_types(serviceref) +); + +First 3 rows: + housenum residentcount locregion loczone serviceplan socioeconomic +---------- --------------- ----------- --------- ------------- ---------------------------------------------------------------------------------------------------------------- + 4 4 Taguatinga 315 1 {'Tenure_Type': 'Owned', 'Expend_Coeff': 33.78, 'Income_Bracket': 'More than R$ 1,760 and less than R$ 2,640'} + 7 3 Taguatinga 315 1 {'Tenure_Type': 'owned', 'Expend_Coeff': 37.1846, 'Income_Bracket': 'More than R$ 2,640 and less than R$ 4,400'} + 22 3 Taguatinga 315 1 {'Tenure_Type': 'OWNED', 'Expend_Coeff': 37.2258, 'Income_Bracket': 'More than R$ 4,400 and less than R$ 8,800'} +... + + +CREATE TABLE "properties" ( +propref bigint NOT NULL DEFAULT nextval('properties_propref_seq'::regclass), +houselink bigint NOT NULL, +infralink bigint NOT NULL, +dwelling_specs jsonb NULL, + PRIMARY KEY (propref), + FOREIGN KEY (houselink) REFERENCES households(housenum), + FOREIGN KEY (infralink) REFERENCES infrastructure(infraref) +); + +First 3 rows: + propref houselink infralink dwelling_specs +--------- ----------- ----------- ----------------------------------------------------------------------- + 19 77 1 {'Bath_Count': 1, 'Room_Count': 3, 'Dwelling_Class': 'Brickwork house'} + 20 102 1 {'Bath_Count': 1, 'Room_Count': 2, 'Dwelling_Class': 'apartment'} + 21 103 21 {'Bath_Count': 1, 'Room_Count': 2, 'Dwelling_Class': 'Apartment'} +... 
+ + +CREATE TABLE "transportation_assets" ( +transref bigint NOT NULL DEFAULT nextval('transportation_assets_transref_seq'::regclass), +housetag bigint NOT NULL, +vehicleinventory jsonb NULL, + PRIMARY KEY (transref), + FOREIGN KEY (housetag) REFERENCES households(housenum) +); + +First 3 rows: + transref housetag vehicleinventory +---------- ---------- --------------------------------------------------------------------------------------------------------- + 4 22 {'Newest_Year': 'after 2014', 'vehicle_counts': {'Auto_Count': 2, 'Bike_Count': 0, 'Motor_Count': 0}} + 5 35 {'Newest_Year': '2010 TO 2013', 'vehicle_counts': {'Auto_Count': 1, 'Bike_Count': 1, 'Motor_Count': 0}} + 6 37 {'Newest_Year': 'nOt apPLIcaBlE', 'vehicle_counts': {'Auto_Count': 0, 'Bike_Count': 0, 'Motor_Count': 0}} +... diff --git a/hulushows/hulushows_column_meaning_base.json b/hulushows/hulushows_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..72bdde039cfa19b37fbb126fb3c0786f2c1f6961 --- /dev/null +++ b/hulushows/hulushows_column_meaning_base.json @@ -0,0 +1,128 @@ +{ + "hulushows|companies|entity_key": "A BIGINT primary key uniquely identifying each production company, studio, or content distributor in the database.", + "hulushows|companies|chanref": "Legacy channel reference ID used for backwards compatibility with older systems, may contain gaps and non-sequential values. Contains NULL when legacy channel reference is not available or not applicable for newer companies.", + "hulushows|companies|company_name": "Official full legal name of the production company or content distributor (e.g., 'Walt Disney Pictures', 'Warner Bros Entertainment'). Contains NULL when official company name is not available or company operates under alternative naming conventions.", + "hulushows|companies|short_name": "Abbreviated or commonly used short name for the company (e.g., 'Disney', 'Warner Bros'). Contains NULL when no commonly recognized short name exists for the company.", + "hulushows|companies|canonical_name": "Standardized canonical name used for consistent reference across the platform, normalized for search and matching purposes. Contains NULL when canonical normalization has not been established for the company.", + "hulushows|core|content_key": "A BIGINT primary key uniquely identifying each show or content item in the Hulu database system.", + "hulushows|core|canonical_name": "Standardized canonical name of the show used for consistent reference and search functionality across the platform. Contains NULL when canonical naming has not been established for the content.", + "hulushows|core|content_title": "Display title of the show as it appears to users on the platform interface. Contains NULL when display title is not yet finalized or content is in preliminary stages.", + "hulushows|core|series_id": "Identifier linking individual shows to their parent series or franchise for grouping related content. Contains NULL when content is standalone and not part of a larger series or franchise.", + "hulushows|core|studiolink": "Foreign key referencing companies.Entity_key, indicating the primary production studio. Contains NULL for independent or self-produced content without studio affiliation.", + "hulushows|core|annotations": "Combined free-text annotations and metadata comments from multiple sources, concatenated from annotation fields 0 and 1. 
Contains NULL when no annotations or metadata comments are available for the content.", + "hulushows|availabilitys|content_key": "Primary key referencing core.content_key, ensuring 1:1 relationship for availability information per show.", + "hulushows|availabilitys|cache_time": "Timestamp indicating when the content metadata was last cached or updated. Contains noise with inconsistent date formats: '2024-12-08', '2024/12/8', '24/12/8', 'Dec 8, 2024'. Contains NULL when cache timestamp is not available or content has never been cached.", + "hulushows|availabilitys|auth_name": "Authentication or authorization level name required to access the content, indicating access control requirements. Contains NULL when no specific authentication requirements are needed for content access.", + "hulushows|content_info|content_key": "Primary key referencing core.content_key, ensuring 1:1 relationship for detailed content information per show.", + "hulushows|content_info|story_outline": "Detailed plot synopsis or description of the show's storyline and content for user discovery and recommendation systems. Contains NULL when plot synopsis is not available or content description is pending.", + "hulushows|promo_info|content_key": "Primary key referencing core.content_key, ensuring 1:1 relationship for promotional messaging per show.", + "hulushows|rollups|tierkey": "A BIGSERIAL primary key uniquely identifying each subscription and availability tier in the system.", + "hulushows|rollups|tiertype": "Enumerated subscription tier type from: 'free' (free content), 'subscriber' (paid subscriber content), 'current' (currently available content), 'free_on_web' (web-only free content), 'subscriber_on_device' (device-specific subscriber content), 'auth_on_web' (web authentication required), 'showtime' (Showtime premium content).", + "hulushows|show_rollups|srkeys": "Foreign key referencing core.content_key, indicating which show this rollup metrics record belongs to.", + "hulushows|show_rollups|srlinks": "Foreign key referencing rollups.TierKey, indicating which subscription tier these metrics apply to.", + "hulushows|show_rollups|launchmoment": "Timestamp indicating when content was first made available in this subscription tier. Contains noise with inconsistent formats: '2024-12-08 10:30', 'Dec 8, 2024', '08/12/24'. Contains NULL when launch date is not available or content has not yet launched in this tier.", + "hulushows|show_rollups|latestadd": "Timestamp of the most recent content addition or update for this show in the specified tier. Contains NULL for shows with no recent updates or when no content additions have been tracked.", + "hulushows|companies|brandingassets": { + "column_meaning": "JSONB column. Consolidates all branding and visual assets for the company including key art, logos, and availability flags for display purposes.", + "fields_meaning": { + "KeyArt_URL": "URL pointing to the company's key art image or logo used for branding display purposes on the platform. Contains NULL when key art is not available or company does not provide branding assets.", + "NetworkLogo_URL": "URL pointing to the network or company logo image file for display in the user interface. Contains NULL when network logo is not available or company operates without branded logo assets.", + "HasLogo_Flag": "Boolean flag stored as text indicating whether the company has a high-resolution logo available. Contains noise with inconsistent formats: 'TRUE', 'FALSE', 'Yes', 'No', '1', '0'. 
Contains NULL when logo availability status is unknown or not determined." + } + }, + "hulushows|core|genreclass": { + "column_meaning": "JSONB column. Groups all genre and classification metadata including primary genre, complex genre hierarchies, content class, and user scoring information.", + "fields_meaning": { + "Primary_Genre": "Primary broad genre category from enum: 'Animation and Cartoons', 'Comedy', 'Drama', 'Anime', 'Kids', 'Reality and Game Shows', 'Classics', 'Family', 'Science Fiction', 'Action and Adventure', 'Food', 'News and Information', 'Health and Wellness', 'Teen'. Contains NULL when primary genre classification is pending or undetermined.", + "Hierarchical_Genres": "Complex multi-genre classification string with hierarchical and combined genres using delimiters like '~' and '|' (e.g., 'Animation~Comedy|Teen'). Contains NULL when detailed genre hierarchy has not been established for the content.", + "Content_Type": "Classification of content type, primarily observed as 'show' in the dataset but may include other values like 'movie' or 'special'. Contains NULL when content type classification is pending review or determination.", + "User_Score": "User rating or professional score for the content, stored as text with various formats including monetary ('$4.35M'), star ratings ('4.35★'), basis points ('435 bp'), and rating scales ('4.35 RTG'). Contains NULL when no user ratings or professional scores are available for the content." + } + }, + "hulushows|content_info|mediacounts": { + "column_meaning": "JSONB column. Aggregates all content volume metrics including episodes, clips, films, seasons, and total video counts for inventory tracking.", + "fields_meaning": { + "content_volumes": { + "Clips_Total": "Total number of short video clips or previews available for this content item. Contains NULL when clip count is not available or no clips exist for the content.", + "Episode_Total": "Episode volume count, representing the total number of episodes available for the show across all seasons. Contains NULL when episode count is not available or content is not episodic.", + "Feature_Films": "Number of full-length feature films associated with this content entry. Contains NULL when feature film count is not applicable or not available for the content type.", + "Film_Clips": "Number of short clips or trailers specifically related to films within this content package. Contains NULL when film clips are not available or not applicable to the content.", + "Seasons_Total": "Total number of seasons available for this show on the platform. Contains NULL when season count is not applicable (e.g., for movies) or not yet determined.", + "Videos_Total": "Aggregate count of all video content types (episodes, clips, features) associated with this show. Contains NULL when total video count is not available or cannot be determined." + } + } + }, + "hulushows|content_info|visualassets": { + "column_meaning": "JSONB column. Contains all visual and descriptive assets for content presentation including URLs, descriptions, and copyright information.", + "fields_meaning": { + "Thumbnail_URL": "URL pointing to the small thumbnail image used for content discovery and grid displays. Contains NULL when thumbnail image is not available or not yet uploaded for the content.", + "KeyArt_URL": "URL pointing to the primary promotional artwork or poster image for the content. 
Contains NULL when key art is not available or promotional materials are pending.", + "Link_Desc": "Descriptive text used for content linking and cross-references within the platform. Contains NULL when link description is not available or not yet created.", + "Art_Copyright": "Copyright information and attribution for the promotional artwork and images associated with the content. Contains NULL when copyright information is not available or not applicable." + } + }, + "hulushows|availabilitys|accessflags": { + "column_meaning": "JSONB column. Consolidates all boolean flags related to content access restrictions and platform availability across different tiers and devices.", + "fields_meaning": { + "Movie_Flag": "Boolean flag stored as text indicating if content is a movie format. Contains noise with various formats: 'TRUE', 'FALSE', 'Y', 'N', 'Movie', 'Series'. Contains NULL when content format classification is not yet determined.", + "Showtime_Only": "Boolean indicating whether content is exclusively available through Showtime subscription tier. Contains NULL when Showtime exclusivity status is not yet determined or not applicable.", + "Subscriber_Only": "Boolean indicating whether content requires a paid subscription to access, not available in free tier. Contains NULL when subscription requirement status is pending determination.", + "COPPA_Comp": "Boolean indicating whether content complies with Children's Online Privacy Protection Act (COPPA) regulations for child-safe viewing. Contains NULL when COPPA compliance status is not yet evaluated or not applicable.", + "Web_Only": "Boolean indicating whether content is exclusively available through web platform and not on mobile or living room devices. Contains NULL when platform availability restrictions are not yet determined." + } + }, + "hulushows|promo_info|tiernotices": { + "column_meaning": "JSONB column. Organizes promotional and notification messages by subscription tier including availability, expiration, alerts, and promotional content.", + "fields_meaning": { + "free_tier": { + "Avail_Note": "Free tier availability notification text displayed to users about content access in the free subscription level. Contains NULL when no specific availability notifications are needed for free tier access.", + "Expire_Note": "Free tier expiration notice text informing users when free access to content will end. Contains NULL when content has no expiration date in free tier or expiration notice is not applicable.", + "Alert_Note": "Free tier alert message text for important notifications related to free content access changes. Contains NULL when no alerts are currently active for free tier content.", + "Promo_Note": "Free tier promotional message text used for marketing and user engagement for free content. Contains NULL when no promotional messaging is active for free tier." + }, + "member_tier": { + "Avail_Note": "Member/subscriber tier availability notification text displayed to paying subscribers about content access. Contains NULL when no specific availability notifications are needed for subscriber access.", + "Expire_Note": "Member/subscriber tier expiration notice text informing paid users when content access will end. Contains NULL when content has no expiration date for subscribers or expiration notice is not applicable.", + "Alert_Note": "Member/subscriber tier alert message text for important notifications related to subscriber content access. 
Contains NULL when no alerts are currently active for subscriber content.", + "Promo_Note": "Member/subscriber tier promotional message text used for marketing premium content to paying subscribers. Contains NULL when no promotional messaging is active for subscriber tier." + } + } + }, + "hulushows|show_rollups|contentvols": { + "column_meaning": "JSONB column. Groups all content volume metrics for different media types within subscription tiers including clips, episodes, features, games, and seasons.", + "fields_meaning": { + "standard_content": { + "Clip_Vol": "Volume count of short video clips available for this show in the specified subscription tier. May contain professional notation like '288K'. Contains NULL when clip volume data is not available for this tier.", + "Ep_Vol": "Episode volume count available for this show in the specified subscription tier. Contains NULL when episode data is not available or not applicable for this tier.", + "Feature_Vol": "Count of full-length feature content items available in this tier for the show. Contains NULL when feature content is not available or not applicable for this subscription tier.", + "FilmClip_Vol": "Volume count of film-related clips and trailers available in this subscription tier. Contains NULL when film clips are not available for this tier.", + "Trailer_Vol": "Count of trailer videos available for this show in the specified subscription tier. Contains NULL when trailers are not available for this tier.", + "Game_Vol": "Count of interactive games or game-related content available in this tier. Contains NULL when game content is not available or not applicable for this subscription tier.", + "Season_Vol": "Number of complete seasons available for this show in the specified subscription tier. Contains NULL when season data is not available for this tier.", + "Media_Total": "Total aggregate count of all media types (episodes, clips, features, games) available in this tier. Contains NULL when total media count cannot be determined for this tier." + } + } + }, + "hulushows|show_rollups|html5metrics": { + "column_meaning": "JSONB column. Consolidates HTML5-compatible content counts for cross-platform streaming capabilities across different media types.", + "fields_meaning": { + "html5_volumes": { + "H5_Clips": "Count of HTML5-compatible video clips available for cross-platform playback in this subscription tier. Contains NULL when HTML5 clip data is not available for this tier.", + "H5_Episodes": "Count of HTML5-compatible full episodes available for cross-platform streaming in this tier. Contains NULL when HTML5 episode data is not available for this tier.", + "H5_Games": "Count of HTML5-compatible interactive games or game content available in this subscription tier. Contains NULL when HTML5 game content is not available for this tier.", + "H5_Features": "Count of HTML5-compatible feature-length content available for streaming in this tier. Contains NULL when HTML5 feature content is not available for this tier.", + "H5_FilmClips": "Count of HTML5-compatible film clips and movie trailers available in this subscription tier. Contains NULL when HTML5 film clips are not available for this tier.", + "H5_Trailers": "Count of HTML5-compatible trailer videos available for cross-platform viewing in this tier. Contains NULL when HTML5 trailers are not available for this tier.", + "H5_MediaTotal": "Total aggregate count of all HTML5-compatible media content available in this subscription tier. 
Contains NULL when HTML5 total count cannot be determined for this tier." + } + } + }, + "hulushows|show_rollups|ratinginfo": { + "column_meaning": "JSONB column. Contains television rating information and unrated content counts for content classification and parental guidance.", + "fields_meaning": { + "TV_Rating": "Television content rating or classification for content in this tier, stored as text with possible enum values like 'TV-14', 'TV-MA', 'TV-PG', etc. Contains NULL when content rating has not been assigned or is pending review for this tier.", + "Peak_Rating": "Highest or peak content rating for content in this tier, stored as text with possible noise in enum format: 'TV-14', 'TV14', 'T14', 'FOURTEEN'. Contains NULL when peak rating cannot be determined or no rated content exists in this tier.", + "Unrated_Vol": "Count of content items that do not have official television ratings or are unrated in this tier. Contains NULL when unrated content count is not available or not tracked for this tier." + } + } +} \ No newline at end of file diff --git a/hulushows/hulushows_kb.jsonl b/hulushows/hulushows_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..b3e60788de7ed96a8c6b9dc3abc4fdad61b7133f --- /dev/null +++ b/hulushows/hulushows_kb.jsonl @@ -0,0 +1,92 @@ +{"id": 0, "knowledge": "Content Type Labels", "description": "Clarifies the values used to indicate whether a piece of content is a show, movie, or other format.", "definition": "Includes 'show', 'movie', and 'special' to distinguish the type of content displayed.", "type": "value_illustration", "children_knowledge": -1} +{"id": 1, "knowledge": "User Score Formats", "description": "Illustrates different formats used to record user ratings.", "definition": "Examples include raw numbers like '4.35', symbols like '4.35★', monetized ratings like '$4.35M', and normalized formats like '4.35 RTG'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 2, "knowledge": "TV Rating Types", "description": "Shows the different age-based content classifications used in broadcasting.", "definition": "Includes 'TV-Y', 'TV-Y7', 'TV-G', 'TV-PG', 'TV-14', and 'TV-MA' to represent different maturity levels.", "type": "value_illustration", "children_knowledge": -1} +{"id": 3, "knowledge": "Subscription Tier Values", "description": "Represents different content access levels for viewers.", "definition": "Common values include 'free', 'subscriber', 'current', 'free on website', 'auth on website', 'subscriber on device', and 'show time'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 4, "knowledge": "High-Resolution Logo Flags", "description": "Explains how availability of high-res logos is marked.", "definition": "Typical values include 'TRUE', 'FALSE', 'Yes', 'No', '1', '0' and all indicate presence or absence of a high-resolution logo.", "type": "value_illustration", "children_knowledge": -1} +{"id": 5, "knowledge": "Movie Identifier Formats", "description": "Describes how movie content is labeled.", "definition": "Can appear as 'TRUE', 'FALSE', 'Y', 'N', 'Movie', 'Series'. These values signal whether content is a movie.", "type": "value_illustration", "children_knowledge": -1} +{"id": 6, "knowledge": "Genre Hierarchy Format", "description": "Explains how genres are combined and nested.", "definition": "Uses '~' for sub genres and '|' for alternatives. 
Example: 'Comedy~Sitcom|Teen'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 7, "knowledge": "HTML5 Media Metrics", "description": "Details types of media available in HTML5 format.", "definition": "Includes clips, episodes, games, features, trailers that support cross-platform playback.", "type": "value_illustration", "children_knowledge": -1} +{"id": 8, "knowledge": "Boolean Value Variants", "description": "Indicates the diversity in how true/false values appear across fields.", "definition": "Can appear as strings ('Yes', 'No'), booleans ('TRUE', 'FALSE'), or numerics ('1', '0').", "type": "value_illustration", "children_knowledge": -1} +{"id": 9, "knowledge": "Cache Time Formats", "description": "Lists typical formats used in content caching timestamps.", "definition": "Includes examples like '2024-12-08', 'Dec 8, 2024', '08/12/24', and ISO-8601 format.", "type": "value_illustration", "children_knowledge": -1} +{"id": 10, "knowledge": "Subscriber-Only Content", "description": "Defines content that is not available in the free tier.", "definition": "Content marked as only accessible to paying users is considered subscriber-only.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "HTML5 Compatible Show", "description": "Denotes a show with HTML5 support for full episodes.", "definition": "A show with more than 0 HTML5-compatible full episodes is HTML5 compatible.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 12, "knowledge": "Unrated Media", "description": "Defines media that lacks an official rating.", "definition": "Media is considered unrated when there is no official rating information available.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Franchise Group", "description": "Groups shows by shared franchise.", "definition": "Content entries recognized as part of the same story world or universe are grouped together as a franchise.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Null Update Tag", "description": "Explains the implication of a missing update timestamp.", "definition": "If there’s no recent update record, it means the content hasn’t received any new additions since it first appeared.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Long-running Series", "description": "Identifies series with significant longevity.", "definition": "Any series that continues for 10 or more years or installments is considered long-running.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Multi-Tier Presence", "description": "Labels content appearing in multiple tiers.", "definition": "If content is available in two or more distinct subscription tiers, it qualifies as multi-tier.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 17, "knowledge": "High Engagement Show", "description": "Represents shows with significant content volume.", "definition": "If a show includes a very large number of episodes and extra video segments, it’s considered highly engaging.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Showtime Exclusive", "description": "Indicates content only accessible to Showtime subscribers.", "definition": "Content where the 'showtime-only' flag is true is Showtime exclusive.", "type": "domain_knowledge", "children_knowledge": [5, 8]} +{"id": 19, "knowledge": "Canonical Rating Enumeration", "description": "Enumerates all known TV 
ratings for content classification.", "definition": "The six recognized TV content ratings are TV-Y, TV-Y7, TV-G, TV-PG, TV-14, TV-MA.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "HTML5 Episode Ratio", "description": "Computes the share of HTML5 episodes relative to all episodes.", "definition": "HER = \\frac{\\text{H5 Episodes}}{\\text{Episodes Vol}}", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 21, "knowledge": "Content Size Index", "description": "Measures overall content size from video types.", "definition": "CSI = \\text{Episodes Vol} + \\text{Film Clip Vol} + \\text{Feature Vol} + \\text{Game Vol} + \\text{Film Cli pVol} + \\text{Trailer Vol}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Media Platform Compatibility Rate", "description": "Assesses platform compatibility for all media.", "definition": "MPCR = \\frac{\\text{H5Media Total}}{\\text{Media Total}}", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 23, "knowledge": "Season to Episode Ratio", "description": "Computes average episodes per season.", "definition": "SE_Ratio = \\frac{\\text{Episodes Vol}}{\\text{Seasons Count}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 24, "knowledge": "Unrated Share", "description": "Estimates the share of unrated content.", "definition": "Unrated content share is calculated as the amount of unrated material divided by all available content.", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 25, "knowledge": "Promotional Message Count", "description": "Counts the total promotional fields for a content entry.", "definition": "PMC = The count of promotional messages is found by adding up all the filled-in promotional and availability notes for a piece of content.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "Average Media per Tier", "description": "Computes average number of media items across tiers.", "definition": "AMT = \\frac{\\sum Media Total tier}{|Tiers|}", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 27, "knowledge": "Show Longevity Estimate", "description": "Estimates how long content has been on platform.", "definition": "Longevity = How long content has been available is estimated by subtracting its release year from the current year.", "type": "calculation_knowledge", "children_knowledge": [9]} +{"id": 28, "knowledge": "Normalized User Score", "description": "Scales user score to [0,1] range.", "definition": "NUS = \\frac{score - min}{max - min}", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 29, "knowledge": "Tier Distribution Ratio", "description": "Determines tier-specific share of media.", "definition": "TDR = \\frac{\\text{Media Total}_{tier}}{\\sum_{tier} Media Total tier}", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 30, "knowledge": "Average Episode Rating (AER)", "description": "Calculates the average user score for a show's episodes.", "definition": "AER = The average rating across all episodes of a show is calculated by taking all their ratings and finding the mean.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 31, "knowledge": "Normalized Media Volume (NMV)", "description": "Computes the normalized total media volume across different tiers.", "definition": "NMV = The normalized media volume is found by dividing the total content amount by the total number of parts or 
groupings, plus one.", "type": "calculation_knowledge", "children_knowledge": [4, 5]} +{"id": 32, "knowledge": "Studio Productivity Index (SPI)", "description": "Measures how productive a studio is based on the number of videos and seasons it produces.", "definition": "SPI = The productivity of a studio is measured by dividing the number of videos it made by the number of project groupings, plus one.", "type": "calculation_knowledge", "children_knowledge": [5]} +{"id": 33, "knowledge": "High-Res Branding Ratio (HBR)", "description": "Measures the share of studios with high-resolution logos.", "definition": "HBR = The share of studios using high-quality brand images is found by dividing the number that use them by the total number of studios.", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 34, "knowledge": "User Score Dispersion (USD)", "description": "Calculates the variance of user scores for a show’s episodes.", "definition": "USD = User score dispersion is the variance in episode ratings for a show.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 35, "knowledge": "Availability Entropy Score (AES)", "description": "Calculates the entropy of availability types for a show.", "definition": "AES = -\\sum_{i=1}^{k} type_i \\log type_i, where type_i is the proportion of availability tier i.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 36, "knowledge": "Title-to-Episode Ratio (TER)", "description": "Ratio between content title length and episode volume.", "definition": "TER = The ratio of the length of a show’s title to its total number of episodes, plus one.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 37, "knowledge": "Premium Exclusivity Score (PES)", "description": "Measures exclusivity of a show to premium access tiers.", "definition": "PES = \\frac{\\text{Subscriber OnlyContent}}{\\text{Total Content}}, based on access restrictions.", "type": "calculation_knowledge", "children_knowledge": [3, 4]} +{"id": 38, "knowledge": "Annotated Content Ratio (ACR)", "description": "Proportion of content items with non-empty annotation fields.", "definition": "ACR = The share of content items that include extra notes or descriptions.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 39, "knowledge": "Multi-Genre Spread Score (MGSS)", "description": "Captures how many hierarchical and hybrid genres a show spans.", "definition": "MGSS = Multi-genre spread is measured by the number of ways genres are split or combined in a show.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 40, "knowledge": "Limited Series", "description": "Defines content with very few seasons, often a single story arc.", "definition": "If a show consists of only a single part and no more than ten installments, it's called a limited series.", "type": "domain_knowledge", "children_knowledge": [1, 5]} +{"id": 41, "knowledge": "Multitier Syndicated Show", "description": "Identifies content available across multiple subscription tiers.", "definition": "If a show is made available in three or more distinct access groups, it's considered multi-tier.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 42, "knowledge": "Premium Access Content", "description": "Refers to shows exclusively available to paying subscribers or special access tiers.", "definition": "Defined as content with is subscriber_only = TRUE or available only on Showtime tiers.", "type": "domain_knowledge", "children_knowledge": 
[3]} +{"id": 43, "knowledge": "Iconic Comedy Brands", "description": "Identifies production companies that created multiple top-rated comedy shows.", "definition": "A studio is considered iconic for comedy if it has created at least three highly-rated comedy shows.", "type": "domain_knowledge", "children_knowledge": [0, 30]} +{"id": 44, "knowledge": "Highly Diversified Show", "description": "Indicates a show that spans multiple genre branches.", "definition": "If a show’s genres span many different types and combinations, it’s highly diversified.", "type": "domain_knowledge", "children_knowledge": [39]} +{"id": 45, "knowledge": "Studio Brand Consistency", "description": "Identifies studios that consistently apply high-resolution logos across all their shows.", "definition": "Studios with HBR = 1 are said to maintain brand consistency.", "type": "domain_knowledge", "children_knowledge": [33]} +{"id": 46, "knowledge": "Genre Enumeration: Animation Families", "description": "Defines a group of genre types frequently used for animation-targeted family content.", "definition": "Includes 'Animation and Cartoons', 'Primetime Animation', 'Teen', 'Comedy', 'Sitcoms'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 47, "knowledge": "Empty Clip Bucket Indicator", "description": "Highlights content with no associated clips but existing full episodes.", "definition": "Shows where Film Clips Count = 0 and Episodes Vol > 0 indicate a 'clip bucket empty' status.", "type": "domain_knowledge", "children_knowledge": [1]} +{"id": 48, "knowledge": "Missing Launch Moment", "description": "A meaningful missing value indicating legacy or untracked rollup entries.", "definition": "If Launch Moment is NULL while Media Total > 0, the show is considered legacy-uploaded.", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 49, "knowledge": "Underutilized Franchise", "description": "Defines a franchise ID that has multiple content entries but low total media output.", "definition": "Franchises with ≥ 3 content items and aggregate Media Total < 10 are underutilized.", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 50, "knowledge": "Genre Fragmentation Index (GFI)", "description": "Quantifies the complexity of genre categorization using delimiter count.", "definition": "GFI = \\frac{\\text{Number of genre tokens}}{1 + \\text{Number of '~' or '|' delimiters}}, using the way genres are split or grouped into subtypes and alternatives", "type": "calculation_knowledge", "children_knowledge": [6]} +{"id": 51, "knowledge": "Tier-Normalized Media Load (TNML)", "description": "Calculates average media volume per available tier.", "definition": "TNML = \\frac{\\text{Total Media Volume}}{\\text{Number of Availability Tiers}}, where tiers are determined by content access grouping.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 52, "knowledge": "High-Resolution Utilization Rate (HRUR)", "description": "Measures how effectively high-res logos are used relative to movie content.", "definition": "HRUR = \\frac{\\text{HighResLogos for Movies}}{\\text{Total Movies}}, based on the presence of high-quality logo indicators and labels showing whether content is a movie", "type": "calculation_knowledge", "children_knowledge": [2, 3]} +{"id": 53, "knowledge": "Rating Diversity Score (RDS)", "description": "Evaluates the variance of TV ratings within a franchise.", "definition": "RDS = stddev(Rating) across all content within a Franchise Group, by checking the different rating 
categories assigned within the same franchise", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 54, "knowledge": "Boolean Value Redundancy Rate (BVRR)", "description": "Estimates how many redundant encodings of booleans are present.", "definition": "BVRR = \\frac{\\text{Unique Representations}}{\\text{Total Boolean Fields}}, by comparing the variety of true/false indicators used", "type": "calculation_knowledge", "children_knowledge": [8]} +{"id": 55, "knowledge": "Unrated Proportion per Tier (UPT)", "description": "Computes the share of unrated media for each access tier.", "definition": "UPT = \\frac{\\text{Unrated Vol}_{tier}}{\\text{Media Total}_{tier}}, using the standard set of content ratings and grouping by access level", "type": "calculation_knowledge", "children_knowledge": [4, 29]} +{"id": 56, "knowledge": "Clip-to-Feature Ratio (CFR)", "description": "Compares short-form clips to long-form feature content.", "definition": "CFR = \\frac{\\text{Film Clip Vol}}{\\text{Feature Vol}}, both contributing to the total size of content available", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 57, "knowledge": "Temporal Staleness Index (TSI)", "description": "Measures how long it's been since a show was last updated.", "definition": "TSI = \\text{Current Date} - \\text{Latest Update Date}, where lack of a recent update means the content is considered unchanged", "type": "calculation_knowledge", "children_knowledge": [14]} +{"id": 58, "knowledge": "Trailer Coverage Ratio (TCR)", "description": "Proportion of content entries that include trailers.", "definition": "TCR = \\frac{\\text{Trailer Vol}}{\\text{Total Content Items}}, indicating marketing completeness.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 59, "knowledge": "HTML5 Depth Index (HDI)", "description": "Quantifies HTML5 support by combining media type counts.", "definition": "HDI = H5 Clips + H5 Episodes + H5 Trailers + H5 Games, by adding up the number of HTML5-compatible video types", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 60, "knowledge": "Single-Platform Dependency", "description": "Labels content that only supports HTML5 media types.", "definition": "Content is considered single-platform dependent if all available content is playable in HTML5 format", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 61, "knowledge": "Redundant Boolean Format", "description": "Flags fields that store boolean values in multiple redundant encodings.", "definition": "Identified by BVRR > threshold, by checking for repeated ways of marking true or false values", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 62, "knowledge": "Fragmented Genre Definition", "description": "Labels shows with complex or ambiguous genre hierarchy.", "definition": "Shows are considered fragmented if the number of types and subtypes listed for the show is high", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 63, "knowledge": "Incomplete High-Engagement Title", "description": "Titles with high content volume but missing key annotations.", "definition": "Defined by high Content Size Index and ACR < 0.5.", "type": "domain_knowledge", "children_knowledge": [4, 38]} +{"id": 64, "knowledge": "Tier-Specific Content Gaps", "description": "Highlights tiers with significantly lower media volume.", "definition": "Tiers where normalized content amount is less than a given threshold are considered to have content gaps.", "type": 
"domain_knowledge", "children_knowledge": [31]} +{"id": 65, "knowledge": "Legacy Title Indicator", "description": "Flags older shows that lack launch timestamps.", "definition": "Shows are considered legacy if there is no launch date and the content hasn’t been updated for a long time", "type": "domain_knowledge", "children_knowledge": [14]} +{"id": 66, "knowledge": "Rating Inconsistency in Franchise", "description": "Detects franchises with significant rating inconsistency.", "definition": "Franchise Groups where the range of rating types is very wide are labeled inconsistent", "type": "domain_knowledge", "children_knowledge": [1]} +{"id": 67, "knowledge": "Trailer-Deficient Feature", "description": "Content with features but no accompanying trailer.", "definition": "A show is trailer-deficient if it has many main episodes but no trailers", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 68, "knowledge": "Over-Fragmented Offering", "description": "Indicates a show spread across too many short-form genres.", "definition": "A show is over-fragmented if the number of short-form genre categories and content types exceeds a given threshold", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 69, "knowledge": "High-Visibility Empty Bucket", "description": "Popular show with missing clip or trailer materials.", "definition": "A show is highly rated but has no extra video segments or trailers", "type": "domain_knowledge", "children_knowledge": [30, 4]} +{"id": 70, "knowledge": "Highly Rated but Visually Empty", "description": "Finds the top-rated show among those missing both trailers and clips.", "definition": "Among top-rated shows, find those missing both trailers and extra video segments", "type": "domain_knowledge", "children_knowledge": [4, 30]} +{"id": 71, "knowledge": "Over-Fragmented Offering", "description": "Indicates a show spread across too many short-form genres.", "definition": "A show is considered over-fragmented if it is associated with more than six nested genre categories and the quantity of its short-form video assets exceeds that of its long-form feature content.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 72, "knowledge": "Franchise Engagement Summary", "description": "Summarizes engagement statistics across franchises by grouping shows with shared series identifiers.", "definition": "For each franchise group, compute the number of shows and the total episode count by summing up values across all entries that share the same franchise.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 73, "knowledge": "Syndicated Franchise Engagement", "description": "Identifies popular franchises that span across multiple availability tiers.", "definition": "Franchises that have at least 3 shows and are available in 3 or more unique subscription tiers.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 74, "knowledge": "Primary Genre Classification", "description": "Categorizes shows using common high-level genre tags for filtering.", "definition": "Examples include 'Drama', 'Comedy', 'Documentary', 'Reality', and 'Animation'. 
These genres can appear in the genre metadata of content records.", "type": "value_illustration", "children_knowledge": -1} +{"id": 75, "knowledge": "Content Volume Level Classification", "description": "Classifies shows into tiers based on total video volume.", "definition": "If videos total > 500 then 'High'; if between 200 and 500 then 'Medium'; otherwise 'Low'.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 76, "knowledge": "Multi-Tier Syndication", "description": "Identifies shows that are distributed across multiple availability tiers.", "definition": "A show is considered 'multi-tier' if it appears in at least three different viewing plans, like free, basic, and premium.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 77, "knowledge": "Peak Media Load", "description": "Identifies shows with heavy media volume by selecting the larger of trailer or feature content.", "definition": "For each show, we check the sizes of both its trailers and full episodes, and pick whichever is bigger to represent its media load.", "type": "calculation_knowledge", "children_knowledge": [67]} +{"id": 78, "knowledge": "Episode Rating Band", "description": "Groups shows into bands like Low, Medium, and High based on average episode ratings.", "definition": "We calculate the average user rating across all episodes in a show, and sort it into one of three categories: Low (under 3.5), Medium (3.5 to 4.2), or High (above 4.2).", "type": "calculation_knowledge", "children_knowledge": [69]} +{"id": 79, "knowledge": "Clip Availability Flag", "description": "Flags shows based on whether they contain any film clip content.", "definition": "If a show includes at least one film clip, it’s marked as 'Has Clips'; if it doesn’t, it’s marked as 'No Clips'.", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 80, "knowledge": "Promotional Intensity Summary", "description": "Summarizes the number of promotional notes available for each content across all tiers.", "definition": "We check each show or movie for various promo message types—like availability, promotions, alerts, or expirations—and count how many of them are actually filled in.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 81, "knowledge": "Most Common Peak TV Rating", "description": "Finds the most commonly assigned peak TV rating across all content distribution records in the HuluShows dataset. This metric is useful for understanding the typical maturity level (e.g., TV-PG, TV-MA) that shows are released with.", "definition": "This identifies which TV rating (like TV-MA or TV-PG) appears the most often across all shows, showing the most typical maturity level in the catalog.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 82, "knowledge": "Maximum Promo Saturation Ratio", "description": "Promo Saturation Ratio (PSR) quantifies how heavily a show is overloaded with promotional messages. It is calculated as the total number of non-null promotional fields (alertnote, availnote, promonote, expirenote) across both the free and member tiers, divided by 8 (the maximum possible promotional slots). The maximum PSR identifies the most saturated show in terms of promotional presence.", "definition": "This measures how packed a show is with promotional content. It looks at how many of the 8 possible promo message spots are used. 
A score of 1.0 means all spots are filled.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 83, "knowledge": "TieredUserScoreCoverage", "description": "Counts how many shows have user scores that fall within predefined standard score tiers", "definition": "Each show gets a user rating, which is then matched to a category: Low (0–2), Medium (2–4), or High (4–5). We count how many shows fall into each of those rating bands.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 84, "knowledge": "Studio Activity Index", "description": "The Studio Activity Index (SAI) measures how active a production studio is by counting how many distinct titles in the table and are linked to each studio via the studio link. This metric helps determine which studios have the largest output footprint in the catalog.", "definition": "This measures how many different shows a studio is involved in, giving a sense of how active or prolific each studio is.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 85, "knowledge": "Series Entry Count", "description": "Counts how many distinct titles are part of each series. Useful for understanding franchise size or continuation.", "definition": "For each series, we count how many unique shows or episodes it includes. This helps show how big or ongoing the series is.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 86, "knowledge": "Series Size", "description": "Series Size measures how many individual titles (e.g., episodes or entries) are associated with each series identifier in the catalog.", "definition": "This tells us how many titles are part of a single series, giving a sense of how large or deep a series is.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 87, "knowledge": "Series Title Uniformity Flag", "description": "Checks whether all content entries in the same series share the same canonical name.", "definition": "If every title in a series uses the exact same name, it’s marked as 'True'. If they don’t match, it’s marked as 'False'.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 88, "knowledge": "Studio Catalog Size", "description": "Measures the number of unique titles associated with each studio.", "definition": "This counts how many different shows are linked to each studio, showing how big their catalog is.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 89, "knowledge": "Title Count per Studio", "description": "This metric reflects how many titles are associated with each production studio, helping identify the studios with the most content in the catalog.", "definition": "We total up how many shows are tied to each studio, to find out which ones have produced the most content.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 90, "knowledge": "Average Title Length per Studio", "description": "This metric measures how long the average show title is for each production studio. It reflects naming trends or stylistic tendencies in studio catalogs.", "definition": "We look at how many characters are in the titles of a studio’s shows, then average those lengths to see what naming style they tend to use.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 91, "knowledge": "Launch Year Distribution", "description": "Breaks down the number of titles launched each year based on their recorded launch dates. 
Helps analyze content release trends over time.", "definition": "We check when each show was released, group them by year, and count how many came out in each. This shows how active each year was for launches.", "type": "calculation_knowledge", "children_knowledge": -1} diff --git a/hulushows/hulushows_schema.txt b/hulushows/hulushows_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..6f5e10223b51ab909c5cd9a182471192976868a7 --- /dev/null +++ b/hulushows/hulushows_schema.txt @@ -0,0 +1,127 @@ +CREATE TABLE "companies" ( +entity_key bigint NOT NULL, +chanref bigint NULL, +company_name text NULL, +short_name text NULL, +canonical_name text NULL, +brandingassets jsonb NULL, + PRIMARY KEY (entity_key) +); + +First 3 rows: + entity_key chanref company_name short_name canonical_name brandingassets +------------ --------- ----------------------- ------------- ----------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 8 2 FBC FOX fox {'KeyArt_URL': 'https://ib1.hulu.com/company_key_art/8?size=1600x600®ion=US', 'HasLogo_Flag': 'True', 'NetworkLogo_URL': 'https://ib1.hulu.com/company_logo/8?bg=dim&color=0&format=png®ion=US'} + 99 129 ComedyCentral ComedyCentral comedy-central {'KeyArt_URL': 'https://ib4.hulu.com/company_key_art/99?size=1600x600®ion=US', 'HasLogo_Flag': 'True', 'NetworkLogo_URL': 'https://ib4.hulu.com/company_logo/99?bg=dim&color=0&format=png®ion=US'} + 10 45 Fox Television Classics FOX-TELEVISION-CLASSICS {'KeyArt_URL': 'https://ib2.hulu.com/company_key_art/10?size=1600x600®ion=US', 'HasLogo_Flag': 'True', 'NetworkLogo_URL': 'https://ib2.hulu.com/company_logo/10?bg=dim&color=0&format=png®ion=US'} +... 
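To make the JSONB columns above more concrete, here is a minimal illustrative query (a sketch for readers of this dump, not one of the benchmark's gold SQL statements) that unpacks the `brandingassets` column of `companies` with PostgreSQL's `->>` operator:

```sql
-- Illustrative sketch only: list each company with fields pulled out of the
-- brandingassets JSONB column shown in the sample rows above.
SELECT company_name,
       brandingassets ->> 'KeyArt_URL'              AS key_art_url,
       (brandingassets ->> 'HasLogo_Flag') = 'True' AS has_logo
FROM companies
ORDER BY company_name;
```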
+ + +CREATE TABLE "show_rollups" ( +srkeys bigint NOT NULL, +srlinks bigint NOT NULL, +launchmoment text NULL, +latestadd text NULL, +contentvols jsonb NULL, +html5metrics jsonb NULL, +ratinginfo jsonb NULL, + PRIMARY KEY (srkeys, srlinks), + FOREIGN KEY (srkeys) REFERENCES core(content_key), + FOREIGN KEY (srlinks) REFERENCES rollups(tierkey) +); + +First 3 rows: + srkeys srlinks launchmoment latestadd contentvols html5metrics ratinginfo +-------- --------- -------------- ----------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------- + 6979 2 2011-06-15 12/8/2016 {'standard_content': {'Ep_Vol': 274, 'Clip_Vol': 3874, 'Game_Vol': 0, 'Season_Vol': 20, 'Feature_Vol': 0, 'Media_Total': 4148, 'Trailer_Vol': 0, 'FilmClip_Vol': 0}} {'html5_volumes': {'H5_Clips': 3874, 'H5_Games': 0, 'H5_Episodes': 274, 'H5_Features': 0, 'H5_Trailers': 0, 'H5_FilmClips': 0, 'H5_MediaTotal': 4148}} {'TV_Rating': None, 'Peak_Rating': 'TV-MA', 'Unrated_Vol': 0} + 6979 3 2011-06-15 12/8/2016 {'standard_content': {'Ep_Vol': 274, 'Clip_Vol': 3874, 'Game_Vol': 0, 'Season_Vol': 20, 'Feature_Vol': 0, 'Media_Total': 4148, 'Trailer_Vol': 0, 'FilmClip_Vol': 0}} {'html5_volumes': {'H5_Clips': 3874, 'H5_Games': 0, 'H5_Episodes': 274, 'H5_Features': 0, 'H5_Trailers': 0, 'H5_FilmClips': 0, 'H5_MediaTotal': 4148}} {'TV_Rating': None, 'Peak_Rating': 'TVMA', 'Unrated_Vol': 0} + 6979 4 2011-06-15 12/29/2016 {'standard_content': {'Ep_Vol': 10, 'Clip_Vol': 3865, 'Game_Vol': 0, 'Season_Vol': 6, 'Feature_Vol': 0, 'Media_Total': 3875, 'Trailer_Vol': 0, 'FilmClip_Vol': 0}} {'html5_volumes': {'H5_Clips': 3865, 'H5_Games': 0, 'H5_Episodes': 10, 'H5_Features': 0, 'H5_Trailers': 0, 'H5_FilmClips': 0, 'H5_MediaTotal': 3875}} {'TV_Rating': None, 'Peak_Rating': 'MA', 'Unrated_Vol': 0} +... + + +CREATE TABLE "rollups" ( +tierkey bigint NOT NULL DEFAULT nextval('rollups_tierkey_seq'::regclass), +tiertype USER-DEFINED NOT NULL, + PRIMARY KEY (tierkey) +); + +First 3 rows: + tierkey tiertype +--------- ---------- + 1 free + 2 subscriber + 3 current +... 
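As a sketch of how the nested `contentvols` JSONB above feeds knowledge such as "Content Volume Level Classification" (id 75), the illustrative query below buckets each show/tier row by its media volume. Treating `Media_Total` as the "videos total" and reading "between 200 and 500" as inclusive of 200 are assumptions made only for this example; it is not gold SQL.

```sql
-- Illustrative sketch only: tier each show_rollups row by total media volume
-- (KB id 75), assuming contentvols -> 'standard_content' ->> 'Media_Total'
-- is the relevant volume figure and the 200 boundary is inclusive.
SELECT srkeys,
       srlinks,
       (contentvols -> 'standard_content' ->> 'Media_Total')::int AS media_total,
       CASE
         WHEN (contentvols -> 'standard_content' ->> 'Media_Total')::int > 500  THEN 'High'
         WHEN (contentvols -> 'standard_content' ->> 'Media_Total')::int >= 200 THEN 'Medium'
         ELSE 'Low'
       END AS volume_level
FROM show_rollups;
```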
+ + +CREATE TABLE "core" ( +content_key bigint NOT NULL, +canonical_name text NULL, +content_title text NULL, +series_id bigint NULL, +studiolink bigint NULL, +annotations text NULL, +genreclass jsonb NULL, + PRIMARY KEY (content_key), + FOREIGN KEY (studiolink) REFERENCES companies(entity_key) +); + +First 3 rows: + content_key canonical_name content_title series_id studiolink annotations genreclass +------------- ---------------- --------------- ----------- ------------ ------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 54 family-guy Family Guy 11730 8 {'User_Score': '4.35370739', 'Content_Type': 'show', 'Primary_Genre': 'Animation and Cartoons', 'Hierarchical_Genres': 'Animation and Cartoons~Primetime Animation|Teen|Comedy~Sitcoms'} + 6979 south-park South Park 50003814 99 {'User_Score': '4.36303207', 'Content_Type': 'show', 'Primary_Genre': 'Comedy', 'Hierarchical_Genres': 'Comedy|Animation and Cartoons~Primetime Animation'} + 364490 frasier Frasier 50009965 430 {'User_Score': '4.252930323', 'Content_Type': 'show', 'Primary_Genre': 'Comedy', 'Hierarchical_Genres': 'Comedy~Sitcoms'} +... + + +CREATE TABLE "content_info" ( +content_key bigint NOT NULL, +story_outline text NULL, +mediacounts jsonb NULL, +visualassets jsonb NULL, + PRIMARY KEY (content_key), + FOREIGN KEY (content_key) REFERENCES core(content_key) +); + +First 3 rows: + content_key story_outline mediacounts visualassets +------------- ------------------------------------------------------------------------------------------------------------ ---------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 54 The adventures of an endearingly ignorant dad and his hilariously odd family of middle-class New Englanders. {'content_volumes': {'Film_Clips': 0, 'Clips_Total': 288, 'Videos_Total': 566, 'Episode_Total': 288, 'Feature_Films': 0, 'Seasons_Total': 15}} {'Link_Desc': 'For insider news, photos, and more visit the official Family Guy website', 'KeyArt_URL': 'https://ib.hulu.com/show_key_art/54?size=1600x600®ion=US', 'Art_Copyright': None, 'Thumbnail_URL': 'https://ib.hulu.com/show/54?size=476x268®ion=US'} + 969 America votes in the ultimate talent show to determine which act deserves a million dollars. {'content_volumes': {'Film_Clips': 0, 'Clips_Total': 171, 'Videos_Total': 183, 'Episode_Total': 12, 'Feature_Films': 0, 'Seasons_Total': 1}} {'Link_Desc': None, 'KeyArt_URL': 'https://ib3.hulu.com/show_key_art/969?size=1600x600®ion=US', 'Art_Copyright': None, 'Thumbnail_URL': 'https://ib3.hulu.com/show/969?size=476x268®ion=US'} + 340097 Blue Bloods is a drama about a multi-generational family of cops dedicated to New York City law enforcement. {'content_volumes': {'Film_Clips': 0, 'Clips_Total': 0, 'Videos_Total': 147, 'Episode_Total': 155, 'Feature_Films': 0, 'Seasons_Total': 7}} {'Link_Desc': None, 'KeyArt_URL': 'https://ib.hulu.com/show_key_art/18114?size=1600x600®ion=US', 'Art_Copyright': None, 'Thumbnail_URL': None} +... 
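The `core.studiolink` reference to `companies.entity_key` is the join that studio-level knowledge such as "Studio Activity Index" (id 84) and "Studio Catalog Size" (id 88) is defined over; a minimal illustrative aggregation (again a sketch, not gold SQL) looks like:

```sql
-- Illustrative sketch only: distinct titles per studio via core.studiolink,
-- roughly the Studio Activity Index / Studio Catalog Size knowledge above.
SELECT c.company_name,
       COUNT(DISTINCT k.content_key) AS title_count
FROM core AS k
JOIN companies AS c ON c.entity_key = k.studiolink
GROUP BY c.company_name
ORDER BY title_count DESC;
```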
+ + +CREATE TABLE "availabilitys" ( +content_key bigint NOT NULL, +cache_time text NULL, +auth_name text NULL, +accessflags jsonb NULL, + PRIMARY KEY (content_key), + FOREIGN KEY (content_key) REFERENCES core(content_key) +); + +First 3 rows: + content_key cache_time auth_name accessflags +------------- ------------------------- ----------- -------------------------------------------------------------------------------------------------------------- + 54 2017-08-10T14:53:04+00:00 {'Web_Only': False, 'COPPA_Comp': False, 'Movie_Flag': 'no', 'Showtime_Only': False, 'Subscriber_Only': False} + 6979 2017-08-10T14:14:33+00:00 {'Web_Only': False, 'COPPA_Comp': False, 'Movie_Flag': 'no', 'Showtime_Only': False, 'Subscriber_Only': False} + 53 2017-08-10T14:46:51+00:00 {'Web_Only': False, 'COPPA_Comp': False, 'Movie_Flag': 'no', 'Showtime_Only': False, 'Subscriber_Only': False} +... + + +CREATE TABLE "promo_info" ( +content_key bigint NOT NULL, +tiernotices jsonb NULL, + PRIMARY KEY (content_key), + FOREIGN KEY (content_key) REFERENCES core(content_key) +); + +First 3 rows: + content_key tiernotices +------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 54 {'free_tier': {'Alert_Note': None, 'Avail_Note': 'New episodes are available 8 days after air.', 'Promo_Note': None, 'Expire_Note': None}, 'member_tier': {'Alert_Note': None, 'Avail_Note': 'seasons 1-14 and the current season episodes the day after air', 'Promo_Note': None, 'Expire_Note': None}} + 6979 {'free_tier': {'Alert_Note': None, 'Avail_Note': None, 'Promo_Note': None, 'Expire_Note': None}, 'member_tier': {'Alert_Note': None, 'Avail_Note': 'the entire series', 'Promo_Note': 'Season 5, Episode 3 and Season 14, Episodes 5 and 6 are not available at this time.', 'Expire_Note': 'Episodes from the new season will be available the day after air.'}} + 837041 {'free_tier': {'Alert_Note': None, 'Avail_Note': None, 'Promo_Note': None, 'Expire_Note': None}, 'member_tier': {'Alert_Note': None, 'Avail_Note': None, 'Promo_Note': None, 'Expire_Note': None}} +... diff --git a/insider_trading/insider_trading_column_meaning_base.json b/insider_trading/insider_trading_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..da8bf58886a68ad50afdeec69df0a4f503d5a3d9 --- /dev/null +++ b/insider_trading/insider_trading_column_meaning_base.json @@ -0,0 +1,189 @@ +{ + "insider_trading|traders|TR_KEY": "TEXT. Unique identifier for a trader account. PK. Example: TR73442.", + "insider_trading|traders|typeFlag": "TEXT. Classification flag indicating trader type . Possible values: Broker, Individual, Institution, Market Maker.", + "insider_trading|instruments|SYM_KEY": "TEXT. Unique symbol key identifying a financial instrument. PK. Possible values: AAPL, AMZN, GOOGL, META, MSFT.", + "insider_trading|trader_relationships|rel_root": "TEXT. Root trader referenced in the relationship map. PK. FK to traders(TR_KEY).", + "insider_trading|trader_relationships|map_state": "TEXT. Current mapping state of the relationship graph. Possible values: Complete, Partial, Pending.", + "insider_trading|trader_relationships|addrHits": "BIGINT. Count of address matches across linked traders. 
Possible values: 0, 1, 2, 3, 4, 5.", + "insider_trading|trader_relationships|commPath": "TEXT. Primary communication pathway identified. **NULL means communication path has not been mapped.**. Possible values: Irregular, Regular.", + "insider_trading|trader_relationships|circ_size": "BIGINT. Size of the trader-relationship circle. Example: 32.", + "insider_trading|order_status_types|STAT_TOKEN": "TEXT. Unique token representing an order-status type. PK.", + "insider_trading|trade_records|REC_KEY": "TEXT. Unique key identifying an individual trade record. PK. Example: IT291460.", + "insider_trading|trade_records|snap_ts": "TIMESTAMP. Snapshot timestamp capturing the trade state. **NULL means snapshot time was not captured.**. Example: 2024-11-24 23:31:04.103260.", + "insider_trading|trade_records|tr_anchor": "TEXT. Trader key involved in the trade. FK to traders(TR_KEY).", + "insider_trading|trade_records|sym_anchor": "TEXT. Instrument symbol key traded. FK to instruments(SYM_KEY).", + "insider_trading|trade_records|freq_tag": "TEXT. Frequency-band tag for the trader’s activity. Possible values: High, Low, Medium.", + "insider_trading|trade_records|vol_day": "REAL. Average daily trading volume for the trader-instrument pair. Example: 693469.34.", + "insider_trading|trade_records|pos_avg": "REAL. Average position size held by the trader. Example: 63384.3.", + "insider_trading|trade_records|hold_span": "TEXT. Typical holding-period span . Possible values: Intraday, Long-term, Position, Swing.", + "insider_trading|trade_records|margin_pct": "REAL. Margin percentage utilised in the trades. Example: 74.62.", + "insider_trading|market_conditions|REC_PIN": "TEXT. Primary key linking market-condition metrics to a trade record. PK.", + "insider_trading|market_conditions|vol_ano": "REAL. Volume-anomaly indicator. Example: 2.12.", + "insider_trading|market_conditions|mov_pct": "REAL. Percentage price movement over the observation window. Example: 9.6.", + "insider_trading|market_conditions|spreadTag": "REAL. Bid-ask-spread metric for the instrument. Example: 0.471.", + "insider_trading|market_conditions|impactVal": "TEXT. Estimated market-impact value. Example: 0.05%.", + "insider_trading|market_conditions|mkt_corr": "REAL. Correlation with the broader market index. Example: -0.327.", + "insider_trading|market_conditions|rot_imp": "REAL. Rotation-impact score on sector rotation. Example: -0.793.", + "insider_trading|order_behaviour|REC_NODE": "TEXT. Identifier linking order-behaviour stats to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|order_behaviour|OST_ref": "TEXT. Reference to the order-status type token. FK to order_status_types(STAT_TOKEN).", + "insider_trading|order_behaviour|dark_use": "REAL. Proportion of volume routed to dark pools. **NULL means dark-pool usage data not collected.**. Example: 0.009.", + "insider_trading|order_behaviour|off_mkt": "REAL. Fraction of executions occurring off-exchange. Example: 0.197.", + "insider_trading|order_behaviour|x_freq": "REAL. Execution frequency (orders filled per unit time). Example: 0.069.", + "insider_trading|manipulation_signals|REC_TAG": "TEXT. Identifier linking manipulation-signal metrics to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|manipulation_signals|layer_idx": "TEXT. Index measuring order-layering activity. **NULL means layering index not assigned.**. Possible values: Confirmed, Suspected.", + "insider_trading|manipulation_signals|stuff_idx": "REAL. 
Quote-stuffing index. Example: 0.409.", + "insider_trading|manipulation_signals|ignite_sig": "TEXT. Signal denoting rapid-ignite trading tactics. **NULL means ignition signal not detected.**. Possible values: Strong, Weak.", + "insider_trading|manipulation_signals|close_mark": "TEXT. Indicator of closing-price manipulation. **NULL means close-price marker not set.**. Possible values: Frequent, Occasional.", + "insider_trading|sentiment_analytics|REC_SA": "TEXT. Identifier linking sentiment analytics to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|sentiment_analytics|ins_hold": "REAL. Percentage of insider holdings in the float. Example: 45.91.", + "insider_trading|sentiment_analytics|inst_own": "REAL. Percentage of institutional ownership. **NULL means institutional-ownership data unavailable.**. Example: 78.29.", + "insider_trading|sentiment_analytics|short_rt": "REAL. Short-interest rate for the instrument. **NULL means short-interest rate not provided.**. Example: 17.24.", + "insider_trading|sentiment_analytics|opt_vol": "REAL. Options-trading volume on the instrument. **NULL means options volume value is missing.**. Example: 2.91.", + "insider_trading|sentiment_analytics|pc_ratio": "REAL. Put-call ratio for the instrument. Example: 1.17.", + "insider_trading|sentiment_analytics|iv_rank": "TEXT. Implied-volatility rank classification. Example: 11.39%.", + "insider_trading|sentiment_analytics|uo_act": "TEXT. Unusual-options-activity descriptor. **NULL means no unusual-options activity recorded.**. Possible values: High, Moderate.", + "insider_trading|corporate_events|REC_EVT": "TEXT. Identifier linking corporate-event data to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|corporate_events|evt_near": "TEXT. Flag indicating proximity of an upcoming corporate event. **NULL means no upcoming event recorded.**. Possible values: Earnings, M&A, Restructuring.", + "insider_trading|corporate_events|announce_time": "TEXT. Scheduled or actual announcement time of the event. Possible values: Intraday hrs before, Post-market hrs before, Pre-market hrs before.", + "insider_trading|corporate_events|leak_score": "REAL. Score estimating likelihood of information leakage before the event. Example: 34.04.", + "insider_trading|reg_compliance|REC_COMP": "TEXT. Identifier linking regulatory-compliance data to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|reg_compliance|file_state": "TEXT. Filing-status state. Possible values: Current, Delayed, Missing.", + "insider_trading|reg_compliance|disc_state": "TEXT. Disclosure-status state. Possible values: Full, Non-compliant, Partial.", + "insider_trading|reg_compliance|restrict_win": "TEXT. Trading-restriction window status. **NULL means restriction window not defined.**. Possible values: Blackout, Special.", + "insider_trading|reg_compliance|broker_flag": "TEXT. Flag indicating broker-dealer involvement. **NULL means broker-flag unset.**. Possible values: Complete, Incomplete, Late.", + "insider_trading|reg_compliance|exch_note": "TEXT. Exchange note or comment. **NULL means no exchange note present.**. Possible values: Inquiry, Warning.", + "insider_trading|reg_compliance|invest_stat": "TEXT. Current investment-status designation. **NULL means investment status not specified.**. Possible values: Active, Preliminary.", + "insider_trading|reg_compliance|alert_lvl": "TEXT. Alert-level assigned by compliance monitoring. 
Possible values: Critical, High, Low, Medium.", + "insider_trading|reg_compliance|invest_prior": "TEXT. Investigation-priority indicator. Possible values: High, Low, Medium.", + "insider_trading|reg_compliance|case_flag": "TEXT. Flag indicating an open compliance case. Possible values: Closed, Investigation, Monitoring.", + "insider_trading|reg_compliance|review_freq": "TEXT. Recommended frequency of compliance reviews. **NULL means review frequency not defined.**. Possible values: Daily, Monthly, Weekly.", + "insider_trading|reg_compliance|last_rev": "DATE. Date of the most recent compliance review. Example: 2025/02/02.", + "insider_trading|reg_compliance|next_rev": "DATE. Scheduled date of the next compliance review. Example: March 19, 2025.", + "insider_trading|reg_compliance|mon_inten": "TEXT. Monitoring-intensity setting. Possible values: Enhanced, Intensive, Standard.", + "insider_trading|reg_compliance|sv_sys": "TEXT. Surveillance system used. Possible values: Multiple, Primary, Secondary.", + "insider_trading|reg_compliance|det_meth": "TEXT. Detection method applied. Possible values: Automated, Hybrid, Manual.", + "insider_trading|reg_compliance|fp_rate": "REAL. False-positive rate of the detection system. Example: 0.033.", + "insider_trading|reg_compliance|model_conf": "REAL. Confidence score of the detection model. Example: 0.883.", + "insider_trading|reg_compliance|pat_rec": "REAL. Pattern-recognition metric for compliance. Example: 42.51.", + "insider_trading|reg_compliance|behav_score": "REAL. Behavioural risk score from compliance analytics. Example: 37.76.", + "insider_trading|reg_compliance|net_score": "REAL. Network-based compliance risk score. Example: 68.91.", + "insider_trading|enforcement_actions|REC_ENF": "TEXT. Identifier linking enforcement actions to a market-condition set. PK. FK to market_conditions(REC_PIN).", + "insider_trading|enforcement_actions|abuse_prob": "REAL. Probability that abusive activity occurred. **NULL means abuse probability not calculated.**. Example: 0.177.", + "insider_trading|enforcement_actions|evid_pow": "TEXT. Evidentiary strength supporting enforcement action. Possible values: Moderate, Strong, Weak.", + "insider_trading|enforcement_actions|doc_stat": "TEXT. Status of supporting documentation. Possible values: Complete, Incomplete, Partial.", + "insider_trading|enforcement_actions|act_taken": "TEXT. Enforcement action taken. **NULL means no action recorded.**. Possible values: Restriction, Suspension, Warning.", + "insider_trading|enforcement_actions|esc_lvl": "TEXT. Escalation level applied to the case. **NULL means escalation level not set.**. Possible values: Compliance, Legal, Supervisor.", + "insider_trading|enforcement_actions|legal_state": "TEXT. Current legal-proceeding state. **NULL means legal case state not recorded.**. Possible values: Active, Pending.", + "insider_trading|enforcement_actions|settle_state": "TEXT. Settlement state of the enforcement action. **NULL means settlement state unknown.**. Possible values: Negotiating, Settled.", + "insider_trading|enforcement_actions|rep_impact": "TEXT. Reputational impact assessment. Possible values: Minimal, Moderate, Severe.", + "insider_trading|enforcement_actions|biz_restrict": "TEXT. Business-restriction imposed (if any). **NULL means business restriction not documented.**. Possible values: Full, Partial.", + "insider_trading|enforcement_actions|rem_status": "TEXT. Remediation-status update. 
Possible values: Completed, Not Required, Pending.", + "insider_trading|enforcement_actions|sys_need": "TEXT. Required system changes identified. Possible values: Major, Minor, No.", + "insider_trading|enforcement_actions|policy_need": "TEXT. Required policy changes identified. Possible values: No, Urgent, Yes.", + "insider_trading|enforcement_actions|train_req": "TEXT. Mandatory training requirement specified. **NULL means training requirement not specified.**. Possible values: Comprehensive, Refresher.", + "insider_trading|enforcement_actions|report_state": "TEXT. Reporting-obligation state. Possible values: Automated, Hybrid, Manual.", + "insider_trading|enforcement_actions|retain_stat": "TEXT. Data-retention status. **NULL means retention status not indicated.**. Possible values: Archived, Current, Deleted.", + "insider_trading|enforcement_actions|audit_stat": "TEXT. Audit-status indicator. Possible values: Complete, Missing, Partial.", + "insider_trading|enforcement_actions|conf_lvl": "TEXT. Confidence level in enforcement findings. **NULL means confidence level not assigned.**. Possible values: Highly Sensitive, Normal, Sensitive.", + "insider_trading|enforcement_actions|access_res": "TEXT. Access-restriction outcome after enforcement. Possible values: Internal, Public, R.", + "insider_trading|enforcement_actions|share_state": "TEXT. Data-sharing state following enforcement. Possible values: Allowed, Limited, Prohibited.", + "insider_trading|market_conditions|price_accel": "TEXT. Rate of change in the price volatility per hour squared. A higher value indicates a more volatile price movement over time. Example: 2.32 %/(hour²).", + "insider_trading|market_conditions|liq_imp": "TEXT. The liquidity impact rate, calculated as the USD value of trades per minute. Reflects the instantaneous impact on market liquidity by trading volume. Example: 12,534.56 USD/min.", + "insider_trading|trade_records|bal_turnover": "TEXT. The turnover rate of the trader's balance per day, calculated as the ratio of daily trading volume to the account balance. Example: 2.34 times/day.", + "insider_trading|trade_records|risk_adj_lev": "TEXT. The risk-adjusted leverage ratio, indicating the amount of leverage relative to the trader's risk tolerance. Example: 16.98 USD/risk-point.", + "insider_trading|corporate_events|info_leak_rate": "TEXT. The rate at which information leakage occurs before a corporate event's announcement, measured in score per hour. Example: 0.45 score/hour.", + "insider_trading|sentiment_analytics|short_press_int": "TEXT. The intensity of short selling pressure as a percentage per hour, reflecting how quickly short interest builds up near corporate events. Example: 2.57 %/hour.", + "insider_trading|reg_compliance|reg_resp_spd": "TEXT. The speed of regulatory response, measured in hours per case. A lower value indicates faster response times. Example: 42.3 hours/case.", + "insider_trading|trade_records|vol_adj_lev": "TEXT. The volatility-adjusted leverage, indicating the amount of leverage in relation to market volatility. Example: 0.34 leverage-point/vol-point.", + "insider_trading|market_conditions|ofi_density": "TEXT. The order flow impact density, measured in basis points per million USD of trade volume. Example: 8.76 bps/million USD.", + "insider_trading|reg_compliance|reg_alert_conc": "TEXT. The concentration of regulatory alerts per billion USD of trading volume. Helps identify market manipulation risks. 
Example: 45.2 alerts/billion USD.", + "insider_trading|trader_relationships|insider_net_str": "TEXT. The strength of the insider network, calculated by the number of connected entities per percentage of insider ownership. Example: 12.45 connection-point/ownership%.", + "insider_trading|traders|trader_fin_data": { + "column_meaning": "JSONB column. Trader financial details including balance and risk.", + "fields_meaning": { + "usd_bal": "TEXT. Current account balance denominated in U.S. dollars. Example: $4,692,991.", + "risk_lvl": "TEXT. Qualitative risk-tone category assigned to the trader. Possible values: Aggressive, Conservative, Moderate.", + "age_days": "BIGINT. Number of days since the trader account was created. Example: 1532." + } + }, + "insider_trading|instruments|inst_info": { + "column_meaning": "JSONB column. Instrument data including cap and sector.", + "fields_meaning": { + "cap": "REAL. Market-capitalisation value of the instrument. **NULL means market-cap could not be determined or is not reported.**. Example: 155167483887.68.", + "sector": "TEXT. Sector classification (padded string for alignment). Possible values: Consumer, Energy, Finance, Healthcare, Technology.", + "stream": "TEXT. Industry-stream or sub-sector classification. Possible values: Banking, Biotech, Oil & Gas, Retail, Software." + } + }, + "insider_trading|trader_relationships|trader_links": { + "column_meaning": "JSONB column. Relationship data for traders including contact and links.", + "fields_meaning": { + "link_count": "BIGINT. Number of direct relationship links. **NULL means link count has not been recorded.**. Example: 19.0.", + "contact_share": "TEXT. Shared contact information among related traders. **NULL means no shared-contact data available.**. Possible values: Email, Multiple, Phone.", + "fin_link": "TEXT. Indicator of financial linkage between traders. **NULL means financial linkage data not provided.**. Possible values: Business, Personal.", + "grp_score": "REAL. Group influence or cohesion score. Example: 41.87." + } + }, + "insider_trading|order_status_types|order_status": { + "column_meaning": "JSONB column. Order status types with tracking and spread.", + "fields_meaning": { + "tick_type": "TEXT. Tick-tracking state that the status token maps to (NOT NULL). Possible values: Irregular, Regular, Suspicious.", + "spread_type": "TEXT. Spread-classification that the status token maps to (NOT NULL). Possible values: Limit, Market, Mixed." + } + }, + "insider_trading|trade_records|trade_perf": { + "column_meaning": "JSONB column. Trade performance including win/loss and leverage.", + "fields_meaning": { + "win_pct": "REAL. Percentage of profitable trades for the record. **NULL means win percentage has not been calculated.**. Example: 55.83.", + "pl_ratio": "REAL. Profit-to-loss ratio for the set of trades. Example: 1.18.", + "lev_ratio": "REAL. Leverage ratio applied by the trader. **NULL means leverage ratio is not available.**. Example: 1.81." + } + }, + "insider_trading|market_conditions|market_metrics": { + "column_meaning": "JSONB column. Market conditions data including volatility and correlation.", + "fields_meaning": { + "px_vol": "REAL. Price-weighted volume metric. Example: 0.348.", + "liq_ratio": "REAL. Liquidity ratio for the instrument. **NULL means liquidity ratio could not be determined.**. Example: 0.58.", + "peer_corr": "REAL. Correlation with peer instruments. **NULL means peer correlation metric not computed.**. Example: 0.525." 
+ } + }, + "insider_trading|order_behaviour|order_metrics": { + "column_meaning": "JSONB column. Order behaviour metrics including size and cancellation.", + "fields_meaning": { + "size_var": "REAL. Variance of order sizes placed. Example: 0.791.", + "cancel_pct": "REAL. Percentage of orders cancelled. **NULL means cancellation percentage not computed.**. Example: 0.129.", + "mod_freq": "REAL. Frequency at which orders are modified. Example: 0.46." + } + }, + "insider_trading|manipulation_signals|manip_signals": { + "column_meaning": "JSONB column. Signals related to market manipulation like spoofing.", + "fields_meaning": { + "spoof_prob": "TEXT. Estimated probability of spoofing behaviour. Example: 0.88%.", + "wash_flag": "TEXT. Flag indicating suspected wash-trading. **NULL means wash-trade flag not determined.**. Possible values: High, Low, Medium.", + "front_run": "REAL. Score indicating likelihood of front-running. Example: 39.75." + } + }, + "insider_trading|sentiment_analytics|sentiment_data": { + "column_meaning": "JSONB column. Sentiment data from news, social media, and analysts.", + "fields_meaning": { + "news_score": "REAL. Sentiment score derived from news sources. Example: 0.874.", + "soc_sent": "REAL. Aggregate social-media sentiment score. Example: 0.613.", + "analyst_cnt": "BIGINT. Number of analyst reports considered. Example: 18." + } + }, + "insider_trading|reg_compliance|compliance_data": { + "column_meaning": "JSONB column. Regulatory compliance data including violations and risk.", + "fields_meaning": { + "prev_viol": "BIGINT. Count of previous regulatory violations. Possible values: 0, 1, 2, 3, 4, 5.", + "comp_rate": "TEXT. Compliance rate category. Possible values: A, B, C, D.", + "risk_val": "REAL. Quantitative risk value for compliance breaches. Example: 58.84." + } + }, + "insider_trading|enforcement_actions|enf_actions": { + "column_meaning": "JSONB column. Enforcement actions including penalties and resolutions.", + "fields_meaning": { + "pen_amt": "REAL. Monetary amount of the penalty imposed. Example: 290459.21.", + "res_state": "TEXT. Resolution state of the enforcement case. Possible values: In Progress, Pending, Resolved.", + "pen_flag": "TEXT. Indicator that a penalty was imposed. **NULL means penalty flag not decided.**. Possible values: Ban, Fine, Warning." 
+ } + } +} \ No newline at end of file diff --git a/insider_trading/insider_trading_kb.jsonl b/insider_trading/insider_trading_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..cc140aff74516248b33d890aa43b4b6bee390ac7 --- /dev/null +++ b/insider_trading/insider_trading_kb.jsonl @@ -0,0 +1,74 @@ +{"id": 0, "knowledge": "Daily Turnover Rate (DTR)", "description": "Calculates the ratio of a trader's daily trading volume to their account balance, indicating capital velocity.", "definition": "DTR = \\frac{\\text{Daily Trading Volume}}{\\text{Account Balance}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Order Modification Intensity (OMI)", "description": "Measures how frequently a trader modifies orders relative to their cancellation rate.", "definition": "OMI = \\frac{\\text{Order Modification Frequency}}{1 - \\text{Order Cancellation Percentage}} \\text{ (undefined if cancellation percentage is 1)}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Trader Leverage Exposure (TLE)", "description": "Extracts the leverage ratio from the trader's performance data.", "definition": "TLE = A trader's leverage ratio, typically found within their performance metrics.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Suspicious Activity Index (SAI)", "description": "A composite index attempting to quantify overall suspicious trading behavior based on risk indicators.", "definition": "SAI is a weighted sum of normalized risk indicators, including probabilities for spoofing, front-running, wash trading, and confirmed layering.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Pattern Anomaly Score (PAS)", "description": "Measures the deviation of a trader's pattern similarity from their peer correlation, potentially indicating unique illicit behavior.", "definition": "PAS = |\\text{Pattern Similarity Score} - \\text{Peer Correlation Score}|", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Compliance Recidivism Score (CRS)", "description": "Calculates a score indicating the tendency for repeat compliance issues, adjusted for account age.", "definition": "CRS = \\frac{\\text{Number of Previous Violations}}{\\text{Max}(1, \\frac{\\text{Account Age in Days}}{365})}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Investigation Intensity Index (III)", "description": "Combines behavioral and network analysis scores from an investigation.", "definition": "III = (0.6 \\times \\text{Behavioral Analysis Score}) + (0.4 \\times \\text{Network Analysis Score})", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Sentiment Divergence Factor (SDF)", "description": "Measures the difference between news and social media sentiment scores.", "definition": "SDF = |\\text{News Sentiment Score} - \\text{Social Sentiment Score}|", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Relative Short Interest (RSI)", "description": "Calculates short interest ratio relative to institutional ownership.", "definition": "RSI = \\frac{\\text{Short Interest Ratio}}{\\text{Institutional Ownership Percentage}} \\text{ (undefined if institutional ownership is 0)}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Enforcement Financial Impact Ratio (EFIR)", "description": "Calculates the ratio of 
the penalty amount to the trader's account balance at the time of the related transaction.", "definition": "EFIR = \\frac{\\text{Penalty Amount}}{\\text{Account Balance}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "High-Risk Trader Profile", "description": "Identifies traders exhibiting characteristics associated with high-risk trading strategies.", "definition": "A trader is considered High-Risk if their 'Trader Leverage Exposure' > 5.0 AND their risk level is 'Aggressive' OR their 'Daily Turnover Rate' > 0.5.", "type": "domain_knowledge", "children_knowledge": [0, 2]} +{"id": 11, "knowledge": "Potential Insider Trading Flag", "description": "Flags transactions potentially linked to insider knowledge based on timing and context.", "definition": "A transaction is flagged if its information leakage score > 50.0 AND it is linked to a corporate event AND the event's announcement timing is 'Pre-market' or 'Intraday'.", "type": "domain_knowledge", "children_knowledge": [33, 34]} +{"id": 12, "knowledge": "Market Manipulation Pattern: Layering/Spoofing", "description": "Identifies trading sessions indicative of layering or spoofing tactics.", "definition": "A transaction record suggests Layering/Spoofing if its layering indicator is 'Confirmed' OR (its spoofing probability > 0.75 AND 'Order Modification Intensity' > 1.0).", "type": "domain_knowledge", "children_knowledge": [1, 31, 32]} +{"id": 13, "knowledge": "Collusion Network Indicator", "description": "Suggests potential collusion based on investigation details.", "definition": "A case indicates potential collusion if the trader circle size > 5 AND the group behavior score > 0.6 AND the communication path is 'Regular'.", "type": "domain_knowledge", "children_knowledge": [37]} +{"id": 14, "knowledge": "Elevated Regulatory Scrutiny", "description": "Identifies compliance cases under intense review or investigation.", "definition": "A case is under Elevated Regulatory Scrutiny if its alert level is 'High' or 'Critical' AND its investigation priority is 'High' AND its monitoring intensity is 'Intensive'.", "type": "domain_knowledge", "children_knowledge": [35]} +{"id": 15, "knowledge": "Problematic Compliance History", "description": "Identifies traders with a poor track record of compliance.", "definition": "A trader has a Problematic Compliance History if they have more than 3 previous violations OR their compliance rating is 'C' or 'D' OR their 'Compliance Recidivism Score' > 1.0.", "type": "domain_knowledge", "children_knowledge": [5, 36]} +{"id": 16, "knowledge": "Wash Trading Alert", "description": "Flags transactions highly suspicious for wash trading.", "definition": "A transaction triggers a Wash Trading Alert if its wash trading suspicion level is 'High'.", "type": "domain_knowledge", "children_knowledge": [30]} +{"id": 17, "knowledge": "Event-Driven Trader", "description": "Classifies traders whose activity appears strongly linked to corporate events.", "definition": "A trader may be classified as Event-Driven if a significant portion (>30%) of their transactions are linked to a corporate event.", "type": "domain_knowledge", "children_knowledge": [33]} +{"id": 18, "knowledge": "High Cancellation/Modification Trader", "description": "Identifies traders who frequently cancel or modify orders, potentially indicating manipulative intent or poor execution strategy.", "definition": "A trader is flagged if their average cancellation percentage > 0.5 OR their average 'Order Modification Intensity' > 
1.5 across their transactions.", "type": "domain_knowledge", "children_knowledge": [1]} +{"id": 19, "knowledge": "Significant Enforcement Action", "description": "Categorizes enforcement actions that represent substantial penalties or restrictions.", "definition": "An action is considered a Significant Enforcement Action if the penalty impact is 'Fine' or 'Ban' OR the action taken is 'Suspension' OR the business restriction is 'Full'.", "type": "domain_knowledge", "children_knowledge": [38, 39]} +{"id": 20, "knowledge": "Trader Position Holding Style", "description": "Illustrates the typical duration traders hold their positions, based on their strategy.", "definition": "The position holding span indicates strategy: 'Intraday' for same-day trades; 'Swing' for holds of a few days to weeks; 'Position' for weeks to months; 'Long-term' for months or years.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "Dark Pool Usage Venues", "description": "Explains the nature of dark pool usage indicated in transaction records.", "definition": "Dark pools are private exchanges (e.g., Alternative Trading Systems) used for large orders to reduce market impact. Usage patterns can be analyzed for regulatory compliance or signs of avoiding market transparency.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Off-Market Trading Activity", "description": "Illustrates types of trading activity occurring outside public exchanges.", "definition": "Off-market activity describes trades not on lit exchanges. 'Internal crosses' involve a broker matching buy/sell orders from their own clients internally, which requires monitoring for fairness.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Order Type Distribution", "description": "Illustrates the mix of primary order types used by a trader.", "definition": "The distribution of order types shows strategy: 'Market' orders prioritize speed; 'Limit' orders prioritize price; 'Mixed' suggests a combination of strategies.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Momentum Ignition Signals", "description": "Explains the signals related to attempting to artificially create price momentum.", "definition": "The momentum ignition signal indicates attempts to create false price movement. 'Strong' suggests clear patterns of manipulative intent; 'Weak' suggests such patterns are less evident.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Marking the Close Patterns", "description": "Explains the patterns associated with influencing the closing price of a security.", "definition": "Marking the close refers to trading near market close to manipulate the closing price. 'Frequent' indicates repeated activity; 'Occasional' suggests it is infrequent. This is a prohibited practice.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Unusual Option Activity Level", "description": "Illustrates the degree of detected unusual options trading volume or types.", "definition": "The level of unusual options activity indicates deviations from normal patterns. 'High' suggests significant deviations, a potential flag for insider information. 
'Moderate' indicates some unusual activity, but less pronounced.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Information Leakage Score Interpretation", "description": "Provides context for the information leakage score, indicating potential trading on non-public information.", "definition": "A score from 0-100. Low scores (<20) suggest little evidence of informed trading. Moderate scores (20-50) warrant attention. High scores (>50) strongly suggest trading based on non-public information and require investigation.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Pattern Similarity Score Context", "description": "Provides context for the pattern similarity score, comparing trading to known illicit behaviors.", "definition": "A score from 0-1. Values near 1 indicate high similarity to known illicit trading patterns. Values near 0 indicate patterns do not strongly match known manipulative techniques.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Trading Restriction Period Types", "description": "Explains the types of trading restrictions imposed as part of enforcement.", "definition": "The type of trading restriction. 'Blackout' is a complete prohibition on trading. 'Special' indicates other, specific restrictions tailored to a case, such as position size limits or pre-trade approvals.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Risk-Adjusted Turnover (RAT)", "description": "Calculates trader turnover scaled by their leverage exposure.", "definition": "RAT = 'Daily Turnover Rate' \\times 'Trader Leverage Exposure'", "type": "calculation_knowledge", "children_knowledge": [0, 2]} +{"id": 31, "knowledge": "Combined Manipulation Indicator (CMI)", "description": "A combined score reflecting both general suspicious activity and specific pattern anomalies.", "definition": "CMI = ('Suspicious Activity Index' + 'Pattern Anomaly Score') / 2", "type": "calculation_knowledge", "children_knowledge": [3, 4]} +{"id": 32, "knowledge": "Compliance Health Score (CHS)", "description": "Inverse score reflecting compliance history severity, penalizing high recidivism and poor ratings.", "definition": "CHS = \\frac{1}{1 + \\text{'Compliance Recidivism Score'} \\times \\text{A numeric value mapped from the 'Compliance Rating Grade'}}", "type": "calculation_knowledge", "children_knowledge": [5, 71]} +{"id": 33, "knowledge": "Weighted Investigation Score (WIS)", "description": "Combines raw investigation scores with the current alert severity level.", "definition": "WIS = 'Investigation Intensity Index' \\times A numeric multiplier based on Alert Level Severity ('Low'=1, 'High'=3, etc.)", "type": "calculation_knowledge", "children_knowledge": [6, 35]} +{"id": 34, "knowledge": "Sentiment-Weighted Option Volume (SWOV)", "description": "Adjusts the option volume ratio based on the divergence between news and social sentiment.", "definition": "SWOV = \\text{Option Volume Ratio} \\times (1 + \\text{'Sentiment Divergence Factor'})", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 35, "knowledge": "Logarithmic Enforcement Fine Impact (LEFI)", "description": "Calculates the log-scaled financial impact ratio of enforcement fines, emphasizing order of magnitude.", "definition": "LEFI = 'Enforcement Financial Impact Ratio' \\times \\log_{10}(\\text{Max}(10, \\text{Penalty Amount}))", "type": "calculation_knowledge", "children_knowledge": [9]} +{"id": 36, 
"knowledge": "Aggressive Trading Intensity (ATI)", "description": "Measures intensity by combining high turnover, leverage, and order modification frequency.", "definition": "ATI = 'Daily Turnover Rate' \\times 'Trader Leverage Exposure' \\times 'Order Modification Intensity'.", "type": "calculation_knowledge", "children_knowledge": [0, 1, 2]} +{"id": 37, "knowledge": "Suspicion-Weighted Turnover (SWT)", "description": "Calculates daily turnover weighted by the Suspicious Activity Index.", "definition": "SWT = 'Suspicious Activity Index' \\times 'Daily Turnover Rate'", "type": "calculation_knowledge", "children_knowledge": [0, 3]} +{"id": 38, "knowledge": "Boosted Insider Leakage Score (BILS)", "description": "Increases the Information Leakage Score if a Potential Insider Trading Flag is also present.", "definition": "BILS = The raw 'Information Leakage Score' \\times (1.5 \\text{ if 'Potential Insider Trading Flag' is True else } 1.0)", "type": "calculation_knowledge", "children_knowledge": [11, 27]} +{"id": 39, "knowledge": "Market-Adjusted Pattern Anomaly (MAPA)", "description": "Calculates pattern anomaly score adjusted for market correlation, highlighting non-market related deviations.", "definition": "MAPA = 'Pattern Anomaly Score' \\times (1 - \\text{Market Correlation})", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 40, "knowledge": "High-Frequency High-Risk Trader", "description": "Identifies traders classified as High-Risk who also operate at high frequency.", "definition": "A trader matching the 'High-Risk Trader Profile' AND whose trading frequency scope is 'High'.", "type": "domain_knowledge", "children_knowledge": [10]} +{"id": 41, "knowledge": "Suspected Event-Driven Insider", "description": "Flags traders identified as event-driven who also trigger potential insider trading alerts.", "definition": "A trader who meets the criteria for 'Event-Driven Trader' AND for whom the 'Potential Insider Trading Flag' is True.", "type": "domain_knowledge", "children_knowledge": [11, 17]} +{"id": 42, "knowledge": "Confirmed Manipulator Under Scrutiny", "description": "Identifies traders with confirmed manipulative patterns whose cases are under high scrutiny.", "definition": "A trader exhibiting a confirmed 'Market Manipulation Pattern: Layering/Spoofing' AND whose case status is 'Elevated Regulatory Scrutiny'.", "type": "domain_knowledge", "children_knowledge": [12, 14]} +{"id": 43, "knowledge": "High-Risk Collusion Group Member", "description": "Identifies traders within a suspected collusion network who individually exhibit high-risk behavior.", "definition": "A trader flagged by the 'Collusion Network Indicator' AND who also meets the 'High-Risk Trader Profile' criteria.", "type": "domain_knowledge", "children_knowledge": [10, 13]} +{"id": 44, "knowledge": "Chronic Compliance Violator", "description": "Identifies traders with a problematic history and a high recidivism score.", "definition": "A trader identified as having a 'Problematic Compliance History' AND whose 'Compliance Recidivism Score' is greater than 1.5.", "type": "domain_knowledge", "children_knowledge": [5, 15]} +{"id": 45, "knowledge": "High-Volume Wash Trading Concern", "description": "Flags traders with wash trading alerts who also trade significant volume.", "definition": "A trader triggering a 'Wash Trading Alert' AND whose daily trading volume exceeds 1,000,000.", "type": "domain_knowledge", "children_knowledge": [16]} +{"id": 46, "knowledge": "Aggressive Event Speculator", "description": 
"Classifies event-driven traders who employ an aggressive risk strategy.", "definition": "A trader classified as an 'Event-Driven Trader' AND whose risk appetite is 'Aggressive'.", "type": "domain_knowledge", "children_knowledge": [17, 23]} +{"id": 47, "knowledge": "Potentially Evasive Order Modifier", "description": "Flags high cancellation/modification traders who make significant use of dark pools.", "definition": "A trader identified as a 'High Cancellation/Modification Trader' AND whose transactions show 'Dark Pool Usage' in more than 50% of instances.", "type": "domain_knowledge", "children_knowledge": [18, 21]} +{"id": 48, "knowledge": "Financially Impactful Enforcement Case", "description": "Identifies traders who faced significant enforcement actions with a high financial impact relative to their account size.", "definition": "A trader subject to a 'Significant Enforcement Action' AND whose 'Enforcement Financial Impact Ratio (EFIR)' is greater than 0.1.", "type": "domain_knowledge", "children_knowledge": [9, 19]} +{"id": 49, "knowledge": "Peer Mimicry Suspicion", "description": "Flags traders whose behavior closely matches peers but deviates little from known patterns, potentially mimicking a risky group.", "definition": "A trader with a low 'Pattern Anomaly Score' (e.g., < 0.1) BUT a high peer correlation score (e.g., > 0.7).", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 50, "knowledge": "Investigation Compliance Risk Index (ICRI)", "description": "Combines the weighted investigation score with the inverse compliance health score, highlighting cases that are both problematic and under intense investigation.", "definition": "ICRI = 'Weighted Investigation Score' \\times (1 - 'Compliance Health Score')", "type": "calculation_knowledge", "children_knowledge": [32, 33]} +{"id": 51, "knowledge": "Sentiment-Driven Leakage Risk (SDLR)", "description": "Calculates potential information leakage risk weighted by sentiment-driven unusual option volume.", "definition": "SDLR = 'Sentiment-Weighted Option Volume' \\times \\text{Information Leakage Score}.", "type": "calculation_knowledge", "children_knowledge": [27, 34]} +{"id": 52, "knowledge": "Unique Pattern Deviation Ratio (UPDR)", "description": "Measures the ratio of unique pattern deviation (anomaly) to the similarity with known illicit patterns, indicating how unusual the potentially illicit behavior is.", "definition": "UPDR = \\frac{\\text{'Pattern Anomaly Score'}}{\\text{Max}(0.01, \\text{'Pattern Similarity Score'})}", "type": "calculation_knowledge", "children_knowledge": [4, 28]} +{"id": 53, "knowledge": "Recidivism Enforcement Severity (RES)", "description": "Multiplies the compliance recidivism score by the enforcement financial impact, highlighting costly repeat offenders.", "definition": "RES = 'Compliance Recidivism Score' \\times 'Enforcement Financial Impact Ratio'", "type": "calculation_knowledge", "children_knowledge": [5, 9]} +{"id": 54, "knowledge": "Aggressive Suspicion Score (ASS)", "description": "Combines overall suspicious activity index with aggressive trading intensity, identifying traders who are both suspicious and trade aggressively.", "definition": "ASS = 'Suspicious Activity Index' \\times 'Aggressive Trading Intensity'", "type": "calculation_knowledge", "children_knowledge": [3, 36]} +{"id": 55, "knowledge": "Capital-Adjusted Investigation Intensity (CAII)", "description": "Normalizes the investigation intensity index by the trader's account balance, showing investigation focus relative 
to trader size.", "definition": "CAII = \\frac{\\text{'Investigation Intensity Index'}}{\\text{Max}(1000, \\text{Account Balance})}", "type": "calculation_knowledge", "children_knowledge": [6]} +{"id": 56, "knowledge": "Market-Agnostic Suspicion Index (MASI)", "description": "Combines the general suspicion index with market-adjusted pattern anomaly, focusing on suspicious activity independent of market moves.", "definition": "MASI = ('Suspicious Activity Index' + 'Market-Adjusted Pattern Anomaly') / 2", "type": "calculation_knowledge", "children_knowledge": [3, 39]} +{"id": 57, "knowledge": "Cross-Modification Ratio (CMR)", "description": "Calculates the ratio of cross-trade frequency to order modification intensity, potentially indicating coordinated or manipulative crossing activity.", "definition": "CMR = \\frac{\\text{Cross-Trade Frequency}}{\\text{Max}(0.01, \\text{'Order Modification Intensity'})}", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 58, "knowledge": "Insider Sentiment Short Ratio (ISSR)", "description": "Combines boosted insider leakage score with relative short interest, identifying potential insider trading concurrent with high relative short interest.", "definition": "ISSR = 'Boosted Insider Leakage Score' \\times 'Relative Short Interest'", "type": "calculation_knowledge", "children_knowledge": [8, 38]} +{"id": 59, "knowledge": "Risk-Adjusted Win Rate (RAWR)", "description": "Calculates the trader's historical win percentage adjusted for their leverage exposure.", "definition": "RAWR = \\frac{\\text{Historical Win Percentage}}{\\text{Max}(1, \\text{'Trader Leverage Exposure'})}", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 60, "knowledge": "High-Risk Manipulator Candidate", "description": "Identifies traders flagged for both high-risk profiles and specific market manipulation patterns.", "definition": "A trader who meets the 'High-Risk Trader Profile' AND is flagged for 'Market Manipulation Pattern: Layering/Spoofing'.", "type": "domain_knowledge", "children_knowledge": [10, 12]} +{"id": 61, "knowledge": "Escalated Compliance Failure", "description": "Identifies traders with a problematic compliance history who have now incurred significant enforcement actions.", "definition": "A trader identified with 'Problematic Compliance History' AND subject to a 'Significant Enforcement Action'.", "type": "domain_knowledge", "children_knowledge": [15, 19]} +{"id": 62, "knowledge": "Networked Mimicry Risk", "description": "Flags traders suspected of peer mimicry who are also part of an identified potential collusion network.", "definition": "A trader flagged for 'Peer Mimicry Suspicion' AND associated with a 'Collusion Network Indicator'.", "type": "domain_knowledge", "children_knowledge": [13, 49]} +{"id": 63, "knowledge": "High-Scrutiny Wash Trading Case", "description": "Identifies compliance cases involving high-volume wash trading concerns that are also under elevated regulatory scrutiny.", "definition": "A compliance case flagged for 'Elevated Regulatory Scrutiny' AND linked to a 'High-Volume Wash Trading Concern'.", "type": "domain_knowledge", "children_knowledge": [14, 45]} +{"id": 64, "knowledge": "Volatile Event Speculator", "description": "Flags aggressive event speculators whose trading coincides with high sentiment divergence, indicating potential reaction to conflicting information.", "definition": "A trader identified as an 'Aggressive Event Speculator' AND associated with a high 'Sentiment Divergence Factor' (e.g., > 
1.0).", "type": "domain_knowledge", "children_knowledge": [7, 46]} +{"id": 65, "knowledge": "Confirmed Evasive Layering/Spoofing", "description": "Identifies traders confirmed to be layering or spoofing who also exhibit high cancellation/modification behavior, suggesting deliberate evasion.", "definition": "A trader flagged as a 'High Cancellation/Modification Trader' AND confirmed via 'Market Manipulation Pattern: Layering/Spoofing' where the layering indicator is 'Confirmed' or the spoofing probability is > 0.75.", "type": "domain_knowledge", "children_knowledge": [12, 18]} +{"id": 66, "knowledge": "High Velocity Suspicion Trader", "description": "Identifies traders exhibiting both high risk-adjusted turnover and a high suspicious activity index.", "definition": "A trader with a high 'Risk-Adjusted Turnover' (e.g., > 1.0) AND a high 'Suspicious Activity Index' (e.g., > 0.6).", "type": "domain_knowledge", "children_knowledge": [3, 30]} +{"id": 67, "knowledge": "High-Intensity Insider Investigation", "description": "Flags investigations triggered by potential insider trading that show high intensity scores, suggesting significant findings.", "definition": "An investigation linked to a 'Potential Insider Trading Flag' AND having a high 'Investigation Intensity Index' (e.g., > 70).", "type": "domain_knowledge", "children_knowledge": [6, 11]} +{"id": 68, "knowledge": "Severe Chronic Violator Case", "description": "Identifies compliance cases under elevated scrutiny involving traders flagged as chronic compliance violators.", "definition": "A compliance case flagged for 'Elevated Regulatory Scrutiny' AND involving a trader identified as a 'Chronic Compliance Violator'.", "type": "domain_knowledge", "children_knowledge": [14, 44]} +{"id": 69, "knowledge": "Costly High-Frequency Risk Enforcement", "description": "Identifies enforcement cases with significant financial impact against traders previously identified as high-frequency, high-risk.", "definition": "An enforcement case identified as 'Financially Impactful' targeting a trader previously flagged as a 'High-Frequency High-Risk Trader'.", "type": "domain_knowledge", "children_knowledge": [40, 48]} +{"id": 70, "knowledge": "High SDLR Transaction", "description": "Identifies transactions deemed high-risk based on their Sentiment-Driven Leakage Risk score exceeding a specific threshold.", "definition": "A transaction where the calculated 'Sentiment-Driven Leakage Risk' > 1000.", "type": "domain_knowledge", "children_knowledge": [51]} +{"id": 71, "knowledge": "Compliance Rating Grade", "description": "Explains the overall compliance assessment grade assigned in compliance cases.", "definition": "The compliance rating grade: 'A' represents excellent compliance; 'B' indicates good compliance with minor issues; 'C' suggests significant compliance deficiencies; 'D' signifies serious or repeated compliance failures.", "type": "value_illustration", "children_knowledge": -1} +{"id": 72, "knowledge": "Premature Resolution Block", "description": "A business rule preventing an enforcement action from being marked as 'Resolved' if associated risk metrics exceed a predefined threshold, ensuring high-risk cases receive sufficient review.", "definition": "Block an enforcement action from being marked 'Resolved' if its linked 'Investigation Intensity Index' is greater than 75.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 73, "knowledge": "Peer Correlation Z-Score", "description": "A normalized score indicating how many standard deviations 
an individual record's peer correlation is away from the average peer correlation of all traders within the same trader category. Used for standardized comparison across different peer groups.", "definition": "Z-Score = (Individual Peer Correlation - Average Peer Correlation for the Trader's Category) / Standard Deviation of Peer Correlation for the Trader's Category. The score is 0 if the standard deviation is zero or not applicable.", "type": "calculation_knowledge", "children_knowledge": -1} diff --git a/insider_trading/insider_trading_schema.txt b/insider_trading/insider_trading_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..3f698dd5619dd30646c843707b1072664a86c55a --- /dev/null +++ b/insider_trading/insider_trading_schema.txt @@ -0,0 +1,298 @@ +"CREATE" TABLE "traders" ( +"TR_KEY" text NOT NULL, +"typeFlag" text NULL, +trader_fin_data jsonb NULL, + "PRIMARY" KEY (TR_KEY) +); + + + +"First" 3 rows: +TR_KEY typeFlag trader_fin_data +-------- ------------ ------------------------------------------------------------------------ +TR73442 Market Maker {'usd_bal': '$4,692,991 ', 'age_days': 1532, 'risk_lvl': 'Conservative'} +TR94368 Broker {'usd_bal': '$6,383,503 ', 'age_days': 2730, 'risk_lvl': 'Aggressive'} +TR32485 Broker {'usd_bal': '$8,042,787 ', 'age_days': 1386, 'risk_lvl': 'Conservative'} +... + + +"CREATE" TABLE "instruments" ( +"SYM_KEY" text NOT NULL, +inst_info jsonb NULL, + "PRIMARY" KEY (SYM_KEY) +); + + + +"First" 3 rows: +SYM_KEY inst_info +--------- ----------------------------------------------------------------- +AAPL {'cap': 155167000000, 'sector': 'Energy', 'stream': 'Oil & Gas'} +AMZN {'cap': 12840133000, 'sector': 'Healthcare', 'stream': 'Banking'} +GOOGL {'cap': 919192000000, 'sector': 'Consumer', 'stream': 'Banking'} +... + + +"CREATE" TABLE "trader_relationships" ( +rel_root text NOT NULL, +map_state text NULL, +"addrHits" bigint NULL, +"commPath" text NULL, +circ_size bigint NULL, +insider_net_str text NULL, +trader_links jsonb NULL, + "PRIMARY" KEY (rel_root), + "FOREIGN" KEY (rel_root) REFERENCES traders(TR_KEY) +); + + + +"First" 3 rows: +rel_root map_state addrHits commPath circ_size insider_net_str trader_links +---------- ----------- ---------- ---------- ----------- --------------------------------- ------------------------------------------------------------------------------------------ +TR73442 Partial 3 Regular 32 18.68 connection-point/ownership% {'fin_link': 'Business', 'grp_score': 41.87, 'link_count': 19, 'contact_share': 'Email'} +TR94368 Partial 0 Irregular 50 16.65 connection-point/ownership% {'fin_link': 'Business', 'grp_score': 74.18, 'link_count': 8, 'contact_share': 'Email'} +TR32485 Pending 1 Irregular 39 8.81 connection-point/ownership% {'fin_link': 'Business', 'grp_score': 55.24, 'link_count': None, 'contact_share': 'Email'} +... 
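The `trader_fin_data` JSONB column above stores formatted strings (e.g. `'$4,692,991 '`) rather than numerics. Below is a minimal PostgreSQL sketch of the kind of JSONB extraction these tables call for, assuming the balance strings can be normalized by stripping `$`, commas, and whitespace; that cleaning rule is an assumption of this example, not part of the benchmark's ground truth.

```sql
-- Sketch: average account balance per trader type and risk level, pulling
-- fields out of the trader_fin_data JSONB column. usd_bal is a formatted
-- string (e.g. '$4,692,991 '), so it is cleaned before casting (assumed rule).
SELECT
    t."typeFlag"                               AS trader_type,
    t.trader_fin_data ->> 'risk_lvl'           AS risk_level,
    AVG(REPLACE(REPLACE(TRIM(t.trader_fin_data ->> 'usd_bal'), '$', ''), ',', '')::numeric)
                                               AS avg_usd_balance
FROM traders t
GROUP BY t."typeFlag", t.trader_fin_data ->> 'risk_lvl'
ORDER BY avg_usd_balance DESC;
```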
+ + +"CREATE" TABLE "trade_records" ( +"REC_KEY" text NOT NULL, +snap_ts text NULL, +tr_anchor text NULL, +sym_anchor text NULL, +freq_tag text NULL, +vol_day real NULL, +pos_avg real NULL, +hold_span text NULL, +margin_pct real NULL, +vol_adj_lev text NULL, +risk_adj_lev text NULL, +bal_turnover text NULL, +trade_perf jsonb NULL, + "PRIMARY" KEY (REC_KEY), + "FOREIGN" KEY (tr_anchor) REFERENCES traders(TR_KEY), + "FOREIGN" KEY (sym_anchor) REFERENCES instruments(SYM_KEY) +); + + + +"First" 3 rows: +REC_KEY snap_ts tr_anchor sym_anchor freq_tag vol_day pos_avg hold_span margin_pct vol_adj_lev risk_adj_lev bal_turnover trade_perf +--------- --------- ----------- ------------ ---------- --------- --------- ----------- ------------ ----------------------------- -------------------- -------------- ------------------------------------------------------- +IT291460 TR73442 AAPL Low 693469 63384.3 Intraday 74.62 0.19 leverage-point/vol-point 20.18 USD/risk-point 2.69 times/day {'win_pct': 55.83, 'pl_ratio': 1.18, 'lev_ratio': 1.81} +IT721698 TR16988 GOOGL Low 832045 403075 Long-term 43.93 3.47 leverage-point/vol-point 18.97 USD/risk-point 0.89 times/day {'win_pct': None, 'pl_ratio': 2.7, 'lev_ratio': None} +IT794700 12:24.1 TR25044 GOOGL High 297486 327126 Swing 7.7 0.39 leverage-point/vol-point 23.34 USD/risk-point 2.83 times/day {'win_pct': None, 'pl_ratio': 1.6, 'lev_ratio': None} +... + + +"CREATE" TABLE "market_conditions" ( +"REC_PIN" text NOT NULL, +vol_ano real NULL, +mov_pct real NULL, +"spreadTag" real NULL, +"impactVal" text NULL, +mkt_corr real NULL, +rot_imp real NULL, +ofi_density text NULL, +liq_imp text NULL, +price_accel text NULL, +market_metrics jsonb NULL, + "PRIMARY" KEY (REC_PIN) +); + + + +"First" 3 rows: +REC_PIN vol_ano mov_pct spreadTag impactVal mkt_corr rot_imp ofi_density liq_imp price_accel market_metrics +--------- --------- --------- ----------- ----------- ---------- --------- --------------------- ---------------- --------------- -------------------------------------------------------- +IT291460 2.12 9.6 0.471 0.05% -0.327 -0.793 14.41 bps/million USD 19328.16 USD/min -1.25 %/(hour_) {'px_vol': 0.348, 'liq_ratio': None, 'peer_corr': 0.525} +IT931600 2.3 7.45 0.203 0.04% -0.324 0.371 13.75 bps/million USD 54648.19 USD/min 4.51 %/(hour_) {'px_vol': 0.464, 'liq_ratio': 0.58, 'peer_corr': None} +IT310545 2.58 0.22 0.203 0.02% -0.104 -0.183 1.92 bps/million USD 87421.64 USD/min 2.32 %/(hour_) {'px_vol': 0.351, 'liq_ratio': 0.94, 'peer_corr': None} +... + + +"CREATE" TABLE "order_status_types" ( +"STAT_TOKEN" text NOT NULL, +order_status jsonb NULL, + "PRIMARY" KEY (STAT_TOKEN) +); + + + +"First" 3 rows: +STAT_TOKEN order_status +------------ --------------------------------------------------- +99BYZYKK {'tick_type': 'Irregular', 'spread_type': 'Market'} +IEHVWQ6W {'tick_type': 'Regular', 'spread_type': 'Mixed'} +DA9KLQ5O {'tick_type': 'Irregular', 'spread_type': 'Mixed'} +... 
+ + +"CREATE" TABLE "sentiment_analytics" ( +"REC_SA" text NOT NULL, +ins_hold real NULL, +inst_own real NULL, +short_rt real NULL, +opt_vol real NULL, +pc_ratio real NULL, +iv_rank text NULL, +uo_act text NULL, +short_press_int text NULL, +sentiment_data jsonb NULL, + "PRIMARY" KEY (REC_SA), + "FOREIGN" KEY ("REC_SA") REFERENCES market_conditions(REC_PIN) +); + + + +"First" 3 rows: +REC_SA ins_hold inst_own short_rt opt_vol pc_ratio iv_rank uo_act short_press_int sentiment_data +-------- ---------- ---------- ---------- --------- ---------- --------- -------- ----------------- ------------------------------------------------------------ +IT291460 45.91 nan 2.91 1.17 11.39% Moderate 1.97 %/hour {'soc_sent': 0.613, 'news_score': 0.874, 'analyst_cnt': 18} +IT931600 37.78 nan 1.39 0.85 91.47% 2.37 %/hour {'soc_sent': -0.482, 'news_score': -0.933, 'analyst_cnt': 9} +IT310545 35.22 17.24 3.9 0.65 12.68% 4.27 %/hour {'soc_sent': 0.585, 'news_score': 0.827, 'analyst_cnt': 29} +... + + +"CREATE" TABLE "order_behaviour" ( +"REC_NODE" text NOT NULL, +"OST_ref" text NULL, +dark_use real NULL, +off_mkt real NULL, +x_freq real NULL, +order_metrics jsonb NULL, + "PRIMARY" KEY (REC_NODE), + "FOREIGN" KEY ("REC_NODE") REFERENCES market_conditions(REC_PIN), + "FOREIGN" KEY ("OST_ref") REFERENCES order_status_types(STAT_TOKEN) +); + + + +"First" 3 rows: +REC_NODE OST_ref dark_use off_mkt x_freq order_metrics +---------- --------- ---------- --------- -------- ----------------------------------------------------------- +IT291460 0.009 0.197 0.069 {'mod_freq': 0.46, 'size_var': 0.791, 'cancel_pct': 0.129} +IT931600 0.098 0.069 0.074 {'mod_freq': 0.467, 'size_var': 1.507, 'cancel_pct': 0.184} +IT310545 0.26 0.076 0.08 {'mod_freq': 0.175, 'size_var': 0.759, 'cancel_pct': None} +... + + +"CREATE" TABLE "manipulation_signals" ( +"REC_TAG" text NOT NULL, +layer_idx text NULL, +stuff_idx real NULL, +ignite_sig text NULL, +close_mark text NULL, +manip_signals jsonb NULL, + "PRIMARY" KEY (REC_TAG), + "FOREIGN" KEY ("REC_TAG") REFERENCES market_conditions(REC_PIN) +); + + + +"First" 3 rows: +REC_TAG layer_idx stuff_idx ignite_sig close_mark manip_signals +--------- ----------- ----------- ------------ ------------ --------------------------------------------------------------- +IT291460 0.409 Strong Occasional {'front_run': 39.75, 'wash_flag': 'Low', 'spoof_prob': '0.88%'} +IT931600 Confirmed 0.271 Weak Occasional {'front_run': 65.34, 'wash_flag': None, 'spoof_prob': '0.21%'} +IT310545 0.369 Frequent {'front_run': 61.14, 'wash_flag': None, 'spoof_prob': '0.39%'} +... + + +"CREATE" TABLE "corporate_events" ( +"REC_EVT" text NOT NULL, +evt_near text NULL, +announce_time text NULL, +leak_score real NULL, +info_leak_rate text NULL, + "PRIMARY" KEY (REC_EVT), + "FOREIGN" KEY ("REC_EVT") REFERENCES market_conditions(REC_PIN) +); + + + +"First" 3 rows: +REC_EVT evt_near announce_time leak_score info_leak_rate +--------- ------------- --------------------- ------------ ---------------- +IT291460 Earnings Pre-market hrs before 34.04 0.57 score/hour +IT931600 Restructuring Intraday hrs before 41.03 0.81 score/hour +IT310545 M&A Pre-market hrs before 12.66 0.76 score/hour +... 
+ + +"CREATE" TABLE "reg_compliance" ( +"REC_COMP" text NOT NULL, +file_state text NULL, +disc_state text NULL, +restrict_win text NULL, +broker_flag text NULL, +exch_note text NULL, +invest_stat text NULL, +alert_lvl text NULL, +invest_prior text NULL, +case_flag text NULL, +review_freq text NULL, +last_rev date NULL, +next_rev date NULL, +mon_inten text NULL, +sv_sys text NULL, +det_meth text NULL, +fp_rate real NULL, +model_conf real NULL, +pat_rec real NULL, +behav_score real NULL, +net_score real NULL, +reg_alert_conc text NULL, +reg_resp_spd text NULL, +compliance_data jsonb NULL, + "PRIMARY" KEY (REC_COMP), + "FOREIGN" KEY ("REC_COMP") REFERENCES market_conditions(REC_PIN) +); + + + +"First" 3 rows: +REC_COMP file_state disc_state restrict_win broker_flag exch_note invest_stat alert_lvl invest_prior case_flag review_freq last_rev next_rev mon_inten sv_sys det_meth fp_rate model_conf pat_rec behav_score net_score reg_alert_conc reg_resp_spd compliance_data +---------- ------------ ------------- -------------- ------------- ----------- ------------- ----------- -------------- ------------- ------------- ---------- ---------- ----------- --------- ---------- --------- ------------ --------- ------------- ----------- ------------------------ ----------------- ----------------------------------------------------- +IT291460 Delayed Full Warning Preliminary Low Low Investigation Monthly 2025-02-02 2025-03-19 Enhanced Secondary Automated 0.033 0.883 42.51 37.76 68.91 91.36 alerts/billion USD 130.00 hours/case {'risk_val': 58.84, 'comp_rate': 'B', 'prev_viol': 1} +IT931600 Missing Full Inquiry Preliminary Low Low Investigation Monthly 2025-02-14 2025-03-07 Intensive Secondary Automated 0.137 0.613 12.01 92.39 77.42 52.54 alerts/billion USD 35.30 hours/case {'risk_val': 49.31, 'comp_rate': 'A', 'prev_viol': 1} +IT310545 Delayed Non-compliant Blackout Warning Critical Low Closed Monthly 2025-02-07 2025-02-22 Intensive Secondary Hybrid 0.197 0.73 88.2 73.36 47.35 72.49 alerts/billion USD 174.61 hours/case {'risk_val': 56.81, 'comp_rate': 'D', 'prev_viol': 5} +... 
+ + +"CREATE" TABLE "enforcement_actions" ( +"REC_ENF" text NOT NULL, +abuse_prob real NULL, +evid_pow text NULL, +doc_stat text NULL, +act_taken text NULL, +esc_lvl text NULL, +legal_state text NULL, +settle_state text NULL, +rep_impact text NULL, +biz_restrict text NULL, +rem_status text NULL, +sys_need text NULL, +policy_need text NULL, +train_req text NULL, +report_state text NULL, +retain_stat text NULL, +audit_stat text NULL, +conf_lvl text NULL, +access_res text NULL, +share_state text NULL, +enf_actions jsonb NULL, + "PRIMARY" KEY (REC_ENF), + "FOREIGN" KEY ("REC_ENF") REFERENCES market_conditions(REC_PIN) +); + + + +"First" 3 rows: +REC_ENF abuse_prob evid_pow doc_stat act_taken esc_lvl legal_state settle_state rep_impact biz_restrict rem_status sys_need policy_need train_req report_state retain_stat audit_stat conf_lvl access_res share_state enf_actions +--------- ------------ ---------- ---------- ----------- --------- ------------- -------------- ------------ -------------- ------------ ---------- ------------- ----------- -------------- ------------- ------------ ---------------- ------------ ------------- ------------------------------------------------------------------------ +IT291460 0.177 Strong Incomplete Warning Pending Negotiating Severe Not Required Minor Yes Hybrid Archived Complete Internal Limited {'pen_amt': 290459.22, 'pen_flag': None, 'res_state': 'Resolved'} +IT931600 nan Weak Incomplete Moderate Pending Minor No Hybrid Archived Missing Highly Sensitive Internal Prohibited {'pen_amt': 76804.63, 'pen_flag': 'Warning', 'res_state': 'In Progress'} +IT310545 nan Strong Partial Restriction Legal Active Settled Moderate Pending Major Yes Refresher Automated Deleted Complete Sensitive Internal Prohibited {'pen_amt': 504727.53, 'pen_flag': None, 'res_state': 'In Progress'} +... diff --git a/labor_certification_applications/labor_certification_applications_column_meaning_base.json b/labor_certification_applications/labor_certification_applications_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..114c2c55eadab5287fbfb075feba2199546011fe --- /dev/null +++ b/labor_certification_applications/labor_certification_applications_column_meaning_base.json @@ -0,0 +1,155 @@ +{ + "labor_certification_applications|employer|corpHandle": "TEXT. Unique identifier for the employer. PK.", + "labor_certification_applications|employer_poc|contactMail": "TEXT. Employer point of contact email address. PK. Example: skaul@avanthealthcare.com.", + "labor_certification_applications|employer_poc|firmLink": "TEXT. Reference to the employer’s firm. FK to employer.corpHandle.", + "labor_certification_applications|employer_poc|firmZip": "TEXT. Reference to the employer’s postal code. FK to employer.ZipRef.", + "labor_certification_applications|attorney|lawMail": "TEXT. Attorney's email address. PK.", + "labor_certification_applications|preparer|prepMail": "TEXT. Preparer’s email address. PK.", + "labor_certification_applications|preparer|PrepLname": "TEXT. Last name of the preparer. Example: Peace.", + "labor_certification_applications|preparer|PrepFname": "TEXT. First name of the preparer. Example: Tyler.", + "labor_certification_applications|preparer|PrepMI": "TEXT. Middle initial of the preparer. Example: J.", + "labor_certification_applications|preparer|prepBiz": "TEXT. Business name of the preparer. **NULL means no business name provided.**. Example: Musillo Unkenholt, LLC..", + "labor_certification_applications|worksite|w_addr1": "TEXT. 
Primary address line of the worksite. PK.", + "labor_certification_applications|worksite|wCITY": "TEXT. City of the worksite. PK.", + "labor_certification_applications|worksite|wSTATE": "TEXT. State of the worksite. PK.", + "labor_certification_applications|worksite|wZip": "TEXT. Postal code of the worksite. PK.", + "labor_certification_applications|worksite|SecEnt": "TEXT. Secondary entity associated with the worksite. **NULL means no secondary entity specified.**. Possible values: No, Yes.", + "labor_certification_applications|worksite|SecEntName": "TEXT. Name of the secondary entity. **NULL means no secondary entity name provided.**. Example: Billings Clinic Health System.", + "labor_certification_applications|worksite|Waddr_2": "TEXT. Secondary address line of the worksite. **NULL means no secondary address provided.**. Example: 800.", + "labor_certification_applications|worksite|wCnty": "TEXT. County of the worksite. **NULL means no county specified.**. Example: YELLOWSTONE.", + "labor_certification_applications|prevailing_wage|trackNo": "TEXT. Unique tracking number for the prevailing wage record. PK.", + "labor_certification_applications|cases|fileKey": "TEXT. Unique case number identifier. PK.", + "labor_certification_applications|cases|statusTag": "TEXT. Current status of the case. Possible values: CERTIFIED, Certified, certified.", + "labor_certification_applications|cases|recvDay": "TEXT. Date the case was received. Possible values: 2023/12/21.", + "labor_certification_applications|cases|decisionDAY": "TEXT. Date the case decision was made. **NULL means decision date not provided.**. Possible values: 2023-12-29.", + "labor_certification_applications|cases|OrigCertDay": "DATE. Original certification date. **NULL means certification date not provided.**. Possible values: .", + "labor_certification_applications|cases|visaCls": "enum_visa_class. Visa class for the case. Possible values: E-3 Australian, H-1B, H-1B1 Chile, H-1B1 Singapore.", + "labor_certification_applications|cases|jobTag": "TEXT. Job title associated with the case. Example: Registered Nurse.", + "labor_certification_applications|cases|SocCd": "TEXT. Standard Occupational Classification (SOC) code. Example: 29-1141.00.", + "labor_certification_applications|cases|socTitle": "TEXT. SOC title for the job role. Example: Registered Nurses.", + "labor_certification_applications|cases|FullTimeInd": "TEXT. Full-time employment indicator (Y/N). Possible values: 0, 1, N, Y.", + "labor_certification_applications|cases|beginDay": "TEXT. Start date of employment. Example: 21/12/2023.", + "labor_certification_applications|cases|endDay": "TEXT. End date of employment. **NULL means no end date provided.**. Example: 2026 20th Dec..", + "labor_certification_applications|cases|headCt": "BIGINT. Total number of worker positions in the case. Possible values: 1, 3, 4, 5, 6, 10, 25, 50.", + "labor_certification_applications|cases|newEmp": "BIGINT. Number of new employment positions. Possible values: 0, 1, 2, 5, 10, 20.", + "labor_certification_applications|cases|contEmp": "BIGINT. Number of continued employment positions. Possible values: 0, 1, 2, 5, 10.", + "labor_certification_applications|cases|changePrev": "BIGINT. Number of changes from previous employment. Possible values: 0, 1, 2, 5.", + "labor_certification_applications|cases|concurrentlyNew": "BIGINT. Number of concurrent new employment positions. Possible values: 0, 1.", + "labor_certification_applications|cases|changeFirm": "BIGINT. Number of employer changes. 
Possible values: 0, 1, 2, 5, 10.", + "labor_certification_applications|cases|amendFlag": "BIGINT. Flag indicating if the petition is amended. Possible values: 0, 1, 2, 5.", + "labor_certification_applications|cases|siteSlots": "BIGINT. Total number of worksite locations. Possible values: 1, 2, 3, 4, 5, 10.", + "labor_certification_applications|cases|AgreeLC": "TEXT. Agreement to the labor condition statement. Possible values: Yes.", + "labor_certification_applications|cases|h1bDep": "enum_h1b_dependent. Indicates if the applicant is a dependent under H-1B. Possible values: No, Yes.", + "labor_certification_applications|cases|willfulV": "enum_willful_violator. Indicates if the employer is a willful violator. Possible values: No.", + "labor_certification_applications|cases|SupportH": "enum_support_h1b. Indicates if the employer supports H-1B. Possible values: Yes.", + "labor_certification_applications|cases|statBasis": "TEXT. Statutory basis for the case. Possible values: $60,000 or higher annual wage, Both $60,000 or higher in annual wage and Masters Degree or higher in related specialty.", + "labor_certification_applications|cases|appA": "TEXT. Indicates if Appendix A is attached. Possible values: .", + "labor_certification_applications|cases|pubDisc": "TEXT. Public disclosure status. Possible values: Disclose Business, Disclose Business and Employment, Disclose Employment.", + "labor_certification_applications|cases|homeFirm": "TEXT. Employer reference. FK to employer.corpHandle. Example: Avant Healthcare Professionals, LLC..", + "labor_certification_applications|cases|homeZip": "TEXT. Employer postal code reference. FK to employer.ZipRef. Example: 32751.", + "labor_certification_applications|cases|prepLink": "TEXT. Preparer’s email address. FK to preparer.prepMail. Example: tyler.peace@muimmigration.com.", + "labor_certification_applications|case_attorney|docketKey": "TEXT. Case number reference. FK to cases.fileKey.", + "labor_certification_applications|case_attorney|counselMail": "TEXT. Attorney's email address. FK to attorney.lawMail. Example: tyler.peace@muimmigration.com.", + "labor_certification_applications|case_attorney|counselFor": "enum_agent_representing_employer. Indicates if the attorney is representing the employer. Possible values: No, Yes.", + "labor_certification_applications|case_worksite|dockKey": "TEXT. Case number reference. FK to cases.fileKey. PK. Example: I-200-23355-584296.", + "labor_certification_applications|case_worksite|ws_addr1": "TEXT. Primary worksite address line. PK. Example: 2800 10th Avenue North.", + "labor_certification_applications|case_worksite|wsCity": "TEXT. Worksite city. PK. Example: Billings.", + "labor_certification_applications|case_worksite|wsState": "TEXT. Worksite state. PK. Example: MT.", + "labor_certification_applications|case_worksite|wsZip": "TEXT. Worksite postal code. PK. Example: 59101.", + "labor_certification_applications|case_worksite|wsHeads": "BIGINT. Number of workers at the worksite. Possible values: 1, 3, 4, 5, 6, 10, 25, 50.", + "labor_certification_applications|case_worksite|wageTrack": "TEXT. Wage tracking number reference. FK to prevailing_wage.trackNo. Example: 1.", + "labor_certification_applications|employer|employer_contact_info": { + "column_meaning": "JSONB column. Groups the employer's full address, contact details, and classification data into a structured JSONB format.", + "fields_meaning": { + "address": { + "line1": "TEXT. Employer’s primary address line. Example: 2301 Lucien Way.", + "line2": "TEXT. 
Employer’s secondary address line. **NULL means no secondary address provided.**. Example: Suite 360.", + "city": "TEXT. City of the employer’s location. Example: Maitland.", + "state": "TEXT. State of the employer’s location. Example: FL.", + "zip": "TEXT. Postal code for the employer’s address. PK.", + "country": "TEXT. Country of the employer. **NULL means no country specified.**. Possible values: UNITED STATES OF AMERICA.", + "province": "TEXT. Province or region of the employer. **NULL means no province specified.**. Example: TX." + }, + "phone": { + "number": "BIGINT. Employer’s main phone number. Example: (140) 768 12999.", + "extension": "BIGINT. Extension number for the employer's phone. **NULL means no extension provided.**. Example: 0.0." + }, + "naics_code": "BIGINT. North American Industry Classification System (NAICS) code for the employer. Example: 561320.", + "alternate_name": "TEXT. Alternate name or trade name of the employer. **NULL means no alternate name provided.**. Example: Lattice." + } + }, + "labor_certification_applications|attorney|attorney_profile": { + "column_meaning": "JSONB column. Captures identifying details of the attorney, including name, address, firm, and court affiliation.", + "fields_meaning": { + "name": { + "first": "TEXT. First name of the attorney. Example: Maria.", + "middle": "TEXT. Middle initial of the attorney. **NULL means no middle initial provided.**. Example: T..", + "last": "TEXT. Last name of the attorney. Example: Schneider." + }, + "address": { + "line1": "TEXT. Primary address line for the attorney. Example: 302 West Third Street.", + "line2": "TEXT. Secondary address line for the attorney. **NULL means no secondary address provided.**. Example: Suite 710.", + "city": "TEXT. City of the attorney. Example: Cincinnati.", + "state": "TEXT. State of the attorney. Example: OH.", + "zip": "TEXT. Postal code for the attorney. Example: 45202.", + "country": "TEXT. Country of the attorney. Possible values: CANADA, UNITED STATES OF AMERICA.", + "province": "TEXT. Province or region of the attorney. **NULL means no province specified.**. Example: Ontario." + }, + "contact": { + "phone": "BIGINT. Phone number for the attorney. Example: 15133818472.0.", + "extension": "BIGINT. Extension number for the attorney’s phone. **NULL means no extension provided.**. Example: 7375.0." + }, + "firm": "TEXT. Law firm name or business name. Example: Musillo Unkenholt, LLC..", + "highest_court": { + "state": "TEXT. State of the highest court the attorney is registered with. **NULL means no state provided.**. Example: OH.", + "name": "TEXT. Name of the highest court the attorney is registered with. **NULL means no court name provided.**. Example: Supreme Court of Ohio." + } + } + }, + "labor_certification_applications|prevailing_wage|wage_details": { + "column_meaning": "JSONB column. Summarizes offered and prevailing wage details including ranges, units, and source information.", + "fields_meaning": { + "offered_wage": { + "from": "TEXT. Minimum wage rate of pay from. Example: $35.42.", + "to": "REAL. Maximum wage rate of pay to. **NULL means maximum wage not provided.**. Example: 150000.0.", + "unit": "enum_wage_unit_of_pay. Unit of pay for the wage rate (e.g., Hour, Week, Year). Possible values: Hour, Week, Year." + }, + "prevailing_wage": { + "value": "TEXT. Prevailing wage value. Example: USD 35.42.", + "unit": "enum_pw_unit_of_pay. Unit of pay for the prevailing wage. Possible values: Hour, Month, Week, Year.", + "level": "enum_pw_wage_level. 
Wage level classification for the position. Possible values: I, II, III, IV.", + "oes_year": "TEXT. OES year for the prevailing wage survey. **NULL means no OES span provided.**. Possible values: 7/1/2023 - 6/30/2024." + }, + "alternate_source": { + "source_type": "enum_pw_other_source. Source of the prevailing wage information. Possible values: CBA, Survey.", + "source_year": "BIGINT. Year of the alternative wage source. **NULL means no year specified.**. Possible values: 2016.0, 2017.0, 2022.0, 2023.0.", + "publisher": "TEXT. Name of the prevailing wage survey publisher. Example: Willis Towers Watson.", + "survey_title": "TEXT. Title of the prevailing wage survey. Example: Gen. Industry Professional (Technical and Operations) Report." + } + } + }, + "labor_certification_applications|employer_poc|poc_contact_info": { + "column_meaning": "JSONB column. Encapsulates personal and location-based contact details for the employer’s point of contact.", + "fields_meaning": { + "name": { + "first": "TEXT. First name of the employer’s point of contact. Example: Saloni.", + "middle": "TEXT. Middle name of the employer’s point of contact. **NULL means no middle name provided.**. Example: R..", + "last": "TEXT. Last name of the employer’s point of contact. Example: Kaul." + }, + "title": "TEXT. Job title of the employer’s point of contact. Example: Director of Immigration.", + "address": { + "line1": "TEXT. Primary address line for the employer’s point of contact. Example: 2301 Lucien Way.", + "line2": "TEXT. Secondary address line for the employer’s point of contact. **NULL means no secondary address provided.**. Example: Suite 360.", + "city": "TEXT. City of the employer’s point of contact. Example: Maitland.", + "state": "TEXT. State of the employer’s point of contact. Example: FL.", + "zip": "TEXT. Postal code of the employer’s point of contact. Example: 32751.", + "country": "TEXT. Country of the employer’s point of contact. Possible values: UNITED STATES OF AMERICA.", + "province": "TEXT. Province or region of the employer’s point of contact. **NULL means no province specified.**. Possible values: CALIFORNIA, California, GEORGIA, New York, TEXAS, TX." + }, + "phone": { + "number": "TEXT. Phone number for the employer’s point of contact. Example: 14076812999.", + "extension": "BIGINT. Extension number for the employer’s point of contact’s phone. **NULL means no extension provided.**. Example: 0.0." 
+ } + } + } +} \ No newline at end of file diff --git a/labor_certification_applications/labor_certification_applications_kb.jsonl b/labor_certification_applications/labor_certification_applications_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4dc432fff58f2ec068ecd105db831e59e4541756 --- /dev/null +++ b/labor_certification_applications/labor_certification_applications_kb.jsonl @@ -0,0 +1,60 @@ +{"id": 1, "knowledge": "Visa Classification Types", "description": "Illustrates the different types of work visas available in the employment-based immigration system.", "definition": "The visa classification system includes several types: H-1B (specialty occupations requiring theoretical and practical application of specialized knowledge), H-1B1 Singapore (Singapore nationals in specialty occupations), H-1B1 Chile (Chilean nationals in specialty occupations), and E-3 Australian (Australian nationals in specialty occupations).", "type": "value_illustration", "children_knowledge": -1} +{"id": 2, "knowledge": "H-1B Dependency Status", "description": "Illustrates what it means for an employer to be H-1B dependent.", "definition": "An employer is considered H-1B dependent when the proportion of H-1B workers relative to the total workforce exceeds specific thresholds: 15% for employers with more than 50 employees, 8 workers for employers with 26-50 employees, or 4 workers for employers with 25 or fewer employees. H-1B dependent employers face additional attestation requirements. This status is tracked as part of the application data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 3, "knowledge": "Willful Violator Status", "description": "Illustrates what constitutes a willful violator designation for employers.", "definition": "A willful violator is an employer who has been found by the Department of Labor to have committed a willful failure to meet LCA conditions or made a misrepresentation of material fact on an LCA within the past 5 years. Such employers face additional attestation requirements similar to H-1B dependent employers. This status is tracked as part of the application data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 4, "knowledge": "Full-Time Position Indicator", "description": "Illustrates what defines a full-time position in visa applications.", "definition": "A full-time position typically means employment for at least 35 hours per week, though specific definitions may vary by employer. Non-full-time positions may include part-time roles that require fewer hours per week. This indicator is recorded as part of the application data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 5, "knowledge": "Prevailing Wage Levels", "description": "Illustrates the meaning of the four wage levels in the prevailing wage system.", "definition": "Prevailing wage levels range from I to IV, representing increasingly higher wages based on skill, experience, education, and responsibility: Level I (entry-level), Level II (qualified), Level III (experienced), and Level IV (fully competent). These levels are determined based on the position's requirements compared to the occupational standard. 
Wage level information is included in the wage determination data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 6, "knowledge": "NAICS Code Purpose", "description": "Illustrates the purpose and meaning of NAICS codes in the context of visa applications.", "definition": "The North American Industry Classification System (NAICS) code is a 6-digit code that identifies the employer's primary industry sector. For example, code 541511 represents 'Custom Computer Programming Services', while 561320 represents 'Temporary Help Services'. These codes help categorize employers by industry for statistical and regulatory purposes. The NAICS code is part of the employer's profile information.", "type": "value_illustration", "children_knowledge": -1} +{"id": 7, "knowledge": "SOC Code Framework", "description": "Illustrates the Standard Occupational Classification system used in labor certification applications.", "definition": "The Standard Occupational Classification (SOC) code is a standardized numbering system that identifies and classifies occupations. For example, 15-1253.00 represents 'Software Quality Assurance Analysts and Testers', while 29-1141.00 represents 'Registered Nurses'. These codes help ensure that foreign workers are properly classified and paid according to their occupational category. The SOC code is included in the job information for each application.", "type": "value_illustration", "children_knowledge": -1} +{"id": 8, "knowledge": "Wage Payment Units", "description": "Illustrates the different units of payment used in wage reporting.", "definition": "Wage payment units indicate how wages are calculated and paid, with common units including: Hour (payment calculated per working hour), Week (payment calculated as a weekly salary), Month (payment calculated as a monthly salary), and Year (payment calculated as an annual salary). Different units may be used depending on industry norms and position types. Wage unit information is included in the wage determination data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 9, "knowledge": "Attorney Representation Status", "description": "Illustrates the significance of attorney representation in visa applications.", "definition": "Attorney representation status indicates whether an employer has legal counsel for the visa application process. When present, it shows the employer has retained qualified legal assistance for navigating immigration regulations. When absent, it indicates the employer is self-represented, handling the application process internally without specialized legal counsel. This status is tracked as part of the application process.", "type": "value_illustration", "children_knowledge": -1} +{"id": 10, "knowledge": "Public Disclosure Options", "description": "Illustrates the choices employers have regarding public disclosure of their business information.", "definition": "Employers can choose whether their business information is publicly disclosed in the visa application process. 'Disclose Business' indicates the employer consents to having their information publicly available, while other options may restrict disclosure to protect confidential business information. 
Disclosure choices are recorded as part of the application data.", "type": "value_illustration", "children_knowledge": -1} +{"id": 11, "knowledge": "Wage Differential Rate (WDR)", "description": "Calculates the percentage difference between offered wage and prevailing wage.", "definition": "WDR = ((Offered Wage - Prevailing Wage) / Prevailing Wage) × 100%. Both offered and prevailing wages are converted to the same payment unit (hourly, weekly, or annually) before calculation. Wage information is included in the wage determination data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Application Processing Time (APT)", "description": "Calculates the number of days between application receipt and decision.", "definition": "APT is the number of days between the date the application is received and the date a decision is made. Both dates are recorded as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Approval Rate (AR)", "description": "Calculates the percentage of certified applications out of total applications.", "definition": "AR = (Number of Certified Applications / Total Number of Applications) × 100%. Application status is tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Attorney Case Load (ACL)", "description": "Measures the number of cases handled by each attorney.", "definition": "ACL is the total number of cases handled by each attorney. Attorney assignment is tracked as part of the application process.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Employer H-1B Concentration (EHC)", "description": "Measures the proportion of H-1B applications submitted by an employer relative to all applications.", "definition": "EHC = (Number of H-1B Applications by Employer / Total Number of H-1B Applications) × 100%. Visa classification and employer information are tracked in the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Worksite Density (WD)", "description": "Measures the number of visa workers per worksite.", "definition": "WD = (Number of Workers at Worksite / Number of Worksites). Worksite and worker information are recorded as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 17, "knowledge": "Wage Premium Rate (WPR)", "description": "Measures how much more an employer pays compared to minimum requirements.", "definition": "WPR = ((Offered Wage - Minimum Required Wage) / Minimum Required Wage) × 100%. Minimum required wage is typically the prevailing wage or higher depending on regulations. Wage information is included in the wage determination data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Application Success Rate (ASR)", "description": "Measures the success rate of applications by employer.", "definition": "ASR = (Number of Certified Applications by Employer / Total Applications by Employer) × 100%. Employer and application status are tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "State Application Distribution (SAD)", "description": "Measures the percentage of applications in each state.", "definition": "SAD = (Number of Applications in State / Total Number of Applications) × 100%. 
State information is included in the worksite data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "Industry Application Distribution (IAD)", "description": "Measures the percentage of applications in each industry based on NAICS codes.", "definition": "IAD = (Number of Applications in Industry / Total Number of Applications) × 100%. Industry is determined by the employer's NAICS code.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Occupational Demand Index (ODI)", "description": "Measures the relative demand for specific occupations based on SOC codes.", "definition": "ODI = (Number of Applications for SOC Code / Average Applications per SOC Code). SOC code information is included in the job data for each application.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Cost of Living Adjusted Wage (CLAW)", "description": "Adjusts offered wage based on city's cost of living index.", "definition": "CLAW = (Offered Wage / Cost of Living Index) × 100, where the Cost of Living Index is a standardized index with 100 as the national average. Wage and location information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Seasonal Application Index (SAI)", "description": "Measures the concentration of applications in different months of the year.", "definition": "SAI = (Number of Applications in Month / Average Monthly Applications). Application receipt dates are tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 24, "knowledge": "Attorney Success Rate (ASR_Attorney)", "description": "Measures the certification rate of applications handled by each attorney.", "definition": "ASR = (Number of Certified Applications by Attorney / Total Applications by Attorney) × 100%. Attorney assignment and application status are tracked as part of the application process.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 25, "knowledge": "Employer Retention Rate (ERR)", "description": "Measures the percentage of continuation applications by an employer.", "definition": "ERR = (Number of Continuation Applications / Total Number of Applications by Employer) × 100%. Employer and application type are tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "External Counsel Rate (ECR)", "description": "Measures the percentage of applications using external legal counsel.", "definition": "ECR = (Number of Applications with Attorney / Total Number of Applications) × 100%. Attorney assignment is tracked as part of the application process.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 27, "knowledge": "Worksite Diversity Index (WDI)", "description": "Measures the diversity of an employer's worksites across different geographic areas.", "definition": "WDI = 1 - sum((Number of Workers at Worksite i / Total Workers)^2) for all worksites. Worksite and worker information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Wage Competitiveness Index (WCI)", "description": "Measures how competitive an employer's offered wage is compared to industry average.", "definition": "WCI = (Employer Average Offered Wage / Industry Average Offered Wage). 
Wage and industry information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 29, "knowledge": "Visa Class Distribution (VCD)", "description": "Measures the percentage distribution of different visa classifications.", "definition": "VCD = (Number of Applications for Visa Class / Total Number of Applications) × 100%. Visa classification is tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 30, "knowledge": "Employer Scale Indicator (ESI)", "description": "Quantifies the relative size of an employer based on number of visa applications.", "definition": "ESI = (Employer Number of Applications / Average Applications per Employer). Employer and application information are tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 31, "knowledge": "Industry Wage Differential (IWD)", "description": "Measures the average wage differential in each industry.", "definition": "IWD = (Sum of Wage Differential Rates in Industry / Number of Applications in Industry). Wage and industry information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": [11]} +{"id": 32, "knowledge": "Processing Efficiency Ratio (PER)", "description": "Measures the efficiency of application processing relative to average times.", "definition": "PER = (Average Processing Time / Application Processing Time), where values above 1 indicate faster than average processing. Processing time is calculated from application receipt and decision dates.", "type": "calculation_knowledge", "children_knowledge": [12]} +{"id": 33, "knowledge": "Geographic Concentration Index (GCI)", "description": "Measures the geographic concentration of visa applications.", "definition": "GCI = sum((Applications in State i / Total Applications)^2) for all states. State and application information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": [19]} +{"id": 34, "knowledge": "Attorney Specialization Index (ASI)", "description": "Measures how specialized an attorney is in specific visa types.", "definition": "ASI = 1 - (Number of Different Visa Types Handled by Attorney / Total Number of Visa Types), where values closer to 1 indicate higher specialization. Attorney and visa type information are tracked as part of the application process.", "type": "calculation_knowledge", "children_knowledge": [29]} +{"id": 35, "knowledge": "Industry Concentration Ratio (ICR)", "description": "Measures the concentration of applications across industries.", "definition": "ICR = sum((Applications in Industry i / Total Applications)^2) for all industries. Industry and application information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": [20]} +{"id": 36, "knowledge": "Wage Growth Rate (WGR)", "description": "Measures the annual percentage increase in offered wages for similar positions.", "definition": "WGR = ((Current Year Average Wage - Previous Year Average Wage) / Previous Year Average Wage) × 100%. 
Wage and year information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": [28]} +{"id": 37, "knowledge": "Application Complexity Score (ACS)", "description": "Quantifies the complexity of a visa application based on multiple factors.", "definition": "ACS is calculated as a weighted sum of factors such as H-1B dependency, willful violator status, new employment, and amendment status. Each factor is represented as a binary indicator in the application data.", "type": "calculation_knowledge", "children_knowledge": [2, 3]} +{"id": 38, "knowledge": "Worksite Cost Index (WkCI)", "description": "Compares the cost of living at a worksite relative to the national average.", "definition": "WkCI = (Worksite Cost of Living / National Average Cost of Living) × 100. Worksite and cost of living information are included in the application data.", "type": "calculation_knowledge", "children_knowledge": [22]} +{"id": 39, "knowledge": "Premium Processing Rate (PPR)", "description": "Measures the percentage of applications using premium processing.", "definition": "PPR = (Number of Premium Processing Applications / Total Number of Applications) × 100%. Premium processing status is tracked as part of the application data.", "type": "calculation_knowledge", "children_knowledge": [12]} +{"id": 40, "knowledge": "Premium Wage Position", "description": "Defines positions that offer significantly higher wages than required.", "definition": "A position is considered a premium wage position if the Wage Differential Rate (WDR) exceeds 20%, indicating the employer is offering significantly above the prevailing wage. Wage information is included in the wage determination data.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 41, "knowledge": "Visa Filing Window", "description": "Categorizes the timing of visa application submissions relative to employment start date.", "definition": "The visa filing window represents when applications are submitted relative to the intended employment start date. Applications are categorized as: 'Optimal Window (4-6 Months)' before start date, 'Early Filing' (more than 6 months before), '1-3 Months Before', 'Same Month', or 'After Start Date'. These categories help analyze filing patterns and potential processing challenges. Submission and start dates are tracked as part of the application data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 42, "knowledge": "Visa-Dependent Industry", "description": "Defines industries that heavily rely on foreign workers through visa programs.", "definition": "An industry is considered visa-dependent if the percentage of visa applications relative to the total workforce exceeds 15%, indicating significant reliance on foreign talent. Industry information is determined by the employer's NAICS code.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 43, "knowledge": "Skill Shortage Occupation", "description": "Identifies occupations with demonstrated shortages of qualified U.S. workers.", "definition": "Occupations with significant shortages of qualified U.S. workers are identified by a Wage Differential Rate (WDR) exceeding 10% and a statistically significant number of applications. 
These occupations may receive prioritized processing or cap exemptions.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 44, "knowledge": "Geographic Application Hotspot", "description": "Identifies geographic areas with concentrated visa application activity.", "definition": "A geographic area (such as a state, metropolitan area, or city) is considered a hotspot if the volume of visa applications is at least 50% higher than the national average when adjusted for population. Location information is included in the worksite data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 45, "knowledge": "Attorney Specialization Category", "description": "Categorizes attorneys based on their visa application specialization patterns.", "definition": "Attorneys are classified as Specialists (over 80% of cases in one visa type), Generalists (even distribution across visa types), or Hybrid Practitioners (significant experience in 2-3 visa categories). Attorney and visa type information are tracked as part of the application process.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 46, "knowledge": "Employer Size Classification", "description": "Categorizes employers based on their scale of visa usage.", "definition": "Employers are classified as Small-scale users (fewer than 5 applications annually), Medium-scale users (5-25 applications annually), or Large-scale users (more than 25 applications annually). Employer and application information are tracked as part of the application data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 47, "knowledge": "Application Status Categories", "description": "Defines the possible status outcomes for visa applications.", "definition": "Possible outcomes for a visa application include: Certified (approved), Denied (rejected), Withdrawn (voluntarily withdrawn by applicant), and Certified-Withdrawn (approved but later withdrawn). Application status is tracked as part of the application data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 48, "knowledge": "Wage Competitiveness Tiers", "description": "Categorizes wage offers based on their competitiveness in the labor market.", "definition": "Wage offers are classified as Below-Market (WDR < 0%), Market-Competitive (0% ≤ WDR ≤ 10%), or Premium (WDR > 10%), indicating how employers position their compensation relative to minimum requirements. Wage information is included in the wage determination data.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 49, "knowledge": "Occupational Specialization Levels", "description": "Categorizes positions based on their level of specialization.", "definition": "Job positions are classified as General (broad knowledge required), Specialized (focused expertise in one area), or Highly Specialized (deep expertise in niche areas), typically correlated with wage level and experience requirements. Job and wage information are included in the application data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 50, "knowledge": "Seasonal Application Pattern", "description": "Defines patterns in visa application timing throughout the year.", "definition": "Seasonal patterns in visa application submissions include Peak Season (periods with higher submission rates), Off-Peak Season (lower submission rates), and Transition Periods (moderate activity between peaks). 
Application receipt dates are tracked as part of the application data.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 51, "knowledge": "Application Complexity Tiers", "description": "Categorizes applications based on their procedural complexity.", "definition": "Applications are classified as Standard (straightforward cases), or Complex (cases involving H-1B dependency, willful violator status, or special attestations). Complexity factors are tracked as part of the application data.", "type": "domain_knowledge", "children_knowledge": [2, 3]} +{"id": 52, "knowledge": "Attorney Performance Rating", "description": "Categorizes attorneys based on their application success rates.", "definition": "Attorneys are rated as High Performers (over 95% approval rate), Standard Performers (85-95% approval rate), or Underperformers (below 85% approval rate), based on their application outcomes.", "type": "domain_knowledge", "children_knowledge": [24]} +{"id": 53, "knowledge": "Worksite Geographic Diversity", "description": "Categorizes employers based on the geographic spread of their worksites.", "definition": "Employers are classified as Single-Location (all applications for one location), Regional (multiple locations in one region), or National (multiple regions), reflecting operational breadth. Worksite information is included in the application data.", "type": "domain_knowledge", "children_knowledge": [27]} +{"id": 54, "knowledge": "Employer Dependency Level", "description": "Categorizes the degree to which employers rely on visa programs.", "definition": "Employers are classified as Low Dependency (less than 5% of workforce), Moderate Dependency (5-15%), or High Dependency (over 15%). H-1B dependent status typically applies to the high dependency category.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 55, "knowledge": "Position Scarcity Index", "description": "Categorizes positions based on the scarcity of qualified candidates.", "definition": "Positions are classified as Abundant (many qualified candidates), Moderate Scarcity (limited candidates), or High Scarcity (very few candidates), often correlated with wage premiums and processing priorities.", "type": "domain_knowledge", "children_knowledge": [21]} +{"id": 56, "knowledge": "Premium Wage Employer", "description": "Defines employers that consistently offer above-market wages.", "definition": "An employer is considered a premium wage employer if their average Wage Differential Rate (WDR) across all applications exceeds 15%, indicating a consistent strategy of offering premium compensation.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 57, "knowledge": "Application Delay Risk Profile", "description": "Categorizes applications based on their risk of processing delays.", "definition": "Applications are classified as Low Risk (no special factors), Moderate Risk (1-2 complexity factors), or High Risk (3 or more complexity factors or from willful violators), based on the Application Complexity Score (ACS).", "type": "domain_knowledge", "children_knowledge": [37]} +{"id": 58, "knowledge": "Continuous Filing Employer", "description": "Defines employers with regular, ongoing visa application activity.", "definition": "An employer is considered a continuous filer if they submit visa applications in at least 9 months of the year, indicating consistent reliance on foreign worker programs.", "type": "domain_knowledge", "children_knowledge": [23]} +{"id": 59, "knowledge": "Legal Representation 
Efficacy", "description": "Evaluates the effectiveness of legal representation in visa applications.", "definition": "Legal representation efficacy is measured by comparing the approval rate of applications with attorney representation to those without. High Impact is defined as over 10% higher approval rate, Moderate Impact as 5-10% higher, and Minimal Impact as less than 5% difference.", "type": "domain_knowledge", "children_knowledge": [13, 24]} +{"id": 60, "knowledge": "Alternative Prevailing Wage Sources", "description": "Enumerates the acceptable alternative sources for prevailing wage determinations.", "definition": "Acceptable sources for prevailing wage determinations include Occupational Employment Statistics (OES), Collective Bargaining Agreements (CBA), and independent published wage surveys that meet Department of Labor requirements. Source information is included in the wage determination data.", "type": "domain_knowledge", "children_knowledge": -1} \ No newline at end of file diff --git a/labor_certification_applications/labor_certification_applications_schema.txt b/labor_certification_applications/labor_certification_applications_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..947157596ed46f3e37c231491efb10c0be427a6b --- /dev/null +++ b/labor_certification_applications/labor_certification_applications_schema.txt @@ -0,0 +1,208 @@ +CREATE TABLE "employer" ( +corphandle text NOT NULL, +zipref text NOT NULL, +employer_contact_info jsonb NULL, + PRIMARY KEY (corphandle, zipref) +); + +First 3 rows: +corphandle zipref employer_contact_info +------------------------------------ -------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +Avant Healthcare Professionals, LLC. 32751 {'phone': {'number': None, 'extension': None}, 'address': {'city': 'Maitland', 'line1': '2301 Lucien Way', 'line2': 'Suite 360', 'state': 'FL', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'naics_code': 561320, 'alternate_name': None} +TECHIE BRAINS INCORPORATED 61761 {'phone': {'number': 19174766150, 'extension': None}, 'address': {'city': 'NORMAL', 'line1': '1713 FORT JESSE ROAD', 'line2': 'SUIT C', 'state': 'IL', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'naics_code': 541511, 'alternate_name': None} +ValueMomentum, Inc. 08854 {'phone': {'number': 19087550226, 'extension': None}, 'address': {'city': 'Piscataway', 'line1': '220 Old New Brunswick Rd.', 'line2': None, 'state': 'NJ', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'naics_code': 54151, 'alternate_name': None} +... 
+ + +CREATE TABLE "employer_poc" ( +contactmail text NOT NULL, +firmlink text NOT NULL, +firmzip text NOT NULL, +poc_contact_info jsonb NULL, + PRIMARY KEY (contactmail), + FOREIGN KEY (firmlink) REFERENCES employer(corphandle), + FOREIGN KEY (firmlink) REFERENCES employer(zipref), + FOREIGN KEY (firmzip) REFERENCES employer(corphandle), + FOREIGN KEY (firmzip) REFERENCES employer(zipref) +); + +First 3 rows: +contactmail firmlink firmzip poc_contact_info +----------------------------- ------------------------------------ --------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +skaul@avanthealthcare.com Avant Healthcare Professionals, LLC. 32751 {'name': {'last': 'Kaul', 'first': 'Saloni', 'middle': None}, 'phone': {'number': '14076812999', 'extension': None}, 'title': 'Director of Immigration', 'address': {'zip': '32751', 'city': 'Maitland', 'line1': '2301 Lucien Way', 'line2': 'Suite 360', 'state': 'FL', 'country': 'UNITED STATES OF AMERICA', 'province': None}} +naveen@techiebrains.com TECHIE BRAINS INCORPORATED 61761 {'name': {'last': 'MADISETTY', 'first': 'NAVEEN', 'middle': None}, 'phone': {'number': '19174766150', 'extension': None}, 'title': 'PRESIDENT', 'address': {'zip': '61761', 'city': 'Normal', 'line1': '3602 como ct', 'line2': None, 'state': 'IL', 'country': 'UNITED STATES OF AMERICA', 'province': None}} +cyrus.noria@valuemomentum.com ValueMomentum, Inc. 08854 {'name': {'last': 'Noria', 'first': 'Cyrus', 'middle': 'R.'}, 'phone': {'number': '19087550105', 'extension': None}, 'title': 'Sr. Director - HR', 'address': {'zip': '08854', 'city': 'PISCATAWAY', 'line1': '220 OLD NEW BRUNSWICK RD.', 'line2': None, 'state': 'NJ', 'country': 'UNITED STATES OF AMERICA', 'province': None}} +... 
+ + +CREATE TABLE "cases" ( +filekey text NOT NULL, +statustag text NULL, +recvday text NULL, +decisionday text NULL, +origcertday date NULL, +visacls USER-DEFINED NULL, +jobtag text NULL, +soccd text NULL, +soctitle text NULL, +fulltimeind text NULL, +beginday text NULL, +endday text NULL, +headct bigint NULL, +newemp bigint NULL, +contemp bigint NULL, +changeprev bigint NULL, +concurrentlynew bigint NULL, +changefirm bigint NULL, +amendflag bigint NULL, +siteslots bigint NULL, +agreelc text NULL, +h1bdep USER-DEFINED NULL, +willfulv USER-DEFINED NULL, +supporth USER-DEFINED NULL, +statbasis text NULL, +appa text NULL, +pubdisc text NULL, +homefirm text NOT NULL, +homezip text NOT NULL, +preplink text NULL, + PRIMARY KEY (filekey), + FOREIGN KEY (homefirm) REFERENCES employer(corphandle), + FOREIGN KEY (homefirm) REFERENCES employer(zipref), + FOREIGN KEY (homezip) REFERENCES employer(corphandle), + FOREIGN KEY (homezip) REFERENCES employer(zipref), + FOREIGN KEY (preplink) REFERENCES preparer(prepmail) +); + +First 3 rows: +filekey statustag recvday decisionday origcertday visacls jobtag soccd soctitle fulltimeind beginday endday headct newemp contemp changeprev concurrentlynew changefirm amendflag siteslots agreelc h1bdep willfulv supporth statbasis appa pubdisc homefirm homezip preplink +------------------ ----------- ---------- ------------- ------------- -------------- ---------------------------- ---------- ----------------------------------------------- ------------- ---------- -------------- -------- -------- --------- ------------ ----------------- ------------ ----------- ----------- --------- -------- ---------- ---------- ----------------------------- ------ ----------------- ------------------------------------ --------- ----------------------------- +I-200-23355-584296 Certified 2023/12/21 2023-12-29 H-1B Registered Nurse 29-1141.00 Registered Nurses N 21/12/2023 2026 20th Dec. 1 1 0 0 0 0 0 1 Yes No No Disclose Business Avant Healthcare Professionals, LLC. 32751 tyler.peace@muimmigration.com +I-203-23355-583713 certified 2023/12/21 2023-12-29 E-3 Australian Infrastructure Engineer 15-1244.00 Network and Computer Systems Administrators 21/12/2023 2025 20th Dec. 1 0 1 0 0 0 0 2 Yes Disclose Business TECHIE BRAINS INCORPORATED 61761 +I-200-23355-584402 Certified 2023/12/21 2023-12-29 H-1B Sr. Lead - Quality Assurance 15-1253.00 Software Quality Assurance Analysts and Testers N 01/04/2024 2027 31th Mar. 1 0 1 0 0 0 0 2 Yes Yes No Yes $60,000 or higher annual wage Disclose Business ValueMomentum, Inc. 08854 subin@cyrusmehta.com +... + + +CREATE TABLE "preparer" ( +prepmail text NOT NULL, +preplname text NULL, +prepfname text NULL, +prepmi text NULL, +prepbiz text NULL, + PRIMARY KEY (prepmail) +); + +First 3 rows: +prepmail preplname prepfname prepmi prepbiz +----------------------------- ----------- ----------- -------- ------------------------------------- +tyler.peace@muimmigration.com Peace Tyler J Musillo Unkenholt, LLC. +subin@cyrusmehta.com Son Subin Cyrus D. Mehta & Partners PLLC +khan@kramerlevin.com Han Kristy Kramer Levin Naftalis and Frankel LLP +... 
+ + +CREATE TABLE "case_attorney" ( +docketkey text NOT NULL, +counselmail text NOT NULL, +counselfor USER-DEFINED NULL, + PRIMARY KEY (docketkey, counselmail), + FOREIGN KEY (docketkey) REFERENCES cases(filekey), + FOREIGN KEY (counselmail) REFERENCES attorney(lawmail) +); + +First 3 rows: +docketkey counselmail counselfor +------------------ ----------------------------- ------------ +I-200-23355-584296 tyler.peace@muimmigration.com Yes +I-200-23355-584402 KAITLYN@CYRUSMEHTA.COM Yes +I-200-23355-585360 MDRENNAN@KRAMERLEVIN.COM Yes +... + + +CREATE TABLE "attorney" ( +lawmail text NOT NULL, +attorney_profile jsonb NULL, + PRIMARY KEY (lawmail) +); + +First 3 rows: +lawmail attorney_profile +----------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +tyler.peace@muimmigration.com {'firm': 'Musillo Unkenholt, LLC.', 'name': {'last': 'Schneider', 'first': 'Maria', 'middle': 'T.'}, 'address': {'zip': '45202', 'city': 'Cincinnati', 'line1': '302 West Third Street', 'line2': 'Suite 710', 'state': 'OH', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'contact': {'phone': 15133818472, 'extension': None}, 'highest_court': {'name': 'Supreme Court of Ohio', 'state': 'OH'}} +KAITLYN@CYRUSMEHTA.COM {'firm': 'CYRUS D. MEHTA & PARTNERS PLLC', 'name': {'last': 'Box', 'first': 'Kaitlyn', 'middle': 'Amanda'}, 'address': {'zip': '10004', 'city': 'NEW YORK', 'line1': 'ONE BATTERY PARK PLAZA', 'line2': None, 'state': 'NY', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'contact': {'phone': 12124250555, 'extension': None}, 'highest_court': {'name': 'NEW YORK COURT OF APPEALS', 'state': 'NY'}} +MDRENNAN@KRAMERLEVIN.COM {'firm': 'Kramer Levin Naftalis & Frankel LLP', 'name': {'last': 'DRENNAN', 'first': 'MELISSA', 'middle': 'BELLE'}, 'address': {'zip': '10036', 'city': 'NEW YORK', 'line1': '1177 AVENUE OF THE AMERICAS', 'line2': '23RD FLOOR', 'state': 'NY', 'country': 'UNITED STATES OF AMERICA', 'province': None}, 'contact': {'phone': 12127157554, 'extension': None}, 'highest_court': {'name': 'SUPREME COURT', 'state': 'NY'}} +... 
+ + +CREATE TABLE "case_worksite" ( +dockkey text NOT NULL, +ws_addr1 text NOT NULL, +wscity text NOT NULL, +wsstate text NOT NULL, +wszip text NOT NULL, +wsheads bigint NULL, +wagetrack text NULL, + PRIMARY KEY (dockkey, ws_addr1, wscity, wsstate, wszip), + FOREIGN KEY (dockkey) REFERENCES cases(filekey), + FOREIGN KEY (ws_addr1) REFERENCES worksite(w_addr1), + FOREIGN KEY (ws_addr1) REFERENCES worksite(wcity), + FOREIGN KEY (ws_addr1) REFERENCES worksite(wstate), + FOREIGN KEY (ws_addr1) REFERENCES worksite(wzip), + FOREIGN KEY (wscity) REFERENCES worksite(w_addr1), + FOREIGN KEY (wscity) REFERENCES worksite(wcity), + FOREIGN KEY (wscity) REFERENCES worksite(wstate), + FOREIGN KEY (wscity) REFERENCES worksite(wzip), + FOREIGN KEY (wsstate) REFERENCES worksite(w_addr1), + FOREIGN KEY (wsstate) REFERENCES worksite(wcity), + FOREIGN KEY (wsstate) REFERENCES worksite(wstate), + FOREIGN KEY (wsstate) REFERENCES worksite(wzip), + FOREIGN KEY (wszip) REFERENCES worksite(w_addr1), + FOREIGN KEY (wszip) REFERENCES worksite(wcity), + FOREIGN KEY (wszip) REFERENCES worksite(wstate), + FOREIGN KEY (wszip) REFERENCES worksite(wzip), + FOREIGN KEY (wagetrack) REFERENCES prevailing_wage(trackno) +); + +First 3 rows: +dockkey ws_addr1 wscity wsstate wszip wsheads wagetrack +------------------ ---------------------- ----------- --------- ------- --------- ----------- +I-200-23355-584296 2800 10th Avenue North Billings MT 59101 1 1 +I-203-23355-583713 8300 NORMAN CENTER DR BLOOMINGTON MN 55437 1 2 +I-200-23355-584402 125 E 6th Street Erie PA 16501 1 3 +... + + +CREATE TABLE "worksite" ( +w_addr1 text NOT NULL, +wcity text NOT NULL, +wstate text NOT NULL, +wzip text NOT NULL, +secent text NULL, +secentname text NULL, +waddr_2 text NULL, +wcnty text NULL, + PRIMARY KEY (w_addr1, wcity, wstate, wzip) +); + +First 3 rows: +w_addr1 wcity wstate wzip secent secentname waddr_2 wcnty +---------------------- ----------- -------- ------ -------- ----------------------------- --------- ----------- +2800 10th Avenue North Billings MT 59101 Yes Billings Clinic Health System YELLOWSTONE +8300 NORMAN CENTER DR BLOOMINGTON MN 55437 Yes CVS HEALTH 800 HENNEPIN +125 E 6th Street Erie PA 16501 Yes Erie Indemnity Company ERIE +... + + +CREATE TABLE "prevailing_wage" ( +trackno text NOT NULL, +wage_details jsonb NULL, + PRIMARY KEY (trackno) +); + +First 3 rows: + trackno wage_details +--------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 457 {'offered_wage': {'to': 0, 'from': '$130000.00', 'unit': 'Year'}, 'prevailing_wage': {'unit': 'Year', 'level': 'IV', 'value': 'USD 116,979.00', 'oes_year': None}, 'alternate_source': {'publisher': None, 'source_type': None, 'source_year': None, 'survey_title': None}} + 823 {'offered_wage': {'to': 0, 'from': '$41.00', 'unit': 'Hour'}, 'prevailing_wage': {'unit': 'Hour', 'level': 'II', 'value': 'USD 25.40', 'oes_year': None}, 'alternate_source': {'publisher': None, 'source_type': None, 'source_year': None, 'survey_title': None}} + 250 {'offered_wage': {'to': 0, 'from': '$262499.77', 'unit': 'Year'}, 'prevailing_wage': {'unit': 'Year', 'level': 'IV', 'value': 'USD 101,088.00', 'oes_year': '7/1/2023 - 6/30/2024'}, 'alternate_source': {'publisher': None, 'source_type': None, 'source_year': None, 'survey_title': None}} +... 
diff --git a/livesqlbench_data.jsonl b/livesqlbench_data.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..4117e3d34106e321ef2ba2c2fed7e8bca885f505 --- /dev/null +++ b/livesqlbench_data.jsonl @@ -0,0 +1,600 @@ +{"instance_id": "solar_panel_1", "selected_database": "solar_panel", "query": "How likely is the 'solar plant west davidport' (matching the name regardless of case) to be down when we need it? Give me its system unavailability score, just the number, to four decimal points.", "normal_query": "For the solar plant labeled 'solar plant west davidport' (case-insensitive match), calculate its system unavailability. Display the result as a scalar value, rounded to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "solar_panel_2", "selected_database": "solar_panel", "query": "I need to know the financial hit from plants with recurring warranty issues—the ones whose warranty status is 'claimed' and have had three or more claims logged against them. Can you figure out the total lifetime revenue loss for them, but only count ones where we know their go-live date and degradation? Just assume they all have 15 years left, produce 500,000 kwh a year, and we sell the power at 12 cents. Give me the grand total.", "normal_query": "Calculate the total projected lifetime revenue loss for all plants that are flagged for Warranty Claim Risk. For this calculation, only include plants where the commissioning date and cumulative degradation are known. For the projection, assume a remaining lifetime of 15 years, an average annual energy production of 500,000 kwh, and an energy price of $0.12/kwh. Present the total loss as a single value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_3", "selected_database": "solar_panel", "query": "If we could magically cool the panels for snapshot pv945724 down to 25 degrees celsius, what would its power output be? Give me the temperature-corrected performance in watts, with two decimal points.", "normal_query": "For the snapshot 'pv945724', calculate the temperature-corrected performance. Use a reference temperature of 25°c. Display the result in watts, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "solar_panel_4", "selected_database": "solar_panel", "query": "For the maintenance event pv937101, did the repair cost more than the revenue we lost during the downtime? To figure that out, you'll have to clean up the revenue loss text by stripping out any '$' or ',' characters. Tell me the maintenance cost to revenue impact ratio, just the number, rounded to two decimals.", "normal_query": "What is the maintenance cost to revenue impact ratio for the snapshot 'pv937101'? The calculation requires cleaning the revenue loss text by removing dollar signs and commas to convert it to a numeric value. 
Calculate it and return a single numeric value rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "solar_panel_5", "selected_database": "solar_panel", "query": "How many of our plants are real lemons, both losing more than a quarter of their potential power and being offline for more than one day out of every twenty? Make sure you only use records that have all the numbers needed for the math. Just give me the total count.", "normal_query": "What is the total count of plants that are classified as both an underperforming asset, meaning its performance ratio is less than three-quarters, and a chronic downtime asset, meaning its availability is below nineteen-twentieths? Only include snapshots where all data necessary for the calculations is available and valid. Return a single integer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_6", "selected_database": "solar_panel", "query": "Using the latest data for each plant, find the one that costs the most to run for its size, and tell me how much power it loses internally. I need the system power loss ratio for whichever plant has the biggest operational expenditure index. Give me the number to 4 decimal places, and only consider plants and snapshots with all the necessary and valid data to make the calculation crash-proof.", "normal_query": "For the plant with the highest operational expenditure index based on its most recent snapshot, what is its system power loss ratio, presented to 4 decimal places? Only plants with a known, non-zero power capacity and snapshots with known power values should be considered, and the logic must prevent division-by-zero errors.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "solar_panel_7", "selected_database": "solar_panel", "query": "When our panel busbars are as corroded as they can get, how much does the quality drop? Calculate the average fill factor degradation for all panels in the worst category for corrosion (regardless of case), but only use data where we have both a before and after fill factor. Give me the result to 3 decimal places.", "normal_query": "What is the average fill factor degradation for panels where the busbar corrosion has reached the highest level of severity (case-insensitive)? Only include snapshots where both initial and current fill factors are known. Display the result to 3 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "solar_panel_8", "selected_database": "solar_panel", "query": "When a plant with hjt panels breaks, what's the average cost to fix it? Calculate the mean repair cost for those plants (matching 'hjt' regardless of case), assuming they've been running for two years straight and have a valid, positive mtbf record. 
Give me the final number, rounded to a whole dollar.", "normal_query": "Determine the mean repair cost for plants using the 'hjt' panel type (case-insensitive), assuming a total operational time of 2 years (17520 hours). Only include snapshots with a known and positive mtbf for the calculation. Provide the result rounded to the nearest dollar.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "solar_panel_9", "selected_database": "solar_panel", "query": "When our electrical systems fail, how much money do we lose? Add up all the revenue loss from every incident with an 'electrical integrity failure', making sure to strip the dollar signs and commas from the text to get the total.", "normal_query": "What is the total revenue loss for snapshots where there is an electrical integrity failure? To perform the sum, the revenue loss text must be cleaned by removing dollar signs and commas. Sum up the cleaned revenue loss for these records.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_10", "selected_database": "solar_panel", "query": "After accounting for all the internal power drains, what's the actual juice each plant is sending to the grid right now? Only using snapshots where we know both the power loss and current output, and their combined total isn't zero, give me a list of plant names and their latest effective power output, rounded to two decimal places, with the most powerful plant at the top.", "normal_query": "For each site, calculate the effective power output using the most recent snapshot. Only include snapshots where both power loss and current power output are known, and their sum is not zero to prevent calculation errors. Display the site label and the calculated power in a table, sorted by the effective power in descending order. Show the result to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "solar_panel_11", "selected_database": "solar_panel", "query": "For the plants that are aging terribly—meaning their performance drops by more than 0.5% a year—how long does it typically take to fix them? I need the average mean-time-to-repair for these 'accelerated aging assets'. The age calculation needs to be safe for new plants. Give me the answer in hours, rounded to two decimal places.", "normal_query": "Find the average mean time to repair for all plants classified as accelerated aging assets, defined as those with an Annual Degradation Rate greater than 0.5%. The calculation for the degradation rate must handle cases where the plant's age is zero. Round to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "solar_panel_12", "selected_database": "solar_panel", "query": "How many times have our panels gotten so dirty that they're losing more than three-twentieths of their potential energy? 
Just give me the total count.", "normal_query": "Count the number of snapshots where the power loss from soiling means that for every 200 watts of potential power, more than 30 watts are lost. Return a single integer value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_13", "selected_database": "solar_panel", "query": "Which of our plants are a recurring headache for warranty claims, with more than just a couple of filings? I need a list of sites whose status is 'claimed' (regardless of case). Show their names and how many claims they've had, from most to least.", "normal_query": "List all plants where the number of warranty claims exceeds the typical initial one or two filings, and their warranty status is 'claimed' (case-insensitive). Show the site label and the number of warranty claims. Sort by the number of claims in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "solar_panel_14", "selected_database": "solar_panel", "query": "Among our plants in the toughest, highest-risk locations, what's the worst we've seen dirt and grime impact performance? I need the highest soiling loss index from any site that's in that top risk category. Give me the percentage.", "normal_query": "What is the highest soiling loss index recorded for a plant that is located in one of our designated top-tier environmental risk zones (case-insensitive)? Return the value as a percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_15", "selected_database": "solar_panel", "query": "Let's get a financial forecast for our worst panels, the ones that degrade so fast they'll lose over 14% of their power in 20 years. What's the total projected revenue loss over their remaining 15-year lifespan? Base the calculation on a standard 400,000 mwh annual output and a sale price of $50 per mwh.", "normal_query": "What is the total lifetime revenue loss projection for all plants using panel models that are projected to lose more than 14% of their output over a 20-year lifespan? Assume an average annual energy production of 400,000 mwh, an energy price of $50/mwh, and a remaining lifetime of 15 years for all plants.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_16", "selected_database": "solar_panel", "query": "How much are the different types of panels losing their voltage punch over time? I need you to group by the panel technology, making sure to ignore case, and then figure out the average voltage degradation factor for each. But hey, only use data where we actually have a valid 'before' and 'after' voltage to compare, and make sure the starting voltage isn't zero. List the panel types and their average voltage loss, with the worst ones first.", "normal_query": "For each distinct panel model type, calculate the average voltage degradation factor. 
This calculation should only use snapshots that contain all the necessary voltage data and where the initial voltage reading is a positive number. The panel type should be converted to lowercase before grouping. Display the panel kind and the average degradation factor, sorted by the factor in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "solar_panel_17", "selected_database": "solar_panel", "query": "For the machines that are down more than one day in a 20-day period, what's the average price tag on a single repair? To calculate the mean repair cost, you'll need to figure out how long each machine has been running. Only use data where the mtbf and service time are positive.", "normal_query": "What is the average mean repair cost for assets that are offline more than 5% of the time? The calculation requires the total time in service, which must be derived from the snapshot and go-live dates, and only include snapshots where mtbf and total hours are positive.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_18", "selected_database": "solar_panel", "query": "How many of our plants have a major electrical issue right now? I'm talking about situations where the grounding is shot or the bypass diodes are not running in their normal state. Just give me a count of the unique plants with these problems, and don't worry about the case of the status text.", "normal_query": "Count the number of distinct plants where the electrical integrity is compromised, indicated by either a complete failure of the grounding system or a bypass diode status that is anything other than nominal (checks performed case-insensitively).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "solar_panel_19", "selected_database": "solar_panel", "query": "After accounting for all the power being lost inside the system, what was the actual usable power output for snapshot 'pv945724'? Give me the final number in watts.", "normal_query": "What is the effective power output for snapshot 'pv945724'? Calculate it and return the value in watts.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_20", "selected_database": "solar_panel", "query": "For the panels specifically made by longi (regardless of case), how much has their current output dropped on average? To get a good average, please only use records where you have a valid, positive starting current to compare against. Calculate the mean current degradation factor across all of them.", "normal_query": "What is the average current degradation factor for all panel models from the manufacturer 'longi' (case-insensitive)? 
For an accurate average, include only snapshots that have a valid, positive initial current reading to compare against the current reading.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_1", "selected_database": "solar_panel", "query": "Let's make a special table for problems that need immediate attention, call it `high_risk_alerts`. It needs to store the snapshot id, the alert status, both maintenance and replacement priorities, and when it happened. After creating it, fill it with any alert that's so serious we'd need to send our top people out or order a new part right away. Make sure to find these alerts regardless of case. Also, make sure the snapshot id links back to the main plant record table.", "normal_query": "Create a new table `high_risk_alerts` with columns for the snapshot key, alert state, maintenance priority, replacement priority, and the timestamp of the snapshot. Then, populate it by inserting records for any issue that would require either dispatching a senior engineer or ordering a replacement part before the end of the day (checks must be case-insensitive). Add a foreign key constraint on the snapshot key referencing `plant_record`.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_2", "selected_database": "solar_panel", "query": "I need a handy summary of how our plants are doing right now. Can you create a view called `v_plant_performance_overview`? It should show the plant's name, when the data was taken, how much power it was making, how much sunlight was hitting it, and the cell temperature. Make sure it only shows the very latest data we have for each plant.", "normal_query": "Create a view named `v_plant_performance_overview`. This view should join data from the `plants`, `electrical_performance`, and `environmental_conditions` tables. It must display the site label, snapshot timestamp, power output, plane-of-array irradiance, and cell temperature for the most recent snapshot of each plant.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_3", "selected_database": "solar_panel", "query": "I need a faster way to see yearly energy production. Create a materialized view called `mv_yearly_plant_yield`. It should calculate the total kilowatt-hours produced by each plant for each year and store it, but only use records that actually have a yield value. The view should have the plant's name, the year, and the total yield.", "normal_query": "Create a materialized view named `mv_yearly_plant_yield` which summarizes the total energy yield for each plant for each year. 
It should include the site label, the year, and the total energy yield in kwh, only including records where the energy yield is not null.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_4", "selected_database": "solar_panel", "query": "Let's build a cleaning schedule table. Call it `panel_cleaning_schedule`. It needs a unique ID for each entry, the plant's ID, the date it was last cleaned, and the date it's due next. Then, fill it up for all our plants using the latest cleaning info from their mechanical health reports to calculate the next due date.", "normal_query": "Create a new table `panel_cleaning_schedule` with columns `schedule_id` (Primary Key, Serial), `site_key` (Foreign Key to plants), `last_cleaned_date` (Date), and `next_cleaning_due` (Date). Populate it for all plants, setting `last_cleaned_date` to the most recent `last_clean_date` from `mechanical_condition` and `next_cleaning_due` by adding the `cleaning_cycle_days` to that date.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_5", "selected_database": "solar_panel", "query": "I want a tool to quickly tell me how old a plant is. Can you create a function called `get_plant_age`? You give it a plant's ID, and it should spit out its current age in years.", "normal_query": "Create a function `get_plant_age` that takes a site key as input and returns the age of the plant in years (as a real number) based on its go-live date and the current date.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_6", "selected_database": "solar_panel", "query": "I want a 'hall of fame' for extreme weather events at our plants. Can you make a view called `v_environmental_extremes`? It should find the highest ambient temperature, strongest wind speed, and most intense uv index ever recorded across all sites. For each of these records, show which plant it happened at, what the record-breaking value was, and when it happened.", "normal_query": "Create a view `v_environmental_extremes` which, for each environmental variable, shows the plant site label, the value, and the timestamp for the all-time maximum recorded value. Include ambient temperature, wind speed, and uv index.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_7", "selected_database": "solar_panel", "query": "Let's make a log of all our plants that aren't up to code. Create a table called `compliance_issues` with an id, the plant's id, a space for a description, and the date it was logged. After you create it, go through the main plants list and add an entry for every single one that's failed its compliance checks (ignoring case). 
You can just put 'Initial non-compliance record' for the description.", "normal_query": "Create a new table `compliance_issues` with columns for `issue_id`, `plant_sitekey`, `issue_description`, and `date_logged`. Then, insert a record for every plant that has failed to meet its regulatory standards, based on a case-insensitive check of its compliance flag, using the specific description 'Initial non-compliance record'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_8", "selected_database": "solar_panel", "query": "I need a new place to keep track of our plant's health stats. Can you create a table called `plant_kpi_summary`? It should have columns for the site's id, its age in years, its annual performance drop, and its uptime percentage.", "normal_query": "Create a new table named `plant_kpi_summary` to store key performance indicators. The table should include a key for the site (text, primary key), the plant's age in years (real), its annual degradation rate (real), and its system availability (real).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_9", "selected_database": "solar_panel", "query": "Let's make a quick-look list of the absolute worst problems. Create a view, call it `v_critical_alerts_details`, for every alert that's got the highest possible priority for both a maintenance dispatch and a part replacement. Make sure you find them regardless of case. Show me the plant name, when it happened, and the event count.", "normal_query": "Create a view named `v_critical_alerts_details` that lists the site label, the snapshot timestamp, and the alert count for all snapshots where the issue is so severe it has been assigned the maximum priority level for both maintenance and replacement (checks performed case-insensitively).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "solar_panel_M_10", "selected_database": "solar_panel", "query": "I want to start logging all our repair jobs. Can you set up a new table for me called `maintenance_log`? It needs a unique id for each entry, a reference to the snapshot it's related to, the date of the repair, a description of what was done, and how much it cost. Make sure the snapshot reference actually links to a real record.", "normal_query": "Create a new table `maintenance_log` with columns `log_id` (serial primary key), `snap_reference` (text), `log_date` (date), `action_taken` (text), and `cost` (numeric(10, 2)). Add a foreign key on `snap_reference` to the `plant_record` table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_1", "selected_database": "hulushows", "query": "Let’s check which shows have tons of content across different releases but no written description. Add up their standard content (episodes, clips, etc.) 
across all tiers, keep only the ones with over 500 total, and no annotations. Show each show’s ID, name, and total volume—sorted by volume, highest first.", "normal_query": "I want to identify all Incomplete High-Engagement Titles. Compute the total content volume for each title by summing up standard content quantities across all distribution records. Then check whether the title has any descriptive annotation. Can you only include titles with a high total volume (greater than 500) and no annotations? List each title's ID, name, and total content volume, sorted by volume in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_2", "selected_database": "hulushows", "query": "I want to find shows that show up in three or more different subscription tiers. For each show, can you count how many unique tiers it’s available in? First, keep the ones that are in at least three tiers, and then sort the results from the most widely distributed to the last.", "normal_query": "I want to know all Multitier Syndicated Shows. For each show with at least three tiers, show its unique identifier and the number of tiers it appears in. Sort the results by tier count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_3", "selected_database": "hulushows", "query": "Let’s find out which titles are getting strong user scores even though they don’t have any trailers or clips. I want to look across all content and find the highest user rating among those that don’t offer any visual previews but still include a valid score. Just return that one number, rounded to 2 decimals—it tells us how well these visually sparse titles are performing.", "normal_query": "My goal is to identify the Highly Rated but Visually Empty titles in the catalog. Specifically, I want to calculate the highest user rating among all titles that have no available trailers or clips but still include valid user score data.Give me the maximum user score across these titles, rounded to 2 decimals", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "hulushows_4", "selected_database": "hulushows", "query": "I want to find out how long it's been since each show got any new updates. For each show, check the most recent update date. But if there's no update info, just use the launch date instead. Then, I’d like to see how many days it's been since that date, and treat that as the staleness score. If a show is available in multiple tiers, take the smallest one. Can you show the show ID and the number of days it's been stale? Finally, sort the list so the stalest shows—that is, the ones that haven't been updated in the longest time—come first.", "normal_query": "For each show, I need to measure the Temporal Staleness Index (TSI). Please determine how many days have passed since the show last had any updates. If no update timestamp is available, use the launch date as a fallback. 
I’d like to see the show ID along with its staleness index, and the minimum value of this index across all its distribution tiers. Sort the results so that the shows with the highest staleness appear first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_5", "selected_database": "hulushows", "query": "How many titles are spread across over six nested genre tags and lean more on short clips, including both general clips and film-related clips, than full-length features?", "normal_query": "Count how many shows meet the Over-Fragmented Offering classification in the catalog.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_6", "selected_database": "hulushows", "query": "Let’s all find groups of shows that belong to the same franchise. Can you only include franchises that have at least two shows? For each group, can you show me the franchise ID, how many shows it has, and list the show titles? Also, I need to sort the list so that the biggest franchises with the most shows come first.", "normal_query": "Please find all franchise groups. For each group with at least two shows, list the franchise ID, total show count, and the list of show titles. Sort the results by show count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_7", "selected_database": "hulushows", "query": "I want to find out how many episodes there are on average in each season for every show. Can you look at shows where we know both the total number of episodes and how many seasons they have. For each one, give me the show ID, how many episodes it has, how many seasons, and the average episodes per season. Please skip anything where the season count is missing or zero. Finally, show the ones with the highest average first.", "normal_query": "Please calculate the average number of episodes per season for each show. Can you only include shows with both episode and season counts? For each, list the show ID, total episodes, total seasons, and the episode-to-season ratio. Importantly, exclude entries with missing or zero seasons. Sort results by the ratio in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "hulushows_8", "selected_database": "hulushows", "query": "Let’s figure out what the most frequent top-end maturity rating is across all the shows. Basically, I want to scan all the records, grab the maturity info, and tell me which of those high-end ratings pops up the most. Just return the one that shows up the most often.", "normal_query": "To support catalog analysis, compute the Most Common Peak TV Rating across All Distribution Records. It should consider all available distributiondata, extract their rating information, and determine the single most frequently assigned rating value. 
Give me a single text result representing the most common rating.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_9", "selected_database": "hulushows", "query": "Which franchises are producing the most content? Group shows in the same franchise and add up their episodes. Some episode counts may be text or invalid — after trimming whitespace, parse only digit strings (digits 0–9 only) and treat the rest as zero. Show only franchises with more than 100 total episodes, listing the identifier, number of shows, and total episodes from largest to smallest.", "normal_query": "Generate a Franchise Engagement Summary by grouping shows that belong to the same franchise. The episode count field may be stored as text and can include non-numeric values; after trimming whitespace, parse only digit strings (digits 0–9 only) and treat everything else as zero. Only include franchises whose total number of episodes exceeds 100. For each franchise, provide its identifier, the number of shows it contains, and the combined episode count, sorted by total episodes in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_10", "selected_database": "hulushows", "query": "Let’s see how our shows are spread out across the different subscription plans. For each plan, I want to know how many titles it has and what chunk of the full catalog that is. Just give me the plan name, the total count of media in it, and what percentage of the catalog that represents. Start with the plans that have the biggest share of content.", "normal_query": "Determine the Tier Distribution Ratio to understand how media content is shared across different access levels. First, sum up the total media volume available under each tier. Then compute the overall media total across all tiers. For each tier, calculate its share of the total by dividing the tier’s media volume by the grand total. List the tier ID, tier type, media total, and its Tier Distribution Ratio. Sort the results by the ratio in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "hulushows_11", "selected_database": "hulushows", "query": "Let’s see which franchises are really making waves across different subscription levels. We’re looking for those that have at least 3 shows, and those shows appear across 3 or more tiers. For each of these franchise powerhouses, show me the franchise ID, how many shows they’ve got, and how many tiers they show up in. Sort the list by number of shows to spotlight the most widely spread ones first.", "normal_query": "To evaluate Syndicated Franchise Engagement, we need to check which franchise groups have both a strong show count and wide distribution. For each franchise, count how many shows belong to it and how many unique distribution tiers those shows appear in. These shows should include franchises with at least 3 shows and presence in 3 or more tiers. 
List the franchise ID, number of shows, and number of tiers, ordered by show count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_12", "selected_database": "hulushows", "query": "Let’s dive into the main genre types that keep popping up in our show catalog. I’m only interested in shows labeled as Drama, Comedy, or Animation and Cartoons. For each of those, can you pull together a quick list that includes the show’s ID, its title, and what genre it’s tagged under? Sort the list by title.", "normal_query": "We want to analyze Primary Genre Classification across our show catalog. For this, filter and retrieve all titles that fall under the Drama, Comedy, or Animation and Cartoons categories. For each matching title, show its unique ID, name, and its primary genre type. Sort the results alphabetically by title.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_13", "selected_database": "hulushows", "query": "I want to look at how packed each show’s video library is. Can you pull up a list that shows the total number of video items for each show and group them into three levels? Label them High if they’ve got over 500 videos, Medium if they’re between 200 and 500, and Low if they’re under 200. Let’s sort the list so the shows with the most content show up first, and include the show ID, total count, and the volume level tag.", "normal_query": "For each show, compute its total number of video items and classify it using the Content Volume Level Classification. Return the show ID, total volume, and the resulting volume category, ordered by total volume from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_14", "selected_database": "hulushows", "query": "Which show feels the most crammed with promotional stuff? Just give me the one with the heaviest promo presence overall.", "normal_query": "Find the Maximum Promo Saturation Ratio across all shows in the catalog.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_15", "selected_database": "hulushows", "query": "How many shows land in our usual user-score buckets—Low, Medium, or High? Just give me the total.", "normal_query": "Report the total number of shows whose user scores fall into the standard Low, Medium, or High buckets.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_16", "selected_database": "hulushows", "query": "I want to find shows that show up in three or more different subscription tiers. For each show, can you count how many unique tiers it’s available in? 
First, keep the ones that are in at least three tiers, and then sort the results from the most widely distributed to the last.", "normal_query": "I want to know all Multitier Syndicated Shows. For each show with at least three tiers, show its unique identifier and the number of tiers it appears in. Sort the results by tier count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_17", "selected_database": "hulushows", "query": "Let’s grab the shows where the bigger of their trailer or feature count is over 100. Show the ID, title, and that number, sorted from highest to lowest.", "normal_query": "Find shows whose Peak Media Load is greater than 100. Give me the show ID, title, and the peak value, sorted from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_18", "selected_database": "hulushows", "query": "I want to see how shows rank based on what viewers think. Just group them by how well they’re rated, ignore anything without a proper score, and tell me the show ID, name, how it scored, and which group it ended up in—start from the highest-rated and go down.", "normal_query": "Analyze show-level user ratings to assign each show to its corresponding Episode Rating Band. Only include shows with valid numeric scores. For each show, return its ID, title, user score, and band, sorted from highest to lowest score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 8, "distinct": false, "order": true}} +{"instance_id": "hulushows_19", "selected_database": "hulushows", "query": "Which shows actually have film clips? List the ones with the most film-related clips first. For each show, show the title, how many film clips it has, and a quick flag for Has Clips or No Clips.", "normal_query": "I want to check film-clip availability for each show. For every show, return its ID, title, the number of film-related clips, and a flag saying Has Clips if that count is greater than 0, otherwise No Clips. Sort from highest to lowest film-clip count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_20", "selected_database": "hulushows", "query": "Let’s see which shows are loading up on promo messages. For each one, count availability updates, promo messages, alerts, and expiration notices across the free and member tiers. Only include shows with at least one note, and list them starting with the most.", "normal_query": "Show the Promotional Intensity Summary for each show with at least one note. 
Include the show ID and the total count, sorted descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_1", "selected_database": "hulushows", "query": "Let’s drop in a new show using these exact values: make the ID 900001, set the official name to new-show-canonical, call it New Show Title, link it to series 99999999, tag it to studio 8, and add the note ‘This is a newly added show for fall season release.’ For genres, store a JSON with score 4.25, type show, main genre Science Fiction, and breakdown Science Fiction~Space|Adventure. Once that’s saved, return what you added.", "normal_query": "Add a brand-new show with these exact details: ID 900001, official name new-show-canonical, title New Show Title, series 99999999, studio 8, and the note This is a newly added show for fall season release. For its genre info, save a JSON that has a score 4.25, type show, main genre Science Fiction, and a breakdown Science Fiction~Space|Adventure. After saving, show me the inserted record.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_M_2", "selected_database": "hulushows", "query": "So, which studios are really cranking out the content? Let’s create a function called calculate_studio_activity_index that tells us how many entries a studio has in the system. Just pass in the studio’s ID, and it’ll return the total number of catalog records linked to that studio—even if some titles repeat. Simple enough, right? Oh, and while we’re at it—find the show with ID 54 and update its official name to ‘updated-family-guy’.", "normal_query": "Create a PostgreSQL function called calculate_studio_activity_index that computes the Studio Activity Index and returns the calculated value. The function takes one parameter: the unique identifier of a studio. It calculates the total number of content records that are associated with the given studio in the catalog, counting all entries regardless of whether the titles repeat. The result is an integer representing the count of all such records. Additionally, update the canonical name of a specific show in the catalog. Locate the show using its unique content key, which is 54, and set its canonical name to 'updated-family-guy'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "hulushows_M_3", "selected_database": "hulushows", "query": "Let’s check how much content each subscription gets. Just give me the plan name—like “free” or “subscriber”—and I’ll count all the shows linked to it. Don’t worry about casing or spaces; it should match even if someone types it differently.", "normal_query": "Create a function that returns the number of unique shows available under a given subscription plan like \"free\" or \"subscriber\". Match the plan name in a case-insensitive and trimmed way to ensure accurate mapping. 
Return the total number of linked shows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_4", "selected_database": "hulushows", "query": "Let’s check how many titles belong to a given series. Just pass in a series ID, and we’ll return the total number of titles linked to that series.", "normal_query": "We need to calculate the number of distinct titles that belong to a specific series to support the Series Entry Count metric. Given a series identifier as input, the system should return a single integer representing how many entries are part of that series.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_5", "selected_database": "hulushows", "query": "Let’s see how our shows break down by age-appropriateness—like “TV-Y”, “TV-PG”, etc. Just group them and count how many land in each level, making sure different casing or extra spaces are treated the same.", "normal_query": "Could you help me get a quick overview of how shows are distributed across different TV Rating types? For each rating, return how many shows fall under it, normalizing the rating values by lowercasing and trimming to avoid mismatches.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_6", "selected_database": "hulushows", "query": "I want to know if all shows in a series share the same name? Just use check_series_title_uniformity with the series ID. It returns true if the titles match across the board, false if they don’t.", "normal_query": "A function named check_series_title_uniformity is required. This function determines the Series Title Uniformity Flag for a given series. It checks whether all shows linked to the same series share an identical canonical title. The output is a boolean value—true if all titles match, and false otherwise.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_7", "selected_database": "hulushows", "query": "Let’s figure out which studios have been the busiest. For each one, can you show me how many titles they’ve worked on? Just include the studios that are actually linked to content, and sort the list so the most active ones show up first. I need this saved as a permanent table called studio_catalog_size.", "normal_query": "We need to create a persistent table of all Studio Catalog Size data for our content analysis. Please set up a table called studio_catalog_size that includes each studio’s unique identifier and the total number of titles linked to that studio. The count should be grouped by studio and sorted from the most prolific to the least. 
Please note only include entries that are explicitly associated with a studio.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_8", "selected_database": "hulushows", "query": "Let’s figure out which studios have been the busiest in the catalog and save it in a table called title_count_per_studio. For each one, can you show me their ID, name, and how many shows they’ve worked on? Only count the ones that are actually linked to a studio. We’ll need to pull the studio info by joining the show records with the studio list. Then, sort the results so the studios with the most titles show up first.", "normal_query": "Let’s build a persistent table called title_count_per_studio to analyze Title Count per Studio for catalog assessment. This table should include each studio’s unique ID, its canonical name, and the number of titles linked to it. Only include entries where a valid studio association exists. The result must be grouped by studio and sorted so the most prolific studios appear first. Join is required between the show catalog and the studio registry. The output will be a structured table listing studio ID, studio name, and how many titles are attributed to each.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_9", "selected_database": "hulushows", "query": "Set up a permanent table called avg_title_length_per_studio so we can track how long each studio’s show titles usually are. It should include which studio it is and the average number of characters in the titles of its shows. We’re only defining the structure for avg_title_length_per_studio right now—no data yet.", "normal_query": "Please create a permanent table named avg_title_length_per_studio to track the average length of show titles per production studio. The table must have two columns: (1) the studio’s unique ID and (2) the average number of characters in titles of shows linked to that studio. This step only defines the schema for avg_title_length_per_studio—do not insert any data.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "hulushows_M_10", "selected_database": "hulushows", "query": "Let’s check how busy our release schedule was in a particular year. I need a function that takes in a year and tells me how many shows were launched during that time. It should go through the catalog and count only the shows whose launch dates fall in that year, but only for test titles with srkeys 900001 and 900002. Please don’t include the rest of the system’s data. The result should just be a number showing how many of those selected titles came out in that year.", "normal_query": "Create a function named get_launch_count_by_year that computes the Launch Year Distribution for a specific year. This function analyzes the release history by counting how many titles were launched in the specified year. It operates over the catalog of shows, using each show's recorded launch timestamp, and filters to only include test data with srkeys in (900001, 900002). 
The output is a single integer indicating the number of titles launched in that year.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_1", "selected_database": "cybermarket_pattern", "query": "Give me all platforms sorted by their risk score, most dangerous on top and show 4 digits.", "normal_query": "List each marketplace with its Marketplace Risk Score (MRS), rounded to 4 decimal places, highest first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_M_1", "selected_database": "cybermarket_pattern", "query": "Mark every seller who's currently being investigated or getting a lot of attention from authorities as “High” on the compliance scale, leave the already-High ones alone, and give me the IDs that changed.", "normal_query": "Set the compliance category to “High” for all sellers with an active investigation or high attention from authorities, skipping those already at “High”. Return the IDs of the sellers that were updated.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_2", "selected_database": "cybermarket_pattern", "query": "Add a daily review entry for each sale the model rates over 70% fraud risk and doesn't already have one.", "normal_query": "Create a daily review entry for every transaction with model-assessed fraud probability above 70% that currently has no review entry.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_3", "selected_database": "cybermarket_pattern", "query": "Purge the top-priority alert cases that are resolved and whose next review date is over 180 days old.", "normal_query": "Delete alert cases at the highest escalation level that are resolved and have a next review date more than 180 days ago.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_4", "selected_database": "cybermarket_pattern", "query": "Save the current list of sites that meet the security rule, along with their computed rating, into a fresh archive—replace any prior archive.", "normal_query": "Archive the current list of Secure Platforms together with their Marketplace Risk Score, replacing any existing archive if present.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_2", "selected_database": "cybermarket_pattern", "query": "Split shoppers into three risk-per-dollar groups; for each group, show how many shoppers there are, what fraction of their orders go across countries, and how 
often their sessions look highly/medium/low hidden.", "normal_query": "Group buyers into three buckets based on Buyer Risk Dollar Ratio; for each bucket, return the buyer count, the share of their transactions that are cross-border, and the distribution of session anonymity (High/Medium/Low).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_3", "selected_database": "cybermarket_pattern", "query": "Give me a list of sellers with their transaction flow scores, plus details about how complicated their shipping networks are.", "normal_query": "List vendors along with their Platform Liquidity Rate (PLR), including metrics related to Shipping Route Complexity.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_4", "selected_database": "cybermarket_pattern", "query": "Give me how fast each session processed threats, and the levels of login verification for buyers.", "normal_query": "Provide Threat Handling Rate (THR) for each security session, ordered from highest to lowest. Additionally, include metrics related to Buyer Authentication Levels.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_5", "selected_database": "cybermarket_pattern", "query": "I want to know the keyword-hitting values for all customer and internal chats to identify high-risk patterns. Round to 3 decimal places and show in descending order", "normal_query": "Calculate Suspicion Signal Density (SSD) for every communication thread, rounded to 3 decimal places and shown in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_M_5", "selected_database": "cybermarket_pattern", "query": "Update table statistics and query plans for the vendors table, focusing on improving efficiency-related query performance.", "normal_query": "Analyze the vendors table to refresh statistics for Compliance Efficiency Index (CEI) queries.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_6", "selected_database": "cybermarket_pattern", "query": "Show me all protected platforms, whether they're up or down, how many serious escalation cases they have, and how bad their current alerts are.", "normal_query": "List all Secure Platforms and their current operational status. 
Also include metrics related to Tier-3 Escalation Case and Alert Severity Levels.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_7", "selected_database": "cybermarket_pattern", "query": "Tell me how many live listings we have in each category, along with which ones have weird descriptions and how many sketchy buyers are interacting with them.", "normal_query": "Count active listings for each Product Category, shown in descending order. Besides, show metrics related to Language Patterns, Suspicious Buyer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_8", "selected_database": "cybermarket_pattern", "query": "Break down transactions by how complicated their shipping routes were, then show me the counts with the trickiest routes at the top.", "normal_query": "Show the number of transactions per Shipping Route Complexity label, highest first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_9", "selected_database": "cybermarket_pattern", "query": " Tell me how the average security score stacks up across sessions with different privacy levels, rounded to 2 decimal places, from totally open to fully masked connections.", "normal_query": "List average OpSec score for each Session Anonymity Level, rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cybermarket_pattern_M_6", "selected_database": "cybermarket_pattern", "query": "I need to optimize the database for cross-border transaction lookups - could you create a dedicated index for those searches?", "normal_query": "Create an index to speed up searches for Cross-Border Transactions.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_10", "selected_database": "cybermarket_pattern", "query": "I want to know the average keyword-hitting values for all customer and internal chats to identify high-risk patterns. 
Round to 3 decimal places.", "normal_query": "Return the average Suspicion Signal Density (SSD) across all communications, rounded to 3 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_7", "selected_database": "cybermarket_pattern", "query": "Make a table called 'suspicious_buyers_cap' that lists all the shady buyers, but only include ones that hit at least $10 in suspicious activity.", "normal_query": "Create table suspicious_buyers_cap listing Suspicious Buyers with a $10 cap.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_8", "selected_database": "cybermarket_pattern", "query": "I need to mandate sessions secured by two factor across the board. Please configure the system to upgrade any active sessions still relying on basic authentication.", "normal_query": "Force Premium Authentication by setting auth_protocol_type to \"2FA\" for every session that is currently using \"Basic\".", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_11", "selected_database": "cybermarket_pattern", "query": "I need the total number of transactions that were both marked as fraud and involved cross-border payments.", "normal_query": "Count Fraud-Flagged Transactions that are Cross-Border.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_12", "selected_database": "cybermarket_pattern", "query": "Calculate how many hours we typically take to close Tier-3 escalations. 
Show the average value, rounded to hundredths.", "normal_query": "Return the average resolve time in hours for Tier-3 Escalation Cases, rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_13", "selected_database": "cybermarket_pattern", "query": "How many platforms show as 'active' right now?", "normal_query": "Count platforms currently marked as Active.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_9", "selected_database": "cybermarket_pattern", "query": "Show me where our response is slowest—give me a quick breakdown by key groups, a percentile snapshot, and the 50 slowest sessions.", "normal_query": "Analyze connection_security to optimize Threat Handling Rate reports.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_14", "selected_database": "cybermarket_pattern", "query": "How many shoppers are using advanced authentication?", "normal_query": "Count buyers who have Advanced authentication.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_15", "selected_database": "cybermarket_pattern", "query": "What's the overall revenue from digital goods? Round the result to 2 decimal places.", "normal_query": "Sum total sales value for Digital product listings, rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_16", "selected_database": "cybermarket_pattern", "query": "What's the average distance traveled for shipments with complex routes? 
Round the result to 2 decimal places.", "normal_query": "Compute the average geographical distance for shipments on complex routes and round the result to two decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_M_10", "selected_database": "cybermarket_pattern", "query": "Set up the secure-platform snapshot—only create it if it isn't there yet.", "normal_query": "Create the secure-platform summary materialized view if it does not already exist.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_17", "selected_database": "cybermarket_pattern", "query": "How many critical alerts do we have?", "normal_query": "Count alerts with Critical severity level.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_18", "selected_database": "cybermarket_pattern", "query": "What's the ratio of sales went through escrow? Round to 2 decimal places.", "normal_query": "Calculate the ratio of transactions that used escrow, rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_19", "selected_database": "cybermarket_pattern", "query": "How many message threads contain irregular phrasing, sudden language switches, or machine translated text that indicate possible deception?", "normal_query": "Count communication threads with Suspicious language patterns.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cybermarket_pattern_20", "selected_database": "cybermarket_pattern", "query": "How many buyers have unpredictable spending trends?", "normal_query": "Count buyers with Variable spend pattern.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "archeology_scan_1", "selected_database": "archeology_scan", "query": "I'd like to see which of our dig sites have the best scan quality ratings. Could you show me each site's ID and name along with their average quality score, sorted best to worst?", "normal_query": "I'd like to see a quality assessment of scans across our archaeological sites. Show site code, site name, average Scan Quality Score for each site and rank them from highest to lowest quality.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_2", "selected_database": "archeology_scan", "query": "Which sites need urgent conservation work? 
Please show me each location's ID, name, structural condition, preservation status, and whether they're in a high-risk category.", "normal_query": "Could you help me find archaeological sites that might need urgent conservation attention? I'm particularly interested in identifying sites that fall into Degradation Risk Zones. For each site, I'd like to see their code, name, structural state, and preservation status, along with their Risk Zone Category. This information would help our conservation team prioritize their efforts.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "archeology_scan_3", "selected_database": "archeology_scan", "query": "Where are the best places to do scanning based on weather conditions? Show me each site's ID and name with their average environmental condition score indicating suitability for scanning operations.", "normal_query": "I'm planning our upcoming archaeological scanning sessions and want to understand which sites have the most favorable scanning environments. Could you show me a report with each site's code, name, and its average Environmental Suitability Index? This would help us prioritize locations where we'll get the best scan quality.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_4", "selected_database": "archeology_scan", "query": "How reliable are our scan alignments? For each alignment record, could you show me the registration accuracy relative to scan resolution and the registration confidence category. I need to see its registration ID, project ID, accuracy measurements, error values, calculated ratio, and the confidence category.", "normal_query": "I'm evaluating the quality of our scan registrations and would like to understand which ones are most reliable for spatial analysis. Could you show me the Registration Accuracy Ratio and Registration Confidence Level for each registration? I'd need to see the registration ID, project ID, accuracy measurements, error values, calculated RAR (rounded to 2 decimal places), and what confidence level that translates to.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_5", "selected_database": "archeology_scan", "query": "Which archaeological sites have the best digital preservation? Rank our locations showing their ID, designation, and a comprehensive metric for evaluating digital preservation quality, with the best first.", "normal_query": "For our archaeological site evaluation, I need to quantify the Digital Preservation Quality metrics across our collection. Please compute a comprehensive DPQ index for each archaeological location. 
Present the results in descending order of DPQ values, displaying only the site identification code, site designation, and calculated DPQ value (rounded to two decimal places) to facilitate prioritization of our digital preservation resources.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_6", "selected_database": "archeology_scan", "query": "How good are our 3D models based on the criteria for high-fidelity standard? Please generate a comprehensive report that shows each site's ID, name, total mesh count, high-fidelity mesh count and proportion (as a percentage), average ratio of mesh complexity, average resolution parameters (in mm), average geometric accuracy measurements and Mesh Quality category. Present the data with the highest-fidelity results first.", "normal_query": "Would you generate a comprehensive report categorizing sites based on the High Fidelity Mesh standard? For each archaeological location, please include the site code, site name, total mesh count, high-fidelity mesh count and proportion (as a percentage), the average Mesh Complexity Ratio, average resolution parameters (in mm), average geometric accuracy measurements and Mesh Quality Classification. The data should be presented in descending order of high-fidelity percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_7", "selected_database": "archeology_scan", "query": "What are the scanning conditions like at each site? Show me each location's code and name, along with weather averages (temperature, humidity, and illumination levels), environment suitability score, and corresponding quartile ranking and environmental condition category based on the score.", "normal_query": "Show me each site's code and name, along with the average temperature, humidity, and illumination levels. I'd also like to see the average Environmental Suitability Index for each site, classified into quartiles, to understand the range of conditions. Finally, classify each site into Environmental Condition Classification System according to average ESI value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 1, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_8", "selected_database": "archeology_scan", "query": "I'd like to analyze how efficiently each scan processing workflow performs and spot any bottlenecks. For every software and stage combination, show me the software, processing stage, average hours needed for processing, average CPU and GPU usage percentages, average data size in GB, the ratio of the processing efficiency, and whether it's running efficiently or hitting bottlenecks ('Bottleneck Detected' if it is qualified as processing bottleneck, 'Efficient' if it is not). Also include how many workflows we're looking at for each combination. Sort the results by bottleneck status first, followed by the ratio value from lowest to highest.", "normal_query": "I want to evaluate each scan processing workflow's Processing Efficiency Ratio and identify whether it qualifies as a Processing Bottleneck. 
For each combination of processing software and stage, please include the software, stage, average processing hours, average CPU and GPU usage percentages, average data size in GB, the average PER value, and the efficiency status ('Bottleneck Detected' if it is qualified as processing bottleneck, 'Efficient' if it is not). Additionally, provide the total count of workflows for each combination. Sort the results by bottleneck status first, followed by the PER value in ascending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 1, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_9", "selected_database": "archeology_scan", "query": "Which sites are best for finding artifacts? Show me each location's ID along with the average ratio between total points and cloud density, and the average efficiency of feature identification. I need all sites included, even if some data might be missing. Sort the results by average feature identification efficiency in descending order.", "normal_query": "For each archaeological site, I need its Point Cloud Density Ratio and Feature Extraction Efficiency to identify sites with high potential for feature extraction. Please include the site code, average PCDR value, and average FEE value. Ensure that all sites are included, even if some data might be missing. Sort the results by average FEE in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_10", "selected_database": "archeology_scan", "query": "Hey, can you help me figure out how efficient our archaeological scanning gear is? I need to know the equipments' IDs, their efficiency of computing resource utilization (rounded to two decimal places), the average processing time in hours, their efficiency rankings, and their workflow efficiency status. Also, please include CPU usage (named 'cpu_usage'), GPU usage (named 'gpu_usage'), and processing hours (named 'processing_hours') as JSON in the resource details. Make sure to include all equipments, even if the data's incomplete, and sort everything by PRU value from lowest to highest. Thanks!", "normal_query": "My purpose is to analyze the Processing Resource Utilization (PRU) of our archaeological scanning equipment and categorize workflows according to the Workflow Efficiency Classification system. Please provide the equipments' IDs, PRU values (rounded to two decimal places), average processing time in hours, efficiency rankings, workflow efficiency status, and include the CPU usage (named 'cpu_usage'), GPU usage (named 'gpu_usage'), and processing hours (named 'processing_hours') in json format as resource details. I'd like all equipment to be included in the analysis, even those with incomplete data. 
Please sort the results by PRU value in ascending order to help identify the most efficient setups.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "archeology_scan_M_1", "selected_database": "archeology_scan", "query": "For our analysis work, let's create a special, pre-calculated table called high_fidelity_meshes to keep track of our best 3D models. In this table, I want to see the mesh's unique ID, the site it belongs to, the equipment used, the vertex and face counts, its resolution in millimeters, and its geometric accuracy. Also, please add a column for the ratio of its topological complexity to resolution. Only include the high fidelity meshes.", "normal_query": "We need to create a persistent table of all High Fidelity Mesh data for our archaeological analysis. Please set up a materialized view called 'high_fidelity_meshes'. The view should include the mesh's registry ID, site reference, equipment used, vertex and face counts, resolution in millimeters, geometric accuracy, and the calculated MCR value. Only include meshes that meet all the High Fidelity Mesh criteria.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "archeology_scan_M_3", "selected_database": "archeology_scan", "query": "Can you create a view for me called view_premium_quality_scans to identify high-quality archaeological scans? For each of these scans, please display its ID, project and site refs, the scan timestamp, scan resolution (mm), point density (points/m²), coverage percentage, overlap percentage, and noise level (dB). The main thing is to only include scans that meet our standards: high resolution, comprehensive coverage, and the noise level is below 1.5 dB.", "normal_query": "Create a view called view_premium_quality_scans that identifies high-quality archaeological scans. This view should include the Scan ID, Project Reference, Site Reference, Scan Timestamp, Scan Resolution (mm), Point Density (points/m²), Coverage (%), Overlap (%), and Noise Level (dB). The view should identify scans that meet the criteria for both a High Resolution Scan and Comprehensive Coverage, and also have a Noise Level less than 1.5.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "archeology_scan_M_4", "selected_database": "archeology_scan", "query": "I need a way to quickly check how good the scanning conditions were for our different sites. Can you create a view called site_esi that calculates how suitable environmental conditions were for scanning operations? For each site, just show its zone reference ID and the calculated ESI score, rounded to two decimal places.", "normal_query": "A view named site_esi is required. This view should determine the Environmental Suitability Index for each site. 
The output should include the Zone Reference and the calculated ESI value, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cross_border_1", "selected_database": "cross_border", "query": "Let's check out the top 5 riskiest data flows. For each one, show me the flow ID, how risky it is, and how sensitive the data is. Sort them by the most sensitive data first, and make sure to round everything to two decimal places.", "normal_query": "List the top 5 high-risk data flows, showing each flow's ID, Risk Exposure Score, and Data Sensitivity Index, including all flows even if risk or profile data is missing. Sort by Data Sensitivity Index from highest to lowest, rounding scores to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_2", "selected_database": "cross_border", "query": "Let’s see how vendors are distributed across different risk tiers. For each tier, tell me the tier name, how many vendors fall into it, and what percentage of the total that is (rounded to two decimals). Sort them so the tier with the most vendors comes first.", "normal_query": "Group all vendors by their Vendor Risk Tier. For each tier, return the tier name, the number of vendors in that tier, and the percentage of total vendors (rounded to two decimals). Sort the results by the number of vendors in each tier from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_3", "selected_database": "cross_border", "query": "Let’s find the top 10 overloaded data flows. For each, show me the flow ID, how much of the available bandwidth is being used compared to the total possible, and how efficient the transfer was based on the success rate and error count. We’ll sort them by bandwidth usage, from highest to lowest, and round the numbers to two decimal places.", "normal_query": "Find the top 10 Overloaded Data Flows, and list each flow's ID, its Bandwidth Saturation Index, and its Data Transfer Efficiency, with both metrics rounded to two decimal places. Sort by BSI from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_4", "selected_database": "cross_border", "query": "Let’s find the 5 data profiles most at risk for sensitive data exposure. For each one, tell me the profile ID, how sensitive the data is, and how strong the security protections are. Round the sensitivity score to two decimals and sort highest-to-lowest sensitivity. Use the scale High=3, Medium=2, Low=1; treat any other label (including 'Critical') as Low.", "normal_query": "Find the top 5 data profiles with potential Sensitive Data Exposure. For each one, show the profile ID, the data sensitivity score, and the security score. Round the sensitivity score to two decimal places and list them from highest to lowest sensitivity. 
Use the existing sensitivity scale (High=3, Medium=2, Low=1); treat any other label (including 'Critical') as Low.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_5", "selected_database": "cross_border", "query": "Let’s find the top 10 compliance records where there are issues with data moving between countries—like mismatched or missing origin and destination—and either GDPR or local law compliance is marked as failed. For each, I want the compliance ID, GDPR and local law status, and the data transfer route. Sort them by ID from smallest to biggest.", "normal_query": "Find the top 10 records where data is moving between different countries (the two countries don’t match or one is missing) and either GDPR or local-law status is marked Non-compliant. Show the record ID, the GDPR status, the local-law status, and the transfer route (origin to destination). Sort by ID from smallest to largest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_6", "selected_database": "cross_border", "query": "Let’s find the top 3 months with the highest average severity for audit findings, but only include audits where the severity score was over 0.5. For each month, I need the month (in 'year-month' format), the average severity (rounded to two decimal places), and how severe it was compared to other months. We’ll sort everything from the earliest to the latest month.", "normal_query": "Find the top 3 months with the highest average Audit Finding Severity for audits with a Critical Audit Issue. List each month ('year-month'), the average AFS (rounded to two decimal places), and its severity rank. Sort by month from earliest to latest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_7", "selected_database": "cross_border", "query": "Find audits where the pressure from data subject requests is greater than 50. For each of them, I need the audit ID, the pressure score (rounded to two decimal places), and a breakdown of the request types, such as how many requests for access, deletion, rectification, and portability were made. Sort the results by the pressure score from highest to lowest, and show up to 100 records.", "normal_query": "Find audits with a Data Subject Request Pressure greater than 50. List each audit’s ID, the DSRP (rounded to two decimal places), and a breakdown of request types (access, deletion, rectification, portability). Sort by DSRP from highest to lowest, and show up to 100 records.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_8", "selected_database": "cross_border", "query": "Let's look at data flows that cross borders and calculate their associated risk based on their volume. 
For each flow, I need to see the flow ID, its risk factor (rounded to two decimal places), the total risk (rounded to two decimal places), and how each flow ranks based on its total risk. Give me the flows where the total risk exceeds 1000, and sort them from highest to lowest. Please limit the results to the top 5 flows.", "normal_query": "For cross-border data flows, calculate the Cross-Border Data Volume Risk and list the flow ID, Cross-Border Risk Factor (rounded to two decimal places), CDVR (rounded to two decimal places), and the rank of CDVR. Show only flows where CDVR is greater than 1000, sort by CDVR from highest to lowest, and limit to the top 5.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_9", "selected_database": "cross_border", "query": "Let’s find the data profiles that have failed their integrity checks. For each profile, I need the profile ID, the count of integrity failures, and a list of failure types (like 'Integrity Check' or 'Checksum Verification') in a single string, separated by commas. Sort the profiles by the failure count, starting with the highest, and show me just the top 10.", "normal_query": "Find data profiles with a Data Integrity Failure, and calculate their Integrity Failure Count. List each profile’s ID, its IFC, and the types of failures (like 'Integrity Check' or 'Checksum Verification') in a single string, separated by commas. Sort by IFC from highest to lowest, and show only the top 10 profiles.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_10", "selected_database": "cross_border", "query": "Let’s find cross-border data flows that are under high audit compliance pressure. Focus on those with slow remediation timelines and remediation deadlines approaching within the next 5 days (assuming today is 2025-04-01). For each of these flows, I need the flow ID, the audit compliance pressure (rounded to two decimal places), and how many days the remediation is overdue. Sort these by the most overdue flows first, followed by audit compliance pressure from highest to lowest. Limit the results to the top 10 flows.", "normal_query": "I want to find cross-border data flows with High Audit Compliance Pressure. Focus on flows with slow remediation timelines and remediation deadlines within the next 5 days (assuming today is 2025-04-01). Show the flow ID, the Audit Compliance Pressure rounded to 2 decimal places, and the days overdue. Sort by days overdue from most overdue to least, then by Audit Compliance Pressure from highest to lowest, and limit to the top 10 flows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_M_1", "selected_database": "cross_border", "query": "Find the systems that work with a lot of sensitive stuff but don’t have strong protection in place. If something fits that risk profile, mark it for review. 
For each one, show its ID, whether we flagged it, and key details about how it’s secured.", "normal_query": "Identify systems that should be flagged for review if they have a high Data Sensitivity Index (DSI) and a low Security Robustness Score (SRS). For each, return the system ID, whether it's marked for review, and key security settings.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_2", "selected_database": "cross_border", "query": "We need to keep an updated summary of how well the data flows are performing. Make sure we have a place to store the record ID, the success rate, the error count, and a timestamp showing when the data was last updated. For every record, calculate how efficient the data transfer was. Then, if we don’t already have a record for it, add a new one, or if it’s already there, update it with the latest success rate, error count, and timestamp.", "normal_query": "We need to maintain a reliable summary that tracks the performance of each data flow. For every data transfer, calculate its Data Transfer Efficiency (DTE) and make sure this value is stored in a dedicated record, along with the original success rate, the number of errors, and the timestamp when this performance summary was last refreshed. If there’s already a summary for a data flow, make sure it gets updated with the latest numbers; if not, create a new one with all the required information. The DTE value should be rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_3", "selected_database": "cross_border", "query": "Let’s find all data transfers that go between two different countries and clearly fail to meet legal requirements. We’ll only consider it a serious compliance gap if either the general data protection rules or the local laws are explicitly marked as not being followed — not just partially done, but fully non-compliant. For each one, show the countries involved, some identifying info about the flow, and who the vendor is if we know it.", "normal_query": "Please create a materialized view named cross_border_compliance_gap_view. This view should act as a pre-computed list identifying all data flows that exhibit a Cross-Border Compliance Gap, defined as flows where the origin and destination countries differ, and where either GDPR compliance or local law compliance is marked as 'Non-compliant'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_M_4", "selected_database": "cross_border", "query": "Let's update the dataflow table by adding a column called transfer_path. For all the data flows that cross borders, I want you to create a string that shows the journey from the origin to the destination country in this format: 'OrigNation -> DestNation'. Make sure the column gets filled in for all the existing records.", "normal_query": "Please modify the dataflow table by adding a new column called transfer_path. 
Once the column is added, populate it for all existing Cross-Border Data Flows by creating their Transfer Path string, which combines the origin and destination nations.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_5", "selected_database": "cross_border", "query": "Let’s go through the audit records. If a record has a lot of critical findings — meaning the number of critical issues is more than half of the total findings — and the remediation deadline has already passed, change its status to 'Overdue'. But only do that if the current status isn’t already set to 'Complete' or 'Overdue'.", "normal_query": "Please update the AuditAndCompliance table. For the purpose of this operation, define a 'Critical Audit Issue' as any audit where the number of critical findings is greater than 50% of total findings. For any such audit record where the remediation due date is earlier than today, set its remediation status to 'Overdue'. This should only apply if the current status is not already marked as 'Complete' or 'Overdue'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_11", "selected_database": "cross_border", "query": "Let’s figure out which data flows are really feeling the heat from audits and compliance costs. First, how heavy is the audit load? For each flow, take the number of critical findings, divide by total findings plus one, and multiply that by the total number of data subject requests. Then, how costly is it to stay compliant? Divide the compliance cost by the penalties plus one. Now show me the flows where both numbers are high—specifically, audit load over 10 and cost pressure over 0.8. For each of those, give me the flow ID, the audit load, and the cost pressure, both rounded nicely. Just make sure to link everything properly across the tables using the flow ID.", "normal_query": "I want to identify data flows with both high audit remediation load and high compliance cost pressure. Calculate the remediation load as audit severity (critical findings over findings + 1) times total data subject requests. Compute cost pressure as total compliance cost divided by penalties plus 1. List the flow ID along with both values, rounded to two decimal places, but only include flows where remediation load is over 10 and cost pressure exceeds 0.8. Ensure to join the relevant tables using the flow ID.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cross_border_12", "selected_database": "cross_border", "query": "I want to find data flows that seem both risky and unreliable. I'm looking at how dangerous they are based on how well protections are working, and how often the data transfers succeed without problems. Just show me the ID, how much risk is involved, and whether they usually work well.", "normal_query": "I want to identify data transfers with high RES and low DFRS. 
Please return the unique identifier, RES, and DFRS for each qualifying transfer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cross_border_13", "selected_database": "cross_border", "query": "Let’s create a list that keeps track of data flows with compliance issues when data crosses borders. For each flow that has this compliance gap, we need to include details like the record ID, the flow tag, the countries involved (origin and destination), the compliance status with relevant data protection laws, the status of compliance with local regulations, and the vendor trace ID. This list will be called cross_border_compliance_gap_view.", "normal_query": "Please create a materialized view named cross_border_compliance_gap_view. This view should act as a pre-computed list identifying all data flows exhibiting a Cross-Border Compliance Gap. For each identified data flow, include the following details in the view: the record registry ID, flow tag, origin nation, destination nation, GDPR compliance status, local law compliance status, and the vendor trace ID.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_14", "selected_database": "cross_border", "query": "Let’s check out the data flows that might be considered high-risk because they involve sensitive data. For each of these, I need the flow ID, the sensitivity score, and the destination country. The sensitivity score is calculated by multiplying the data size by a factor: if the data is highly sensitive, it gets a 3x factor, otherwise it gets a 1x factor. Show me the flows where the sensitivity score is above 100 and sort them with the highest sensitivity first.", "normal_query": "I want to find data flows that could be considered high-risk based on their sensitivity. For each data flow, show me the flow ID, the calculated sensitivity score (called DSI), and the country where the data is going. The DSI is calculated by taking the data volume (in GB) and multiplying it by a factor based on how sensitive the data is: if the data is marked as 'High' sensitivity, the factor is 3, and for any other sensitivity, it’s 1. Only show the data flows where the DSI is more than 100, and sort them by DSI from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_15", "selected_database": "cross_border", "query": "I want to get a general idea of how trustworthy our vendors are. 
Can you give me a single number that reflects their overall reliability based on things like how secure they seem and whether they’re still actively working with us?", "normal_query": "Calculate the average Vendor Reliability Index (VRI) using the standard definition, across all vendors.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_16", "selected_database": "cross_border", "query": "Let’s figure out which data profile has the highest sensitivity score. For each profile, we’ll calculate a score by multiplying how much data it has (in GB) with a factor based on how sensitive it is—3 for High sensitivity, 2 for Medium, and 1 for Low. I just need the highest score from all the profiles.", "normal_query": "I’m trying to find out which data profile has the highest sensitivity score based on how much data it holds and how sensitive the data is. Each profile has a volume in gigabytes and a sensitivity level—either High, Medium, or Low. I want to multiply the volume by a factor depending on sensitivity: 3 if it's High, 2 if it's Medium, and 1 if it's Low. Then give me maximum of these calculated values across all data profiles.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_17", "selected_database": "cross_border", "query": "Let’s see which audits are under the most pressure from user data requests. For each one, add up the number of access, deletion, rectification, and portability requests—use zero if any values are missing—then multiply that total by the average response time. I just want to know which audit ends up with the highest result.", "normal_query": "I’m trying to find the maximum Data Subject Request Pressure (DSRP) from the audit records. To get this, I’ll calculate the Data Subject Request Load (DSRL) by adding up the number of access, deletion, rectification, and portability requests—treating any missing values as zero. Then I’ll multiply that total by the average response time (also defaulting to zero) and return the highest result.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_18", "selected_database": "cross_border", "query": "Let’s check out which vendors are carrying the biggest compliance burden. First, we’ll measure how bad their audit issues are by dividing critical findings by total findings plus one. Then we turn their security rating into a score—4 for ‘A’, 3 for ‘B’, 2 for ‘C’, and 1 for anything else. To get the compliance burden, we multiply the audit severity by (5 minus the security score). Show me the vendors with a burden over 1.5, and list their ID, compliance burden, and security score—sorted from highest to lowest. Include vendors even if they don’t have audit data.", "normal_query": "I’m looking for vendors with a high Vendor Compliance Burden (VCB). To get that, first compute their Audit Finding Severity (AFS) by dividing critical findings by total findings plus one. Then turn their security rating into a number: 4 for ‘A’, 3 for ‘B’, 2 for ‘C’, and 1 for anything else. 
Multiply AFS by (5 minus the security score) to get the VCB. Show only vendors with a VCB above 1.5, and return their vendor ID, the VCB rounded to two decimals, and their numeric security rating—sorted from highest to lowest. Include all vendors with ratings, even if they don’t have audit data.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_19", "selected_database": "cross_border", "query": "Let's look at countries where the data is super sensitive but the encryption's a bit lacking. For each country, I need the number of profiles, the average sensitivity score, average security strength, how well data is encrypted, and how long it’s being kept. Only show me the countries where the sensitivity score is above 100, and the encryption coverage is below 2. Make sure to sort them by encryption first (lowest to highest) and then by sensitivity (highest to lowest), and give me the top 20. You’ll need to work out the sensitivity from data volume, the security score from encryption and access settings, and the coverage ratio by combining both.", "normal_query": "I’m looking to assess countries where data sensitivity is high but encryption coverage is weak. For each destination country, calculate the number of profiles, the average Data Sensitivity Index (DSI), average Security Robustness Score (SRS), average Encryption Coverage Ratio (ECR), and average retention days. Only include destinations where the average DSI is over 100 and the ECR is below 2. Sort the results by ECR in ascending order, then DSI in descending order, and return the top 20. You’ll need to compute DSI from data volume and sensitivity, SRS from encryption and access control settings, and ECR by combining both. Be sure to link the profiles and flow data properly using their shared identifiers.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_20", "selected_database": "cross_border", "query": "Let’s find data flows that look risky and involve a lot of sensitive data. For each one, can you tell me when it happened, who sent it, where it went, and which protocol it used? I only want the ones where the risk exposure score is above 0.7 and the data sensitivity index is over 100. Show me the top 50, sorted from highest risk to lowest, and use the sensitivity score as a tiebreaker. You’ll need to pull info from different places using the flow ID, even if some values are missing.", "normal_query": "I want to find data flows with high Risk Exposure Score and high Data Sensitivity Index. For each of these flows, show the timestamp, origin actor, destination country, protocol used, the computed risk exposure (rounded), and data sensitivity (rounded). A flow qualifies if its risk exposure is greater than 0.7 and its sensitivity index exceeds 100. Sort the results by risk exposure and sensitivity, both in descending order, and return the top 50 flows. 
Use the flow identifier to combine data across the necessary tables, even if not all values are present.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cross_border_M_6", "selected_database": "cross_border", "query": "Let’s create a quick summary view called `DataFlowSummary` for data flows with 'High' or 'Critical' sensitivity levels. For each flow, I need details like the record ID, destination country, the actor that started the flow, the data size, the duration, and the sensitivity level. This is for summarizing only those flows with the specific sensitivity levels.", "normal_query": "I want to create a view called `DataFlowSummary` that summarizes the data flows from the DataFlow record, specifically for flows with 'High' or 'Critical' sensitivity levels. The view should include details like the record identifier, destination country, originating actor, data size, duration, and sensitivity level. This involves filtering based on the sensitivity level of the data flows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_7", "selected_database": "cross_border", "query": "So, I need a list of data flows that are marked as 'Critical' and have a risk score above 50. For each one, I’d like to see the flow ID, sensitivity level, risk score, mitigation state, encryption status and method, the vendor assessment, and when the contract expires. Make sure the list is sorted by the highest risk score first.", "normal_query": "Generate a report for data flows with a sensitivity level of 'Critical' and a Risk Exposure Score (RES) greater than 50. The report should include the flow identifier, sensitivity level, risk assessment, risk mitigation state, encryption status and method, vendor assessment, and contract expiry date. The results should be ordered by risk assessment in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_8", "selected_database": "cross_border", "query": "Let’s pretend we just had a critical data transfer. It happened on 2025-06-29 at 10:30 AM, 'ActorA' sent 150.5 MB of data to 'ActorB' in the USA over TCP. It took 60 minutes, and the data was marked as “Critical”. Let’s log that with ID 'UUID-1236'.", "normal_query": "Please add a new data exchange event to the system. Use ID 'UUID-1236', timestamp '2025-06-29T10:30:00', initiated by 'ActorA', received by 'ActorB', sent to 'USA', over 'TCP'. The data volume was 150.5 MB, it lasted 60 minutes, and the sensitivity level is 'Critical'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cross_border_M_10", "selected_database": "cross_border", "query": "Let’s clean up the data a bit. I need you to delete any records where the success rate is under 50% and the sensitivity level is 'Low.' 
But only delete those if they’re also linked to records with a risk score under 20, and if they’re tied to records where the GDPR status is 'Non-compliant.'", "normal_query": "I want to delete records where the success percentage is below 50 and the data sensitivity level is 'Low.' Additionally, only delete these records if they are linked to entries with a risk assessment score under 20 (this is related to the Risk Exposure Score calculation) and if they are also linked to records with non-compliant GDPR status (this refers to GDPR compliance).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_1", "selected_database": "crypto_exchange", "query": "What's the current spread percentage of the midpoint price? Show me the exchange code, the timestamp of the snapshot, and the calculated spread for the latest market data.", "normal_query": "Could you calculate the Spread Percentage for the most recent market snapshot. Show me the exchange code of the most recent market snapshot with the timestamp of the snapshot, and the calculated percentage?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_2", "selected_database": "crypto_exchange", "query": "Show me how much of each order has been filled by checking the most recent execution record. Please include the order ID, total order quantity, remaining quantity, and the calculated rate.", "normal_query": "For each order, calculate the Order Fill Rate based on its latest execution record. Display the order ID, total order quantity, remaining quantity, and the calculated order fill rate.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_3", "selected_database": "crypto_exchange", "query": "What's the risk exposure for our top 5 positions right now? Show me the margin-form identifier, the position's notional value, the volatility measure used and the calculated risk value.", "normal_query": "Calculate the Position Value at Risk (PVaR) for the top 5 positions, using their notional value from risk and margin data and the single latest market volatility reading. 
Show me the margin-form identifier, the position's notional value, the volatility measure used, and the calculated PVaR.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_4", "selected_database": "crypto_exchange", "query": "Show me the risk and margin pivot ID, the associated order ID, the account balance node ID, the initial margin hold value, the margin account balance, and the percentage of margin being utilized.", "normal_query": "Please display the risk and margin pivot ID, the associated order ID, the account balance node ID, the initial margin hold value, the margin account balance, and the calculated margin utilization.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_5", "selected_database": "crypto_exchange", "query": "What's our overall profit ratio when comparing winning and losing trades across all accounts? Display the total sum of positive realized PnL, the total sum of negative realized PnL, and the calculated ratio of profitable trades to losing trades.", "normal_query": "Can you calculate the Profit Factor based on the realized PnL across all account balances? Display the total sum of positive realized PnL, the total sum of negative realized PnL, and the calculated Profit Factor.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_6", "selected_database": "crypto_exchange", "query": "How do trading spreads vary with market mood across different currency pairs? Show me the the market pair name, the calculated percentage, the overall market sentiment, the buy force, the average percentage for that sentiment, and the percentile rank of the percentage.", "normal_query": "Analyze the Spread Percentage across different markets and correlate it with market sentiment indicators. For each market pair, display the market pair name, the calculated spread percentage, the overall market sentiment, the buy force, the average spread percentage for that sentiment, and the percentile rank of the spread percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_7", "selected_database": "crypto_exchange", "query": "How well does smart money predict price changes? I'd like to see the dominance category, the level of 'Whale-Driven Market' activity, the market pair, the average price change over 1 hour, average price change over 4 hours, average price change over 24 hours for different market pairs and calculate the success rate of smart money flow. Please group the results by flow dominance, whale activity, and market pair, and sort them by the successful smart money flow rate, from highest to lowest.", "normal_query": "I want to understand the impact of 'Smart Money Flow' on price movements across different market pairs. 
Can you provide the 'flow dominance' category, the level of 'Whale-Driven Market' activity, the market pair, the average price change over 1 hour, average price change over 4 hours, average price change over 24 hours for different market pairs and calculate the 'smart money accuracy' rate. Please group the results by flow dominance, whale activity, and market pair, and sort them by smart money accuracy, from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_8", "selected_database": "crypto_exchange", "query": "I want to know the real leverage traders are using. Can you provide the notional value of position, position leverage multiplier, the total wallet balance, and the resulting effective leverage for each relevant position?", "normal_query": "To analyze the 'Effective Leverage' for positions, please provide the notional value of position, position leverage multiplier, the total wallet balance, and the resulting effective leverage for each relevant position.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_9", "selected_database": "crypto_exchange", "query": "I want to determine the strength of technical signals in the market. Please provide the RSI(14) value, MACD line value, Bollinger Band width, the technical meter direction, and the calculated strength.", "normal_query": "I want to determine the 'Technical Signal Strength' in the market. Please provide the RSI(14) value, MACD line value, Bollinger Band width, the technical meter direction, and the calculated technical signal strength.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_10", "selected_database": "crypto_exchange", "query": "I need to identify the large orders that could significantly impact market prices. Please include the order ID, the trade side (Buy or Sell), the order quantity, and the depth volume in units of both bid and ask.", "normal_query": "Help me find the Whale Orders, including the order ID, the trade side (Buy or Sell), the order quantity, and the depth volume in units of both bid and ask for any order that qualifies as a Whale Order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_1", "selected_database": "crypto_exchange", "query": "Clean up our executed orders data by removing all records for orders that were cancelled.", "normal_query": "We need to clean up our 'orderExecutions' table by removing all orders with a 'Cancelled' orderflow status. 
Can you create such query?", "preprocess_sql": ["CREATE table \"orderexecutions_bak\" as select * from \"orderExecutions\";"], "clean_up_sqls": ["\nINSERT INTO \"orderExecutions\"\nSELECT * FROM \"orderexecutions_bak\"\nWHERE ordersmark IN (\n SELECT RecordVault\n FROM \"orders\"\n WHERE LOWER(TRIM(order_attributes->>'status')) = 'cancelled') AND (order_attributes->>'quantity')::real > 5;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_2", "selected_database": "crypto_exchange", "query": "Make a function called 'calc_effective_leverage' that figures out how leveraged a position really is by comparing its size to the trader's wallet balance.", "normal_query": "Create a function called 'calc_effective_leverage' that takes position leverage (as text), position value, and wallet balance to calculate Effective Leverage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_3", "selected_database": "crypto_exchange", "query": "Can you help me create a recalc_market_impact_cost procedure that grabs the current market impact factor and calculates impact costs for all our 'New' orders, then saves the results with timestamps? We'll need a special log table for this, named market_impact_cost_log, which should have columns for a unique auto-incrementing ID (primary key), the order's reference text field, the calculated impact cost as a number, and when it was calculated with timezone info defaulting to current time. We don't need to run the procedure just yet.", "normal_query": "We need to track and calculate Market Impact Cost for all new orders. Please create a procedure called 'recalc_market_impact_cost' that gets the current market impact factor, calculates MIC for all orders with 'New' status using the formula, and logs the results with timestamps. Besides, create a log table 'market_impact_cost_log' to store the impact costs with columns for ID, order reference, calculated MIC, and timestamp (log_id SERIAL PRIMARY KEY, ordersmark TEXT, mic NUMERIC, calculated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()). No need to call the procedure now.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "crypto_exchange_M_4", "selected_database": "crypto_exchange", "query": "Make a view called 'whale_orders' that flags really big orders by comparing their size to the market's available liquidity, showing the order ID, market note, order quantity, and available liquidity.", "normal_query": "Could you create a view called 'whale_orders' that identifies all Whale Orders in our system? 
We need to see the order ID, market note, order quantity, and available liquidity for orders.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_5", "selected_database": "crypto_exchange", "query": "Add a new field called 'spread_percentage' to show the spread percentage calculation of all market data records by updating their JSON fields for orderbook metrics.", "normal_query": "Please update all market data records to include the Spread Percentage as a new field 'spread_percentage' in the orderbook_metrics JSON in table 'marketdata'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_11", "selected_database": "crypto_exchange", "query": "I need to understand our platform's overall risk level. Can you tell me, on average, what percentage of their available margin our users have currently tied up in positions?", "normal_query": "Help me calculate the platform-wide average for 'Margin Utilization'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_12", "selected_database": "crypto_exchange", "query": "I need a quick risk assessment of how many of our users are in the danger zone of getting a margin call.", "normal_query": "Generate a count of all accounts that are currently at 'Margin Call Risk'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_13", "selected_database": "crypto_exchange", "query": "Can you count how many enormous trades have occurred on our platform? 
I'm looking for the total number of single orders that were so large they were more than 10% of the market's depth at that moment.", "normal_query": "Provide a total count of all orders that are classified as a 'Whale Order'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_14", "selected_database": "crypto_exchange", "query": "Can you calculate the average spread as a percentage of the midpoint price?", "normal_query": "What is the average 'Spread Percentage' across all of our markets?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_15", "selected_database": "crypto_exchange", "query": "How risky is order OR6015391 in terms of getting liquidated?", "normal_query": "What is the Liquidation Risk Level for order OR6015391?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_16", "selected_database": "crypto_exchange", "query": "If we execute order OR6015391 right now, what is the cost of its impact on market? Please rounded to 2 decimals", "normal_query": "What is the Market Impact Cost for order OR6015391, rounded to 2 decimals?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_17", "selected_database": "crypto_exchange", "query": "Is the EX203 market drying up right now? Tell me if we're in a liquidity crunch where it's hard to trade without moving prices by returning the categorical status 'Liquidity Crisis' or 'Normal Market Conditions'.", "normal_query": "Our trading strategy requires large transactions in liquid markets. Is market EX203 experiencing a Liquidity Crisis? Return the categorical status 'Liquidity Crisis' or 'Normal Market Conditions'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_18", "selected_database": "crypto_exchange", "query": "How good are the average returns on order OR6015391 adjusted for risk exposure?", "normal_query": "What are the average Risk-Adjusted Returns for order OR6015391?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_19", "selected_database": "crypto_exchange", "query": "Our arbitrage strategy robot needs to identify cross-market spread opportunities. Does EX203 have significant arbitrage opportunities across markets? Return 'Arbitrage Opportunity' if the value exceeds the threshold, otherwise 'Normal Market'.", "normal_query": "Our arbitrage strategy robot needs to identify cross-market spread opportunities. 
According to our arbitrage strategy, when the cross-market spread exceeds the threshold, an Arbitrage Window exists, triggering automated trading. Please determine whether EX203 presents an arbitrage opportunity. Return 'Arbitrage Opportunity' if the value exceeds the threshold, otherwise 'Normal Market'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_20", "selected_database": "crypto_exchange", "query": "What percentage of order OR6015391 has been filled?", "normal_query": "What is the Order Fill Rate for order OR6015391?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_6", "selected_database": "crypto_exchange", "query": "Clean up old execution records that have passed their expiration date, but only for those quick-fire orders that either fill immediately or cancel.", "normal_query": "Purge expired execution records for IOC/FOK orders where expireSpot timestamp is before current time.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_7", "selected_database": "crypto_exchange", "query": "Build a live liquidity dashboard 'market_liquidity_dashboard' showing the exchange spot market symbol, snapshot timestamp, and the corresponding liquidity ratio.", "normal_query": "Create a view market_liquidity_dashboard showing the exchange spot market symbol, the timestamp when the snapshot was taken, and the corresponding liquidity ratio for each market.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_8", "selected_database": "crypto_exchange", "query": "Can you make a helper calc_spread_pct() that takes the JSONB for fast order-book analytics and returns the calculated spread percentage?", "normal_query": "Create function calc_spread_pct() that takes the JSONB for fast order-book analytics and returns the calculated spread percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_9", "selected_database": "crypto_exchange", "query": "I want to create a trg_margin_util that automatically update how much percentage of margin being utilized whenever their risk profile changes. Please store the result in a new JSONB key named margin_util_pct inside the margin_risk_profile column.", "normal_query": "Create trigger trg_margin_util that auto-calculates Margin Utilization whenever margin profile changes. 
The result should be stored in a new JSONB key named margin_util_pct inside the margin_risk_profile column.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "crypto_exchange_M_10", "selected_database": "crypto_exchange", "query": "Emergency brake on dangerous bets! Cancel orders classified as Critically Over-Leveraged Position and set the cancellation reason to 'Critical Leverage'.", "normal_query": "Please cancel executions for positions classified as Critically Over-Leveraged Position and set the cancellation reason to 'Critical Leverage'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_1", "selected_database": "polar_equipment", "query": "Let's compare how efficient our equipment is versus how safe it is. Can you show me a list with the equipment type, its code, its efficiency score, and its safety score? Then, for each equipment type, rank them by efficiency and by safety. I also want to see how big the gap is between those two ranks. Sort everything by type, and then by the best efficiency score.", "normal_query": "Show me the equipment type, equipment code, equipment efficiency rating, safety index, efficiency rank, safety rank, and the absolute rank difference between them. Sort the results by equipment type and then by EER in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_2", "selected_database": "polar_equipment", "query": "I need to know which of our gear is ready for a bad storm. Can you check everything and give me a list of all equipment that's up to our 'extreme weather readiness' standard? For each item, I want to see its code and type, whether the heater, insulation, and emergency lights are good to go, its structural safety score, and the final 'Ready' or 'Not Ready' label.", "normal_query": "Could you identify all equipment that meets the extreme weather readiness criteria in our polar database? Show me the equipment code, equipment type, heater status, insulation status, emergency light status, the calculated structural safety factor, and the extreme weather readiness status. Make sure to include all equipment with available structural safety data, even if some equipment might be missing cabin environment, lighting safety, or thermal insulation information.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_3", "selected_database": "polar_equipment", "query": "Time for a safety check on our life support gear. Can you create a report for me? I need to see the equipment's code and type, its current life support status, and its calculated reliability score. Based on that score, tell me if it's 'High', 'Moderate', or 'Low' reliability. 
Also, toss in a little JSON summary showing the status of the oxygen, medical, and safety systems with fields names: 'oxygen_status', 'medical_status', 'safety_system_status'. Let's just focus on the 'Safety' type equipment and sort it by the best reliability score.", "normal_query": "For our polar safety assessment, I need to evaluate the safety equipment's life support system reliability. Please provide a report showing the equipment code, equipment type, life support status, calculated LSSR score (rounded to 2 decimal places), and reliability classification based on life support reliability classification. Also include a JSON summary of oxygen status , medical status, and safety system status as support systems status with fields names: 'oxygen_status', 'medical_status', 'safety_system_status'. Focus only on safety equipment and sort the results by LSSR in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_4", "selected_database": "polar_equipment", "query": "How green are our stations? I want a report showing each station's type and name, how many pieces of gear are there, and how much they rely on renewable energy. Please show the percentage of renewable use, the total renewable power in watts, and a simple classification according to the classification system of energy sustainability. Only look at stations with solar or wind data, and please sort them to show the greenest stations first.", "normal_query": "Provide the location type, station name, number of equipment at each station, their renewable energy contribution values (rounded to 2 decimal places), total renewable energy output in watts, and how they're categorized according to the energy sustainability classification System? Only include equipment that has measurable solar or wind output data, and sort the results from highest to lowest REC value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_5", "selected_database": "polar_equipment", "query": "Let's get a handle on our water situation at each station. For each station, can you tell me its name and location type? I need to see the average water quality score, the average water management score, and a count of how many systems are in 'conservation needed' mode. Also, give me a simple classification for both the water quality and the overall management status. Sort the list with the best-managed stations at the top.", "normal_query": "For each combination of station name and location type, I need to see station names, location types, average water quality indices, average water resource management index scores (both rounded to 2 decimal places), count of systems with water conservation requirement, water quality classification, and water resource management status. Sort by highest WRMI first, then by water quality.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_6", "selected_database": "polar_equipment", "query": "I need to check how ready our equipment is. 
Can you go through all the maintenance records and calculate the score for its operational readiness for each one? Just show me a list with the record ID, its operating hours, maintenance cycle hours, its current status, and the final readiness score.", "normal_query": "Could you calculate the operational readiness score for all our equipment maintenance records? I'd like to see the registry ID, operation hours, maintenance cycle hours, operational status, and the calculated ORS value for each record.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_7", "selected_database": "polar_equipment", "query": "Let's figure out how sustainable our power gear is. Can you calculate the index of energy sustainability for every power device? I need a list showing the device's code, its energy efficiency percentage, what its power source is, and the final index score you calculated.", "normal_query": "I want to calculate the energy sustainability index for each power device in our database. Please retrieve the equipment reference code, energy efficiency percentage, power source, and then calculate the corresponding ESI value for each device.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_8", "selected_database": "polar_equipment", "query": "How stable are our comms systems? I need a report that calculates the stability index for each communication unit. Can you show me the unit's ID, antenna status, signal strength, and network lag? Then, using that, calculate both the simple reliability index and the more complex stability index. Please round the numbers to make them easier to read.", "normal_query": "I would like to assess our polar base communication systems by calculating the base station communication stability index for each communication unit. Please extract the registry ID, antenna status, radio signal strength, and network latency from our communication records, then calculate both the communication reliability index and BSCSI for each unit. Make sure to round all values to two decimal places for clarity in reporting.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_9", "selected_database": "polar_equipment", "query": "I need a list of our safest and best-performing equipment. Can you find all the gear with a top-tier performance index of overall safety-say, anything over 0.75? For each item on the list, show me its equipment code, its calculated efficiency rating, and the final safety/performance score.", "normal_query": "Could you list all equipment with high overall safety performance index scores greater than 0.75? 
Please display the equipment code, calculate the equipment efficiency rating, and show the OSPI value for each item.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_10", "selected_database": "polar_equipment", "query": "Let's assess how our vehicles are performing. Can you calculate the coefficient for the vehicle's performance for every chassis we have? I just need a simple report with the chassis ID and its calculated performance score. Make sure to check all of them, even if some have missing engine data.", "normal_query": "For each chassis in our database, calculate the vehicle performance coefficient. I need a report showing the chassis registry ID first, followed by the calculated VPC value. Please include all chassis records in your analysis, even those without corresponding engine data.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_11", "selected_database": "polar_equipment", "query": "I need a quick number: how many of our shelters are actually ready if a big storm hits? Even if we're missing some sensor data for a shelter, it should still be part of the initial check. Just give me the final tally.", "normal_query": "I need to get a total count of all shelters that are prepared for severe weather. Please determine this by applying the extreme weather readiness status standard. The analysis should include all shelters, even if some weather or thermal data is missing. Provide the final result as a single number representing the total count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_12", "selected_database": "polar_equipment", "query": "What's our best-case scenario for getting good science data from the Arctic? Looking only at our equipment up north, find the highest chance of success for a mission from any single instrument. Just give me that single, top-line probability score rounded to two decimal places.", "normal_query": "I want to assess our top-end capability for research in the Arctic. Could you please calculate the maximum scientific mission success probability for any single piece of scientific equipment operating in the 'Arctic' region? Please provide the final result as a single value rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_13", "selected_database": "polar_equipment", "query": "Let's find our safest and most efficient truck. Just calculate their safety performance overall, and I want to see the single highest score out of the entire fleet. Just give me that top number, rounded.", "normal_query": "I need to identify the absolute best-performing vehicle in our fleet from a safety perspective. Please calculate the overall safety performance index for every vehicle. 
From all the calculated OSPI scores, find the single maximum value, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_14", "selected_database": "polar_equipment", "query": "I want to see our best and worst stability of equipment during long-term operation. For each category of equipment, can you show me a list of the top 5 most stable machines and the 5 least stable ones? Show me the equipment's ID, its category, and its stability score, and group the results by category, with the best ones on top.", "normal_query": "For each equipment type, please identify the 5 units with the highest long-term operational stability score (LOSS) and the 5 units with the lowest LOSS. Please display the equipment code, its type, and the calculated LOSS, ordered first by equipment type and then by the LOSS score in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_15", "selected_database": "polar_equipment", "query": "Let's see how much the antenna's condition matters for our comms. Can you group everything by the antenna status—like normal, warning, or error—and tell me the average communication stability for each? Also, show me how many links are in each group. Make sure to only use records where we have all the necessary data, and list the results with the most stable antenna status on top.", "normal_query": "I want to perform an analysis of communication link stability grouped by antenna status. For each antenna status category, please calculate the average base station communication stability index. The final report should display the antenna status, the total number of links for that status, and the average BSCSI rounded to two decimal places. For this analysis, please ensure you are using a complete data set. Sort the results by the average BSCSI in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_16", "selected_database": "polar_equipment", "query": "I want to rank our most efficient vehicles. For each truck, calculate its overall transportation efficiency number. Please show me a list of the top 100 vehicles, with the vehicle's ID, the coefficient for vehicle performance, the index for energy sustainability, and the overall transportation efficiency number, ordered from most efficient to least.", "normal_query": "I need to generate a comprehensive vehicle efficiency and sustainability report. For all vehicles, please calculate the polar transportation efficiency coefficient. The report should display the equipment ref for each vehicle, along with its calculated VPC, ESI, and the final PTEC. Please round the VPC, ESI and PTEC scores to two decimal places. 
Sort the results by the PTEC in descending order and show only the top 100 vehicles.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_17", "selected_database": "polar_equipment", "query": "What's the overall reliability score for all the gear we have running right now? I need the average comprehensive score, but only for the active equipment. Make sure you factor in the efficiency, readiness, and communication scores. Just give me that one final number, rounded.", "normal_query": "I need a high-level summary of our fleet's current operational state. Please calculate the average comprehensive operational reliability indicator for all equipment that is currently in an 'Active' operational status. Present the final result as a single value, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_18", "selected_database": "polar_equipment", "query": "How does the cold affect our batteries? I want to see a breakdown of battery health based on how cold it is outside. Group the gear into a few temperature buckets like 'Extreme Cold,' 'Standard Cold,' and 'Mild Cold,' and for each bucket, show how many pieces of equipment are in it and what their average battery health is.", "normal_query": "I need to analyze battery performance under thermal stress. Please calculate the temperature-zoned average battery health for all equipment. The report should group equipment by the standard external temperature ranges and display the equipment count and the average battery health for each zone, rounded to two decimal places. Order the results by the temperature range.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": true, "order": true}} +{"instance_id": "polar_equipment_19", "selected_database": "polar_equipment", "query": "I want a list of our most unreliable comms hubs. Figure out which stations have consistently bad link resilience. For each of those problem stations, show me the station's name, its average reliability and stability scores rounded by 2 decimals, and a list of all the equipment there so we know what to check. Only use complete data for this, and show me the worst stations at the top of the list.", "normal_query": "Please generate a report on stations with poor communication links. Use the communication network resilience assessment to identify all stations with 'Low Resilience'. For each of these stations, I need to see the station name, the average communication reliability index rounded by 2 decimals, the average base station communication stability index rounded by 2 decimals, and a list of all equipment at station contributing to the low score. Please ensure you use a complete data set for the calculations. 
Order the results by the average BSCSI, with the lowest first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_20", "selected_database": "polar_equipment", "query": "I need to see the water situation at all of our stations. Can you give me a list showing each station's name, its water quality score, and the water tank level? Also, add a simple category based on our standard classification system for water quality. Make sure every station shows up, even the ones we don't have water readings for, and list them alphabetically.", "normal_query": "Please generate a comprehensive water quality report for each station. For every station, show its name, the raw water quality index, and the water level percentage. Additionally, apply the water quality classification system to categorize the water. Ensure that all stations are included in the report, even if they have no associated water data. Order the results by station name.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "polar_equipment_M_1", "selected_database": "polar_equipment", "query": "To make things easier, can we build a reusable tool to figure out the index of energy sustainability? I need a function called 'calculate_esi' that takes an efficiency number and a power source name, and then just spits out the ESI score.", "normal_query": "I want to create a function called 'calculate_esi' taking two inputs, efficiency and resource, that returns the energy sustainability index for our equipment. Please make this a reusable PostgreSQL function that our team can call whenever needed.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_2", "selected_database": "polar_equipment", "query": "Our queries filtering scientific equipment by reliability are slow. Can you create a special index called 'idx_scientific_reliability' to speed things up? It should be built directly on the reliability score calculation so we can find our most reliable gear faster.", "normal_query": "Create a function-based index called 'idx_scientific_reliability' to optimize queries that filter scientific equipment based on their scientific equipment reliability. This index should directly implement the SER formula.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_3", "selected_database": "polar_equipment", "query": "Let's reward the well-maintained cabins. 
For any equipment that's in a cabin meeting our 'habitability standard', can you give its reliability index a 15% boost?", "normal_query": "Increase the reliability index by 15% for all equipment associated with cabins that meet our cabin habitability standard.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_4", "selected_database": "polar_equipment", "query": "I need a simple way to check on water usage for our dashboards. Can you create a view called 'water_management_view'? It should show the equipment ID, its calculated water management score, and categorize each of them based on the status classification of water resource management. Let's base it on all equipment that has water level data.", "normal_query": "Create a dashboard view called 'water_management_view' that calculates the water resource management index for all equipment with water level data. The view should display the equipment reference, the calculated WRMI value, and categorize each item according to the water resource management status classification.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_5", "selected_database": "polar_equipment", "query": "We need a standard way to calculate the performance coefficient for vehicles. Can you build a function called 'calculate_vpc' that takes brake wear, track wear, speed, and engine load as inputs? It's important that it's robust, so please make sure it throws an error if any of the input values are out of the expected range.", "normal_query": "For our polar vehicles, we need a utility function 'calculate_vpc' to calculate the vehicle performance coefficient for performance assessment. Create a PostgreSQL function that takes four parameters: brake pad wear percentage (0-100), track wear percentage (0-100), vehicle speed (km/h, non-negative), and engine load percentage (0-100). The function should validate these inputs with clear error messages.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_6", "selected_database": "polar_equipment", "query": "I want a standard way to figure out how reliable our life support systems are. Can you build a reusable function called get_lssr that takes an equipment ID and gives back its life support score? It should be based on our formulas for how ready the gear is and how well its insulation is holding up. Make sure the calculator doesn't break if some of the sensor data is missing; it should just return zero in that case.", "normal_query": "Please create a reusable function named get_lssr to standardize the calculation of the life support system reliability for any given piece of equipment. This function should take an equipment code as input and calculate the LSSR. 
The function must also handle cases where component data might be missing, returning 0 in such instances.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_7", "selected_database": "polar_equipment", "query": "Let's find our most problematic equipment. I need a list of all assets that are both performing poorly—let's use a rating to calculate equipment efficiency below 40 as the cutoff—and are also more expensive to maintain than other gear in their same category. For each one that meets these criteria, please log its equipment code in our review system table called EquipmentReviewLog, creating it if it doesn't exist, and note the reason it was flagged.", "normal_query": "Please identify all equipment with an equipment efficiency rating below 40 that also have a maintenance cost higher than the average for their specific equipment type. For each identified piece of equipment, create a new record in the EquipmentReviewLog table, creating the table if it doesn't exist. You should also insert its equipment code and a reason for the review.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_8", "selected_database": "polar_equipment", "query": "We need to add a hard safety stop to our system. Can you set things up so that if a piece of gear has failed its inspection, nobody can mark it as 'Active' and put it back in service? The system should block the update and give an error message. We can't have people using equipment that we know is broken.", "normal_query": "I need to implement a critical safety protocol. Please create a failed inspection activation lockout rule in the database. If this condition is met, the transaction should be blocked and an exception raised to prevent using unsafe equipment.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_9", "selected_database": "polar_equipment", "query": "Let's make a standard tool for measuring how well our energy and water systems work together. Can you build a calculator function called get_ewrii? It should take an equipment ID and give back a single score based on its energy sustainability and its water management performance. It's important that this tool is reliable; if it can't find some of the data it needs for a calculation, it should just return zero instead of breaking.", "normal_query": "I need to create a reusable function named get_ewrii to standardize our energy-water resource integration index calculation. The function should accept an equipment code and calculate the EWRII. 
The function must return a value of 0 if any of the underlying data for a calculation component is not found, thereby preventing query failures.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "polar_equipment_M_10", "selected_database": "polar_equipment", "query": "We need a database failsafe to protect our most important gear. Can you set up a trigger named trigger_prevent_delete_active_critical that stops anyone from deleting a piece of critical equipment from the system if it's currently running? The system should throw an error and block the deletion automatically.", "normal_query": "I need to enforce a database-level safety protocol for critical equipment. Please create a trigger named trigger_prevent_delete_active_critical that prevents the deletion of any equipment record that is currently 'Active' and also meets the definition of critical equipment. This trigger should fire before any delete operation on the Equipment table and raise an exception if the conditions are met, ensuring that essential, in-use assets cannot be accidentally removed.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_1", "selected_database": "sports_events", "query": "Please show me the average age of all sprint session winners at the time they won. The result should be a single age in years.", "normal_query": "Calculate the average age of all Sprint Winners at the time they won. Show the result as a single value representing the average age in years.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_2", "selected_database": "sports_events", "query": "Our team is studying how thin air affects car performance at racing venues. Can you pull up a list of all tracks that are located high enough above sea level to create high-altitude circuit? I need to see the track names and their exact elevations, with the highest altitude venues listed first.", "normal_query": "I need to identify all High-Altitude Circuits in our database for aerodynamics research. Please retrieve the circuit name and elevation for all circuits that qualify as High-Altitude Circuits. Sort the results by elevation in descending order to show the highest circuits first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_3", "selected_database": "sports_events", "query": "To analyze team performance, can you calculate the rate of constructor reliability for each team? I need to see the team names, their total races started, total races finished, the reliability rate as a percentage, and give them a reliability rank. 
Only include constructors with significant participation so we get meaningful data, and sort them from most reliable to least reliable.", "normal_query": "I need to analyze team performance by calculating the Constructor Reliability Rate for all constructors in our championship database. Please provide a ranking that shows each constructor's name, total races started, total races finished, their reliability rate as a percentage, and their reliability rank. Only include Constructors with Significant Participation to ensure statistical validity. Sort the results by reliability rate from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_4", "selected_database": "sports_events", "query": "I'm curious about the dominant wins in sprint races, which are characterized by large margins. Just give me the total number of these landslide wins.", "normal_query": "Please count how many Dominant Victory events occurred in sprint races. Just return the total count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_5", "selected_database": "sports_events", "query": "I want to see how McLaren's overall team performance develops over seasons - like watching their report card get updated after each race. Just show me the year, race ID, constructor name, and the cumulative constructor's performance score after each race event, so I can track how their performance score changes as the season progresses.", "normal_query": "Our team need to analyze how McLaren's Constructor's Performance Score (CPS) evolves throughout different seasons. Show the year, race ID, constructor name, and the cumulative CPS score after each race event, so I can track how their performance score changes as the season progresses.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_6", "selected_database": "sports_events", "query": "I'm curious about how Hamilton's value as a driver changes as he gets older and more experienced. Please give me the race IDs and his performance values.", "normal_query": "I need to analyze Lewis Hamilton's Driver Performance Value throughout his career. Can you show the race ID and the calculated DPV value?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_7", "selected_database": "sports_events", "query": "Can you rank the drivers based on the stability of their average lap time? Please show me each driver's surname and first_name in a JSON format, their average consistency score, and the number of Races Analyzed. Just focus on drivers who have competed in more than five races, and list the most consistent ones at the top.", "normal_query": "Can you rank the drivers based on their Average Lap Time Consistency? 
Please show me each driver's surname and first_name in a JSON format, their average consistency score, and the number of Races Analyzed. Just focus on drivers who have competed in more than five races, and list the most consistent ones at the top.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_8", "selected_database": "sports_events", "query": "I'm interested in the achievements of veterans. Could you pull up a list of the race year, official race event name, driver's full name, their podium position, and their age at the time of the race. Please show accomplishments by oldest drivers first, and for same-age drivers, show most recent results first.", "normal_query": "Retrieve all instances of a Veteran's Podium. For each occurrence, please provide the race year, the official race event name, the driver's full name, their specific podium position, and their calculated age at the time of the race. The results should be ordered first in descending order by the driver's age at the time of the race, and then in descending order by the race year.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_9", "selected_database": "sports_events", "query": "I need to generate a list that ranks the drivers' overall performance in a Sprint session. The output should include the event name, the driver's ID, and their performance index score. Please make sure the best performances are right at the top.", "normal_query": "I need to generate a report that calculates the Sprint Performance Index for every completed driver result in a sprint session. The output should include the event name, the driver's reference code, and the calculated Sprint Performance Index. Sort the results in descending order based on the index to feature the highest-scoring performances first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "sports_events_10", "selected_database": "sports_events", "query": "For each qualifying session, calculate the average percentage of qualifying specialists and tell me the average of those percentages across all sessions, rounded to two decimal places.", "normal_query": "For each qualifying session, calculate the average percentage of drivers who meet the Qualifying Specialist criteria, and output the average of those percentages across all sessions, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "sports_events_11", "selected_database": "sports_events", "query": "Using historical data, estimate the average probability that a driver achieves a hat trick, given that they start from pole position. 
Use the simplified probability assumptions for calculation, which estimate the chance to win and the chance to set the fastest lap if they're on pole, and the result should be rounded to four decimal places.", "normal_query": "Using historical data, estimate the average probability that a driver achieves a Hat Trick, given that they start from Pole Position. Base your calculation on assumed Pole-Based Race Win Probability and Pole-Based Fastest Lap Probability, and the result should be rounded to four decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "sports_events_12", "selected_database": "sports_events", "query": "Can you calculate how well the top 8 finishers perform on average in sprint sessions? Round the result to two decimal places.", "normal_query": "I need to analyze the average Sprint Performance Index (SPI) across top 8 finishers in sprint sessions. Please round the result to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "sports_events_13", "selected_database": "sports_events", "query": "Which constructor has the best track record for finishing races? Calculate the reliability rate among all constructors who have participated in at least 5 races, which shows the races finished out of races started, and return the highest reliability percentage, rounded to two decimal places.", "normal_query": "Which constructor has the best track record for finishing races? Help me find the highest Constructor Reliability Rate among all constructors who have participated in at least 5 races. The result should be rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "sports_events_14", "selected_database": "sports_events", "query": "Find drivers with at least 10 laps recorded, and identify the highest stability of a driver's lap times during a race. Give me the top score rounded to two decimals.", "normal_query": "I want to identify the best Lap Time Consistency performance from drivers who have completed at least 10 laps. Please provide me with the highest consistency score rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "sports_events_15", "selected_database": "sports_events", "query": "What's the absolute fastest lap time ever recorded in our database, measured in seconds? Make sure to ignore any zero or negative times.", "normal_query": "What is the fastest single Lap Time in Seconds recorded in the database? 
Exclude any zero or negative lap times.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_16", "selected_database": "sports_events", "query": "To do the performance analysis, please help me calculate the average duration of our pit stops (in seconds), excluding any records where the duration is not a positive value. I want a single output, rounded to three decimal places.", "normal_query": "For our performance analysis, please calculate the Average Pit Stop Duration (in seconds), excluding any records where the duration is not a positive value. The final output should be a single value, rounded to three decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "sports_events_17", "selected_database": "sports_events", "query": "I need to know if we have any circuits with specific environmental characteristics regarded as 'high-altitude'. Can you just give me a simple 'Yes' or 'No' answer?", "normal_query": "I need to know if we have any circuits that are considered a High-Altitude Circuit. Can you just give me a simple 'Yes' or 'No' answer?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_18", "selected_database": "sports_events", "query": "Please analyze every session that constitutes a championship race weekend and count the occurrences of unavailable date or time information. What's the session name with the highest total count of indeterminate entries?", "normal_query": "Please analyze every session within the standard Race Weekend Structure and count the occurrences of Indeterminate Event Timings for each session type. What's the session name with the highest total count of indeterminate entries?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_19", "selected_database": "sports_events", "query": "Calculate the time difference between each driver's qualifying lap and the pole sitter's lap, and then categorize drivers into three groups based on their qualifying performance. Return driver IDs, average deficits (rounded to 3 decimal places), and their qualifying cluster.", "normal_query": "Calculate each driver's Qualifying Time Deficit to Pole, and then categorize drivers into three groups based on Qualifying Performance Cluster. 
Return driver IDs, average deficits (rounded to 3 decimal places), and their qualifying cluster.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "sports_events_20", "selected_database": "sports_events", "query": "Can you calculate the average stops per car for each event, and just show me the total count of races classified as a 'Single-Stop Race' based on the pit strategy classification criteria?", "normal_query": "Can you calculate the Average Stops Per Car for each event and just show me the total count of races classified as a 'Single-Stop Race' based on Pit Strategy Cluster?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_1", "selected_database": "sports_events", "query": "Please add a boolean column to the table that records pit stops to identify whether the pit stops are efficient. The new column should be named 'is_efficient' and contain TRUE for efficient stops and FALSE otherwise. Besides, the value should remain NULL if the millisecond count is NULL.", "normal_query": "Please add a boolean column to the pit_stops table based on the Efficient Pit Stop criteria for our analysis. The new column should be named 'is_efficient' and contain TRUE for efficient stops and FALSE otherwise. Besides, the value should remain NULL if the millisecond count is NULL.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_2", "selected_database": "sports_events", "query": "Can you create a function named get_driver_age that takes the driver information (JSONB) as input and calculates their current driver age?", "normal_query": "Can you create a function named get_driver_age that takes a driver_identity JSONB parameter as input, extracts the birth_date from it, and returns the driver's current age in years as an INTEGER?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_3", "selected_database": "sports_events", "query": "Create a high_altitude_circuits view that shows all circuits that can be classified as high-altitude. I want to see their circuit ID, name, and elevation.", "normal_query": "Create a view called high_altitude_circuits showing all High-Altitude Circuit entries. 
Include the circuit key, name, and elevation from the circuits table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_4", "selected_database": "sports_events", "query": "Update the race records with a victory type marker using the sprint results timing data, setting victory_type to 'Dominant Victory' if the criteria are satisfied.", "normal_query": "Update the races table to flag Dominant Victory events in the event_schedule JSONB field (set victory_type to 'Dominant Victory').", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_5", "selected_database": "sports_events", "query": "Create a stored procedure named award_hat_trick that takes driver ID and race ID as parameters to verify and record the three key achievements in a single race weekend. If all three conditions are met, insert a record with achievement type being 'Hat Trick' into the table that records all achievements.", "normal_query": "Create a stored procedure named award_hat_trick that takes driver_id and race_id as parameters to verify and record if a specified driver accomplished a 'Hat Trick'. If all three conditions are met, insert a Hat Trick record (achievement_type should be 'Hat Trick') into the achievements table.", "preprocess_sql": ["-- Pre-process SQL to create achievements table\nCREATE TABLE IF NOT EXISTS achievements (\n achievement_id SERIAL PRIMARY KEY,\n driver_id INTEGER NOT NULL,\n race_id INTEGER NOT NULL,\n achievement_type VARCHAR(50) NOT NULL,\n recorded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,\n FOREIGN KEY (driver_id) REFERENCES drivers(drv_main),\n FOREIGN KEY (race_id) REFERENCES races(rak_id),\n CONSTRAINT unique_achievement UNIQUE (driver_id, race_id, achievement_type)\n);"], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_6", "selected_database": "sports_events", "query": "Build a view called podium_finishes showing all podium finishes in season standings. Display the driver's last name, race year, and their finishing position.", "normal_query": "Create a view named podium_finishes that displays all Podium Finish achievements in season standings. Show the driver surname (from driver_identity JSONB), race year, and final position.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_M_8", "selected_database": "sports_events", "query": "Please add a new true/false column called is_pole_position to the qualifying table with a default value of FALSE, then mark it as TRUE for whoever got the pole position.", "normal_query": "Add a new boolean column named is_pole_position to the qualifying table with a default value of FALSE. 
Then update this column to TRUE for all records having achieved Pole Position.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_29", "selected_database": "sports_events", "query": "I need to create a custom domain called championship_points based on the REAL data type that only allows zero or positive numbers for championship points.", "normal_query": "I want to create a custom domain named championship_points based on the REAL data type to store Championship Points System (Race) values. The domain should include a CHECK constraint to ensure all values are non-negative.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "sports_events_30", "selected_database": "sports_events", "query": "I need to clean up our constructor database by handling missing nationality information. Can you find all the teams where we don't know what country they're from and mark those entries as 'Not Recorded' instead of leaving them empty?.", "normal_query": "I need to clean up our constructor database by handling missing nationality information. For all constructors where the nationality field shows Data Unavailability, please update these records to explicitly indicate 'Not Recorded' instead of leaving them blank.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_1", "selected_database": "labor_certification_applications", "query": "I'm trying to figure out which visas take the longest to get approved. Can you give me a breakdown of the average wait time for each type of visa? Just show me the ones that actually got certified, and list them from longest to shortest wait time.", "normal_query": "I'm curious about how long it takes for different visa applications to get approved. Could you show me the average Application Processing Time for each of the Visa Classification Types? Please only include applications that were certified, and sort the list to show the visa types that take the longest on top.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_2", "selected_database": "labor_certification_applications", "query": "What's the percentage of H-1B applications that are successful?", "normal_query": "I want to know the Approval Rate for H-1B Visa Classification Types. Can you calculate the percentage of H-1B visa applications that end up being certified?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_3", "selected_database": "labor_certification_applications", "query": "I'm trying to see if bigger companies have an easier time getting visas approved. 
Can you break down companies based on different employer sizes? Then, for each size, tell me the average application success rate for getting those visas approved. I want to see how many companies are in each size group, and what their average approval rate is, from the highest approval rate to the lowest.", "normal_query": "I'm looking to understand how the size of an employer, based on their application volume, relates to their success in getting visa applications approved. Can you categorize employers into Employer Size Classifications, and then calculate the average Application Success Rate for each of these categories? I'd like to see the number of employers in each size category, and their average success rate, sorted from highest success rate to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_4", "selected_database": "labor_certification_applications", "query": "I'm curious to know which soc code categories are most frequently associated with successful H-1B visa applications. Can you list the top 5 job soc titles that appear most often in certified H-1B visa cases? I want to see which jobs are most commonly approved for H-1B visas.", "normal_query": "I'm interested in identifying the most frequently certified occupations for H-1B visas. Could you provide a list of the top 5 SOC Code Framework that appear most often in certified H-1B visa applications? The output should include the job title and the number of certified H-1B applications for each title, sorted in descending order by the number of applications.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_5", "selected_database": "labor_certification_applications", "query": "The goal is to calculate the average wage differential ratio for top-level positions that are compensated on an annual basis. When a pay range is provided, the midpoint between the lower and upper amounts should be used. If only a single value is available, that value will be used as the offered amount. This ensures consistency in how compensation is interpreted across all relevant records.", "normal_query": "I want to analyze the Wage Differential Rate specifically for top-tier, high-skill roles that are paid on a yearly basis. For these roles, when a salary range is given, the midpoint between the lower and upper values should be used; if only one value is available, that value should be used directly. The calculation should focus only on entries where all required wage information is clearly provided and valid. 
Finally, I want to compute the average percentage difference between the offered pay and the standard market rate for these positions.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_6", "selected_database": "labor_certification_applications", "query": "Within the custom computer programming services industry where NAICS code equals to 541511, how many annually paid positions qualify as significantly high wage positions? I'm looking for a count of positions where the wage differential ratio exceeds 20%.", "normal_query": "I am analyzing compensation trends within the Custom Computer Programming Services industry, specifically NAICS code 541511. I need to determine the number of Premium Wage Positions that are paid annually. Can you provide a count of the positions within this industry where the Wage Differential Rate exceeds 20%?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_7", "selected_database": "labor_certification_applications", "query": "I need to identify attorneys with high level performance who specialize in E-3 Australian visas. Can you provide a count of attorneys who meet the criteria of a high performer and for whom E-3 Australian visa cases constitute more than 50% of their total caseload?", "normal_query": "I am interested in identifying attorneys who are highly proficient in handling E-3 Australian visa applications. Can you provide a count of attorneys who qualify as High Performers, based on the Attorney Performance Rating, and for whom E-3 Australian visa cases constitute a significant portion of their practice? Specifically, I need the number of attorneys where more than 50% of their caseload consists of E-3 Australian visa applications.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_8", "selected_database": "labor_certification_applications", "query": "I want to see how competitive the salaries are for software quality assurance analysts and testers jobs that pay yearly. Group these jobs into the wage competitiveness levels, and for each level, show the total number of positions, where positions means the total worker count from the dataset's head count information, not the number of rows. Sort so the most common category is at the top.", "normal_query": "I am preparing to analyze the distribution of Wage Competitiveness Tiers for Software Quality Assurance Analysts and Testers positions. Specifically, I would like to see a breakdown of annually paid positions in this occupation, categorized by their wage competitiveness level. For clarity, positions here are defined strictly as the total number of worker positions calculated by summing the head count information from the dataset, rather than counting individual records. 
Please provide the total number of worker positions in each tier, and sort the results by the number of positions in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_10", "selected_database": "labor_certification_applications", "query": "I'm looking to compare how long visa applications take to process based on how complex they are. Could you sort the applications into application complexity levels and for each group, tell me how many applications there are along with the average processing time in days? Please round the averages to two decimals and list the groups starting with the ones that take the longest.", "normal_query": "I require a comparative analysis of visa application processing times segmented by application complexity. Please classify each application using the Application Complexity Tiers. Then, for each complexity category, compute the number of applications and the average processing time in days. Round the average processing time to two decimal places and present the results in descending order of average processing time.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_11", "selected_database": "labor_certification_applications", "query": "I'm trying to figure out when people are submitting their visa applications to see if they're doing it at the best time. Could you help me break down the applications based on different visa filing window? And I also need to know how many applications are in each category and what percentage of the total they make up and round it to two decimal places. Please sort the results from most to least applications.", "normal_query": "I am eager to create a report detailing the Visa Filing Window Distribution for all visa applications. The report should categorize applications based on Visa Filing Window. The output should include the category name, the number of applications falling into each category, and the percentage of total applications represented by each category, rounded to two decimal places. Please present the results in descending order by the number of applications.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_12", "selected_database": "labor_certification_applications", "query": "Find the jobs where the offered pay is over 10% higher than the going market rate and there are at least 30 applications. Only include cases where both pay figures are available and in the same pay unit, like both per hour or both per year. For each job, list its title, number of applications, average percentage pay difference (rounded to two decimals), and mark it as “Skill Shortage Occupation”. Show only the top five with the biggest average pay differences, starting from the highest.", "normal_query": "For each occupation, identify those that meet the definition of Skill Shortage Occupations — having a Wage Differential Rate (WDR) greater than 10% and at least 30 applications. 
Include only situations where both the offered wage and the prevailing wage are available and measured in the same pay unit. Show the occupation title, total applications, average WDR (rounded to two decimals), and mark them with the category “Skill Shortage Occupation”. List only the top five with the highest average WDR in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_13", "selected_database": "labor_certification_applications", "query": "List the states that have at least 1.5× the national average of visa applications, and tell me how many such hotspot states there are in total.", "normal_query": "For each U.S. state, identify Geographic Application Hotspots based on visa application counts exceeding 1.5 times the national average. The output should include the list of hotspot states and the total count of such hotspot states.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_14", "selected_database": "labor_certification_applications", "query": "Can you figure out which industries depend on visa? I'd like to see the NAICS code, the total number of applications, and the percentage of all applications rounded off to two decimal places. List the results starting with the industries that have the highest percentages.", "normal_query": "Please determine which industries belong to Visa-Dependent Industry. The output should include the NAICS code, the total number of applications, and the percentage of total applications (rounded to two decimal places). Sort the results from the highest to the lowest percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_15", "selected_database": "labor_certification_applications", "query": "Can you group attorneys based on the types of visa cases they usually work on? Only include attorneys who’ve worked on at least 5 cases. For each group, show the category name, how many attorneys are in it, their average specialization score rounded off to two decimals, and the average percentage of cases they handle in their main visa type rounded to two decimals. Sort the list so the categories with the most attorneys come first.", "normal_query": "For each attorney, categorize them into Attorney Specialization Categories based on their visa case specialization. Only include attorneys who have handled at least 5 cases. For each category, display the category name, the number of attorneys in each category, the average Attorney Specialization Index (ASI) (rounded to two decimal places, as handled in the SQL), and the average percentage of cases they handle in their dominant visa type (rounded to two decimal places). 
Sort the results by the number of attorneys in each category in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_16", "selected_database": "labor_certification_applications", "query": "I'm trying to figure out which states have the best lawyers when it comes to handling visa cases. Could you help me check the rate of attorney success for different states? I'd like to see: which state they're practicing in, how many cases they've handled in total, how many cases were successful and their success rate as a percentage with 2 decimal points. Let's focus on states where attorneys have handled at least 3 cases, and just show me the top 3 states with the highest success rates.", "normal_query": "Could you analyze the performance of attorneys across different court jurisdictions by calculating their Attorney Success Rate? Please show the jurisdiction state, total number of cases handled, number of certified cases, and success rate as a percentage with 2 decimal places. Only include jurisdictions with at least 3 cases and show the top 3 most successful jurisdictions.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_17", "selected_database": "labor_certification_applications", "query": "I'm curious about the wage levels for H-1B jobs - could you help me break down the numbers? I'd like to know how many applications we have for each wage level, and what percentage of the total they make up with 2 decimal points. And rank them from most common to least common.", "normal_query": "I need an overview of Prevailing Wage Levels distribution in H-1B visa applications with valid prevailing wage level. Please show each wage level along with its application count and the percentage share of total applications, with percentages shown to 2 decimal places. Sort the results to highlight which wage levels are most commonly requested.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_18", "selected_database": "labor_certification_applications", "query": "Could you help me find out which industries tend to pay more than others? I'd love to see the top 5 highest-paying industries and just show me their NAICS codes, how many job applications each industry has, and average industry wage difference. Make sure we're only looking at entries with valid NAICS codes and consistent wage units, and round the wage differences to 2 decimal places to keep it clean.", "normal_query": "Can you analyze how wages differ across industries by calculating the Industry Wage Differential for each valid NAICS code where the wage units are consistent? Please show: the industry NAICS code, number of applications in that industry and average wage differential (rounded to 2 decimal places). 
Show only the top 5 industries with the highest wage differentials.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_19", "selected_database": "labor_certification_applications", "query": "I'm wondering if having a lawyer really boosts your chances of getting a visa approved. Can you compare visa approval rates for applications filed with an attorney versus those filed without one? I want to see the total number of applications in each category, the number that were approved, and the approval rate percentage, rounded to two decimal places. Include all the attorney cases.", "normal_query": "I am working on analyzing the effectiveness of legal representation in visa application outcomes including all the attorney cases. Can you provide a report comparing the approval rate for applications that used an attorney versus those that were self-represented? I'd like to see the total number of applications in each category, the number of certified applications, and the calculated approval rate rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_20", "selected_database": "labor_certification_applications", "query": "Which jobs are most in demand for visa applications? Could you show me the top 5 jobs that are most popular? I'd like to see the job title, how many applications there are for each, and some kind of occupational demand index for each job, rounded to two decimal places. Also, let's just stick to jobs that have valid SOC codes.", "normal_query": "I'm interested in understanding the relative demand for different occupations within the visa application process. Could you generate a report showing the top 5 occupations with the highest Occupational Demand Index? The report should include the occupation title, the number of applications for that occupation, and the calculated ODI rounded to two decimal places. Please only include occupations with valid SOC codes.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "labor_certification_applications_M_1", "selected_database": "labor_certification_applications", "query": "Can you make a table called 'employer_analytics' that shows how big each employer is in our visa database? I'm looking to track which companies submit lots of visa applications versus just a few. For each employer, I need their name, their employer scale indicator number, and their employer size category. If this table's already existed, just update it with the newest information.", "normal_query": "Could you create a table called 'employer_analytics' that calculates and stores the Employer Scale Indicator for each employer in our visa application database? I need the table to include the employer name, their ESI value, and categorize them according to the Employer Size Classification framework. 
If the table already exists, please update the records with the latest values.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_2", "selected_database": "labor_certification_applications", "query": "Hey, can you help me sort our visa attorneys into different categories based on their specialization patterns? I need to add a new column to our attorney table that shows if each lawyer is a 'Specialist,' 'Hybrid Practitioner,' or 'Generalist' depending on the different attorney specialization classification standard.", "normal_query": "Will you identify and categorize attorneys in our visa database according to their Attorney Specialization Category? This requires adding a new column to the attorney table to store this classification.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_3", "selected_database": "labor_certification_applications", "query": "I'm trying to figure out how long our visa applications take to process. Can you make a simple procedure called 'calculate_apt' that works out the time taken for application processing for each case? After you've created the procedure, could you run it to update all our records?", "normal_query": "I intend to implement a procedure to calculate the Application Processing Time for our visa applications database. Could you create a stored procedure called calculate_apt? After creating the procedure, please execute it to update all records.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_4", "selected_database": "labor_certification_applications", "query": "I'm trying to build a function for our visa database that figures out the wage differential ratio. Can you make it round to two decimal places and return null if there's no prevailing wage? I need it to take four inputs: the offered wage amount, prevailing wage amount, and both of their payment units.", "normal_query": "I am considering creating a PostgreSQL function that calculates the Wage Differential Rate in our visa application database. Please round the final percentage to two decimal places and return null if the prevailing wage is zero. The function should accept four parameters: offered wage amount, prevailing wage amount, offered wage unit, and prevailing wage unit.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_5", "selected_database": "labor_certification_applications", "query": "We want to add a new column that shows how much employers rely on visa workers, but only for those who have submitted any applications. Just sort them into Low, Moderate, or High based on their visa usage. 
Use 20 times their case count to estimate workforce and handle any divide-by-zero issues.", "normal_query": "We need to enhance our employer table by adding an Employer Dependency Level classification column. Please create an enumerated type with three dependency levels (Low, Moderate, High) and update the classification only for employers that have submitted at least one application. For workforce estimation, use a factor of 20 times the distinct case count per employer. Make sure to handle division by zero properly.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_6", "selected_database": "labor_certification_applications", "query": "Can you set up something that automatically checks how strong a wage is whenever someone adds or changes one? I want the system to figure out the difference between what’s being offered and what’s typical, then sort it into the right group. Also, make sure it works even if the wages are in different formats, like hourly versus yearly.", "normal_query": "Please build an automated system that assigns each wage entry to a category based on the Wage Competitiveness Tiers framework. This system should activate whenever a new entry is added or an existing one is modified. It needs to calculate the Wage Differential Rate (WDR) and use that value to determine the appropriate category. The process must ensure any required conversions between wage types — such as hourly or annual — are handled correctly during the calculation.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_7", "selected_database": "labor_certification_applications", "query": "I am curious about companies that regularly file visa applications throughout the year? I'm looking for these continuous filing employers. I'd like to see each employer's name, how many total applications they filed, in how many different months they submitted applications, and whether they qualify as continuous filers or not. Could you make this into a procedure where I can specify which year I want to look at? If I don't specify a year, just use the current year by default.", "normal_query": "Could you identify all continuous filing employers for the current year? I'd like to see the employer name, their total number of applications, how many months they filed in, and whether they qualify as continuous filers. Please make this as a procedure that can accept a specific year parameter, defaulting to the current year if no year is provided.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_8", "selected_database": "labor_certification_applications", "query": "I want to make our visa database smarter by adding a complexity score for each application. Could you add a new column called 'application_complexity_score' to our cases table that starts at zero by default? Then I need you to fill it in by calculating application complexity value. 
Just make sure each factor adds to the score when it applies.", "normal_query": "I need to enhance our visa application database by adding and calculating the Application Complexity Score for each case in our records. Please add a new integer column called 'application_complexity_score' to the cases table with a default value of 0, then populate it based on ACS standard. All these factors should contribute to the total score when positive.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_9", "selected_database": "labor_certification_applications", "query": "I'm trying to understand when companies are submitting their visa applications compared to when people actually start working. Could you make a function that looks at the receipt date and start date to figure out the Visa Filing Window category? The function should take in those two dates and spit out which category the application falls into. Just make sure it handles the date formats correctly since they might be in text format.", "normal_query": "I wish to categorize all visa applications based on their Visa Filing Window timing. Create a function that determines how far in advance applications were submitted before the employment start date. The function should take the receipt date and begin date as inputs and classify applications into appropriate categories. Please ensure the function handles date conversions properly and returns the categorical result as text.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "labor_certification_applications_M_10", "selected_database": "labor_certification_applications", "query": "I’d like to know how good different companies are at keeping people in their jobs. Use whatever job history information we have to figure this out, but skip companies where there’s no relevant data.", "normal_query": "We want to enhance our employer records by calculating the Retention Rate, showing the percentage of how often employers keep their workers. This should be based on available employment signals and should only apply to employers that have relevant job history data.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_1", "selected_database": "insider_trading", "query": "Give me all trades for high-risk traders, with how much they traded and their leverage. Make sure to show the biggest trades first.", "normal_query": "Show all trades for high-risk compliance cases, including trader ID, trade amount, and leverage, ordered by the trade amount from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "insider_trading_2", "selected_database": "insider_trading", "query": "Find me transactions that look suspiciously like insider trading. For these, calculate the Sentiment-Driven Leakage Risk. 
If that risk score is over 1000, show me the transaction ID, trader ID, time, the original leakage score, and the new SDLR score. Cap it at 100 results.", "normal_query": "Please identify transaction records that trigger a Potential Insider Trading Flag. For these flagged transactions, calculate their Sentiment-Driven Leakage Risk score. For transactions where this SDLR score is over 1000, please show the transaction register ID, the trader reference ID, the transaction timestamp, the original information leakage score, and the calculated SDLR score, limited to the top 100 results.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "insider_trading_3", "selected_database": "insider_trading", "query": "Let's compare our different kinds of traders. For each type, what's their average aggression and compliance score? Show me the trader type (in lowercase), their avg aggression, and avg compliance. List the most aggressive types first.", "normal_query": "I need an analysis comparing different types of traders. For each trader type, please calculate the average Aggressive Trading Intensity and the average Compliance Health Score. Display the trader type (all in lower case), the calculated average ATI, and the average CHS. Finally, sort the results by the average ATI in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "insider_trading_4", "selected_database": "insider_trading", "query": "Some traders seem to just copy others in their network. Find compliance cases for this behavior, then figure out an 'investigation intensity' score for them. Give me the top 100, sorted by that score, showing the case ID and the score.", "normal_query": "Please identify all compliance cases associated with traders flagged for Networked Mimicry Risk. For each of these specific cases, calculate the Investigation Intensity Index (III). List the compliance case registration ID and its corresponding Investigation Intensity Index (III). Finally, sort the results by the Investigation Intensity Index in descending order and show only the top 100 cases.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "insider_trading_5", "selected_database": "insider_trading", "query": "Let's find our riskiest manipulators. I mean traders who are either high-frequency with high risk-leverage, or have been confirmed for layering. For that specific group, what's their average 'uniqueness' score? I just need the single number.", "normal_query": "First, identify all traders who qualify as High-Risk Manipulator Candidates. Then, for this specific group of traders, calculate the average Unique Pattern Deviation Ratio based on their transaction history. 
Please provide only this single average value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "insider_trading_6", "selected_database": "insider_trading", "query": "For our most intense insider trading investigations, what are the usual penalties? Tally them up and show me the list, from most common to least.", "normal_query": "I want to analyze the enforcement outcomes specifically for cases flagged as High-Intensity Insider Investigations. Could you provide a frequency count for each type of Penalty Imposed that resulted from these investigations? Please list the penalty types and their corresponding frequencies, ordered from the most frequent penalty to the least frequent.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "insider_trading_7", "selected_database": "insider_trading", "query": "Are the 'copycat' traders any good? Let's see. Compare their average risk-adjusted win rate to the traders who act independently.", "normal_query": "I want to compare the performance of traders potentially involved in Peer Mimicry Suspicion versus other traders. Please calculate the average Risk-Adjusted Win Rate for these two groups. Display a boolean indicating if the group represents Peer Mimicry Suspicion (True) or not (False), and the corresponding average RAWR for that group.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "insider_trading_8", "selected_database": "insider_trading", "query": "For traders who speculate on volatile events, what's their average Order Modification Intensity? Just give me the one number.", "normal_query": "I need to analyze the order modification behavior of a specific trader group. Please identify all traders classified as Volatile Event Speculators. Then, calculate the average Order Modification Intensity across all transactions associated with this group. Provide just the calculated average OMI.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "insider_trading_9", "selected_database": "insider_trading", "query": "I want to see the cases for high-frequency trades that resulted in a big fine over $100,000. If a trading restriction was applied in those cases, show me the case ID and the exact restriction type.", "normal_query": "List all enforcement actions for cases involving high-frequency trades where the penalty amount exceeded $100,000. 
Only include cases where a trading restriction was applied, and show the enforcement ID and the specific trading restriction period type.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_10", "selected_database": "insider_trading", "query": "What's the difference in average aggression score between trades with 'confirmed' vs 'suspected' layering? Show me the comparison.", "normal_query": "I need to compare the average Aggressive Suspicion Score between transactions where the layering index is 'Confirmed' and those where it is 'Suspected'. Please calculate the average ASS for each of these two groups. Display the layering status (in lower case) and the corresponding average ASS.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "insider_trading_11", "selected_database": "insider_trading", "query": "Give me the stats on how much trader 'TR94368' messes with their orders. I need a single row showing the trade count, and the min, avg, median, and max OMI.", "normal_query": "For the trader with ID 'TR94368', calculate the distribution statistics for their Order Modification Intensity (OMI) based on all their valid transactions. Please return a single row containing the trader's ID, the total count of transactions considered, and the minimum, average, median, and maximum OMI.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_12", "selected_database": "insider_trading", "query": "Time to bulk update risk scores. For all 2024+ cases, recalculate the behavioral risk score with the new suspicious activity index. Cap the score at 100, and use a temp table called `score_updates` to do the update.", "normal_query": "Please create a temporary table named `score_updates` containing new `behav_score` values for all cases with transactions from 2024 onwards. Calculate the new scores using the suspicious activity index, ensuring all calculation components use double precision and the final result is capped at 100. Then, use this temporary table to update the `reg_compliance` table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_13", "selected_database": "insider_trading", "query": "Find trades from after-hours where the news leaked early and the price got choppy. Specifically, find trades near a 'post-market' announcement with an info leak rate over 0.8 and price acceleration over 3.0. Show me the trade ID, leak rate, and price acceleration.", "normal_query": "Find all trade records (`REC_KEY`) that occurred in proximity to a corporate event with an `announce_time` of 'Post-market hrs before' and an `info_leak_rate` greater than 0.8 score/hour. Additionally, these trades must have a corresponding `price_accel` value greater than 3.0 %/(hour²). 
List the record key, the info leak rate, and the price acceleration.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_14", "selected_database": "insider_trading", "query": "Find trades where liquidity impact was over 80k and there was a 'Strong' ignite signal. Show me the trade ID, liquidity impact, the signal, and the trader's type in lowercase.", "normal_query": "I need to identify trade records where the market experienced a high liquidity impact, defined as `liq_imp` greater than 80,000 USD/min, and where a 'Strong' `ignite_sig` was detected in the manipulation signals. For each of these records, please list the record key (`REC_TAG`), the liquidity impact, the ignite signal, and the associated trader's `typeFlag` (in lower case).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_15", "selected_database": "insider_trading", "query": "Are our most-connected institutional traders also the least compliant? For every institution, show their ID, their compliance health score, and their network strength. Sort by the worst compliance score first.", "normal_query": "For all traders with a `typeFlag` as 'Institution', calculate two metrics: their Compliance Health Score (CHS) and their Insider Network Strength. Please display the trader's key (`TR_KEY`), the calculated CHS, and the `insider_net_str`. Sort the results by CHS in ascending order (worst first).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "insider_trading_16", "selected_database": "insider_trading", "query": "Find our cowboys—traders who are aggressive with over 5x leverage, or who trade over half their balance daily. List their key and type.", "normal_query": "Identify all high-risk traders. A trader is considered high-risk if their leverage exposure is over 5.0 and their risk level is 'Aggressive', or if their daily turnover rate is over 0.5. Display their trader key and type flag.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "insider_trading_17", "selected_database": "insider_trading", "query": "Find trades that might be insider trading. The flag goes up if leak score is > 50, it's near a corporate event, AND the announcement was pre-market or intraday. Show the record key and trader ID.", "normal_query": "Find trades flagged for potential insider trading. A flag is raised if the information leakage score is over 50, there is an upcoming corporate event, and the announcement time is 'Pre-market hrs before' or 'Intraday hrs before'. 
Return the record key and trader anchor.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_18", "selected_database": "insider_trading", "query": "Find trades that look like layering or spoofing. It's a match if layering is 'Confirmed', OR if spoofing probability is > 75% AND order modification intensity is > 1.0. Just show the record tags.", "normal_query": "Identify trades indicating a layering or spoofing manipulation pattern. A trade is suspect if its layering index is 'Confirmed', or if its spoofing probability is over 75% and its Order Modification Intensity is above 1.0. Display the record tag for each matching trade.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_19", "selected_database": "insider_trading", "query": "Find any cozy trader networks. The ones I want have a circle size > 5, a group score > 60, and use a 'Regular' communication path. Just show the root trader's key.", "normal_query": "Identify potential collusion networks. A network is flagged if its relationship circle size is greater than 5, its group score is over 60, and its communication path is 'Regular'. Show the root trader key for each flagged network.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_20", "selected_database": "insider_trading", "query": "Show me the hot cases. I want records with 'High' or 'Critical' alerts, 'High' investigation priority, and 'Intensive' monitoring. Just the record keys.", "normal_query": "Find compliance records under 'Elevated Regulatory Scrutiny'. This status applies when the alert level is 'High' or 'Critical', investigation priority is 'High', and monitoring is 'Intensive'. Return the compliance record keys.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_1", "selected_database": "insider_trading", "query": "Make a reusable list called `high_risk_trader_view` for our high-risk traders. For each, I need their ID, type, balance, daily volume, DTR, TLE, and risk level text.", "normal_query": "Please create a reusable view named high_risk_trader_view that identifies traders fitting the High-Risk Trader Profile. For each trader identified, the view should show their registration ID (tradereg), trader kind (tradekind), account balance (acctbal), daily volume (voldaily), their calculated Daily Turnover Rate (DTR), their extracted Trader Leverage Exposure (TLE), and the text description of their risk level (risk_level_text) from their performance data.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_2", "selected_database": "insider_trading", "query": "Time to bulk update risk scores. 
For all 2024+ cases, recalculate the behavioral risk score with the new suspicious activity index. Cap the score at 100, and use a temp table called `score_updates` to do the update.", "normal_query": "Please create a temporary table named `score_updates` containing new `behav_score` values for all cases with transactions from 2024 onwards. Calculate the new scores using the suspicious activity index, ensuring all calculation components use double precision and the final result is capped at 100. Then, use this temporary table to update the `reg_compliance` table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_3", "selected_database": "insider_trading", "query": "Make a trigger to put a safety check on the enforcement actions table. Before an update can change a case's status to 'Resolved', it must check the investigation intensity. If that score is over 150, block the update and throw an error.", "normal_query": "Please create a database trigger function named prevent_premature_resolution. This function should be attached to the enforcement_actions table and fire before any update operation. Its purpose is to implement a Premature Resolution Block, where if the `enf_actions` ->> 'res_state' field is changed to 'Resolved' and the associated intensity score exceeds 150, the update is blocked.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_4", "selected_database": "insider_trading", "query": "If a compliance record's file state is 'Missing' or 'Delayed', bump its investigation priority to 'High'. Show me the IDs and new priority for everything you changed.", "normal_query": "Update the `invest_prior` column in the `reg_compliance` table. For every record where the `file_state` is either 'Missing' or 'Delayed', set the `invest_prior` to 'High'. After the update, return the record's primary key (`REC_COMP`) and the new `invest_prior` value for each modified row.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_5", "selected_database": "insider_trading", "query": "Let's bump up monitoring for our fast traders. First, find the record IDs for all high-frequency trades that are still on 'standard' monitoring. Then, take that list of IDs and update their monitoring level to 'enhanced'. Let me know which records you changed.", "normal_query": "Please perform an update on the `reg_compliance` table. First, identify all record keys (`REC_COMP`) where the associated trade has a `freq_tag` of 'High' and the record's current `mon_inten` is 'Standard'. 
Then, for all records matching these keys, set their `mon_inten` column to 'Enhanced' and return the `REC_COMP` of all updated rows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_6", "selected_database": "insider_trading", "query": "Find our repeat offenders. I'm looking for traders with more than 3 past violations, a compliance rate of 'C' or 'D', or a recidivism score over 1.0. Show me their trader ID and their calculated recidivism score, which is their previous violations per year.", "normal_query": "Identify traders with a 'Problematic Compliance History'. This status applies to traders with over 3 previous violations, a compliance rate of 'C' or 'D', or a calculated Compliance Recidivism Score (CRS) over 1.0. The CRS is calculated as the number of previous violations per year of the trader's history. Display the trader's key and their calculated CRS.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Problematic_Traders;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_7", "selected_database": "insider_trading", "query": "Show me the top 10 records where news and social media feelings are most split. I need the ID and the score gap.", "normal_query": "For all sentiment analytics records that have both a news score and a social sentiment score, find the top 10 records with the greatest absolute difference between these two scores. Display the record ID and the calculated difference, rounded to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Top10_Sentiment_Disagreement;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "insider_trading_M_8", "selected_database": "insider_trading", "query": "Let's see which fines really hurt. For all cases with a monetary penalty, calculate the ratio of the fine to the trader's account balance. I only want to see cases where this ratio is over 5%. Show me the enforcement ID, trader key, and the calculated ratio.", "normal_query": "Calculate the 'Enforcement Financial Impact Ratio (EFIR)'. The EFIR is the penalty amount divided by the trader's account balance. Return the enforcement record ID, trader key, and EFIR for cases where the ratio exceeds 0.05.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_High_Impact_Fines;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_9", "selected_database": "insider_trading", "query": "I'm looking for gamblers who bet big on corporate news. Find traders where over 30% of their trades are linked to corporate events and who also have an 'Aggressive' risk level. Give me their IDs.", "normal_query": "Identify 'Aggressive Event Speculators'. These are traders with an 'Aggressive' risk level where over 30% of their trades are linked to corporate events. 
List the keys for all qualifying traders.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "insider_trading_M_10", "selected_database": "insider_trading", "query": "I need an overall risk rating for our trades. For each trade, calculate a composite score by averaging its suspicious activity index and its pattern anomaly score. Show me the 10 trades with the highest scores, along with their record key and the score itself.", "normal_query": "Calculate a 'Composite Suspicion Score' for all trade records. This score is the average of the 'Suspicious Activity Index (SAI)' and the 'Pattern Anomaly Score (PAS)'. Return the record key and composite score for the top 10 trades with the highest scores.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_1", "selected_database": "virtual_idol", "query": "We're looking for our most spontaneous big spenders. Could you pull a list of our top 10 fans who have had at least one session with an extremely high spending rate? Let's rank them by that single best spending-per-minute session. I need to see their nickname, ID, that peak spending number, and their final rank on the list, with the highest listed first.", "normal_query": "Retrieve a ranked list of the top 10 fans classified as a 'Whale Fan' (Peak Monetization Index > 20). The output must include the fan's nickname, their fan ID, the calculated Peak Monetization Index rounded to two decimal places, and their final rank. The ranking should be in descending order of the index.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_2", "selected_database": "virtual_idol", "query": "I'm curious about the quiet observers on our platform, the ones who consume a lot of content but rarely participate in chats. Could you analyze this group to see what type of content they prefer? I need a list of content categories and how many of these specific fans prefer each, with the most popular categories at the top.", "normal_query": "Generate a report summarizing the content preferences of fans classified as 'Engaged Lurkers' (avg_cci > 0.5 and avg_cs < 0.5). The output should list each content preference and the corresponding count of unique Engaged Lurkers, sorted in descending order of the count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "virtual_idol_3", "selected_database": "virtual_idol", "query": "I want to identify our most influential users on the platform. Can you generate a list of the top 20 people who have a large follower-to-following ratio and a high overall influence score? For each person on this list, I need to see their nickname, their calculated influence score, and their follower ratio. 
Also, please add their rank and sort them with number one at the top.", "normal_query": "Retrieve the top 20 fans who meet the definition of a 'Community Influencer' (FFR > 2.0 and CII > 10000). For each, provide their nickname, their Community Influence Index rounded to two decimals, their Follower-to-Following Ratio rounded to two decimals, and their rank based on the index. The final list must be sorted by rank in ascending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_4", "selected_database": "virtual_idol", "query": "I'm trying to figure out if the fans who contact support a lot are also the ones we think might leave soon. Could you make a table that shows this breakdown for our entire user base? I want to see a count of fans for each combination: those who are 'at-risk' and contact support a lot, those who are 'at-risk' but don't, and the same for the 'not at-risk' folks.", "normal_query": "Generate a correlation analysis between a fan's support status and their churn risk. For all fans, categorize each into 'High-Maintenance' (SLS > 10) or 'Low-Maintenance' and 'At-Risk' (Churn Risk Flag = 'High') or 'Not At-Risk'. The output should be a table showing the churn status, support status, and the distinct count of fans in each cross-category.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "virtual_idol_5", "selected_database": "virtual_idol", "query": "I need a list of our absolute best fans, the kind of people who are both big spenders and show up to all our events. For each person in that elite group, can you show me their nickname, how much they've spent in total, their score for event attendance, and what loyalty tier they're in right now? Please sort the list so the biggest spenders are at the top.", "normal_query": "Generate a profile of all fans who meet the criteria for the 'Idol Superfan' segment. For each qualifying fan, the output should list their nickname, their total spending in USD rounded to two decimals, their Event Participation Score, and their current Loyalty Reward Tier. The results should be sorted by total spending in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_6", "selected_database": "virtual_idol", "query": "I have a theory that our happiest users are more likely to buy our merchandise. Could we check that? I'd like to see a comparison of the average merchandise spending habits between our biggest advocates—the ones who rate us really highly—and our strongest critics, the ones who give us low scores. Essentially, let's see what percentage of their total spending goes towards merch for each of those two groups.", "normal_query": "Generate a comparative analysis of the average Merchandise Affinity Score (MAS) between fans classified as 'Platform Promoters' (NPS 9-10) and those as 'Detractors' (NPS 0-6). 
The output must display the fan segment name and their corresponding average MAS, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_7", "selected_database": "virtual_idol", "query": "I want to know which idol genres our most dedicated collectors are into. Could you first identify everyone who owns a large number of items and has a high completion rate for their collections? Then, for that specific group, tell me how many of them interact with idols from each genre. The final list should just show the genre and the number of these collectors, with the most popular genre at the top.", "normal_query": "Identify all fans classified as 'Collector Fans' and determine the popularity of virtual idol genres among this segment. The output should list each idol genre and the distinct count of Collector Fans who have interacted with that genre, sorted in descending order of the fan count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "virtual_idol_9", "selected_database": "virtual_idol", "query": "I need to understand how quickly our best fans start spending money. Can you run an analysis on our 'Idol Superfans'—the ones who are both top spenders and event enthusiasts? I want to see their average cumulative spending at key points after they sign up: specifically at the 7-day, 30-day, and 90-day marks. The output should just show these three milestones and the average total spend for each.", "normal_query": "Perform a cohort analysis on 'Idol Superfans' to determine their spending velocity. For this specific segment, calculate the average cumulative spending at three milestones post-registration: 7 days, 30 days, and 90 days. The output should display each milestone and its corresponding average cumulative spend rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_10", "selected_database": "virtual_idol", "query": "I want to see if our key influencers have a 'ripple effect' on chat conversations. Can you analyze chats where at least one of these influencers is present and measure the overall tone of the messages from everyone else? Specifically, for any chat with an influencer, look at all messages from non-influencers with the same idol in the same session, and give me the total counts of 'Positive', 'Negative', and 'Neutral' messages.", "normal_query": "Measure the Ripple Effect of 'Community Influencers' on chat sentiment. For chat sessions where at least one Community Influencer is present, calculate the total count of 'Positive', 'Negative', and 'Neutral' messages based on Interaction Tone, sent by non-influencer participants. 
The output should be a single row with the total counts for each sentiment.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_11", "selected_database": "virtual_idol", "query": "I need a deep-dive profile of the fans we think we might lose. For every fan flagged with a high churn risk, can you show me where they stand compared to everyone else? I want to see their percentile rank for a few key behaviors: how sticky their platform usage is, their average spending per minute, and their average content consumption rate. Please list the fan's nickname along with these three percentile ranks, and sort them to show the ones with the worst platform stickiness at the top.", "normal_query": "Generate a behavioral profile for all 'At-Risk Fans'. For each fan with a 'High' Churn Risk Flag, calculate their percentile rank across the entire fan base for three key metrics: Platform Stickiness Score (PSS), average Fan Monetization Index (FMI), and average Content Consumption Index (CCI). The output should list the fan's nickname and their three percentile ranks (rounded to three decimal places), sorted by the lowest stickiness percentile.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_12", "selected_database": "virtual_idol", "query": "I'm concerned our most socially-connected users might be disengaging. For all the fans who have a large social network and belong to multiple groups, can you find their longest period of inactivity? I'm only interested in seeing people who have been gone for more than two weeks. Please show me a list of their nicknames and the number of days in their longest absence, sorted from the longest time away to the shortest.", "normal_query": "For each fan classified as a 'Social Butterfly' (SCS > 1000 and Group Memberships > 5), calculate their longest inactivity streak in days. The output should list the fan's nickname and their streak duration, but only for fans whose streak is greater than 14 days. Sort the results in descending order of the streak duration.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_13", "selected_database": "virtual_idol", "query": "I want to do a quadrant analysis to better understand our fan base. Could you categorize every fan based on their financial value to us and their support needs? Specifically, split everyone into a top half and bottom half based on their monthly financial value, and do the same for their support ticket volume. This should give us four groups, like 'High-Value, Low-Support' or 'Low-Value, High-Support'. Please show me the names for these four segments and the number of fans in each, with the largest group listed first.", "normal_query": "Generate a quadrant analysis report by creating four fan segments. These segments are based on a 2x2 grid, dividing all fans into a top and bottom 50% for both Fan Financial Value (FFV) and Support Load Score (SLS). 
The output must list the name of each fan segment and the total number of fans within it, sorted in descending order by the fan count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_14", "selected_database": "virtual_idol", "query": "I need a leaderboard of the biggest spenders for each of our idols. Can you go through every idol and, for each one, identify their top three financial supporters? The ranking should be based only on the total value of gifts a fan has given to that specific idol. The output should show the idol's name, the fan's nickname, how much they've given to that idol, and their rank for that idol.", "normal_query": "For each virtual idol, generate a ranked list of their top three contributing 'Whale Fans'. The ranking must be based on the total gift value each fan has given to that specific idol. The output should include the idol's name, the fan's nickname, the total gift value to that idol, and the fan's rank (1, 2, or 3) for that idol. The final list should be sorted by idol name, then by rank.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_15", "selected_database": "virtual_idol", "query": "I want to understand the journey our users take to become paying members. Can you calculate the average time it takes for a fan to upgrade to a premium account after their first interaction with us? And alongside that, could you also figure out, on average, how many interactions they have with the platform before they decide to subscribe? The final result should just be those two numbers.", "normal_query": "Analyze the conversion funnel for fans who become 'Premium Members' (membership kind is not 'Free'). Calculate two metrics: the average Time to Conversion in days (from first interaction to subscription date) and the average number of interactions that occurred before this conversion. The final output should be a single row containing these two averages.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_16", "selected_database": "virtual_idol", "query": "I'm worried about the activity patterns of fans who might be about to leave. Can you look at everyone who is flagged as a high churn risk and calculate the average time between their interactions? From that group, I want a list of the top 15 who have the longest average gaps, showing their nickname and that average number of days they wait between activities.", "normal_query": "For the top 15 'At-Risk Fans' (Churn Risk Flag = 'High') with the longest average time gap between interactions, retrieve their nickname and the calculated average number of days between their successive interactions, rounded to two decimal places. 
The ranking must be based on this average gap in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_17", "selected_database": "virtual_idol", "query": "I need to find our most consistently negative users to understand their issues. Can you generate a list of fans whose chat messages are flagged as 'Negative' more than 70% of the time? Please only include fans who have had at least one interaction with a recorded tone. For each fan on the list, show their nickname, their total number of interactions, their count of negative interactions, and the exact percentage. Sort the list to show the most negative person at the top.", "normal_query": "Identify fans with a consistently negative Interaction Tone. For fans with at least one interaction, generate a list where the 'Negative' tone accounts for over 70% of their total interactions with a recorded tone. The output must include the fan's nickname, total interactions, count of negative interactions, and the calculated negativity percentage, sorted in descending order by this percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_18", "selected_database": "virtual_idol", "query": "I need a really detailed, multi-dimensional report on our gift spending. Can you please show me the total gift spending broken down by every possible combination of these three things: the idol's genre, the fan's preferred language for content, and whether or not the fan has opted in to marketing? I need all the subtotals included, for example, by genre alone, by language alone, by genre and language together, all the way up to a grand total for everything.", "normal_query": "Generate a multi-dimensional report of total gift spending. The report must calculate the sum of gift values for every possible combination of idol genre, fan content Language Preference Setting, and marketing preference (e.g., opted-in or opted-out). All possible subtotals, including a grand total, must be included in the output using the CUBE operator.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_19", "selected_database": "virtual_idol", "query": "We need to find our next big content creators before they blow up. Can you help me find users who have a knack for making content that could go viral, but haven't quite hit that top-tier influencer status yet? I'm looking for people with a high viral potential score but a community influence that's still in the medium range. For anyone who fits that description, could you show me their nickname, their exact viral score, and their community influence score? Let's list the ones with the highest viral potential at the very top.", "normal_query": "Identify all fans who are classified as 'Rising Star Influencers' (VPS > 50 and 1000 < CII < 10000). For each of these fans, provide their nickname, their Viral Potential Score (VPS), and their Community Influence Index (CII) rounded to two decimal places. 
The results should be sorted first by VPS in descending order, then by CII in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_20", "selected_database": "virtual_idol", "query": "I'm wondering if being a dedicated achiever translates to having a good reputation in the community. Can you give me a breakdown of our 'Loyal Achievers'—the ones who consistently earn achievements and loyalty points—by their community reputation level? I'd like to see a table showing each reputation level and the number of these dedicated fans within it, sorted to show which level has the most.", "normal_query": "Analyze the distribution of 'Loyal Achievers' (AD > 0.2 and LPR > 500) across different Reputation Levels. The output should display each Reputation Level and the total count of fans classified as Loyal Achievers within that level. Sort the results in descending order of the count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "virtual_idol_M_1", "selected_database": "virtual_idol", "query": "To help the marketing team, we need a special list of our top spenders that's easy to access. Can you run a process that creates a new table for this, let's call it WhaleFan_Summary? Once the table is ready, please fill it up by finding all of our Whale Fans. The way we find them is by looking at their single best spending-per-minute session, their Peak Monetization Index. For every fan who makes the cut, I need their ID, nickname, that peak spending value, and the date it happened.", "normal_query": "Execute a data provisioning task. First, create a new table named WhaleFan_Summary with a primary key on fan_id and a foreign key referencing fans.user_registry. The table must include columns for nickname, peak_fmi, peak_fmi_date, and a timestamp of calculation. Second, populate this table by calculating the Peak Monetization Index for every fan. Insert a row for each fan who qualifies as a Whale Fan, containing their fan ID, nickname, the calculated peak FMI value, and the corresponding date of the interaction.", "preprocess_sql": ["DROP TABLE IF EXISTS WhaleFan_Summary;"], "clean_up_sqls": ["DROP TABLE IF EXISTS WhaleFan_Summary;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_2", "selected_database": "virtual_idol", "query": "To speed up our dashboard reporting, I want you to create a materialized view named Fan_Segment_Analysis. This view should contain our Fan Segments analysis, categorizing users based on their Fan Financial Value (FFV) and Support Load Score (SLS). After creating the view, please also provide the command to refresh it with the latest data.", "normal_query": "Create a materialized view named Fan_Segment_Analysis to pre-calculate and store Fan Segments. The view should categorize all fans into a 2x2 grid based on their relative ranking for Fan Financial Value (FFV) and Support Load Score (SLS). 
After creation, execute a refresh of the view.", "preprocess_sql": ["DROP MATERIALIZED VIEW IF EXISTS Fan_Segment_Analysis;"], "clean_up_sqls": ["DROP MATERIALIZED VIEW IF EXISTS Fan_Segment_Analysis;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_3", "selected_database": "virtual_idol", "query": "We need to perform some database maintenance to manage our storage. I want you to create a data archival process for old interactions. First, please ensure a table named interactions_archive exists, with the same structure as the main interactions table. Then, you need to move all interaction records older than three years into this archive table, but only for users who meet two specific conditions: they must have an 'Inactive' Fan Status Tier and they must not be a Premium Member. After successfully copying the data to the archive, you must delete those same records from the original interactions table. This entire process, the copy and the delete, must be performed as a single, atomic transaction to ensure data integrity.", "normal_query": "Execute a data archival process within a single transaction. This process must first ensure an interactions_archive table exists. Then, it must copy all interaction records older than three years from fans with an 'Inactive' Fan Status Tier who are not Premium Members into the archive table. Finally, it must delete these same records from the primary interactions table.", "preprocess_sql": ["DROP TABLE IF EXISTS interactions_archive;"], "clean_up_sqls": ["DROP TABLE IF EXISTS interactions_archive;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_5", "selected_database": "virtual_idol", "query": "I need a way to track daily fan activity stats efficiently. Could you set up a summary table called fan_daily_activity? Then, for the fan 'FAN75581', can you run a process that gathers their total messages, gifts, and gift value for today and either adds it as a new line or just adds to their totals if they're already in there for today?", "normal_query": "Create a table fan_daily_activity if one does not exist. Then, perform an Upsert Operation for fan 'FAN75581'. The operation must aggregate their total messages, total gifts, and total gift value for the current date from the interactions table. If a record for this fan and date already exists in fan_daily_activity, the new values should be added to the existing ones; otherwise, a new record should be inserted.", "preprocess_sql": ["DROP TABLE IF EXISTS fan_daily_activity;"], "clean_up_sqls": ["DROP TABLE IF EXISTS fan_daily_activity;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_6", "selected_database": "virtual_idol", "query": "Let's automate our fan promotions. I want you to create a trigger named `on_spending_update_grant_vip`. This trigger should automatically change a fan's `status_tag` to 'VIP' in their profile as soon as their total spending in the `membershipandspending` table hits or goes over the $10,000 mark. 
This should work for both new purchases and updates to their spending record.", "normal_query": "Create a database trigger named on_spending_update_grant_vip and its associated function check_and_grant_vip_status. This trigger must automatically update a fan's status_tag to 'VIP', according to the Fan Status Tiers definition, whenever an INSERT or UPDATE on the membershipandspending table causes their total spend_usd to meet or exceed $10,000.", "preprocess_sql": ["DROP TRIGGER IF EXISTS on_spending_update_grant_vip ON membershipandspending;", "DROP FUNCTION IF EXISTS check_and_grant_vip_status();"], "clean_up_sqls": ["DROP TRIGGER IF EXISTS on_spending_update_grant_vip ON membershipandspending;", "DROP FUNCTION IF EXISTS check_and_grant_vip_status();"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_7", "selected_database": "virtual_idol", "query": "To standardize how we handle money values, please create a custom Monetary Domain named monetary_value. This domain should be a numeric type that cannot be negative. After creating the domain, create a new table called transaction_log that uses this new domain for its transaction_amount column.", "normal_query": "Create a custom Monetary Domain named monetary_value which is a NUMERIC(12, 2) type that must be non-negative. Subsequently, create a new table named transaction_log utilizing this domain for the transaction_amount column.", "preprocess_sql": ["DROP TABLE IF EXISTS transaction_log;", "DROP DOMAIN IF EXISTS monetary_value;"], "clean_up_sqls": ["DROP TABLE IF EXISTS transaction_log;", "DROP DOMAIN IF EXISTS monetary_value;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_8", "selected_database": "virtual_idol", "query": "I've noticed that queries filtering interactions by both an idol and a platform are running slowly. To improve performance, could you create a Composite Index named idx_interactions_idol_platform on the interactions table? It should cover the columns for the idol pivot and the activity platform.", "normal_query": "To improve query performance for analyses related to idol and platform activity, create a Composite Index named idx_interactions_idol_platform on the interactions table. This index should be created on the interact_idol_pivot and act_plat columns.", "preprocess_sql": ["DROP INDEX IF EXISTS idx_interactions_idol_platform;"], "clean_up_sqls": ["DROP INDEX IF EXISTS idx_interactions_idol_platform;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_9", "selected_database": "virtual_idol", "query": "I need a quick, ad-hoc script to check the health of our top user segment. Can you write a server-side script that calculates the total number of Idol Superfans, then shows me a message with that count and what percentage of our total fans they make up? I don't need a table back, just the notice.", "normal_query": "Execute an anonymous procedural block that calculates the total number of Idol Superfans. The block must then raise a server notice containing this count and the calculated percentage of Idol Superfans relative to the total fan population. 
The script should not return a result set.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "virtual_idol_M_10", "selected_database": "virtual_idol", "query": "To improve our data quality, please add a Data Integrity Constraint to the loyaltyandachievements table. This constraint, named trust_value_is_a_percentage, must ensure that the trust_val column can only contain numbers between 0 and 100, inclusive.", "normal_query": "Add a Data Integrity Constraint named trust_value_is_a_percentage to the loyaltyandachievements table. This constraint must ensure that the trust_val column only accepts values within the inclusive range of 0 to 100.", "preprocess_sql": ["ALTER TABLE loyaltyandachievements DROP CONSTRAINT IF EXISTS trust_value_is_a_percentage;"], "clean_up_sqls": ["ALTER TABLE loyaltyandachievements DROP CONSTRAINT IF EXISTS trust_value_is_a_percentage;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_1", "selected_database": "organ_transplant", "query": "Let's dig into the files of patients who are getting positive crossmatch results.\nI need a list of these folks.\nFor each one, show me their ID, their PRA score, and whether they have those donor-specific antibodies.\nThen, based on our official rules for Antibody-Mediated Rejection (AMR) Risk Stratification, tell me if they're considered 'High Risk'.\nOh, and I also want to see the date of the last time we tried to find a match for them.\nSort the whole thing so the most sensitized patients are at the top.", "normal_query": "I want a report on all recipients with a positive crossmatch test to evaluate their risk profile.\nFor each recipient, display their registry ID, their Panel Reactive Antibody (PRA) score, and their donor-specific antibody (DSA) status.\nThen, classify their risk according to the formal Antibody-Mediated Rejection (AMR) Risk Stratification rules, labeling them 'High Risk' or 'Standard Risk'.\nAdditionally, for context, please include the timestamp of their most recent prior matching attempt if one exists.\nOrder the results to show recipients with the highest PRA scores at the top.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_2", "selected_database": "organ_transplant", "query": "I need the pancreas waiting list, sorted exactly how the official Allocation Policy says we should for all the pending matches. Show me the patient's ID, what region they're in, their exact urgency status, the HLA mismatch number, and their final rank in their local area.", "normal_query": "Generate a report for transplant coordinators that follows the formal Allocation Policy for all pending pancreas matches. 
The report should display the recipient's registry ID, their allocation region, their specific medical urgency, their HLA mismatch count, and their final rank within their region.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_3", "selected_database": "organ_transplant", "query": "I want to find the lung transplants that were either insanely expensive for the benefit, or a massive bargain.\nCould you calculate the Cost-Effectiveness Ratio for all the lung transplants we've finished? For the QALY part of the formula, just use the patient's quality-of-life score and multiply it by 5 years.\nThen, rank all of them and split the list into 20 groups. I want to see all the details—match, donor, and recipient IDs, and the final cost-effectiveness number rounded to two decimals—for only the absolute worst group and the absolute best group.", "normal_query": "I need to identify cost-effectiveness outliers.\nPlease calculate the Cost-Effectiveness Ratio (CER) for all 'Completed' lung transplants, assuming a Quality-Adjusted Life Year (QALY) gain of 5 years multiplied by the recipient's quality of life score.\nThen, using the `NTILE` window function, divide the results into 20 buckets.\nDisplay the full details (match ID, donor ID, recipient ID, and the final CER rounded to 2 decimal places) for all transplants that fall into the top and bottom buckets.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_4", "selected_database": "organ_transplant", "query": "Let's see how often we find a perfect match within and between different ethnic groups.\nFirst, you need to identify every single Optimal Donor-Recipient Match we have.\nOnce you have that list of perfect pairs, I want a table that shows the donor's ethnicity down the side and the recipient's ethnicity across the top, with the cells showing the count of how many times each combination happened.", "normal_query": "I want a detailed ethnic compatibility report for all pairings that qualify as an Optimal Donor-Recipient Match.\nFor every such optimal match found, create a cross-tabulation showing the count of matches, with the donor's ethnicity as rows and the recipient's ethnicity as columns.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_6", "selected_database": "organ_transplant", "query": "Let's check our CMV exposure risk. I want a list of all current and completed transplants where a CMV-positive donor gave an organ to a CMV-negative patient.\nFor each of these risky cases, show me the match ID and the transplant center. I also want to see the patient's Infection Risk score from their chart and, right next to it, the average infection risk—rounded to four decimals—for all the non-mismatched transplants done at that same hospital. I want to see if the scores reflect the risk. 
Please order the results by the hospital's ID.", "normal_query": "I need to perform a viral mismatch risk analysis for all completed and in-progress transplants.\nPlease produce a report that identifies every donor-recipient pair with a Cytomegalovirus (CMV) mismatch, defined as a CMV-positive donor matched with a CMV-negative recipient.\nFor each identified mismatch, display the match registry ID, the center where the transplant occurred, the pre-calculated Infection Risk, and compare this to the average Infection Risk (rounded to 4 decimal places) for all other transplants at that same center that did not have a CMV mismatch. The final report should be ordered by center ID.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_7", "selected_database": "organ_transplant", "query": "I need a list of our absolute sickest patients—the ones on ECMO or a VAD.\nFor each of these patients, show me their ID and what kind of life support they're on.\nThen, calculate their full Patient Urgency Score. The crucial part is, I want to see their score next to the average score for all the other, more stable patients who are waiting for the same organ, with both scores rounded to four decimals. Let's see how big the gap is. Group the list by organ, and show the sickest patients first within each group.", "normal_query": "I need to assess the urgency of recipients currently on advanced life support.\nPlease identify all pending recipients who are on 'ECMO' or 'VAD' life support.\nFor each of these critical recipients, display their registry ID, the specific life support method, their calculated Patient Urgency Score, and compare this score to the average urgency score of other patients waiting for the same organ who are not on life support. Both scores should be rounded to 4 decimal places. Order the results by organ type, then by the critical patient's urgency score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_8", "selected_database": "organ_transplant", "query": "I'm wondering if how we ship organs really makes a difference. Can you run some numbers for me?\nLet's look at all our finished transplants.\nGroup them by how the organ was transported—you know, ground, helicopter, commercial air, all that.\nFor each of those transport types, I want to see the average Total Ischemia Time rounded to two decimals, and the average Expected Graft Survival Score rounded to four decimals.\nSort the results so I can see which transport methods are linked with the best outcomes.", "normal_query": "I need a report on the impact of ischemia time on expected graft survival, broken down by the transportation method used.\nFor every completed transplant, determine the Total Ischemia Time and retrieve the Expected Graft Survival (EGS) Score.\nGroup the results by the `trans_method` from the logistics table, and for each method, calculate the average Total Ischemia Time (rounded to 2 decimal places) and the average EGS Score (rounded to 4 decimal places). 
The report should be sorted by the average EGS Score from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_9", "selected_database": "organ_transplant", "query": "I want to know what the most common health problems our patients have and if those problems make surgery riskier.\nCan you go through all the patient files, break apart their list of health conditions, and find the top 5 most common ones?\nThen, for each of those top 5, figure out the average Surgical Risk Score for all patients who have that specific condition. I want to see the condition, how many people have it, and what the average risk score is, rounded to four decimals.", "normal_query": "I need to analyze the prevalence of comorbidities and their impact on surgical risk.\nFirst, analyze each individual condition listed for all recipients.\nThen, identify the top 5 most frequently occurring comorbidities across all patients.\nFinally, for each of these top 5 conditions, calculate the average Surgical Risk Score for the patients who have that comorbidity. Display the comorbidity, its total count, and the calculated average risk score rounded to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_10", "selected_database": "organ_transplant", "query": "I need to see who's been stuck on our waiting list the longest. Can you pull a special report for me?\nFor each organ, find the 2% of patients who have been waiting longer than everyone else.\nFor this group, I want to see everything that might be making them hard to match: their patient ID, the organ they need, how many days they've been waiting, their PRA score, and a tally of their other health problems. Please group the list by organ and put the longest-waiting patients at the top of each group.", "normal_query": "Please generate a profile of our longest-waiting patients. For each organ type, identify the top 2% of pending recipients with the longest wait times using the `PERCENT_RANK` window function.\nFor this elite cohort of long-waiters, display their registry ID, organ type, wait time in days, their Panel Reactive Antibody (PRA) score to assess Immunological Sensitization, and a count of their listed comorbidities. Order the results by organ type and then by wait time descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_11", "selected_database": "organ_transplant", "query": "I need to find out which of our hospitals are doing the heaviest lifting. I'm talking about the ones that take on the toughest cases, both in terms of travel and patient health.\nCan you create a ranking? For each hospital, come up with a 'Logistical Challenge Score' based on average distance and organ-on-ice time, and a 'Medical Complexity Score' based on average surgical risk and how many other illnesses the patients have.\nThen, average those two scores together to get a final 'Workhorse Score'. 
I just want to see the top 10 hospitals based on that final score, rounded to four decimals.", "normal_query": "I want to identify our 'workhorse' transplant centers, defined as those that handle a high volume of logistically and medically complex cases.\nFor each center, calculate a 'Logistical Challenge Score' (average distance * 0.4 + average expected ischemia time * 0.6) and a 'Medical Complexity Score' (average surgical risk * 0.7 + average number of recipient comorbidities * 0.3).\nThen, combine these into a final 'Workhorse Score' (Logistical Score * 0.5 + Medical Score * 0.5). Display the top 10 centers ranked by this final score, rounded to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_13", "selected_database": "organ_transplant", "query": "I want to know if our fancy Decision Support System is actually helping us pick better matches. Can you check if its score lines up with the EGS score?\nPlease take all our completed transplants and divide them into 5 groups based on their DSS score. For each of these 5 buckets, tell me how many transplants are in it and what their average Expected Graft Survival Score is, rounded to four decimals. I want to see if the average EGS score goes up as the DSS score bucket goes up.", "normal_query": "I need to analyze if our Decision Support System score is aligned with our primary success metric, the Expected Graft Survival score.\nUsing the `WIDTH_BUCKET` function, group all completed transplants into 5 equal buckets based on their Decision Support System score, from the minimum to the maximum score in the dataset.\nFor each bucket, calculate the number of transplants and the average Expected Graft Survival Score, rounded to 4 decimal places. 
This will show if a higher DSS score correlates with a higher predicted graft survival.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_14", "selected_database": "organ_transplant", "query": "I'm curious if some hospitals are more willing to take a chance on a less-than-perfect genetic match.\nFor every transplant center that has performed at least two transplants, can you calculate their average HLA Mismatch Score?\nAlso, for each of those centers, figure out the standard deviation so we can see if their mismatch numbers are all over the place or pretty consistent.\nThen, just show me the top 10 from that group with the highest average mismatch scores.\nI'll need to see their total number of transplants, that average mismatch score rounded to four decimals, and the standard deviation, also rounded to four decimals.", "normal_query": "I want to identify transplant centers that may have a higher tolerance for immunological risk.\nPlease calculate the average HLA Mismatch Score for all completed transplants at each unique transplant center that has performed two or more transplants.\nConcurrently, calculate the standard deviation of the mismatch scores for each center to understand the variability in their matches.\nDisplay the top 10 centers with the highest average mismatch scores, along with their transplant volume, the average mismatch rounded to 4 decimal places, and the standard deviation rounded to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_16", "selected_database": "organ_transplant", "query": "I need to see the trade-offs we're making with our less-than-perfect donor organs.\nCan you pull a list of all matches that fall under our Marginal Donor Acceptance Criteria? That means either the age gap is huge—more than 25 years—or their kidney score is poor, say under 40.\nFor each of those matches, show me the donor and patient IDs, tell me exactly why we're calling the donor 'marginal', and then calculate the patient's standard Patient Urgency Score so I can see just how desperate they are. Round the score to four decimals. Sort it so the most urgent patients are at the top.", "normal_query": "Generate a risk-benefit report for transplants using organs from Marginal Donors.\nFirst, identify all donors who meet the Marginal Donor Acceptance Criteria, defined as having an age difference greater than 25 years with the recipient OR a Renal Function Score below 40.\nFor each of these marginal donor matches, list the donor ID, the recipient ID, the specific marginal criterion met ('Age Difference' or 'Low Renal Score'), and then calculate the standard Patient Urgency Score for the recipient to assess the necessity of using the marginal organ. The final score should be rounded to 4 decimal places. 
Sort the results by the calculated urgency score in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_17", "selected_database": "organ_transplant", "query": "I want to figure out which transport method is the most efficient at getting organs delivered quickly relative to the distance they have to travel.\nCan you come up with a 'Logistical Efficiency Ratio' for every completed transplant? Just divide the total time the organ was on ice by the distance it traveled.\nThen, for each transport type—ground, air, etc.—I want to see the average, best, and worst efficiency ratio, all rounded to four decimals. Ignore any really short trips, say under 10km. Sort the list by the average efficiency.", "normal_query": "I need to analyze the logistical efficiency of different transport methods. For each completed transplant, calculate a 'Logistical Efficiency Ratio', defined as the Total Ischemia Time in minutes divided by the geographic distance in kilometers.\nA lower ratio indicates better efficiency. Then, for each `trans_method`, calculate the average, minimum, and maximum efficiency ratio, all rounded to 4 decimal places. The report should exclude any trips under 10km as outliers and be sorted by the average efficiency.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_18", "selected_database": "organ_transplant", "query": "I want to see how our patients are doing based on how hard they were to match immunologically.\nCan you sort all the patients who've received a transplant into four groups based on their PRA score? Let's do 'Low' for 0-10, 'Moderate' for 11-79, 'High' for 80-95, and 'Very High' for 96 and up.\nFor each of these four groups, I want to see the total number of patients, the average HLA mismatch they ended up with rounded to two decimals, how long they had to wait on average to the nearest day, and what their average EGS score was, rounded to four decimals.", "normal_query": "I need a comprehensive profile of patient outcomes across different levels of Immunological Sensitization.\nGroup all recipients of completed transplants into four Panel Reactive Antibody (PRA) score buckets: 'Low' (0-10), 'Moderate' (11-79), 'High' (80-95), and 'Very High' (96-100).\nFor each bucket, calculate the total number of transplants, the average HLA Mismatch Score of their match (rounded to 2 decimal places), their average wait time in days (rounded to the nearest whole day), and the average Expected Graft Survival (EGS) score (rounded to 4 decimal places) for their transplant.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_19", "selected_database": "organ_transplant", "query": "I'm curious about where our single HLA mismatches are happening. Are they usually on the A, the B, or the DR?\nCan you look at all our completed transplants that had exactly one mismatch? 
For that group, I want you to figure out which specific HLA type was the one that didn't match.\nThen, just give me a count: how many were mismatched at 'A', how many at 'B', and how many at 'DR'.", "normal_query": "I want to analyze potential HLA mismatch patterns. For all completed matches that have exactly one HLA mismatch, I need to determine on which specific locus (A, B, or DR) the mismatch occurred.\nPlease produce a report that counts the number of mismatches that occurred on the 'A-locus', 'B-locus', and 'DR-locus'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_20", "selected_database": "organ_transplant", "query": "I want to create a map of where we're struggling the most to find organs.\nCan you calculate a 'demand versus supply ratio' for each region and for each blood type, including the +/-?\nTo get 'demand', just count the number of patients waiting in a region for a certain blood type. For 'supply', count all the donors we've ever had from that region with that blood type.\nShow me a table with the region, the blood type, the number of patients, the number of donors, and the final ratio rounded to four decimals. Put the biggest problem spots at the top.", "normal_query": "I need to find geographic and blood type scarcity hotspots.\nFor each allocation region and for each main blood type (A, B, AB, O, and their Rh variations), calculate a 'Demand-to-Supply Ratio'.\n'Demand' is the number of pending recipients of that blood type in that region. 'Supply' is the total number of unique donors of that blood type from that region available in the entire dataset.\nDisplay the region, blood type, demand count, supply count, and the final ratio rounded to 4 decimal places, sorted to show the highest scarcity ratios first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_21", "selected_database": "organ_transplant", "query": "I need to know if we're getting less bang for our buck on the really urgent transplants.\nCan you run a cost-effectiveness analysis for me? For every completed transplant, figure out the CER. Let's just assume the quality-adjusted life year gain is 8 years times whatever their quality-of-life score is.\nThen, I want you to group the results by the patient's medical urgency status. Show me the average CER, rounded to two decimal places, for the Status 1A patients, the Status 1B patients, and so on. Keep them in order of urgency.", "normal_query": "I want to find out if transplanting sicker patients is less cost-effective.\nCalculate the Cost-Effectiveness Ratio (CER) for every completed transplant where cost and quality-of-life data are available. For the QALY gain, use a standard of 8 years multiplied by the patient's quality of life score.\nThen, group these transplants by the recipient's Medical Urgency Status tier ('Status 1A', 'Status 1B', 'Status 2', etc.) and calculate the average CER for each tier, rounded to 2 decimal places. 
The results should be ordered by urgency.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_22", "selected_database": "organ_transplant", "query": "I'm worried some of our hospitals might be having a rough patch. I want to look for streaks of failed matches.\nCan you go through the data for each transplant center and find every time they had two or more failed matches in a row, based on when the match was created?\nI want a list that shows the hospital ID, the ID of the match that continued the streak, and the time it happened. Just show me the second failure in any given streak.", "normal_query": "I need a report on consecutive match failures at our transplant centers to identify potential systemic issues.\nFor each transplant center, find every instance where at least two consecutive matches (ordered by creation time) both had a final status of 'Failed'.\nThe report should list the center ID, the registry ID of the second failed match in the sequence, and the timestamp of that failure. Only show the start of sequences of 2 or more failures.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_M_1", "selected_database": "organ_transplant", "query": "We keep calculating the Size Compatibility Score over and over.\nCan we just build a tool for it?\nI want a function, let's call it `calculate_size_compatibility`, where I can just plug in a donor's ID and a recipient's ID, and it spits out the score.\nIt needs to find the BMI for both the donor and the recipient and then do the math.\nIf it can't find the BMI for either one, it should just return nothing instead of crashing.", "normal_query": "Create a reusable PostgreSQL function named `calculate_size_compatibility` that computes the Size Compatibility Score.\nThis function must accept a donor's registry ID and a recipient's registry ID as text inputs.\nIt should retrieve the Body Mass Index for both the donor and the recipient, then apply the standard formula for the Size Compatibility Score.\nThe function must be robust and return NULL if the BMI for either individual is missing or zero to prevent division errors.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_2", "selected_database": "organ_transplant", "query": "Finding a perfect organ match is like looking for a needle in a haystack, and I want a special list that only shows these golden tickets.\nCan you build a 'live' list, let's call it `v_optimal_matches`, that shows every single donor-recipient pair that qualifies as an Optimal Donor-Recipient Match?\nI want this list to update itself without locking everything up every time we get a new patient registered in the system.", "normal_query": "I need a system to continuously identify every potential match that qualifies as an Optimal Donor-Recipient Match.\nFirst, create a materialized view named `v_optimal_matches` that contains the donor and recipient registry IDs for every such pair.\nSecond, create a trigger named `trg_refresh_optimal_matches` that 
automatically executes a procedure to refresh this materialized view concurrently whenever a new recipient is inserted into the `recipients_demographics` table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_3", "selected_database": "organ_transplant", "query": "I want to be able to check a donor's kidney health easily.\nCan you create a function called `get_donor_renal_score` that takes a donor's ID?\nIt should do the math for the Renal Function Score automatically. For the final score, use a weighting of 0.8 for the eGFR part and 0.2 for the creatinine part. I just need it to spit out the single score.", "normal_query": "Create a PostgreSQL function called `get_donor_renal_score` that accepts a donor's registry ID as a TEXT input.\nThis function should calculate the donor's Renal Function Score, using weights of 0.8 for the internal eGFR calculation and 0.2 for serum creatinine, and return it as a REAL value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_4", "selected_database": "organ_transplant", "query": "Let's create a special watchlist for all of our High-Risk Donors so we can track them easily.\nCan you build a materialized view for this? Call it `v_high_risk_donors`.\nIt should automatically pull in any donor who meets the official definition of high-risk.\nFor each donor on this list, I want to see their ID, their age, and a note on which specific risk factor got them on the list.", "normal_query": "I need a dedicated, up-to-date list of all donors who are classified as a High-Risk Donor based on the established criteria.\nPlease create a materialized view named `v_high_risk_donors`.\nThe view should list the donor's registry ID, their age, and the specific high-risk factor that was identified.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_5", "selected_database": "organ_transplant", "query": "We need to keep track whenever a patient's life support status actually changes.\nCan you set up a log for that? 
First, make a new table called `life_support_audit` that can store a log entry number, the patient's ID, what the status was before and after the change, and when it happened.\nThen, create a trigger called `trg_audit_life_support_changes` that automatically adds a new line to this log only if the life support value is modified to something new.", "normal_query": "I need an audit trail for changes to a recipient's life support status.\nPlease create a new table called `life_support_audit` with columns for `audit_id`, `recipient_id`, `old_status`, `new_status`, and `change_timestamp`.\nThen, create a trigger named `trg_audit_life_support_changes` that fires after an update on the `recipients_immunology` table, inserting a new record into the audit table only when the `life_support` value actually changes.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_6", "selected_database": "organ_transplant", "query": "I want a new summary table that tracks how well our transplant centers are doing.\nLet's call it `transplant_center_performance`.\nIt should show the center's ID, a count of all the transplants they've done, their average EGS score, and a final Center Performance Score.\nOnce the table is made, fill it up by calculating the score for every center.\nFor the score, let's say the number of surgeries they do is the most important part, maybe 70%, and their average graft survival outcome is the other 30%.", "normal_query": "Create a new table named `transplant_center_performance` to store analytical data.\nThis table should have columns for the center's identification code, the total number of transplants performed, the average Expected Graft Survival (EGS) score for that center, and a calculated Center Performance Score.\nAfter creating the table, populate it by analyzing all relevant transplant records.\nThe Center Performance Score should be calculated with a 70% weight on the center's total transplant volume and a 30% weight on its average EGS score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_7", "selected_database": "organ_transplant", "query": "I want a list of all the donors who died from Anoxia.\nLet's make a view for it called `v_anoxia_donor_profile`.\nJust show me the donor's ID, their age, and their kidney numbers—the creatinine and GFR values. This will help us quickly see if the organs are any good.", "normal_query": "Create a view named `v_anoxia_donor_profile` to help assess organ quality for a specific subset of donors.\nThe view should list all donors whose cause of death is recorded as 'Anoxia'.\nFor each such donor, display their registry ID, age, and their key kidney function indicators: serum creatinine and glomerular filtration rate.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_8", "selected_database": "organ_transplant", "query": "We need to track every time a patient's urgency level actually changes.\nCan you set up an audit trail for that? 
First, make a table called `urgency_status_log` to store the history—it needs a log number, the patient ID, the date it happened, and what the status changed from and to.\nThen, build a trigger called `trg_log_urgency_status_change`. It should watch the clinical table and automatically add a new line to our log only when the medical urgency value for a patient is modified.", "normal_query": "Create a system for logging changes to a patient's Medical Urgency Status.\nFirst, create a new table called `urgency_status_log` with columns to track the log ID, the recipient's registry ID, the date of the change, the old status, and the new status.\nSecond, create a trigger named `trg_log_urgency_status_change` that executes after an update on the `clinical` table. The trigger should fire only if the `med_urgency` column is modified, and it must insert a new record into the log table with the relevant details.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_9", "selected_database": "organ_transplant", "query": "We need to standardize how we calculate our main ranking score.\nCan you build a function called `calculate_composite_allocation_score` that does all the work?\nI want to just give it a match ID.\nIt should then go and figure out all the component scores and combine them using the standard Composite Allocation Score formula, and just return the one final number.", "normal_query": "Create a reusable function named `calculate_composite_allocation_score` that takes a match ID as input.\nThis function must internally calculate and combine the Patient Urgency Score, Immunological Compatibility Score, and Expected Graft Survival (EGS) Score using the standard Composite Allocation Score formula.\nThe function must return the final calculated score as a single REAL value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_10", "selected_database": "organ_transplant", "query": "Let's make our center performance stats update in real-time.\nFirst, make a simple summary table, call it `center_performance_live`, just to hold the center's ID, a running count of their transplants, and their average EGS score.\nThen, the magic part: build a trigger called `trg_update_center_performance`. Whenever a match is officially marked as 'Completed', this trigger should automatically update that center's numbers in our new summary table. It needs to be smart enough to add the center if it's their first completed transplant.", "normal_query": "I want to automate the updates to our center performance metrics.\nFirst, create a summary table called `center_performance_live` with columns for center ID, total transplants, and the running average for their Expected Graft Survival (EGS) Score.\nThen, create a trigger named `trg_update_center_performance` that fires after an update on `transplant_matching`. When a `match_status` changes to 'Completed', the trigger must find the corresponding transplant center and update its `total_transplants` and `avg_egs_score` in the summary table. 
If the center isn't in the table yet, it should be inserted.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_11", "selected_database": "organ_transplant", "query": "I need a dedicated, ranked list of all the kids waiting for a heart.\nCan you create a view called `v_pediatric_heart_candidates`?\nIt should only include patients under 18. The ranking is critical: the sickest kids—Status 1A—go first. If kids have the same urgency status, the one with the higher Recipient Wait Time Ratio should get priority.\nI need the view to show the kid's ID, their age, their urgency level, and their calculated wait time ratio.", "normal_query": "Create a specialized view named `v_pediatric_heart_candidates`.\nThis view should produce a prioritized list of all recipients under the age of 18 who are waiting for a heart transplant.\nThe list should be ordered first by their Medical Urgency Status (Status 1A being highest), and then by their Recipient Wait Time Ratio in descending order for recipients with the same urgency status.\nThe view should display the recipient's ID, age, medical urgency, and the calculated wait time ratio.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "organ_transplant_M_12", "selected_database": "organ_transplant", "query": "We need to prevent silly mistakes in our data entry. People don't get younger.\nCan you build a trigger called `trg_validate_recipient_age`?\nIt should watch the recipient demographics table. If anyone ever tries to update a patient's age to a number that's lower than what it was before, the trigger must just ignore the change and keep the old age.", "normal_query": "To ensure data integrity, create a trigger that prevents illogical updates to a recipient's age.\nThe trigger, named `trg_validate_recipient_age`, should activate before any update on the `recipients_demographics` table.\nIt must check if the new `age_count` is less than the old `age_count`. 
If a user attempts to decrease a recipient's age, the trigger should silently prevent the change by reverting the age to its original value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "organ_transplant_M_13", "selected_database": "organ_transplant", "query": "I need a quick way to pull up all the main risk numbers for a specific transplant match.\nCan you make a function called `get_match_risk_profile` that takes a match ID?\nIt should just go into the risk data and pull out the five important scores—Immunological, Infection, Rejection, Readmission, and Mortality—and then give them back to me as a single JSON object.", "normal_query": "Please create a function that provides a holistic risk profile for a given transplant match.\nThe function, named `get_match_risk_profile`, must accept a match ID.\nIt should retrieve and return a single JSONB object containing the five key risk metrics: Immunological Risk, Infection Risk, Rejection Risk, Readmission Risk, and Mortality Risk.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_1", "selected_database": "mental_health", "query": "Let's find our most vulnerable patients: those who are high-risk, in facilities under severe stress, and who are also not engaging well with their therapy. I need a list with their patient ID, assessment ID, the date of their latest assessment, their average rounded engagement score, and the stress level of their facility. Please sort by the most recent assessment and just show the top 50.", "normal_query": "I want to identify High-Risk Patients from facilities experiencing Severe Environmental Stress or Severe Life Impact, who also exhibit low Therapy Engagement Scores (average TES is lower than 2). For each patient, include their patient ID, assessment ID, date of their most recent assessment, their average rounded TES score, and the environmental stress or life impact level of the facility they are associated with. Focus only on the most recent assessments and prioritize patients meeting all these criteria. Sort the results by the assessment date in descending order and limit to the top 50 results.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "mental_health_2", "selected_database": "mental_health", "query": "Can you help me check how closely a facility's resources relate to how well patients stick to their treatment? I'd like to see the overall resource adequacy score across all facilities and the correlation between each facility's resource score and their treatment adherence rate. Just skip the places where there's no rate for the treatment adherence.", "normal_query": "For all facilities, I want to explore the Correlation Between Resource Adequacy and Adherence. Include the overall Facility Resource Adequacy Index as a reference and the correlation coefficient between each facility's resource adequacy score and treatment adherence rate. 
Exclude facilities with no applicable TAR.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_3", "selected_database": "mental_health", "query": "Show me facilities where patients seem highly engaged in therapy, but their recovery progress is still lagging; basically, places with a possible engagement-outcome disconnect. For each of those facilities, I want to see the Facility ID, their average therapy engagement score, and their index of recovery trajectory, both rounded to two decimal places. Sort the list by Facility ID and keep it to the first 100 results.", "normal_query": "Identify facilities classified as a Facility with Potential Engagement-Outcome Disconnect. Display the facility ID, the average TES, and the RTI for these facilities. Round both TES and RTI to 2 decimal places, sort by facility ID, and limit the output to 100 rows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_4", "selected_database": "mental_health", "query": "Can you show me the top clinicians working in well-supported facilities based on the stability metric of the patients? I want to see each clinician's ID, which facility they work at, their Patient Stability Metric score, and how they rank within their facility (higher PSM means better rank). Just include those in resource-backed facilities, sort by facility and rank, and show only the top 100 results.", "normal_query": "I want to identify the top-performing clinicians in Resource-Supported Facilities based on their Patient Stability Metric. For each clinician, provide their ID, the facility ID, their PSM score, and their rank within the facility. The rank should be based on PSM, with higher PSM scores ranked higher. Only include clinicians from facilities classified as Resource-Supported Facilities. Sort the results by facility ID and then by rank within each facility, limiting the output to the top 100 rows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_5", "selected_database": "mental_health", "query": "Can you show me the patients who seem to have fragile stability? I want to see their ID, how often they miss appointments on average, and their latest effectiveness score of social support.", "normal_query": "I want to find patients who are exhibiting fragile stability. List each patient's ID, their average missed appointments, and their most recent SSE score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "mental_health_6", "selected_database": "mental_health", "query": "Show me the top 100 primary diagnoses where patients have the highest number of crisis interventions. For each diagnosis, include the name of the diagnosis, how many patients had that diagnosis, and the average crisis intervention frequency, rounded to two decimal places. 
Sort the list by CIF from highest to lowest.", "normal_query": "I want to identify which primary diagnoses are associated with the highest Crisis Intervention Frequency (CIF) across all patients. For each diagnosis, list the diagnosis name, the number of patients with that diagnosis, and the CIF value, rounded to two decimal places. Sort the results by CIF in descending order and limit to the top 100 diagnoses.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": true, "order": true}} +{"instance_id": "mental_health_7", "selected_database": "mental_health", "query": "Show me the top 100 facilities, grouped into performance quadrants. For each one, list its ID, how well patients stick to their treatments (rate of treatment adherence), how stable the patients are, both rounded to two decimals, and which performance quadrant it falls into. Sort the results by quadrant and then by facility ID.", "normal_query": "I want to categorize facilities into performance quadrants. For each facility, list the facility ID, Treatment Adherence Rate (rounded to two decimal places), Patient Stability Metric (rounded to two decimal places), and the performance quadrant. Sort results by performance quadrant and facility ID, limiting to the top 100 facilities.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_8", "selected_database": "mental_health", "query": "I want to see how different kinds of therapy changes—like switching therapy type, therapist, or session frequency—affect how engaged patients are. For each type of change, show how often it happens, what the average engagement score was before and after the change, and how much the score changed overall. Sort the results so that the most common changes appear at the top.", "normal_query": "Analyze the impact of therapy changes (modality, therapist, frequency) on the Therapy Engagement Score and calculate the engagement variation for each change type. Show the change type, total occurrences, average scores before (the previous encounter relative to each encounter) and after (the current encounter), and average score change from previous score to current score, ordering by total occurrences in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_9", "selected_database": "mental_health", "query": "Show me the top 100 facilities where suicide risk is very high—over 20%. For each one, list the facility ID, their PFIS score, FRAI score, and the difference in resource demand, sorted from the highest RDD to the lowest. I want to find places where the need for resources is most urgent.", "normal_query": "For facilities with high Suicide Risk Prevalence over 20%, calculate the Resource-Demand Differential. List the facility ID, PFIS, FRAI, and RDD scores, ordered by RDD from highest to lowest, showing the top 100 facilities. 
This helps identify resource gaps in critical environments.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "mental_health_10", "selected_database": "mental_health", "query": "Find facilities that seem to have an environment that is systemically stressed. For each one, list the facility ID and the differential in resource demand. Just show the top 100 most stressed facilities.", "normal_query": "Identify facilities exhibiting characteristics of a Systemically Stressed Facility Environment. For each facility, return its ID and Resource-Demand Differential value, limited to the top 100 facilities.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_11", "selected_database": "mental_health", "query": "Hey, can you pull together a report that shows how care might vary between different ethnic groups? For each patient, I want to see their ethnicity, how severe their symptoms are, and how well they're sticking with their treatment. Also, please include a summary row that combines all ethnicities, so we have a baseline to compare against. And make sure the results are sorted by ethnicity alphabetically. Thanks!", "normal_query": "To support our health equity audit, I need a report that assesses potential care disparities across different ethnic groups. Please generate a table showing each patient's ethnicity, their calculated Symptom Severity Index, and their Engagement-Adherence Score. The report must also include a summary row for 'All Ethnicities' to serve as a baseline. Please sort the results alphabetically by patient ethnicity.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "mental_health_12", "selected_database": "mental_health", "query": "Sarah, our lead case manager, is worried about a pattern she's calling 'the David profile.' These are patients who are constantly in crisis—we're talking about patients who have a low support profile and are in high crisis. She wants to find every other patient who looks just like that right now. Can you pull a list for her? She needs to see their ID and age, plus the exact crisis and support scores that got them on the list. For her team to take action, please also show the date they were last in the hospital and their next appointment, but make that appointment date easy to read. And please, put the people with the most crises at the very top so she knows who to call first.", "normal_query": "For our weekly clinical review, Sarah needs to generate a Vulnerable Patient Watchlist to proactively identify a specific cohort. The focus is on individuals who fit the Patient with High Crisis & Low Support Profile. To make this list actionable for the meeting, please display each unique patient's identifier, their age, the total count of their crisis interventions, and their calculated Social Support Effectiveness score. 
For additional context on their recent trajectory and our next point of contact, also include the date of their last hospitalization and format the date of their next scheduled appointment as 'Mon DD, YYYY'. Finally, please sort the list with the patients having the highest number of crisis interventions at the top to prioritize our discussion.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "mental_health_13", "selected_database": "mental_health", "query": "Our director, Dr. Evans, is trying to get more funding for community partnerships, and his whole argument hinges on one idea: that facilities with better resources have patients who are more likely to stick to their treatment plans. He needs a solid number to prove this point. Can you run a statistical analysis to see how strong that connection really is? Basically, do a correlation analysis of the resource's adequacy. Just give me the final correlation score, nice and clean, as a single number.", "normal_query": "For a budget proposal, our regional director, Dr. Evans, needs to validate a key hypothesis: that better facility resources improve patient outcomes. He wants to test this by measuring the Correlation Between Resource Adequacy and Adherence. Please calculate this correlation. The final output should be a single numerical value representing this correlation, rounded to four decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "mental_health_14", "selected_database": "mental_health", "query": "Our Director, Dr. Sharma, needs help. We have a lot more patients coming in who are in a really bad place—severely depressed or anxious and also at high risk for suicide. She wants to build a list of our 'go-to' experts for these tough cases. Can you figure out, for each diagnosis like 'Depression', 'Anxiety', etc., which of our clinicians has the most hands-on experience with this exact type of high-risk patient? I need a table that shows the diagnosis, the clinician, how many of these patients they have, and then ranks them, so we can see who is number 1, 2, 3 for each category. Please organize the whole thing by diagnosis, then by the rank.", "normal_query": "To address a surge in complex cases, the Director of Clinical Services is creating an expert consultation roster. The goal is to identify clinicians with the most experience managing the High Severity, High Risk Patient Group. Please generate a report that, for each primary diagnosis, ranks clinicians based on the number of these specific high-risk patients they manage. The output table should include the primary diagnosis, the clinician's identifier, the count of these high-risk patients for that clinician, and their resulting rank within that diagnosis category. 
The final roster should be sorted first by the primary diagnosis and then by the clinician's rank to clearly present the top experts for each specialty.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "mental_health_15", "selected_database": "mental_health", "query": "I'm looking at Patient P871358's file and I'm a bit concerned. He's been diagnosed with Bipolar for about 10 years and has already been in the hospital 3 times. I need to know if we should escalate his care. So, first, can you run that calculation of the patient's risk for annualized hospitalization and tell me if he gets flagged for immediate coordination of intensive care? If he does, I need to find him a new clinician right away. The new clinician would need to be at the same facility he last visited, be one of our 'High' confidence therapists, and not have their credentials up for review in the next six months or so, let's say any time after June 2024. If he's flagged, show me their IDs and where they work. If his risk score is fine, just let me know by reporting as 'Patient does not meet criteria for Intensive Care Coordination'.", "normal_query": "I'm assessing Patient 'P871358' and need to determine if an Intensive Care Coordination Flag should be raised. First, please calculate his Annualized Hospitalization Risk, then flag it if the criteria are met. If and only if the patient is flagged, I need a list of suitable clinicians for referral. A suitable clinician must be located at the same facility as the patient's most recent encounter, have a 'High' confidence level, and have a next credential review date after June 1st, 2024. Please return the clinician's identifier and their facility. If the patient is not flagged, just report as 'Patient does not meet criteria for Intensive Care Coordination'", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_16", "selected_database": "mental_health", "query": "I'm doing a review on patient P883117 and need to check a couple of things for her file. First, she's supposed to be getting at least 5 hours of therapy a month in her program. Can you check her latest therapy exp intensity and tell me if she's meeting that target? Second, her insurance gives her $1,200 a year for treatment. Based on her personal cost-effectiveness rate of 0.0509, what's her total expected quality of life gain for the whole year? And finally, is that number over our 'good value' threshold of 50 points? Just give me a quick summary for all the assessments for this patient, including the patient's id, and the answers to those three questions.", "normal_query": "I need to perform a case audit for patient 'P883117'. The patient is in a program requiring a minimum therapy intensity of 5 hours per month. Their annual insurance budget for treatment is $1,200, and their current recorded treatment cost-effectiveness is 0.0509 QoL-points per dollar. Please provide a report that answers the following: A boolean value indicating if the patient's current therapy_exp_intensity meets the 5 hours/month target. The calculated Projected Annual QoL Gain, based on their cost-effectiveness rate and the $1,200 budget. 
A boolean value indicating if this projected gain is greater than 50 points. The output should include all the assessments for this patient, with columns for their identifier and these three calculated results.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_17", "selected_database": "mental_health", "query": "I need to run a full audit on patient P425079. His insurance gives him $6,000 a year for treatment, but we charge in pounds—£100 an hour. Assuming an exchange rate of 1.25 dollars to the pound, how many hours of therapy can he actually get for his money? Also, his high-acuity plan says he needs 30 hours of therapy a month, but his chart says he's only getting 29.86. Can you confirm if he's meeting that target? Lastly, given his personal cost-effectiveness rate of 0.1197, what's his total potential quality of life gain if he uses his whole $6,000 budget? And is that number over our 'good value' benchmark of 650?", "normal_query": "I am conducting a detailed audit for patient 'P425079'. His insurance plan has a maximum out-of-pocket cost of $6,000 USD per year. Our clinic's therapy rate is £100 GBP per hour, with the current exchange rate at 1.25 USD per GBP. The patient's therapy plan requires a minimum intensity of 30 hours/month, and his last recorded intensity was 29.86 hours/month. His cost-effectiveness rate is 0.1197 QoL-points/dollar. Please provide a report with four calculated fields: The total number of therapy hours his annual budget can afford, rounded to 2 decimal places. A boolean value indicating if his current therapy intensity meets the 30-hour monthly target. The Projected QoL Gain from Max Out-of-Pocket rounded to 2 decimal places, using his cost-effectiveness rate and the $6,000 budget. A boolean value indicating if this projected gain exceeds the 'clinically valuable' threshold of 650 points.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "mental_health_18", "selected_database": "mental_health", "query": "I'm working on a study and need to find a very specific group of patients. I'm looking for people with a Bipolar diagnosis. Once you have that list, I need to check one more thing about their current treatment. Our standard for this group is 15 hours of therapy a month. Can you just show me a table of these patients with their ID, diagnosis duration, and hospitalization count, and then add a true/false column that tells me if they're meeting that 15-hour therapy target?", "normal_query": "For my research paper, I need to analyze a Chronic High-Acuity Bipolar Cohort. After identifying this cohort, I need to check their Intensive Therapy Standard Compliance. 
Please generate a report that lists each patient's identifier, their diagnosis duration in months, their count of previous hospitalizations, and a boolean value indicating if their current therapy intensity is 15 hours per month or more.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "mental_health_19", "selected_database": "mental_health", "query": "I'm trying to decide where to focus our new goal-setting program. I have a hunch that our PTSD patients are struggling more to make progress than our anxiety patients. Can you check this for me? I want to see a comparison: on average, how many recovery goals would a PTSD patient achieve in a full year versus an anxiety patient? I just need those two numbers side-by-side to see if my hunch is right.", "normal_query": "For program development, I need to compare the Annual Goal Achievement between two patient populations. The first population consists of patients with a primary diagnosis of 'PTSD', and the second consists of patients with a primary diagnosis of 'Anxiety'. Please calculate the average Annual Goal Achievement for each group. The final output should be a single row with two columns: one for the PTSD group's average and one for the Anxiety group's average.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "mental_health_20", "selected_database": "mental_health", "query": "I'm Dr. Hanson, a pharmacist. I need to run a safety check on our Anxiety patients at the F533 facility, but only those who've been diagnosed for more than a year. I'm worried about medication side effects. Can you calculate a side effect score for a 12-month period for each of them with just 2 medications? I need a list of these patients showing their ID, the original density number, this new score, and a true/false if their score is over our new safety limit of 0.1. Please put the patients with the highest scores at the top.", "normal_query": "I am conducting a 'Medication Protocol Review' for Anxiety patients at facility F533 who have a diagnosis duration of more than 12 months. I need to calculate their Annualized Side Effect Score, assuming the number of medications is 2. Please provide a report that lists the patient identifier, their original med side eff density, their calculated Annualized Side Effect Score, and a boolean flag indicating if this score is over our protocol's threshold of 0.1. The list should be sorted by the highest score first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}}
+{"instance_id": "mental_health_M_3", "selected_database": "mental_health", "query": "Hey! Go ahead and clean up the treatmentoutcomes table by deleting any old or stale records, but only for those patients who've been flagged as Non-Compliant.
Leave the rest untouched!", "normal_query": "Please remove Stale Treatment Outcome Records from the treatmentoutcomes table, but only for patients who have been identified as Non-Compliant Patient.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "mental_health_M_4", "selected_database": "mental_health", "query": "Hey! Can you make a reusable database function called calculate_tes? If it already exists, just replace it. This function should take a treatment key, look up the 'engagement' level from the therapy details tied to that treatment, and return the score for therapy engagements as a number.", "normal_query": "Please create (or replace if it exists) a reusable database function named calculate_tes. This function's purpose is to calculate the Therapy Engagement Score for a single treatment record. It should take the treatment key as input, find the corresponding 'engagement' level from the therapy details data, and return the calculated numeric score based on the standard Therapy Engagement Score definition.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_M_6", "selected_database": "mental_health", "query": "Hi team, I'm worried we're missing the chance to help patients who are really struggling until it's too late. I want to build a new 'at-risk' watchlist named vulnerable_patient_watchlist that our care coordinators can check every morning. Let's automatically flag any patient who is both having frequent crises and seems to be socially isolated. When the system flags someone, I need the new watchlist to show the watchlist id, their patient ID, who their clinician is, exactly how many crises they've had, what their support score was, and—most importantly—when their next appointment is, and also the insertion date of the watchlist. Make sure you pull that date from the notes of their last visit so we have the most current one.", "normal_query": "As the head of clinical outreach, I am launching a new initiative to reduce critical incidents. To do this, I need to establish a new, permanent data process called 'Vulnerable Patient Watchlist Generation'. This process will create and populate a new table named vulnerable_patient_watchlist. To determine who belongs on this list, we will use our established 'Patient with High Crisis & Low Support Profile'. For every patient who meets these criteria, the new vulnerable_patient_watchlist table must store the following actionable information: a watchlist id, their unique patient ID, the ID of their lead clinician, their total crisis intervention count, their calculated SSE score, and their next scheduled appointment date as recorded during their most recent encounter, and the date of the watchlist's insertion.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "mental_health_M_8", "selected_database": "mental_health", "query": "This is urgent. My patient, P515871, is at high risk for suicide based on the assessment I just filed. 
Our protocol says this should immediately flag me for intensive care coordination so we can get them help now. Can you run a script to do this protocol?", "normal_query": "I've just finished an emergency assessment for patient P515871, who has been flagged with a 'High' suicide risk. To ensure immediate action, I need to invoke our 'High-Risk Escalation Protocol'. Please execute a script to do this protocol.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}}
+{"instance_id": "mental_health_M_9", "selected_database": "mental_health", "query": "We nearly missed a serious situation with a patient named John because nobody saw their hospitalization risk score was getting dangerously high. I want to fix this now by doing a review protocol. Please help me flag those patients who are in the same situation as John in our patient files. First, please add a new boolean column named needs_case_review to the patients table, with a default of FALSE. Then, for all patients that are currently meeting the criteria based on their latest assessment, execute an update to set the flag to TRUE. This will be our new automated safety alert.", "normal_query": "In response to a recent critical incident, our safety board has approved a new 'Mandatory Case Review Protocol'. Some patients are exceeding the hospital risk density recently, like John, and because of this, we must execute this protocol to flag and identify more patients that have the same situation as John. To implement this, I need a two-part script. First, please add a new boolean column named needs_case_review to the patients table, with a default of FALSE. Second, execute an update to set this new flag to TRUE for all patients who currently meet the protocol's criteria based on their latest assessment.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}}
+{"instance_id": "mental_health_M_10", "selected_database": "mental_health", "query": "Hi, I'm Dr. Rossi. I need an urgent change to a patient's file. My patient David, that's P883117, just lost his entire support system because his wife was in a bad accident. His current support plan is useless now. We've agreed he needs to be twice as self-reliant. Can you find his last assessment, look up his 'support utilization rate' which I think is 1.5, and cut it in half? Please change it in the system to 0.75 so his official plan is up to date.", "normal_query": "I have an urgent update for my patient, David (P883117), following a major life event—his primary caregiver is incapacitated. His existing support_util_rate of '1.5 support-level/contact' is no longer viable. We have set a new therapeutic goal to halve his reliance on external support.
Please execute a script to find his single most recent assessment record and update the support_util_rate value to '0.75 support-level/contact' to reflect this new plan.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_1", "selected_database": "reverse_logistics", "query": "Show me how much the whole process, from transport to final disposition, cost us for every return. Round the result to 2 decimal places, biggest cost first.", "normal_query": "List the Total Return Cost (TRC) for each return case, round the value to 2 decimal places, and sort the results by TRC in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_2", "selected_database": "reverse_logistics", "query": "Tell me the overall gain or loss from all returns: take what we got back after processing and subtract the combined out-of-pocket amount (transport, getting it sellable again, label fixes, end-of-life handling, fix-up quote). Include the net figure and both parts, rounded to two decimals.", "normal_query": "Estimate the overall net impact across all processed returns by subtracting the total handling cost (transport, reintegration, label correction, end-of-life handling, fix-up estimate) from the amount recaptured after processing; also show both components; round each to two decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_3", "selected_database": "reverse_logistics", "query": "Show me the average value recovered per day since sale for the entire portfolio, along with the recovery value and number of days used in the calculation, rounded to 2 decimal places from highest to lowest.", "normal_query": "Compute the portfolio-wide average Recovery Rate per Day, rounded to two decimals; also return the aggregated Recovery Value and the aggregated Days Lapsed used in that calculation.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_M_1", "selected_database": "reverse_logistics", "query": "Find store credit refunds that are for zero dollars, mark them as “Pending,” and show me their reference numbers with the new status.", "normal_query": "Only for zero-amount reimbursements issued as store credit, set the status to “Pending” and return each affected reference with its new status.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_4", "selected_database": "reverse_logistics", "query": "I want to know the average loss of a return, considering the environmental impact factors.
Round to two decimal places.", "normal_query": "For all returns, calculate the average Sustainability Adjusted Loss (SAL), rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_5", "selected_database": "reverse_logistics", "query": "Show me each processing site with their average and individual processing times, rounded to one decimal place, and list the slowest sites first.", "normal_query": "List the Average Processing Time (APT) for each processing site, including individual Processing Time values in the result. Round numeric outputs to 1 decimal place. Sort the results by APT in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 1, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_6", "selected_database": "reverse_logistics", "query": "I want to know the proportion of returns having warranty claims, and round the answer to one decimal point.", "normal_query": "Calculate the Warranty Claim Ratio (WCR) for the whole business, and round the result to 1 decimal place.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_7", "selected_database": "reverse_logistics", "query": "For each return, show a severity score equal to the number of signals times the level weight (use 1 for low, 2 for medium, 3 for high). Include both the signal count and the weight, and list the highest scores first.", "normal_query": "For each return, compute the Fraud Flag Severity Score (FFS) as the flag count multiplied by a level weight, using Low=1, Medium=2, High=3; also show the flag count and the applied weight; sort by FFS in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_8", "selected_database": "reverse_logistics", "query": "Provide the grand total of all regulatory penalties we've accumulated to date (show only the final number).", "normal_query": "Measure the total Regulatory Compliance Penalty (RCP) incurred to date; only show the final number.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_9", "selected_database": "reverse_logistics", "query": "Show the latest 100 items with a relative transport cost versus what’s typical for the same way of coming back. Include the way it came back, round to two decimals, and list newest first.", "normal_query": "For the 100 most recent returns, list each item's Return Channel Cost Index (value = its transport charge divided by the historical average for returns that came back the same way).
Include the way it came back, round to two decimals, and sort by the logging time from newest to oldest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_M_2", "selected_database": "reverse_logistics", "query": "Suspend anyone whose combined severity score (using 1 for low, 2 for medium, 3 for high) is 9 or more, and show their IDs with the new category.", "normal_query": "Suspend all customers whose Fraud Flag Severity Score is at least 9, where the weights are Low=1, Medium=2, High=3. Return each customer ID with the updated segment category.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_M_3", "selected_database": "reverse_logistics", "query": "Make the expected expense that brings a defective return back to sellable condition increase by 15% for returns waiting more than 60 days. Show me the case numbers and new estimates (rounded to 2 decimal places).", "normal_query": "Increase the repair estimate by 15 percent for any return that has been waiting more than 60 days, and return the case number with the adjusted amount rounded to 2 decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_10", "selected_database": "reverse_logistics", "query": "Show me the average cost of printing and attaching new labels for each product category, rounded to two decimals and sorted from highest to lowest.", "normal_query": "List the Average relabeling cost by product category. Round numeric outputs to 2 decimal places and sort descendingly.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_11", "selected_database": "reverse_logistics", "query": "Show overall disposal spend per method, most expensive first.", "normal_query": "Show the Total disposal cost by disposal method. Sort the results by the cost in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_12", "selected_database": "reverse_logistics", "query": "Which 5 returns cost most to fix? Show me from priciest to least pricey.", "normal_query": "Display the top 5 returns ranked by repair estimate, ordered from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_13", "selected_database": "reverse_logistics", "query": "How much do we typically recover through each return method?
Give me the average amounts rounded to pennies.", "normal_query": "List the average recovery value per return channel, rounded numeric outputs to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "reverse_logistics_M_4", "selected_database": "reverse_logistics", "query": "Find every case marked as finished but never officially closed, mark it closed with today's date, and tell me which case numbers were touched.", "normal_query": "Close every case whose action state is 'Completed' and whose close state is still NULL; set its close state to 'Closed', update the close date to today, and return its case number.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "reverse_logistics_14", "selected_database": "reverse_logistics", "query": "Of all returns, tell me the % flagged with 'high' fraud risk (rounded to hundredths) - show only the percentage.", "normal_query": "Figure out the percentage of returns flagged 'high' in fraud risk levels. Round numeric outputs to 2 decimal places and only show the first one.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "reverse_logistics_15", "selected_database": "reverse_logistics", "query": "Give me the kilograms of carbon dioxide released in different disposal processing activities. Round numeric outputs to 2 decimal places and show from highest to lowest.", "normal_query": "List the Average carbon footprint by disposal method. Round numeric outputs to 2 decimal places. Show the results from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "reverse_logistics_16", "selected_database": "reverse_logistics", "query": "Show me how many returns we have for each warranty status and return channel combination, sorted by the count from highest to lowest.", "normal_query": "List the Count Returns per Warranty status (CNT) and return channel. Sort the results by CNT in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "reverse_logistics_M_5", "selected_database": "reverse_logistics", "query": "Find all electronics worth over $700, mark their subcategories as 'High Value', and tell me how many got tagged.", "normal_query": "Append the tag 'High Value' to the subcategory of electronics products with unit value greater than 700. 
Return the number of updated rows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_17", "selected_database": "reverse_logistics", "query": "Show me how much money we actually get back (after costs) for each type of refund, along with both the recovered amounts and what we spent processing them.", "normal_query": "Calculate the Net return profit impact per refund method. Display Recovery Value and Total Return Cost in the result.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_18", "selected_database": "reverse_logistics", "query": "How dirty and pricey is it to scrap items in different states? Show me the average values, rounded to two decimals.", "normal_query": "Output the Average carbon footprint and disposal cost for each item condition state. Round numeric outputs to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_M_7", "selected_database": "reverse_logistics", "query": "Find all unsatisfied customers (ratings 2 or below), flag their cases for follow-up, and give me the case numbers with their new status.", "normal_query": "Mark needsfollowup as 'Yes' for cases whose satisfaction score is less than or equal to 2. Return casetie and the new needsfollowup value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "reverse_logistics_19", "selected_database": "reverse_logistics", "query": "Show me how many non-compliant items we have divided by disposal method, along with their average carbon footprint (rounded to 2 decimal places). Sort by the highest count of non-compliant items first.", "normal_query": "Display the count of items whose Regulatory Compliance Status is non-compliant, grouped by disposal method, with the average carbon footprint rounded to 2 decimals. Sort the list by the non-compliant count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "reverse_logistics_M_8", "selected_database": "reverse_logistics", "query": "Find all supposedly recycled items with heavy carbon footprints (over 50kg), mark them as hazardous waste instead, and tell me how many needed reclassification.", "normal_query": "Change disposal method from 'Recycle' to 'Hazardous Waste' for records with carbon footprint greater than 50 kilograms.
Return the number of affected rows.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "reverse_logistics_20", "selected_database": "reverse_logistics", "query": "When fraud risk is high and it comes back by courier, how off-normal is the shipping cost? Please round the answer to two decimals.", "normal_query": "When the fraud-risk rating is high and the item comes back by courier, how far above or below average is the shipping cost? Please round the answer to two decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "reverse_logistics_M_10", "selected_database": "reverse_logistics", "query": "Find all automatic approvals for express processing, bump them up to manager approval, and tell me which locations had changes with both the old and new levels.", "normal_query": "Upgrade approval level from 'Automatic' to 'Manager' for processing priority 'Express'. Return loccode, old approval level, and new approval level.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_1", "selected_database": "robot_fault_prediction", "query": "Let's get a list of all robots that need urgent maintenance and are running too hot, showing their ID, model, urgency score, and hottest joint temp with worst cases first.", "normal_query": "Which robots currently meet the dual criteria of a high Predictive Maintenance Urgency score and a concurrent Thermal Anomaly? Show robot ID, model, maintenance urgency score, and maximum joint temperature. Sort by most critical cases first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_3", "selected_database": "robot_fault_prediction", "query": "Find robots doing precision work that aren't accurate enough, showing their ID, job, error amount, and if they need calibration.", "normal_query": "Identify robots used in Precision-Critical Applications that are showing signs of Precision Performance Degradation. Return robot ID, its application type, relative positional error, and current calibration state.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_4", "selected_database": "robot_fault_prediction", "query": "Show me robots working too hard right now, listing their model, how much payload capacity they're using, and their cycle speed. Make sure the most overloaded ones are at the top of the list.", "normal_query": "List all robots currently operating under an Intensive Workload. 
Include the robot's model, its payload utilization ratio, and its throughput rate, sorted with the most utilized and fastest robots appearing first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_5", "selected_database": "robot_fault_prediction", "query": "I need to plan next week's emergency maintenance schedule. Find all robots that are likely to fail sometime next week and which the system also considers a high risk to break. Show their ID, how likely they'll break, and how many days they have left.", "normal_query": "For next week's maintenance planning, identify robots that are projected to fail within the next 7 days and also have a 'High' Fault Prediction Score Tier. For each, show the robot ID, its fault prediction score, and its remaining useful life in days.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_6", "selected_database": "robot_fault_prediction", "query": "Let's see which manufacturers are struggling the most with their robots' health. For manufacturers with at least two robots getting worse, show me how many are affected and what their average decline rate is. Please show the manufacturer names in lowercase.", "normal_query": "I require an analysis of health degradation trends, grouped by manufacturer with more than one degrading robot, show the total count of such robots and their average health degradation rate (with manufacturer names in lowercase).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_7", "selected_database": "robot_fault_prediction", "query": "Let's find robots whose controllers are overworked compared to their peers by flagging any with 'Anomalous Controller Stress'. I need to see the robot's ID, its model (in lowercase), its stress score, and what the average for its model was.", "normal_query": "Identify all robots experiencing 'Anomalous Controller Stress'. For each, display its robot ID, its model (in lowercase), its specific stress score, and the calculated average stress for its model.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_8", "selected_database": "robot_fault_prediction", "query": "Let's find robots that might be breaking down because their controllers are overworked. Show me any robot where its controller stress is over 20% more than average for its model, and its mechanical wear score is getting into the medium or high range. I need to see the robot's ID, who made it, the model (in lowercase), and both the stress and wear scores. Please group them by the maker and list the most stressed ones first.", "normal_query": "Identify robots exhibiting signs of mechanically significant wear potentially induced by high controller stress. A robot qualifies if it meets two criteria simultaneously: 1. 
Its 'Controller Stress Score' is at least 20% higher than the average score for all robots of the same model series. 2. Its 'Mechanical Wear Score' is classified as either 'High' or 'Medium' severity. For each qualifying robot, return its ID, manufacturer, model (in lowercase), its specific controller stress score, and its mechanical wear score. Sort the results by manufacturer, then by the controller stress score in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_10", "selected_database": "robot_fault_prediction", "query": "I'm trying to see if our heavy-duty jobs are really taking a toll on the robots. Can you compare the failure rates for two groups? First, all the robots doing high-wear stuff like welding and grinding. Second, everyone else. For each group, just tell me what percentage of them are considered a high reliability risk.", "normal_query": "For a comparative reliability analysis, calculate and contrast the percentage of robots classified with 'High Reliability Risk' between two groups: those assigned to 'High-Wear Applications' and those in all other application types. The final result should display the application group and its corresponding high-risk percentage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_11", "selected_database": "robot_fault_prediction", "query": "I need a list of our underperforming robots. Can you find all the bots that are running slower than 180 cycles an hour, and then give me just the top 5 slowest from that list? Show me their ID, maker, model, and their actual cycles per hour score, rounded to two decimals. List the absolute slowest one at the top.", "normal_query": "For a fleet-wide performance review, identify the 5 robots with a production throughput rate lower than 180 cycles per hour. For each of these robots, display its ID, manufacturer, model, and the calculated throughput in cycles per hour, rounded to two decimal places. Sort by the lowest throughput first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "robot_fault_prediction_12", "selected_database": "robot_fault_prediction", "query": "I'm working on our fleet health dashboard and need a key baseline number. What's the average vibration level if you look at all joints on all our robots? Just give me that one number, rounded to two decimal spots, please.", "normal_query": "To establish a fleet-wide health baseline, calculate the average joint vibration (`vibration_mmps`) across all joints of all robots in the entire fleet. The final result should be a single value, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_1", "selected_database": "robot_fault_prediction", "query": "We've finished the safety check for robot RB5150. 
Can you please zero out all its safety violation counters and also mark its calibration status as pending so we know it needs to be rechecked?", "normal_query": "Following a safety review for robot 'RB5150', reset all of its safety violation counters (zone, speed, emergency stops, and collisions) to zero and set its calibration state to 'Pending' to require re-verification.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_2", "selected_database": "robot_fault_prediction", "query": "We're switching robot RB0042 over from welding to material handling. Can you update its job in the system and put it in manual mode so we can set it up?", "normal_query": "Robot 'RB0042' is being repurposed from 'Welding' to 'Material Handling'. Update its application type accordingly and set its operational mode to 'Manual' for reconfiguration.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_3", "selected_database": "robot_fault_prediction", "query": "I need to adjust the maintenance schedule for some of our heavily used robots. For any robot that does welding or grinding and is due for service in the next two weeks, can you push their maintenance date out by 7 days and set the estimated cost for the job to 1200 dollars?", "normal_query": "For all robots assigned to high-wear applications such as 'Welding' or 'Grinding' whose next maintenance is due in less than 14 days, extend the maintenance interval by an additional 7 days and standardize their estimated upkeep cost to $1200.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_4", "selected_database": "robot_fault_prediction", "query": "It's time to retire the old robot, 'RB1055'. Can you please go through the decommissioning steps in the system? That means marking it as decommissioned, wiping its current program, and putting it into an emergency stop state so it can't be used.", "normal_query": "Initiate the full decommissioning procedure for the legacy robot 'RB1055'. This requires updating its operational mode to 'DECOMMISSIONED', clearing its currently loaded program, and setting its safety state to 'Emergency Stop' as a final precaution.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_5", "selected_database": "robot_fault_prediction", "query": "Hey, we fixed that 'SRVO-050' alarm on robot 'RB2073'. Can you clear that fault from the system? And since it's fixed, let's also reset the fault prediction score back to a really low number, like 0.05, and clear out what the system thought was going to fail.", "normal_query": "For robot 'RB2073', clear the active fault 'SRVO-050' as the underlying issue is resolved. 
Concurrently, reset the fault prediction metrics by setting the prediction score to a baseline low value of 0.05 and nullifying the fault type estimation.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_7", "selected_database": "robot_fault_prediction", "query": "To help us keep a constant eye on our riskiest robots, can you create a special saved list called `vw_high_risk_robots`? It should always show the bots that are considered a high safety risk, along with their ID, who made them, the model, how many times they've crashed, and their overall incident rate.", "normal_query": "As per our new safety directive, create a permanent database view named `vw_high_risk_robots`. This view must provide a live list of all robots classified as a 'High Safety-Risk Unit'. The view shall include the robot's ID, its manufacturer, model, total collision count, and its calculated Safety Incident Rate.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS vw_high_risk_robots;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "robot_fault_prediction_M_8", "selected_database": "robot_fault_prediction", "query": "Let's start tracking how often our robots get overloaded. Can you add a new field called `overload_frequency_ph` to the main performance and safety table? Once you've added it, you'll need to go back and fill it in for all the robots by calculating their overload frequency from their existing data.", "normal_query": "To enhance our performance monitoring, we need to begin tracking the 'Overload Frequency' KPI directly on the performance records. First, modify the `performance_and_safety` table to add a new column named `overload_frequency_ph` with a `REAL` data type. After adding the column, immediately populate it for all existing records by calculating their historical Overload Frequency based on their total overload events and operating hours.", "preprocess_sql": ["ALTER TABLE performance_and_safety DROP COLUMN IF EXISTS overload_frequency_ph;"], "clean_up_sqls": ["ALTER TABLE performance_and_safety DROP COLUMN IF EXISTS overload_frequency_ph;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_1", "selected_database": "exchange_traded_funds", "query": "Could you get me a ranked list of all income funds that are premier? For each one, I need the ticker symbol, its short name, its premier rank, and the calculated secure income efficiency score. Please sort the list by the efficiency score, from highest to lowest.", "normal_query": "Generate a ranked list of all premier income funds. For each fund, provide its ticker symbol, short label, its premier rank, and its calculated secure income efficiency score. 
This has to be ordered from the highest to the lowest score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_2", "selected_database": "exchange_traded_funds", "query": "Hey, can you pull up the performance history for the 'AADR' fund? I want to see how it did against its category each year. For every year, also show me what the outperformance was in the year before, and what the year-over-year outperformance change was. Just list it all out by year for me.", "normal_query": "I need to analyze the performance trend for the fund with ticker 'AADR'. Please calculate the annual fund outperformance for each calendar year. Additionally, for each year, show the previous year's outperformance and the year-over-year outperformance change. The output should contain the calendar year, the current outperformance, the previous year's outperformance, and the year-over-year change, sorted by year.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_3", "selected_database": "exchange_traded_funds", "query": "I'm worried about interest rates going up and want to find bond funds that are safer than their peers. Can you identify funds that are much less sensitive to rate changes (at least 1.5 years less) than what's typical for their category? For each of these funds, please list its ticker symbol, name, and category. Also, show me its specific sensitivity value, the average for its category, and its duration advantage, i.e., how much shorter its duration is than the average duration of its category. Sort the results by the advantage, from highest to lowest.", "normal_query": "I need to perform a peer analysis on fixed-income funds based on interest rate sensitivity. For each fund category, first calculate the category average duration. Then, identify all funds whose own duration is at least 1.5 years lower than this average. For this final list of funds, please display the ticker symbol, short label, product class, the specific fund's duration, the calculated category average duration, and the fund's duration advantage. Sort the results in descending order by the duration advantage.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_4", "selected_database": "exchange_traded_funds", "query": "I'm looking for resilient funds that do well no matter what the market is doing. Can you find funds that tend to beat their peers when the market is up, but also protect against losses better than their peers when the market is down? For each fund, show me its ticker symbol, its average outperformance in good years, and its average outperformance in bad years. Also, calculate the fund's difference in average outperformance for both scenarios. Only show me funds with at least three good and three bad years of data. Sort the list by the biggest difference first.", "normal_query": "I want to identify funds that perform well in different market cycles.
Please calculate the average upside outperformance and the average downside outperformance for each fund. Also, compute the capture differential. Display the fund's ticker symbol, its average upside outperformance, its average downside outperformance, and the capture differential. Only include funds with at least three up years and three down years of history. Sort the results by the capture differential in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 6, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_5", "selected_database": "exchange_traded_funds", "query": "I'm worried about some of our funds being too big and hard to trade. For each fund, please tell me its liquidity pressure. For the top 100 funds, I want to see a list showing the fund's ticker symbol, its size, ratio of turnover, and its calculated days to trade turnover. Show me the riskiest ones at the top of the list.", "normal_query": "I need to assess the liquidity risk for our funds. Please calculate the portfolio liquidity Pressure for each fund, which should be expressed in days. For the top 100 funds with the highest pressure, please display the ticker symbol, net worth, turnover ratio, and the calculated liquidity pressure days. The results should be sorted by the liquidity pressure days in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_6", "selected_database": "exchange_traded_funds", "query": "Hey, can you help me check funds that changed their investment style? I'm looking for funds with long histories and want to compare their 3-year vs 10-year Beta and R-squared values. For each one, give me the ticker, how much the Beta and R-squared changed, and include a quick summary of their classifications too. Then just show the top 100 funds with the biggest Beta drift, sorted from highest to lowest based on the absolute value of that drift.", "normal_query": "I need to analyze the style drift for funds with long track records. Please compare the 3-year and 10-year Beta and R-squared values for each fund. The output should include the fund's ticker symbol, the calculated Beta Drift, the R-Squared Drift, and a summary of the classifications. Sort the top 100 results by the absolute value of the Beta drift in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_7", "selected_database": "exchange_traded_funds", "query": "I want to find the top-performing fund in each category. But only look at categories that have at least 10 funds in them, and for each category, find the fund that has the highest score for peer-group comparison. For each one, show me the category name, the ticker of the best fund, its short label, and its final score. Sort everything by the highest scores first, and just give me the top 100 results.", "normal_query": "I want to find the category dominator for each fund category. For each fund category with at least 10 funds, identify the fund with the highest composite score.
The output should list the category, the ticker symbol of the dominating fund, its short label, and its final composite score. Please sort the results by the composite score in descending order and limit the output to the top 100.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_8", "selected_database": "exchange_traded_funds", "query": "I'd like to run a regression to see how portfolio turnover relates to manager skill. For each product class that has more than 25 funds, calculate the slope between alpha and turnover, and also show how good the fit is. I want to see the product class name, how many funds are in it, the slope value, and the R-squared for the regression. Sort the results by the slope from highest to lowest, and just show the top 100.", "normal_query": "I want to conduct a regression analysis to determine the relationship between portfolio turnover and manager skill. For each productclass with more than 25 funds, calculate the alpha-turnover slope and the fit quality. The output should include the productclass, the number of funds, the calculated slope, and the R-squared value for the fit. Please sort the top 100 results by the alpha-turnover slope in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_9", "selected_database": "exchange_traded_funds", "query": "Can you find funds that are potential value investments? For the top 100 that qualify, show me their ticker, short label, and where their price stands right now. Sort them from the lowest to the highest price position.", "normal_query": "Please screen for funds that match the contrarian value play profile. For the top 100 qualifying funds, please display the ticker symbol, short label, and the calculated price position. Sort the results by the price position in ascending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 8, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_10", "selected_database": "exchange_traded_funds", "query": "I have a hunch that funds that tell us more about their stock valuations actually perform better. Can we test this? Let's split all the funds into two groups: those that share their valuation numbers and those that don't. For each of those two groups, tell me how many funds are in it and what the typical 1-year return was. I'm curious to see if there's a difference.", "normal_query": "I want to compare the performance of funds based on their data transparency. Please create two groups of funds: 'Transparent' and 'Opaque', based on their valuation data availability. For each group, calculate the fund_count and the Median 1-Year Return.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": false}}
+{"instance_id": "exchange_traded_funds_11", "selected_database": "exchange_traded_funds", "query": "Can you find those rare funds that basically produce positive alpha but are passive?
I'd like to see the top 10 of these, ranked by the 5-year alpha in descending order. For each one, show me the ticker symbol, the company that runs it, and their 5-year alpha score. Also, at the end, just tell me the total number of these funds you found.", "normal_query": "I want to find all passive alpha generators. Please provide a list of the top 10, showing their ticker symbol, parent group, and their 5-year alpha. The list should be sorted by the 5-year alpha in descending order. Also, provide a separate summary count of the total number of passive alpha generators found.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_12", "selected_database": "exchange_traded_funds", "query": "I want to see which fund managers are actually worth their high fees. Can you figure out a score that shows if a manager's skill outweighs their cost? Show me the top 10 funds with the best scores, listing their ticker symbol and the score itself, sorted from highest to lowest. Also, can you tell me what percentage of all funds are actually providing a positive value for their cost?", "normal_query": "I need to rank funds based on their active manager value. Show me a list of the top 10 funds with the highest AMV, displaying their ticker symbol and the calculated AMV score, sorted from highest to lowest. Additionally, provide a scalar metric showing the percentage of all funds that have a positive AMV.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": true}}
+{"instance_id": "exchange_traded_funds_13", "selected_database": "exchange_traded_funds", "query": "How many funds out there both generate excess returns successfully and have managers who confidently put a lot of money into their best ideas? Just give me the total number.", "normal_query": "I need to know the total number of focused alpha leaders.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}}
+{"instance_id": "exchange_traded_funds_14", "selected_database": "exchange_traded_funds", "query": "I want to know the average Information Ratio after adjusting for how consistent it is. I just need that one final number.", "normal_query": "I want to calculate the average consistency-adjusted information ratio, and please provide it as a single scalar value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}}
+{"instance_id": "exchange_traded_funds_16", "selected_database": "exchange_traded_funds", "query": "What's the best score any fund has gotten for generating steady income efficiently? Just give me the highest number and I only need the top score across the board.", "normal_query": "I need to find the single highest secure income efficiency score across all funds.
Please provide only the maximum SIES value as a single scalar result.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_17", "selected_database": "exchange_traded_funds", "query": "How many funds are truly standing out from the crowd with their active strategies? Just give me the total number, one single value.", "normal_query": "I want a total count of all true active differentiators. Please provide a single scalar value for the total count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_18", "selected_database": "exchange_traded_funds", "query": "How many funds are really going against the grain and are potential value investments? Just give me the final count; one number is all I need.", "normal_query": "I need the total number of funds that qualify as a contrarian value play. Please provide the final count as a single scalar value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_19", "selected_database": "exchange_traded_funds", "query": "Which exchange moves the most money overall? Let's first figure out the average daily trading volume for each fund. Then, total those numbers for every exchange. Just tell me the one exchange where the most trading happens, the biggest player in terms of total value traded.", "normal_query": "I need to identify the most liquid exchange. To do this, first calculate the average daily value traded for every fund. Then, sum this value for all funds on each exchange. Finally, return the name of the single exchange with the highest total value traded.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_20", "selected_database": "exchange_traded_funds", "query": "How much money is being wasted on fees by funds that are basically just hugging the index? For every one of those closet indexers, add up the fees that aren't really earning their keep. In the end, just give me one total number, the grand total of all those wasted fees, rounded to 2 decimal places.", "normal_query": "I need to calculate the total wasted fee amount for all funds classified as a closet indexer. Finally, sum these amounts together and provide a single scalar value rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_1", "selected_database": "exchange_traded_funds", "query": "It's time to refresh our list of funds with positive momentum. Can you wipe the old daily_golden_cross_leaders table clean (create it if it doesn't exist) and then fill it up again with all the funds whose short-term price average is above their long-term average? 
For each one, I need the ticker symbol, who runs it, the momentum score, and today's date.", "normal_query": "Please run the daily job to update the daily_golden_cross_leaders table. First, ensure the table exists. Then, clear out all existing data from it. Finally, populate it with all funds that are currently showing a golden cross signal. The table should contain the fund's ticker symbol, parent group, the calculated short-term momentum indicator value, and today's date as the report date.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_2", "selected_database": "exchange_traded_funds", "query": "I need a tool that can calculate the stock-picking skill for any fund. Can you build a function called get_appraisal_ratio that I can use on any ticker? It should figure out the fund's appraisal ratio over the last 3 years and be smart enough not to break if the data isn't perfect, like nulls or invalid data.", "normal_query": "Please create a reusable SQL function named get_appraisal_ratio that takes a fund's ticker symbol as input. This function should calculate the fund's 3-year appraisal ratio. Inside the function, handle potential null or invalid data to avoid errors.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_3", "selected_database": "exchange_traded_funds", "query": "I want to create a pre-calculated list named vw_reliable_core_holdings. This list should contain all funds that are reliable as a core portfolio holding. For each qualifying fund, please include its ticker symbol, short label, parent group, product class, and launch date.", "normal_query": "I want to create a pre-calculated list named vw_reliable_core_holdings. This list should contain all funds classified as a reliable core holding. For each qualifying fund, please include its ticker symbol, short label, parent group, product class, and launch date.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_4", "selected_database": "exchange_traded_funds", "query": "Let's make a refreshable summary table called family_risk_summary to track the family risk profile for each fund company. For each company, store the family name, and can you calculate their average market risk (3-year beta), their median risk-adjusted return (3-year Sharpe Ratio), and the number of funds that generate more than 0 alpha in a 5-year period? When the query is run, it should just update the table with these fresh calculations and the current time.", "normal_query": "I need to perform an upsert operation on a summary table named family_risk_summary. This table should store a family risk profile for each fund family. 
For each family, calculate and store the family name, the average 3-year beta, the median 3-year Sharpe ratio, a total count of their alpha generator funds, and a timestamp of when the record was last updated.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_6", "selected_database": "exchange_traded_funds", "query": "Let's rebuild our summary of funds that are basically just expensive index trackers. Can you wipe the closet_indexer_summary table clean and then fill it again (create it if the table doesn't exist)? For every fund that just copies the market but still charges more than its benchmark, I want to see how much money is being wasted in extra fees. Put the fund's ticker, the company name, and that wasted fee amount into the table.", "normal_query": "I need to refresh the closet_indexer_summary table. Please ensure the table exists, then clear all its existing data. Afterward, identify all closet indexer funds, calculate their total wasted fee amount, and insert the fund's ticker symbol, its family name, and the calculated amount into the table.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_7", "selected_database": "exchange_traded_funds", "query": "I need to clean up the annual_returns table by moving old data into a separate archive. Let's create a procedure called archive_old_returns that takes a number of years as input; this tells it how many years of data we want to keep. Any return records older than that should be moved to annual_returns_archive, and then deleted from the main table. This is important because we use the archived data to calculate long-term outperformance. Once the procedure is ready, run it to archive everything older than 10 years.", "normal_query": "I need to move historical data from annual_returns, which is an active table, to an archive table called annual_returns_archive. You should create a procedure named archive_old_returns, and it must be designed to take an integer for the number of years to retain. It should move records from annual_returns to annual_returns_archive if they are older than the specified retention period, and then delete the moved records. This is important because the archived data is the basis for calculating annual fund outperformance. Please create the procedure and then CALL it to archive all records older than 10 years.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_8", "selected_database": "exchange_traded_funds", "query": "I want to automatically get an alert when a fund's investment strategy seems to be changing. Can you set up a trigger function called log_style_drift that keeps a log for us? Whenever a fund's risk numbers get updated in the risk_metrics table, I want the function to check if its market risk or its correlation to the benchmark has shifted significantly. 
If it has, please add a new line to a style_drift_log table with the fund's ticker symbol, the old and new risk values, and when it happened. If the style_drift_log table does not exist, please create it first.", "normal_query": "I need you to implement a trigger to monitor for style drift in funds, and the trigger function should be called log_style_drift. First, create a new table named style_drift_log to record these events. Then, create a trigger function that fires after any update on the risk_metrics table. If the change meets the definition of style drift, it should insert a new record into style_drift_log containing the fund's ticker symbol, the old and new beta/R-squared values, and the current timestamp.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_9", "selected_database": "exchange_traded_funds", "query": "Let's add a new performance stat to the funds table. First, make sure there's a column called rrei_score, and that's where we'll store each fund's index of risk-return efficiency. Then, set up a function called calculate_rrei that figures out this score based on the fund's ticker. Once that's ready, go ahead and fill in the rrei_score for every fund by calling the function for each one.", "normal_query": "I want to enrich the funds table with a new calculated metric. First, add a new rrei_score column if it's not already there. Next, create a function calculate_rrei that computes the risk-return efficiency index for a given fund ticker. Finally, update the funds table to populate the rrei_score for every fund by calling this new function.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "exchange_traded_funds_M_10", "selected_database": "exchange_traded_funds", "query": "Let's create a summary table called family_sector_focus and keep it up-to-date. I want to see which single industry each fund company is most heavily invested in. For each company, give the company's name, find their top industry and the average investment percentage in it, and then either add them to the table or update their existing entry with this new info and the time of the update.", "normal_query": "I want to insert a new summary record or update an existing one, ensuring data freshness without duplicates, to track the family sector concentration profile. Ensure a table named family_sector_focus exists. Then, for each fund family, insert or update the table with the family name, the top sector name, the average weight in that sector, and the current timestamp.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_1", "selected_database": "disaster_relief", "query": "We need to pinpoint which operations must be flagged as critical. Let's find any operation responding to a 'Catastrophic' level disaster. For these, show me their ID, the area they are in, and that final disaster severity score, rounded to two decimals.", "normal_query": "Generate a report of all operations that require escalation to 'Critical' priority. 
An operation qualifies if it is responding to a disaster classified as 'Catastrophic'. The report must include the operation's reference ID, the affected area, and the specific DSI value, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_2", "selected_database": "disaster_relief", "query": "For planning purposes, I need to get a handle on our resource situation across all disasters. Could you pull a report for every event showing what it was and where it happened, and then calculate two key things for me? First, how many days will our current supplies last, considering food and water as separate constraints? Second, what is the current shelter shortfall? Please round any calculated numbers to two decimal places.", "normal_query": "Provide a logistical status report for all recorded disasters. The report must contain the disaster ID, hazard type, and affected area, along with two calculated metrics, rounded to two decimal places: the 'Supply Sufficiency (Days)' and the 'Shelter Gap'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_3", "selected_database": "disaster_relief", "query": "I need to flag any disaster that was a Mass Casualty Incident. Can you pull a list of them and show me the ID and location for each?", "normal_query": "Identify all disasters classified as a Mass Casualty Incident (MCI). For each qualifying disaster, return its ID and the affected area.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_4", "selected_database": "disaster_relief", "query": "How are our distribution hubs holding up? Let's create a report for each hub. I want a strain score that considers both how busy the hub is internally and how severe the disasters are that it's serving. If that final score is over a million, flag it as 'Overwhelmed', otherwise it's 'Normal'. Show me the hub ID, the final score rounded to two decimals, and its status.", "normal_query": "Let's assess the status of our distribution hubs. For each hub, provide its ID and its calculated Hub Strain Index (HSI), rounded to two decimal places. Also include a 'status' column classifying the hub as 'Overwhelmed' if its HSI exceeds 1,000,000, and 'Normal' otherwise.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_5", "selected_database": "disaster_relief", "query": "I'm worried about operations running out of cash. Can you identify which are in a Funding Crisis? For that list, show me their ID, funding status, and exactly how many days they have left, to one decimal place.", "normal_query": "Identify all ongoing operations that are in a Funding Crisis. 
The report should include the operation ID, its funding state, and the specific Budget Runway in days, rounded to one decimal place.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_6", "selected_database": "disaster_relief", "query": "I want to do a post-mortem on missions that didn't go well. Can you pull up the full coordination and evaluation details for any completed operations we've flagged as a Failing Operation?", "normal_query": "For post-mission analysis, retrieve the complete coordination and evaluation records for all completed operations that are classified as a Failing Operation.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_7", "selected_database": "disaster_relief", "query": "Let's find our top performers. Can you identify all missions that qualify as a Highly Effective Operation? Please list their ID, along with their RES and CQI scores. Sort the list to show the most effective ones at the top.", "normal_query": "Generate a report identifying all operations designated as a Highly Effective Operation. The report must include the operation ID, its Response Effectiveness Score (RES), and its Coordination Quality Index (CQI), sorted by RES in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "disaster_relief_8", "selected_database": "disaster_relief", "query": "Let's find any areas on the brink of a major health crisis. Can you run the numbers and find all operations where the public health risk score is over 70? This score should be based on things like disease risk, sanitation, and medical staff ratios. For any you find, just list the operation's ID and that final risk score, rounded to two decimal places.", "normal_query": "Produce a report identifying all operations that should be flagged for a Public Health Emergency. An operation is flagged if its calculated Public Health Risk Score (PHRS) exceeds 70. For each flagged operation, show the operation ID and its calculated PHRS, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_9", "selected_database": "disaster_relief", "query": "A simple list of stuck operations isn't enough; we need to know which ones are the biggest emergencies. Can you rank all operations in Logistical Gridlock by how critical they are using the Gridlock Severity Index? Then, show me a list of the worst ones at the top, with their location and that severity score, rounded to two decimals.", "normal_query": "Generate a prioritized report of all operations in Logistical Gridlock. The report should rank operations by a calculated 'Gridlock Severity Index'. 
Return the operation ID, affected area, and the index, rounded to two decimal places, sorted with the most severe cases first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "disaster_relief_10", "selected_database": "disaster_relief", "query": "Give me a bird's-eye view of how our regions are performing. Can you produce a summary showing each region (in lowercase), their average coordination quality score (rounded to two decimals), and a simple count of how many disasters they've handled?", "normal_query": "I need a regional performance summary. For each region tag (in lowercase), calculate and display the average Coordination Quality Index (CQI), rounded to two decimal places, and the total count of disasters that have occurred in that region.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_11", "selected_database": "disaster_relief", "query": "Let's get a big picture metric on our efficiency. For all the missions we've officially wrapped up, how much did it cost us on average to help a single person? Just give me that final dollar amount, rounded to two decimal places.", "normal_query": "For all 'Completed' operations, calculate the overall 'Cost Per Beneficiary (CPB)'. The final value should be rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_12", "selected_database": "disaster_relief", "query": "I want to know how our vehicles are holding up in the absolute worst conditions. For all those disasters where we could barely get in—the ones marked with 'Minimal' transport access—what's our average vehicle breakdown rate? Just give me that single percentage, rounded to one decimal.", "normal_query": "Determine the average vehicle breakdown rate for all transportation units involved in disasters where transport access is rated as 'Minimal'. Present the result as a percentage rounded to one decimal place.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_M_1", "selected_database": "disaster_relief", "query": "Let's add a supply status flag to each disaster record to make our dashboards easier to read. Can you update our records based on how many days their supplies will last? If it's less than two days, tag it 'Critical'. If it's under five, call it 'Low'. Anything else is 'Adequate'. Please also store the calculated number of days in there.", "normal_query": "Update the impact_summary JSONB field for all disaster events to include a new 'supply_status' object. This object should contain the calculated supply sufficiency in days and a status classification based on that value. 
Classify the status as 'Critical' for a runway under 2 days, 'Low' for under 5 days, and 'Adequate' otherwise.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_M_2", "selected_database": "disaster_relief", "query": "We need a standard way to figure out our cost per ton for aid delivery. Let's create a reusable function, call it calculate_adc, that does this for us. It should take a transportation ID, find the associated costs and total tons delivered, and return the cost per ton. Also, build in a sanity check: if delivery tons are missing or zero, it should throw an error instead of dividing by zero.", "normal_query": "Create a PL/pgSQL function named calculate_adc that calculates the aid delivery cost per ton. The function should accept a single transportation ID as an input parameter. It must compute the total transport costs associated with that ID and divide it by the total tons delivered. The function must include validation to raise an exception if the total delivery tons are null or zero.", "preprocess_sql": [], "clean_up_sqls": ["DROP FUNCTION IF EXISTS calculate_adc(VARCHAR(20));"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "disaster_relief_M_3", "selected_database": "disaster_relief", "query": "I want to set up some automatic financial red flags. Can you build a function that checks all our active operations and alerts us based on two rules? First, if an active operation has already spent more than 20 percent of its total budget, raise a 'High Burn Rate' alert. Second, calculate how many days of funding they have left at their current spending rate; if it's less than a week, raise a 'Funding Crisis' alert.", "normal_query": "Create a function named get_financial_alerts that returns a table of alerts for active operations. It should generate a 'High Burn Rate' alert if an operation's total costs exceed 20% of its allocated budget. It should also generate a 'Funding Crisis' alert if the calculated budget runway, based on the daily burn rate, is less than 7 days.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_1", "selected_database": "households", "query": "I want to find the most 'plush' region. To do this, first figure out a 'comfort score' for each house by dividing its number of bathrooms by its number of residents—but only for houses where we know the bathroom count. Then, find the average comfort score for each region. Once you've identified the single region with the best average score, go back and add up the car counts for every household in that specific region and tell me the grand total.", "normal_query": "A 'Comfort Index' is calculated for each household by dividing its bathroom count by its resident count. Find the region (`locregion`) with the highest average 'Comfort Index', considering only households with a known bathroom count and at least one resident. 
For this top-ranking region, calculate the total number of cars ('Auto_Count') owned by all its households combined.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_2", "selected_database": "households", "query": "Let's find the biggest welfare fraud hotspot. A family is a red flag if they get aid but have a lot of new vehicles (more than 2, newest from 2010+). I want to know which region has the highest concentration of these red-flag families as a percentage of their total population. Just give me the name of that region.", "normal_query": "To identify potential fraud hotspots, first define a 'High-Risk Household' as one that is 'Supported' (socsupport is 'Yes') and 'High-Mobility' (total vehicles > 2, newest vehicle 2010 or later). Then, for each region, calculate the percentage of its total households that are 'High-Risk'. Finally, return the `locregion` with the highest percentage of 'High-Risk' households.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_3", "selected_database": "households", "query": "We're looking for welfare fraud. Flag any family that gets aid and also has a lot of newish vehicles. Specifically, a 'high-mobility' family has more than two vehicles in total, and their newest one is from 2010 or later. Give me a unique list of the flagged household IDs.", "normal_query": "The government is investigating potential welfare fraud. A household is flagged for review if it meets two conditions simultaneously: 1) its `socsupport` status is 'Yes', AND 2) it is a 'High-Mobility' household. A 'High-Mobility' household is defined as one where the sum of all vehicles is greater than 2, AND its 'Newest_Year' is from 2010 or later.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "households_4", "selected_database": "households", "query": "Let's find all the 'roomy' apartments. A place is 'roomy' if it has more than 20 square meters per person. To figure out the total square meters, assume every bathroom is 10 sq meters and every bedroom (use the `Room_Count` field) is 15 sq meters—if a count is missing, just treat it as zero. After you get the total area, divide it by the number of people in that house, making sure to handle cases with no residents. Finally, just tell me how many apartments in total are 'roomy'.", "normal_query": "An urban planning initiative gives a 'Space Bonus'. To qualify, an apartment must have more than 20 square meters per resident. The total square meters is calculated as (the number of bathrooms * 10) plus (the number of bedrooms * 15), using the `Bath_Count` and `Room_Count` fields from `dwelling_specs` respectively. Any missing counts should be treated as 0. This total area is then divided by the household's resident count, avoiding any division-by-zero errors. 
Return the total count of apartments that qualify for the bonus.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_5", "selected_database": "households", "query": "I need to find our longest-standing 'large, wealthy' family. A family qualifies if they own their home, are in the top two income brackets, and have more than 4 people. From that list, find the one with the lowest household ID (our oldest record) and tell me their region and zone.", "normal_query": "Identify 'affluent, large' families, defined as owner-occupied households in the top two income brackets with more than 4 residents. From this group, find the household that has been in the system the longest (i.e., has the lowest `housenum`). For this specific household, list its `locregion` and `loczone`.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_6", "selected_database": "households", "query": "In the Taguatinga area, find all the 'overcrowded' homes (more than 3 people per bedroom). Once you have that list, figure out the average number of vehicles (cars, bikes, motorcycles all counted) that this specific group owns. Give me the final number, rounded.", "normal_query": "For the 'Taguatinga' region, calculate the 'crowding score' for each household. Identify all households with a score over 3. For this specific group of 'overcrowded' households, determine the average number of total vehicles they own (sum of autos, bikes, and motors), rounded to an integer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "households_7", "selected_database": "households", "query": "Let's compare crowded city homes to crowded country homes. How many more crowded city homes are there than crowded country homes? Give me the difference.", "normal_query": "We want to compare two groups of households: 'Urban Crowded' and 'Rural Crowded', based on our established definitions for 'Urban' and 'Crowded' households. Calculate the count for each group and return the difference (Urban Crowded count - Rural Crowded count).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_8", "selected_database": "households", "query": "Let's score each house's infrastructure. First, find the average score for each region. Then, tell me which region has the best average score and which has the worst, in a single line like 'BestRegion | WorstRegion'.", "normal_query": "Calculate the 'Infrastructure Score' for each household. Then, for each region, find the average score. 
Finally, list the region with the highest average score and the region with the lowest average score as a single string: '[Highest Region] | [Lowest Region]'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_9", "selected_database": "households", "query": "Let's score each house's infrastructure. First, find the average score for each region. Then, tell me which region has the best average score and which has the worst, in a single line like 'BestRegion | WorstRegion'.", "normal_query": "Calculate the 'Infrastructure Score' for each household. Then, for each region, find the average score. Finally, list the region with the highest average score and the region with the lowest average score as a single string: '[Highest Region] | [Lowest Region]'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_10", "selected_database": "households", "query": "I'm looking for the hotspot of 'high-tech, high-mobility' families. These are people with lots of newish vehicles (more than 2, newest from 2005+) who also live in modern homes (house, apt, condo) with available TV service. Find all these families, then tell me which single region has the most of them.", "normal_query": "Identify households that are both 'High-Mobility' (more than 2 total vehicles, newest from 2005 or later) and 'High-Tech' (living in a modern dwelling like a house, apartment, or condo with available TV service). After finding this group, determine which `locregion` has the highest count of these specific households.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_11", "selected_database": "households", "query": "Let's find our top 10 families by their financial health score, which considers their income, spending habits, and homeownership status. After you get that list, tell me what percentage of them has a private garage.", "normal_query": "Using the 'Socioeconomic Index' (SEI), identify the top 10 households with the highest scores. Then, for this elite group, calculate the percentage of them that have access to a 'Private Garage'.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "households_12", "selected_database": "households", "query": "Find all the families that don't get social aid or domestic help and own more than one vehicle. From those, figure out which type of home has the highest average 'prosperity score'. Then, tell me the total number of vehicles owned by families in that home type.", "normal_query": "Identify all 'independent' households (defined as those receiving no social support or domestic help and owning more than one vehicle). Among them, find the dwelling class with the highest average 'Household Prosperity Score'. 
For that top-ranking dwelling class, what is the total number of vehicles owned by the 'independent' households living there?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_13", "selected_database": "households", "query": "I want to find the region with the highest concentration of families on social support. For each region, figure out what percentage of its total families receive aid. Then, just show me the name of the region with the top percentage and the percentage itself, rounded to two decimal places.", "normal_query": "For each region, calculate the ratio of households receiving social support to the total number of households in that region. Return the name of the region with the highest percentage, along with the ratio expressed as a percentage rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "households_14", "selected_database": "households", "query": "Let's find the biggest spender among our 'comfortable' families. A family is 'comfortable' if they have a high living score and enough bathrooms for their size. First, make a list of all these families. Then, from that list, find the one with the highest spending number and tell me their household ID.", "normal_query": "A 'Comfortable Household' is defined as one with a 'Living Condition Score' over 3 and a bathroom-to-resident ratio over 0.5. First, identify all such households based on these criteria. Then, from this group, find the household with the highest 'Expenditure Coefficient' and return its `housenum`.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_15", "selected_database": "households", "query": "I need to find the region with the biggest 'at-risk' population. A household is 'at-risk' if they get social aid AND they're very overcrowded (more than 4 people per bedroom). For every region, calculate what percentage of their families are 'at-risk'. Then, just tell me the name of the region with the highest percentage.", "normal_query": "Identify 'At-Risk' households, defined as those receiving social support and having a household density (residents per bedroom) over 4. Then, for each region, calculate the percentage of all its households that are 'At-Risk'. 
Finally, return the region with the highest percentage of 'At-Risk' households.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_16", "selected_database": "households", "query": "What's the income bracket for household number 3?", "normal_query": "What is the income classification of the household with number 3?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_17", "selected_database": "households", "query": "How many wealthy families who own their own homes live in the Taguatinga area?", "normal_query": "How many households in the 'Taguatinga' region are owner-occupied and fall within the top two income brackets?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_18", "selected_database": "households", "query": "Which modern-style homes (like brickwork houses or apartments) in the 'Guará' area also have TV service? List their household numbers in order.", "normal_query": "List the household numbers for modern dwellings in the 'Guará' region, sorted by household number. A modern dwelling is defined as a specific dwelling type with available TV service.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "households_19", "selected_database": "households", "query": "Which household in an urban area owns the most cars?", "normal_query": "What is the household number with the most passenger vehicles among all households in urban areas (defined by infrastructure)?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_20", "selected_database": "households", "query": "How many families in each region get government help? List the regions from the one with the most helped families to the one with the least.", "normal_query": "Count the number of households receiving social support in each region, sorted by count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "households_M_1", "selected_database": "households", "query": "The city has a $10k fund to give cable to modern homes that don't have it. It costs $75 per house. First, check if we have enough money to cover everyone who's eligible. If we do, go ahead and update their status to 'Subscribed'. After you're done, tell me exactly how many homes got the upgrade.", "normal_query": "A municipal program with a budget of $10,000 aims to upgrade cable infrastructure for 'modern dwellings' that currently have 'No Service Available'. 
If the cost per household is $75, first determine if the total cost for all eligible households is within budget. If it is, perform the update to set their cable status to 'Subscribed'. Finally, return the total number of households that were successfully updated (which will be 0 if the budget was exceeded).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_2", "selected_database": "households", "query": "I want to delete vehicle records for families with no income info, but only if it's a small cleanup. First, check what percentage of our total vehicle data this would remove. If it's less than 5%, go ahead and delete them, then tell me how many you deleted. If it's 5% or more, don't delete anything and just tell me '0'.", "normal_query": "As a data quality measure, we need to purge transportation assets for households with a null income bracket. However, to prevent accidental mass deletion, this operation is only permitted if the number of affected records is less than 5% of the total transportation assets. First, calculate the percentage of records that would be deleted. If this percentage is below 5%, proceed with the deletion and return the count of deleted records. Otherwise, return 0.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_4", "selected_database": "households", "query": "Add a new family: household 5000, Taguatinga, zone 315, 3 people, no social services, owns their home.", "normal_query": "Register a new household with number 5000 in 'Taguatinga', zone 315, with 3 residents, no service plan, and owned tenure.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_5", "selected_database": "households", "query": "Remove all vehicle records for families where we don't have their income information.", "normal_query": "Purge transportation assets for households with no income classification.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_6", "selected_database": "households", "query": "What type of building does household 1182 live in?", "normal_query": "What is the dwelling type of household number 1182?", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Dwelling_Type_1182;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_7", "selected_database": "households", "query": "Count how many small living spaces (apartments or studios) house only one or two people.", "normal_query": "How many households are in compact dwellings (Apartment or Studio) with fewer than 3 residents?", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Compact_Household_Count;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": 
"Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_8", "selected_database": "households", "query": "How many well-maintained houses (like brickwork houses and apartments) also have TV service available?", "normal_query": "What is the total number of dwellings considered well-maintained, based on their type and available TV services?", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Well_Maintained_Dwellings_Count;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_9", "selected_database": "households", "query": "We have a $5000 monthly budget to survey premium city families. A family is 'premium' if they own a high-income home, have great infrastructure (city water, paved roads), and aren't overcrowded (no more than 2 people per bedroom). If each survey costs $150, what's the total cost, and are we over or under budget? Give me a summary like 'Cost: $XXXX, Budget Status: Within Budget'.", "normal_query": "A research institute has a total monthly budget of $5000 to identify and survey 'premium urban households'. A household qualifies as 'premium urban' if it meets three criteria: 1) It is owner-occupied with an income level of 'High Income' or 'Very High Income'. 2) It has piped water and resides on roads with asphalt or concrete surfaces. 3) The household's resident-to-bedroom ratio does not exceed 2. If the cost to survey each qualifying household is $150, calculate the total survey cost and determine if it is within the monthly budget. The final output should be a single string: 'Cost: [Total Cost], Budget Status: [Within Budget/Exceeds Budget]'.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Survey_Budget_Analysis;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "households_M_10", "selected_database": "households", "query": "I need to find the 5th most crowded house. First, figure out the 'people per bedroom' number for all crowded houses (more than 2 people/bedroom). Then, convert that number to a 'strain index' by multiplying it by 15. Finally, tell me the ID of the house that ranks 5th on this new strain index list.", "normal_query": "To assess housing strain, we define a 'density score' as residents per bedroom. For international comparison, this score needs to be converted to a 'strain index' where 1 unit of density equals 15 'strain points'. Generate a ranked list of households with a density score greater than 2, showing their household number and the calculated strain index (as an integer). From this list, identify the household number with the 5th highest strain index.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": true}} +{"instance_id": "planets_data_1", "selected_database": "planets_data", "query": "I'm curious about the gravity on bigger, rocky worlds. For all the confirmed super-earths found by watching their star's light dip (no matter how the method is written), what's their average surface gravity? 
Just give me the number, rounded to two decimal spots.", "normal_query": "What is the average planet surface gravity for all confirmed exoplanets that are larger than Earth but no more than ten times its mass, have a density greater than 3 g/cm³, and were discovered by observing the dimming of their host star? The check for the discovery method must be case-insensitive. Provide the result as a single scalar value rounded to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "planets_data_2", "selected_database": "planets_data", "query": "I'm looking for Jupiter-like planets that are both scorching hot and spinning backwards, but only those where we know the star's mass and the planet's orbital distance. Can you list them out for me? I want to see their names, how long their year is, their orbital tilt, and how fast they're zipping around their star in kilometers per second. Put the fastest ones at the top.", "normal_query": "Generate a table of all hot jupiter planets that are also in a retrograde orbit, and for which the host star's mass and the planet's semi-major axis are known. Please display the host star's name, its orbital period in days, its inclination in degrees, and its calculated orbital velocity in km/s. Sort the results in descending order of the orbital velocity.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "planets_data_4", "selected_database": "planets_data", "query": "Let's fact-check Kepler's law on star systems with multiple planets. For each of these systems, take the planet that's farthest out (based on a known semi-major axis) and use its orbit to calculate its star's mass, but only if its orbital period is also known and positive. Show me the star's name, its official mass, and the mass we just calculated. Then, sort them to show the ones where our calculation was closest to the real value first.", "normal_query": "For each star that hosts a multi-planetary system, calculate the kepler's third law verification value for its outermost planet (the one with the largest known semi-major axis). Only include planets with a known and positive orbital period for this calculation. Display the host star's name, the recorded mass from the database, and the calculated mass. Order the results by the absolute difference between the recorded and calculated mass, from smallest to largest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "planets_data_5", "selected_database": "planets_data", "query": "Of all the stars that have a measured mass and a positive radius, which one is the most tightly packed? Give me its name. You'll need to convert solar mass to kg using 1.98847E30 and solar radius to meters using 6.957E8 to do the calculation.", "normal_query": "For all stars with a known mass and a known, positive radius, what is the name of the one with the highest calculated stellar density? Provide the name as a single text result. 
Note: To calculate density in SI units, use the conversion factors 1.98847E30 for solar mass to kg, and 6.957E8 for solar radius to meters.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "planets_data_6", "selected_database": "planets_data", "query": "Show me a list of planets that have highly eccentric orbits and are flagged for having a minimum mass measurement (look for 'msini', regardless of case). For each one that has a known mass and is orbiting a star with a known, positive mass, I'd like to see the planet's name, its star's name, its eccentricity value, and the planet-to-star mass ratio. To calculate the ratio, use 1.898E27 kg as the mass of Jupiter and 1.98847E30 kg as the mass of the Sun. Please show the ratio with 5 decimal places and sort the list from the smallest ratio to the largest.", "normal_query": "I want a report on all planets that have a high eccentricity orbit and also have their minimum mass status flagged (case-insensitive match for 'msini'). For each planet with a known mass, whose host star also has a known and positive mass, show its full name, its host star, its eccentricity, and calculate its planet-star mass ratio to 5 decimal places. Use 1.898E27 kg for Jupiter's mass and 1.98847E30 kg for the Sun's mass for the ratio calculation. Order the results by the mass ratio in ascending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 5, "distinct": false, "order": true}} +{"instance_id": "planets_data_7", "selected_database": "planets_data", "query": "For all the big gassy planets found by Kepler's second-chance mission (matching 'k2' however it's capitalized), what's their average surface temperature, basically? Only include cases where we know the star's temperature, the star's radius, and the planet's orbital distance, and all are positive numbers. Give me that in kelvin, rounded to a whole number. Note that you'll need to use 6.957E8 to convert solar radius to meters and 1.496E11 to convert AU to meters.", "normal_query": "Find the average planetary equilibrium temperature for all gas giant planets discovered by the successor to the original Kepler mission (case-insensitive match for 'k2'). Only include planets for which the host star's temperature and radius, and the planet's semi-major axis, are all known and positive. Express the result in kelvin and round to the nearest whole number. Note: for the temperature calculation, use conversion factors of 6.957E8 for solar radius to meters and 1.496E11 for AU to meters.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "planets_data_8", "selected_database": "planets_data", "query": "When planets found by the star wobble method (no matter how it's capitalized) pass in front of their star, what's the biggest dimming effect we could see? Only consider planets that have a known radius and orbit a star with a known, positive radius. Tell me that maximum dip in brightness as a percentage with four decimal places, and also name the planet and star responsible. 
You'll need to use the conversion that 1 solar radius is 109.2 Earth radii.", "normal_query": "What is the maximum transit depth, expressed as a percentage to 4 decimal places, for any planet discovered via the radial velocity method (case-insensitive match) where both the planet's radius and the host star's radius are known and positive? Also, provide the full name of the planet and its host star. Note: To compare radii, use the conversion factor 1 solar radius = 109.2 Earth radii.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 4, "distinct": false, "order": true}} +{"instance_id": "planets_data_9", "selected_database": "planets_data", "query": "Find me the rocky super-earth that has the strongest gravity pull of them all, considering only planets with known mass, radius, and density. Once you've pinpointed that planet, tell me what its mass is as a fraction of its star's mass, assuming the star's mass is also known and positive. I need that final number in scientific format, with 7-digit precision.", "normal_query": "Determine the planet-star mass ratio for the specific super-earth that exhibits the highest planet surface gravity. Only consider planets with known mass, radius, and density, orbiting stars with a known and positive mass. The final ratio should be a single value, expressed in scientific notation with 7 digits of precision.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 6, "distinct": false, "order": false}} +{"instance_id": "planets_data_10", "selected_database": "planets_data", "query": "On average, how far away are the stars that have those big, puffy gas planets? I only want to include stars where we have a solid distance number, not just a 'less than' value, and where the brightness measurement isn't messed up by other stars nearby. Show the result in light-years with two decimal points.", "normal_query": "What is the average distance in light-years to host stars of inflated gas giant planets? Only include host stars where the distance measurement is not an upper limit value and the photometric magnitude measurement is not affected by a blended measurement. Give the answer to 2 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "planets_data_11", "selected_database": "planets_data", "query": "Find me the planetary systems that are really tightly packed, but only if their closest-in planet is also super fast. For these systems, I want to see the star's name and the average orbital period ratio calculated using the geometric mean, with three decimal points. Sort the list by that average ratio, from highest to lowest.", "normal_query": "Identify compact systems where the innermost planet is a short-period planet. For each such system, list the host star name and calculate the geometric mean of all orbital period ratios between adjacent planets, rounded to 3 decimal places. 
Order the result by this geometric mean descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "planets_data_12", "selected_database": "planets_data", "query": "How many different stars have planets that were discovered by the ttv method, where they look for wobbles in a planet's transit schedule? Make sure you find 'ttv' regardless of its case.", "normal_query": "What is the total number of distinct host stars for which a planet was found by analyzing timing deviations in an already known transit? The search for the method name 'ttv' must be case-insensitive.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": true, "order": false}} +{"instance_id": "planets_data_13", "selected_database": "planets_data", "query": "Find the planet with the biggest mass-to-size ratio, only looking at planets where we have a measured mass and a measured, non-zero radius. Then tell me its escape velocity in kilometers per second. Just give me a whole number.", "normal_query": "Calculate the planet escape velocity in km/s for the planet with the highest confirmed mass-radius relationship value. Only consider planets with a known, non-zero mass and radius. Provide the result rounded to the nearest integer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": true}} +{"instance_id": "planets_data_14", "selected_database": "planets_data", "query": "Can you count up how many stars were observed with each type of light filter? Make sure to lump all the different ways of writing 'v-band' together, and do the same for 'kepler-band', ignoring capitalization. Just ignore the 'k-band' ones for now. Then show me the cleaned-up filter name and how many stars for each, with the most-used filter at the top.", "normal_query": "For each photometric band in the 'stars' table, count the number of stars observed. Standardize the band names (case-insensitively): 'v (johnson)', 'johnson', 'v', 'johnson v', and 'v-band' should all be grouped as 'v-band'; 'kepler-band', 'kepler', 'kep-b', and 'kep' as 'kepler-band'. Ignore 'k-band' and any nulls. Show the standardized band name and the count, ordered by count descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "planets_data_15", "selected_database": "planets_data", "query": "If you look at stars that only have rocky planets and no gas giants, what is their average brightness compared to our sun? Only include planets where we know their density or mass, and only stars where we know their radius and temperature. I need the number with 4 decimal places.", "normal_query": "What is the average stellar luminosity of stars that host at least one rocky planet, but have no gas giant planets in the system? This analysis should only consider planets with a known density or mass and stars with a known radius and temperature. 
Calculate the result relative to the sun and provide it to 4 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 4, "distinct": false, "order": false}} +{"instance_id": "planets_data_16", "selected_database": "planets_data", "query": "How many planets did kepler find where the star's temperature reading is wonky because of other nearby stars?", "normal_query": "Count the number of planets whose discovery is attributed to the kepler mission and are part of a system with a blended measurement for stellar temperature.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_17", "selected_database": "planets_data", "query": "Can you give me the coordinates for the star '55 cnc' on an hr diagram, matching the name regardless of its case? I need its temperature and its luminosity relative to the sun, with the luminosity value having 3 decimal points.", "normal_query": "Provide a hertzsprung-russell (hr) diagram position for the star '55 cnc' (case-insensitive match). List its effective temperature and its calculated stellar luminosity. Round the luminosity to 3 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "planets_data_18", "selected_database": "planets_data", "query": "I want to find the hottest planet that isn't a 'hot jupiter'. Only look at planets where we know the star's temperature and radius and the planet's orbital distance so you can do the calculation. Can you tell me its name, its star, and its temperature in kelvin? Please round the temperature to a whole number. You'll need to convert star radius from solar radii to meters using 6.957E8 and orbital distance from AU to meters using 1.496E11.", "normal_query": "Find the planet with the highest planetary equilibrium temperature that is not classified as a hot jupiter. Only include planets for which the host star's temperature and radius, and the planet's semi-major axis, are all known and valid for the calculation. Return the planet's letter, its host star name, and its calculated equilibrium temperature in kelvin, rounded to the nearest integer. Note that for unit consistency, you should use the conversion factors 6.957E8 for solar radius to meters and 1.496E11 for AU to meters.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": true}} +{"instance_id": "planets_data_19", "selected_database": "planets_data", "query": "For how many planets do we have a size measurement, but we know it's just a 'less-than-or-equal-to' kind of number because it's marked as an upper limit?", "normal_query": "How many planets have a value for planetary radius, but this value is not a confirmed measurement and is instead flagged as an upper boundary? 
", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_20", "selected_database": "planets_data", "query": "For the star that looks brightest in our night sky, what's its standard gravitational parameter, μ? Give me the answer in scientific notation, with three digits of precision. You'll need to use G = 6.67430E-11 and convert the star's mass from solar masses to kg using the factor 1.98847E30.", "normal_query": "Calculate the gravitational parameter (μ) for the star that appears brightest from Earth. Provide the result in scientific notation with 3 digits of precision. Note that the Gravitational constant 'G' is 6.67430E-11 and the conversion factor for solar mass to kilograms is 1.98847E30.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "planets_data_M_1", "selected_database": "planets_data", "query": "Let's clean up the discovery methods because they're a mess. Can you make a new view called `v_discovery_method_summary`? It should list every planet's id, its original discovery method from the table, and a new column with a neat, standardized category like 'radial velocity' or 'transit' that works no matter how the original method is capitalized.", "normal_query": "Create a view named `v_discovery_method_summary`. This view should contain the planet's reference id, its original discovery method string, and a new `discovery_category` column. The new column should perform a case-insensitive standardization of the various discovery method names into unified categories such as 'radial velocity', 'transit', and 'imaging', based on the known variations.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_2", "selected_database": "planets_data", "query": "I need a new table called `planet_properties_earth_units` to keep track of planet sizes in a way that's easier to compare to home. It should have a reference to the planet, its mass in earths, and its radius in earths. Once you've made the table, go ahead and fill it up with all the planets we have the right data for.", "normal_query": "Create a table named `planet_properties_earth_units`. The table should store the planet's reference id (as a foreign key to the `planets` table), the planet mass in earth units, and the planet radius in earth units. After creating the table, populate it with data for all planets that have known jupiter-mass and jupiter-radius values.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_3", "selected_database": "planets_data", "query": "Let's create a summary view for all the star systems; call it `v_system_overview`. 
For each star, I want to see its name, how big it is, how hot it is, and then two numbers: how many of its planets were found by the 'wobble' method and how many were found by the 'dimming' method (ignoring capitalization for both).", "normal_query": "I need a new view called `v_system_overview`. This view should list each host star and include its name, its stellar radius, its temperature, and two separate counts of its planets: one for discoveries via radial velocity and one for discoveries via the transit method (both matched case-insensitively).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_4", "selected_database": "planets_data", "query": "Make me a table called `high_precision_params` that flags planets with super-accurate measurements. It needs to link to the planet and then have three true/false columns: one for a high-precision mass, one for radius, and one for period. Then, fill the table with every planet for which we can calculate at least one of these uncertainty values, even if all flags end up being false.", "normal_query": "Create a table `high_precision_params`. The table should contain a reference to the planet and boolean flags indicating if its mass, radius, and period are high-precision. Populate this table for all planets that have at least one valid, non-null uncertainty measurement for either mass, radius, or period, regardless of whether it qualifies as high-precision.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_5", "selected_database": "planets_data", "query": "I want to categorize all the stars with a known mass by how big they are. Can you make a new table called `star_classifications` for this? It needs to link to the star and give it a label. Call them 'massive' if they're huge (over 3 solar masses), 'intermediate' if they're a fair bit bigger than the sun, 'sun-like' if they're in the same ballpark as ours (down to about 0.8 solar masses), and 'low-mass' for all the little ones. Then fill the table with these labels.", "normal_query": "Create a new table `star_classifications` with columns for `stellarref` and `class_name`. The `stellarref` should be a foreign key to the `stars` table. Then, for all stars with a known stellar mass value, populate this table by assigning a `class_name` based on that mass: 'massive' for stars more than three times the sun's mass, 'intermediate' for those between that and one-and-a-half solar masses, 'sun-like' for those down to eighty percent of the sun's mass, and 'low-mass' for anything smaller.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_6", "selected_database": "planets_data", "query": "Can you make me a pre-compiled list of all the super puffy gas planets where we can actually calculate their temperature? Let's call it `v_inflated_giants_report`. The list should only include planets where the star's temperature and radius and the planet's orbital distance are known. 
For those planets, show their name, star, mass and radius in jupiter units, density, and estimated temperature, with the temperature as a whole number.", "normal_query": "Please create a materialized view called `v_inflated_giants_report`. This view should contain all planets classified as inflated gas giants for which a planetary equilibrium temperature can be calculated (i.e., the host star's temperature and radius, and the planet's semi-major axis are all known). For each such planet, include its name, its host star, its mass in jupiter units, its radius in jupiter units, its density, and its planetary equilibrium temperature, rounded to the nearest integer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_7", "selected_database": "planets_data", "query": "Let's make a new table called `planet_types` to label every planet. It should have the planet's id and its type. Use our standard rulebook to classify them: check first for the special types like 'hot jupiter' and 'super-earth', then for the basic 'rocky' or 'gas giant' types. If a planet doesn't fit any of those buckets, just label it 'unknown'. Go ahead and fill the table after you create it.", "normal_query": "Create a new table `planet_types` that contains the planet's reference id and a string representing its type. Then, populate this table by classifying each planet according to the established hierarchical definitions for 'hot jupiter', 'super-earth', 'rocky planet', and 'gas giant', assigning 'unknown' to any that do not fit a category.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_8", "selected_database": "planets_data", "query": "I want a quick way to see which planets have dodgy star measurements. Make a view called `v_uncertain_measurements`. It should list any planet where the star's mass, size, or temperature reading might be mixed with another star's light. Show me the planet's name, its star, and then three flags telling me 'yes' or 'no' for whether the mass is blended, the radius is blended, and the temp is blended.", "normal_query": "Create a view called `v_uncertain_measurements`. This view should list all planets that have a blended measurement for their stellar mass, radius, or temperature. Include the planet's letter, host star name, and boolean flags indicating which of the three measurements (mass, radius, temperature) are blended.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_9", "selected_database": "planets_data", "query": "Let's make a table for studying tightly-packed solar systems, call it `system_period_ratios`. It should have the star's id, the inner planet's orbital period, the outer planet's orbital period, and the ratio between them. Go ahead and fill it up with this info for every neighboring pair of planets that have known periods, in any system that has more than one planet.", "normal_query": "Create a table `system_period_ratios` to analyze compact systems. 
It should store `hostlink`, the `inner_planet_period`, `outer_planet_period`, and the calculated orbital period ratio. Populate this table for all adjacent planet pairs with known orbital periods in systems where the star has more than one planet.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "planets_data_M_10", "selected_database": "planets_data", "query": "I need a pre-calculated summary of how we're finding planets. Please create a materialized view called `v_discovery_stats`. It should list every discovery technique after you've cleaned them up by trimming spaces and ignoring case. For each one, show how many planets we found with it, the average distance to those planets in light-years (calculated only from planets with known distances and shown with two decimal points), and the very last date any record for that method was updated.", "normal_query": "Create a materialized view `v_discovery_stats`. The view should list each distinct discovery method, after cleaning the text by trimming whitespace and converting to lowercase. It should also provide the total count of planets discovered with that method, the average stellar distance in light-years for those discoveries (only for planets with a known distance) rounded to two decimal places, and the most recent update timestamp associated with any planet of that discovery method.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_1", "selected_database": "museum_artifact", "query": "I'm worried about our items that are most at risk. Can you pull a list of artifacts that are both considered high-value (let's say historical significance over 8 and cultural score over 20) and are also officially listed with a 'High' or 'Medium' risk level? For each one, show its ID, title, its actual risk level, and both of those scores.", "normal_query": "Generate a report of artifacts that are both high-risk (level 'High' or 'Medium') and high-value (historical significance > 8 and cultural score > 20). The report should include the artifact's ID, title, risk level, historical significance, and cultural score.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_2", "selected_database": "museum_artifact", "query": "I'm concerned about environmental damage. Can we find the artifacts most at risk by calculating a score based on their average sensitivity to the environment? Only show me the ones where the score is over 4. For those, I need the artifact's ID, name, its exact score, and a list of all specific sensitivities rated 'High'. Please sort the list to show the highest-risk items first.", "normal_query": "Identify artifacts with dangerously high environmental risks by calculating their Environmental Risk Factor (ERF). The report should include the artifact's ID, its name, the calculated ERF score, and a JSON summary of all its 'High' sensitivity ratings. 
Only include artifacts where the ERF score exceeds 4, and sort the results from highest to lowest risk.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_3", "selected_database": "museum_artifact", "query": "To help us plan conservation, I need to focus on artifacts from the 'Ming' and 'Qing' dynasties. Please calculate a priority score for each one, and then assign a priority level. I'd like a report showing the artifact's ID, title, dynasty, the calculated score, and the final priority level. Please sort it to show the highest scores at the top.", "normal_query": "Generate a conservation priority report for artifacts from the 'Ming' and 'Qing' dynasties. For each artifact, calculate its Conservation Priority Index (CPI) and determine its Priority Level. The report should include the Artifact ID, Title, Dynasty, CPI Score, and Priority Level, sorted by CPI Score in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_4", "selected_database": "museum_artifact", "query": "I need a report on how well we're funding artifact conservation across different dynasties. For each artifact, show its dynasty, its priority score, the specific budget we've assigned, and whether that funding is 'Sufficient' or 'Insufficient'.", "normal_query": "For each artifact with a known dynasty, create a report showing its dynasty, its CPI score, its specific budget allocation, and its generalized budget status.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_5", "selected_database": "museum_artifact", "query": "Can you whip up a fast list for me? I want to see if any artifacts are deteriorating too quickly. For each one, give me the ID, name, current temp and humidity, a count of its high sensitivities, and a 'Yes' or 'No' on whether it's in this danger zone. Don't skip any artifacts, even if data's missing. Sort it all by artifact ID.", "normal_query": "Check if artifacts are in an Accelerated Deterioration Scenario. The report should show each artifact's ID, its name, the current temperature and humidity in its display case, how many high sensitivities it has, and a 'Yes' or 'No' for the scenario. Include all artifacts and sort by artifact ID.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_6", "selected_database": "museum_artifact", "query": "Can you whip up a fast rundown for me? I need all the showcase IDs that have unstable environmental conditions. Get every unique ID and line them up in alphabetical order.", "normal_query": "Generate a list of all unique showcase IDs that are experiencing an Environmental Instability Event. 
The list should be sorted alphabetically by ID.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "museum_artifact_7", "selected_database": "museum_artifact", "query": "Can you sniff out all the showcase IDs that could be heading toward a problem? We'll say a showcase is a problem if its environmental stability score drops below 5, OR if it has at least three major maintenance issues like a poor seal, overdue status, or a filter/silica needing replacement. Show me just the showcase IDs for these problem cases, lined up alphabetically.", "normal_query": "Identify all showcase IDs that are at risk of failure. A showcase is considered 'At Risk' if its calculated environmental stability score is less than 5, OR if it has at least three major maintenance issues ('Poor' seal state, 'Overdue' maintenance status, 'Replace Now' filter, or 'Replace Now' silica). The report should list the IDs of the at-risk showcases, sorted alphabetically.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": true, "order": true}} +{"instance_id": "museum_artifact_8", "selected_database": "museum_artifact", "query": "Can you pull together a rundown of artifacts with 'Medium' or 'High' humidity sensitivity? I need the registry number, name, material type, and that sensitivity level for each. Plus, check if they're 'Over Exposure' or 'Within Limits' using a more cautious humidity threshold, and line them up by registry number.", "normal_query": "List all artifacts with 'Medium' or 'High' Humidity Sensitivity. For each, show the registry number, title, material type, and sensitivity level. Also, determine if they are 'Over Exposure' or 'Within Limits' based on a secondary, more cautious humidity threshold, and sort by registry number.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_9", "selected_database": "museum_artifact", "query": "We need a priority list based on comprehensive risk. First, calculate a total environmental threat score for all artifacts. I'm only interested in those officially at the second-highest risk level. From that group, find the top 10 with the highest threat scores. Give me their registration IDs and their scores, sorted from highest to lowest.", "normal_query": "Identify the top 10 artifacts in greatest danger by calculating their Total Environmental Threat Level (TETL). The report should only consider artifacts at the second-highest risk level and list their registration IDs and TETL scores, sorted from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_10", "selected_database": "museum_artifact", "query": "Let's figure out our next rotation plan. For all items currently in a resting state, I need to see their ID, name, material, and how many months it's been on display. Then, calculate its maximum safe display time. Using our standard method, compute its final rotation priority score. 
To make things clear, add a final column that flags it for 'Immediate Rotation' if needed, otherwise just label it as 'Monitor'.", "normal_query": "Generate a rotation schedule based on the Exhibition Rotation Priority Score (ERPS). The report should only include artifacts in 'Resting' state and show their ID, name, material, current display duration, and Display Safety Duration (DSD) limit. It must also include the ERPS value and a final recommendation ('Immediate Rotation' or 'Monitor').", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_11", "selected_database": "museum_artifact", "query": "I'm trying to figure out our data storage plan. Can you analyze the environmental readings for 2023, 2024, and 2025? First, for each year, find the average temperature. Then, for each of those years, count how many days the temperature was off by a specific amount—a deviation that's more than zero but no more than 4 degrees from that year's average. Give me a table with the year, its average temperature, and the total count of these anomaly days.", "normal_query": "To assist with data partitioning strategy, generate a report showing the annual environmental anomaly count for the years 2023-2025. An anomaly is defined as a daily reading where the temperature deviation from that year's annual average is greater than 0°C and less than or equal to 4°C. The report should show the year, the calculated average temperature for that year, and the total count of anomaly days.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": true, "order": true}} +{"instance_id": "museum_artifact_12", "selected_database": "museum_artifact", "query": "I'm trying to find our most valuable artifacts (let's define that as a cultural score over 80) that might be at risk due to a weird environment. Can you find any of them that are in a showcase where the average temperature is more than 1 degree different from the average temperature of all other showcases in that same hall? For any you find, list the artifact's ID, its title, material, its showcase's temperature, what the hall's average temp is, and the exact deviation.", "normal_query": "Identify artifacts that are both a 'High-Value Artifact' (cultural score > 80) and an 'Environmental Outlier'. An outlier is defined as an artifact whose showcase has an average temperature that deviates by more than 1°C from the average temperature of all showcases in its hall. For these artifacts, show their ID, title, material, their showcase's temperature, the hall's average temperature, and the temperature deviation.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_13", "selected_database": "museum_artifact", "query": "Figure out the conservation priority score for every artifact, showing their ID, name, and score, ordered from highest to lowest priority.", "normal_query": "Calculate the Conservation Priority Index (CPI) for each artifact. 
The report should include the artifact ID, title, and the CPI score, sorted in descending order by CPI.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "order": true, "distinct": false}} +{"instance_id": "museum_artifact_14", "selected_database": "museum_artifact", "query": "Find all really valuable artifacts, group them by their historical period, and show the period, how many valuable artifacts there are, and their average cultural score, ordered from highest to lowest count.", "normal_query": "Identify all High-Value Artifacts and group them by dynasty. The report should show the dynasty, a count of high-value artifacts, and their average cultural score, sorted by the count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "order": true, "distinct": false}} +{"instance_id": "museum_artifact_15", "selected_database": "museum_artifact", "query": "I want to find artifacts that we've put in the wrong place. Can you make a list of any artifact that has a 'Medium' sensitivity to something (like light, temp, or humidity) and is in a showcase that we've classified as having a 'Medium' level of that same thing? Show me the artifact's name, material, the showcase it's in, which specific sensitivity is the problem, what the sensitivity level is, and what the showcase's environment profile is.", "normal_query": "Identify environmental mismatches by finding artifacts whose specific environmental sensitivities are incompatible with the typical environment of their showcase. A 'mismatch' occurs if an artifact with 'Medium' sensitivity to an environmental factor (e.g., humidity) is in a showcase classified with a 'Medium' level of that same factor. The report should list the artifact's title, its material, the showcase ID, the mismatched sensitivity type, the sensitivity level, and the environment profile.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_16", "selected_database": "museum_artifact", "query": "Group artifacts by how vulnerable their organic material is and their environmental risk score. Show me what they're made of, their vulnerability status, their average environmental risk, and a count for each group, ordered by the highest average risk.", "normal_query": "Cluster artifacts by their Organic Material Vulnerability and Environmental Risk Factor (ERF). The report should show material type, vulnerability status, average ERF, and artifact count, sorted by average ERF descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "order": true, "distinct": false}} +{"instance_id": "museum_artifact_17", "selected_database": "museum_artifact", "query": "I'm looking for ticking time bombs in our collection. Can you find all the artifacts that are currently listed as 'Good' or 'Excellent', but are at a high risk of light damage? Let's say 'high risk' means they have 'High' light sensitivity and they've already soaked up more than 70,000 lux-hours of light. 
For each one you find, show me its name, dynasty, its exact light sensitivity, the current lux level, and its total light exposure. Please list the ones with the most total exposure first.", "normal_query": "Generate a report of artifacts with a 'Good' or 'Excellent' conservation status that are at high risk of light damage. A high-risk artifact is defined as having 'High' light sensitivity AND has a cumulative light exposure exceeding 70,000 lxh. The report should include the artifact's title, dynasty, light sensitivity level, current lux, and cumulative exposure (visLxh), sorted by cumulative exposure in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_18", "selected_database": "museum_artifact", "query": "What's the single highest risk score for any artifact, considering both its conservation priority and environmental sensitivity?", "normal_query": "What is the maximum Artifact Vulnerability Score (AVS) found among all artifacts in the collection?", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_19", "selected_database": "museum_artifact", "query": "How well do our showcases generally suit the artifacts from the Ming dynasty? I'm looking for a single number that tells me the average compatibility.", "normal_query": "Calculate the average Artifact Exhibition Compatibility (AEC) for all artifacts from the 'Ming' dynasty.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_20", "selected_database": "museum_artifact", "query": "Can you give me a total count of display cases that are at risk of failing? I'm talking about any case with a very unstable environment or at least three major maintenance problems.", "normal_query": "Calculate the total number of showcases that are currently considered a Showcase Failure Risk.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": true, "order": false}} +{"instance_id": "museum_artifact_M_1", "selected_database": "museum_artifact", "query": "I need to add a maintenance alert. 
Find every artifact that has a condition report on file, and for each one, append a timestamped alert saying 'Alert (Conservation Emergency): Immediate action recommended' to its maintenance log.", "normal_query": "For all artifacts that have an existing condition assessment record, append a timestamped alert with the text 'Alert (Conservation Emergency): Immediate action recommended' to their maintenance log.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 0, "order": false, "distinct": false}} +{"instance_id": "museum_artifact_M_2", "selected_database": "museum_artifact", "query": "Write a function called 'calculate_cpi' that figures out how important an artifact is for conservation, using its historical, research, and cultural value, plus its current condition, and gives back a final score.", "normal_query": "Create a function named 'calculate_cpi' to compute a conservation priority score. This score should be based on historical significance, research value, cultural importance, and conservation condition, returning a numeric value.", "preprocess_sql": [], "clean_up_sqls": ["DROP FUNCTION IF EXISTS calculate_cpi(SMALLINT, INT, SMALLINT, VARCHAR);"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "order": false, "distinct": false}} +{"instance_id": "museum_artifact_M_3", "selected_database": "museum_artifact", "query": "Put a rule on the artifact ratings table to make sure the historical importance score stays between 1 and 10.", "normal_query": "Add a check constraint named 'hist_sign_rating_check' to the 'ArtifactRatings' table. This constraint should ensure the historical significance score is between 1 and 10, inclusive.", "preprocess_sql": ["UPDATE \"ArtifactRatings\" SET \"HIST_sign\" = 11 WHERE \"ART_link\" = 'ART54317';"], "clean_up_sqls": ["ALTER TABLE \"ArtifactRatings\" DROP CONSTRAINT IF EXISTS hist_sign_rating_check;", "UPDATE \"ArtifactRatings\" SET \"HIST_sign\" = 7 WHERE \"ART_link\" = 'ART54317';"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 0, "order": false, "distinct": false}} +{"instance_id": "museum_artifact_M_4", "selected_database": "museum_artifact", "query": "We need to decide which artifacts to put into storage next. Figure out the five most urgent candidates for rotation based on a priority score, but with a twist: if an artifact is made of organic material like textile, wood, or paper, increase its final score by multiplying it by 1.2 because it's more fragile. I just need the artifact titles and their final adjusted scores, with the most urgent one (the one with the lowest score) first.", "normal_query": "Create a database view named 'V_Top5_Rotation_Priority' that provides a priority list of the top 5 artifacts for exhibition rotation. The standard Exhibition Rotation Priority Score (ERPS) calculation needs to be adjusted: for artifacts made of 'Organic' materials ('Textile', 'Wood', 'Paper'), their final ERPS score should be multiplied by 1.2 (a 20% vulnerability factor). 
The view should include the artifact's title and its final, adjusted ERPS score, sorted by the adjusted ERPS in ascending order (lowest score is highest priority).", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Top5_Rotation_Priority;", "DROP VIEW IF EXISTS V_Rotation_Priority_Analysis;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_M_5", "selected_database": "museum_artifact", "query": "Can you find all the artifacts made of materials like textile or paper that are both extremely valuable and highly sensitive to their environment? For each one, show its name and its exact sensitivity levels for light, temperature, and humidity.", "normal_query": "Create a database view named 'V_Precious_Vulnerable_Organic_Items' to identify all artifacts that are both a High-Value Artifact and meet the Organic Material Vulnerability criteria. The view should display each artifact's title and its specific sensitivity ratings for light, temperature, and humidity.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Precious_Vulnerable_Organic_Items;", "DROP VIEW IF EXISTS V_High_Light_Risk_Artifact_Status;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_M_6", "selected_database": "museum_artifact", "query": "I need to know which of our halls are a security nightmare. Find any hall that has a high visitor impact score (say, over 15) but at the same time has a low security score (less than 8). For each of those problem halls, show me the hall ID and both of those scores so I can see what's going on.", "normal_query": "Create a view named 'V_High_Threat_Halls' to identify 'High Threat' exhibition halls, defined as those with a Visitor Impact Risk (VIR) score greater than 15 and a calculated Security Score below 8. The view should show the hall's ID, its VIR score, and its Security Score.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_High_Threat_Halls;", "DROP VIEW IF EXISTS V_Critical_Artifacts_In_High_Threat_Halls;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_M_7", "selected_database": "museum_artifact", "query": "Out of all our textiles, how many are in a 'Poor' state of environmental compliance? Assume the ideal is 20 degrees Celsius and 50 percent humidity to make the call.", "normal_query": "Create a function named 'get_poor_compliance_count_by_material' that accepts a material type (e.g., 'textile') and calculates the number of artifacts of that material with a 'Poor' Compliance Level. 
The calculation should be based on the Environmental Compliance Index (ECI) with an ideal temperature of 20°C and ideal humidity of 50%.", "preprocess_sql": [], "clean_up_sqls": ["DROP FUNCTION IF EXISTS get_poor_compliance_count_by_material(TEXT);", "DROP VIEW IF EXISTS V_Compliance_Report_By_Material;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_M_8", "selected_database": "museum_artifact", "query": "Can you tell me the exact number of our important dynasty artifacts (from Ming, Han, or Tang) that are aging too quickly and are considered at risk?", "normal_query": "Create a function named 'get_dynasty_artifacts_at_risk_count' to calculate the total number of artifacts classified as a Dynasty Artifact at Risk.", "preprocess_sql": [], "clean_up_sqls": ["DROP FUNCTION IF EXISTS get_dynasty_artifacts_at_risk_count();", "DROP VIEW IF EXISTS V_Dynasty_Risk_Report;"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "museum_artifact_M_9", "selected_database": "museum_artifact", "query": "I need a list of our most neglected showcases. Find every single one that has all three of these problems at once: the filter needs replacing now, the silica needs replacing now, and its general maintenance is overdue. For that list of problem showcases, tell me their ID, which hall they're in, and a count of how many high-sensitivity artifacts are stuck inside them.", "normal_query": "Create a view named 'V_Chronic_Maintenance_Backlog_Showcases' to identify showcases with chronic maintenance backlogs. A 'chronic backlog' is defined as a showcase where the filter is overdue ('Replace Now'), the silica is exhausted ('Replace Now'), and the general maintenance status is 'Overdue'. For these showcases, the view should list their ID, hall ID, and the number of high-sensitivity artifacts they contain.", "preprocess_sql": [], "clean_up_sqls": ["DROP VIEW IF EXISTS V_Chronic_Maintenance_Backlog_Showcases;", "DROP FUNCTION IF EXISTS get_worst_backlog_showcase_env();"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "museum_artifact_M_10", "selected_database": "museum_artifact", "query": "We've got a $25,000 budget to catch up on overdue cleanings. I need a priority list. Figure out the risk score for every late artifact, and also estimate a treatment cost based on its material and complexity. Then, tell me which artifacts we can afford to fix, starting with the highest risk ones, without going over our budget. List the artifact name, its risk score, and its estimated cost.", "normal_query": "Create a materialized view named 'MV_Prioritized_Maintenance_Plan' to generate a prioritized maintenance plan for overdue cleanings within a simulated budget of $25,000. Calculate the 'cost of treatment' for each overdue artifact based on its material and treatment complexity. 
The view should list the artifacts that can be treated within the budget, ordered by their Conservation Backlog Risk (CBR), and show their title, CBR score, and calculated cost.", "preprocess_sql": [], "clean_up_sqls": ["DROP MATERIALIZED VIEW IF EXISTS MV_Prioritized_Maintenance_Plan;", "DROP FUNCTION IF EXISTS get_remaining_maintenance_budget();"], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_1", "selected_database": "fake_account", "query": "Which accounts are growing their networks the fastest? Give me that blended growth score rounded to three decimal places and sort from fastest to slowest.", "normal_query": "Compute the Network Growth Velocity (NGV) for every account using its follower and following growth rates, rounded to three decimal places, and list accounts with the highest NGV first.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_2", "selected_database": "fake_account", "query": "Show me the top 10 accounts by their logins-per-day score. For each account, take the biggest lifecycle login total you've ever seen for it, divide by its age in days (skip age missing or ≤ 0), round the score to 3 decimals, and show just the account ID plus that score.", "normal_query": "Compute each account's Account Activity Frequency (AAF) per the domain definition. For each account, use the highest recorded lifecycle session total across all session snapshots, exclude anyone with age in days missing or ≤ 0, then return the ten accounts with the greatest AAF in descending order, showing the account identifier and AAF rounded to three decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_3", "selected_database": "fake_account", "query": "List each account together with their attempts to evade detection. 
Calculate that sneakiness score, rounded to 3 decimal places, and rank them from most to least sneaky.", "normal_query": "List each account along with its Technical Evasion Index (TEI), rounded to three decimal places, and sort them descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_4", "selected_database": "fake_account", "query": "Bucket every user into four sneakiness tiers based on how much they rely on tricks like VPNs, and list each account with the tier number.", "normal_query": "Divide all accounts into quartiles based on their TEI values and return each account id with its TEI quartile.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_5", "selected_database": "fake_account", "query": "How many users land in each risk level based on the number of an account's attempts to evade detection?", "normal_query": "Count how many accounts fall into each TEI Risk Category (low, moderate, high, very high).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_6", "selected_database": "fake_account", "query": "List the top twenty accounts by the highest overall exposure metric based on multiple signals. Only include users whose required inputs are all present; sort high to low; show the user ID and the metric rounded to three decimals.", "normal_query": "Show the twenty accounts with the highest Security Risk Score (SRS). Compute the score only for accounts where all three inputs (risk value, trust value, impact value) are present; sort by the unrounded score descending; display the account identifier and the score rounded to three decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_7", "selected_database": "fake_account", "query": "Show every user that needs immediate attention: they must have the top severity label and an ongoing detection. Rank them by their overall exposure metric (rounded to 3 decimals) from high to low, and include the severity label, only show those larger than 0.7.", "normal_query": "List all High-Risk Accounts: compute SRS per its definition, keep only those with SRS > 0.7, whose overall severity is Critical, and that have at least one ongoing detection. 
Return the account ID, the score rounded to three decimals, and the severity, sorted by the unrounded score descending.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_8", "selected_database": "fake_account", "query": "For each account, combine multiple bot-detection metrics into a single score, rounded to three decimal places, and display the highest value.", "normal_query": "Calculate the Bot Behavior Index (BBI) for each account, rounded to three decimal places, and show the top one.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_9", "selected_database": "fake_account", "query": "Identify all accounts exhibiting systematic VPN usage and report the total count of distinct countries from which they have authenticated.", "normal_query": "Identify all VPN Abuser accounts and show how many different login countries they have used.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_10", "selected_database": "fake_account", "query": "Estimate the average authenticity level for content on each platform, rounded to three decimal places and sorted in descending order.", "normal_query": "Compute the average Content Authenticity Score (CAS) for each platform, rounded to three decimal places and sorted from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_M_1", "selected_database": "fake_account", "query": "Update the StateFlag to 'Suspended' for every high-risk account, according to the High-Risk Account rule, excluding those already in a suspended state.", "normal_query": "Suspend every High-Risk Account by setting its StateFlag to \"Suspended\", according to the High-Risk Account rule. Make sure accounts already suspended are not updated twice.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_11", "selected_database": "fake_account", "query": "Show the top 10 accounts that have the most content manipulation patterns. 
Evaluate their scores, rounded to 3 decimal places, and sort in descending order.", "normal_query": "Retrieve the ten accounts with the highest Content Manipulation Score (CMS), sorted in a descending order and rounded to three decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_12", "selected_database": "fake_account", "query": "Find all accounts pumping out loads of near-duplicate posts, list their posts-per-day, sorted from most to least.", "normal_query": "List all accounts classified as Content Farms along with their daily post frequency, ordered by frequency from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_M_2", "selected_database": "fake_account", "query": "Create active, low-priority watch items for accounts that heavily rely on traffic-masking tools and have logged in from at least three different countries, skipping anyone who already has an active watch item. Tag the new items as coming from an automated scan and timestamp them with the current time.", "normal_query": "Insert a low-priority monitoring entry for every account that qualifies under the VPN-abuse rule (TEI > 0.8 and login countries ≥ 3), skipping any account that already has an active monitoring entry. Use a random unique ID, the current timestamp, mark the source as Algorithm, and set the entry state to Active.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_13", "selected_database": "fake_account", "query": "For each account, build one overall risk score by combining an automation-likeness signal, a coordination signal, and the group size; round to three decimals, list those above 0.8, and sort high to low.", "normal_query": "Compute Coordinated Bot Risk (CBR) for each account using the defined BBI and CAS formulas; round to three decimals, return those with CBR greater than 0.8, and sort in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_14", "selected_database": "fake_account", "query": "What's the overall average trust score of everyone's connections, rounded to three decimals?", "normal_query": "Determine the overall average Network Trust Score (NTS) across all accounts, rounded to three decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": false}} +{"instance_id": "fake_account_15", "selected_database": "fake_account", "query": "Generate a report of every cluster whose role is “SocialGroup”. For each cluster, show its identifier, the number of unique member accounts, the cluster's maximum coordination score. 
Quantify each account's position and influence in the interaction network if the data are available, otherwise NULL, and identify whether it is a sophisticated influence campaign", "normal_query": "Generate a report of every cluster whose role is “SocialGroup”. For each cluster, show its identifier, the number of unique member accounts, the average Network Influence Centrality (NIC) of those members if NIC data are available, otherwise NULL, the cluster's maximum coordination score, and a “Yes/No” flag that is “Yes” when the cluster satisfies the Coordinated Influence Operation definition.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_16", "selected_database": "fake_account", "query": "Estimate each account's authentication-related risk, rounded to 3 decimal places. List accounts with a score above 0.7, sorted from highest to lowest.", "normal_query": "Identify accounts with an Authentication Risk Score (ARS) greater than 0.7, round the score to 3 decimal places and order from highest to lowest.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_17", "selected_database": "fake_account", "query": "For each account, show the most recent system estimate of automation likelihood using the latest detection event.", "normal_query": "Return each account's Latest Bot Likelihood Score (LBS).", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_18", "selected_database": "fake_account", "query": "For each account, measure how much its hourly activity over a day diverges from its usual behavior, round to 3 decimals and sort descending. Show those above 0.7.", "normal_query": "Identify accounts whose Temporal Pattern Deviation Score (TPDS) exceeds 0.7, rounded to 3 decimal places and sorted in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_19", "selected_database": "fake_account", "query": "List all accounts that exert strong influence while posting at a high daily rate, acting as key amplifiers in coordinated networks. 
List their influence score and daily post frequency, sorted by influence score in a descending order.", "normal_query": "List all High-Impact Amplifier accounts together with their influence score and daily post frequency, sorted by influence score in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "fake_account_20", "selected_database": "fake_account", "query": "Measure account-reputation stability, round to 3 decimals, and show whoever has the highest score.", "normal_query": "Show the account with the highest Reputation Volatility Index (RVI), rounded to 3 decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_21", "selected_database": "fake_account", "query": "Retrieve accounts with elevated engagement levels based on the number of sessions or total posting frequency. Show their account ID and the daily post number.", "normal_query": "Retrieve accounts classified as High-Activity Accounts, showing their account ID and the daily post number.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_22", "selected_database": "fake_account", "query": "Group results by platform type and show the average of the 0-1 score indicating how real the interactions feel, keep three decimals, and list them from high to low.", "normal_query": "Compute the average engagement authenticity score for each platform type, rounded to 3 decimal places and sort in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_23", "selected_database": "fake_account", "query": "How many accounts are currently inactive and also classified as automated?", "normal_query": "Count the number of accounts that are both in the inactive status and belong to the automated category.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_M_3", "selected_database": "fake_account", "query": "If an account meets our trust checks, shows no recent detections for 180 days, and has been quiet for at least 90 days based on the latest activity proxy, mark its monitoring priority as \"Review_Inactive_Trusted\".", "normal_query": "For accounts that pass the trust threshold, have had no detections in the last 180 days, and whose most recent activity proxy is older than 90 days, set their monitoring priority to \"Review_Inactive_Trusted\".", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_24", "selected_database": "fake_account", "query": "By platform kind, average a score built as: how 
manipulated the content looks, times how urgently it should be reviewed, times how central it is in the network.", "normal_query": "For each platform kind, compute the average Content Impact Score, where the score equals manipulation intensity multiplied by moderation priority and by network influence centrality.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 3, "distinct": false, "order": true}} +{"instance_id": "fake_account_M_4", "selected_database": "fake_account", "query": "Make a materialized view that shows all accounts with a credibility value above 0.9.", "normal_query": "Create a materialized view listing all accounts whose built-in credibility value is greater than 0.9.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_M_5", "selected_database": "fake_account", "query": "Analyze the table that keeps monitoring snapshots so the database updates its size and stats.", "normal_query": "Run ANALYZE on the table that stores monitoring snapshots to refresh its statistics.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "fake_account_M_6", "selected_database": "fake_account", "query": "Write a helper called pct_to_numeric(text) that turns strings like '85%' into decimals like 0.85 and returns a numeric result.", "normal_query": "Create a utility function named pct_to_numeric(text) that converts a percentage string (e.g., '85%') into a numeric value (e.g., 0.85), returning a numeric type.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_1", "selected_database": "cold_chain_pharma_compliance", "query": "Can you check our cold chain data and tell me how long temperature problems typically last on our riskiest shipping routes? I'm looking for the average time in minutes that temperatures went outside acceptable ranges, but only for the shipments marked as high risk. Just round to two decimal places for me.", "normal_query": "I want to find the average Temperature Excursion Duration for shipments on High Risk routes only. Please show me the route type label and the average excursion duration in minutes, rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_2", "selected_database": "cold_chain_pharma_compliance", "query": "Would you check what percentage of our shipments actually stayed in the right temperature range the whole time? I need a simple number showing how many shipments had zero temperature problems compared to our total shipments. Just round it to two decimal places so it's easy to report.", "normal_query": "What is our overall Cold Chain Compliance Rate for all shipments? 
Please calculate the percentage of compliant shipments out of the total shipments monitored, and round the result to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_3", "selected_database": "cold_chain_pharma_compliance", "query": "Will you help me figure out how our shipments are performing in terms of timing? I want to see how many shipments are arriving early, on-time, or running late. Can you count up our shipments by these delivery timing categories? Just give me each category and its count, with the biggest numbers at the top.", "normal_query": "I plan to analyze our cold chain delivery performance using Delivery Performance Classification. Show me the performance category and the number of shipments in each category, sorted by shipment count in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_4", "selected_database": "cold_chain_pharma_compliance", "query": "I want to figure out if our shipping performance is better when our GPS tracking is working actively versus when it's in the intermittent location tracking state. Can you compare the average on-time performance between these two categories? Just give me the two tracking categories and their average performance scores rounded to two decimal places.", "normal_query": "I am working on comparing the On-Time Delivery Performance between shipments with different Location Tracking States. Specifically, analyze the average OTDP for shipments that have either 'Active' or 'Intermittent' tracking states. Show me each tracking state and its corresponding average OTDP value rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_5", "selected_database": "cold_chain_pharma_compliance", "query": "I am trying to figure out if our quality agreements are working properly. Can you check how many shipments failed compliance checks for each type of agreement we have? Just show me each agreement status and how many shipments were flagged as non-compliant under that status.", "normal_query": "I hope to analyze how different Quality Agreement Status types relate to non-compliance issues. Could you count the number of shipments that were classified as 'Non-compliant' for each quality agreement status category? Please show each agreement status with its corresponding count of non-compliant shipments.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_6", "selected_database": "cold_chain_pharma_compliance", "query": "Our quality team needs to flag any high-risk shipments for immediate review. Could you pull a list of all shipments falling into red quality risk zone? 
Just show me the shipment ID, what percentage of time it stayed in the acceptable range, and how many total minutes it was out of range. Round the portion value into 2 decimal points.", "normal_query": "I am going to identify shipments in the Red Zone of our Quality Risk Zones for further investigation. For each shipment in the Red Zone, show me the shipment ID, calculated TIRP percentage (rounded to 2 decimal places), and the total excursion duration in minutes.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_7", "selected_database": "cold_chain_pharma_compliance", "query": "Can you figure out what the average temperature impact was for all our shipments where the temperature went outside the acceptable range? We need that special calculation that accounts for how temperature fluctuations affect products over time. Just give me one number that summarizes this across all problematic shipments.", "normal_query": "I would like to calculate the Mean Kinetic Temperature for all shipments that have experienced temperature excursions. Please provide me with the average MKT value across these shipments.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_8", "selected_database": "cold_chain_pharma_compliance", "query": "Please help me find shipment routes where our risk labels don't match what's actually happening. I want to see where we've mislabeled routes compared to what the temperature excursion data shows. Make sure you check all our routes, even if some might be missing monitoring data. Just show me the top 3 most problematic routes according to average excursion count.", "normal_query": "We need to identify where our shipping route risk classifications don't match reality. Using the Route Risk Classification and High-Risk Shipping Origin-Destination Pairs knowledge, compare our documented risk notes against calculated risk levels, even those without environmental monitoring data. Show me only routes where the documented risk level differs from the calculated risk level, ordered by average excursion count from highest to lowest. Limit results to the top 3 discrepancies.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_9", "selected_database": "cold_chain_pharma_compliance", "query": "Just give me a rough idea of how our shipments did in terms of temperature control. You can group them into risk categories like green/yellow/red using time-in-range and total out-of-range time. It’s fine to use a simplified method—doesn’t have to be perfect—as long as we get a general sense.", "normal_query": "Please provide an approximate analysis of cold chain shipment quality by grouping them into Quality Risk Zones. Use Time In Range Percentage (TIRP) and total temperature excursion duration as approximate indicators for classification. A proxy-based zoning approach is acceptable where exact excursion details are unavailable. 
Return the number and percentage of shipments in each zone (Green, Yellow, Red), sorted from lowest to highest risk.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_10", "selected_database": "cold_chain_pharma_compliance", "query": "Can you help me figure out how strong our supply chain is overall? Let’s turn the inputs into scores like this: route risk gets 8 if it’s low, 5 for medium, 2 for high; carrier certification gets 10 for both types, 8 if it’s just 'gdp' or 'ceiv pharma', 4 otherwise; vehicles score 9 if validated, 7 if qualified, and 5 otherwise; and compliance gets 9 for full, 6 for partial, 3 otherwise. Use the 0.4/0.3/0.2/0.1 weighting and round the final number to two decimal places.", "normal_query": "I want to calculate the overall Supply Chain Resilience Score using a weighted average of several proxy indicators. For this, please map: 'low' route risk to 8, 'medium' to 5, and 'high' to 2; for carrier certification, use 10 for 'both', 8 for 'gdp' or 'ceiv pharma', and 4 otherwise; for vehicle qualification, use 9 for 'validated', 7 for 'qualified', and 5 otherwise; and for GDP compliance, use 9 for 'full', 6 for 'partial', and 3 otherwise. Then apply weights: 0.4 for route risk, 0.3 for carrier certification, 0.2 for vehicle, and 0.1 for compliance. Round to two decimals.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_11", "selected_database": "cold_chain_pharma_compliance", "query": "I wonder that, for each type of GDP certification, how many different shipping companies have that certification, and out of those, how many actually have at least one fully validated vehicle. Please show the certification type, the total number of unique certified companies, and how many of those companies have validated vehicles. Put the ones with the most companies using validated vehicles at the top.", "normal_query": "For each GDP Certification Status, I want to know how many distinct carriers hold that certification, and among them, how many distinct carriers also have at least one vehicle with Validated Cold Chain Vehicle Qualification Status. Please display the GDP certification level, the total number of distinct GDP-certified carriers, and the number of those distinct carriers with validated vehicles. Sort the results by the number of carriers with validated vehicles in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": true, "conditions": {"decimal": 0, "distinct": true, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_14", "selected_database": "cold_chain_pharma_compliance", "query": "I need to know, on average, how much of their allowed stability budget temperature-sensitive shipments are using up. Only count shipments where there actually was a temperature excursion. Give me the average percentage used, rounded to two decimal places.", "normal_query": "Calculate the average Stability Budget Consumption Rate for all shipments of temperature-sensitive products. 
Return the average SBCR as a percentage, rounded to two decimal places and only count shipments where there actually was a temperature excursion.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_15", "selected_database": "cold_chain_pharma_compliance", "query": "Out of all the biologics product batches, what percent need to be kept at ultra-low temperatures? Round it to two decimal places.", "normal_query": "Please calculate the percentage of biologics products that require ultra-low temperature storage. The result should be rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_16", "selected_database": "cold_chain_pharma_compliance", "query": "I want to know, on average, how much carbon was produced by shipments that were way behind schedule—specifically, those whose delivery performance category counted as severely delayed. Return this value into two decimal places.", "normal_query": "Calculate the average carbon footprint for all shipments that are classified as 'Severely Delayed' based on the Delivery Performance Classification standard. Please return a value rounded to two decimal places.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_17", "selected_database": "cold_chain_pharma_compliance", "query": "Could you figure out the overall reliability score for all our temperature data loggers? Treat any logger with a missing recording interval as having a reading failure, and count a calibration failure if the calibration date is before June 26, 2024.", "normal_query": "Estimate the overall Data Logger Reliability Score for all monitoring devices in the fleet. Assume a reading failure if the recording interval is missing, and a calibration failure if the calibration date is before 2024-06-26.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_18", "selected_database": "cold_chain_pharma_compliance", "query": "Which three people have had the most shipments rejected when making product release decisions? I just want a list of their names and how many times each had a rejection, sorted so the person with the most rejections is at the top.", "normal_query": "Using the Product Release Decision Framework, identify the top 3 responsible persons with the highest number of 'Rejected' product releases based on the Product Release Decision Framework. 
Please list each responsible person and their count of rejected shipments, ordered from highest to lowest count.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_19", "selected_database": "cold_chain_pharma_compliance", "query": "Can you show me, for each type of regulatory compliance status, what the average number of temperature and humidity excursions is? And sort the results so the highest average temperature excursions come first.", "normal_query": "For each regulatory compliance status, calculate the average number of temperature excursions and average number of humidity excursions. Display a table with compliance status, average temperature excursions, and average humidity excursions, sorted by temperature excursions in descending order.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_20", "selected_database": "cold_chain_pharma_compliance", "query": "Can you show me the three riskiest shipping routes based on temperature excursions and shipping delays? Just give me the route, how many shipments went that way, and the risk score. Only count routes with more than one shipment.", "normal_query": "I want to identify the top 3 riskiest shipping lanes by calculating the Lane Risk Potential for each route. Only include temperature excursions and shipping delays in the risk score. Please provide a report listing the route string, total number of shipments, and the calculated lane risk potential, sorted by lane risk potential in descending order. Only include lanes with more than one shipment.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Query", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_M_1", "selected_database": "cold_chain_pharma_compliance", "query": "Can you make a function calculate_ted that, when I give you a shipment's ID, tells me how many minutes it spent outside the right temperature range?", "normal_query": "For a given shipment, try to create a function calculate_ted that calculates the Temperature Excursion Duration. The input is a shipment's ID and output the TED value as an integer.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 0, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_2", "selected_database": "cold_chain_pharma_compliance", "query": "For each shipment, I want you to include two things in the summary. First, tell me if it managed to stay in the right temperature the whole way. Second, show what percent of all shipments got that right. Even if some shipments don’t have temperature info, still include both pieces of info in their summary.", "normal_query": "Please update every shipment so that its summary includes two clear insights. First, show whether that shipment stayed within the correct temperature range the entire time. 
Second, include the percentage of all shipments that successfully stayed within the proper temperature range from start to finish. This information should be added for every shipment, even for those where no temperature data is available.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_3", "selected_database": "cold_chain_pharma_compliance", "query": "Can you make a view called v_product_temperature_analysis that breaks down our products by how sensitive they are to temperature changes and by their storage type? For each kind of product and storage group, show how many batches there are, what the average temperature range is, a list of the different sensitivity levels, and the lowest and highest storage temps. Please sort the results by product type and storage method.", "normal_query": "Create a view named v_product_temperature_analysis that analyzes pharmaceutical products by both Temperature Sensitivity Tiers and Product Storage Classifications. For each product category and storage classification, display the batch count, average temperature range, all unique sensitivity descriptions, and the minimum and maximum storage temperatures. Order the results by product category and storage classification.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_M_4", "selected_database": "cold_chain_pharma_compliance", "query": "Can you show, for every delivery, how well it matched the expected timing — like how close it was to being on time? And also include something simple that tells whether it was early, late, or arrived just right. Keep everything else as it is.", "normal_query": "Please enhance each delivery record by adding two insights. The first is the On-Time Delivery Performance, which shows how closely the actual delivery matched the planned schedule, expressed as a percentage. The second is the Delivery Performance Classification, which gives a simple label describing the delivery’s overall timeliness. These additions should be included along with the original delivery details.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": false, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_5", "selected_database": "cold_chain_pharma_compliance", "query": "Whenever we add or change a shipment’s temperature data, I want the system to automatically figure out how much of the time the temperature stayed where it should, and also give it a simple color rating based on that. These two things should be added back into the same record.", "normal_query": "Whenever a new or updated environmental monitoring entry is recorded, the system should automatically assess the Quality Risk Zones for that shipment. It should use the temperature data to calculate Time In Range Percentage based on a standard 72-hour journey, then assign the appropriate risk level. 
Both the quality risk zone and time in range percentage should be added back into the same record.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_6", "selected_database": "cold_chain_pharma_compliance", "query": "I need to find all the product batches that count as top-level cold chain products. For each one, list its batch tag, product code, name, and value, and make sure it’s clearly marked as tier 1 in the records. Also, let me know how many you found, and if anything goes wrong, just keep going and let me know about the problem.", "normal_query": "I want to batch identify all Tier 1 Cold Chain Products in our database. For each product batch, check if it meets the Tier 1 Cold Chain Products criteria. For every qualifying batch, record its batch tag, product code, product label, and value in USD, and flag it as Tier 1. Also, update the product batch records to append a '[TIER1]' flag to the value field for all identified Tier 1 Cold Chain Products. Please ensure the process logs the number of Tier 1 products found and handles any errors gracefully.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_7", "selected_database": "cold_chain_pharma_compliance", "query": "I need to refresh our monitoring device reliability records using the data logger value for reliability assessment. For every device, figure out its reliability value. Show all the details and scores in a temp table first. Then, save a backup of the current device list, wipe it clean, and fill it back up with the updated info from the results. Make sure to include all the device details from the original record, the made-up failure rates, the reliability score, and when you did the analysis.", "normal_query": "I plan to recalculate and rebuild the reliability tracking for all monitoring devices in our system using the Data Logger Reliability Score. For each device, calculate DLRS and store the results in a staging table. Then, back up the current monitoringdevices table and repopulate it with the original device columns from the staging results. Please include the device reference, calibration timestamp, device accuracy, recording interval, temperature points, all simulated failure rates, the calculated DLRS, and the analysis timestamp in the staging output.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": 2, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_8", "selected_database": "cold_chain_pharma_compliance", "query": "I'm looking to see, for every shipment, what's the biggest number of shock events it went through, basically the highest shock count for each shipment, even if some shipments didn't have any shock data. Just show me the shipment code and that top shock number for each one.", "normal_query": "For each shipment in the cold chain database, I want to determine the maximum shock event count observed, using the concept of Shock Event Significance Levels. 
Please provide the shipment identifier and its corresponding maximum shock event count, ensuring that all shipments are included in the results even if no shock data is present, just use 0 to represent null data. The output should display the shipment code alongside the highest shock event count recorded for that shipment.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": true}} +{"instance_id": "cold_chain_pharma_compliance_M_9", "selected_database": "cold_chain_pharma_compliance", "query": "If I give you a shipment, would you tell me what its regulatory compliance status is, and also just let me know with a true or false if it’s considered compliant?", "normal_query": "For a specified shipment, I require a summary of its regulatory compliance status according to the concept of Regulatory Compliance Status Definitions. The output should include both the compliance status and an indicator of whether the shipment is compliant, with the indicator expressed as a boolean value.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} +{"instance_id": "cold_chain_pharma_compliance_M_10", "selected_database": "cold_chain_pharma_compliance", "query": "Whenever a shipment’s product release status is set to rejected, go ahead and mark its insurance claim as rejected too, and don’t bother updating claims that are already marked as rejected.", "normal_query": "For all shipments, please update the insurance claim status to 'Rejected' in the insuranceclaims table for every case where the product release status is 'Rejected', strictly following the Product Release Decision Framework. Ensure that only claims not already marked as 'Rejected' are updated.", "preprocess_sql": [], "clean_up_sqls": [], "sol_sql": [], "external_knowledge": [], "test_cases": [], "category": "Management", "high_level": true, "conditions": {"decimal": -1, "distinct": false, "order": false}} diff --git a/mental_health/mental_health_column_meaning_base.json b/mental_health/mental_health_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..9a543af8553f6e2194affa8544df1221dc48b5c4 --- /dev/null +++ b/mental_health/mental_health_column_meaning_base.json @@ -0,0 +1,155 @@ +{ + "mental_health|facilities|fac_key": "A TEXT primary key uniquely identifying each healthcare facility record in the mental health assessment database.", + "mental_health|facilities|r_source": "Referral source category indicating how patients are directed to this facility. Contains values: 'Self', 'Physician', 'Court', 'Emergency', 'Family'.", + "mental_health|facilities|env_stress": "Environmental stressors present in the facility's service area. Contains NULL when environmental stressor assessment is not available.", + "mental_health|facilities|life_impact": "Major life events impact assessment for the facility's patient population. Contains values: 'Mild', 'Moderate', 'Severe'. Contains NULL when life events impact data is not collected.", + "mental_health|facilities|season_pat": "Seasonal pattern observations in patient mental health conditions at this facility. 
Contains NULL when seasonal pattern tracking is not implemented.", + "mental_health|facilities|legl_issue": "Legal issues commonly encountered by patients at this facility. Contains values: 'Resolved', 'Pending', 'Ongoing'. Contains noise with mixed case variations like 'PEnding', 'oNgoING', 'ReSOlveD', 'rESOLvED', 'ONgOInG'. Contains NULL when legal issue tracking is not maintained.", + "mental_health|facilities|spt_svc": "Support services available at or through this facility. Contains values like 'Case Management', 'Limited', 'Comprehensive', 'Adequate'. Contains NULL when support service information is not documented.", + "mental_health|facilities|com_res": "Community resources accessible to patients from this facility. Contains values: 'Comprehensive', 'Limited', 'Adequate', 'Extensive'. Contains NULL when community resource mapping is incomplete.", + "mental_health|facilities|emer_contact": "Emergency contact protocols and contact information for crisis situations. Typically contains numeric values representing contact counts.", + "mental_health|facilities|s_plan_stat": "Safety plan implementation status at this facility. Contains values: 'Not Needed', 'Active', 'Needs Update'. Contains NULL when safety plan status is not tracked.", + "mental_health|facilities|c_plan_stat": "Crisis plan readiness status for this facility. Contains values: 'Not Needed', 'Active', 'Needs Update'. Contains NULL when crisis planning status is not documented.", + "mental_health|facilities|s_system_chg": "Recent or planned changes to support systems at this facility. Contains values: 'Improved', 'Declined', 'Variable'. Contains NULL when support system change tracking is not maintained.", + "mental_health|clinicians|clin_key": "A TEXT primary key uniquely identifying each clinician record in the mental health assessment database.", + "mental_health|clinicians|clin_conf": "Clinician's confidence level in their assessment and treatment decisions. Contains values: 'Medium', 'Low', 'High'.", + "mental_health|clinicians|assess_lim": "Known limitations in the clinician's assessment capabilities or scope of practice. Contains values: 'Cognitive', 'Cultural', 'Language', 'Engagement', 'Time'. Contains NULL when limitation assessment is not completed.", + "mental_health|clinicians|docu_stat": "Current status of the clinician's documentation and record-keeping. Contains values: 'Complete', 'Incomplete', 'Pending'.", + "mental_health|clinicians|bill_code": "Billing code or credential identifier for the clinician's services. Contains CPT codes like 'CPT90511', 'CPT90696', 'CPT90854'.", + "mental_health|clinicians|nxt_rev_dt": "Next scheduled review date for the clinician's credentials in YYYY-MM-DD format.", + "mental_health|clinicians|care_coord": "Level and type of care coordination activities performed by this clinician. Contains values: 'Intensive', 'Regular', 'Limited'. Contains NULL when care coordination role is not defined.", + "mental_health|clinicians|ref_need": "Referral needs or specialties that this clinician commonly requires. Contains values: 'Services', 'Testing', 'Specialty'. Contains NULL when referral patterns are not tracked.", + "mental_health|clinicians|f_up_type": "Primary type of follow-up care provided by this clinician. Contains values: 'Therapy', 'Routine', 'Crisis', 'Assessment'.", + "mental_health|clinicians|f_up_freq": "Typical frequency of follow-up appointments scheduled by this clinician. 
Contains values: 'Weekly', 'Biweekly', 'Monthly', 'Quarterly'.", + "mental_health|clinicians|fac_connect": "Foreign key referencing Facilities(Fac_Key), linking the clinician to their primary practice facility.", + "mental_health|patients|pat_key": "A TEXT primary key uniquely identifying each patient record in the mental health assessment database.", + "mental_health|patients|pat_age": "Patient's age in years at the time of assessment or registration.", + "mental_health|patients|pat_gender": "Patient's gender identity. Contains values: 'M', 'F', 'Other'.", + "mental_health|patients|pat_eth": "Patient's ethnicity or racial background. Contains values: 'Other', 'Hispanic', 'White', 'Black', 'Asian'.", + "mental_health|patients|edu_level": "Patient's highest level of education completed. Contains values: 'High School', 'College', 'Graduate', 'Less than High School'.", + "mental_health|patients|emp_stat": "Patient's current employment status. Contains values: 'Retired', 'Employed', 'Unemployed', 'Student', 'Disabled'.", + "mental_health|patients|mari_stat": "Patient's marital or relationship status. Contains values: 'Widowed', 'Married', 'Single', 'Divorced', 'Separated'.", + "mental_health|patients|living_arr": "Patient's current living arrangement. Contains values: 'Alone', 'Partner', 'Family', 'Group Home'.", + "mental_health|patients|insur_type": "Type of health insurance coverage held by the patient. Contains values: 'Medicaid', 'Medicare', 'Private', 'None', 'Military'. Contains NULL when insurance information is not available.", + "mental_health|patients|insur_stat": "Current status of the patient's insurance coverage. Contains values: 'Pending', 'Approved', 'Denied', 'Active'.", + "mental_health|patients|disab_stat": "Patient's disability status and accommodations needed. Contains values: 'PENDING', 'pErManent', 'temporary', 'pErmAneNt', 'Temporary', 'Pending', 'permanent', 'TEMPORARY', 'PERMANENT'. Contains noise with mixed case variations like 'PeNDinG', 'PeNDING', 'TEmpOrarY', 'PERmAnent', 'PeRmanENT'. Contains NULL when disability status assessment is not completed.", + "mental_health|patients|house_stable": "Stability and security of the patient's housing situation. Contains values: 'Stable', 'Homeless', 'At Risk', 'Temporary'.", + "mental_health|patients|cult_factor": "Cultural factors that may impact the patient's mental health treatment. Contains values: 'Language', 'beliefs', 'Family', 'multiple'. Contains noise with mixed case variations like 'LANGUAGE', 'mUlTiplE', 'FAMILy', 'BELIefs', 'BELIEFS', 'MULTIPLE', 'FAMILY'. Contains NULL when cultural assessment is not conducted.", + "mental_health|patients|stigma_imp": "Impact of mental health stigma on the patient's treatment engagement. Contains values: 'moderate', 'Mild', 'severe'. Contains noise with mixed case variations like 'mODErATe', 'mILd', 'MOdERatE', 'seVERe', 'SEVERE', 'MODERATE', 'MILD'. Contains NULL when stigma impact is not evaluated.", + "mental_health|patients|fin_stress": "Level of financial stress experienced by the patient. Contains values: 'Severe', 'Moderate', 'Mild'. Contains NULL when financial stress assessment is not performed.", + "mental_health|patients|clin_lead_ref": "Foreign key referencing Clinicians(Clin_Key), identifying the patient's primary clinician.", + "mental_health|assessmentbasics|ab_key": "A TEXT primary key uniquely identifying each assessment record in the mental health database.", + "mental_health|assessmentbasics|a_type": "Type of mental health assessment conducted. 
Contains values: 'Initial', 'Emergency', 'Routine', 'Follow-up'.", + "mental_health|assessmentbasics|a_method": "Method used to conduct the assessment. Contains values: 'Phone', 'Self-report', 'In-person', 'Telehealth'.", + "mental_health|assessmentbasics|a_dur_min": "Duration of the assessment session in minutes.", + "mental_health|assessmentbasics|a_lang": "Primary language used during the assessment. Contains values: 'Chinese', 'French', 'English', 'Spanish'.", + "mental_health|assessmentbasics|a_valid": "Validity rating of the assessment results. Contains values: 'Questionable', 'Invalid', 'Valid'.", + "mental_health|assessmentbasics|resp_consist": "Consistency of patient responses throughout the assessment. Contains values: 'Medium', 'High', 'Low'. Contains NULL when response consistency is not evaluated.", + "mental_health|assessmentbasics|sympt_valid": "Validity of reported symptoms during the assessment. Contains values: 'Questionable', 'Valid', 'Invalid'.", + "mental_health|assessmentbasics|pat_owner_ref": "Foreign key referencing Patients(Pat_Key), linking the assessment to the specific patient.", + "mental_health|assessmentbasics|depr_improve_rate": "TEXT. The rate at which depression symptoms improve, calculated as the reduction in PHQ-9 score per month of therapy. Example: 1.2 points/month.", + "mental_health|assessmentbasics|therapy_exp_intensity": "TEXT. The total hours spent on therapy per month, based on the frequency and duration of therapy sessions. Example: 8 hours/month.", + "mental_health|assessmentbasics|med_side_eff_density": "TEXT. The density of side effects experienced from medications, calculated as the severity of side effects per medication per month. Example: 0.03 side-effects/med/month.", + "mental_health|assessmentbasics|crisis_event_rate": "TEXT. The rate at which crisis events occur, calculated as the total number of crisis interventions and emergency contacts per month. Example: 0.2 events/month.", + "mental_health|assessmentbasics|func_recovery_ef": "TEXT. The efficiency of functional recovery, calculated as the total Quality of Life score per month of therapy. Example: 4.5 QoL-points/month.", + "mental_health|assessmentbasics|adherence_index": "TEXT. A measure of treatment adherence, calculated as the adherence score per therapy appointment. Example: 0.75 adherence-score/appointment.", + "mental_health|assessmentbasics|sympt_fluct_index": "TEXT. A measure of the fluctuation in symptoms (PHQ-9, GAD7, Mood scores), calculated as the average fluctuation per assessment. Example: 3.2 fluctuation-points/assessment.", + "mental_health|assessmentbasics|hosp_risk_density": "TEXT. The risk of hospitalization, calculated as the total number of hospitalizations per year based on previous hospitalizations and diagnosis duration. Example: 1.5 risk-score/year.", + "mental_health|assessmentbasics|med_change_freq": "TEXT. The frequency of medication changes, calculated as the number of changes in medication type or dosage per month. Example: 0.1 changes/month.", + "mental_health|assessmentbasics|support_util_rate": "TEXT. The rate of social support utilization, calculated as the level of support used per emergency contact. Example: 2.0 support-level/contact.", + "mental_health|assessmentbasics|treatment_cost_eff": "TEXT. The cost-efficiency of treatment, calculated as the Quality of Life score per dollar spent on treatment. Example: 0.15 QoL-points/$. ", + "mental_health|assessmentbasics|recovery_goal_vel": "TEXT. 
The speed at which recovery goals are achieved, calculated as the number of goals achieved per month of therapy. Example: 0.5 goals/month.", + "mental_health|encounters|enc_key": "A TEXT primary key uniquely identifying each clinical encounter record in the mental health database.", + "mental_health|encounters|time_mark": "Timestamp recording the exact date and time when the clinical encounter occurred in YYYY-MM-DD HH:MM:SS format.", + "mental_health|encounters|ab_ref": "Foreign key referencing AssessmentBasics(AB_Key), linking the encounter to a specific assessment when applicable.", + "mental_health|encounters|pat_ref": "Foreign key referencing Patients(Pat_Key), linking the encounter to the specific patient.", + "mental_health|encounters|clin_id": "Identifier for the clinician who conducted this encounter (local reference, not foreign key).", + "mental_health|encounters|fac_id": "Identifier for the facility where this encounter took place (local reference, not foreign key).", + "mental_health|encounters|miss_appt": "Number of appointments missed by the patient prior to or following this encounter.", + "mental_health|encounters|tx_barrier": "Treatment barriers identified during this encounter. Contains values: 'multiple', 'time', 'financial', 'Transportation'. Contains noise with mixed case variations like 'MULTIPLE', 'FiNAncIal', 'TRAnsPorTATiON', 'FINANCIAL', 'TIME', 'TRANSPORTATION'. Contains NULL when treatment barriers are not assessed.", + "mental_health|encounters|nx_appt_dt": "Date of the next scheduled appointment following this encounter, stored as text with dd/mm/yyyy format like '21/02/2025', '18/04/2025', '13/04/2025'.", + "mental_health|encounters|dq_score": "Data quality score for this encounter record, indicating completeness and accuracy of documentation.", + "mental_health|encounters|assess_complete": "Assessment completeness indicator for this encounter. Contains standard completeness values.", + "mental_health|assessmentsymptomsandrisk|asr_key": "A TEXT primary key uniquely identifying each symptom and risk assessment record, linking to AssessmentBasics(AB_Key).", + "mental_health|assessmentsymptomsandrisk|phq9_scr": "Patient Health Questionnaire-9 total score, ranging from 0 to 27, measuring depression severity.", + "mental_health|assessmentsymptomsandrisk|phq9_sev": "PHQ-9 depression severity level based on total score. Contains values: 'Moderately Severe', 'Mild', 'Moderate', 'Severe'. Contains NULL when PHQ-9 is not administered.", + "mental_health|assessmentsymptomsandrisk|gad7_scr": "Generalized Anxiety Disorder 7-item scale total score, ranging from 0 to 21, measuring anxiety severity.", + "mental_health|assessmentsymptomsandrisk|gad7_sev": "GAD-7 anxiety severity level based on total score. Contains values: 'Mild', 'Moderate', 'Severe'. Contains NULL when GAD-7 is not administered.", + "mental_health|assessmentsymptomsandrisk|suic_ideation": "Presence and severity of suicidal ideation. Contains values: 'Intent', 'Active', 'Plan', 'Passive', 'None'. Contains NULL when suicidal ideation assessment is not completed.", + "mental_health|assessmentsymptomsandrisk|suic_risk": "Overall suicide risk level assessment. Contains values: 'Medium', 'High', 'Low'.", + "mental_health|assessmentsymptomsandrisk|self_harm": "History and current status of self-harm behaviors. Contains values: 'Recent', 'Remote', 'Current', 'None'. 
Contains NULL when self-harm assessment is not conducted.", + "mental_health|assessmentsymptomsandrisk|viol_risk": "Assessment of violence risk toward others. Contains values: 'Medium', 'Low', 'High'.", + "mental_health|assessmentsymptomsandrisk|sub_use": "Current substance use status. Contains values: 'Cannabis', 'Opioids', 'Alcohol', 'Multiple', 'None'. Contains NULL when substance use assessment is not completed.", + "mental_health|assessmentsymptomsandrisk|sub_use_freq": "Frequency of substance use when present. Contains values: 'Daily', 'Occasional', 'Weekly', 'Never'.", + "mental_health|assessmentsymptomsandrisk|sub_use_sev": "Severity level of substance use disorder when present. Contains values: 'Moderate', 'Mild', 'Severe'. Contains NULL when substance use severity is not evaluated.", + "mental_health|assessmentsocialanddiagnosis|asd_key": "A TEXT primary key uniquely identifying each social and diagnosis assessment record, linking to AssessmentBasics(AB_Key).", + "mental_health|assessmentsocialanddiagnosis|rec_status": "Patient's recovery status and stage in treatment. Contains values: 'Relapse', 'Stable', 'Advanced', 'Early'.", + "mental_health|assessmentsocialanddiagnosis|prim_dx": "Patient's primary mental health diagnosis. Contains values: 'Anxiety', 'PTSD', 'Bipolar', 'Schizophrenia', 'Depression'.", + "mental_health|assessmentsocialanddiagnosis|sec_dx": "Patient's secondary or comorbid mental health diagnosis. Contains values: 'OCD', 'Personality Disorder', 'Substance Use', 'Eating Disorder'. Contains NULL when secondary diagnosis is not applicable.", + "mental_health|assessmentsocialanddiagnosis|dx_dur_m": "Duration of the primary diagnosis in months since initial onset or first diagnosis.", + "mental_health|assessmentsocialanddiagnosis|prev_hosp": "Number of previous psychiatric hospitalizations or inpatient admissions for mental health treatment.", + "mental_health|assessmentsocialanddiagnosis|last_hosp_dt": "Date of the patient's last psychiatric hospitalization, stored as text with format yyyy/mm/dd like '2024/12/13', '2023/11/09', '2025/02/13' due to noise modifications.", + "mental_health|assessmentsocialanddiagnosis|qol_scr": "Quality of Life score typically ranging from 1 to 100.", + "mental_health|assessmentsocialanddiagnosis|func_imp": "Functional impairment assessment. Contains values: 'SEVERE', 'MODERATE', 'severe', 'moderate', 'mild', 'Severe', 'Moderate', 'Mild'. Contains noise with mixed case variations like 'SeVerE', 'MODErate', 'SEVErE', 'mIld', 'MILD'. Contains NULL when functional impairment is not assessed.", + "mental_health|treatmentbasics|tx_key": "A BIGSERIAL primary key auto-incrementing identifier for treatment records.", + "mental_health|treatmentbasics|enc_ref": "Foreign key referencing Encounters(Enc_Key), linking treatment to specific clinical encounters.", + "mental_health|treatmentbasics|cur_med": "Current medications prescribed to the patient. Free-text field containing medication combinations like 'Antidepressant,Antipsychotic', 'None,Antipsychotic,Mood Stabilizer'. Contains NULL when medication information is not documented.", + "mental_health|treatmentbasics|med_adh": "Medication adherence level. Contains values: 'Medium', 'Low', 'High'.", + "mental_health|treatmentbasics|med_side": "Medication side effects experienced. Contains values: 'Mild', 'Moderate', 'Severe'. Contains NULL when side effects are not monitored.", + "mental_health|treatmentbasics|med_chg": "Recent medication changes made. 
Contains values: 'Dose Adjustment', 'Augmentation', 'Switch'. Contains NULL when medication changes are not tracked.", + "mental_health|treatmentbasics|th_type": "Type of therapy being provided. Contains values: 'CBT', 'Group', 'Individual', 'Family'. Contains NULL when therapy type is not specified.", + "mental_health|treatmentbasics|th_freq": "Frequency of therapy sessions. Contains values: 'Biweekly', 'Monthly', 'Weekly', 'Quarterly'. Contains NULL when therapy frequency is not documented.", + "mental_health|treatmentbasics|th_dur_m": "Duration of therapy treatment in months.", + "mental_health|treatmentbasics|th_eng": "Level of patient engagement in therapy. Contains values: 'Medium', 'High', 'Low'.", + "mental_health|treatmentbasics|th_chg": "Recent therapy changes made to the treatment plan. Contains values: 'Frequency Change', 'Modality Change', 'No Change'. Contains NULL when therapy changes are not tracked.", + "mental_health|treatmentbasics|crisis_int": "Number of crisis interventions required during the treatment period.", + "mental_health|treatmentoutcomes|tx_out_key": "A BIGSERIAL primary key auto-incrementing identifier for treatment outcome records.", + "mental_health|treatmentoutcomes|tx_ref": "Foreign key referencing TreatmentBasics(Tx_Key), linking outcomes to specific treatment records.", + "mental_health|treatmentoutcomes|tx_goal_stat": "Treatment goals achievement status. Contains values: 'Not Started', 'In Progress', 'Achieved', 'Partially Met'.", + "mental_health|treatmentoutcomes|rec_goal_stat": "Recovery goals achievement status. Contains values: 'Not Started', 'In Progress', 'Achieved', 'Partially Met'.", + "mental_health|assessmentsymptomsandrisk|sympscores": { + "column_meaning": "JSONB column. Consolidates all symptom rating scores including mood, anxiety, sleep, appetite, energy, concentration, interest, and hopelessness measurements into a single assessment profile.", + "fields_meaning": { + "Mood_Scr": "Subjective mood rating score typically ranging from 1 to 10.", + "Anx_Scr": "Subjective anxiety rating score typically ranging from 1 to 10.", + "Slp_Scr": "Sleep quality rating score typically ranging from 1 to 10.", + "App_Scr": "Appetite rating score typically ranging from 1 to 10.", + "En_Scr": "Energy level rating score typically ranging from 1 to 10.", + "Con_Scr": "Concentration ability rating score typically ranging from 1 to 10.", + "Int_Scr": "Interest in activities rating score typically ranging from 1 to 10.", + "Hope_Scr": "Hopelessness rating score typically ranging from 1 to 10." + } + }, + "mental_health|assessmentsocialanddiagnosis|funcassess": { + "column_meaning": "JSONB column. Groups functional assessment metrics including work functioning, social functioning, ADL functioning, stress levels, coping skills, resilience, insight, and motivation into a comprehensive functional profile.", + "fields_meaning": { + "Work_Func": "Patient's level of functioning in work or vocational activities. Contains values: 'Disabled', 'Poor', 'Good', 'Fair'.", + "Soc_Func": "Patient's level of social functioning in various settings. Contains values: 'Isolated', 'Fair', 'Good', 'Poor'.", + "ADL_Func": "Patient's ability to perform activities of daily living independently. Contains values: 'Minimal Help', 'Independent', 'Needs Help'.", + "Strs_Lvl": "Patient's current stress level rating typically ranging from 1 to 10.", + "Cop_Skill": "Assessment of the patient's coping skills and strategies. 
Contains values: 'Good', 'Poor', 'Fair'.", + "Res_Scr": "Resilience score rating typically ranging from 1 to 10.", + "In_Level": "Patient's level of insight into their mental health condition. Contains values: 'Fair', 'Good', 'Poor'. Contains NULL when insight level is not assessed.", + "Motiv_Level": "Patient's motivation level for treatment and change. Contains values: 'High', 'Medium', 'Low'. Contains NULL when motivation assessment is not completed.", + "Soc_Sup": "Level of social support available to the patient. Contains values: 'Strong', 'Limited', 'Adequate', 'Weak'. Contains NULL when social support is not evaluated.", + "Fam_Inv": "Level of family involvement in the patient's treatment. Contains values: 'Low', 'High', 'Medium'. Contains NULL when family involvement is not assessed.", + "Rel_Qual": "Quality of the patient's interpersonal relationships. Contains values: 'Poor', 'Conflicted', 'Good', 'Fair'." + } + }, + "mental_health|treatmentoutcomes|txprogmet": { + "column_meaning": "JSONB column. Combines treatment progress metrics including therapy progress, treatment adherence, response, side effect burden, symptom improvement, functional improvement, and satisfaction measures.", + "fields_meaning": { + "Th_Prog": "Therapy progress assessment. Contains values: 'Fair', 'Good', 'Poor', 'Excellent'. Contains NULL when therapy progress is not evaluated.", + "Tx_Adh": "Treatment adherence level. Contains values: 'Non-compliant', 'Medium', 'High', 'Compliant'.", + "Tx_Resp": "Treatment response assessment. Contains values: 'Poor', 'Partial', 'Good', 'Excellent'. Contains NULL when treatment response is not evaluated.", + "Side_Burd": "Side effect burden experienced by the patient. Contains values: 'Mild', 'Moderate', 'Good', 'Severe'. Contains NULL when side effect burden is not assessed.", + "Symp_Imp": "Symptom improvement assessment. Contains values: 'MODERATE', 'minimal', 'Significant', 'moderate'. Contains noise with mixed case variations like 'MODeRatE', 'MoDERATe', 'miNiMAL', 'SignIFIcaNT', 'MINIMAL', 'SIGNIFICANT'. Contains NULL when symptom improvement is not evaluated.", + "Func_Impv": "Functional improvement assessment. Contains values: 'MODERATE', 'moDeraTe', 'Moderate', 'MINIMAL', 'Minimal', 'siGnIfiCAnT', 'moderate', 'significant'. Contains noise with mixed case variations like 'moDEraTE', 'MINimAL', 'signiFIcant', 'SignIFicAnT'. Contains NULL when functional improvement is not assessed.", + "Work_Stat_Chg": "Work status changes during treatment. Contains values: 'Leave', 'reduced hours', 'Terminated'. Contains noise with mixed case variations like 'REDUCED HOURS', 'LEAVE', 'termINaTEd', 'TeRMINaTed', 'TERMINATED', 'REduCeD HOURS'. Contains NULL when work status changes are not tracked.", + "Sat_Scr": "Patient satisfaction score typically ranging from 1 to 10.", + "Ther_Alliance": "Therapeutic alliance quality assessment. Contains values: 'poor', 'StROng', 'moderate', 'strong', 'MODERATE', 'POOR', 'Weak', 'Strong'. Contains noise with mixed case variations like 'stroNG', 'sTRONg', 'mODERAte', 'weAk', 'STRONG', 'WEAK'. Contains NULL when therapeutic alliance is not assessed.", + "Tx_Eng": "Treatment engagement level. Contains values: 'High', 'Medium', 'Low', 'Non-compliant'.", + "Tx_Sat": "Treatment satisfaction level. Contains values: 'meDiUm', 'DisSAtISFieD', 'Dissatisfied', 'lOw', 'Low', 'DISSATISFIED', 'HIGH', 'High', 'Medium'. Contains noise with mixed case variations like 'medIUm', 'DISsATISFied', 'MEdiuM', 'HIGh', 'MEDIUM'. 
Contains NULL when treatment satisfaction is not assessed." + } + } +} \ No newline at end of file diff --git a/mental_health/mental_health_kb.jsonl b/mental_health/mental_health_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..15c1fea8439f752c36eb7e98438424bde839c53c --- /dev/null +++ b/mental_health/mental_health_kb.jsonl @@ -0,0 +1,96 @@ +{"id": 0, "knowledge": "Average PHQ-9 Score by Facility (APSF)", "description": "Calculates the average PHQ-9 depression score for patients assessed at a specific facility.", "definition": "APSF = \frac{sum \text{Patient Health Questionnaire-9 scores}}{\text{Total number of assessments}} \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Average GAD-7 Score by Facility (AGSF)", "description": "Calculates the average GAD-7 anxiety score for patients assessed at a specific facility.", "definition": "AGSF = \frac{sum \text{Generalized Anxiety Disorder-7 scores}}{\text{Total number of assessments}} \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Treatment Adherence Rate (TAR)", "description": "Measures the proportion of patients with high or medium treatment adherence at a facility.", "definition": "TAR = \frac{\text{Number of outcomes with Treatment Adherence of 'High', 'Medium', or 'Compliant'}}{\text{Total number of treatment outcomes}} \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Suicide Risk Prevalence (SRP)", "description": "Calculates the percentage of assessments indicating high or severe suicide risk at a facility.", "definition": "SRP = \frac{\text{Number of assessments with a 'High' suicide risk}}{\text{Total number of assessments}} \times 100 \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Therapy Engagement Score (TES)", "description": "Computes an average engagement score across therapy sessions.", "definition": "TES = \text{The average of engagement scores, where the score is 3 for 'High' engagement, 2 for 'Medium', and 1 for 'Low'.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Facility Resource Adequacy Index (FRAI)", "description": "Quantifies the adequacy of community resources available at a facility.", "definition": "FRAI = \text{The average of resource scores, where the score is 3 for 'Comprehensive' or 'Extensive' community resources, 2 for 'Adequate', and 1 for 'Limited'.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Patient Functional Impairment Score (PFIS)", "description": "Calculates an average functional impairment score across patients.", "definition": "PFIS = \text{The average of functional impairment scores, where the score is 3 for 'Severe' impairment, 2 for 'Moderate', and 1 for 'Mild'.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Crisis Intervention Frequency (CIF)", "description": "Calculated as the total number of crisis interventions at a facility divided by the total number of unique patients in that facility.", "definition": "CIF = \frac{sum \text{Crisis Interventions}}{\text{Total number of unique patients}} \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Social Support Effectiveness (SSE)", "description": "Evaluates the 
effectiveness of social support based on social support level and relationship quality.", "definition": "SSE = \text{The average of the sum of two scores: a social support score (Strong=3, Adequate=2, Limited=1, Weak=0) and a relationship quality score (Good=3, Fair=2, Poor=1, Conflicted=0).}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Missed Appointment Rate (MAR)", "description": "Calculates the average number of missed appointments per patient at a facility.", "definition": "MAR = \frac{sum \text{Missed Appointments}}{\text{Total number of unique patients}} \text{ for a given facility.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "High-Risk Patient", "description": "Identifies patients with elevated suicide risk or severe symptoms.", "definition": "A patient whose assessed suicide risk is 'High', or whose depression score (PHQ-9) is over 15, or whose anxiety score (GAD-7) is over 15.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Treatment-Resistant Patient", "description": "Identifies patients with poor treatment response despite adherence.", "definition": "A patient whose treatment response is rated 'Poor' despite having a treatment adherence level of 'Medium' or 'High'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Complex Care Needs", "description": "Identifies patients requiring intensive care coordination due to multiple risk factors.", "definition": "A patient at a facility where the Suicide Risk Prevalence is over 20%, the average Patient Functional Impairment Score is over 2.5, and the patient's substance use includes 'Opioids' or 'Multiple' substances.", "type": "domain_knowledge", "children_knowledge": [3, 6]} +{"id": 13, "knowledge": "Stable Recovery Patient", "description": "Identifies patients showing stable recovery with good functional outcomes.", "definition": "A patient whose recovery status is 'Stable' and whose functional improvement is rated as 'Moderate' or 'Significant'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Low Engagement Risk", "description": "Identifies patients at risk of disengagement from therapy.", "definition": "A patient with a Therapy Engagement Score below 1.5 and a current therapy engagement level of 'Low'.", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 15, "knowledge": "Resource-Supported Facility", "description": "Identifies facilities with adequate or comprehensive community resources.", "definition": "A facility where the available community resources are rated as 'Adequate', 'Comprehensive', or 'Extensive'.", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 16, "knowledge": "High Social Support Patient", "description": "Identifies patients with strong social support and good relationship quality.", "definition": "A patient with a Social Support Effectiveness score of 5 or greater.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 17, "knowledge": "Frequent Crisis Patient", "description": "Identifies patients with frequent crisis interventions.", "definition": "A patient at a facility where the average Crisis Intervention Frequency is greater than 2.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 18, "knowledge": "Non-Compliant Patient", "description": "Identifies patients with consistent non-compliance in treatment.", "definition": "A patient whose treatment adherence from their outcome 
record and their medication adherence are both rated as 'Non-compliant'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "High Appointment Adherence", "description": "Identifies patients with low missed appointment rates.", "definition": "A patient at a facility where the average Missed Appointment Rate is less than 1.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 20, "knowledge": "PHQ-9 Score (Depression)", "description": "Illustrates the value of the PHQ-9 score for depression severity.", "definition": "Ranges from 0 to 27. A score of 0–4 indicates minimal depression, 5–9 mild, 10–14 moderate, 15–19 moderately severe, and 20–27 severe.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "GAD-7 Score (Anxiety)", "description": "Illustrates the value of the GAD-7 score for anxiety severity.", "definition": "Ranges from 0 to 21. A score of 0–4 indicates minimal anxiety, 5–9 mild, 10–14 moderate, and 15–21 severe.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Suicide Risk Level", "description": "Illustrates the value of the suicide risk level.", "definition": "Ranges from Low to High. Low indicates minimal risk, Medium indicates some concern, and High indicates immediate concern and need for intervention.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Therapy Engagement", "description": "Illustrates the value of therapy engagement levels.", "definition": "Ranges from Low to High. Low indicates minimal participation, Medium indicates regular participation, and High indicates active engagement.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Community Resources", "description": "Illustrates the value of community resource availability.", "definition": "Ranges from Limited to Comprehensive/Extensive. Limited indicates few resources, Adequate indicates sufficient resources, and Comprehensive/Extensive indicates robust resources.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Functional Impairment", "description": "Illustrates the value of functional impairment levels.", "definition": "Ranges from Mild to Severe. Mild indicates minimal impact on daily life, Moderate indicates noticeable impact, and Severe indicates significant disruption.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Treatment Adherence", "description": "Illustrates the value of treatment adherence levels.", "definition": "Ranges from Non-compliant to High/Compliant. Non-compliant indicates no adherence, Low indicates occasional adherence, Medium indicates regular adherence, and High/Compliant indicates consistent adherence.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Crisis Intervention Count", "description": "Illustrates the value of the crisis intervention count.", "definition": "A numeric value indicating the number of crisis interventions. A value of 0 indicates no interventions, while higher values (e.g., 3) indicate frequent interventions.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Social Support Level", "description": "Illustrates the value of social support levels.", "definition": "Ranges from Weak to Strong. 
Weak indicates minimal support, Limited indicates some support, Adequate indicates good support, and Strong indicates robust support.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Missed Appointment Count", "description": "Illustrates the value of the missed appointment count.", "definition": "A numeric value indicating the number of missed appointments. A value of 0 indicates perfect attendance, while higher values (e.g., 5) indicate frequent absences.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Symptom Severity Index (SSI)", "description": "Calculates a combined average symptom severity score for a facility, based on depression and anxiety.", "definition": "SSI = \frac{\text{Average PHQ-9 Score} + \text{Average GAD-7 Score}}{2}", "type": "calculation_knowledge", "children_knowledge": [0, 1]} +{"id": 31, "knowledge": "Engagement-Adherence Score (EAS)", "description": "Computes a composite score reflecting patient participation and adherence to treatment plans at a facility.", "definition": "EAS = \frac{\text{Therapy Engagement Score} + (\text{Treatment Adherence Rate} \times 3)}{2}", "type": "calculation_knowledge", "children_knowledge": [4, 2]} +{"id": 32, "knowledge": "Facility Risk Profile Index (FRPI)", "description": "Generates an index indicating the overall risk level associated with the patient population at a facility.", "definition": "FRPI = (\frac{\text{Suicide Risk Prevalence}}{100} \times 5) + \text{Patient Functional Impairment Score}", "type": "calculation_knowledge", "children_knowledge": [3, 6]} +{"id": 33, "knowledge": "Patient Stability Metric (PSM)", "description": "Calculates an index reflecting patient stability, inversely related to crisis frequency and missed appointments.", "definition": "PSM = \frac{1}{1 + \text{Crisis Intervention Frequency} + \text{Missed Appointment Rate}}", "type": "calculation_knowledge", "children_knowledge": [7, 9]} +{"id": 34, "knowledge": "Resource-Demand Differential (RDD)", "description": "Measures the potential gap between average patient functional needs and available facility resources.", "definition": "RDD = \text{Patient Functional Impairment Score} - \text{Facility Resource Adequacy Index}", "type": "calculation_knowledge", "children_knowledge": [6, 5]} +{"id": 35, "knowledge": "Socio-Environmental Support Index (SESI)", "description": "Computes a composite index reflecting the quality of the patient's social environment and the facility's resource context.", "definition": "SESI = \frac{\text{Average Social Support Effectiveness} + \text{Facility Resource Adequacy Index}}{2}", "type": "calculation_knowledge", "children_knowledge": [8, 5]} +{"id": 36, "knowledge": "Adherence Effectiveness Ratio (AER)", "description": "Calculates a ratio comparing treatment adherence rate to the average functional impairment, suggesting potential treatment impact relative to need.", "definition": "AER = \frac{\text{Treatment Adherence Rate}}{\text{Patient Functional Impairment Score}} \text{ (handle division by zero)}", "type": "calculation_knowledge", "children_knowledge": [2, 6]} +{"id": 37, "knowledge": "Engagement Deficit Index (EDI)", "description": "Quantifies the degree of patient disengagement, considering both therapy engagement scores and appointment attendance.", "definition": "EDI = (3 - \text{Therapy Engagement Score}) \times (1 + \text{Missed Appointment Rate})", "type": "calculation_knowledge", "children_knowledge": [4, 9]} +{"id": 38, "knowledge": 
"Comprehensive Facility Risk Score (CFRS)", "description": "A normalized index assessing overall facility risk based on combined depression severity, suicide risk prevalence, and functional impairment.", "definition": "CFRS = \frac{\text{Average PHQ-9 Score}}{27} + \frac{\text{Suicide Risk Prevalence}}{100} + \frac{\text{Patient Functional Impairment Score}}{3}", "type": "calculation_knowledge", "children_knowledge": [0, 3, 6]} +{"id": 39, "knowledge": "Support System Pressure Index (SSPI)", "description": "Index assessing the pressure on support systems based on crisis frequency relative to social support effectiveness.", "definition": "SSPI = \frac{\text{Crisis Intervention Frequency}}{\text{Average Social Support Effectiveness} + 1}", "type": "calculation_knowledge", "children_knowledge": [7, 8]} +{"id": 40, "knowledge": "High-Need, Under-Resourced Facility", "description": "Identifies facilities facing significant aggregate patient risk without adequate community resources.", "definition": "A facility where the Facility Risk Profile Index is greater than 4.5 and the Facility Resource Adequacy Index is less than 1.5.", "type": "domain_knowledge", "children_knowledge": [32, 5]} +{"id": 41, "knowledge": "Facility with Engaged but High-Impairment Population", "description": "Identifies facilities where the patient population is generally engaged and adherent but continues to struggle with high functional impairment.", "definition": "A facility where the Engagement-Adherence Score is greater than 2.0 and the Patient Functional Impairment Score is also greater than 2.0.", "type": "domain_knowledge", "children_knowledge": [31, 6]} +{"id": 42, "knowledge": "Patient with Strong Recovery Capital", "description": "Identifies patients demonstrating high social support effectiveness coupled with low functional impairment.", "definition": "A patient with a Social Support Effectiveness score of 5 or greater and whose functional impairment is rated as 'Mild'.", "type": "domain_knowledge", "children_knowledge": [8, 6]} +{"id": 43, "knowledge": "Facility Attrition Risk Indicator", "description": "Identifies facilities potentially experiencing high patient dropout, characterized by low engagement/adherence and high missed appointment rates.", "definition": "A facility where the Engagement-Adherence Score is less than 1.5 and the Missed Appointment Rate is greater than 2.5.", "type": "domain_knowledge", "children_knowledge": [31, 9]} +{"id": 44, "knowledge": "Well-Resourced High-Support Environment", "description": "Identifies facilities that are well-resourced and serve a patient population with generally high levels of social support effectiveness.", "definition": "A facility where the Facility Resource Adequacy Index is 2.0 or greater and the average Social Support Effectiveness score of its patients is 4.5 or greater.", "type": "domain_knowledge", "children_knowledge": [5, 8]} +{"id": 45, "knowledge": "Patient with Severe Comorbid Distress Profile", "description": "Identifies patients experiencing significant simultaneous distress across depression, anxiety, and functional domains.", "definition": "A patient whose depression score (PHQ-9) is 15 or greater, whose anxiety score (GAD-7) is 15 or greater, AND whose functional impairment is rated as 'Severe'.", "type": "domain_knowledge", "children_knowledge": [0, 1, 6]} +{"id": 46, "knowledge": "Facility with Potential Treatment Inertia", "description": "Identifies facilities where patients seem engaged in therapy but struggle with overall treatment 
adherence, suggesting potential systemic barriers or resistance.", "definition": "A facility where the Therapy Engagement Score is greater than 2.2 but the Treatment Adherence Rate is less than 0.6.", "type": "domain_knowledge", "children_knowledge": [4, 2]} +{"id": 47, "knowledge": "Patient with High Crisis & Low Support Profile", "description": "Identifies patients characterized by frequent crisis interventions and weak social support systems.", "definition": "A patient with a total crisis intervention count greater than 2 AND a personal Social Support Effectiveness score of less than 3.", "type": "domain_knowledge", "children_knowledge": [7, 8]} +{"id": 48, "knowledge": "Facility Demonstrating Strong Patient Retention", "description": "Identifies facilities showing positive performance indicators related to high patient adherence and low missed appointment rates.", "definition": "A facility where the Treatment Adherence Rate is greater than 0.75 and the Missed Appointment Rate is less than 1.0.", "type": "domain_knowledge", "children_knowledge": [2, 9]} +{"id": 49, "knowledge": "High Severity, High Risk Patient Group", "description": "Identifies patients presenting with both high symptom severity (depression/anxiety) and elevated suicide risk.", "definition": "A patient with a depression score (PHQ-9) over 19 OR an anxiety score (GAD-7) over 14, AND an assessed suicide risk of 'High'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 50, "knowledge": "Clinical Improvement Potential Index (CIPI)", "description": "Calculates a ratio comparing patient engagement/adherence to overall symptom severity at a facility, suggesting potential responsiveness to intervention.", "definition": "CIPI = \frac{\text{Engagement-Adherence Score}}{\text{Symptom Severity Index} + 1}", "type": "calculation_knowledge", "children_knowledge": [31, 30]} +{"id": 51, "knowledge": "Facility Efficiency Index (FEI)", "description": "Estimates facility efficiency by relating the achieved patient stability metric to the available facility resource adequacy.", "definition": "FEI = \text{Patient Stability Metric} \times \text{Facility Resource Adequacy Index}", "type": "calculation_knowledge", "children_knowledge": [33, 5]} +{"id": 52, "knowledge": "Therapeutic Alliance & Engagement Score (TAES)", "description": "Computes a combined score reflecting both the average clinician-reported therapeutic alliance and the calculated therapy engagement score for a facility.", "definition": "TAES = \frac{\text{Average Therapeutic Alliance Score} + \text{Therapy Engagement Score}}{2}, where the alliance score is 3 for 'Strong', 2 for 'Moderate', 1 for 'Weak', and 0 for 'Poor'.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 53, "knowledge": "Recovery Trajectory Index (RTI)", "description": "Estimates the effectiveness of treatment adherence in achieving functional improvement at a facility.", "definition": "RTI = \text{Average Functional Improvement Score} \times \text{Treatment Adherence Rate}, where the improvement score is 3 for 'Significant', 2 for 'Moderate', and 1 for 'Minimal'.", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 54, "knowledge": "Crisis Adherence Ratio (CAR)", "description": "Calculates the ratio of crisis intervention frequency to the treatment adherence rate, indicating crises occurring per unit of adherence.", "definition": "CAR = \frac{\text{Crisis Intervention Frequency}}{\text{Treatment Adherence Rate} + 0.01}", "type": "calculation_knowledge", 
"children_knowledge": [7, 2]} +{"id": 55, "knowledge": "Facility with High Clinical Leverage Potential", "description": "Identifies facilities with a highly engaged and adherent patient population that still experiences significant symptom severity, suggesting readiness for potentially more intensive or alternative interventions.", "definition": "A facility where the Engagement-Adherence Score is greater than 2.5 AND the Symptom Severity Index is greater than 15.", "type": "domain_knowledge", "children_knowledge": [31, 30]} +{"id": 56, "knowledge": "Patient Exhibiting Fragile Stability", "description": "Identifies patients currently classified as 'Stable Recovery' but who exhibit risk factors like frequent missed appointments or low social support, suggesting potential for destabilization.", "definition": "A patient who meets the 'Stable Recovery Patient' criteria BUT has an average of more than 2 missed appointments OR has a personal Social Support Effectiveness score of less than 3.", "type": "domain_knowledge", "children_knowledge": [13, 9, 8]} +{"id": 57, "knowledge": "Resource-Intensive High-Risk Patient Cohort", "description": "Identifies patients requiring significant care coordination and intervention due to possessing characteristics of both Complex Care Needs and Frequent Crisis Patterns.", "definition": "A patient who meets the criteria for both 'Complex Care Needs' AND 'Frequent Crisis Patient'.", "type": "domain_knowledge", "children_knowledge": [12, 17]} +{"id": 58, "knowledge": "Facility with Potential Engagement-Outcome Disconnect", "description": "Identifies facilities where high therapy engagement scores do not seem to translate into expected functional improvements or recovery progression.", "definition": "A facility where the Therapy Engagement Score is greater than 2.0 but the Recovery Trajectory Index is less than 0.8.", "type": "domain_knowledge", "children_knowledge": [4, 53]} +{"id": 59, "knowledge": "Systemically Stressed Facility Environment", "description": "Identifies facilities potentially facing overwhelming systemic stress, characterized by a significant gap between patient needs and resources, compounded by high patient attrition risk.", "definition": "A facility where the Resource-Demand Differential is greater than 1.0 AND it meets the criteria for 'Facility Attrition Risk Indicator'.", "type": "domain_knowledge", "children_knowledge": [34, 43]} +{"id": 60, "knowledge": "Correlation Between Resource Adequacy and Adherence (CRAA)", "description": "Measures the correlation between individual facility resource adequacy scores and treatment adherence rates.", "definition": "CRAA = \text{CORRELATION}(\text{facility resource score}, \text{facility treatment adherence rate})", "type": "calculation_knowledge", "children_knowledge": [5, 2]} +{"id": 61, "knowledge": "Facility Performance Quadrant (FPQ)", "description": "Categorizes facilities into performance quadrants based on their Treatment Adherence Rate and Patient Stability Metric relative to median thresholds.", "definition": "A facility is assigned to a quadrant based on whether its Treatment Adherence Rate and Patient Stability Metric are above or below the median values for all facilities.", "type": "domain_knowledge", "children_knowledge": [2, 33]} +{"id": 62, "knowledge": "Stale Treatment Outcome Records", "description": "Treatment outcome records associated with encounters that occurred before a specific time threshold (e.g., older than 60 days).", "definition": "Treatment outcome records where the 
timestamp of the linked clinical encounter is older than a defined interval (e.g., 60 days).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 63, "knowledge": "Engagement Driver Analysis", "description": "A diagnostic analysis that disaggregates a composite engagement metric into its constituent parts to identify the primary factors contributing to the overall score across different cohorts.", "definition": "The process of separately calculating and presenting the average Therapy Engagement Score and Treatment Adherence Rate for a given group to understand the drivers behind their composite Engagement-Adherence Score.", "type": "domain_knowledge", "children_knowledge": [4, 2]} +{"id": 64, "knowledge": "Vulnerable Patient Watchlist Generation", "description": "A domain-specific process for creating a prioritized and actionable list of at-risk patients.", "definition": "The process of identifying patients who meet specific high-risk criteria (like the 'Patient with High Crisis & Low Support Profile'), displaying contextual and actionable data (like last hospitalization and next appointment), and sorting the results to prioritize intervention.", "type": "domain_knowledge", "children_knowledge": [47]} +{"id": 65, "knowledge": "Post-Hospitalization Response Time", "description": "A performance metric that measures the average time it takes for a high-risk patient to have their next scheduled appointment following a hospitalization.", "definition": "\text{PHRT} = \text{Average time difference between a patient's next scheduled appointment and their last hospitalization date for a high-risk cohort.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 66, "knowledge": "Annualized Hospitalization Risk", "description": "A risk score calculated by dividing a patient's total previous hospitalizations by the duration of their diagnosis in years.", "definition": "\text{AHR} = \frac{\text{Total previous hospitalizations}}{(\text{Diagnosis duration in months} / 12)}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 67, "knowledge": "Intensive Care Coordination Flag", "description": "A flag indicating a patient requires immediate, intensive care coordination based on their risk profile and current treatment intensity.", "definition": "A patient is flagged if their Annualized Hospitalization Risk is greater than 1.5 AND their total monthly therapy hours are less than 8.", "type": "domain_knowledge", "children_knowledge": [66]} +{"id": 68, "knowledge": "Annual Treatment Benefit", "description": "The projected total reduction in depression score over a 12-month period.", "definition": "\text{Benefit} = \text{Numeric value of depression improvement rate} \times 12", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 69, "knowledge": "Annual Treatment Burden", "description": "The projected total number of side effects over a 12-month period, assuming a standard number of medications.", "definition": "\text{Burden} = \text{Numeric value of medication side effect density} \times \text{number of medications} \times 12", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 70, "knowledge": "High Responder Profile", "description": "Identifies patients for whom the projected annual benefit of treatment is at least five times greater than the projected burden of side effects.", "definition": "A patient where the 'Annual Treatment Benefit' is at least five times greater than the 'Annual Treatment Burden' AND they have been in therapy for 
at least 6 months.", "type": "domain_knowledge", "children_knowledge": [68, 69]} +{"id": 71, "knowledge": "Projected Annual QoL Gain", "description": "Calculates the total expected Quality of Life points a patient will gain over a year based on their current cost-effectiveness and a given budget.", "definition": "\text{Projected Gain} = \text{Numeric value of treatment cost-effectiveness} \times \text{annual budget}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 72, "knowledge": "Projected QoL Gain from Max Out-of-Pocket", "description": "Calculates the total expected Quality of Life points a patient will gain over a year based on their current cost-effectiveness and their insurance plan's maximum out-of-pocket budget.", "definition": "\text{Projected Gain} = \text{Numeric value of treatment cost-effectiveness} \times \text{annual max out-of-pocket budget}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 73, "knowledge": "Break-Even Therapy Hours", "description": "Calculates the number of therapy hours a patient needs to receive to achieve a predefined 'clinically valuable' outcome threshold, based on their personal cost-effectiveness and billing rates.", "definition": "\text{Hours} = \frac{(\frac{\text{Quality of Life Threshold}}{\text{cost-effectiveness rate}}) / \text{exchange rate}}{\text{hourly rate}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 74, "knowledge": "Top Clinicians by Goal Velocity", "description": "A ranked list of clinicians based on their patients' average 'Recovery Goal Velocity', measured in goals achieved per month.", "definition": "A ranked list of clinicians based on the average recovery goal velocity (goals per month) of their assigned patients.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 75, "knowledge": "Peer Mentor Candidate Profile", "description": "Identifies patients who are in stable recovery, have a long-term diagnosis, and demonstrate high resilience, making them suitable candidates for mentorship roles.", "definition": "A patient whose recovery status is 'Stable', has been diagnosed for over 72 months, and has a resilience score of 8 or greater.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 76, "knowledge": "Chronic High-Acuity Bipolar Cohort", "description": "Identifies a specific group of Bipolar patients characterized by a long diagnosis history and multiple prior hospitalizations.", "definition": "A patient with a primary diagnosis of 'Bipolar' who has been diagnosed for 120 months or more and has had more than 3 previous hospitalizations.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 77, "knowledge": "Intensive Therapy Standard Compliance", "description": "A boolean check to determine if a patient's therapy intensity meets or exceeds a given threshold.", "definition": "\text{Compliance} = (\text{Numeric value of therapy intensity in hours per month} >= \text{threshold})", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 78, "knowledge": "Therapeutic Return on Investment (T-ROI)", "description": "Calculates the efficiency of therapy by measuring the rate of depression improvement per hour of therapy intensity.", "definition": "\text{T-ROI} = \frac{\text{Numeric value of depression improvement rate}}{\text{Numeric value of therapy intensity in hours per month}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 79, "knowledge": "Caseload Complexity Score", "description": "The average 
hospitalization risk density across all patients assigned to a specific clinician.", "definition": "\text{Score} = \text{The average hospitalization risk density across all patients assigned to a specific clinician.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 80, "knowledge": "Cost-Effectiveness by Referral Source", "description": "The average treatment cost-effectiveness for a cohort of patients grouped by their referral source.", "definition": "The average treatment cost-effectiveness for a group of patients, categorized by their referral source.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 81, "knowledge": "Annual Goal Achievement", "description": "The projected number of recovery goals a patient is expected to achieve over a 12-month period, based on their current monthly velocity.", "definition": "\text{Annual Goals} = \text{Numeric value of recovery goal velocity} \times 12", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 82, "knowledge": "Inter-Group Goal Achievement Gap", "description": "Calculates the difference in the average Annual Goal Achievement between two specified diagnostic cohorts.", "definition": "\text{Gap} = |\text{Average Annual Goal Achievement}{group1} - \text{Average Annual Goal Achievement}{group2}|", "type": "calculation_knowledge", "children_knowledge": [81]} +{"id": 83, "knowledge": "Annualized Side Effect Score", "description": "A projected total side effect burden over a 12-month period, assuming a standard number of medications.", "definition": "\text{Score} = \text{Numeric value of medication side effect density} \times \text{number of medications} \times 12", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 84, "knowledge": "Annualized Side Effect Score", "description": "A projected total side effect burden over a 12-month period, assuming a standard number of medications.", "definition": "\text{Score} = \text{Numeric value of medication side effect density} \times \text{number of medications} \times 12", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 85, "knowledge": "High-Burden Prevalence Rate", "description": "Calculates the percentage of patients within a specific cohort that are flagged as having an Annualized Side Effect Score over the defined threshold.", "definition": "The percentage of patients within a cohort who have an 'Annualized Side Effect Score' greater than a defined threshold.", "type": "calculation_knowledge", "children_knowledge": [84]} +{"id": 86, "knowledge": "Vulnerable Patient Watchlist Generation", "description": "A domain-specific process for creating a prioritized and actionable list of at-risk patients.", "definition": "The process of identifying patients who meet specific high-risk criteria (like the 'Patient with High Crisis & Low Support Profile'), creating a dedicated table, and populating it with contextual, actionable data (like last hospitalization and next appointment) to prioritize clinical intervention.", "type": "domain_knowledge", "children_knowledge": [47]} +{"id": 87, "knowledge": "Risk Factor Profile", "description": "A consolidated report for a specific patient cohort that combines key demographic and clinical risk factors from their most recent assessments.", "definition": "For a given patient, a selection of fields including their primary diagnosis, substance use status, and housing stability drawn from their latest assessment records.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 88, 
"knowledge": "High QoL Gain Candidate", "description": "A classification for patients whose projected treatment benefits, measured in Quality of Life points, significantly exceed a defined clinical threshold based on their financial budget.", "definition": "A patient is flagged if their 'Projected Annual QoL Gain' is greater than 100.", "type": "domain_knowledge", "children_knowledge": [71]} +{"id": 89, "knowledge": "Insurance-Adjusted QoL Projection", "description": "A calculation that projects a patient's annual Quality of Life gain by applying an insurance-specific annual budget to their personal treatment cost-effectiveness rate.", "definition": "\text{Projection} = \text{cost-effectiveness rate} \times \text{budget}{\text{plan}}, \text{where } \text{budget}{\text{plan}} = \begin{cases} 6000 & \text{if insurance type = 'Private'} \\ 3500 & \text{if insurance type = 'Medicare'} \\ 2500 & \text{if insurance type = 'Medicaid'} \\ 0 & \text{otherwise} end{cases}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 90, "knowledge": "High-Risk Escalation Protocol", "description": "An automated clinical safety procedure that elevates the care coordination level for a clinician when one of their patients is assessed with a high suicide risk.", "definition": "A procedure to update a clinician's care coordination level to 'Intensive' if they are the lead for a patient whose most recent assessment indicates a 'High' suicide risk.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 91, "knowledge": "Intensive Caseload Summary", "description": "A report that details the specific high-risk patient counts and facility resources for clinicians currently under an 'Intensive' care coordination status.", "definition": "A list of clinicians with an 'Intensive' care coordination level, showing their ID, a count of their patients with a 'High' suicide risk, and their facility's resource level.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 92, "knowledge": "Mandatory Case Review Protocol", "description": "A safety protocol that automatically flags patients for a case review when their hospitalization risk exceeds a set threshold.", "definition": "A procedure to flag a patient for case review if their latest hospitalization risk density is greater than 1.5.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 93, "knowledge": "Unaddressed Case Review Report", "description": "A report that identifies patients who are flagged for a mandatory case review but do not yet have a follow-up appointment scheduled in their latest encounter.", "definition": "A list of patients who are flagged for case review but do not have a next appointment scheduled in their most recent encounter record.", "type": "domain_knowledge", "children_knowledge": [92]} +{"id": 94, "knowledge": "Therapeutic Goal Adjustment", "description": "A clinical management action where a patient's performance metric is updated in their record not to correct an error, but to codify a new, forward-looking therapeutic target in response to a significant life event.", "definition": "An update operation on a patient's metric to reflect a new clinical goal rather than a new measurement.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 95, "knowledge": "At-Risk Independent Living Cohort", "description": "A high-risk group of patients characterized by living alone while also having a high dependency on external support systems, making them vulnerable to sudden crises.", "definition": 
"Patients who live alone AND have a recent support utilization rate greater than 2.5.", "type": "domain_knowledge", "children_knowledge": -1} \ No newline at end of file diff --git a/mental_health/mental_health_schema.txt b/mental_health/mental_health_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..9367b511fe31cfdbdbc912ce4520ea8253220960 --- /dev/null +++ b/mental_health/mental_health_schema.txt @@ -0,0 +1,235 @@ +CREATE TABLE "facilities" ( +fac_key text NOT NULL, +r_source text NULL, +env_stress text NULL, +life_impact text NULL, +season_pat text NULL, +legl_issue text NULL, +spt_svc text NULL, +com_res text NULL, +emer_contact text NULL, +s_plan_stat text NULL, +c_plan_stat text NULL, +s_system_chg text NULL, + PRIMARY KEY (fac_key) +); + +First 3 rows: +fac_key r_source env_stress life_impact season_pat legl_issue spt_svc com_res emer_contact s_plan_stat c_plan_stat s_system_chg +--------- ---------- ------------ ------------- ------------ ------------ --------------- ------------- -------------- ------------- ------------- -------------- +F801 Self Mild Mild Legal Limited 3 Not Needed Variable +F533 Court Mild Mild Summer Resolved Case Management Comprehensive 7 Needs Update Not Needed Improved +F392 Physician Summer Adequate 2 Needs Update Not Needed Declined +... + + +CREATE TABLE "assessmentsocialanddiagnosis" ( +asd_key text NOT NULL, +rec_status text NULL, +prim_dx text NULL, +sec_dx text NULL, +dx_dur_m bigint NULL, +prev_hosp bigint NULL, +last_hosp_dt text NULL, +qol_scr bigint NULL, +func_imp text NULL, +funcassess jsonb NULL, + PRIMARY KEY (asd_key), + FOREIGN KEY (asd_key) REFERENCES assessmentbasics(ab_key) +); + +First 3 rows: +asd_key rec_status prim_dx sec_dx dx_dur_m prev_hosp last_hosp_dt qol_scr func_imp funcassess +--------- ------------ --------- -------------------- ---------- ----------- -------------- --------- ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +A565291 Advanced Bipolar Personality Disorder 176 4 2023/11/23 63 {'Fam_Inv': 'High', 'Res_Scr': 1, 'Soc_Sup': 'Strong', 'ADL_Func': 'Independent', 'In_Level': 'Fair', 'Rel_Qual': 'Poor', 'Soc_Func': 'Good', 'Strs_Lvl': 8, 'Cop_Skill': 'Good', 'Work_Func': 'Fair', 'Motiv_Level': None} +A617069 Relapse Anxiety Personality Disorder 190 4 2023/06/19 33 {'Fam_Inv': 'High', 'Res_Scr': 6, 'Soc_Sup': 'Strong', 'ADL_Func': 'Independent', 'In_Level': 'Good', 'Rel_Qual': 'Good', 'Soc_Func': 'Good', 'Strs_Lvl': 7, 'Cop_Skill': 'Good', 'Work_Func': 'Disabled', 'Motiv_Level': None} +A121778 Early Anxiety Substance Use 52 0 2024/08/27 29 SEVERE {'Fam_Inv': 'Low', 'Res_Scr': 10, 'Soc_Sup': 'Moderate', 'ADL_Func': 'Independent', 'In_Level': 'Fair', 'Rel_Qual': 'Poor', 'Soc_Func': 'Fair', 'Strs_Lvl': 5, 'Cop_Skill': 'Fair', 'Work_Func': 'Disabled', 'Motiv_Level': 'Low'} +... 
+ + +CREATE TABLE "treatmentbasics" ( +tx_key bigint NOT NULL DEFAULT nextval('treatmentbasics_tx_key_seq'::regclass), +enc_ref text NOT NULL, +cur_med text NULL, +med_adh text NULL, +med_side text NULL, +med_chg text NULL, +th_type text NULL, +th_freq text NULL, +th_dur_m bigint NULL, +th_eng text NULL, +th_chg text NULL, +crisis_int real NULL, + PRIMARY KEY (tx_key), + FOREIGN KEY (enc_ref) REFERENCES encounters(enc_key) +); + +First 3 rows: + tx_key enc_ref cur_med med_adh med_side med_chg th_type th_freq th_dur_m th_eng th_chg crisis_int +-------- --------- --------------- ------------- ---------- ------------ ------------- --------- ---------- -------- ---------------- ------------ + 1 MH353857 Mood Stabilizer High Moderate Augmentation Group 22 High Therapist Change 2 + 2 MH353857 Non-compliant Severe Weekly 26 High Modality Change 1 + 3 MH512598 Medium Moderate Switch Psychodynamic Monthly 1 Medium 4 +... + + +CREATE TABLE "clinicians" ( +clin_key text NOT NULL, +clin_conf text NULL, +assess_lim text NULL, +docu_stat text NULL, +bill_code text NULL, +nxt_rev_dt date NULL, +care_coord text NULL, +ref_need text NULL, +f_up_type text NULL, +f_up_freq text NULL, +fac_connect text NULL, + PRIMARY KEY (clin_key), + FOREIGN KEY (fac_connect) REFERENCES facilities(fac_key) +); + +First 3 rows: +clin_key clin_conf assess_lim docu_stat bill_code nxt_rev_dt care_coord ref_need f_up_type f_up_freq fac_connect +---------- ----------- ------------ ----------- ----------- ------------ ------------ ---------- ----------- ----------- ------------- +C8738 Medium Cognitive Complete CPT90511 2025-05-09 Intensive Services Therapy Weekly F801 +C6837 Low Cognitive Incomplete CPT90696 2025-08-14 Testing Therapy Quarterly F533 +C6539 Low Engagement Incomplete CPT90854 2025-05-05 Regular Routine Biweekly F392 +... + + +CREATE TABLE "patients" ( +pat_key text NOT NULL, +pat_age bigint NULL, +pat_gender text NULL, +pat_eth text NULL, +edu_level text NULL, +emp_stat text NULL, +mari_stat text NULL, +living_arr text NULL, +insur_type text NULL, +insur_stat text NULL, +disab_stat text NULL, +house_stable text NULL, +cult_factor text NULL, +stigma_imp text NULL, +fin_stress text NULL, +clin_lead_ref text NULL, + PRIMARY KEY (pat_key), + FOREIGN KEY (clin_lead_ref) REFERENCES clinicians(clin_key) +); + +First 3 rows: +pat_key pat_age pat_gender pat_eth edu_level emp_stat mari_stat living_arr insur_type insur_stat disab_stat house_stable cult_factor stigma_imp fin_stress clin_lead_ref +--------- --------- ------------ --------- ----------- ---------- ----------- ------------ ------------ ------------ ------------ -------------- ------------- ------------ ------------ --------------- +P425079 23 Other Other High School Retired Widowed Alone Medicaid Pending PENDING Homeless Language moderate C8738 +P883117 42 F Other High School Retired Married Partner Medicare Approved pErManent Stable beliefs Mild Severe C6837 +P871358 32 M Hispanic High School Employed Single Family Medicaid Approved temporary Stable LANGUAGE Severe C6539 +... 
+ + +CREATE TABLE "treatmentoutcomes" ( +tx_out_key bigint NOT NULL DEFAULT nextval('treatmentoutcomes_tx_out_key_seq'::regclass), +tx_ref bigint NOT NULL, +tx_goal_stat text NULL, +rec_goal_stat text NULL, +txprogmet jsonb NULL, + PRIMARY KEY (tx_out_key), + FOREIGN KEY (tx_ref) REFERENCES treatmentbasics(tx_key) +); + +First 3 rows: + tx_out_key tx_ref tx_goal_stat rec_goal_stat txprogmet +------------ -------- -------------- --------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 10 12 Achieved Achieved {'Tx_Adh': 'High', 'Tx_Eng': 'Low', 'Tx_Sat': 'dissatisfied', 'Sat_Scr': 8, 'Th_Prog': 'Poor', 'Tx_Resp': 'Partial', 'Symp_Imp': 'MODeRaTe', 'Func_Impv': None, 'Side_Burd': 'Moderate', 'Ther_Alliance': 'WEAK', 'Work_Stat_Chg': 'Terminated'} + 294 418 Modified Achieved {'Tx_Adh': 'High', 'Tx_Eng': 'Medium', 'Tx_Sat': 'LOW', 'Sat_Scr': 6, 'Th_Prog': None, 'Tx_Resp': None, 'Symp_Imp': 'MinImal', 'Func_Impv': None, 'Side_Burd': None, 'Ther_Alliance': 'Weak', 'Work_Stat_Chg': 'teRmInAteD'} + 11 13 In Progress In Progress {'Tx_Adh': 'Low', 'Tx_Eng': 'High', 'Tx_Sat': 'low', 'Sat_Scr': 9, 'Th_Prog': 'Good', 'Tx_Resp': 'Partial', 'Symp_Imp': 'MINimAl', 'Func_Impv': None, 'Side_Burd': 'Moderate', 'Ther_Alliance': 'Moderate', 'Work_Stat_Chg': 'leave'} +... + + +CREATE TABLE "assessmentbasics" ( +ab_key text NOT NULL, +a_type text NULL, +a_method text NULL, +a_dur_min bigint NULL, +a_lang text NULL, +a_valid text NULL, +resp_consist text NULL, +sympt_valid text NULL, +pat_owner_ref text NULL, +depr_improve_rate text NULL, +therapy_exp_intensity text NULL, +med_side_eff_density text NULL, +crisis_event_rate text NULL, +func_recovery_eff text NULL, +adherence_index text NULL, +sympt_fluct_index text NULL, +hosp_risk_density text NULL, +med_change_freq text NULL, +support_util_rate text NULL, +treatment_cost_eff text NULL, +recovery_goal_vel text NULL, + PRIMARY KEY (ab_key), + FOREIGN KEY (pat_owner_ref) REFERENCES patients(pat_key) +); + +First 3 rows: +ab_key a_type a_method a_dur_min a_lang a_valid resp_consist sympt_valid pat_owner_ref depr_improve_rate therapy_exp_intensity med_side_eff_density crisis_event_rate func_recovery_eff adherence_index sympt_fluct_index hosp_risk_density med_change_freq support_util_rate treatment_cost_eff recovery_goal_vel +-------- --------- ----------- ----------- -------- ------------ -------------- ------------- --------------- ------------------- ------------------------------ ------------------------------------------ -------------------------------- ---------------------------------- ----------------------------------------------- ------------------------------------------------ ---------------------------------- --------------------------------- ------------------------- -------------------------------- ------------------- +A567210 Initial Phone 93 Chinese Questionable Medium Questionable P425079 1.6 points/month 29.866666666666667 hours/month 0.0 side-effects/med/month 0.05555555555555555 events/month 6.0 QoL-points/month 0.025 adherence-score/appointment 3.1299534806600566 fluctuation-points/assessment 1.3333333333333333 risk-score/year 0.1111111111111111 changes/month 2.0 support-level/contact 0.11976047904191617 QoL-points/$ 0.3 goals/month +A981114 Emergency Self-report 112 French Invalid High Valid P883117 1.0 points/month 4.8 hours/month 0.0 
side-effects/med/month 0.3333333333333333 events/month 7.833333333333333 QoL-points/month 0.3333333333333333 adherence-score/appointment 3.722388729118226 fluctuation-points/assessment 1.3333333333333333 risk-score/year 0.05555555555555555 changes/month 1.5 support-level/contact 0.0509761388286334 QoL-points/$ 1.0 goals/month +A734744 Routine Phone 81 French Questionable Medium Invalid P871358 2.0 points/month 17.066666666666666 hours/month 0.03409090909090909 side-effects/med/month 0.12903225806451613 events/month 6.857142857142857 QoL-points/month 0.14285714285714285 adherence-score/appointment 3.2176548427295777 fluctuation-points/assessment 0.3870967741935484 risk-score/year 0.09090909090909091 changes/month 3.0 support-level/contact 0.14953271028037382 QoL-points/$ 0.0 goals/month +... + + +CREATE TABLE "encounters" ( +enc_key text NOT NULL, +time_mark timestamp without time zone NOT NULL, +ab_ref text NULL, +pat_ref text NULL, +clin_id text NULL, +fac_id text NULL, +miss_appt real NULL, +tx_barrier text NULL, +nx_appt_dt text NULL, +dq_score bigint NULL, +assess_complete text NULL, + PRIMARY KEY (enc_key), + FOREIGN KEY (ab_ref) REFERENCES assessmentbasics(ab_key), + FOREIGN KEY (pat_ref) REFERENCES patients(pat_key) +); + +First 3 rows: +enc_key time_mark ab_ref pat_ref clin_id fac_id miss_appt tx_barrier nx_appt_dt dq_score assess_complete +--------- -------------------------- -------- --------- --------- -------- ----------- ------------ ------------ ---------- ----------------- +MH353857 2025-02-19 08:30:58.912609 A981114 P883117 C6837 F533 1 18/04/2025 0 0.852 +MH512598 2025-02-19 08:30:58.913983 A599516 P292211 C8094 F770 3 FiNAncIal 15/04/2025 1 0.422 +MH463949 2025-02-19 08:30:58.913983 A515871 P136511 C8691 F402 0 08/03/2025 1 0.224 +... + + +CREATE TABLE "assessmentsymptomsandrisk" ( +asr_key text NOT NULL, +phq9_scr bigint NULL, +phq9_sev text NULL, +gad7_scr bigint NULL, +gad7_sev text NULL, +suic_ideation text NULL, +suic_risk text NULL, +self_harm text NULL, +viol_risk text NULL, +sub_use text NULL, +sub_use_freq text NULL, +sub_use_sev text NULL, +sympscores jsonb NULL, + PRIMARY KEY (asr_key), + FOREIGN KEY (asr_key) REFERENCES assessmentbasics(ab_key) +); + +First 3 rows: +asr_key phq9_scr phq9_sev gad7_scr gad7_sev suic_ideation suic_risk self_harm viol_risk sub_use sub_use_freq sub_use_sev sympscores +--------- ---------- ---------- ---------- ---------- --------------- ----------- ----------- ----------- --------- -------------- ------------- -------------------------------------------------------------------------------------------------------------------- +A921610 13 Mild 5 Mild Passive Severe Past Low Opioids Daily Mild {'En_Scr': 7, 'Anx_Scr': 1, 'App_Scr': 6, 'Con_Scr': 6, 'Int_Scr': 1, 'Slp_Scr': 6, 'Hope_Scr': 9, 'Mood_Scr': 3} +A515871 24 14 Mild High High Multiple Never Severe {'En_Scr': 2, 'Anx_Scr': 3, 'App_Scr': 2, 'Con_Scr': 2, 'Int_Scr': 10, 'Slp_Scr': 5, 'Hope_Scr': 7, 'Mood_Scr': 6} +A797966 4 1 Severe Past Low Opioids Occasional {'En_Scr': 10, 'Anx_Scr': 8, 'App_Scr': 9, 'Con_Scr': 1, 'Int_Scr': 5, 'Slp_Scr': 10, 'Hope_Scr': 4, 'Mood_Scr': 10} +... diff --git a/museum_artifact/museum_artifact_column_meaning_base.json b/museum_artifact/museum_artifact_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..1f5c0406c2962fb29de53df88b2c6ae4125c34e0 --- /dev/null +++ b/museum_artifact/museum_artifact_column_meaning_base.json @@ -0,0 +1,232 @@ +{ + "museum_artifact|ArtifactsCore|ARTregID": "TEXT. 
Unique identifier for each artifact. PK. Example: ART54317.", + "museum_artifact|ArtifactsCore|art_title": "TEXT. Title of the artifact. **NULL means no title provided.**. Example: Culture Painting.", + "museum_artifact|ArtifactsCore|DYNASTY": "TEXT. Dynasty or period the artifact belongs to. Possible values: Han, Ming, Qing, Song, Tang, Yuan.", + "museum_artifact|ArtifactsCore|ageYears": "BIGINT. Age of the artifact in years. Example: 943.", + "museum_artifact|ArtifactsCore|MatKind": "TEXT. Material type of the artifact. Possible values: Bronze, Ceramic, Jade, Paper, Stone, Textile, Wood.", + "museum_artifact|ArtifactsCore|conserve_status": "TEXT. Current conservation status of the artifact. **NULL means no conservation status provided.**. Possible values: Critical, Excellent, Fair, Good, Poor.", + "museum_artifact|ArtifactRatings|HIST_sign": "BIGINT. Unique historical identifier for the artifact rating. PK. Possible values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.", + "museum_artifact|ArtifactRatings|ART_link": "TEXT. Reference to the artifact in ArtifactsCore. FK to ArtifactsCore.ARTregID.", + "museum_artifact|SensitivityData|ENVsense": "TEXT. Environmental sensitivity identifier for the artifact. PK. Possible values: High, Low, Medium.", + "museum_artifact|SensitivityData|ART_link": "TEXT. Reference to the artifact in ArtifactsCore. FK to ArtifactsCore.ARTregID.", + "museum_artifact|ExhibitionHalls|Hall_ID": "TEXT. Unique identifier for each exhibition hall. PK. Example: Hall-3.", + "museum_artifact|Showcases|caseID": "TEXT. Unique identifier for each showcase. PK.", + "museum_artifact|Showcases|hall_ref": "TEXT. Reference to the exhibition hall where the showcase is located. FK to ExhibitionHalls.Hall_ID.", + "museum_artifact|EnvironmentalReadingsCore|monitor_code": "TEXT. Unique identifier for each environmental reading. PK.", + "museum_artifact|EnvironmentalReadingsCore|readTS": "TIMESTAMP. Timestamp of when the environmental reading was taken. Not NULL. Example: 2024-08-06T08:38:48.", + "museum_artifact|EnvironmentalReadingsCore|case_link": "TEXT. Reference to the showcase. FK to Showcases.caseID.", + "museum_artifact|EnvironmentalReadingsCore|TEMPc": "BIGINT. Temperature reading in Celsius. **NULL means no temperature data available.**. Example: 21.73.", + "museum_artifact|EnvironmentalReadingsCore|tempVar24": "REAL. Temperature variation over 24 hours. **NULL means no temperature variation recorded.**. Example: 0.85.", + "museum_artifact|EnvironmentalReadingsCore|RH": "BIGINT. Relative humidity reading. Example: 53.28.", + "museum_artifact|EnvironmentalReadingsCore|RHvar": "BIGINT. Relative humidity variation. **NULL means no humidity variation recorded.**. Example: 2.54.", + "museum_artifact|EnvironmentalReadingsCore|air_press": "REAL. Air pressure reading. **NULL means no air pressure data available.**. Example: 1020.0.", + "museum_artifact|AirQualityReadings|aq_id": "BIGSERIAL. Unique identifier for each air quality reading. PK.", + "museum_artifact|AirQualityReadings|env_link": "TEXT. Reference to the environmental reading. FK to EnvironmentalReadingsCore.monitor_code.", + "museum_artifact|SurfaceAndPhysicalReadings|surf_id": "BIGSERIAL. Unique identifier for each surface and physical reading. PK.", + "museum_artifact|SurfaceAndPhysicalReadings|env_link": "TEXT. Reference to the environmental reading. FK to EnvironmentalReadingsCore.monitor_code.", + "museum_artifact|SurfaceAndPhysicalReadings|vibra_mms2": "REAL. Vibration measurements in millimeters squared. 
Example: 0.461.", + "museum_artifact|SurfaceAndPhysicalReadings|noise_dB": "BIGINT. Noise level measurement in decibels. Example: 47.", + "museum_artifact|SurfaceAndPhysicalReadings|dust_Mg_m2": "REAL. Dust level in milligrams per square meter. Example: 1.74.", + "museum_artifact|SurfaceAndPhysicalReadings|microbe_CFU": "BIGINT. Microbial colony-forming units (CFU) detected. Example: 234.0.", + "museum_artifact|SurfaceAndPhysicalReadings|moldIdx": "REAL. Mold index indicating the level of mold presence. Example: 0.1.", + "museum_artifact|SurfaceAndPhysicalReadings|pestActivity": "TEXT. Recorded pest activity level. **NULL means no pest activity reported.**. Possible values: High, Low, Medium.", + "museum_artifact|SurfaceAndPhysicalReadings|pestTrap": "BIGINT. Number of pest traps set. Example: 10.", + "museum_artifact|SurfaceAndPhysicalReadings|pestSpecies": "TEXT. Species of pests detected. **NULL means no species identified.**. Possible values: Beetles, Booklice, Moths, Silverfish.", + "museum_artifact|SurfaceAndPhysicalReadings|surface_pH": "REAL. pH level of the surface. Example: 6.7.", + "museum_artifact|SurfaceAndPhysicalReadings|moist_pct": "REAL. Moisture percentage on the surface. Example: 10.3.", + "museum_artifact|SurfaceAndPhysicalReadings|saltRisk": "TEXT. Risk level due to salt exposure. Possible values: High, Low, Medium.", + "museum_artifact|SurfaceAndPhysicalReadings|metalCorr": "REAL. Metal corrosion rate on the surface. Example: 0.04.", + "museum_artifact|SurfaceAndPhysicalReadings|organicDeg": "REAL. Organic degradation rate. Example: 0.47.", + "museum_artifact|SurfaceAndPhysicalReadings|deltaE": "REAL. Color difference in surface due to environmental factors. Example: 1.99.", + "museum_artifact|SurfaceAndPhysicalReadings|surfTemp": "REAL. Temperature of the surface. Example: 19.11.", + "museum_artifact|SurfaceAndPhysicalReadings|surfRH": "REAL. Relative humidity on the surface. Example: 45.46.", + "museum_artifact|SurfaceAndPhysicalReadings|condRisk": "TEXT. Risk of damage due to environmental conditions. Possible values: High, Low, Medium.", + "museum_artifact|SurfaceAndPhysicalReadings|thermalImg": "TEXT. Thermal image data of the surface. **NULL means no thermal image provided.**. Possible values: Attention Required, Critical, Normal.", + "museum_artifact|SurfaceAndPhysicalReadings|structStable": "TEXT. Structural stability condition. **NULL means no stability data provided.**. Possible values: Major Issues, Minor Issues, Stable.", + "museum_artifact|SurfaceAndPhysicalReadings|crackNote": "TEXT. Notes about any cracks detected on the surface. Possible values: Minor Changes, No Changes, Significant Changes.", + "museum_artifact|SurfaceAndPhysicalReadings|deform_mm": "REAL. Deformation in millimeters. Example: 0.08.", + "museum_artifact|SurfaceAndPhysicalReadings|wtPct": "REAL. Weight percentage of certain materials on the surface. Example: -0.001.", + "museum_artifact|SurfaceAndPhysicalReadings|surfDust": "BIGINT. Amount of dust detected on the surface. Example: 5.7.", + "museum_artifact|SurfaceAndPhysicalReadings|O2_pct": "REAL. Oxygen percentage in the surface environment. Example: 20.8.", + "museum_artifact|SurfaceAndPhysicalReadings|N2_pct": "REAL. Nitrogen percentage in the surface environment. Example: 78.75.", + "museum_artifact|LightAndRadiationReadings|rad_id": "BIGSERIAL. Unique identifier for each light and radiation reading. PK.", + "museum_artifact|LightAndRadiationReadings|env_link": "TEXT. Reference to the environmental reading. 
FK to EnvironmentalReadingsCore.monitor_code.", + "museum_artifact|LightAndRadiationReadings|lux": "BIGINT. Light intensity in lux. Example: 138.0.", + "museum_artifact|LightAndRadiationReadings|UV_uW": "REAL. Ultraviolet (UV) radiation in microwatts. Example: 32.58.", + "museum_artifact|LightAndRadiationReadings|IR_W": "REAL. Infrared radiation in watts. Example: 7.51.", + "museum_artifact|LightAndRadiationReadings|visLxh": "BIGINT. Visible light intensity in lux-hours. Example: 71166.", + "museum_artifact|ConditionAssessments|cond_id": "BIGSERIAL. Unique identifier for each condition assessment. PK.", + "museum_artifact|ConditionAssessments|art_exam": "TEXT. Reference to the artifact being examined. FK to ArtifactsCore.ARTregID.", + "museum_artifact|ConditionAssessments|case_exam": "TEXT. Reference to the showcase being examined. FK to Showcases.caseID.", + "museum_artifact|ConditionAssessments|light_link": "BIGINT. Reference to light and radiation readings. FK to LightAndRadiationReadings.rad_id.", + "museum_artifact|ConditionAssessments|score": "BIGINT. Score for the condition assessment. Example: 93.", + "museum_artifact|ConditionAssessments|assess_date": "DATE. Date when the assessment was done. Example: Sep-15-2024.", + "museum_artifact|ConditionAssessments|next_due": "DATE. Date when the next assessment is due. Example: 2025/4/17.", + "museum_artifact|RiskAssessments|risk_id": "TEXT. Unique identifier for each risk assessment. PK.", + "museum_artifact|RiskAssessments|art_concern": "TEXT. Reference to the artifact being assessed for risk. FK to ArtifactsCore.ARTregID.", + "museum_artifact|RiskAssessments|hall_concern": "TEXT. Reference to the exhibition hall being assessed for risk. FK to ExhibitionHalls.Hall_ID.", + "museum_artifact|RiskAssessments|risk_level": "TEXT. Level of risk associated with the artifact or hall. Possible values: High, Low, Medium.", + "museum_artifact|RiskAssessments|emerg_plan": "TEXT. Emergency plan related to the risk. Possible values: Review Required, Under Revision, Updated.", + "museum_artifact|RiskAssessments|evacPrio": "TEXT. Evacuation priority level for the artifact or hall. Possible values: Priority 1, Priority 2, Priority 3.", + "museum_artifact|RiskAssessments|handle_rules": "TEXT. Handling rules for the artifact or hall. Possible values: Minimal, Strict.", + "museum_artifact|RiskAssessments|conserve_score": "BIGINT. Conservation score for the artifact or hall. Example: 85.0.", + "museum_artifact|ConservationAndMaintenance|maint_id": "BIGSERIAL. Unique identifier for each conservation and maintenance record. PK.", + "museum_artifact|ConservationAndMaintenance|monitor_link": "TEXT. Reference to the environmental reading linked to maintenance. FK to EnvironmentalReadingsCore.monitor_code.", + "museum_artifact|ConservationAndMaintenance|surf_link": "BIGINT. Reference to the surface and physical readings linked to maintenance. FK to SurfaceAndPhysicalReadings.surf_id.", + "museum_artifact|ConservationAndMaintenance|treat_stat": "TEXT. Treatment status of the artifact or worksite. Possible values: In Progress, Not Required, Scheduled.", + "museum_artifact|ConservationAndMaintenance|prio_tag": "TEXT. Priority tag for the conservation or maintenance action. Possible values: High, Low, Medium, Urgent.", + "museum_artifact|ConservationAndMaintenance|lastClean": "DATE. Date when the artifact or worksite was last cleaned. Example: 16-Dec-24.", + "museum_artifact|ConservationAndMaintenance|nextClean": "DATE. Date when the next cleaning is due. 
Example: 2025/5/10.", + "museum_artifact|ConservationAndMaintenance|cleanDays": "BIGINT. Number of days since the last cleaning. Example: 83.0.", + "museum_artifact|ConservationAndMaintenance|maintLog": "TEXT. Maintenance log details. Possible values: Pending, Review, Updated.", + "museum_artifact|ConservationAndMaintenance|incident_stat": "TEXT. Status of any incidents related to the artifact or worksite. Possible values: Closed, Open.", + "museum_artifact|ConservationAndMaintenance|drill_stat": "TEXT. Status of any drills conducted for emergency preparedness. Possible values: Current, Due, Overdue.", + "museum_artifact|ConservationAndMaintenance|train_stat": "TEXT. Training status for maintenance personnel. Possible values: Current, Due, Overdue.", + "museum_artifact|ConservationAndMaintenance|budget_alloc": "TEXT. Budget allocated for conservation and maintenance. Possible values: Adequate, Insufficient, Review Required.", + "museum_artifact|ConservationAndMaintenance|budget_stat": "TEXT. Status of the maintenance budget. Possible values: Available, Depleted, Limited.", + "museum_artifact|ConservationAndMaintenance|conserveFreq": "TEXT. Frequency of conservation activities. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|ConservationAndMaintenance|history": "TEXT. Historical data related to conservation and maintenance. Possible values: Extensive, Minimal, Moderate.", + "museum_artifact|ConservationAndMaintenance|prevTreat": "BIGINT. Reference to previous treatment or maintenance action. Example: 4.", + "museum_artifact|ConservationAndMaintenance|treatEffect": "TEXT. Effectiveness of the treatment applied. Possible values: High, Low, Medium.", + "museum_artifact|ConservationAndMaintenance|reversePot": "TEXT. Potential for reversing damage or deterioration. Possible values: High, Low, Medium.", + "museum_artifact|UsageRecords|usage_id": "BIGSERIAL. Unique identifier for each usage record. PK.", + "museum_artifact|UsageRecords|env_link": "TEXT. Reference to the environmental reading linked to usage. FK to EnvironmentalReadingsCore.monitor_code.", + "museum_artifact|UsageRecords|rotate_sched": "TEXT. Schedule for rotating artifacts or exhibits. Possible values: Active, Permanent, Resting.", + "museum_artifact|UsageRecords|displayMonths": "BIGINT. Number of months the artifact or exhibit is displayed. Example: 1.", + "museum_artifact|UsageRecords|restMonths": "BIGINT. Number of months the artifact or exhibit is in rest or storage. Example: 22.0.", + "museum_artifact|UsageRecords|dispReqs": "TEXT. Requirements for displaying the artifact or exhibit. Possible values: Custom, Special, Standard.", + "museum_artifact|UsageRecords|storeReqs": "TEXT. Requirements for storing the artifact or exhibit. Possible values: Custom, Special, Standard.", + "museum_artifact|UsageRecords|handleReqs": "TEXT. Requirements for handling the artifact or exhibit. Possible values: Custom, Special, Standard.", + "museum_artifact|UsageRecords|transportReqs": "TEXT. Transportation requirements for the artifact or exhibit. Possible values: Custom, Special, Standard.", + "museum_artifact|UsageRecords|packReqs": "TEXT. Packaging requirements for the artifact or exhibit. Possible values: Custom, Special, Standard.", + "museum_artifact|UsageRecords|resAccess": "TEXT. Restrictions on access to the artifact or exhibit. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|publicDisp": "TEXT. Public display status of the artifact or exhibit. 
Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|loanFreq": "TEXT. Frequency of loans for the artifact or exhibit. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|handleFreq": "TEXT. Frequency of handling the artifact or exhibit. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|docuFreq": "TEXT. Frequency of documentation updates for the artifact or exhibit. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|monitorFreq": "TEXT. Frequency of monitoring for the artifact or exhibit. Possible values: Daily, Monthly, Weekly.", + "museum_artifact|UsageRecords|assessFreq": "TEXT. Frequency of condition assessments for the artifact or exhibit. Possible values: Annually, Monthly, Quarterly.", + "museum_artifact|UsageRecords|conserveFreq": "TEXT. Frequency of conservation treatments for the artifact or exhibit. Possible values: Frequent, Occasional, Rare.", + "museum_artifact|UsageRecords|maintFreq": "TEXT. Frequency of maintenance activities for the artifact or exhibit. Possible values: Monthly, Quarterly, Weekly.", + "museum_artifact|UsageRecords|inspectFreq": "TEXT. Frequency of inspections for the artifact or exhibit. Possible values: Daily, Monthly, Weekly.", + "museum_artifact|UsageRecords|calibFreq": "TEXT. Frequency of calibration for monitoring equipment. Possible values: Annually, Monthly, Quarterly.", + "museum_artifact|UsageRecords|certStatus": "TEXT. Certification status for the artifact or exhibit. Possible values: Current, Expired, Pending.", + "museum_artifact|UsageRecords|complianceStatus": "TEXT. Compliance status for regulations or standards. Possible values: Compliant, Non-compliant, Partial.", + "museum_artifact|UsageRecords|auditStatus": "TEXT. Audit status for the artifact or exhibit. Possible values: Failed, Passed, Pending.", + "museum_artifact|UsageRecords|qualityStatus": "TEXT. Quality status for the artifact or exhibit. Possible values: Failed, Passed, Review.", + "museum_artifact|ArtifactSecurityAccess|loan_stat": "TEXT. Loan status of the artifact. PK. Possible values: Available, Not Available, On Loan.", + "museum_artifact|ArtifactSecurityAccess|insUSD": "REAL. Insurance value in USD for the artifact. Example: 968368.", + "museum_artifact|ArtifactSecurityAccess|SEC_LEVEL": "TEXT. Security level for the artifact. Possible values: Level 1, Level 2, Level 3.", + "museum_artifact|ArtifactSecurityAccess|access_restrict": "TEXT. Restrictions on access to the artifact. Possible values: Limited, Public, Restricted.", + "museum_artifact|ArtifactSecurityAccess|docu_stat": "TEXT. Documentation status for the artifact. Possible values: Complete, Partial, Updating.", + "museum_artifact|ArtifactSecurityAccess|photo_docu": "TEXT. Photo documentation status for the artifact. Possible values: Outdated, Recent, Required.", + "museum_artifact|ArtifactSecurityAccess|cond_report": "TEXT. Condition report status for the artifact. Possible values: Current, Due, Overdue.", + "museum_artifact|ArtifactSecurityAccess|conserve_rec": "TEXT. Conservation record for the artifact. Possible values: Pending, Review Required, Updated.", + "museum_artifact|ArtifactSecurityAccess|research_access": "TEXT. Research access status for the artifact. Possible values: Available, Limited, Restricted.", + "museum_artifact|ArtifactSecurityAccess|digital_rec": "TEXT. Digital record status for the artifact. 
Possible values: Complete, In Progress, Partial.", + "museum_artifact|Monitor_Showcase_Map|mon_ID": "TEXT. Monitor identifier. PK. Example: MM191823.", + "museum_artifact|Monitor_Showcase_Map|case_ID": "TEXT. Showcase case identifier. PK. FK to Showcases.caseID. Example: SC9857.", + "museum_artifact|ArtifactRatings|rating_profile": { + "column_meaning": "JSONB column. Collects every curatorial and conservation-planning rating for an artifact into one JSONB payload so that significance, display priority, and treatment difficulty can be queried together.", + "fields_meaning": { + "research_score": "BIGINT. Research score of the artifact. Possible values: 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0.", + "exhibit_value": "BIGINT. Exhibition value score of the artifact. Possible values: 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0.", + "cultural_score": "BIGINT. Cultural significance score of the artifact. Example: 25.", + "public_access_rating": "BIGINT. Public access rating for the artifact. Possible values: 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0.", + "educational_value": "BIGINT. Educational value of the artifact. Possible values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.", + "conservation_difficulty": "TEXT. Description of the conservation differences. Possible values: High, Low, Medium.", + "treatment_complexity": "TEXT. Complexity of conservation treatment. Possible values: Complex, Moderate, Simple.", + "material_stability": "TEXT. Material stability rating of the artifact. Possible values: Moderate, Stable, Unstable.", + "deterioration_rate": "TEXT. Rate of deterioration of the artifact. Possible values: Moderate, Rapid, Slow." + } + }, + "museum_artifact|SensitivityData|env_handling_sensitivity": { + "column_meaning": "JSONB column. Bundles the full multi-factor sensitivity profile (environmental, pest, handling, transport, display, storage) needed for preventive-conservation decision-making.", + "fields_meaning": { + "environment": { + "light": "TEXT. Light sensitivity data for the artifact. **NULL means no light sensitivity data provided.**. Possible values: High, Low, Medium.", + "temperature": "TEXT. Temperature sensitivity data for the artifact. Possible values: High, Low, Medium.", + "humidity": "TEXT. Humidity sensitivity data for the artifact. Possible values: High, Low, Medium.", + "vibration": "TEXT. Vibration sensitivity data for the artifact. Possible values: High, Low, Medium.", + "pollutants": "TEXT. Pollution sensitivity data for the artifact. Possible values: High, Low, Medium." + }, + "biological": { + "pest": "TEXT. Pest sensitivity data for the artifact. Possible values: High, Low, Medium." + }, + "handling_transport": { + "handling": "TEXT. Handling sensitivity data for the artifact. Possible values: High, Low, Medium.", + "transport": "TEXT. Sensitivity to transportation for the artifact. Possible values: High, Low, Medium." + }, + "context": { + "display": "TEXT. Display sensitivity data for the artifact. Possible values: High, Low, Medium.", + "storage": "TEXT. Storage sensitivity data for the artifact. Possible values: High, Low, Medium." + } + } + }, + "museum_artifact|ExhibitionHalls|security_visitor_overview": { + "column_meaning": "JSONB column. Combines hall-level security system states with visitor statistics to support risk analysis and staffing optimisation.", + "fields_meaning": { + "security": { + "cctv_coverage": "TEXT. CCTV coverage status in the exhibition hall. Possible values: Full, Limited, Partial.", + "motion_detection": "TEXT. 
Motion detection status in the hall. Possible values: Active, Maintenance, Partial.", + "alarm_status": "TEXT. Alarm system status in the exhibition hall. Possible values: Armed, Maintenance, Partial.", + "access_control": "TEXT. Access control status for the exhibition hall. Possible values: Active, Maintenance, Partial." + }, + "visitor_statistics": { + "avg_daily_visitors": "BIGINT. Daily visitor count for the exhibition hall. Example: 308.", + "visitor_flow": "TEXT. Visitor flow data in the hall. Possible values: High, Low, Medium.", + "avg_dwell_minutes": "BIGINT. Dwell time of visitors in minutes. Example: 16." + }, + "behaviour_notes": "TEXT. Notes on visitor behavior in the hall. Possible values: Fair, Good, Poor." + } + }, + "museum_artifact|Showcases|case_environment_profile": { + "column_meaning": "JSONB column. Packs the showcase’s physical condition, filtration/adsorption capacity, leak performance and safety-power states into a single field for monitoring dashboards.", + "fields_meaning": { + "physical_state": { + "airtightness_factor": "REAL. Airtightness level of the showcase. Example: 95.1.", + "construction_material": "TEXT. Material of the showcase. **NULL means no material specified.**. Possible values: Acrylic, Glass, Tempered Glass.", + "seal_state": "TEXT. Seal state of the showcase. Possible values: Excellent, Fair, Good, Poor.", + "leak_rate_per_day": "REAL. Leak rate of the showcase. Example: 0.41.", + "internal_pressure_pa": "BIGINT. Pressure level inside the showcase. Example: -2.6." + }, + "maintenance": { + "maint_status": "TEXT. Maintenance status of the showcase. Possible values: Due, Good, Overdue.", + "filter_status": "TEXT. Filter status in the showcase. Possible values: Clean, Replace Now, Replace Soon.", + "silica_status": "TEXT. Silica gel status in the showcase. Possible values: Active, Replace Now, Replace Soon.", + "silica_last_replaced": "DATE. Date when silica gel was last replaced. Example: 09/15/2024." + }, + "buffer_capacity": { + "humidity_capacity_g": "BIGINT. Humidity capacity of the showcase. Example: 81.0.", + "pollutant_capacity_mg": "REAL. Pollutant capacity of the showcase. Example: 79.8." + }, + "safety_and_power": { + "inert_gas_state": "TEXT. Inert gas status inside the showcase. Possible values: Active, Maintenance, Standby.", + "fire_system_state": "TEXT. Fire system status in the showcase. Possible values: Active, Maintenance, Standby.", + "primary_power_state": "TEXT. Power status for the showcase. Possible values: Active, Standby, Testing.", + "backup_power_state": "TEXT. Backup system status for the showcase. Possible values: Maintenance, Ready, Testing." + } + } + }, + "museum_artifact|AirQualityReadings|air_quality_metrics": { + "column_meaning": "JSONB column. Stores all gaseous-pollutant and particulate measurements from a single sensor snapshot so downstream analytics can ingest one JSONB blob instead of many columns.", + "fields_meaning": { + "gases_ppm_ppb": { + "co2_ppm": "TEXT. CO2 concentration in ppm. **NULL means CO2 data not available.**. Example: 794 ppm.", + "tvoc_ppb": "BIGINT. Total Volatile Organic Compounds (TVOC) in ppb. Example: 89.0.", + "ozone_ppb": "BIGINT. Ozone concentration in ppb. Example: 11.0.", + "so2_ppb": "BIGINT. Sulfur dioxide concentration in ppb. Example: 12.0.", + "no2_ppb": "BIGINT. Nitrogen dioxide concentration in ppb. Example: 27.", + "formaldehyde_mg_m3": "REAL. Formaldehyde concentration in µg/m³. Example: 0.014." + }, + "particulates": { + "pm25_ug_m3": "REAL. 
Particulate Matter (PM2.5) concentration in µg/m³. Example: 16.7.", + "pm10_ug_m3": "REAL. Particulate Matter (PM10) concentration in µg/m³. Example: 29.0." + }, + "air_flow": { + "air_exchange_rate_h": "REAL. Air exchange rate. **NULL means air exchange rate not recorded.**. Example: 6.4.", + "air_velocity_ms": "REAL. Air velocity in the environment. **NULL means air velocity not measured.**. Example: 0.18." + } + } + } +} \ No newline at end of file diff --git a/museum_artifact/museum_artifact_kb.jsonl b/museum_artifact/museum_artifact_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..bf391611096639682474955cffe84c6c979e2550 --- /dev/null +++ b/museum_artifact/museum_artifact_kb.jsonl @@ -0,0 +1,62 @@ +{"id": 0, "knowledge": "Conservation Priority Index (CPI)", "description": "Calculates the overall conservation priority for an artifact based on multiple factors.", "definition": "Calculated as: ((Historical Significance Rating + Research Value Rating + Cultural Score) * (10 - Conservation Status Value)) / 30, where the conservation status is numerically mapped: Excellent=1, Good=3, Fair=5, Poor=7, Critical=10.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Sensitivity Weight Values", "description": "Numerical weights for sensitivity calculations.", "definition": "A mapping of qualitative sensitivity ratings to numerical values for calculation purposes. For example: Low=1, Medium=5, High=10 for various environmental factors like light, temperature, and humidity.", "type": "value_illustration", "children_knowledge": -1} +{"id": 2, "knowledge": "Environmental Risk Factor (ERF)", "description": "Quantifies the overall environmental risk to an artifact based on its sensitivities.", "definition": "Calculated as the average of an artifact's various environmental sensitivity scores. These scores are derived by converting qualitative ratings (e.g., 'Low', 'High') to numbers using 'Sensitivity Weight Values'.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 3, "knowledge": "Artifact Vulnerability Score (AVS)", "description": "Comprehensive score indicating how vulnerable an artifact is based on its conservation priority and environmental sensitivities.", "definition": "Calculated by multiplying the 'Conservation Priority Index (CPI)' by the 'Environmental Risk Factor (ERF)'. Higher scores indicate artifacts requiring more urgent attention.", "type": "calculation_knowledge", "children_knowledge": [0, 2]} +{"id": 4, "knowledge": "Display Safety Duration (DSD)", "description": "Calculates the recommended maximum display duration for an artifact based on its sensitivities.", "definition": "Calculated as: (Base Duration * (10 - Light Sensitivity Weight) * (10 - Temperature Sensitivity Weight) * (10 - Humidity Sensitivity Weight)) / 1000. The Base Duration is typically 36 months, and sensitivity weights are based on 'Sensitivity Weight Values'.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 5, "knowledge": "Showcase Environmental Stability Rating (SESR)", "description": "Measures how well a showcase maintains stable environmental conditions.", "definition": "Calculated as: 10 - ((24-hour Temperature Variation + (24-hour Humidity Variation / 5) + Leak Rate) / 3). 
Higher scores indicate more stable showcases.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Artifact Exhibition Compatibility (AEC)", "description": "Determines how compatible an artifact is with its current showcase environment.", "definition": "Calculated as: 10 - the absolute difference between the artifact's 'Environmental Risk Factor (ERF)' and the showcase's 'Showcase Environmental Stability Rating (SESR)'. A score closer to 10 indicates better compatibility.", "type": "calculation_knowledge", "children_knowledge": [2, 5]} +{"id": 7, "knowledge": "Material Deterioration Rate (MDR)", "description": "Estimates the rate of material deterioration based on environmental factors.", "definition": "Calculated as: (Artifact Age in Years * Environmental Risk Factor * (Relative Humidity - 50)^2 * Temperature in Celsius) / 100000. Higher values indicate faster deterioration.", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 8, "knowledge": "Light Exposure Risk (LER)", "description": "Quantifies the risk from light exposure based on artifact sensitivity and current light levels.", "definition": "Calculated as: (Light Level in Lux * Light Sensitivity Weight * Visible Light Exposure in Lux-Hours) / 1000. The light sensitivity weight is based on 'Sensitivity Weight Values'.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 9, "knowledge": "Conservation Budget Efficiency (CBE)", "description": "Measures the efficiency of conservation budget allocation relative to artifact importance.", "definition": "Calculated as the average of the product of each artifact's 'Conservation Priority Index (CPI)' and its allocated budget ratio. The budget ratio is the proportion of the total conservation budget allocated to that artifact.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 10, "knowledge": "Visitor Impact Risk (VIR)", "description": "Assesses the risk posed by visitor traffic to artifacts in exhibition halls.", "definition": "Calculated as: (Daily Visitor Count * Visitor Flow Rate * Average Dwell Time in Minutes) / 1000. 
The visitor flow rate is numerically mapped: Low=1, Medium=3, High=5.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "High-Value Artifact", "description": "Identifies artifacts with exceptional historical, cultural, or monetary value requiring special attention.", "definition": "An artifact is considered high-value if its insurance value exceeds $1,000,000 OR if both its historical significance and cultural scores are in the top 10% of all artifacts.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Conservation Emergency", "description": "Identifies artifacts requiring immediate conservation intervention.", "definition": "A situation where an artifact's conservation status is 'Critical' AND its treatment priority is 'Urgent'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Environmental Instability Event", "description": "Identifies periods when showcase environmental conditions fluctuate beyond acceptable parameters.", "definition": "Occurs when the 24-hour temperature variation exceeds 1°C OR the 24-hour humidity variation exceeds 3% within a 24-hour period.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Accelerated Deterioration Scenario", "description": "Identifies conditions that could lead to rapid artifact deterioration.", "definition": "Occurs when an artifact's 'Material Deterioration Rate (MDR)' score is greater than 5 AND it has at least two environmental sensitivities rated as 'High'.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 15, "knowledge": "Exhibition Rotation Candidate", "description": "Identifies artifacts that should be considered for rotation out of display.", "definition": "An artifact is a rotation candidate when its current display duration exceeds 75% of its 'Display Safety Duration (DSD)' OR its 'Light Exposure Risk (LER)' score is greater than 7.", "type": "domain_knowledge", "children_knowledge": [4, 8]} +{"id": 16, "knowledge": "Showcase Failure Risk", "description": "Identifies showcases at risk of failing to maintain proper environmental conditions.", "definition": "Occurs when a showcase's 'Showcase Environmental Stability Rating (SESR)' is less than 4 OR at least three maintenance indicators (e.g., seal condition, maintenance status, filter status) are in a critical state.", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 17, "knowledge": "Conservation Budget Crisis", "description": "Identifies when conservation budget allocation is insufficient for high-priority artifacts.", "definition": "Occurs when the 'Conservation Budget Efficiency (CBE)' score is less than 0.5 AND at least one artifact has a 'Critical' conservation status and an 'Insufficient' budget allocation status.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 18, "knowledge": "Dynasty Value Artifact", "description": "Identifies artifacts from historically significant dynasties with higher research and cultural value.", "definition": "Artifacts from specific historically significant dynasties (e.g., 'Ming', 'Han', 'Tang') that also have a research value rating greater than 8.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Visitor Crowd Risk", "description": "Identifies exhibition halls where high visitor numbers pose risks to artifact safety.", "definition": "Occurs when the 'Visitor Impact Risk (VIR)' score is greater than 5 AND any artifact in the hall has the lowest 
security level rating.", "type": "domain_knowledge", "children_knowledge": [10]} +{"id": 20, "knowledge": "Organic Material Vulnerability", "description": "Identifies organic materials requiring special environmental conditions.", "definition": "Artifacts made of organic materials like 'Wood', 'Textile', or 'Paper' that also have a 'High' overall environmental sensitivity rating. These require specialized environmental controls.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Conservation Status Categories", "description": "Illustrates the conservation status values and their meanings.", "definition": "Values range from 'Excellent' (recently conserved, no issues), 'Good' (stable with minor issues), 'Fair' (stable but with noticeable issues), 'Poor' (active deterioration), to 'Critical' (severe deterioration requiring immediate intervention).", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Historical Significance Ratings", "description": "Illustrates the historical significance rating scale.", "definition": "A numerical scale typically ranging from 1-10, where 1-3 indicates minor historical significance, 4-7 indicates moderate significance, and 8-10 indicates exceptional historical importance.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Light Sensitivity Levels", "description": "Illustrates light sensitivity classifications and their implications.", "definition": "Values include 'Low' (can tolerate up to 300 lux), 'Medium' (should be limited to 150-200 lux), and 'High' (restricted to 50 lux or less).", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "Humidity Sensitivity Levels", "description": "Illustrates humidity sensitivity classifications and their implications.", "definition": "Values include 'Low' (can tolerate a wide range of humidity), 'Medium' (requires a controlled humidity range), and 'High' (requires a very stable and narrow humidity range).", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "CCTV Coverage Levels", "description": "Illustrates CCTV coverage classifications.", "definition": "'Full' indicates 100% space coverage, 'Partial' indicates 60-90% coverage, and 'Limited' indicates less than 60% coverage.", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Showcase Airtightness Levels", "description": "Illustrates showcase airtightness measurement.", "definition": "A numerical value representing air changes per day. Museum-grade showcases typically have values below 0.1, while values up to 5.0 indicate poor airtightness.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "Temperature Reading Implications", "description": "Illustrates temperature readings and their conservation implications.", "definition": "Ideal for most collections is 18-22°C. Fluctuations greater than 2°C within 24 hours can stress materials. Values outside 10-30°C may indicate control failure.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Relative Humidity Implications", "description": "Illustrates relative humidity readings and their conservation implications.", "definition": "Ideal for mixed collections is 45-55%. Fluctuations greater than 5% within 24 hours are risky. 
Readings below 30% or above 65% indicate significant risk.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "Light Intensity Implications", "description": "Illustrates light intensity measurements and their conservation implications.", "definition": "Conservation standards recommend 50 lux for highly sensitive materials, 150-200 lux for medium sensitivity materials, and up to 300 lux for low sensitivity materials.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Fine Particulate Matter Implications", "description": "Illustrates fine particulate matter measurements and their conservation implications.", "definition": "Represents PM2.5 concentration. Clean museum air should be below 5 µg/m³. Values above 15 indicate potential risk, and above 30 are hazardous.", "type": "value_illustration", "children_knowledge": -1} +{"id": 31, "knowledge": "Total Environmental Threat Level (TETL)", "description": "Comprehensive measurement of all environmental threats to an artifact based on multiple risk factors.", "definition": "Calculated as: 'Environmental Risk Factor (ERF)' + 'Light Exposure Risk (LER)' + ('Material Deterioration Rate (MDR)' * 2).", "type": "calculation_knowledge", "children_knowledge": [2, 8, 7]} +{"id": 32, "knowledge": "Showcase Protection Adequacy (SPA)", "description": "Measures how well a showcase protects its artifacts based on its stability and the artifacts' requirements.", "definition": "Calculated as: 'Showcase Environmental Stability Rating (SESR)' - ('Environmental Risk Factor (ERF)' * 0.5). Positive values indicate adequate protection.", "type": "calculation_knowledge", "children_knowledge": [5, 2]} +{"id": 33, "knowledge": "Conservation Backlog Risk (CBR)", "description": "Quantifies the risk associated with delayed conservation treatments.", "definition": "Calculated as: ('Conservation Priority Index (CPI)' * (Days since last cleaning date - Cleaning interval days)) / 100. Higher values indicate higher risk.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 34, "knowledge": "Visitor Capacity Safety Factor (VCSF)", "description": "Determines the safe visitor capacity for exhibition halls containing sensitive artifacts.", "definition": "Calculated as: 'Visitor Impact Risk (VIR)' / ('Artifact Vulnerability Score (AVS)' * 0.1). Lower values indicate safer visitor capacities.", "type": "calculation_knowledge", "children_knowledge": [10, 3]} +{"id": 35, "knowledge": "Exhibition Safety Quotient (ESQ)", "description": "Comprehensive safety rating for an exhibition based on artifacts, showcases, and visitor factors.", "definition": "Calculated as the average of three components: (10 - 'Artifact Vulnerability Score'), 'Artifact Exhibition Compatibility', and (10 - 'Visitor Impact Risk'). Higher values are safer.", "type": "calculation_knowledge", "children_knowledge": [3, 6, 10]} +{"id": 36, "knowledge": "Conservation Resource Allocation Efficiency (CRAE)", "description": "Measures how efficiently conservation resources are allocated based on priorities and budget.", "definition": "Calculated as: 'Conservation Budget Efficiency (CBE)' * (1 - ('Conservation Backlog Risk (CBR)' / 10)). 
Higher values indicate more efficient allocation.", "type": "calculation_knowledge", "children_knowledge": [9, 33]} +{"id": 37, "knowledge": "Material Aging Projection (MAP)", "description": "Projects the rate of artifact aging based on material type and environmental conditions.", "definition": "Calculated as: 'Material Deterioration Rate (MDR)' * (1 + ('Total Environmental Threat Level (TETL)' / 20)). Higher values indicate faster projected aging.", "type": "calculation_knowledge", "children_knowledge": [7, 31]} +{"id": 38, "knowledge": "Exhibition Rotation Priority Score (ERPS)", "description": "Calculates priority for rotating artifacts in and out of exhibition based on multiple factors.", "definition": "Calculated as: ('Display Safety Duration' - Months on Display) * ('Light Exposure Risk' + 1) * ('Conservation Priority Index' + 1) / 100. Lower values indicate higher rotation priority.", "type": "calculation_knowledge", "children_knowledge": [4, 8, 0]} +{"id": 39, "knowledge": "Environmental Compliance Index (ECI)", "description": "Measures how well current environmental conditions meet the requirements for an artifact.", "definition": "Calculated as: 10 - (absolute difference in temperature from ideal + absolute difference in humidity from ideal / 5 + 'Environmental Risk Factor' / 2). Higher values are better.", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 40, "knowledge": "Security Risk Exposure (SRE)", "description": "Quantifies an artifact's exposure to security risks based on value and security measures.", "definition": "Calculated as: (Insurance Value / 100000) * (10 - 'Visitor Impact Risk') / 10. Higher values indicate greater security risk.", "type": "calculation_knowledge", "children_knowledge": [10]} +{"id": 41, "knowledge": "Critical Conservation Alert", "description": "Identifies artifacts in critical condition requiring immediate intervention.", "definition": "An artifact that meets the 'Conservation Emergency' criteria AND has an 'Artifact Vulnerability Score (AVS)' greater than 8.", "type": "domain_knowledge", "children_knowledge": [12, 3]} +{"id": 42, "knowledge": "High Deterioration Risk Artifact", "description": "Identifies artifacts at high risk of rapid deterioration due to environmental factors.", "definition": "An artifact that has a 'Total Environmental Threat Level (TETL)' greater than 15 AND falls under the 'Accelerated Deterioration Scenario'.", "type": "domain_knowledge", "children_knowledge": [31, 14]} +{"id": 43, "knowledge": "Exhibition Rotation Urgency", "description": "Identifies artifacts that should be immediately removed from exhibition.", "definition": "Occurs when an artifact is an 'Exhibition Rotation Candidate' AND has an 'Exhibition Rotation Priority Score (ERPS)' less than 0.", "type": "domain_knowledge", "children_knowledge": [15, 38]} +{"id": 44, "knowledge": "Showcase Compatibility Issue", "description": "Identifies incompatible artifact-showcase pairings requiring adjustment.", "definition": "Occurs when the 'Showcase Protection Adequacy (SPA)' score is less than 0 AND the artifact is classified as a 'High-Value Artifact'.", "type": "domain_knowledge", "children_knowledge": [32, 11]} +{"id": 45, "knowledge": "Conservation Resource Crisis", "description": "Identifies serious conservation resource allocation problems.", "definition": "Occurs when the 'Conservation Resource Allocation Efficiency (CRAE)' is less than 0.3 AND there is a 'Conservation Budget Crisis'.", "type": "domain_knowledge", "children_knowledge": [36, 
17]} +{"id": 46, "knowledge": "Dynasty Artifact at Risk", "description": "Identifies historically significant dynasty artifacts at conservation risk.", "definition": "An artifact that qualifies as a 'Dynasty Value Artifact' AND has a 'Material Aging Projection (MAP)' score greater than 3.", "type": "domain_knowledge", "children_knowledge": [18, 37]} +{"id": 47, "knowledge": "Environmental Control Failure", "description": "Identifies situations where environmental controls are failing to protect artifacts.", "definition": "Occurs when the 'Environmental Compliance Index (ECI)' is less than 4 AND there is an 'Environmental Instability Event'.", "type": "domain_knowledge", "children_knowledge": [39, 13]} +{"id": 48, "knowledge": "High Security Priority Artifact", "description": "Identifies artifacts requiring enhanced security measures.", "definition": "An artifact that is classified as a 'High-Value Artifact' AND has a 'Security Risk Exposure (SRE)' score greater than 5.", "type": "domain_knowledge", "children_knowledge": [11, 40]} +{"id": 49, "knowledge": "Visitor Traffic Safety Concern", "description": "Identifies situations where visitor traffic poses safety concerns for exhibitions.", "definition": "Occurs when the 'Visitor Capacity Safety Factor (VCSF)' is greater than 2 AND there is a 'Visitor Crowd Risk' situation.", "type": "domain_knowledge", "children_knowledge": [34, 19]} +{"id": 50, "knowledge": "Organic Material Emergency", "description": "Identifies emergency situations for organic materials.", "definition": "Occurs when an artifact falls under the 'Organic Material Vulnerability' classification AND has a 'Total Environmental Threat Level (TETL)' greater than 12.", "type": "domain_knowledge", "children_knowledge": [20, 31]} +{"id": 51, "knowledge": "High-Value Category", "description": "Classification system for categorizing high-value artifacts based on their monetary or cultural/historical significance.", "definition": "An artifact is 'Monetary High-Value' if its insurance value exceeds $1,000,000. It is 'Cultural/Historical High-Value' if its significance and cultural scores are in the top 10%. Otherwise, it is in the 'Other' category. 
This is derived from the 'High-Value Artifact' criteria.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 52, "knowledge": "ERPS Decision Threshold", "description": "Converts ERPS scores into conservation actions.", "definition": "An 'Exhibition Rotation Priority Score (ERPS)' less than 0 triggers an 'Immediate Rotation' recommendation; otherwise, the recommendation is 'Monitor'.", "type": "domain_knowledge", "children_knowledge": [38]} +{"id": 53, "knowledge": "Light Exposure Thresholds", "description": "Defines maximum safe light exposure levels for artifacts based on material sensitivity.", "definition": "High sensitivity artifacts (e.g., textiles, paper) must not exceed 50 lux; Medium sensitivity artifacts (e.g., paintings, wood) must not exceed 200 lux.", "type": "domain_knowledge", "children_knowledge": [8, 21]} +{"id": 54, "knowledge": "Conservation Environment Chronology (CEC)", "description": "A methodological approach to segment environmental monitoring data into chronological intervals.", "definition": "Involves grouping environmental readings by fixed time intervals (such as by year) to discern patterns and inform conservation strategies.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 55, "knowledge": "Artifact Rarity & Valuation (ARV)", "description": "Establishes criteria for identifying artifacts of exceptional rarity and valuation.", "definition": "Artifacts are categorized with high rarity and valuation if their insurance value exceeds $1,000,000.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 56, "knowledge": "Conservation Priority Level", "description": "Classifies artifacts into priority levels based on their CPI scores.", "definition": "Based on the 'Conservation Priority Index (CPI)': 'High Priority' for scores > 7, 'Medium Priority' for scores between 4 and 7, and 'Low Priority' for scores <= 4.", "type": "domain_knowledge", "children_knowledge": [0]} +{"id": 57, "knowledge": "Humidity Exposure Thresholds", "description": "Defines maximum safe humidity exposure levels for artifacts based on material sensitivity.", "definition": "High sensitivity artifacts must not exceed 55% RH; Medium sensitivity artifacts must not exceed 60% RH.", "type": "domain_knowledge", "children_knowledge": [24]} +{"id": 58, "knowledge": "Exposure Status", "description": "Determines whether an artifact's current humidity exceeds its defined sensitivity threshold.", "definition": "Classified as 'Over Exposure' if the current relative humidity exceeds the 'Humidity Exposure Thresholds' defined for its sensitivity level, otherwise 'Within Limits'.", "type": "calculation_knowledge", "children_knowledge": [57]} +{"id": 59, "knowledge": "Secondary Stability Threshold", "description": "Defines a less stringent stability threshold for identifying showcases at risk.", "definition": "A risk assessment level using a less stringent 'Showcase Environmental Stability Rating (SESR)' threshold (e.g., a score < 5 instead of < 4) to prompt deeper evaluation.", "type": "calculation_knowledge", "children_knowledge": [5]} +{"id": 60, "knowledge": "Compliance Level", "description": "Categorizes the Environmental Compliance Index (ECI) score into human-readable levels.", "definition": "Assigns a level based on the 'Environmental Compliance Index (ECI)' score: 'Excellent' for scores > 7, 'Good' for scores between 4 and 7 (inclusive), and 'Poor' for scores <= 4.", "type": "domain_knowledge", "children_knowledge": [39]} +{"id": 61, "knowledge": "Overdue Days", 
"description": "Calculates the number of days a scheduled maintenance task is past its due date.", "definition": "Calculated as the difference between the current date and the scheduled next cleaning date. A positive value indicates the number of days overdue.", "type": "calculation_knowledge", "children_knowledge": -1} diff --git a/museum_artifact/museum_artifact_schema.txt b/museum_artifact/museum_artifact_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..284127d135b6183f32af06e10e26d74e6e8c4d71 --- /dev/null +++ b/museum_artifact/museum_artifact_schema.txt @@ -0,0 +1,370 @@ +"CREATE" TABLE "ArtifactRatings" ( +"HIST_sign" bigint NOT NULL, +"ART_link" text NOT NULL, +rating_profile jsonb NULL, + "PRIMARY" KEY (HIST_sign), + "FOREIGN" KEY ("ART_link") REFERENCES ArtifactsCore(ARTregID) +); + + + +"First" 3 rows: + HIST_sign ART_link rating_profile +----------- ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 7 ART54317 {'exhibit_value': 4, 'cultural_score': 25, 'research_score': 9, 'educational_value': 9, 'deterioration_rate': 'Moderate', 'material_stability': 'Unstable', 'public_access_rating': 9, 'treatment_complexity': 'Complex', 'conservation_difficulty': 'Medium'} + 3 ART54254 {'exhibit_value': 7, 'cultural_score': 13, 'research_score': 5, 'educational_value': 3, 'deterioration_rate': 'Rapid', 'material_stability': 'Stable', 'public_access_rating': 1, 'treatment_complexity': 'Moderate', 'conservation_difficulty': 'High'} + 5 ART69978 {'exhibit_value': None, 'cultural_score': 4, 'research_score': 10, 'educational_value': 3, 'deterioration_rate': 'Rapid', 'material_stability': 'Moderate', 'public_access_rating': None, 'treatment_complexity': 'Moderate', 'conservation_difficulty': 'High'} +... + + +"CREATE" TABLE "SensitivityData" ( +"ENVsense" text NOT NULL, +"ART_link" text NOT NULL, +env_handling_sensitivity jsonb NULL, + "PRIMARY" KEY (ENVsense), + "FOREIGN" KEY ("ART_link") REFERENCES ArtifactsCore(ARTregID) +); + + + +"First" 3 rows: +ENVsense ART_link env_handling_sensitivity +---------- ---------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Low ART54317 {'context': {'display': 'Low', 'storage': 'Medium'}, 'biological': {'pest': None}, 'environment': {'light': 'High', 'humidity': 'Medium', 'vibration': 'Medium', 'pollutants': None, 'temperature': None}, 'handling_transport': {'handling': 'Medium', 'transport': 'High'}} +High ART54254 {'context': {'display': 'Low', 'storage': 'Low'}, 'biological': {'pest': 'Low'}, 'environment': {'light': 'Low', 'humidity': 'High', 'vibration': 'High', 'pollutants': 'Medium', 'temperature': 'Low'}, 'handling_transport': {'handling': 'Medium', 'transport': 'High'}} +Medium ART48028 {'context': {'display': 'High', 'storage': 'Low'}, 'biological': {'pest': None}, 'environment': {'light': None, 'humidity': 'Medium', 'vibration': 'High', 'pollutants': 'High', 'temperature': 'Low'}, 'handling_transport': {'handling': 'High', 'transport': 'Low'}} +... 
+ + +"CREATE" TABLE "ArtifactsCore" ( +"ARTregID" text NOT NULL, +art_title text NULL, +"DYNASTY" text NULL, +"ageYears" bigint NULL, +"MatKind" text NULL, +conserve_status text NULL, + "PRIMARY" KEY (ARTregID) +); + + + +"First" 3 rows: +ARTregID art_title DYNASTY ageYears MatKind conserve_status +---------- ---------------- --------- ---------- --------- ----------------- +ART54317 Culture Painting Ming 943 Stone Good +ART54254 Poor Vase Song 2179 Textile Fair +ART69978 Order Painting Qing 366 Bronze +... + + +"CREATE" TABLE "ExhibitionHalls" ( +"Hall_ID" text NOT NULL, +security_visitor_overview jsonb NULL, + "PRIMARY" KEY (Hall_ID) +); + + + +"First" 3 rows: +Hall_ID security_visitor_overview +--------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Hall-3 {'security': {'alarm_status': 'Armed', 'cctv_coverage': None, 'access_control': 'Maintenance', 'motion_detection': 'Active'}, 'behaviour_notes': 'Poor', 'visitor_statistics': {'visitor_flow': 'Low', 'avg_dwell_minutes': 16, 'avg_daily_visitors': 308}} +Hall-12 {'security': {'alarm_status': 'Armed', 'cctv_coverage': 'Full', 'access_control': 'Active', 'motion_detection': 'Maintenance'}, 'behaviour_notes': 'Poor', 'visitor_statistics': {'visitor_flow': 'Low', 'avg_dwell_minutes': 11, 'avg_daily_visitors': 993}} +Hall-9 {'security': {'alarm_status': 'Partial', 'cctv_coverage': 'Full', 'access_control': 'Maintenance', 'motion_detection': 'Partial'}, 'behaviour_notes': 'Poor', 'visitor_statistics': {'visitor_flow': 'High', 'avg_dwell_minutes': 6, 'avg_daily_visitors': 888}} +... + + +"CREATE" TABLE "Showcases" ( +"caseID" text NOT NULL, +hall_ref text NULL, +case_environment_profile jsonb NULL, + "PRIMARY" KEY (caseID), + "FOREIGN" KEY (hall_ref) REFERENCES ExhibitionHalls(Hall_ID) +); + + + +"First" 3 rows: +caseID hall_ref case_environment_profile +-------- ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +SC9857 Hall-3 {'maintenance': {'maint_status': 'Overdue', 'filter_status': 'Replace Now', 'silica_status': 'Active', 'silica_last_replaced': '2024-09-15'}, 'physical_state': {'seal_state': None, 'leak_rate_per_day': 0.41, 'airtightness_factor': 95.1, 'internal_pressure_pa': -3, 'construction_material': 'Tempered Glass'}, 'buffer_capacity': {'humidity_capacity_g': 81, 'pollutant_capacity_mg': None}, 'safety_and_power': {'inert_gas_state': 'Active', 'fire_system_state': 'Maintenance', 'backup_power_state': 'Ready', 'primary_power_state': 'Testing'}} +SC7393 Hall-12 {'maintenance': {'maint_status': 'Overdue', 'filter_status': 'Replace Now', 'silica_status': 'Active', 'silica_last_replaced': '2024-12-15'}, 'physical_state': {'seal_state': 'Excellent', 'leak_rate_per_day': 0.07, 'airtightness_factor': 93, 'internal_pressure_pa': 3, 'construction_material': 'Tempered Glass'}, 'buffer_capacity': {'humidity_capacity_g': 78, 
'pollutant_capacity_mg': 79.8}, 'safety_and_power': {'inert_gas_state': 'Standby', 'fire_system_state': 'Active', 'backup_power_state': 'Maintenance', 'primary_power_state': 'Active'}} +SC9391 Hall-3 {'maintenance': {'maint_status': 'Due', 'filter_status': 'Replace Soon', 'silica_status': 'Replace Soon', 'silica_last_replaced': '2024-12-21'}, 'physical_state': {'seal_state': 'Good', 'leak_rate_per_day': 0.2, 'airtightness_factor': 99.4, 'internal_pressure_pa': -4, 'construction_material': 'Glass'}, 'buffer_capacity': {'humidity_capacity_g': 93, 'pollutant_capacity_mg': 66.3}, 'safety_and_power': {'inert_gas_state': 'Active', 'fire_system_state': 'Active', 'backup_power_state': 'Testing', 'primary_power_state': 'Active'}} +... + + +"CREATE" TABLE "EnvironmentalReadingsCore" ( +monitor_code text NOT NULL, +"readTS" timestamp without time zone NOT NULL, +case_link text NULL, +"TEMPc" bigint NULL, +"tempVar24" real NULL, +"RH" bigint NULL, +"RHvar" bigint NULL, +air_press real NULL, + "PRIMARY" KEY (monitor_code), + "FOREIGN" KEY (case_link) REFERENCES Showcases(caseID) +); + + + +"First" 3 rows: +monitor_code readTS case_link TEMPc tempVar24 RH RHvar air_press +-------------- ------------------- ----------- ------- ----------- ---- ------- ----------- +MM191823 2024-08-06 08:38:48 SC9857 22 0.85 53 3 1020 +MM153427 2025-02-07 03:00:17 SC7393 18 1.34 54 1 1013.4 +MM675303 2024-07-25 09:37:21 SC9391 nan 1.75 48 2 1015.3 +... + + +"CREATE" TABLE "AirQualityReadings" ( +aq_id bigint NOT NULL DEFAULT nextval('"AirQualityReadings_aq_id_seq"'::regclass), +env_link text NOT NULL, +air_quality_metrics jsonb NULL, + "PRIMARY" KEY (aq_id), + "FOREIGN" KEY (env_link) REFERENCES EnvironmentalReadingsCore(monitor_code) +); + + + +"First" 3 rows: + aq_id env_link air_quality_metrics +------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 MM191823 {'air_flow': {'air_velocity_ms': 0.18, 'air_exchange_rate_h': 6.4}, 'particulates': {'pm10_ug_m3': None, 'pm25_ug_m3': 16.7}, 'gases_ppm_ppb': {'co2_ppm': '794 ppm', 'no2_ppb': 27, 'so2_ppb': 12, 'tvoc_ppb': 89, 'ozone_ppb': 11, 'formaldehyde_mg_m3': 0.014}} + 2 MM153427 {'air_flow': {'air_velocity_ms': 0.19, 'air_exchange_rate_h': 4.3}, 'particulates': {'pm10_ug_m3': None, 'pm25_ug_m3': 10.7}, 'gases_ppm_ppb': {'co2_ppm': '539 ppm', 'no2_ppb': 21, 'so2_ppb': 18, 'tvoc_ppb': 420, 'ozone_ppb': 12, 'formaldehyde_mg_m3': 0.035}} + 3 MM675303 {'air_flow': {'air_velocity_ms': 0.14, 'air_exchange_rate_h': None}, 'particulates': {'pm10_ug_m3': 29, 'pm25_ug_m3': 5.4}, 'gases_ppm_ppb': {'co2_ppm': '402 ppm', 'no2_ppb': 13, 'so2_ppb': 14, 'tvoc_ppb': 393, 'ozone_ppb': 47, 'formaldehyde_mg_m3': 0.077}} +... 
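With `Showcases` and `EnvironmentalReadingsCore` both defined, the "Showcase Environmental Stability Rating (SESR)" (kb id 5) can be read as a join between a reading's 24-hour variations and the showcase's leak rate stored in the JSONB profile. A hedged sketch follows; the numeric casts and the per-reading granularity are assumptions rather than the benchmark's ground truth.

```sql
-- Illustrative sketch of SESR (kb id 5):
-- 10 - ((tempVar24 + RHvar/5 + leak_rate_per_day) / 3), computed per reading.
SELECT r.monitor_code,
       r.case_link,
       10 - ((r."tempVar24"
              + r."RHvar" / 5.0
              + (s.case_environment_profile #>> '{physical_state,leak_rate_per_day}')::numeric)
             / 3) AS sesr
FROM "EnvironmentalReadingsCore" r
JOIN "Showcases" s ON s."caseID" = r.case_link;
```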
+ + +"CREATE" TABLE "SurfaceAndPhysicalReadings" ( +surf_id bigint NOT NULL DEFAULT nextval('"SurfaceAndPhysicalReadings_surf_id_seq"'::regclass), +env_link text NOT NULL, +vibra_mms2 real NULL, +"noise_dB" bigint NULL, +"dust_Mg_m2" real NULL, +"microbe_CFU" bigint NULL, +"moldIdx" real NULL, +"pestActivity" text NULL, +"pestTrap" bigint NULL, +"pestSpecies" text NULL, +"surface_pH" real NULL, +moist_pct real NULL, +"saltRisk" text NULL, +"metalCorr" real NULL, +"organicDeg" real NULL, +"deltaE" real NULL, +"surfTemp" real NULL, +"surfRH" real NULL, +"condRisk" text NULL, +"thermalImg" text NULL, +"structStable" text NULL, +"crackNote" text NULL, +deform_mm real NULL, +"wtPct" real NULL, +"surfDust" bigint NULL, +"O2_pct" real NULL, +"N2_pct" real NULL, + "PRIMARY" KEY (surf_id), + "FOREIGN" KEY (env_link) REFERENCES EnvironmentalReadingsCore(monitor_code) +); + + + +"First" 3 rows: + surf_id env_link vibra_mms2 noise_dB dust_Mg_m2 microbe_CFU moldIdx pestActivity pestTrap pestSpecies surface_pH moist_pct saltRisk metalCorr organicDeg deltaE surfTemp surfRH condRisk thermalImg structStable crackNote deform_mm wtPct surfDust O2_pct N2_pct +--------- ---------- ------------ ---------- ------------ ------------- --------- -------------- ---------- ------------- ------------ ----------- ---------- ----------- ------------ -------- ---------- -------- ---------- ------------ -------------- ------------------- ----------- ------- ---------- -------- -------- + 1 MM191823 0.461 47 1.74 234 0.1 Medium 10 6.7 10.3 High 0.04 0.47 1.99 19.11 45.46 Medium Normal Stable Significant Changes 0.08 -0.001 6 20.8 78.75 + 2 MM153427 0.053 50 0.39 450 0.33 Low 6 6.5 11 High 0.05 0.37 0.87 nan 52.95 Critical Stable Significant Changes nan -0.011 4 20.53 78.3 + 3 MM675303 0.018 42 2.77 486 0.43 Low 7 6.6 nan nan 0.92 1.48 20.53 54.81 Critical Stable Minor Changes 0.16 nan 4 20.31 78.62 +... + + +"CREATE" TABLE "LightAndRadiationReadings" ( +rad_id bigint NOT NULL DEFAULT nextval('"LightAndRadiationReadings_rad_id_seq"'::regclass), +env_link text NOT NULL, +lux bigint NULL, +"UV_uW" real NULL, +"IR_W" real NULL, +"visLxh" bigint NULL, + "PRIMARY" KEY (rad_id), + "FOREIGN" KEY (env_link) REFERENCES EnvironmentalReadingsCore(monitor_code) +); + + + +"First" 3 rows: + rad_id env_link lux UV_uW IR_W visLxh +-------- ---------- ----- ------- ------ -------- + 1 MM191823 nan 32.58 7.51 71166 + 2 MM153427 138 64.99 7.81 69438 + 3 MM675303 71 66.82 5.47 75541 +... + + +"CREATE" TABLE "ConditionAssessments" ( +cond_id bigint NOT NULL DEFAULT nextval('"ConditionAssessments_cond_id_seq"'::regclass), +art_exam text NULL, +case_exam text NULL, +light_link bigint NULL, +score bigint NULL, +assess_date date NULL, +next_due date NULL, + "PRIMARY" KEY (cond_id), + "FOREIGN" KEY (art_exam) REFERENCES ArtifactsCore(ARTregID), + "FOREIGN" KEY (case_exam) REFERENCES Showcases(caseID), + "FOREIGN" KEY (light_link) REFERENCES LightAndRadiationReadings(rad_id) +); + + + +"First" 3 rows: + cond_id art_exam case_exam light_link score assess_date next_due +--------- ---------- ----------- ------------ ------- ------------- ---------- + 1 ART54317 SC9857 1 93 2024-09-15 2025-04-17 + 2 ART54254 SC7393 2 48 2024-03-27 2025-09-09 + 3 ART69978 SC9391 3 61 2024-05-01 2025-11-10 +... 
+ + +"CREATE" TABLE "RiskAssessments" ( +risk_id text NOT NULL, +art_concern text NOT NULL, +hall_concern text NULL, +risk_level text NULL, +emerg_plan text NULL, +"evacPrio" text NULL, +handle_rules text NULL, +conserve_score bigint NULL, + "PRIMARY" KEY (risk_id), + "FOREIGN" KEY (art_concern) REFERENCES ArtifactsCore(ARTregID), + "FOREIGN" KEY (hall_concern) REFERENCES ExhibitionHalls(Hall_ID) +); + + + +"First" 3 rows: +risk_id art_concern hall_concern risk_level emerg_plan evacPrio handle_rules conserve_score +--------- ------------- -------------- ------------ --------------- ---------- -------------- ---------------- +11X1B3CW ART54317 Hall-3 Medium Review Required Priority 3 Minimal 85 +WE7WL5Y2 ART54254 Hall-12 Medium Under Revision Priority 1 Strict 76 +2248Y534 ART69978 Hall-3 Medium Priority 2 Minimal 91 +... + + +"CREATE" TABLE "ConservationAndMaintenance" ( +maint_id bigint NOT NULL DEFAULT nextval('"ConservationAndMaintenance_maint_id_seq"'::regclass), +monitor_link text NULL, +surf_link bigint NULL, +treat_stat text NULL, +prio_tag text NULL, +"lastClean" date NULL, +"nextClean" date NULL, +"cleanDays" bigint NULL, +"maintLog" text NULL, +incident_stat text NULL, +drill_stat text NULL, +train_stat text NULL, +budget_alloc text NULL, +budget_stat text NULL, +"conserveFreq" text NULL, +history text NULL, +"prevTreat" bigint NULL, +"treatEffect" text NULL, +"reversePot" text NULL, + "PRIMARY" KEY (maint_id), + "FOREIGN" KEY (monitor_link) REFERENCES EnvironmentalReadingsCore(monitor_code), + "FOREIGN" KEY (surf_link) REFERENCES SurfaceAndPhysicalReadings(surf_id) +); + + + +"First" 3 rows: + maint_id monitor_link surf_link treat_stat prio_tag lastClean nextClean cleanDays maintLog incident_stat drill_stat train_stat budget_alloc budget_stat conserveFreq history prevTreat treatEffect reversePot +---------- -------------- ----------- ------------ ---------- ----------- ----------- ----------- ---------- --------------- ------------ ------------ --------------- ------------- -------------- --------- ----------- ------------- ------------ + 1 MM191823 1 In Progress High 2024-12-16 2025-05-10 83 Updated Closed Current Current Review Required Limited Rare Extensive 4 Low Medium + 2 MM153427 2 Medium 2024-12-13 2025-03-26 nan Updated Overdue Overdue Review Required Depleted Rare Minimal 1 Low High + 3 MM675303 3 Not Required Low 2024-11-21 2025-05-14 85 Pending Closed Overdue Overdue Insufficient Limited Rare Moderate 8 Low Low +... 
+ + +"CREATE" TABLE "UsageRecords" ( +usage_id bigint NOT NULL DEFAULT nextval('"UsageRecords_usage_id_seq"'::regclass), +env_link text NULL, +rotate_sched text NULL, +"displayMonths" bigint NULL, +"restMonths" bigint NULL, +"dispReqs" text NULL, +"storeReqs" text NULL, +"handleReqs" text NULL, +"transportReqs" text NULL, +"packReqs" text NULL, +"resAccess" text NULL, +"publicDisp" text NULL, +"loanFreq" text NULL, +"handleFreq" text NULL, +"docuFreq" text NULL, +"monitorFreq" text NULL, +"assessFreq" text NULL, +"conserveFreq" text NULL, +"maintFreq" text NULL, +"inspectFreq" text NULL, +"calibFreq" text NULL, +"certStatus" text NULL, +"complianceStatus" text NULL, +"auditStatus" text NULL, +"qualityStatus" text NULL, + "PRIMARY" KEY (usage_id), + "FOREIGN" KEY (env_link) REFERENCES EnvironmentalReadingsCore(monitor_code) +); + + + +"First" 3 rows: + usage_id env_link rotate_sched displayMonths restMonths dispReqs storeReqs handleReqs transportReqs packReqs resAccess publicDisp loanFreq handleFreq docuFreq monitorFreq assessFreq conserveFreq maintFreq inspectFreq calibFreq certStatus complianceStatus auditStatus qualityStatus +---------- ---------- -------------- --------------- ------------ ---------- ----------- ------------ --------------- ---------- ----------- ------------ ---------- ------------ ---------- ------------- ------------ -------------- ----------- ------------- ----------- ------------ ------------------ ------------- --------------- + 1 MM191823 Permanent 1 22 Special Standard Custom Custom Frequent Frequent Occasional Rare Frequent Monthly Monthly Rare Monthly Weekly Monthly Expired Non-compliant Passed Failed + 2 MM153427 Resting 5 11 Standard Custom Special Special Special Rare Frequent Rare Rare Quarterly Rare Weekly Monthly Quarterly Current Partial Pending Failed + 3 MM675303 Permanent 6 10 Custom Custom Standard Standard Special Rare Occasional Occasional Rare Frequent Weekly Quarterly Rare Weekly Daily Monthly Non-compliant Failed Passed +... + + +"CREATE" TABLE "ArtifactSecurityAccess" ( +loan_stat text NOT NULL, +"insUSD" real NULL, +"SEC_LEVEL" text NULL, +access_restrict text NULL, +docu_stat text NULL, +photo_docu text NULL, +cond_report text NULL, +conserve_rec text NULL, +research_access text NULL, +digital_rec text NULL, + "PRIMARY" KEY (loan_stat) +); + + + +"First" 3 rows: +loan_stat insUSD SEC_LEVEL access_restrict docu_stat photo_docu cond_report conserve_rec research_access digital_rec +------------- -------- ----------- ----------------- ----------- ------------ ------------- --------------- ----------------- ------------- +"On" Loan 968368 Level 3 Public Updating Outdated Current Review Required Limited In Progress +Available 36135 Level 3 Public Partial Required Due Pending Limited Partial +"Not" Available 776900 Level 3 Limited Updating Current Updated Available Complete +... + + +"CREATE" TABLE "Monitor_Showcase_Map" ( +"mon_ID" text NOT NULL, +"case_ID" text NOT NULL, + "PRIMARY" KEY (mon_ID, case_ID) +); + + + +"First" 3 rows: +mon_ID case_ID +-------- --------- +MM191823 SC9857 +MM153427 SC7393 +MM675303 SC9391 +... diff --git a/organ_transplant/organ_transplant_column_meaning_base.json b/organ_transplant/organ_transplant_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..0704bd95b2222e4e9cf31439203cc5b2edf5d6c0 --- /dev/null +++ b/organ_transplant/organ_transplant_column_meaning_base.json @@ -0,0 +1,179 @@ +{ + "organ_transplant|demographics|contrib_registry": "TEXT. 
Primary key uniquely identifying each organs donor in the transplant matching system. PK = Demographics(Contrib_Registry). Example: D812743.", + "organ_transplant|demographics|age_count": "TEXT. Donor's age in years at time of organs recovery. Example: 57 years, mature donor.", + "organ_transplant|demographics|blood_class": "TEXT. Donor's ABO blood type classification for compatibility matching. Possible values: AB- rare negative type.", + "organ_transplant|demographics|nation_ref": "TEXT. Donor's country of origin or nationality reference. Example: Seychelles.", + "organ_transplant|medical_history|contrib_med_registry": "TEXT. Primary key linking to donor demographics record. PK = Medical_History(Contrib_Med_Registry), FK to Demographics.", + "organ_transplant|medical_history|med_history": "TEXT. Comprehensive medical history and conditions of the donor. **NULL means medical history documentation is incomplete or unavailable.**. Example: None,Cancer,Heart Disease.", + "organ_transplant|medical_history|smk_cond": "TEXT. Donor's smoking status and history. Possible values: Never smoked, optimal lung condition.", + "organ_transplant|medical_history|alc_cond": "TEXT. Donor's alcohol use status and history. **NULL means alcohol use history is not documented or unknown.**. Possible values: Heavy use, liver function requires evaluation.", + "organ_transplant|medical_history|drug_cond": "TEXT. Donor's drug use status and substance abuse history. **NULL means drug use history is not documented or assessed.**. Possible values: Current use, high risk assessment.", + "organ_transplant|hla_info|immu_don_registry": "TEXT. Primary key linking to donor demographics for HLA typing data. PK = HLA_Info(Immu_Don_Registry), FK to Demographics.", + "organ_transplant|hla_info|hla_a_val": "REAL. Donor's HLA-A antigen typing value for immunological compatibility. Example: 89,11.", + "organ_transplant|hla_info|hla_b_val": "REAL. Donor's HLA-B antigen typing value for immunological compatibility. Example: 22,60.", + "organ_transplant|hla_info|hla_dr_val": "REAL. Donor's HLA-DR antigen typing value for immunological compatibility. Example: 59,9.", + "organ_transplant|hla_info|hla_dq_val": "REAL. Donor's HLA-DQ antigen typing value for immunological compatibility. Example: 8,1.", + "organ_transplant|function_and_recovery|recov_don_registry": "TEXT. Primary key linking to donor demographics for organs function data. PK = Function_and_Recovery(Recov_Don_Registry), FK to Demographics.", + "organ_transplant|function_and_recovery|don_crtn_val": "TEXT. Donor's serum creatinine level indicating kidney function. Example: 1.62 mg/dL.", + "organ_transplant|function_and_recovery|don_gfr_val": "REAL. Donor's glomerular filtration rate measuring kidney function. Example: 103.4 mL/min/1.73m².", + "organ_transplant|function_and_recovery|don_co_desc": "TEXT. Description of donor's cause of death. Possible values: Anoxia, oxygen deprivation may affect organ quality.", + "organ_transplant|function_and_recovery|org_recov_dt": "TEXT. Date and time when organss were recovered from donor.** Possible values: 02-18-2025 HKT.", + "organ_transplant|function_and_recovery|org_presv_meth": "TEXT. Method used for organs preservation during transport. Possible values: Static cold storage, standard preservation method.", + "organ_transplant|function_and_recovery|org_isch_time": "TEXT. Cold ischemia time in minutes from recovery to transplant. Example: 702 mins.", + "organ_transplant|recipients_demographics|recip_registry": "TEXT. 
Primary key uniquely identifying each transplant recipient. PK = Recipients_Demographics(Recip_Registry). Example: R947153.", + "organ_transplant|recipients_demographics|age_count": "SMALLINT. Recipient's age in years at time of transplant evaluation. Example: 57.", + "organ_transplant|recipients_demographics|gend_type": "TEXT. Recipient's gender classification code. Possible values: F, M.", + "organ_transplant|recipients_demographics|blood_class": "TEXT. Recipient's ABO blood type for compatibility matching. Possible values: A+, A-, AB+, AB-, B+, B-, O+, O-.", + "organ_transplant|recipients_demographics|ht_cm": "SMALLINT. Recipient's height measurement in centimeters. Example: 171.", + "organ_transplant|recipients_demographics|wt_kg": "BIGINT. Recipient's weight measurement in kilograms. Example: 55.", + "organ_transplant|recipients_demographics|bmi_val": "REAL. Recipient's calculated body mass index value. Example: 18.8.", + "organ_transplant|recipients_demographics|ethn_grp": "TEXT. Recipient's ethnic or racial background classification. Possible values: African, Asian, Caucasian, Hispanic, Other.", + "organ_transplant|clinical|clin_recip_registry": "TEXT. Primary key linking to recipient demographics for clinical data. PK = Clinical(Clin_Recip_Registry), FK to Recipients_Demographics.", + "organ_transplant|clinical|diag_detail": "TEXT. Primary diagnosis and medical condition requiring transplant. Possible values: Acute failure, immediate transplant required.", + "organ_transplant|clinical|wait_time": "TEXT. Number of days recipient has been on transplant waiting list. Example: 104 days", + "organ_transplant|clinical|med_urgency": "TEXT. Medical urgency status classification for transplant priority. **DATA NOISE: 'Status 2' partially replaced with '2', 'Status 3' partially replaced with '3', etc.** Possible values: 2, 3, Status 1A, Status 1B, Status 2, Status 3.", + "organ_transplant|clinical|prev_tx_count": "TEXT. Number of previous transplants recipient has received. Possible values: 0 previous transplants, first-time recipient.", + "organ_transplant|clinical|dial_status": "TEXT. Current dialysis status and treatment regimen. **NULL means dialysis status is not applicable or not documented.**. Possible values: Hemodialysis, standard renal replacement therapy.", + "organ_transplant|clinical|dial_duration": "TEXT. Duration of dialysis treatment in months. Example: 45 months", + "organ_transplant|clinical|comorbid_detail": "TEXT. Comorbid conditions and secondary medical issues. **NULL means comorbidity assessment is incomplete or not available.**. Example: Hypertension,Heart Disease,Diabetes.", + "organ_transplant|recipients_immunology|immu_recip_registry": "TEXT. Primary key linking to recipient demographics for immunological data. PK = Recipients_Immunology(Immu_Recip_Registry), FK to Recipients_Demographics.", + "organ_transplant|recipients_immunology|pra_score": "REAL. Panel reactive antibody score indicating sensitization level. Example: 6.", + "organ_transplant|recipients_immunology|dsa_state": "TEXT. Donor-specific antibody presence status. Possible values: Negative, Positive.", + "organ_transplant|recipients_immunology|cross_result": "TEXT. Crossmatch test result for donor-recipient compatibility. Possible values: Negative, Pending, Positive.", + "organ_transplant|recipients_immunology|cmv_state": "TEXT. Recipient's cytomegalovirus infection status. Possible values: Negative, Positive.", + "organ_transplant|recipients_immunology|ebv_state": "TEXT. 
Recipient's Epstein-Barr virus infection status. Possible values: Negative, Positive.", + "organ_transplant|recipients_immunology|func_status": "TEXT. Recipient's functional status assessment. Possible values: Mild Impairment, Moderate Impairment, Normal, Severe Impairment.", + "organ_transplant|recipients_immunology|life_support": "TEXT. Life support requirements and status. **NULL means life support status is not applicable or not documented.**. Possible values: ECMO, Mechanical Ventilation, VAD.", + "organ_transplant|transplant_matching|match_rec_registry": "TEXT. Primary key uniquely identifying each transplant matching record. PK = Transplant_Matching(Match_Rec_Registry). Example: TM113504.", + "organ_transplant|transplant_matching|created_ts": "TIMESTAMP. Timestamp when matching record was created. Example: 2025-02-19 08:31:22.330375.", + "organ_transplant|transplant_matching|donor_ref_reg": "TEXT. Reference to donor demographics record. FK to Demographics.", + "organ_transplant|transplant_matching|recip_ref_reg": "TEXT. Reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|transplant_matching|org_spec": "TEXT. Specific organs type being matched for transplant. Possible values: Heart, Kidney, Liver, Lung, Pancreas.", + "organ_transplant|transplant_matching|match_status": "TEXT. Current status of the matching process. Possible values: Completed, Failed, In Progress, Matched, Pending.", + "organ_transplant|transplant_matching|score_val": "REAL. Overall matching score for donor-recipient compatibility. Example: 0.005.", + "organ_transplant|transplant_matching|level_val": "TEXT. Matching level classification. Possible values: Acceptable, High Risk, Marginal, Optimal.", + "organ_transplant|transplant_matching|alg_vers": "TEXT. Version of matching algorithm used. Example: v2.5.", + "organ_transplant|transplant_matching|run_registry": "TEXT. Matching run identifier for batch processing. Example: MR324767.", + "organ_transplant|transplant_matching|match_ts": "TIMESTAMP. Timestamp when matching algorithm was executed. ** Possible values: 2025.02.19 08:31:22.", + "organ_transplant|transplant_matching|dur_sec": "BIGINT. Duration of matching process in seconds. Example: 128.", + "organ_transplant|transplant_matching|conf_val": "REAL. Confidence value for matching result. Example: 0.056.", + "organ_transplant|transplant_matching|dss_val": "REAL. Decision support system score. Example: 0.68.", + "organ_transplant|compatibility_metrics|match_comp_registry": "TEXT. Primary key linking to transplant matching record. PK = Compatibility_Metrics(Match_Comp_Registry), FK to Transplant_Matching.", + "organ_transplant|compatibility_metrics|hla_mis_count": "BIGINT. Number of HLA antigen mismatches between donor and recipient. Possible values: 0 ", + "organ_transplant|compatibility_metrics|blood_compat": "TEXT. Blood type compatibility assessment result. **NULL means blood compatibility assessment is pending or unavailable.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: incompatible.", + "organ_transplant|compatibility_metrics|distance": "TEXT. Geographic distance between donor and recipient in kilometers. Example: 1815 miles", + "organ_transplant|compatibility_metrics|exp_isch_time": "TEXT. Expected cold ischemia time in minutes. Example: 539 minutes ischemia.", + "organ_transplant|compatibility_metrics|exp_time": "TEXT. Expected transport time in minutes. 
Example: 45 minutes transport.", + "organ_transplant|compatibility_metrics|cost_est": "TEXT. Estimated cost for transplant procedure and logistics. Example: US$5210.53", + "organ_transplant|compatibility_metrics|donor_ref_reg": "TEXT. Reference to donor demographics record. FK to Demographics.", + "organ_transplant|compatibility_metrics|recip_ref_reg": "TEXT. Reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|risk_evaluation|risk_eval_registry": "TEXT. Primary key linking to transplant matching record. PK = Risk_Evaluation(Risk_Eval_Registry), FK to Transplant_Matching.", + "organ_transplant|risk_evaluation|org_qual_val": "REAL. organs quality assessment score. Example: 0.964.", + "organ_transplant|risk_evaluation|egs_val": "REAL. Expected graft survival score. Example: 0.114.", + "organ_transplant|risk_evaluation|eps_val": "REAL. Expected patient survival score. Example: 0.352.", + "organ_transplant|risk_evaluation|surg_cmpl_val": "REAL. Surgical complexity assessment score. Example: 0.747.", + "organ_transplant|risk_evaluation|surg_risk_val": "REAL. Surgical risk assessment score. **NULL means surgical risk assessment is incomplete or not performed.** **DATA NOISE: Random NULL values added to original data.** Example: 0.33.", + "organ_transplant|risk_evaluation|res_avail_val": "REAL. Resource availability score for transplant center. Example: 0.567.", + "organ_transplant|risk_evaluation|cntr_exp_val": "REAL. Transplant center experience score. Example: 0.215.", + "organ_transplant|risk_evaluation|cntr_vol_val": "REAL. Transplant center volume score. Example: 0.919.", + "organ_transplant|risk_evaluation|cntr_out_val": "REAL. Transplant center outcomes score. Example: 0.077.", + "organ_transplant|risk_evaluation|qol_val": "REAL. Quality of life assessment score. **NULL means quality of life assessment is not completed or unavailable.** ** Example: 0.185.", + "organ_transplant|risk_evaluation|cost_eff_val": "REAL. Cost effectiveness assessment score. Example: 0.037.", + "organ_transplant|risk_evaluation|alloc_prio_val": "REAL. Allocation priority score for organs distribution. Example: 0.917.", + "organ_transplant|risk_evaluation|donor_ref_reg": "TEXT. Reference to donor demographics record. FK to Demographics.", + "organ_transplant|risk_evaluation|recip_ref_reg": "TEXT. Reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|risk_evaluation|cost_qaly": "TEXT. Cost effectiveness per quality-adjusted life year. Example: 45000 USD/QALY", + "organ_transplant|risk_evaluation|resource_consumption": "TEXT. Medical resource usage rate. Example: 11.339999437332153 units/day", + "organ_transplant|risk_evaluation|staff": "TEXT. Medical staff time requirement. Example: 15.46999979019165 hrs/case", + "organ_transplant|allocation_details|allc_match_registry": "TEXT. Primary key linking to transplant matching record. PK = Allocation_Details(Allc_Match_Registry), FK to Transplant_Matching.", + "organ_transplant|allocation_details|allc_seq_num": "SMALLINT. Sequence number in allocation ranking. Example: 24.", + "organ_transplant|allocation_details|allc_region": "TEXT. Geographic region for organs allocation. Possible values: Region_1, Region_10, Region_2, Region_3, Region_4, Region_5, Region_6, Region_7, Region_8, Region_9.", + "organ_transplant|allocation_details|allc_pol_vers": "TEXT. Allocation policy version used. Example: v2.7.", + "organ_transplant|allocation_details|donor_ref_reg": "TEXT. 
Reference to donor demographics record. FK to Demographics.", + "organ_transplant|allocation_details|recip_ref_reg": "TEXT. Reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|logistics|log_match_registry": "TEXT. Primary key linking to transplant matching record. PK = Logistics(Log_Match_Registry), FK to Transplant_Matching.", + "organ_transplant|logistics|trans_method": "TEXT. Transportation method for organs delivery. Possible values: Charter Air, Commercial Air, Ground, Helicopter.", + "organ_transplant|logistics|don_ref_reg": "TEXT. Reference to donor demographics record. FK to Demographics.", + "organ_transplant|logistics|rec_ref_reg": "TEXT. Reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|administrative_and_review|adm_rev_registry": "TEXT. Primary key linking to transplant matching record. PK = Administrative_and_Review(Adm_Rev_Registry), FK to Transplant_Matching.", + "organ_transplant|administrative_and_review|exp_rev_stat_val": "TEXT. Expert review status value. **NULL means expert review is pending or not required.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: RejECted.", + "organ_transplant|administrative_and_review|exp_rev_notes": "TEXT. Expert review comments and notes. **NULL means expert review comments are not provided or review is incomplete.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: accEptABlE mAtCH.", + "organ_transplant|administrative_and_review|ec_appro_val": "TEXT. Ethics committee approval status. **NULL means ethics committee approval is pending or not applicable.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: approved.", + "organ_transplant|administrative_and_review|reg_comp_val": "TEXT. Regulatory compliance status assessment. **NULL means regulatory compliance review is pending or incomplete.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: under review.", + "organ_transplant|administrative_and_review|docu_stat_val": "TEXT. Documentation status completeness assessment. **NULL means documentation status review is incomplete or pending.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: InCoMpLeTe.", + "organ_transplant|administrative_and_review|cons_stat_val": "TEXT. Consent status from patient and family. **NULL means consent documentation is incomplete or pending.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: OBTAINED.", + "organ_transplant|administrative_and_review|fin_clear_val": "TEXT. Financial clearance status for transplant procedure. **NULL means financial clearance is pending or under review.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: rejected.", + "organ_transplant|administrative_and_review|ins_appro_val": "TEXT. Insurance approval status for transplant coverage. **NULL means insurance approval is pending or under review.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: APPROVED.", + "organ_transplant|administrative_and_review|coord_ref": "TEXT. Transplant coordinator identifier reference. Example: C7827.", + "organ_transplant|administrative_and_review|surge_ref": "TEXT. Surgeon identifier reference. 
Example: S8696.", + "organ_transplant|administrative_and_review|tx_cen_code": "TEXT. Transplant center identification code. Example: TC594.", + "organ_transplant|administrative_and_review|rec_cen_code": "TEXT. Recovery center identification code. Example: RC386.", + "organ_transplant|administrative_and_review|lab_ref": "TEXT. Laboratory identifier reference. Example: L445.", + "organ_transplant|administrative_and_review|adm_don_ref": "TEXT. Administrative reference to donor record. FK to Demographics.", + "organ_transplant|administrative_and_review|adm_rec_ref": "TEXT. Administrative reference to recipient record. FK to Recipients_Demographics.", + "organ_transplant|data_source_and_quality|q_match_registry": "TEXT. Primary key linking to transplant matching record. PK = Data_Source_and_Quality(Q_Match_Registry), FK to Transplant_Matching.", + "organ_transplant|data_source_and_quality|data_src_val": "TEXT. Data source identification and origin. **NULL means data source information is not documented or unknown.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: uNoS.", + "organ_transplant|data_source_and_quality|dq_score_val": "REAL. Data quality assessment score. Example: 0.966.", + "organ_transplant|data_source_and_quality|dc_score_val": "REAL. Data completeness assessment score. Example: 0.78.", + "organ_transplant|data_source_and_quality|verif_stat_val": "TEXT. Data verification status assessment. **NULL means data verification is pending or not performed.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).** Example: verified.", + "organ_transplant|data_source_and_quality|last_up_ts": "TIMESTAMP. Timestamp of last data update. **DATA NOISE: Date format converted to yyyy/mm/dd.** Possible values: 2025/02/19.", + "organ_transplant|data_source_and_quality|next_rev_dt": "DATE. Next scheduled review date for data validation. **DATA NOISE: Date format converted to dd/mm/yyyy.** Example: 04/03/2025.", + "organ_transplant|data_source_and_quality|q_don_ref": "TEXT. Quality reference to donor demographics record. FK to Demographics.", + "organ_transplant|data_source_and_quality|q_rec_ref": "TEXT. Quality reference to recipient demographics record. FK to Recipients_Demographics.", + "organ_transplant|medical_history|viralstatinfo": { + "column_meaning": "JSONB column. Consolidates all viral infection status data including CMV, EBV, HBV, HCV, and HIV test results for comprehensive infectious disease screening.", + "fields_meaning": { + "Cmv_State": "TEXT. Donor's cytomegalovirus infection status. Possible values: Negative, Positive.", + "Ebv_State": "TEXT. Donor's Epstein-Barr virus infection status. Possible values: Negative, Positive.", + "Hbv_State": "TEXT. Donor's hepatitis B virus infection status. Possible values: Negative, Positive.", + "Hcv_State": "TEXT. Donor's hepatitis C virus infection status. Possible values: Negative, Positive.", + "Hiv_State": "TEXT. Donor's human immunodeficiency virus infection status. Possible values: Negative, Positive." + } + }, + "organ_transplant|function_and_recovery|organfuncassess": { + "column_meaning": "JSONB column. Groups organ function assessment data including liver, cardiac, and pulmonary function evaluations for multi-organ status review.", + "fields_meaning": { + "Liv_Func": "TEXT. Assessment of donor's liver function status. Possible values: Mild, Moderate, Normal, Severe.", + "Card_Func": "TEXT. Assessment of donor's cardiac function status. 
Possible values: Mild, Moderate, Normal, Severe.", + "Pulm_Func": "TEXT. Assessment of donor's pulmonary function status. Possible values: Mild, Moderate, Normal, Severe." + } + }, + "organ_transplant|recipients_immunology|hlaprofile": { + "column_meaning": "JSONB column. Stores complete HLA typing profile including A, B, DR, and DQ values for immunological compatibility matching.", + "fields_meaning": { + "Hla_A_Val": "REAL. Recipient's HLA-A antigen typing value for compatibility matching. Example: 89,11.", + "Hla_B_Val": "REAL. Recipient's HLA-B antigen typing value for compatibility matching. Example: 22,60.", + "Hla_Dr_Val": "REAL. Recipient's HLA-DR antigen typing value for compatibility matching. Example: 59,9.", + "Hla_Dq_Val": "REAL. Recipient's HLA-DQ antigen typing value for compatibility matching. Example: 8,1." + } + }, + "organ_transplant|compatibility_metrics|compatscores": { + "column_meaning": "JSONB column. Aggregates compatibility scoring metrics including HLA, size, and age compatibility assessments for donor-recipient matching.", + "fields_meaning": { + "Hla_Score": "REAL. HLA compatibility score based on antigen matching. Example: 0.522.", + "Size_Score": "REAL. Size compatibility score based on donor-recipient measurements. Example: 0.228.", + "Age_Score": "REAL. Age compatibility score for optimal matching. Example: 0.327." + } + }, + "organ_transplant|risk_evaluation|riskmetrics": { + "column_meaning": "JSONB column. Combines multiple risk assessment scores including immunological, infection, rejection, complication, readmission, and mortality risks.", + "fields_meaning": { + "Immun_Risk": "REAL. Immunological risk assessment score. Example: 0.607.", + "Infect_Risk": "REAL. Infection risk assessment score. Example: 0.48.", + "Reject_Risk": "REAL. Rejection risk assessment score. Example: 0.527.", + "Cmpl_Risk": "REAL. Complication risk assessment score. Example: 0.349.", + "Readmit_Risk": "REAL. Readmission risk assessment score. Example: 0.747.", + "Mort_Risk": "REAL. Mortality risk assessment score. Example: 0.674." + } + }, + "organ_transplant|demographics|physicalstats": { + "column_meaning": "JSONB column. Groups physical measurements and characteristics including height, weight, BMI, and demographic information for donor assessment.", + "fields_meaning": { + "Height_Cm": "SMALLINT. Donor's height measurement in centimeters. Example: 171.", + "Weight_Kg": "SMALLINT. Donor's weight measurement in kilograms. Example: 55.", + "Bmi_Value": "REAL. Donor's calculated body mass index value. Example: 18.8.", + "Gender_Type": "TEXT. Donor's gender classification code. Possible values: F, M.", + "Ethnicity": "TEXT. Donor's ethnic or racial background classification. Possible values: African, Asian, Caucasian, Hispanic, Other." + } + } +} \ No newline at end of file diff --git a/organ_transplant/organ_transplant_kb.jsonl b/organ_transplant/organ_transplant_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..b1c6ae6f7ed015a9e5b54719399ae130d86e76a3 --- /dev/null +++ b/organ_transplant/organ_transplant_kb.jsonl @@ -0,0 +1,55 @@ +{"id": 0, "knowledge": "Body Mass Index (BMI)", "description": "Calculates a person's body mass index from their weight and height.", "definition": "BMI is calculated as weight in kilograms divided by the square of height in meters. 
Formula: $BMI = \\frac{weight_{kg}}{height_m^2}$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Donor-Recipient Age Difference", "description": "Calculates the absolute age difference between an organ donor and a recipient.", "definition": "The absolute difference between the recipient's age and the donor's age. Formula: $|Age_{recipient} - Age_{donor}|$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Total Ischemia Time", "description": "Measures the total time an organ is without blood supply, from recovery to transplant.", "definition": "The sum of the time from organ recovery to preservation and the time from preservation to transplant. Formula: $T_{ischemia} = T_{cold} + T_{warm}$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Estimated Glomerular Filtration Rate (eGFR)", "description": "Estimates the kidney's filtration capacity, a key indicator of renal function.", "definition": "A calculated rate based on serum creatinine level, age, and gender. A simplified estimation formula is: $eGFR = 141 \\times (\\frac{Cr}{0.9})^{-0.411} \\times 0.993^{Age}$ (for males, where Cr is serum creatinine). The actual formula can be more complex.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "HLA Mismatch Score", "description": "Quantifies the degree of immunological difference between a donor and recipient based on Human Leukocyte Antigens (HLA).", "definition": "A count of the mismatched HLA antigens (A, B, DR) between the donor and recipient. Formula: $S_{mismatch} = \\sum_{i \\in \\{A, B, DR\\}} (HLA_i^{donor} \\neq HLA_i^{recipient})$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Recipient Wait Time Ratio", "description": "Calculates the ratio of a recipient's waiting time to the average waiting time for their specific organ and blood type.", "definition": "The recipient's wait time in days converted to years. Formula: $R_{wait} = \\frac{T_{recipient_wait_days}}{365.0}$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Size Compatibility Score", "description": "A score that assesses the physical size match between a donor and recipient, primarily using Body Mass Index (BMI).", "definition": "A score derived from the ratio of donor and recipient BMI. Formula: $S_{size} = 1 - |\\frac{BMI_{donor}}{BMI_{recipient}} - 1|$.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 7, "knowledge": "Age Compatibility Score", "description": "A score evaluating the suitability of a donor's age relative to the recipient's, considering the Donor-Recipient Age Difference.", "definition": "A normalized score based on the Donor-Recipient Age Difference. Formula: $S_{age} = e^{-0.03 \\times |Age_{recipient} - Age_{donor}|}$, using a standard scaling factor.", "type": "calculation_knowledge", "children_knowledge": [1]} +{"id": 8, "knowledge": "Logistical Feasibility Index", "description": "Calculates an index representing the logistical viability of a transplant, considering distance and expected organ preservation time.", "definition": "A score combining geographic distance and Total Ischemia Time. 
Formula: $I_{logistics} = (\\frac{700}{Distance_{km}}) + (\\frac{300}{T_{ischemia\\_mins}})$", "type": "calculation_knowledge", "children_knowledge": [2]} +{"id": 9, "knowledge": "Renal Function Score", "description": "A composite score to evaluate a donor's kidney health based on key indicators.", "definition": "A score calculated from the donor's Estimated Glomerular Filtration Rate (eGFR) and serum creatinine (Cr) level. Formula: $S_{renal} = (1.0 \\times eGFR) - (10.0 \\times Cr)$.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 10, "knowledge": "Immunological Compatibility Score", "description": "A comprehensive score assessing the immunological match, factoring in blood type and HLA differences.", "definition": "A weighted score based on ABO blood type compatibility status and the HLA Mismatch Score. Formula: $S_{immune} = 0.6 \\times I_{ABO} + 0.4 \\times (1 - \\frac{S_{mismatch}}{6})$, where $I_{ABO}$ is 1 for compatible, 0 otherwise.", "type": "calculation_knowledge", "children_knowledge": [4, 20]} +{"id": 11, "knowledge": "Quality-Adjusted Life Year (QALY)", "description": "A measure of disease burden, including both the quality and the quantity of life lived.", "definition": "Calculated by multiplying the years of life by a quality-of-life score. Formula: $QALY = Years_{life} \\times Quality_{score}$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Cost-Effectiveness Ratio (CER)", "description": "Calculates the cost per unit of health outcome, often using Quality-Adjusted Life Year (QALY).", "definition": "The ratio of the net cost of an intervention to its net health gain. Formula: $CER = \\frac{Cost_{net}}{QALY_{gained}}$", "type": "calculation_knowledge", "children_knowledge": [11]} +{"id": 13, "knowledge": "Expected Graft Survival (EGS) Score", "description": "Predicts the probability of a transplanted organ functioning successfully over a specific period.", "definition": "A predictive score based on factors like the Immunological Compatibility Score and donor age. Formula: $EGS = \\frac{1}{1 + e^{-(-0.5 + 1.5 S_{immune} - 0.02 Age_{donor})}}$", "type": "calculation_knowledge", "children_knowledge": [10]} +{"id": 14, "knowledge": "Patient Urgency Score", "description": "A score that quantifies the urgency of a recipient's need for a transplant.", "definition": "A composite score based on the recipient's medical urgency status and their Recipient Wait Time Ratio. Formula: $S_{urgency} = 0.7 \\times Status_{medical} + 0.3 \\times R_{wait}$", "type": "calculation_knowledge", "children_knowledge": [5, 25]} +{"id": 15, "knowledge": "Center Performance Score", "description": "A score evaluating a transplant center's performance based on its experience and patient outcomes.", "definition": "A weighted average of a center's transplant volume and its post-transplant success rates. Formula: $S_{center} = 0.6 \\times V_{transplants} + 0.4 \\times R_{success}$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Surgical Risk Score", "description": "A score assessing the risk associated with the transplant surgery itself.", "definition": "A composite score derived from the recipient's comorbidities and the inherent complexity of the surgical procedure. 
Formula: $S_{surgical} = 0.6 \\times N_{comorbidities} + 0.4 \\times C_{procedure}$, where components are normalized.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 17, "knowledge": "Composite Allocation Score", "description": "A comprehensive score used to rank recipients for organ allocation, integrating multiple key factors.", "definition": "A weighted sum of the Patient Urgency Score, Immunological Compatibility Score, and Expected Graft Survival (EGS) Score. Formula: $S_{allocation} = 0.5 S_{urgency} + 0.25 S_{immune} + 0.25 EGS$", "type": "calculation_knowledge", "children_knowledge": [14, 10, 13]} +{"id": 18, "knowledge": "Resource Utilization Index", "description": "An index that estimates the expected consumption of medical resources for a transplant case.", "definition": "A calculated index based on the Surgical Risk Score and expected length of hospital stay. Formula: $I_{resource} = S_{surgical} \\times Days_{stay}$", "type": "calculation_knowledge", "children_knowledge": [16]} +{"id": 19, "knowledge": "Readmission Risk Index", "description": "A predictive index for the likelihood of a patient being readmitted to the hospital post-transplant.", "definition": "A risk index calculated from the number of pre-existing medical conditions and the Surgical Risk Score. Formula: $I_{readmission} = 0.1 + (0.05 \\times N_{conditions}) + (0.2 \\times S_{surgical})$", "type": "calculation_knowledge", "children_knowledge": [16]} +{"id": 20, "knowledge": "ABO Blood Type Compatibility", "description": "Defines the rules for matching blood types between an organ donor and recipient to prevent hyperacute rejection.", "definition": "A fundamental matching rule where recipient blood type determines acceptable donor types. Type O is the universal donor; Type AB is the universal recipient. Type A can receive from A and O. Type B can receive from B and O.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Cold Ischemia", "description": "The period during which a harvested organ is kept cold to preserve it, starting from when its blood supply is cut off until it is transplanted.", "definition": "A state of organ preservation where metabolic activity is slowed by cooling, typically to around 4°C, to minimize tissue damage while outside the body. This is a critical factor in transplant success.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Human Leukocyte Antigen (HLA) Typing", "description": "The process of identifying specific HLA protein markers on a person's cells to match a donor with a recipient for transplantation.", "definition": "A genetic test that identifies a person's unique HLA markers. A closer HLA match between a donor and recipient reduces the risk of the recipient's immune system attacking, or rejecting, the transplanted organ.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Panel Reactive Antibody (PRA)", "description": "A test that measures the level of pre-existing antibodies in a recipient's blood against common HLA antigens, indicating their sensitization level.", "definition": "A measure of a recipient's sensitization to foreign HLA antigens, represented by the `pra_score`[cite: 36]. 
For matching purposes, a recipient with a `pra_score` of 80 or higher is defined as having a 'high PRA score', indicating a state of high immunological sensitization.", "type": "domain_knowledge", "children_knowledge": [22]} +{"id": 24, "knowledge": "Crossmatch Test", "description": "A final compatibility test performed just before transplant surgery to ensure the recipient has no pre-formed antibodies against the specific donor's tissues.", "definition": "A laboratory test that directly mixes the recipient's serum with the donor's lymphocytes. A 'positive' Crossmatch Test indicates the presence of donor-specific antibodies and is a contraindication to transplantation, as it predicts immediate organ rejection.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 25, "knowledge": "Medical Urgency Status", "description": "A classification system used to prioritize patients on the transplant waiting list based on their immediate need and risk of mortality.", "definition": "A tiered system that reflects how critically ill a patient is. For calculation purposes, statuses are mapped to numerical values: Status 1A is assigned a value of 5, Status 1B is 4, Status 2 is 3, and Status 3 is 2. All other statuses are assigned a value of 1.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "Donor-Specific Antibodies (DSA)", "description": "Antibodies in a recipient's blood that are specifically directed against a particular donor's HLA antigens.", "definition": "The presence of these antibodies is identified by a positive Crossmatch Test. They pose a significant risk for antibody-mediated rejection of the transplanted organ.", "type": "domain_knowledge", "children_knowledge": [22, 24]} +{"id": 27, "knowledge": "High-Risk Donor", "description": "An organ donor who has a history of certain medical conditions or behaviors that may increase the risk of disease transmission or poorer graft function in the recipient.", "definition": "A donor category defined by criteria such as a history of communicable diseases (e.g., Hepatitis C, HIV), malignancy, or high-risk social behaviors (e.g., intravenous drug use). The use of organs from a High-Risk Donor requires careful evaluation and informed consent.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Graft Survival", "description": "Refers to the period of time a transplanted organ continues to function properly within the recipient's body.", "definition": "A key outcome measure in transplantation. Successful Graft Survival means the organ is performing its intended biological functions without signs of rejection or failure.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 29, "knowledge": "Immunological Sensitization", "description": "A state where a recipient has a high level of pre-formed antibodies, making it difficult to find a compatible donor.", "definition": "A clinical state where a recipient is considered highly sensitized to potential donors. This condition is formally defined as having a Panel Reactive Antibody (PRA) score of 80 or greater. 
This high level of sensitization significantly complicates the matching process by severely limiting the number of compatible donors.", "type": "domain_knowledge", "children_knowledge": [23]} +{"id": 30, "knowledge": "Static Cold Storage", "description": "The most common method of organ preservation, where the organ is flushed with a cold preservation solution and stored on ice.", "definition": "A preservation technique that relies on hypothermia to reduce the organ's metabolic rate and oxygen demand during transport, minimizing tissue damage during Cold Ischemia.", "type": "domain_knowledge", "children_knowledge": [21]} +{"id": 31, "knowledge": "Optimal Donor-Recipient Match", "description": "An ideal pairing of a donor and recipient that meets a strict set of criteria to maximize the probability of long-term transplant success.", "definition": "An ideal match defined by meeting three specific, stringent criteria simultaneously: 1) Full ABO Blood Type Compatibility. 2) An HLA Mismatch Score of exactly 0. 3) A Size Compatibility Score within the inclusive range of [0.9, 1.1]. A pairing that satisfies all three conditions is classified as an Optimal Donor-Recipient Match.", "type": "domain_knowledge", "children_knowledge": [20, 4, 6]} +{"id": 32, "knowledge": "Anoxia", "description": "A donor's cause of death due to complete oxygen deprivation, which can affect organ quality.", "definition": "A donor's cause of death resulting from a complete lack of oxygen. [cite_start]Within the database, this specific cause is recorded as 'Anoxia' in the donor's cause of death description (`don_co_desc` [cite: 15]). This factor is critical for assessing the viability of oxygen-sensitive organs, such as the heart and kidneys.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 33, "knowledge": "Comorbidity", "description": "The presence of one or more additional diseases or conditions co-occurring with a primary disease or condition.", "definition": "In transplantation, a recipient's pre-existing conditions (e.g., diabetes, heart disease) are considered comorbidities. They can increase the complexity and risk of the transplant procedure.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 34, "knowledge": "Allocation Policy", "description": "The specific versioned set of rules governing organ distribution, used to prioritize recipients.", "definition": "The precise set of rules used for organ distribution, identified by a unique version string in the `allc_pol_vers` field, such as 'v2.7'[cite: 73]. Each policy embodies a strict hierarchy of rules. 
As a formal example, a given policy stipulates that Medical Urgency Status is the primary sorting criterion; for recipients with an identical urgency status, the one with a lower HLA Mismatch Score receives higher priority.", "type": "domain_knowledge", "children_knowledge": [25, 4]} +{"id": 35, "knowledge": "Informed Consent", "description": "The process by which a patient, after understanding the risks and benefits, voluntarily agrees to a medical procedure.", "definition": "A critical ethical and legal requirement in transplantation where the recipient (or their proxy) must be fully informed about all aspects of the surgery, including the use of organs from a High-Risk Donor, before giving permission.", "type": "domain_knowledge", "children_knowledge": [27]} +{"id": 36, "knowledge": "Regional Allocation Priority", "description": "A core principle within transplant allocation that prioritizes local matches to enhance fairness and clinical outcomes.", "definition": "A core rule embedded within an Allocation Policy that gives priority to recipients located in the same geographic region as the donor. [cite_start]This principle is implemented by matching the donor's region with the recipient's `allc_region`[cite: 72]. The primary goals are to ensure equitable organ access for the local community and to improve medical outcomes by minimizing the Cold Ischemia time associated with long-distance transport.", "type": "domain_knowledge", "children_knowledge": [34, 21]} +{"id": 37, "knowledge": "Multi-Organ Transplant", "description": "A complex surgical procedure where a recipient receives two or more organs from a single donor.", "definition": "A transplant involving multiple organs (e.g., heart-lung, kidney-pancreas). Prioritization and allocation for these cases are highly complex, considering the significant need of the recipient and the impact on the organ pool for other patients.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 38, "knowledge": "Post-Transplant Monitoring", "description": "The ongoing medical surveillance of a recipient after receiving a transplant to ensure graft survival and manage complications.", "definition": "A comprehensive care plan that includes regular lab tests, imaging, and clinical assessments to monitor organ function, detect early signs of rejection, and manage immunosuppressive medications.", "type": "domain_knowledge", "children_knowledge": [28]} +{"id": 39, "knowledge": "Viral Infection Status", "description": "Screening of donors and recipients for key viral infections that can be transmitted or reactivated during transplantation.", "definition": "Testing for viruses such as Cytomegalovirus (CMV), Epstein-Barr Virus (EBV), and Hepatitis. The Viral Infection Status of both donor and recipient is critical for assessing infection risk and planning post-transplant prophylactic treatment.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 40, "knowledge": "Body Mass Index (BMI) Categories", "description": "Illustrates the standard weight status categories based on BMI values.", "definition": "BMI provides a general indicator of body fatness. A value < 18.5 is considered underweight, 18.5-24.9 is normal weight, 25.0-29.9 is overweight, and > 30.0 indicates obesity. 
This is a factor in Size Compatibility Score.", "type": "value_illustration", "children_knowledge": [6]} +{"id": 41, "knowledge": "Serum Creatinine Levels", "description": "Illustrates how serum creatinine levels reflect kidney function.", "definition": "A waste product from muscle metabolism, cleared by the kidneys. Normal values are typically 0.7-1.3 mg/dL. A high value, such as 2.5 mg/dL, suggests impaired kidney function, which is a key component of the Renal Function Score.", "type": "value_illustration", "children_knowledge": [9]} +{"id": 42, "knowledge": "Glomerular Filtration Rate (GFR) Values", "description": "Illustrates the stages of kidney function based on GFR values.", "definition": "GFR measures how well the kidneys are filtering blood. A GFR > 90 mL/min/1.73m² is considered normal. A GFR of 30-59 indicates moderate kidney disease, and a GFR < 15 signifies kidney failure.", "type": "value_illustration", "children_knowledge": -1} +{"id": 43, "knowledge": "Panel Reactive Antibody (PRA) Score Interpretation", "description": "Illustrates the meaning of different PRA score percentages.", "definition": "A PRA score reflects a recipient's sensitization level. A score of 0-10% indicates low sensitization, making it easier to find a compatible donor. A score > 80% indicates high Immunological Sensitization, meaning the patient is likely to be incompatible with over 80% of potential donors.", "type": "value_illustration", "children_knowledge": [29]} +{"id": 44, "knowledge": "HLA Mismatch Levels", "description": "Illustrates the significance of the number of HLA mismatches.", "definition": "The number of mismatches impacts rejection risk. A '0-mismatch' is a perfect HLA match and is ideal. A '6-mismatch' is a complete mismatch across the A, B, and DR loci, representing the highest immunological barrier outside of ABO or positive crossmatch issues.", "type": "value_illustration", "children_knowledge": [4]} +{"id": 45, "knowledge": "Medical Urgency Tiers", "description": "Illustrates the hierarchy of transplant urgency statuses.", "definition": "Patients are prioritized based on their risk of dying while on the waitlist. Status 1A is the highest urgency, reserved for critically ill patients in the ICU. Status 1B is a lower, but still urgent, category. Lower statuses (e.g., Status 2) are for more stable patients.", "type": "value_illustration", "children_knowledge": [25]} +{"id": 46, "knowledge": "Crossmatch Results", "description": "Illustrates the critical outcomes of a crossmatch test.", "definition": "A 'Negative' result means no pre-formed donor-specific antibodies were detected, and the transplant can proceed. A 'Positive' result indicates the presence of these antibodies, making transplant highly risky or contraindicated due to the high chance of hyperacute rejection.", "type": "value_illustration", "children_knowledge": [24]} +{"id": 47, "knowledge": "Cause of Death Impact", "description": "Illustrates how a donor's cause of death can affect organ quality.", "definition": "The mechanism of death influences organ viability. For example, death due to Anoxia (oxygen deprivation) can compromise the function of oxygen-sensitive organs like the heart and kidneys. 
In contrast, death from head trauma may leave abdominal organs in optimal condition.", "type": "value_illustration", "children_knowledge": [32]} +{"id": 48, "knowledge": "Cost-Effectiveness Thresholds", "description": "Illustrates typical thresholds for considering a medical intervention as cost-effective.", "definition": "A common benchmark for the Cost-Effectiveness Ratio (CER) in the US is $50,000 to $150,000 per Quality-Adjusted Life Year (QALY) gained. Interventions below this threshold are generally considered a good value.", "type": "value_illustration", "children_knowledge": [12]} +{"id": 49, "knowledge": "Liver Function Test (LFT) Values", "description": "Illustrates how LFT results can indicate liver health.", "definition": "Key markers like ALT and AST assess liver cell injury. Normal ranges are typically below 40-50 U/L. Elevated levels, such as an ALT of 200 U/L, can indicate significant liver inflammation or damage, affecting a donor organ's suitability.", "type": "value_illustration", "children_knowledge": -1} +{"id": 50, "knowledge": "Comprehensive Match Quality Score", "description": "A holistic meta-score that aggregates various sub-scores to represent the overall quality of a potential donor-recipient match.", "definition": "A weighted composite index integrating multiple factors. The score is calculated as: $S_{match} = 0.4 S_{immune} + 0.2 S_{age} - 0.3 S_{surgical} + 0.1 I_{logistics}$, using the Immunological Compatibility Score, Age Compatibility Score, Surgical Risk Score, and Logistical Feasibility Index.", "type": "calculation_knowledge", "children_knowledge": [10, 7, 16, 8]} +{"id": 51, "knowledge": "Antibody-Mediated Rejection (AMR) Risk Stratification", "description": "A system for categorizing a recipient's risk of developing antibody-mediated rejection based on their immunological profile.", "definition": "A risk classification system. For recipients who have a 'Positive' crossmatch result, the 'High Risk' category is defined by meeting two concurrent criteria: having a Panel Reactive Antibody (PRA) score of 80 or greater AND having a 'Positive' donor-specific antibody (DSA) status. Recipients not meeting these criteria are considered 'Standard Risk'.", "type": "domain_knowledge", "children_knowledge": [26, 23, 24]} +{"id": 52, "knowledge": "Net Health Benefit Score", "description": "Calculates the overall expected health gain for a recipient from a transplant, balancing survival benefits against procedural risks and quality of life.", "definition": "A score representing the projected net outcome, calculated by offsetting the anticipated Quality-Adjusted Life Year (QALY) gain with the risks. The formula is: $S_{NHB} = (EGS \\times QALY_{gain}) - (S_{surgical} \\times 0.2)$, where EGS is the Expected Graft Survival score.", "type": "calculation_knowledge", "children_knowledge": [13, 11, 16]} +{"id": 53, "knowledge": "Marginal Donor Acceptance Criteria", "description": "A set of clinical guidelines for evaluating and accepting organs from donors who do not meet ideal criteria but are not classified as standard High-Risk Donors.", "definition": "A nuanced decision framework applied to 'extended criteria' donors. These donors may exhibit factors like advanced age (significant Donor-Recipient Age Difference), borderline organ function (sub-optimal Renal Function Score), or certain comorbidities. 
Acceptance is based on a risk-benefit analysis, weighing the organ's imperfections against the recipient's urgent medical need and the scarcity of ideal organs.", "type": "domain_knowledge", "children_knowledge": [27, 9, 1]} +{"id": 54, "knowledge": "Immunosuppression Protocol Tiers", "description": "Illustrates how a patient's immunological risk profile dictates the intensity of the required post-transplant immunosuppressive drug regimen.", "definition": "A tiered therapeutic strategy based on risk. A patient with a high Immunological Compatibility Score and low AMR risk (as per Antibody-Mediated Rejection (AMR) Risk Stratification) might receive a standard 'Maintenance Therapy' (e.g., a three-drug cocktail). Conversely, a patient with a low compatibility score and high AMR risk would require aggressive 'Induction Therapy' (e.g., using potent biologic agents) followed by intensive maintenance.", "type": "domain_knowledge", "children_knowledge": [51, 10]} diff --git a/organ_transplant/organ_transplant_schema.txt b/organ_transplant/organ_transplant_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..c7d821dc32d80af53eb1e07565f6a66c6840de2d --- /dev/null +++ b/organ_transplant/organ_transplant_schema.txt @@ -0,0 +1,332 @@ +CREATE TABLE "demographics" ( +contrib_registry text NOT NULL, +age_count text NULL, +blood_class text NULL, +nation_ref text NULL, +physicalstats jsonb NULL, + PRIMARY KEY (contrib_registry) +); + +First 3 rows: +contrib_registry age_count blood_class nation_ref physicalstats +------------------ --------------------------- -------------------------------- ------------ ---------------------------------------------------------------------------------------------------- +D812743 57 years, mature donor B- rare type (specific matching) Seychelles {'Bmi_Value': 31.6, 'Ethnicity': 'Caucasian', 'Height_Cm': 156, 'Weight_Kg': 77, 'Gender_Type': 'M'} +D120007 51 years, mature donor AB- rare negative type El Salvador {'Bmi_Value': 23.9, 'Ethnicity': 'Caucasian', 'Height_Cm': 183, 'Weight_Kg': 80, 'Gender_Type': 'M'} +D685621 29 years, young adult donor B+ moderate compatibility Oman {'Bmi_Value': 21.4, 'Ethnicity': 'Other', 'Height_Cm': 159, 'Weight_Kg': 54, 'Gender_Type': 'M'} +... + + +CREATE TABLE "recipients_immunology" ( +immu_recip_registry text NOT NULL, +pra_score real NULL, +dsa_state text NULL, +cross_result text NULL, +cmv_state text NULL, +ebv_state text NULL, +func_status text NULL, +life_support text NULL, +hlaprofile jsonb NULL, + PRIMARY KEY (immu_recip_registry), + FOREIGN KEY (immu_recip_registry) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +immu_recip_registry pra_score dsa_state cross_result cmv_state ebv_state func_status life_support hlaprofile +--------------------- ----------- ----------- -------------- ----------- ----------- ------------------- -------------- ---------------------------------------------------------------------- +R159571 61 Positive Positive Positive Negative Moderate Impairment VAD {'Hla_A_Val': 28, 'Hla_B_Val': 64, 'Hla_Dq_Val': 52, 'Hla_Dr_Val': 82} +R372719 93 Negative Pending Negative Negative Mild Impairment {'Hla_A_Val': 25, 'Hla_B_Val': 37, 'Hla_Dq_Val': 11, 'Hla_Dr_Val': 9} +R279115 7 Negative Pending Negative Positive Severe Impairment ECMO {'Hla_A_Val': 24, 'Hla_B_Val': 42, 'Hla_Dq_Val': 55, 'Hla_Dr_Val': 63} +... 
+ + +CREATE TABLE "administrative_and_review" ( +adm_rev_registry text NOT NULL, +exp_rev_stat_val text NULL, +exp_rev_notes text NULL, +ec_appro_val text NULL, +reg_comp_val text NULL, +docu_stat_val text NULL, +cons_stat_val text NULL, +fin_clear_val text NULL, +ins_appro_val text NULL, +coord_ref text NULL, +surge_ref text NULL, +tx_cen_code text NULL, +rec_cen_code text NULL, +lab_ref text NULL, +adm_don_ref text NULL, +adm_rec_ref text NULL, + PRIMARY KEY (adm_rev_registry), + FOREIGN KEY (adm_don_ref) REFERENCES demographics(contrib_registry), + FOREIGN KEY (adm_rec_ref) REFERENCES recipients_demographics(recip_registry), + FOREIGN KEY (adm_rev_registry) REFERENCES transplant_matching(match_rec_registry) +); + +First 3 rows: +adm_rev_registry exp_rev_stat_val exp_rev_notes ec_appro_val reg_comp_val docu_stat_val cons_stat_val fin_clear_val ins_appro_val coord_ref surge_ref tx_cen_code rec_cen_code lab_ref adm_don_ref adm_rec_ref +------------------ ------------------ ------------------- -------------- -------------- --------------- --------------- --------------- --------------- ----------- ----------- ------------- -------------- --------- ------------- ------------- +TM113504 RejECted accEptABlE mAtCH under review InCoMpLeTe OBTAINED rejected APPROVED C7827 S8696 TC594 RC386 L445 D812743 R947153 +TM533084 rejected approved Under Review CoMpLeTe Obtained REJECTED APPROVeD C7211 S1636 TC810 RC832 L137 D120007 R159571 +TM464099 APPROVED Requires discussion approved CoMpLiAnT COMPLETE Refused Approved pending C8374 S3232 TC698 RC615 L412 D120007 R159571 +... + + +CREATE TABLE "recipients_demographics" ( +recip_registry text NOT NULL, +age_count smallint NULL, +gend_type text NULL, +blood_class text NULL, +ht_cm smallint NULL, +wt_kg bigint NULL, +bmi_val real NULL, +ethn_grp text NULL, + PRIMARY KEY (recip_registry) +); + +First 3 rows: +recip_registry age_count gend_type blood_class ht_cm wt_kg bmi_val ethn_grp +---------------- ----------- ----------- ------------- ------- ------- --------- ---------- +R947153 57 M AB+ 171 55 18.8 African +R159571 23 M O+ 153 119 50.8 African +R372719 39 F O- 158 119 47.7 African +... + + +CREATE TABLE "transplant_matching" ( +match_rec_registry text NOT NULL, +created_ts text NULL, +donor_ref_reg text NULL, +recip_ref_reg text NULL, +org_spec text NULL, +match_status text NULL, +score_val real NULL, +level_val text NULL, +alg_vers text NULL, +run_registry text NULL, +match_ts text NULL, +dur_sec bigint NULL, +conf_val real NULL, +dss_val real NULL, + PRIMARY KEY (match_rec_registry), + FOREIGN KEY (donor_ref_reg) REFERENCES demographics(contrib_registry), + FOREIGN KEY (recip_ref_reg) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +match_rec_registry created_ts donor_ref_reg recip_ref_reg org_spec match_status score_val level_val alg_vers run_registry match_ts dur_sec conf_val dss_val +-------------------- -------------------------- --------------- --------------- ---------- -------------- ----------- ----------- ---------- -------------- ------------------- --------- ---------- --------- +TM113504 2025-02-19 08:31:22.330375 D812743 R947153 Lung Failed 0.005 Marginal v2.5 MR324767 2025.02.19 08:31:22 128 0.056 0.68 +TM533084 2025-02-19 08:31:22.330375 D120007 R159571 Kidney Completed 0.827 Optimal v2.1 MR667283 2025.02.19 08:31:22 2 0.2 0.699 +TM464099 2025-02-19 08:31:22.330375 D120007 R159571 Kidney Completed 0.068 Acceptable v3.2 MR644157 2025.02.19 08:31:22 78 0.8 0.976 +... 
+ + +CREATE TABLE "data_source_and_quality" ( +q_match_registry text NOT NULL, +data_src_val text NULL, +dq_score_val real NULL, +dc_score_val real NULL, +verif_stat_val text NULL, +last_up_ts text NULL, +next_rev_dt text NULL, +q_don_ref text NULL, +q_rec_ref text NULL, + PRIMARY KEY (q_match_registry), + FOREIGN KEY (q_don_ref) REFERENCES demographics(contrib_registry), + FOREIGN KEY (q_match_registry) REFERENCES transplant_matching(match_rec_registry), + FOREIGN KEY (q_rec_ref) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +q_match_registry data_src_val dq_score_val dc_score_val verif_stat_val last_up_ts next_rev_dt q_don_ref q_rec_ref +------------------ --------------- -------------- -------------- ---------------- ------------ ------------- ----------- ----------- +TM113504 uNoS 0.966 0.78 verified 2025/02/19 04/03/2025 D812743 R947153 +TM533084 Manual Entry 0.556 0.438 FAILED 2025/02/19 27/02/2025 D120007 R159571 +TM464099 cENTER daTabaSE 0.196 0.43 verified 2025/02/19 17/03/2025 D120007 R159571 +... + + +CREATE TABLE "medical_history" ( +contrib_med_registry text NOT NULL, +med_history text NULL, +smk_cond text NULL, +alc_cond text NULL, +drug_cond text NULL, +viralstatinfo jsonb NULL, + PRIMARY KEY (contrib_med_registry), + FOREIGN KEY (contrib_med_registry) REFERENCES demographics(contrib_registry) +); + +First 3 rows: +contrib_med_registry med_history smk_cond alc_cond drug_cond viralstatinfo +---------------------- ------------- ----------------------------------- --------------------------------------------- --------------------------------- ----------------------------------------------------------------------------------------------------------------------------- +D812743 Former smoker, recovery documented Moderate use, liver function normal Current use, high risk assessment {'Cmv_State': 'Negative', 'Ebv_State': 'Negative', 'Hbv_State': 'Negative', 'Hcv_State': 'Negative', 'Hiv_State': 'Negative'} +D120007 Former smoker, recovery documented Current use, high risk assessment {'Cmv_State': 'Negative', 'Ebv_State': 'Positive', 'Hbv_State': 'Negative', 'Hcv_State': 'Negative', 'Hiv_State': 'Negative'} +D685621 Heart Disease Current smoker, requires assessment Heavy use, liver function requires evaluation {'Cmv_State': 'Negative', 'Ebv_State': 'Negative', 'Hbv_State': 'Positive', 'Hcv_State': 'Positive', 'Hiv_State': 'Negative'} +... 
+ + +CREATE TABLE "compatibility_metrics" ( +match_comp_registry text NOT NULL, +hla_mis_count bigint NULL, +blood_compat text NULL, +distance text NULL, +exp_isch_time text NULL, +exp_time text NULL, +cost_est text NULL, +donor_ref_reg text NULL, +recip_ref_reg text NULL, +compatscores jsonb NULL, + PRIMARY KEY (match_comp_registry), + FOREIGN KEY (donor_ref_reg) REFERENCES demographics(contrib_registry), + FOREIGN KEY (match_comp_registry) REFERENCES transplant_matching(match_rec_registry), + FOREIGN KEY (recip_ref_reg) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +match_comp_registry hla_mis_count blood_compat distance exp_isch_time exp_time cost_est donor_ref_reg recip_ref_reg compatscores +--------------------- --------------- -------------- ---------- -------------------- --------------------- ----------- --------------- --------------- ------------------------------------------------------------- +TM113504 3 incompatible 1815 miles 539 minutes ischemia 142 minutes transport US$5210.53 D812743 R947153 {'Age_Score': 0.327, 'Hla_Score': 0.522, 'Size_Score': 0.228} +TM533084 5 IncOmPATIblE 673 miles 415 minutes ischemia 141 minutes transport US$47952.51 D120007 R159571 {'Age_Score': 0.89, 'Hla_Score': 0.024, 'Size_Score': 0.928} +TM464099 2 1524 miles 381 minutes ischemia 176 minutes transport US$32836.57 D120007 R159571 {'Age_Score': 0.508, 'Hla_Score': 0.234, 'Size_Score': 0.911} +... + + +CREATE TABLE "hla_info" ( +immu_don_registry text NOT NULL, +hla_a_val real NULL, +hla_b_val real NULL, +hla_dr_val real NULL, +hla_dq_val real NULL, + PRIMARY KEY (immu_don_registry), + FOREIGN KEY (immu_don_registry) REFERENCES demographics(contrib_registry) +); + +First 3 rows: +immu_don_registry hla_a_val hla_b_val hla_dr_val hla_dq_val +------------------- ----------- ----------- ------------ ------------ +D812743 92 11 21 4 +D120007 97 52 71 56 +D887241 7 36 81 69 +... 
+ + +CREATE TABLE "risk_evaluation" ( +risk_eval_registry text NOT NULL, +org_qual_val real NULL, +egs_val real NULL, +eps_val real NULL, +surg_cmpl_val real NULL, +surg_risk_val real NULL, +res_avail_val real NULL, +cntr_exp_val real NULL, +cntr_vol_val real NULL, +cntr_out_val real NULL, +qol_val real NULL, +cost_eff_val real NULL, +alloc_prio_val real NULL, +donor_ref_reg text NULL, +recip_ref_reg text NULL, +riskmetrics jsonb NULL, +cost_qaly text NULL, +resource_consumption text NULL, +staff text NULL, + PRIMARY KEY (risk_eval_registry), + FOREIGN KEY (donor_ref_reg) REFERENCES demographics(contrib_registry), + FOREIGN KEY (recip_ref_reg) REFERENCES recipients_demographics(recip_registry), + FOREIGN KEY (risk_eval_registry) REFERENCES transplant_matching(match_rec_registry) +); + +First 3 rows: +risk_eval_registry org_qual_val egs_val eps_val surg_cmpl_val surg_risk_val res_avail_val cntr_exp_val cntr_vol_val cntr_out_val qol_val cost_eff_val alloc_prio_val donor_ref_reg recip_ref_reg riskmetrics cost_qaly resource_consumption staff +-------------------- -------------- --------- --------- --------------- --------------- --------------- -------------- -------------- -------------- --------- -------------- ---------------- --------------- --------------- -------------------------------------------------------------------------------------------------------------------------------- -------------- ---------------------------- --------------------------- +TM113504 0.964 0.114 0.352 0.747 nan 0.567 0.215 0.919 0.077 0.185 0.037 0.917 D812743 R947153 {'Cmpl_Risk': 0.349, 'Mort_Risk': 0.674, 'Immun_Risk': 0.607, 'Infect_Risk': 0.48, 'Reject_Risk': 0.527, 'Readmit_Risk': 0.747} 53700 USD/QALY 11.339999437332153 units/day 15.46999979019165 hrs/case +TM533084 0.709 0.04 0.033 0.379 0.33 0.832 0.814 0.056 0.665 0.523 0.015 0.798 D120007 R159571 {'Cmpl_Risk': 0.909, 'Mort_Risk': 0.536, 'Immun_Risk': 0.82, 'Infect_Risk': 0.668, 'Reject_Risk': 0.461, 'Readmit_Risk': 0.491} 51500 USD/QALY 16.640000343322754 units/day 11.790000081062317 hrs/case +TM464099 0.971 0.079 0.158 0.767 0.321 0.474 0.464 0.969 0.948 0.702 0.251 0.6 D120007 R159571 {'Cmpl_Risk': 0.177, 'Mort_Risk': 0.012, 'Immun_Risk': 0.282, 'Infect_Risk': 0.377, 'Reject_Risk': 0.546, 'Readmit_Risk': 0.669} 75100 USD/QALY 9.480000138282776 units/day 15.670000195503235 hrs/case +... 
+ + +CREATE TABLE "function_and_recovery" ( +recov_don_registry text NOT NULL, +don_crtn_val text NULL, +don_gfr_val text NULL, +don_co_desc text NULL, +org_recov_dt text NULL, +org_presv_meth text NULL, +org_isch_time text NULL, +organfuncassess jsonb NULL, + PRIMARY KEY (recov_don_registry), + FOREIGN KEY (recov_don_registry) REFERENCES demographics(contrib_registry) +); + +First 3 rows: +recov_don_registry don_crtn_val don_gfr_val don_co_desc org_recov_dt org_presv_meth org_isch_time organfuncassess +-------------------- -------------- ------------------- --------------------------------------------------- -------------- ------------------------------------------------------- --------------- ---------------------------------------------------------------------- +D126113 1 mg/dL 73 mL/min/1.73m² Anoxia, oxygen deprivation may affect organ quality 02-18-2025 HKT Static cold storage, standard preservation method 659 mins {'Liv_Func': 'Mild', 'Card_Func': 'Mild', 'Pulm_Func': 'Mild'} +D812743 1.62 mg/dL 103.4 mL/min/1.73m² Anoxia, oxygen deprivation may affect organ quality 02-18-2025 HKT Normothermic perfusion, advanced preservation technique 702 mins {'Liv_Func': 'Normal', 'Card_Func': 'Moderate', 'Pulm_Func': 'Severe'} +D120007 1.08 mg/dL 78.5 mL/min/1.73m² Trauma, sudden death preserves organ viability 02-18-2025 HKT Normothermic perfusion, advanced preservation technique 331 mins {'Liv_Func': 'Mild', 'Card_Func': 'Moderate', 'Pulm_Func': 'Mild'} +... + + +CREATE TABLE "allocation_details" ( +allc_match_registry text NOT NULL, +allc_seq_num smallint NULL, +allc_region text NULL, +allc_pol_vers text NULL, +donor_ref_reg text NULL, +recip_ref_reg text NULL, + PRIMARY KEY (allc_match_registry), + FOREIGN KEY (allc_match_registry) REFERENCES transplant_matching(match_rec_registry), + FOREIGN KEY (donor_ref_reg) REFERENCES demographics(contrib_registry), + FOREIGN KEY (recip_ref_reg) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +allc_match_registry allc_seq_num allc_region allc_pol_vers donor_ref_reg recip_ref_reg +--------------------- -------------- ------------- --------------- --------------- --------------- +TM113504 24 Region_9 v2.7 D812743 R947153 +TM464099 61 Region_9 v3.8 D120007 R159571 +TM409527 31 Region_5 v1.9 D812743 R372719 +... 
+ + +CREATE TABLE "clinical" ( +clin_recip_registry text NOT NULL, +diag_detail text NULL, +wait_time text NULL, +med_urgency text NULL, +prev_tx_count text NULL, +dial_status text NULL, +dial_duration text NULL, +comorbid_detail text NULL, + PRIMARY KEY (clin_recip_registry), + FOREIGN KEY (clin_recip_registry) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +clin_recip_registry diag_detail wait_time med_urgency prev_tx_count dial_status dial_duration comorbid_detail +--------------------- --------------------------------------------- ----------- ------------- ----------------------------------------------- ------------------------------------------------ --------------- ----------------------------------- +R947153 End-stage disease, urgent transplant need 104 days Status 1B 2 previous transplants, high-risk re-transplant 45 months Hypertension,Heart Disease,Diabetes +R159571 Congenital condition, lifelong treatment need 837 days Status 1A 1 previous transplant, re-transplant candidate Hemodialysis, standard renal replacement therapy 29 months Diabetes,COPD +R372719 End-stage disease, urgent transplant need 529 days 2 1 previous transplant, re-transplant candidate Hemodialysis, standard renal replacement therapy 6 months Diabetes +... + + +CREATE TABLE "logistics" ( +log_match_registry text NOT NULL, +trans_method text NULL, +don_ref_reg text NULL, +rec_ref_reg text NULL, + PRIMARY KEY (log_match_registry), + FOREIGN KEY (don_ref_reg) REFERENCES demographics(contrib_registry), + FOREIGN KEY (log_match_registry) REFERENCES transplant_matching(match_rec_registry), + FOREIGN KEY (rec_ref_reg) REFERENCES recipients_demographics(recip_registry) +); + +First 3 rows: +log_match_registry trans_method don_ref_reg rec_ref_reg +-------------------- -------------- ------------- ------------- +TM113504 Ground D812743 R947153 +TM533084 Ground D120007 R159571 +TM464099 Charter Air D120007 R159571 +... diff --git a/planets_data/planets_data_column_meaning_base.json b/planets_data/planets_data_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..f383932ec8ebdc71b07fa837650a21426f2ad8c6 --- /dev/null +++ b/planets_data/planets_data_column_meaning_base.json @@ -0,0 +1,102 @@ +{ + "planets_data|stars|stellarref": "A SERIAL primary key uniquely identifying each host star record in the database. Maps to 'rowid' in source data.", + "planets_data|stars|hostplname": "The name of the host star system derived from 'pl_hostname'. VARCHAR(50) NOT NULL UNIQUE constraint ensures no duplicate star names. Examples include '11 Com', 'HD 114783', 'Kepler-1649'.", + "planets_data|stars|stellardist": "Distance to the host star measured in parsecs from 'st_dist'. REAL accommodating distances from 3.21 to 8500 parsecs. Contains NULL when stellar distance measurements are unavailable or unreliable.", + "planets_data|stars|compcount": "Total number of confirmed planetary companions from 'pl_pnum'. INTEGER count of planets orbiting this host star, ranging from 1 to 8+ planets per system.", + "planets_data|planets|planetref": "A primary key INTEGER derived from 'rowid' uniquely identifying each planet record, ranging from 1 to 3372 in current dataset.", + "planets_data|planets|hostlink": "Foreign key INTEGER referencing stars(StellarRef), linking each planet to its host star system derived from hostname matching.", + "planets_data|planets|completter": "Planet designation from 'pl_letter' within its system. 
VARCHAR(10) with values: 'b', 'c', 'd', 'e', 'f', 'g', 'h' following IAU conventions where 'b' is typically the first discovered planet.", + "planets_data|planets|notecount": "Number of literature references from 'pl_nnotes'. INTEGER tracking annotation count for this planet discovery.", + "planets_data|planets|discmethod": "Discovery method from 'pl_discmethod'. VARCHAR(50) with standardized values: 'RadVel'/'RV'/'RV Method'/'Radial Velocity'/'Doppler' (radial velocity), 'Transit'/'TR'/'Transit Method'/'Photometry'/'Photometric' (transit photometry), 'Direct Imaging'/'DI'/'Imaging'/'IMG'/'Direct' (direct imaging), 'TTV'/'Transit Timing Variations'/'Transit Timing'/'TTV Method'/'Timing Var' (transit timing), 'Microlensing'/'ML'/'μLens'/'Lensing'/'Gravitational' (gravitational microlensing), 'Pulsar'/'PSR Timing'/'PT'/'Pulsation Timing Variations' (pulsar timing), 'Eclipse Timing Variations'/'ETV'/'Eclipse Timing'/'Timing Variations' (eclipse timing), 'Brightness Mod'/'OBM'/'Phase Curve'/'Orbital Mod' (orbital brightness modulation), 'AST' (astrometry).", + "planets_data|orbital_characteristics|orbitalref": "A SERIAL primary key uniquely identifying each orbital characteristics record.", + "planets_data|orbital_characteristics|bodylink": "Foreign key INTEGER referencing planets(PlanetRef), linking orbital parameters to a specific planet.", + "planets_data|orbital_characteristics|period": "Orbital period from 'pl_orbper' measured in days. REAL accommodating periods from 0.09 days (hot Jupiters) to 7,300,000 days (20,000+ years). Contains NULL when orbital period cannot be determined from available observations.", + "planets_data|orbital_characteristics|semimajor": "Semi-major axis from 'pl_orbsmax' measured in AU. REAL covering range from 0.0044 AU (ultra-hot Jupiters) to 2500 AU (wide-separation planets). Contains NULL when orbital distance cannot be reliably calculated.", + "planets_data|orbital_characteristics|eccentricity": "Orbital eccentricity from 'pl_orbeccen' ranging from 0 (circular) to <1 (elliptical). REAL providing high precision for nearly circular orbits. Contains NULL when eccentricity cannot be constrained from available data.", + "planets_data|orbital_characteristics|inclination": "Orbital inclination from 'pl_orbincl' in degrees relative to sky plane. REAL ranging from 0° (face-on) to 180° (retrograde). Contains NULL when inclination cannot be determined (e.g., for radial velocity-only discoveries).", + "planets_data|physical_properties|physref": "A SERIAL primary key uniquely identifying each physical properties record.", + "planets_data|physical_properties|objectlink": "Foreign key INTEGER referencing planets(PlanetRef), linking physical parameters to a specific planet.", + "planets_data|physical_properties|massjup": "Planetary mass from 'pl_bmassj' in Jupiter mass units. REAL covering range from 0.00006 MJ (sub-Earth) to 28.5 MJ (super-Jupiter). Contains NULL when mass cannot be determined (e.g., for transit-only discoveries without radial velocity follow-up).", + "planets_data|physical_properties|radjup": "Planetary radius from 'pl_radj' in Jupiter radius units. REAL covering range from 0.027 RJ (sub-Earth) to 6.9 RJ (inflated giants). Contains NULL when radius cannot be measured (e.g., for radial velocity-only discoveries).", + "planets_data|physical_properties|densvalue": "Bulk density from 'pl_dens' in g/cm³. REAL covering gas giants (<1 g/cm³) to super-dense planets (>20 g/cm³). 
Contains NULL when both mass and radius are not available for density calculation.", + "planets_data|instruments_surveys|instrumentref": "A SERIAL primary key uniquely identifying each observational facility or survey program.", + "planets_data|instruments_surveys|facilityname": "Observational facility name derived from flag analysis. VARCHAR(100) NOT NULL UNIQUE with values: 'ttv' (from pl_ttvflag='T'), 'kep' (from pl_kepflag=1), 'k2' (from pl_k2flag=true) representing Transit Timing Variations, Kepler mission, and K2 mission respectively.", + "planets_data|planet_instrument_observations|obsref": "A SERIAL primary key uniquely identifying each planet-instrument observation record.", + "planets_data|planet_instrument_observations|subjectlink": "Foreign key INTEGER referencing planets(PlanetRef), indicating which planet was observed.", + "planets_data|planet_instrument_observations|facilitylink": "Foreign key INTEGER referencing instruments_surveys(InstrumentRef), indicating which facility made the observation.", + "planets_data|data_quality_tracking|qualityref": "A SERIAL primary key uniquely identifying each data quality record.", + "planets_data|data_quality_tracking|targetlink": "Foreign key INTEGER referencing planets(PlanetRef), linking quality metrics to a specific planet.", + "planets_data|data_quality_tracking|perioderr1": "Positive uncertainty from 'pl_orbpererr1' in orbital period, in days. REAL representing +1σ error. Contains NULL when period uncertainty is not available or period itself is unknown.", + "planets_data|data_quality_tracking|perioderr2": "Negative uncertainty from 'pl_orbpererr2' in orbital period, in days. REAL representing -1σ error, typically stored as negative value. Contains NULL when period uncertainty is not available.", + "planets_data|data_quality_tracking|semimajerr1": "Positive uncertainty from 'pl_orbsmaxerr1' in semi-major axis, in AU. REAL representing +1σ error in orbital distance. Contains NULL when semi-major axis uncertainty is not available.", + "planets_data|data_quality_tracking|semimajerr2": "Negative uncertainty from 'pl_orbsmaxerr2' in semi-major axis, in AU. REAL representing -1σ error, accommodating asymmetric uncertainties. Contains NULL when semi-major axis uncertainty is not available.", + "planets_data|data_quality_tracking|eccerr1": "Positive uncertainty from 'pl_orbeccenerr1' in eccentricity. REAL representing +1σ error, typically small values <0.1. Contains NULL when eccentricity uncertainty is not available or eccentricity is unconstrained.", + "planets_data|data_quality_tracking|eccerr2": "Negative uncertainty from 'pl_orbeccenerr2' in eccentricity. REAL representing -1σ error, often asymmetric for low-eccentricity orbits. Contains NULL when eccentricity uncertainty is not available.", + "planets_data|data_quality_tracking|inclerr1": "Positive uncertainty from 'pl_orbinclerr1' in inclination, in degrees. REAL representing +1σ angular error. Contains NULL when inclination uncertainty is not available or inclination is unconstrained.", + "planets_data|data_quality_tracking|inclerr2": "Negative uncertainty from 'pl_orbinclerr2' in inclination, in degrees. REAL representing -1σ angular error. Contains NULL when inclination uncertainty is not available.", + "planets_data|data_quality_tracking|masserr1": "Positive uncertainty from 'pl_bmassjerr1' in planetary mass, in Jupiter masses. REAL representing +1σ error in mass measurement. 
Contains NULL when mass uncertainty is not available or mass is unconstrained.", + "planets_data|data_quality_tracking|masserr2": "Negative uncertainty from 'pl_bmassjerr2' in planetary mass, in Jupiter masses. REAL representing -1σ error, often asymmetric for low-mass planets. Contains NULL when mass uncertainty is not available.", + "planets_data|data_quality_tracking|raderr1": "Positive uncertainty from 'pl_radjerr1' in planetary radius, in Jupiter radii. REAL representing +1σ error in radius measurement. Contains NULL when radius uncertainty is not available or radius is unconstrained.", + "planets_data|data_quality_tracking|raderr2": "Negative uncertainty from 'pl_radjerr2' in planetary radius, in Jupiter radii. REAL representing -1σ error in radius measurement. Contains NULL when radius uncertainty is not available.", + "planets_data|data_quality_tracking|denserr1": "Positive uncertainty from 'pl_denserr1' in planetary density, in g/cm³. REAL representing +1σ error in density calculation. Contains NULL when density uncertainty cannot be calculated due to missing mass or radius uncertainties.", + "planets_data|data_quality_tracking|denserr2": "Negative uncertainty from 'pl_denserr2' in planetary density, in g/cm³. REAL representing -1σ error in density calculation. Contains NULL when density uncertainty cannot be calculated.", + "planets_data|data_quality_tracking|disterr1": "Positive uncertainty from 'st_disterr1' in stellar distance, in parsecs. REAL representing +1σ error in distance measurement. Contains NULL when stellar distance uncertainty is not available.", + "planets_data|data_quality_tracking|disterr2": "Negative uncertainty from 'st_disterr2' in stellar distance, in parsecs. REAL representing -1σ error in distance measurement. Contains NULL when stellar distance uncertainty is not available.", + "planets_data|data_quality_tracking|optmagerr": "Uncertainty from 'st_optmagerr' in stellar magnitude. REAL representing symmetric 1σ error in magnitude measurement. Contains NULL when magnitude uncertainty is not available.", + "planets_data|data_quality_tracking|temperr1": "Positive uncertainty from 'st_tefferr1' in stellar temperature, in Kelvin. REAL representing +1σ error in temperature measurement. Contains NULL when temperature uncertainty is not available.", + "planets_data|data_quality_tracking|temperr2": "Negative uncertainty from 'st_tefferr2' in stellar temperature, in Kelvin. REAL representing -1σ error, accommodating larger negative uncertainties. Contains NULL when temperature uncertainty is not available.", + "planets_data|data_quality_tracking|stellarmasserr1": "Positive uncertainty from 'st_masserr1' in stellar mass, in solar masses. REAL representing +1σ error in stellar mass. Contains NULL when stellar mass uncertainty is not available.", + "planets_data|data_quality_tracking|stellarmasserr2": "Negative uncertainty from 'st_masserr2' in stellar mass, in solar masses. REAL representing -1σ error in stellar mass. Contains NULL when stellar mass uncertainty is not available.", + "planets_data|data_quality_tracking|stellarraderr1": "Positive uncertainty from 'st_raderr1' in stellar radius, in solar radii. REAL representing +1σ error in stellar radius. Contains NULL when stellar radius uncertainty is not available.", + "planets_data|data_quality_tracking|stellarraderr2": "Negative uncertainty from 'st_raderr2' in stellar radius, in solar radii. REAL representing -1σ error in stellar radius. 
Contains NULL when stellar radius uncertainty is not available.", + "planets_data|data_quality_tracking|masssource": "Mass determination method from 'pl_bmassprov'. VARCHAR(50) with values: 'Msini' (minimum mass from radial velocity, M×sin(i)), 'Mass' (true mass from transit+RV or other methods), 'Msin(i)/sin(i)' (mass corrected for inclination). Contains NULL when no mass measurement is available.", + "planets_data|data_quality_tracking|updatestamp": "Data update timestamp from 'rowupdate'. DATE field in YYYY-MM-DD format tracking when the record was last updated, with dates ranging from 2014-05-14 to 2016-07-07 in current dataset.", + "planets_data|stars|coordsys": { + "column_meaning": "JSONB column. Consolidates all coordinate system information including both text and decimal representations of right ascension and declination, providing complete positional data for the host star.", + "fields_meaning": { + "RA_Text": "Right ascension coordinate in sexagesimal format derived from 'ra_str'. VARCHAR(60) with format like '12h20m43.03s', '15h17m05.89s', accommodating hours-minutes-seconds notation. Contains NULL when coordinate string representation is not available.", + "RA_Decimal": "Right ascension coordinate converted to decimal degrees from 'ra' field. REAL providing precision to arc-second level, ranging from 0 to 360 degrees. Contains NULL when precise coordinates are not available.", + "Dec_Text": "Declination coordinate in sexagesimal format derived from 'dec_str'. VARCHAR(60) with format like '+17d47m34.3s', '+71d49m26.0s', '-39d14m10.3s', accommodating degrees-arcminutes-arcseconds notation with sign prefix. Contains NULL when coordinate string representation is not available.", + "Dec_Decimal": "Declination coordinate converted to decimal degrees from 'dec' field. REAL providing precision to arc-second level, ranging from -90 to +90 degrees. Contains NULL when precise coordinates are not available." + } + }, + "planets_data|stars|stellarprops": { + "column_meaning": "JSONB column. Groups all stellar physical properties and their measurement quality flags, including photometric, temperature, mass, and radius measurements with blend indicators.", + "fields_meaning": { + "photometry": { + "Opt_Mag": "Optical magnitude of the host star from 'st_optmag'. REAL covering magnitude range typically from -1.5 to 20+ magnitudes. Contains NULL when stellar magnitude measurements are not available.", + "Mag_Blend": "Blend flag from 'st_optmagblend' indicating if optical magnitude is affected by stellar companions. REAL with values 0 (no blending), 1 (blended measurement). Contains NULL when blending status is unknown.", + "Photo_Band": "Photometric band from 'st_optband' used for magnitude measurement. TEXT field with NOISE containing inconsistent representations of the same bands: 'V (Johnson)', 'Johnson', 'V', 'Johnson V' (all representing Johnson V-band), 'Kepler-band', 'Kepler', 'Kep-b', 'Kep' (all representing Kepler band), 'V-band', 'K-band'. Contains NULL when photometric band information is not available." + }, + "physical": { + "Temp_Value": "Effective temperature from 'st_teff' measured in Kelvin. REAL covering range from 575K to 57000K. Contains NULL when stellar temperature measurements are not available or unreliable.", + "Temp_Blend": "Temperature blend flag from 'st_teffblend' indicating measurement quality. REAL with values 0 (clean measurement), 1 (affected by blending). 
Contains NULL when temperature blending status is unknown.", + "Mass_Value": "Stellar mass from 'st_mass' in solar mass units (M☉). REAL covering range from low-mass stars to massive stars. Contains NULL when stellar mass cannot be determined from available data.", + "Mass_Blend": "Mass blend flag from 'st_massblend' indicating measurement reliability. REAL with values 0 (direct measurement), 1 (affected by multiplicity). Contains NULL when mass blending status is unknown.", + "Radius_Value": "Stellar radius from 'st_rad' in solar radius units (R☉). REAL covering range from sub-solar to giant stars. Contains NULL when stellar radius measurements are not available.", + "Rad_Blend": "Radius blend flag from 'st_radblend' indicating measurement quality. REAL with values 0 (clean measurement), 1 (affected by stellar activity). Contains NULL when radius blending status is unknown." + } + } + }, + "planets_data|data_quality_tracking|limitflags": { + "column_meaning": "JSONB column. Consolidates all measurement limit flags indicating whether values represent actual measurements, upper limits, or lower limits for planetary and stellar parameters.", + "fields_meaning": { + "planetary_limits": { + "Period_Lim": "Limit flag from 'pl_orbperlim' for orbital period. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Semimaj_Lim": "Limit flag from 'pl_orbsmaxlim' for semi-major axis. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Ecc_Lim": "Limit flag from 'pl_orbeccenlim' for eccentricity. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Incl_Lim": "Limit flag from 'pl_orbincllim' for inclination. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Mass_Lim": "Limit flag from 'pl_bmassjlim' for planetary mass. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Rad_Lim": "Limit flag from 'pl_radjlim' for planetary radius. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Dens_Lim": "Limit flag from 'pl_denslim' for planetary density. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable." + }, + "stellar_limits": { + "Dist_Lim": "Limit flag from 'st_distlim' for stellar distance. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "OptMag_Lim": "Limit flag from 'st_optmaglim' for stellar magnitude. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "Temp_Lim": "Limit flag from 'st_tefflim' for stellar temperature. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "StellarMass_Lim": "Limit flag from 'st_masslim' for stellar mass. REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable.", + "StellarRad_Lim": "Limit flag from 'st_radlim' for stellar radius. 
REAL with values: 0 (measured value), 1 (upper limit), -1 (lower limit). Contains NULL when limit status is unknown or not applicable." + } + } + } +} \ No newline at end of file diff --git a/planets_data/planets_data_kb.jsonl b/planets_data/planets_data_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..46970e988377a83bb676f6d480b2a69c5022f302 --- /dev/null +++ b/planets_data/planets_data_kb.jsonl @@ -0,0 +1,52 @@ +{"id": 0, "knowledge": "Distance in Light-Years", "description": "Converts the distance of a celestial object from parsecs to light-years.", "definition": "Given the distance in parsecs ($D_{pc}$), the distance in light-years ($D_{ly}$) is calculated as: $D_{ly} = D_{pc} \\times 3.26156$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Planet Mass in Earth Units", "description": "Converts a planet's mass from Jupiter mass units to Earth mass units.", "definition": "Given a planet's mass in Jupiter masses ($M_J$), its mass in Earth masses ($M_E$) is: $M_E = M_J \\times 317.83$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Planet Radius in Earth Units", "description": "Converts a planet's radius from Jupiter radius units to Earth radius units.", "definition": "Given a planet's radius in Jupiter radii ($R_J$), its radius in Earth radii ($R_E$) is: $R_E = R_J \\times 11.209$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Stellar Luminosity", "description": "Calculates a star's luminosity relative to the Sun using its temperature and radius.", "definition": "Luminosity ($L_{\\star}$) relative to the Sun is estimated using the Stefan-Boltzmann law: $L_{\\star} = \\left( \\frac{R_{\\star}}{R_{\\odot}} \\right)^2 \\left( \\frac{T_{\\star}}{T_{\\odot}} \\right)^4$, where $R_{\\star}$ is the star's radius, $T_{\\star}$ is its effective temperature, and $T_{\\odot}$ is the Sun's effective temperature (~5778 K).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Planet Surface Gravity", "description": "Estimates the surface gravity of a planet relative to Earth's.", "definition": "The surface gravity ($g_p$) relative to Earth's is found by: $g_p = \\frac{M_E}{R_E^2}$, where $M_E$ is the planet's mass in Earth masses and $R_E$ is its radius in Earth radii. Depends on knowing the Planet Mass in Earth Units and Planet Radius in Earth Units.", "type": "calculation_knowledge", "children_knowledge": [1, 2]} +{"id": 5, "knowledge": "Planetary Equilibrium Temperature", "description": "Estimates a planet's surface temperature based on the energy it receives from its star.", "definition": "The equilibrium temperature ($T_{eq}$) of a planet is calculated as: $T_{eq} = T_{\\star} \\sqrt{\\frac{R_{\\star}}{2a}}$, where $T_{\\star}$ is the star's temperature, $R_{\\star}$ is the star's radius, and $a$ is the planet's semi-major axis.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Habitable Zone Inner Boundary", "description": "Calculates the inner edge of a star's habitable zone, where a planet would be too hot for liquid water.", "definition": "The inner boundary of the habitable zone ($r_i$) in AU is estimated based on the star's luminosity ($L_{\\star}$): $r_i \\approx \\sqrt{L_{\\star} / 1.1}$. 
This relies on the Stellar Luminosity.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 7, "knowledge": "Habitable Zone Outer Boundary", "description": "Calculates the outer edge of a star's habitable zone, where a planet would be too cold for liquid water.", "definition": "The outer boundary of the habitable zone ($r_o$) in AU is estimated based on the star's luminosity ($L_{\\star}$): $r_o \\approx \\sqrt{L_{\\star} / 0.53}$. This relies on the Stellar Luminosity.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 8, "knowledge": "Relative Uncertainty", "description": "Measures the precision of a measurement as a percentage of the measured value.", "definition": "For a value $v$ with positive error $e_1$ and negative error $e_2$, the relative uncertainty ($U_{rel}$) is: $U_{rel} = \\frac{(e_1 - e_2) / 2}{v} \\times 100\\%$", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Transit Depth", "description": "Calculates the fractional drop in a star's brightness when a planet passes in front of it.", "definition": "The transit depth ($\\Delta F$) is the ratio of the planet's surface area to the star's: $\\Delta F = \\left( \\frac{R_p}{R_{\\star}} \\right)^2$, where $R_p$ is the planet's radius and $R_{\\star}$ is the star's radius.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Planet Escape Velocity", "description": "Calculates the minimum speed needed for an object to escape a planet's gravitational pull.", "definition": "The escape velocity ($v_e$) is calculated as: $v_e = \\sqrt{\\frac{2GM_p}{R_p}}$, where $M_p$ and $R_p$ are the planet's mass and radius, and G is the gravitational constant. The calculation depends on Planet Mass in Earth Units and Planet Radius in Earth Units, converted to standard units.", "type": "calculation_knowledge", "children_knowledge": [1, 2]} +{"id": 11, "knowledge": "Orbital Velocity", "description": "Calculates the average speed of a planet as it orbits its host star.", "definition": "Assuming a circular orbit, the orbital velocity ($v_{orb}$) is: $v_{orb} = \\sqrt{\\frac{GM_{\\star}}{a}}$, where $M_{\\star}$ is the mass of the star and $a$ is the semi-major axis.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Stellar Density", "description": "Calculates the average density of a star.", "definition": "The average density of a star ($\\rho_{\\star}$) is its mass divided by its volume: $\\rho_{\\star} = \\frac{M_{\\star}}{\\frac{4}{3}\\pi R_{\\star}^3}$, where $M_{\\star}$ and $R_{\\star}$ are the star's mass and radius.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Hertzsprung-Russell (HR) Diagram Position", "description": "Determines a star's position on the HR diagram based on its temperature and luminosity.", "definition": "The position is a coordinate pair ($T_{\\star}$, $L_{\\star}$) where $T_{\\star}$ is the star's effective temperature and $L_{\\star}$ is its calculated Stellar Luminosity.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 14, "knowledge": "Goldilocks Value", "description": "Quantifies how centered a planet is within its star's habitable zone.", "definition": "A Goldilocks Value ($G$) can be defined as: $G = \\frac{a - r_i}{r_o - r_i}$, where $a$ is the planet's semi-major axis, and $r_i$ and $r_o$ are the Habitable Zone Inner Boundary and Outer Boundary. 
A value of 0.5 is perfectly centered.", "type": "calculation_knowledge", "children_knowledge": [6, 7]} +{"id": 15, "knowledge": "Kepler's Third Law Verification", "description": "Uses a planet's orbital properties to calculate the mass of its host star.", "definition": "The mass of the star ($M_{\\star}$) in solar masses can be derived from Kepler's Third Law: $M_{\\star} = \\frac{a^3}{P^2}$, where $a$ is the semi-major axis in AU and $P$ is the orbital period in years.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Mass-Radius Relationship", "description": "A simple ratio to compare a planet's mass and size.", "definition": "A basic mass-radius ratio ($MRR$) can be expressed as: $MRR = \\frac{M_E}{R_E}$, which depends on the Planet Mass in Earth Units and Planet Radius in Earth Units.", "type": "calculation_knowledge", "children_knowledge": [1, 2]} +{"id": 17, "knowledge": "Orbital Period Ratio", "description": "Calculates the ratio of the orbital periods of two adjacent planets in a system.", "definition": "For two planets, an outer planet with period $P_{out}$ and an inner planet with period $P_{in}$, the ratio is: $R_{period} = \\frac{P_{out}}{P_{in}}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Planet-Star Mass Ratio", "description": "Calculates the ratio of a planet's mass to its host star's mass.", "definition": "The mass ratio ($q$) is: $q = \\frac{M_p}{M_{\\star}}$, where $M_p$ is the planet's mass and $M_{\\star}$ is the star's mass. Both must be in the same units.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Gravitational Parameter (μ)", "description": "Calculates the standard gravitational parameter for a star, a constant used in orbital mechanics.", "definition": "The standard gravitational parameter is $\\mu = G M_{\\star}$, where G is the gravitational constant and $M_{\\star}$ is the mass of the star.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "Gas Giant Planet", "description": "A large planet composed primarily of gases like hydrogen and helium.", "definition": "A planet with a mass greater than 0.1 Jupiter masses.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Rocky Planet", "description": "A planet composed primarily of rock or metals, with a solid surface.", "definition": "A planet with a bulk density greater than 3 g/cm³.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Short-Period Planet", "description": "A planet that orbits its host star in a very short amount of time.", "definition": "A planet with an orbital period of less than 10 days.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Hot Jupiter", "description": "A class of exoplanets that are physically similar to Jupiter but orbit very close to their stars.", "definition": "A planet that is classified as both a Gas Giant Planet and a Short-Period Planet.", "type": "domain_knowledge", "children_knowledge": [20, 22]} +{"id": 24, "knowledge": "Super-Earth", "description": "A class of planets with masses higher than Earth's but substantially below those of the solar system's ice giants.", "definition": "A planet that is likely a Rocky Planet and has a mass between 1 and 10 Earth masses. 
This depends on the Planet Mass in Earth Units.", "type": "domain_knowledge", "children_knowledge": [1, 21]} +{"id": 25, "knowledge": "Planet in Habitable Zone", "description": "A planet orbiting within a star's habitable zone, where conditions might be right for liquid water.", "definition": "A planet whose semi-major axis ($a$) falls between the Habitable Zone Inner Boundary ($r_i$) and the Habitable Zone Outer Boundary ($r_o$).", "type": "domain_knowledge", "children_knowledge": [6, 7]} +{"id": 26, "knowledge": "Potentially Habitable Exoplanet", "description": "An exoplanet that has the potential to support life, typically meaning it is rocky and in the habitable zone.", "definition": "A planet that is classified as a Rocky Planet and is also a Planet in Habitable Zone.", "type": "domain_knowledge", "children_knowledge": [21, 25]} +{"id": 27, "knowledge": "High Eccentricity Planet", "description": "A planet with a highly elliptical orbit, leading to significant temperature variations.", "definition": "A planet with an orbital eccentricity greater than 0.25.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Multi-planetary System", "description": "A star that hosts more than one confirmed planet.", "definition": "Any star system with a total number of confirmed planetary companions greater than 1.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 29, "knowledge": "High-Precision Measurement", "description": "Indicates that a specific physical or orbital parameter is known with a high degree of confidence.", "definition": "A parameter for which the calculated Relative Uncertainty is less than 5%.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 30, "knowledge": "Well-Characterized Planet", "description": "A planet for which the key parameters of mass, radius, and period are known with high precision.", "definition": "A planet where the measurements for mass, radius, and orbital period are all considered High-Precision Measurements.", "type": "domain_knowledge", "children_knowledge": [29]} +{"id": 31, "knowledge": "Retrograde Orbit", "description": "A planet that orbits its star in the opposite direction to the star's rotation.", "definition": "A planet with an orbital inclination greater than 90 degrees.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 32, "knowledge": "Inflated Gas Giant", "description": "A gas giant planet with a radius that is unexpectedly large for its mass, suggesting a high internal temperature.", "definition": "A Gas Giant Planet with a bulk density less than 0.5 g/cm³.", "type": "domain_knowledge", "children_knowledge": [20]} +{"id": 33, "knowledge": "Compact System", "description": "A planetary system where multiple planets orbit very close to each other.", "definition": "A Multi-planetary System where the Orbital Period Ratio between all adjacent pairs of planets is less than 3.", "type": "domain_knowledge", "children_knowledge": [17, 28]} +{"id": 34, "knowledge": "Minimum Mass Status", "description": "Indicates that the provided mass for a planet is a lower limit, not the true mass.", "definition": "A flag indicating that the planet's mass was determined using a method (like radial velocity) that measures the minimum mass ($M \\sin i$) because the orbital inclination is unknown.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 35, "knowledge": "Upper Limit Value", "description": "Indicates that a measured value for a parameter is not an exact measurement but an upper 
boundary.", "definition": "A quality flag on a parameter signifying that its true value is less than or equal to the stated value.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 36, "knowledge": "Discovery via Transit Method", "description": "Identifies a planet discovered by observing the dimming of its star as the planet passes in front.", "definition": "A planet whose discovery method is listed as Transit, TR, Transit Method, Photometry, or Photometric.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 37, "knowledge": "Discovery via Radial Velocity", "description": "Identifies a planet discovered by observing the wobble of its star caused by the planet's gravitational pull.", "definition": "A planet whose discovery method is listed as RadVel, RV, RV Method, Radial Velocity, or Doppler.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 38, "knowledge": "Blended Measurement", "description": "Indicates a parameter measurement (like brightness or temperature) is potentially contaminated by the light of nearby, unresolved stars.", "definition": "A quality flag indicating that a measurement's value is affected by light from stellar companions, potentially reducing its accuracy.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 39, "knowledge": "Kepler Mission Discovery", "description": "Identifies a planet discovered by the Kepler Space Telescope.", "definition": "A planet whose observation record is linked to the 'kep' facility.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 40, "knowledge": "Stellar Distance Value", "description": "Illustrates the value for the distance to a star system.", "definition": "Measured in parsecs (pc), where 1 parsec equals about 3.26 light-years. Values range from nearby stars like Proxima Centauri (~1.3 pc) to stars thousands of parsecs away.", "type": "value_illustration", "children_knowledge": -1} +{"id": 41, "knowledge": "Apparent Magnitude Value", "description": "Illustrates the value for a star's brightness as seen from Earth.", "definition": "This is a logarithmic scale where smaller numbers are brighter. A magnitude of 1.0 is 100 times brighter than a magnitude of 6.0. The brightest stars have negative magnitudes (e.g., Sirius is -1.46).", "type": "value_illustration", "children_knowledge": -1} +{"id": 42, "knowledge": "Stellar Temperature Value", "description": "Illustrates the value for a star's effective surface temperature.", "definition": "Measured in Kelvin (K). Cool red dwarfs can be around 3,000 K, a Sun-like star is about 5,800 K, and very hot blue stars can exceed 30,000 K.", "type": "value_illustration", "children_knowledge": -1} +{"id": 43, "knowledge": "Stellar Mass Value", "description": "Illustrates the value for a star's mass.", "definition": "Measured in solar masses ($M_{\\odot}$), where 1 is the mass of our Sun. Most known host stars range from low-mass red dwarfs (~0.1 $M_{\\odot}$) to stars several times more massive than the Sun.", "type": "value_illustration", "children_knowledge": -1} +{"id": 44, "knowledge": "Stellar Radius Value", "description": "Illustrates the value for a star's radius.", "definition": "Measured in solar radii ($R_{\\odot}$), where 1 is the radius of our Sun. 
Values range from small neutron stars to giant stars like Betelgeuse, which would extend beyond the orbit of Mars if in our solar system.", "type": "value_illustration", "children_knowledge": -1} +{"id": 45, "knowledge": "Orbital Period Value", "description": "Illustrates the value for the time a planet takes to complete one orbit around its star.", "definition": "Measured in days. 'Hot Jupiters' can have periods of only a few days, while planets in very distant orbits can have periods of many thousands of years.", "type": "value_illustration", "children_knowledge": -1} +{"id": 46, "knowledge": "Orbital Eccentricity Value", "description": "Illustrates the value describing how much an orbit deviates from a perfect circle.", "definition": "A dimensionless value from 0 to <1. An eccentricity of 0 is a perfect circle. A value of 0.1 indicates a slightly elliptical orbit, while a value of 0.7 indicates a very elongated, comet-like orbit.", "type": "value_illustration", "children_knowledge": -1} +{"id": 47, "knowledge": "Planet Mass Value", "description": "Illustrates the value for a planet's mass.", "definition": "Typically measured in Jupiter masses ($M_J$). Earth's mass is about 0.003 $M_J$. Known exoplanets range from less than Earth's mass to over 20 times the mass of Jupiter.", "type": "value_illustration", "children_knowledge": -1} +{"id": 48, "knowledge": "Planet Radius Value", "description": "Illustrates the value for a planet's physical size.", "definition": "Typically measured in Jupiter radii ($R_J$). Earth's radius is about 0.09 $R_J$. Planets range from small rocky worlds smaller than Earth to 'inflated' gas giants larger than Jupiter.", "type": "value_illustration", "children_knowledge": -1} +{"id": 49, "knowledge": "Planet Density Value", "description": "Illustrates the value for a planet's bulk density.", "definition": "Measured in grams per cubic centimeter (g/cm³). Puffy gas giants can have densities less than water (<1 g/cm³), while dense, rocky planets like Earth have densities around 5.5 g/cm³.", "type": "value_illustration", "children_knowledge": -1} +{"id": 50, "knowledge": "Transit Timing Variation (TTV) Method", "description": "A method of exoplanet detection that infers the presence of planets by observing variations in the timing of a known transiting planet's transit across its star.", "definition": "A planet whose observation record is linked to the 'ttv' facility.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 51, "knowledge": "Discovery Method Standardization", "description": "Standardizes various text entries for exoplanet discovery methods into a set of unified categories.", "definition": "A process that groups different raw discovery method labels from the database into a clear, standardized set. The standard categories and their corresponding raw labels are: 'Radial Velocity' (includes 'RadVel', 'RV', 'RV Method', 'Radial Velocity', 'Doppler'), 'Transit' (includes 'Transit', 'TR', 'Transit Method', 'Photometry', 'Photometric'), 'Imaging' (includes 'Direct Imaging', 'DI', 'Imaging', 'IMG', 'Direct'), 'TTV' (includes 'TTV', 'Transit Timing Variations', 'Transit Timing', 'TTV Method', 'Timing Var'), 'Microlensing' (includes 'Microlensing', 'ML', '\\u03bcLens', 'Lensing', 'Gravitational'). 
Any method not in these groups is classified as 'Other'.", "type": "domain_knowledge", "children_knowledge": [36, 37, 50]} \ No newline at end of file diff --git a/planets_data/planets_data_schema.txt b/planets_data/planets_data_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..d62220bdb19616ecc281efa1f97acaecd8815a80 --- /dev/null +++ b/planets_data/planets_data_schema.txt @@ -0,0 +1,150 @@ +CREATE TABLE "stars" ( +stellarref bigint NOT NULL DEFAULT nextval('stars_stellarref_seq'::regclass), +hostplname text NOT NULL, +stellardist real NULL, +compcount bigint NULL, +coordsys jsonb NULL, +stellarprops jsonb NULL, + PRIMARY KEY (stellarref) +); + +First 3 rows: + stellarref hostplname stellardist compcount coordsys stellarprops +------------ ------------ ------------- ----------- ---------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 11 Com 110.62 1 {'RA_Text': '12h20m43.03s', 'Dec_Text': '+17d47m34.3s', 'RA_Decimal': 185.17928, 'Dec_Decimal': 17.792868} {'physical': {'Rad_Blend': 0, 'Mass_Blend': 0, 'Mass_Value': 2.7, 'Temp_Blend': 0, 'Temp_Value': 4742, 'Radius_Value': 19}, 'photometry': {'Opt_Mag': 4.74, 'Mag_Blend': 0, 'Photo_Band': 'V (Johnson)'}} + 2 11 UMi 119.47 1 {'RA_Text': '15h17m05.89s', 'Dec_Text': '+71d49m26.0s', 'RA_Decimal': 229.27454, 'Dec_Decimal': 71.8239} {'physical': {'Rad_Blend': 0, 'Mass_Blend': 0, 'Mass_Value': 1.8, 'Temp_Blend': 0, 'Temp_Value': 4340, 'Radius_Value': 24.08}, 'photometry': {'Opt_Mag': 5.016, 'Mag_Blend': 0, 'Photo_Band': 'V (Johnson)'}} + 3 14 And 76.39 1 {'RA_Text': '23h31m17.42s', 'Dec_Text': '+39d14m10.3s', 'RA_Decimal': 352.82257, 'Dec_Decimal': 39.2362} {'physical': {'Rad_Blend': 0, 'Mass_Blend': 0, 'Mass_Value': 2.2, 'Temp_Blend': 0, 'Temp_Value': 4813, 'Radius_Value': 11}, 'photometry': {'Opt_Mag': None, 'Mag_Blend': None, 'Photo_Band': None}} +... + + +CREATE TABLE "instruments_surveys" ( +instrumentref bigint NOT NULL DEFAULT nextval('instruments_surveys_instrumentref_seq'::regclass), +facilityname character varying NOT NULL, + PRIMARY KEY (instrumentref) +); + +First 3 rows: + instrumentref facilityname +--------------- -------------- + 1 ttv + 2 kep + 3 k2 +... + + +CREATE TABLE "planets" ( +planetref bigint NOT NULL, +hostlink bigint NULL, +completter text NULL, +notecount bigint NULL, +discmethod text NULL, + PRIMARY KEY (planetref), + FOREIGN KEY (hostlink) REFERENCES stars(stellarref) +); + +First 3 rows: + planetref hostlink completter notecount discmethod +----------- ---------- ------------ ----------- ------------ + 1 1 b 0 RadVel + 2 2 b 0 RV + 3 3 b 0 RV Method +... + + +CREATE TABLE "orbital_characteristics" ( +orbitalref bigint NOT NULL DEFAULT nextval('orbital_characteristics_orbitalref_seq'::regclass), +bodylink bigint NOT NULL, +period real NULL, +semimajor real NULL, +eccentricity real NULL, +inclination real NULL, + PRIMARY KEY (orbitalref), + FOREIGN KEY (bodylink) REFERENCES planets(planetref) +); + +First 3 rows: + orbitalref bodylink period semimajor eccentricity inclination +------------ ---------- -------- ----------- -------------- ------------- + 1 1 326.03 1.29 0.231 + 2 2 516.22 1.54 0.08 + 3 3 185.84 0.83 0 +... 
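-- ------------------------------------------------------------------
-- Editor's note: illustrative sketch only, not part of the shipped schema
-- dump and not the hidden ground-truth SQL. The HKB entry "Discovery Method
-- Standardization" (id 51 in planets_data_kb.jsonl) buckets the raw
-- discmethod labels visible in the "planets" rows above into standard
-- categories; a minimal PostgreSQL rendering of that mapping (the output
-- alias is ours) could look like this.
SELECT p.planetref,
       CASE
         WHEN p.discmethod IN ('RadVel', 'RV', 'RV Method', 'Radial Velocity', 'Doppler')
           THEN 'Radial Velocity'
         WHEN p.discmethod IN ('Transit', 'TR', 'Transit Method', 'Photometry', 'Photometric')
           THEN 'Transit'
         WHEN p.discmethod IN ('Direct Imaging', 'DI', 'Imaging', 'IMG', 'Direct')
           THEN 'Imaging'
         WHEN p.discmethod IN ('TTV', 'Transit Timing Variations', 'Transit Timing', 'TTV Method', 'Timing Var')
           THEN 'TTV'
         WHEN p.discmethod IN ('Microlensing', 'ML', 'μLens', 'Lensing', 'Gravitational')
           THEN 'Microlensing'
         ELSE 'Other'
       END AS standardized_method
FROM planets p;
-- ------------------------------------------------------------------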
+ + +CREATE TABLE "physical_properties" ( +physref bigint NOT NULL DEFAULT nextval('physical_properties_physref_seq'::regclass), +objectlink bigint NOT NULL, +massjup real NULL, +radjup real NULL, +densvalue real NULL, + PRIMARY KEY (physref), + FOREIGN KEY (objectlink) REFERENCES planets(planetref) +); + +First 3 rows: + physref objectlink massjup radjup densvalue +--------- ------------ --------- -------- ----------- + 1 1 19.4 + 2 2 10.5 + 3 3 4.8 +... + + +CREATE TABLE "planet_instrument_observations" ( +obsref bigint NOT NULL DEFAULT nextval('planet_instrument_observations_obsref_seq'::regclass), +subjectlink bigint NOT NULL, +facilitylink bigint NOT NULL, + PRIMARY KEY (obsref), + FOREIGN KEY (facilitylink) REFERENCES instruments_surveys(instrumentref), + FOREIGN KEY (subjectlink) REFERENCES planets(planetref) +); + +First 3 rows: + obsref subjectlink facilitylink +-------- ------------- -------------- + 1 14 2 + 2 47 3 + 3 141 2 +... + + +CREATE TABLE "data_quality_tracking" ( +qualityref bigint NOT NULL DEFAULT nextval('data_quality_tracking_qualityref_seq'::regclass), +targetlink bigint NOT NULL, +perioderr1 real NULL, +perioderr2 real NULL, +semimajerr1 real NULL, +semimajerr2 real NULL, +eccerr1 real NULL, +eccerr2 real NULL, +inclerr1 real NULL, +inclerr2 real NULL, +masserr1 real NULL, +masserr2 real NULL, +raderr1 real NULL, +raderr2 real NULL, +denserr1 real NULL, +denserr2 real NULL, +disterr1 real NULL, +disterr2 real NULL, +optmagerr real NULL, +temperr1 real NULL, +temperr2 real NULL, +stellarmasserr1 real NULL, +stellarmasserr2 real NULL, +stellarraderr1 real NULL, +stellarraderr2 real NULL, +masssource text NULL, +updatestamp date NULL, +limitflags jsonb NULL, + PRIMARY KEY (qualityref), + FOREIGN KEY (targetlink) REFERENCES planets(planetref) +); + +First 3 rows: + qualityref targetlink perioderr1 perioderr2 semimajerr1 semimajerr2 eccerr1 eccerr2 inclerr1 inclerr2 masserr1 masserr2 raderr1 raderr2 denserr1 denserr2 disterr1 disterr2 optmagerr temperr1 temperr2 stellarmasserr1 stellarmasserr2 stellarraderr1 stellarraderr2 masssource updatestamp limitflags +------------ ------------ ------------ ------------ ------------- ------------- --------- --------- ---------- ---------- ---------- ---------- --------- --------- ---------- ---------- ---------- ---------- ----------- ---------- ---------- ----------------- ----------------- ---------------- ---------------- ------------ ------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 1 0.32 -0.32 0.05 -0.05 0.005 -0.005 1.5 -1.5 9.61 -11.63 nan 100 -100 0.3 -0.3 2 -2 Msini 2014-05-14 {'stellar_limits': {'Dist_Lim': 0, 'Temp_Lim': 0, 'OptMag_Lim': 0, 'StellarRad_Lim': 0, 'StellarMass_Lim': 0}, 'planetary_limits': {'Ecc_Lim': 0, 'Rad_Lim': None, 'Dens_Lim': None, 'Incl_Lim': None, 'Mass_Lim': 0, 'Period_Lim': 0, 'Semimaj_Lim': 0}} + 2 2 3.25 -3.25 0.07 -0.07 0.03 -0.03 2.47 -2.47 6.22 -6.95 0.009 70 -70 0.25 -0.25 1.84 -1.84 Msini 2014-05-14 {'stellar_limits': {'Dist_Lim': 0, 'Temp_Lim': 0, 'OptMag_Lim': 0, 'StellarRad_Lim': 0, 'StellarMass_Lim': 0}, 'planetary_limits': {'Ecc_Lim': 0, 'Rad_Lim': None, 'Dens_Lim': None, 'Incl_Lim': None, 'Mass_Lim': 0, 'Period_Lim': 0, 'Semimaj_Lim': 0}} + 3 3 0.23 -0.23 nan nan nan nan nan nan 3.93 -4.38 nan 20 -20 0.1 -0.2 1 -1 Msini 2014-05-14 {'stellar_limits': {'Dist_Lim': 0, 
'Temp_Lim': 0, 'OptMag_Lim': None, 'StellarRad_Lim': 0, 'StellarMass_Lim': 0}, 'planetary_limits': {'Ecc_Lim': 0, 'Rad_Lim': None, 'Dens_Lim': None, 'Incl_Lim': None, 'Mass_Lim': 0, 'Period_Lim': 0, 'Semimaj_Lim': 0}} +... diff --git a/polar_equipment/polar_equipment_column_meaning_base.json b/polar_equipment/polar_equipment_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..098ebe453b049ae907cb997496ae7833b96669e0 --- /dev/null +++ b/polar_equipment/polar_equipment_column_meaning_base.json @@ -0,0 +1,259 @@ +{ + "polar_equipment|EquipmentType|EquipType": "TEXT. Unique identifier for the equipment type. PK.", + "polar_equipment|Equipment|EQUIP_CODE": "TEXT. Unique identifier for the equipment. PK. Example: PE593707.", + "polar_equipment|Equipment|EquipType": "TEXT. Reference to the equipment type. FK to EquipmentType.EquipType.", + "polar_equipment|Equipment|model_name": "TEXT. Model name of the equipment. **NULL means no model name provided.**. Example: Model-925.", + "polar_equipment|Equipment|MakerName": "TEXT. Name of the equipment manufacturer. **NULL means no manufacturer provided.**. Example: Lee, Meyers and Hamilton.", + "polar_equipment|Equipment|SERVICE_YRS": "BIGINT. Number of years the equipment has been in service. Example: 4.", + "polar_equipment|Equipment|utilPercent": "REAL. Utilization percentage of the equipment. Example: 53.", + "polar_equipment|Equipment|RELIAB_IDX": "REAL. Reliability index of the equipment. Example: 97.8.", + "polar_equipment|Location|STATION_name": "TEXT. Unique name of the station. PK.", + "polar_equipment|Location|TimeStamp": "TIMESTAMP. Timestamp when the location data was recorded. Example: 2024-10-29T17:30:55.", + "polar_equipment|Location|locType": "TEXT. Type of location (e.g., field, lab). Possible values: Antarctic, Arctic.", + "polar_equipment|Location|LAT_deg": "REAL. Latitude of the location in degrees. Example: 80.255202.", + "polar_equipment|Location|LON_deg": "REAL. Longitude of the location in degrees. Example: -146.257874.", + "polar_equipment|Location|altitude_m": "REAL. Altitude of the location in meters. **NULL means no altitude provided.**. Example: 2054.5.", + "polar_equipment|OperationMaintenance|OP_MAINT_ID": "BIGSERIAL. Unique identifier for operation and maintenance record. PK.", + "polar_equipment|OperationMaintenance|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|OperationMaintenance|OPER_hours": "REAL. Total operational hours of the equipment. Example: 17843.", + "polar_equipment|OperationMaintenance|maintCycleHrs": "REAL. Maintenance cycle hours. **NULL means no maintenance cycle defined.**. Example: 2047.0.", + "polar_equipment|OperationMaintenance|LAST_maint_date": "DATE. Date of the last maintenance. **NULL means no maintenance date provided.**. Example: 25-Feb-24.", + "polar_equipment|OperationMaintenance|NEXT_due_date": "DATE. Date of the next scheduled maintenance. **NULL means no next maintenance date provided.**. Example: 2025/11/16.", + "polar_equipment|OperationMaintenance|OPER_status": "TEXT. Operational status of the equipment. Possible values: Active, Maintenance, Repair, Standby, Storage.", + "polar_equipment|OperationMaintenance|MAINT_COST_usd": "REAL. Maintenance cost in USD. Example: 7632.51.", + "polar_equipment|OperationMaintenance|repairCostUsd": "REAL. Repair cost in USD. Example: 3297.13.", + "polar_equipment|OperationMaintenance|operating_cost_usd": "REAL. Operating cost in USD. 
**NULL means no operating cost recorded.**. Example: 338.79.", + "polar_equipment|OperationMaintenance|crewCertStatus": "TEXT. Certification status of the crew. Possible values: Expired, Pending, Valid.", + "polar_equipment|OperationMaintenance|inspect_status": "TEXT. Inspection status of the equipment. Possible values: Failed, Passed, Pending.", + "polar_equipment|OperationMaintenance|docu_status": "TEXT. Documentation status for the equipment. Possible values: Complete, Incomplete, Updated.", + "polar_equipment|OperationMaintenance|COMPLIANCE_state": "TEXT. Compliance state of the equipment. **NULL means no compliance state recorded.**. Possible values: Compliant, Non-compliant, Review.", + "polar_equipment|OperationMaintenance|comm_link": "BIGINT. Optional reference to communication link.", + "polar_equipment|PowerBattery|PWR_BATT_ID": "BIGSERIAL. Unique identifier for the power battery. PK.", + "polar_equipment|PowerBattery|equip_ref": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|EngineAndFluids|ENGINE_ID": "BIGSERIAL. Unique identifier for the engine and fluids record. PK.", + "polar_equipment|EngineAndFluids|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|EngineAndFluids|batt_link": "BIGINT. Reference to the battery. FK to PowerBattery.PWR_BATT_ID.", + "polar_equipment|EngineAndFluids|opmaint_link": "BIGINT. Reference to the operation and maintenance record. FK to OperationMaintenance.OP_MAINT_ID.", + "polar_equipment|Transmission|TRANS_ID": "BIGSERIAL. Unique identifier for the transmission record. PK.", + "polar_equipment|Transmission|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|Transmission|engine_link": "BIGINT. Reference to the engine and fluids record. FK to EngineAndFluids.ENGINE_ID.", + "polar_equipment|Transmission|transTempC": "REAL. Transmission temperature in Celsius. Example: 94.5.", + "polar_equipment|Transmission|transPress_kpa": "REAL. Transmission pressure in kPa. **NULL means no transmission pressure recorded.**. Example: 532.9.", + "polar_equipment|Transmission|TRANS_gear": "TEXT. Gear setting of the transmission. **NULL means no gear setting recorded.**. Possible values: -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0.", + "polar_equipment|Transmission|diffTempC": "REAL. Differential temperature in Celsius. Example: 80.9.", + "polar_equipment|Transmission|axleTempC": "REAL. Axle temperature in Celsius. Example: 22.1.", + "polar_equipment|Transmission|opmaint_link": "BIGINT. Reference to the operation and maintenance record. FK to OperationMaintenance.OP_MAINT_ID.", + "polar_equipment|ChassisAndVehicle|CHASSIS_ID": "BIGSERIAL. Unique identifier for the chassis and vehicle record. PK.", + "polar_equipment|ChassisAndVehicle|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|ChassisAndVehicle|trans_link": "BIGINT. Reference to the transmission record. FK to Transmission.TRANS_ID.", + "polar_equipment|ChassisAndVehicle|engine_link": "BIGINT. Reference to the engine and fluids record. FK to EngineAndFluids.ENGINE_ID.", + "polar_equipment|Communication|COMM_ID": "BIGSERIAL. Unique identifier for the communication record. PK.", + "polar_equipment|Communication|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|Communication|loc_link": "TEXT. Reference to the location. FK to Location.STATION_name.", + "polar_equipment|Communication|GPS_signal": "TEXT. 
GPS signal status. **NULL means no GPS signal data recorded.**. Possible values: Medium, Strong, Weak.", + "polar_equipment|Communication|sat_conn_stat": "TEXT. Satellite connection status. **NULL means no satellite connection status recorded.**. Possible values: Connected, Disconnected, Limited.", + "polar_equipment|Communication|wifiSignal_dBm": "REAL. Wi-Fi signal strength in dBm. **NULL means no Wi-Fi signal data recorded.**. Example: -61.7.", + "polar_equipment|Communication|radioSignal_dBm": "REAL. Radio signal strength in dBm. Example: -97.8.", + "polar_equipment|Communication|radioFreq_mhz": "REAL. Radio frequency in MHz. Example: 731.2.", + "polar_equipment|Communication|antenna_stat": "TEXT. Antenna status. Possible values: Error, Normal, Warning.", + "polar_equipment|Communication|netLatency_ms": "REAL. Network latency in milliseconds. Example: 1006.0.", + "polar_equipment|Communication|dataRate_kbps": "REAL. Data rate in kilobits per second. Example: 389.6.", + "polar_equipment|Communication|btStatus": "TEXT. Bluetooth status. Possible values: Error, Off, On, Pairing.", + "polar_equipment|Communication|opmaint_link": "BIGINT. Reference to the operation and maintenance record. FK to OperationMaintenance.OP_MAINT_ID.", + "polar_equipment|CabinEnvironment|CABIN_ID": "BIGSERIAL. Unique identifier for the cabin environment record. PK.", + "polar_equipment|CabinEnvironment|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|CabinEnvironment|loc_link": "TEXT. Reference to the location. FK to Location.STATION_name.", + "polar_equipment|CabinEnvironment|comm_link": "BIGINT. Reference to the communication record. FK to Communication.COMM_ID.", + "polar_equipment|LightingAndSafety|LIGHT_ID": "BIGSERIAL. Unique identifier for the lighting and safety record. PK.", + "polar_equipment|LightingAndSafety|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|LightingAndSafety|lightingStat": "TEXT. Lighting status. Possible values: Auto, Off, On.", + "polar_equipment|LightingAndSafety|lightIntensityPct": "REAL. Light intensity percentage. **NULL means no light intensity data recorded.**. Example: 21.0.", + "polar_equipment|LightingAndSafety|extLightStat": "TEXT. External lighting status. **NULL means no external lighting status recorded.**. Possible values: Auto, Off, On.", + "polar_equipment|LightingAndSafety|emerStopStat": "TEXT. Emergency stop status. **NULL means no emergency stop status recorded.**. Possible values: Activated, Ready, Reset.", + "polar_equipment|LightingAndSafety|emerLightStat": "TEXT. Emergency light status. Possible values: Off, On, Testing.", + "polar_equipment|LightingAndSafety|fireDetectStat": "TEXT. Fire detection status. Possible values: Alert, Fault, Normal.", + "polar_equipment|LightingAndSafety|smokeDetectStat": "TEXT. Smoke detection status. Possible values: Alert, Fault, Normal.", + "polar_equipment|LightingAndSafety|COdetectStat": "TEXT. CO detection status. Possible values: Alert, Fault, Normal.", + "polar_equipment|LightingAndSafety|gasDetectStat": "TEXT. Gas detection status. Possible values: Alert, Fault, Normal.", + "polar_equipment|LightingAndSafety|alarm_stat": "TEXT. Alarm system status. Possible values: Critical, Normal, Warning.", + "polar_equipment|LightingAndSafety|safetySysStat": "TEXT. Safety system status. Possible values: Active, Fault, Standby.", + "polar_equipment|LightingAndSafety|lifeSupportStat": "TEXT. Life support system status. 
Possible values: Critical, Normal, Warning.", + "polar_equipment|LightingAndSafety|O2SupplyStat": "TEXT. Oxygen supply status. Possible values: Critical, Normal, Warning.", + "polar_equipment|LightingAndSafety|medEquipStat": "TEXT. Medical equipment status. Possible values: Critical, Normal, Warning.", + "polar_equipment|LightingAndSafety|wasteMgmtStat": "TEXT. Waste management status. Possible values: Critical, Normal, Warning.", + "polar_equipment|LightingAndSafety|waterSupplyStat": "TEXT. Water supply status. Possible values: Critical, Normal, Warning.", + "polar_equipment|WaterAndWaste|WATER_ID": "BIGSERIAL. Unique identifier for the water and waste record. PK.", + "polar_equipment|WaterAndWaste|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|WaterAndWaste|waterLevelPct": "REAL. Water level percentage. Example: 77.", + "polar_equipment|WaterAndWaste|waterPress_kpa": "REAL. Water pressure in kPa. Example: 66.4.", + "polar_equipment|WaterAndWaste|waterTempC": "TEXT. Water temperature in Celsius. **NULL means no water temperature data recorded.**. Example: 58.4 °C.", + "polar_equipment|WaterAndWaste|waterFlow_lpm": "REAL. Water flow in liters per minute. Example: 28.6.", + "polar_equipment|WaterAndWaste|waterQualityIdx": "BIGINT. Water quality index. Example: 57.", + "polar_equipment|WaterAndWaste|wasteTankPct": "REAL. Waste tank percentage. Example: 28.", + "polar_equipment|Scientific|SCI_ID": "BIGSERIAL. Unique identifier for the scientific instrument record. PK.", + "polar_equipment|Scientific|equipRef": "TEXT. Reference to the equipment. FK to Equipment.EQUIP_CODE.", + "polar_equipment|Scientific|sciEquipStat": "TEXT. Status of the scientific equipment. Possible values: Fault, Operating, Standby.", + "polar_equipment|Scientific|dataLogStat": "TEXT. Data logging status for the equipment. Possible values: Active, Error, Paused.", + "polar_equipment|Scientific|sensorStat": "TEXT. Sensor status. Possible values: Error, Normal, Warning.", + "polar_equipment|Scientific|calibrStat": "TEXT. Calibration status. Possible values: Due, Expired, Valid.", + "polar_equipment|Scientific|measureAccPct": "REAL. Measurement accuracy percentage. Example: 99.3.", + "polar_equipment|WeatherAndStructure|WEATHER_ID": "BIGSERIAL. Unique identifier for the weather and structure record. PK.", + "polar_equipment|WeatherAndStructure|loc_link": "TEXT. Reference to the location. FK to Location.STATION_name.", + "polar_equipment|WeatherAndStructure|opmaint_link": "BIGINT. Reference to the operation and maintenance record. FK to OperationMaintenance.OP_MAINT_ID.", + "polar_equipment|WeatherAndStructure|extTempC": "REAL. External temperature in Celsius. Example: -14.9.", + "polar_equipment|WeatherAndStructure|windSpeed_ms": "REAL. Wind speed in meters per second. Example: 26.5.", + "polar_equipment|WeatherAndStructure|windDir_deg": "REAL. Wind direction in degrees. **NULL means no wind direction data recorded.**. Example: 71.4.", + "polar_equipment|WeatherAndStructure|precipType": "TEXT. Type of precipitation (e.g., rain, snow). **NULL means no precipitation type recorded.**. Possible values: Blowing Snow, Ice, Snow.", + "polar_equipment|WeatherAndStructure|structIntegrityStat": "TEXT. Structural integrity status. **NULL means no structural integrity status recorded.**. Possible values: Critical, Normal, Warning.", + "polar_equipment|WeatherAndStructure|baroPress_hpa": "REAL. Barometric pressure in hectopascals. 
Example: 975.3.", + "polar_equipment|WeatherAndStructure|solarRad_wm2": "REAL. Solar radiation in watts per square meter. Example: 541.0.", + "polar_equipment|WeatherAndStructure|snowDepth_cm": "BIGINT. Snow depth in centimeters. Example: 143.6.", + "polar_equipment|WeatherAndStructure|iceThick_cm": "REAL. Ice thickness in centimeters. Example: 254.4.", + "polar_equipment|WeatherAndStructure|visibility_km": "REAL. Visibility in kilometers. Example: 42.8.", + "polar_equipment|WeatherAndStructure|precipRate_mmh": "REAL. Precipitation rate in millimeters per hour. Example: 2.4.", + "polar_equipment|WeatherAndStructure|snowLoad_kgm2": "BIGINT. Snow load in kilograms per square meter. Example: 459.7.", + "polar_equipment|WeatherAndStructure|structLoadPct": "REAL. Structural load percentage. Example: 62.", + "polar_equipment|WeatherAndStructure|vibrLevel_mms2": "REAL. Vibration level in millimeters squared. Example: 8.61.", + "polar_equipment|WeatherAndStructure|noiseLevel_dB": "REAL. Noise level in decibels. Example: 38.5.", + "polar_equipment|ThermalSolarWindAndGrid|THERMAL_ID": "BIGSERIAL. Unique identifier for the thermal, solar, wind, and grid record. PK.", + "polar_equipment|ThermalSolarWindAndGrid|comm_link": "BIGINT. Reference to the communication record. FK to Communication.COMM_ID.", + "polar_equipment|ThermalSolarWindAndGrid|batt_link": "BIGINT. Reference to the battery record. FK to PowerBattery.PWR_BATT_ID.", + "polar_equipment|ThermalSolarWindAndGrid|thermalImgStat": "TEXT. Thermal image status. Possible values: Critical, Normal, Warning.", + "polar_equipment|ThermalSolarWindAndGrid|insulationStat": "TEXT. Insulation status. Possible values: Fair, Good, Poor.", + "polar_equipment|ThermalSolarWindAndGrid|heatLoss_kwh": "REAL. Heat loss in kilowatt-hours. **NULL means no heat loss data recorded.**. Example: 1.11.", + "polar_equipment|ThermalSolarWindAndGrid|windOutput_w": "REAL. Wind turbine output in watts. **NULL means no wind output data recorded.**. Example: 656.8.", + "polar_equipment|ThermalSolarWindAndGrid|backupPowerStat": "TEXT. Backup power status. **NULL means no backup power status recorded.**. Possible values: Active, Fault, Standby.", + "polar_equipment|ThermalSolarWindAndGrid|solarPanelStat": "TEXT. Solar panel status. Possible values: Active, Fault, Inactive.", + "polar_equipment|ThermalSolarWindAndGrid|solarOutput_w": "REAL. Solar panel output in watts. Example: 746.1.", + "polar_equipment|ThermalSolarWindAndGrid|solarEffPct": "REAL. Solar efficiency percentage. Example: 2.0.", + "polar_equipment|ThermalSolarWindAndGrid|solarTempC": "REAL. Solar panel temperature in Celsius. Example: -14.9.", + "polar_equipment|ThermalSolarWindAndGrid|windTurbineStat": "TEXT. Wind turbine status. Possible values: Fault, Operating, Stopped.", + "polar_equipment|ThermalSolarWindAndGrid|windRPM": "BIGINT. Wind turbine RPM. Example: 130.3.", + "polar_equipment|ThermalSolarWindAndGrid|powerGridStat": "TEXT. Power grid status. Possible values: Connected, Disconnected, Island Mode.", + "polar_equipment|ThermalSolarWindAndGrid|powerQualIdx": "REAL. Power quality index. Example: 95.", + "polar_equipment|ThermalSolarWindAndGrid|fuelCellStat": "TEXT. Fuel cell status. Possible values: Fault, Operating, Standby.", + "polar_equipment|ThermalSolarWindAndGrid|fuelCellOutput_w": "REAL. Fuel cell output in watts. Example: 185.0.", + "polar_equipment|ThermalSolarWindAndGrid|fuelCellEffPct": "REAL. Fuel cell efficiency percentage. 
Example: 48.2.", + "polar_equipment|ThermalSolarWindAndGrid|H2LevelPct": "REAL. Hydrogen level percentage. Example: 95.", + "polar_equipment|ThermalSolarWindAndGrid|O2LevelPct": "REAL. Oxygen level percentage. Example: 82.", + "polar_equipment|StationEquipmentType|station_name": "TEXT. Reference to the station. FK to Location.STATION_name. Example: Station-14.", + "polar_equipment|StationEquipmentType|equip_type": "TEXT. Reference to the equipment type. FK to EquipmentType.EquipType. Possible values: Communication, Generator, Safety, Scientific, Shelter, Vehicle.", + "polar_equipment|StationEquipmentType|PRIMARY KEY (station_name, equip_type)": "PRIMARY KEY. Composite primary key combining station name and equipment type.", + "polar_equipment|EquipmentType|type_indices": { + "column_meaning": "JSONB column. Bundles the various performance-, efficiency-, safety- and sustainability-related indices that characterise an equipment family.", + "fields_meaning": { + "performance_score": "REAL. Performance index for the equipment type. Example: 72.8.", + "energy_efficiency_idx": "REAL. Efficiency index for the equipment type. Example: 47.1.", + "safety_idx": "REAL. Safety index for the equipment type. Example: 75.9.", + "environmental_impact_idx": "REAL. Environmental impact index for the equipment type. Example: 36.7." + } + }, + "polar_equipment|PowerBattery|battery_telemetry": { + "column_meaning": "JSONB column. Captures the full real-time power and battery state of a unit (operating mode, SOC, health, charge parameters, consumption, efficiency) as a single JSONB blob.", + "fields_meaning": { + "power_state": { + "system_state": "TEXT. Power status of the battery. Possible values: Charging, Off, On, Sleep.", + "primary_source": "TEXT. Source of the power for the battery. Possible values: Battery, Diesel, Hybrid, Solar, Wind.", + "instant_consumption_w": "REAL. Power consumption in watts. Example: 4383.2.", + "conversion_eff_pct": "REAL. Energy efficiency percentage of the battery. Example: 81.8." + }, + "battery_pack": { + "soc_pct": "REAL. Battery level percentage. Example: 19.", + "health_pct": "REAL. Battery health percentage. Example: 93.", + "cycle_count": "BIGINT. Number of cycles the battery has undergone. Example: 79.", + "temperature_c": "REAL. Battery temperature in Celsius. Example: -22.9." + }, + "charging": { + "charge_state": "TEXT. Charging status of the battery. Possible values: Charging, Error, Full, Not Charging.", + "current_a": "REAL. Charging current in amperes. **NULL means no current recorded.**. Example: 24.12.", + "voltage_v": "REAL. Charging voltage in volts. Example: 26.5." + } + } + }, + "polar_equipment|EngineAndFluids|engine_fluids_snapshot": { + "column_meaning": "JSONB column. Consolidates engine operating metrics together with the levels, temperatures and pressures of all key fluids (fuel, oil, coolant, hydraulics) for easier telemetry ingestion.", + "fields_meaning": { + "fuel": { + "level_pct": "REAL. Fuel level percentage. Example: 49.", + "consumption_lph": "REAL. Fuel consumption in liters per hour. Example: 44.48.", + "rail_pressure_kpa": "BIGINT. Fuel pressure in kPa. **NULL means no fuel pressure data recorded.**. Example: 89.9.", + "temperature_c": "REAL. Fuel temperature in Celsius. Example: -37.2." + }, + "oil": { + "level_pct": "REAL. Oil level percentage. Example: 49.", + "pressure_kpa": "BIGINT. Oil pressure in kPa. Example: 331.4.", + "temperature_c": "REAL. Oil temperature in Celsius. Example: 6.6." + }, + "coolant": { + "level_pct": "REAL. 
Coolant level percentage. Example: 64.", + "temperature_c": "REAL. Coolant temperature in Celsius. Example: 69.3.", + "pressure_kpa_or_code": "TEXT. Coolant pressure in kPa. Example: 88.9 hPa." + }, + "hydraulic": { + "pressure_kpa": "REAL. Hydraulic pressure in kPa. Example: 19647.2.", + "temperature_c": "REAL. Hydraulic temperature in Celsius. Example: 39.2.", + "fluid_level_pct": "REAL. Hydraulic fluid percentage. Example: 59." + }, + "engine_core": { + "rpm": "BIGINT. Engine RPM. **NULL means no engine RPM data recorded.**. Example: 3133.0.", + "load_pct": "REAL. Engine load percentage. Example: 61.", + "block_temp_c": "REAL. Engine temperature in Celsius. Example: 4.1.", + "lifetime_hours": "REAL. Total engine hours. Example: 31452." + } + } + }, + "polar_equipment|ChassisAndVehicle|ground_vehicle_status": { + "column_meaning": "JSONB column. Wraps the principal running-gear and motion telemetry (brakes, tyres, tracks, suspension, speed & load) into a single JSONB field for rapid health checks.", + "fields_meaning": { + "brake_system": { + "pad_wear_pct": "REAL. Brake pad wear percentage. **NULL means no brake pad wear data recorded.**. Example: 69.0.", + "fluid_level_pct": "REAL. Brake fluid percentage. Example: 97.", + "pressure_kpa_or_code": "TEXT. Brake pressure in kPa. Example: 370.8 hPa." + }, + "tires": { + "pressure_kpa": "BIGINT. Tire pressure in kPa. Example: 345.2.", + "temperature_c": "REAL. Tire temperature in Celsius. Example: 28.7.", + "tread_depth_mm": "REAL. Tire tread thickness in millimeters. Example: 12.8." + }, + "tracks": { + "tension_kN": "REAL. Track tension in kilonewtons. Example: 21.9.", + "wear_pct": "REAL. Track wear percentage. **NULL means no track wear data recorded.**. Example: 35.0." + }, + "suspension": { + "ride_height_mm": "REAL. Suspension height in millimeters. **NULL means no suspension height data recorded.**. Example: 351.9." + }, + "vehicle_motion": { + "speed_kmh": "TEXT. Vehicle speed in kilometers per hour. Example: 49.30 m/s.", + "payload_kg": "REAL. Vehicle load in kilograms. **NULL means no vehicle load data recorded.**. Example: 9585.3.", + "attitude_angle_deg": "REAL. Vehicle angle in degrees. Example: 1.6.", + "heading_deg": "REAL. Vehicle heading in degrees. Example: 89.3." + } + } + }, + "polar_equipment|CabinEnvironment|cabin_env_snapshot": { + "column_meaning": "JSONB column. Stores all atmosphere, HVAC, access and emergency-beacon readings relevant to crew comfort & safety in a single JSONB column.", + "fields_meaning": { + "emergency": "TEXT. Emergency beacon status. Possible values: Active, Standby, Testing.", + "air_metrics": { + "temperature_c": "REAL. Cabin temperature in Celsius. **NULL means no cabin temperature data recorded.**. Example: -0.3.", + "humidity_pct": "REAL. Cabin humidity percentage. Example: 58.8.", + "pressure_kpa": "BIGINT. Cabin pressure in kPa. Example: 98.9.", + "co2_ppm": "REAL. Cabin CO2 level in parts per million. Example: 557.", + "o2_pct": "REAL. Cabin oxygen level in percentage. Example: 19.9.", + "air_quality_idx": "REAL. Cabin air quality index. Example: 237." + }, + "hvac": { + "vent_state": "TEXT. Ventilation status. **NULL means no ventilation status recorded.**. Possible values: Auto, Off, On.", + "vent_speed_pct": "REAL. Ventilation speed percentage. Example: 13.", + "heater_state": "TEXT. Heater status. Possible values: Auto, Off, On.", + "heater_temp_c": "REAL. Heater temperature in Celsius. Example: 34.9.", + "defroster_state": "TEXT. Defroster status. Possible values: Auto, Off, On." 
+ }, + "access": { + "window_state": "TEXT. Window status. Possible values: Closed, Open, Partial.", + "door_state": "TEXT. Door status. Possible values: Closed, Locked, Open.", + "hatch_state": "TEXT. Hatch status. Possible values: Closed, Locked, Open." + } + } + } +} \ No newline at end of file diff --git a/polar_equipment/polar_equipment_kb.jsonl b/polar_equipment/polar_equipment_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..5d0a5553252f8d7317506a4b4d7e30f6ed3c1f0d --- /dev/null +++ b/polar_equipment/polar_equipment_kb.jsonl @@ -0,0 +1,58 @@ +{"id": 0, "knowledge": "Equipment Efficiency Rating (EER)", "description": "A composite metric that evaluates the overall efficiency of equipment based on performance, reliability, and environmental impact.", "definition": "EER = \\frac{\\text{performance score} + \\text{reliability index}}{2} \\times (1 - \\frac{\\text{environmental impact index}}{100}), \\text{ where higher values indicate more efficient equipment with better performance and lower environmental impact.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Operational Readiness Score (ORS)", "description": "Quantifies how ready equipment is for immediate deployment based on operational status and maintenance schedule.", "definition": "ORS = \\begin{cases} 10 \\times (1 - \\frac{\\text{operation hours}}{\\text{maintenance cycle hours}}) & \\text{if operational status = 'Active'} \\\\ 5 \\times (1 - \\frac{\\text{operation hours}}{\\text{maintenance cycle hours}}) & \\text{if operational status = 'Standby'} \\\\ 0 & \\text{otherwise} \\end{cases}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Energy Sustainability Index (ESI)", "description": "Measures the sustainability of an equipment's energy usage by evaluating energy efficiency and renewable sources.", "definition": "ESI = \\text{energy efficiency percent} \\times \\begin{cases} 1.5 & \\text{if power source IN ('Solar', 'Wind', 'Hybrid')} \\\\ 1.0 & \\text{if power source = 'Battery'} \\\\ 0.7 & \\text{if power source = 'Diesel'} \\\\ 0 & \\text{otherwise} \\end{cases}, \\text{ providing higher ratings for renewable energy and lower ratings for fossil fuels.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Structural Safety Factor (SSF)", "description": "Evaluates the safety margin of structures under extreme weather conditions.", "definition": "SSF = \\frac{100 - \\text{structural load percent}}{100} \\times \\begin{cases} 0.5 & \\text{if snow load (kg/m^2) > 100 or wind speed (m/s) > 20} \\\\ 0.8 & \\text{if snow load (kg/m^2) > 50 or wind speed (m/s) > 10} \\\\ 1.0 & \\text{otherwise} \\end{cases}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Communication Reliability Index (CRI)", "description": "Assesses the reliability of communication systems based on signal metrics and antenna status.", "definition": "CRI = \\begin{cases} 0 & \\text{if antenna status = 'Error'} \\\\ 5 & \\text{if antenna status = 'Warning'} \\\\ 10 & \\text{if antenna status = 'Normal'} \\\\ 0 & \\text{otherwise} \\end{cases} \\times (1 - \\frac{\\text{signal latency (ms)}}{1000}), \\text{ where lower latency and better antenna status result in higher reliability.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Vehicle Performance Composite (VPC)", "description": "A metric that evaluates vehicle performance based on mechanical 
condition and operational efficiency.", "definition": "VPC = (1 - \\frac{\\text{brake pad wear percent} + \\text{track wear percent}}{200}) \\times \\frac{\\text{vehicle speed (km/h)}}{50} \\times \\frac{\\text{engine load percent}}{100}, \\text{ where lower wear percentages and optimal engine load contribute to better performance.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Thermal Insulation Efficiency (TIE)", "description": "Measures how effectively a structure retains heat based on insulation status and heat loss rate.", "definition": "TIE = \\begin{cases} 0.9 - \\frac{\\text{heat loss rate (kWh)}}{10} & \\text{if insulation status = 'Good'} \\\\ 0.6 - \\frac{\\text{heat loss rate (kWh)}}{10} & \\text{if insulation status = 'Fair'} \\\\ 0.3 - \\frac{\\text{heat loss rate (kWh)}}{10} & \\text{if insulation status = 'Poor'} \\end{cases}, \\text{ where lower heat loss and better insulation result in higher efficiency.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Water Resource Management Index (WRMI)", "description": "Evaluates the efficiency of water resource management based on water levels, quality, and waste levels.", "definition": "WRMI = \\frac{\\text{water level percent}}{100} \\times \\frac{\\text{water quality index}}{100} \\times (1 - \\frac{\\text{waste tank level percent}}{100}), \\text{ where higher water quality and appropriate water/waste levels indicate better management.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Scientific Equipment Reliability (SER)", "description": "Quantifies the reliability of scientific equipment based on calibration status and measurement accuracy.", "definition": "SER = \\text{measurement accuracy percent} \\times \\begin{cases} 1.0 & \\text{if calibration status = 'Valid'} \\\\ 0.7 & \\text{if calibration status = 'Due'} \\\\ 0.3 & \\text{if calibration status = 'Expired'} \\end{cases}, \\text{ where valid calibration and high accuracy result in more reliable scientific data.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Renewable Energy Contribution (REC)", "description": "Calculates the percentage contribution of renewable energy sources to the total power generation.", "definition": "REC = \\frac{\\text{solar output (W)} + \\text{wind output (W)}}{\\text{fuel cell output (W)} + \\text{solar output (W)} + \\text{wind output (W)}} \\times 100, \\text{ where higher values indicate greater reliance on renewable energy.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Extreme Weather Readiness (EWR)", "description": "Evaluates how prepared equipment and structures are for extreme weather conditions.", "definition": "A composite rating where equipment is considered 'Extreme Weather Ready' if it maintains an SSF > 0.7 and has operational heating systems (heater status not 'Off'), proper insulation (insulation status not 'Poor'), and functional emergency systems (emergency light status = 'On' or 'Testing').", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 11, "knowledge": "Critical Equipment", "description": "Identifies equipment that is essential for life support and safety in polar environments.", "definition": "Equipment is designated as 'Critical' if it belongs to the 'Safety' equipment type, has a safety index > 0.75, and is associated with any of these life-critical systems: life support status, oxygen supply status, or 
heater systems where temperatures are below freezing (external temperature (°C) < 0).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Maintenance Priority Level", "description": "Classifies equipment based on the urgency of required maintenance.", "definition": "Equipment is categorized into maintenance priority levels: 'Immediate Attention' (operation hours > maintenance cycle hours OR operational status = 'Repair'), 'Scheduled Service' (operation hours > 0.8 * maintenance cycle hours), and 'Routine Maintenance' (all other cases), helping prioritize resource allocation.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Energy Sustainability Classification", "description": "Categories equipment based on their energy sustainability index for environmental impact assessment.", "definition": "Equipment is classified as 'Green' (ESI > 0.8), 'Intermediate' (ESI between 0.4 and 0.8), or 'High Impact' (ESI < 0.4), with Green indicating environmentally sustainable operations.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 14, "knowledge": "Communication Zone Status", "description": "Evaluates the communication coverage and reliability in different operational zones.", "definition": "A zone is classified as having 'Reliable Coverage' when equipment within it maintains a CRI > 7, has active satellite connections (satellite status = 'Connected'), and supports emergency beacon functionality (emergency beacon status != 'Inactive').", "type": "domain_knowledge", "children_knowledge": [4]} +{"id": 15, "knowledge": "Vehicle Operational Safety Threshold", "description": "Defines the safety threshold for vehicle operations based on multiple safety factors.", "definition": "A vehicle is considered 'Safe for Operation' when it maintains a VPC > 0.6, has brake fluid levels above 50%, brake pad wear below 70%, adequate tire pressure (tire pressure (kPa) > 200).", "type": "domain_knowledge", "children_knowledge": [5]} +{"id": 16, "knowledge": "Scientific Data Reliability Classification", "description": "Classifies scientific data based on equipment reliability and calibration status.", "definition": "Scientific data is classified as 'Research Grade' when collected by equipment with SER > 0.9, with valid calibration status, and under appropriate environmental conditions for the equipment type.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 17, "knowledge": "Cabin Habitability Standard", "description": "Defines the minimum standards for habitable cabin conditions in polar environments.", "definition": "A cabin meets 'Habitability Standards' when it maintains internal temperature (°C) between 18-24°C, oxygen level (%) above 19.5%, CO2 level (ppm) below 1000 ppm, functioning ventilation systems, and operational heating systems.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Water Conservation Requirement", "description": "Specifies the conditions under which water conservation measures must be implemented.", "definition": "Water conservation measures must be implemented when the WRMI falls below 0.5, indicating either low water levels, poor water quality, or high waste tank levels that require immediate attention to maintain sustainable water usage.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 19, "knowledge": "Sustainable Energy Operation", "description": "Defines the conditions for energy-sustainable operations in polar environments.", "definition": "An 
operation is considered 'Energy-Sustainable' when it maintains a REC above 70% (meaning more than 70% of energy comes from renewable sources) while maintaining full operational capability and adequate power reserves for at least 48 hours in case of emergency.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 20, "knowledge": "reliabilityindex", "description": "Illustrates the significance of reliability index measurements in equipment durability.", "definition": "The reliability index typically ranges from 0 to 1, where values below 0.5 indicate equipment that fails frequently and requires constant maintenance, values between 0.5-0.8 represent equipment with occasional failures that require regular maintenance, and values above 0.8 indicate highly reliable equipment with minimal downtime.", "type": "value_illustration", "children_knowledge": -1} +{"id": 21, "knowledge": "operationhours", "description": "Illustrates the meaning and importance of equipment operation hours.", "definition": "Operation hours represent the cumulative time an equipment has been in active use. New equipment typically has low hours (0-100), mid-life equipment shows moderate hours (100-1000), while equipment approaching maintenance or replacement typically exceeds 1000 hours. The ratio of operation hours to maintenance cycle hours is critical for preventative maintenance scheduling.", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "externaltemperaturec", "description": "Illustrates the significance of external temperature readings in polar environments.", "definition": "External temperature in polar regions typically ranges from -70°C to 10°C. Temperatures below -40°C represent extreme cold requiring special equipment protection measures, -20°C to -40°C require standard cold weather protocols, while temperatures above -20°C are considered relatively mild for polar operations but still require normal cold weather precautions.", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "windspeedms", "description": "Illustrates the impact of wind speed measurements on operations and safety.", "definition": "Wind speeds in polar environments typically range from 0 to 60 m/s. Speeds below 5 m/s represent calm conditions, 5-15 m/s indicate moderate winds with minor operational impact, 15-25 m/s represent strong winds requiring additional safety measures, and speeds above 25 m/s indicate dangerous conditions that may require suspension of outdoor activities and securing of equipment.", "type": "value_illustration", "children_knowledge": -1} +{"id": 24, "knowledge": "powerconsumptionw", "description": "Illustrates the significance of power consumption measurements in energy management.", "definition": "Power consumption measured in watts varies by equipment type. Small scientific instruments typically consume 5-50W, communication equipment 20-200W, heating systems 500-5000W, and vehicle systems 1000-10000W. 
Understanding consumption patterns is crucial for power budgeting and determining appropriate power source sizing in isolated polar environments.", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Water Quality Classification System (WQCS)", "description": "A standardized system that categorizes water quality for health, safety, and operational purposes in polar environments.", "definition": "Water is classified into five quality categories based on its quality index: 'High-Quality' WHEN (water quality index >= 91), suitable for all purposes including direct consumption; 'Good' WHEN (water quality index >= 71 AND water quality index < 91), safe for consumption after standard treatment; 'Moderate' WHEN (water quality index >= 51 AND water quality index < 71), acceptable for washing but not consumption; 'Poor' WHEN (water quality index >= 26 AND water quality index < 51), suitable only for limited non-contact uses; 'Unsafe' WHEN (water quality index < 26), unsuitable for any use and requiring immediate remediation.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "cabinclimate.co2_ppm", "description": "Illustrates the health and cognitive implications of carbon dioxide levels in enclosed environments.", "definition": "CO2 levels in enclosed spaces like cabins are measured in parts per million (ppm). Levels below 600 ppm indicate excellent ventilation, 600-1000 ppm represent good air quality, 1000-2500 ppm indicate poor ventilation that may cause drowsiness and reduced cognitive function, while levels above 2500 ppm may cause headaches, sleepiness, and significantly impaired cognitive performance.", "type": "value_illustration", "children_knowledge": -1} +{"id": 27, "knowledge": "energyefficiencypercent", "description": "Illustrates the meaning and importance of energy efficiency percentages.", "definition": "Energy efficiency percentage typically ranges from 10% to 99%. Values below 30% indicate inefficient systems typical of older equipment, 30-60% represent standard efficiency for conventional equipment, 60-80% indicate high-efficiency modern systems, while values above 80% represent cutting-edge technology with optimal efficiency that minimizes energy waste and operational costs.", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "safetyindex", "description": "Illustrates the significance of safety index ratings for operational risk assessment.", "definition": "Safety index typically ranges from 0 to 1, where values below 0.5 indicate equipment with significant safety concerns requiring immediate attention or limited operation, values between 0.5-0.7 represent equipment with acceptable safety for normal operations with appropriate precautions, and values above 0.7 indicate equipment with excellent safety features suitable for all operational conditions including those with elevated risks.", "type": "value_illustration", "children_knowledge": -1} +{"id": 29, "knowledge": "fuelcellefficiencypercent", "description": "Illustrates the technical significance of fuel cell efficiency ratings.", "definition": "Fuel cell efficiency percentages typically range from 40% to 90%. 
Values below 50% represent older or degraded fuel cell technology, 50-70% indicate standard efficiency modern fuel cells suitable for general applications, and values above 70% represent high-performance fuel cells with optimal conversion of chemical energy to electrical power with minimal waste heat generation.", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Overall Safety Performance Index (OSPI)", "description": "Comprehensively evaluates equipment's overall safety performance based on safety index and equipment efficiency rating", "definition": "OSPI = \\text{safety index}/100 \\times EER \\times 0.8", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 31, "knowledge": "Polar Transportation Efficiency Coefficient (PTEC)", "description": "Measures vehicle transportation efficiency in polar conditions, considering vehicle performance and energy sustainability", "definition": "PTEC = VPC \\times (0.6 + 0.4 \\times ESI \\div 100)", "type": "calculation_knowledge", "children_knowledge": [2, 5]} +{"id": 32, "knowledge": "Base Station Communication Stability Index (BSCSI)", "description": "Evaluates the stability and reliability of polar base station communication systems", "definition": "BSCSI = CRI \\times (1 + 0.2 \\times \\text{radio signal (dBm)} \\div 100) \\times (1 - 0.01 \\times (1000 - \\text{latency (ms)}))", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 33, "knowledge": "Life Support System Reliability (LSSR)", "description": "Evaluates the reliability of life support systems under polar conditions", "definition": "LSSR = 0.7 \\times ORS + 0.3 \\times TIE", "type": "calculation_knowledge", "children_knowledge": [1, 6]} +{"id": 34, "knowledge": "Scientific Mission Success Probability (SMSP)", "description": "Predicts the probability of successful completion of scientific missions", "definition": "SMSP = SER \\times (0.8 + 0.2 \\times CRI \\div 10)", "type": "calculation_knowledge", "children_knowledge": [4, 8]} +{"id": 35, "knowledge": "Resource Self-Sufficiency Index (RSSI)", "description": "Measures a polar site's self-sufficiency in terms of resources", "definition": "RSSI = 0.6 \\times REC + 0.4 \\times WRMI", "type": "calculation_knowledge", "children_knowledge": [7, 9]} +{"id": 36, "knowledge": "Extreme Climate Adaptation Coefficient (ECAC)", "description": "Evaluates equipment adaptation capability under extreme climate conditions", "definition": "ECAC = SSF \\times (1 + TIE \\times 0.5) \\times \\begin{cases} 0.7 & \\text{if external temperature (°C) < -30} \\\\ 0.85 & \\text{if external temperature (°C) < -15} \\\\ 1.0 & \\text{otherwise} \\end{cases}", "type": "calculation_knowledge", "children_knowledge": [3, 6]} +{"id": 37, "knowledge": "Long-term Operational Stability Score (LOSS)", "description": "Evaluates the stability of equipment during long-term operation", "definition": "LOSS = 0.5 \\times EER + 0.5 \\times ORS \\times (1 - \\frac{\\text{operation hours}}{20000})", "type": "calculation_knowledge", "children_knowledge": [0, 1]} +{"id": 38, "knowledge": "Energy-Water Resource Integration Index (EWRII)", "description": "Evaluates the integration efficiency of energy and water resource management", "definition": "EWRII = 0.5 \\times ESI + 0.5 \\times WRMI \\times (1 - \\frac{\\text{heater temperature (°C)}}{100})", "type": "calculation_knowledge", "children_knowledge": [2, 7]} +{"id": 39, "knowledge": "Comprehensive Operational Reliability Indicator (CORI)", "description": "Comprehensively 
assesses the overall reliability of polar equipment operations", "definition": "CORI = 0.4 \\times EER + 0.4 \\times ORS + 0.2 \\times CRI", "type": "calculation_knowledge", "children_knowledge": [0, 1, 4]} +{"id": 40, "knowledge": "Extreme Operating Conditions (EOC)", "description": "Defines the extreme environmental conditions under which equipment can safely operate", "definition": "Equipment is considered to 'operate safely under extreme conditions' when its SSF > 0.65 and ECAC > 0.8.", "type": "domain_knowledge", "children_knowledge": [3, 36]} +{"id": 41, "knowledge": "Emergency Response Readiness Status (ERRS)", "description": "Assesses a polar site's preparedness to respond to emergency situations", "definition": "A polar site is rated as 'emergency response ready' when its critical equipment maintains OSPI > 0.75 and LSSR > 0.8, with emergency communication status = 'Operational' and backup power status = 'Active' and battery level (%) > 85.", "type": "domain_knowledge", "children_knowledge": [30, 33]} +{"id": 42, "knowledge": "Sustainable Polar Operations (SPO)", "description": "Defines sustainability standards for polar operations", "definition": "Polar operations are defined as 'sustainable' when the site maintains RSSI > 0.7 and EWRII > 0.65, with waste management status = 'Normal' and environmental impact index < 6.0.", "type": "domain_knowledge", "children_knowledge": [35, 38]} +{"id": 43, "knowledge": "Critical Scientific Equipment Status (CSES)", "description": "Determines the operational status and reliability of critical scientific equipment", "definition": "Scientific equipment is classified as 'Fully Operational' (SER > 0.9 and SMSP > 0.85), 'Degraded Operation' (SER > 0.7 and SMSP > 0.6), or 'Needs Repair' (other cases).", "type": "domain_knowledge", "children_knowledge": [8, 34]} +{"id": 44, "knowledge": "Polar Vehicle Safe Operation Conditions (PVSOC)", "description": "Determines the conditions for safe operation of polar vehicles", "definition": "Polar vehicles are considered 'suitable for polar missions' when they maintain PTEC > 0.7 and VPC > 0.75, with operational status = 'Active' and safety index ≥ 0.8.", "type": "domain_knowledge", "children_knowledge": [5, 31]} +{"id": 45, "knowledge": "Communication Network Resilience Assessment (CNRA)", "description": "Assesses the resilience and interference resistance of polar communication networks", "definition": "Communication networks are assessed as having 'High Resilience' (CRI > 0.8 and BSCSI > 0.85), 'Medium Resilience' (CRI > 0.6 and BSCSI > 0.7), or 'Low Resilience' (other cases).", "type": "domain_knowledge", "children_knowledge": [4, 32]} +{"id": 46, "knowledge": "Critical Infrastructure Protection Level (CIPL)", "description": "Determines the protection level for polar critical infrastructure", "definition": "Infrastructure is assigned protection level 'A' (SSF > 0.8, LOSS > 0.85, and OSPI > 0.9), 'B' (SSF > 0.7, LOSS > 0.75, and OSPI > 0.8), or 'C' (other cases).", "type": "domain_knowledge", "children_knowledge": [3, 30, 37]} +{"id": 47, "knowledge": "Long-term Scientific Mission Viability (LSMV)", "description": "Assesses the viability of long-term scientific missions under polar conditions", "definition": "Scientific missions are assessed as 'long-term viable' when all involved scientific equipment maintains SMSP > 0.8 and overall site operations maintain CORI > 0.75, with calibration status = 'Valid' and data logging status = 'Active'.", "type": "domain_knowledge", "children_knowledge": [34, 39]} +{"id": 
48, "knowledge": "Polar Base Energy Security Status (PBESS)", "description": "Determines the security status of energy supply for polar bases", "definition": "A polar base is assessed as being in an 'energy secure' state when it maintains REC > 65%, ESI > 0.7, and RSSI > 0.75, with battery level (%) > 75 and hydrogen level percent > 70.", "type": "domain_knowledge", "children_knowledge": [2, 9, 35]} +{"id": 49, "knowledge": "Comprehensive Environmental Adaptability Rating (CEAR)", "description": "Assesses the overall adaptability of equipment and systems to the polar environment", "definition": "Equipment and systems are rated as having 'Excellent Adaptability' (ECAC > 0.85 and SSF > 0.8 and insulation status = 'Good'), 'Good Adaptability' (ECAC > 0.7 and SSF > 0.65 and insulation status != 'Poor'), or 'Limited Adaptability' (other cases).", "type": "domain_knowledge", "children_knowledge": [10, 36]} +{"id": 50, "knowledge": "Extreme Weather Readiness Status (EWRS)", "description": "A binary classification system that determines if equipment has met all necessary conditions to safely operate during extreme weather events.", "definition": "Equipment is classified as 'Extreme Weather Ready' when (SSF > 0.7) and (heater status != 'Off') and (insulation status != 'Good') and (emergency light status is either ('On', 'Testing')); OTHERWISE equipment is classified as 'Not Ready'. This evaluation combines structural integrity checks with essential operational systems status to determine immediate readiness for extreme weather exposure.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 51, "knowledge": "Life Support Reliability Classification (LSRC)", "description": "Categorizes life support systems into reliability classes based on their LSSR score for operational decision-making.", "definition": "Life support systems are classified into three reliability categories: 'High Reliability' WHEN (LSSR >= 0.8), 'Moderate Reliability' WHEN (LSSR >= 0.6 AND LSSR < 0.8), and 'Low Reliability' WHEN (LSSR < 0.6).", "type": "domain_knowledge", "children_knowledge": [33]} +{"id": 52, "knowledge": "Energy Sustainability Classification System (ESCS)", "description": "A comprehensive classification system that categorizes operational energy sustainability based on renewable energy contribution percentages.", "definition": "Operations are classified into three sustainability levels: 'Energy-Sustainable' WHEN (REC > 70); 'Moderately Sustainable' WHEN (REC > 50 AND REC <= 70); 'Low Sustainability' WHEN (REC <= 50).", "type": "domain_knowledge", "children_knowledge": [9, 19]} +{"id": 53, "knowledge": "Water Resource Management Status Classification (WRMSC)", "description": "A comprehensive classification system that categorizes water resource management status based on WRMI values to guide operational decisions.", "definition": "Water management operations are classified into three status levels: 'Conservation Needed' WHEN (WRMI < 0.5), indicating critical resource limitations requiring immediate conservation measures; 'Monitoring Advised' WHEN (WRMI >= 0.5 AND WRMI < 0.7), representing adequate but vigilant management requiring regular system monitoring; 'Sustainable Management' WHEN (WRMI >= 0.7), indicating optimal water resource utilization suitable for unrestricted operations.", "type": "domain_knowledge", "children_knowledge": [7, 18]} +{"id": 54, "knowledge": "Complete Data Set", "description": "A data quality rule requiring that records used for a specific analysis must be complete and not 
contain null values for key input metrics.", "definition": "To ensure the integrity of the BSCSI analysis, the source data is filtered to include only records where the 'antenna status', 'network latency (ms)', and 'radio signal (dBm)' fields are all non-null. This prevents incomplete records from producing null results that could skew the aggregate analysis.", "type": "domain_knowledge", "children_knowledge": [32, 4]} +{"id": 55, "knowledge": "Vehicle Efficiency and Sustainability Report", "description": "A comprehensive analysis that ranks vehicles by combining their mechanical performance with their energy sustainability to produce a holistic efficiency score.", "definition": "The process of calculating VPC and ESI for each vehicle, using those values to calculate PTEC, and finally ordering the vehicles by the resulting PTEC score to create a ranked performance report.","type": "domain_knowledge", "children_knowledge": [31, 2, 5]} +{"id": 56, "knowledge": "Temperature-Zoned Average Battery Health", "description": "A metric that calculates the average battery health for all equipment operating within specific, predefined external temperature zones.", "definition": "AvgHealth_{\\text{zone}} = \\overline{\\text{health percent}}_{\\text{zone}} \\text{ where zone is defined as:} \\\\ \\begin{cases} \\text{Extreme Cold} & \\text{if external temperature (°C) < -40^\\circ\\text{C}} \\\\ \\text{Standard Cold} & \\text{if } -40^\\circ\\text{C} \\le \\text{external temperature (°C)} < -20^\\circ\\text{C} \\\\ \\text{Mild Cold} & \\text{if } \\text{external temperature (°C)} \\ge -20^\\circ\\text{C} \\end{cases}", "type": "calculation_knowledge", "children_knowledge": [22]} +{"id": 57, "knowledge": "Failed Inspection Activation Lockout", "description": "A critical safety protocol implemented at the database level to prevent unsafe equipment from being put into service.", "definition": "A rule, typically enforced by a database trigger, that blocks any attempt to change an equipment's operational status to 'Active' if its current inspection status is 'Failed'. This ensures that equipment which has not passed safety checks cannot be used.", "type": "domain_knowledge", "children_knowledge": -1} diff --git a/polar_equipment/polar_equipment_schema.txt b/polar_equipment/polar_equipment_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..9dfa0101ee15a5f429640545d9ff08ba210df1bc --- /dev/null +++ b/polar_equipment/polar_equipment_schema.txt @@ -0,0 +1,409 @@ +"CREATE" TABLE "EquipmentType" ( +"EquipType" text NOT NULL, +type_indices jsonb NULL, + "PRIMARY" KEY (EquipType) +); + + + +"First" 3 rows: +EquipType type_indices +----------- ---------------------------------------------------------------------------------------------------------------- +Shelter {'safety_idx': 75.9, 'performance_score': 72.8, 'energy_efficiency_idx': 47.1, 'environmental_impact_idx': 36.7} +Scientific {'safety_idx': 35.9, 'performance_score': 48.8, 'energy_efficiency_idx': 72.2, 'environmental_impact_idx': 74.7} +Safety {'safety_idx': 34.6, 'performance_score': 93, 'energy_efficiency_idx': 36.8, 'environmental_impact_idx': 87.3} +... 
+ + +"CREATE" TABLE "PowerBattery" ( +"PWR_BATT_ID" bigint NOT NULL DEFAULT nextval('"PowerBattery_PWR_BATT_ID_seq"'::regclass), +equip_ref text NOT NULL, +battery_telemetry jsonb NULL, + "PRIMARY" KEY (PWR_BATT_ID), + "FOREIGN" KEY (equip_ref) REFERENCES Equipment(EQUIP_CODE) +); + + + +"First" 3 rows: + PWR_BATT_ID equip_ref battery_telemetry +------------- ----------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 PE593707 {'charging': {'current_a': None, 'voltage_v': 26.5, 'charge_state': 'Error'}, 'power_state': {'system_state': 'Sleep', 'primary_source': 'Wind', 'conversion_eff_pct': 81.8, 'instant_consumption_w': 4383.2}, 'battery_pack': {'soc_pct': 19, 'health_pct': 93, 'cycle_count': 79, 'temperature_c': -22.9}} + 2 PE292528 {'charging': {'current_a': 24.12, 'voltage_v': 19.8, 'charge_state': 'Not Charging'}, 'power_state': {'system_state': 'Charging', 'primary_source': 'Solar', 'conversion_eff_pct': 74.4, 'instant_consumption_w': 2710.9}, 'battery_pack': {'soc_pct': 32, 'health_pct': 74, 'cycle_count': 617, 'temperature_c': 31.6}} + 3 PE617633 {'charging': {'current_a': 15.94, 'voltage_v': 39.1, 'charge_state': 'Charging'}, 'power_state': {'system_state': 'On', 'primary_source': 'Wind', 'conversion_eff_pct': 87.7, 'instant_consumption_w': 3552.2}, 'battery_pack': {'soc_pct': 42, 'health_pct': 67, 'cycle_count': 667, 'temperature_c': -12}} +... + + +"CREATE" TABLE "Equipment" ( +"EQUIP_CODE" text NOT NULL, +"EquipType" text NOT NULL, +model_name text NULL, +"MakerName" text NULL, +"SERVICE_YRS" bigint NULL, +"utilPercent" real NULL, +"RELIAB_IDX" real NULL, + "PRIMARY" KEY (EQUIP_CODE), + "FOREIGN" KEY ("EquipType") REFERENCES EquipmentType("EquipType") +); + + + +"First" 3 rows: +EQUIP_CODE EquipType model_name MakerName SERVICE_YRS utilPercent RELIAB_IDX +------------ ----------- ------------ ------------------------ ------------- ------------- ------------ +PE593707 Shelter Model-925 Lee, Meyers and Hamilton 4 53 97.8 +PE292528 Scientific Model-454 Wiggins Inc 6 60 97.7 +PE617633 Safety Graves-Cox 10 81 97 +... 
+ + +"CREATE" TABLE "EngineAndFluids" ( +"ENGINE_ID" bigint NOT NULL DEFAULT nextval('"EngineAndFluids_ENGINE_ID_seq"'::regclass), +"equipRef" text NOT NULL, +batt_link bigint NULL, +opmaint_link bigint NULL, +engine_fluids_snapshot jsonb NULL, + "PRIMARY" KEY (ENGINE_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE), + "FOREIGN" KEY (batt_link) REFERENCES PowerBattery(PWR_BATT_ID), + "FOREIGN" KEY (opmaint_link) REFERENCES OperationMaintenance(OP_MAINT_ID) +); + + + +"First" 3 rows: + ENGINE_ID equipRef batt_link opmaint_link engine_fluids_snapshot +----------- ---------- ----------- -------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 PE593707 1 {'oil': {'level_pct': 49, 'pressure_kpa': 331, 'temperature_c': 6.6}, 'fuel': {'level_pct': 49, 'temperature_c': -37.2, 'consumption_lph': 44.48, 'rail_pressure_kpa': None}, 'coolant': {'level_pct': 64, 'temperature_c': 69.3, 'pressure_kpa_or_code': '88.9 hPa'}, 'hydraulic': {'pressure_kpa': 19647.2, 'temperature_c': 39.2, 'fluid_level_pct': 59}, 'engine_core': {'rpm': 3133, 'load_pct': 61, 'block_temp_c': 4.1, 'lifetime_hours': 31452}} + 2 PE292528 2 {'oil': {'level_pct': 21, 'pressure_kpa': 141, 'temperature_c': 26.1}, 'fuel': {'level_pct': 68, 'temperature_c': 19.8, 'consumption_lph': 14.85, 'rail_pressure_kpa': 90}, 'coolant': {'level_pct': 58, 'temperature_c': -6.7, 'pressure_kpa_or_code': '147.2 hPa'}, 'hydraulic': {'pressure_kpa': 9189.7, 'temperature_c': 31.5, 'fluid_level_pct': 10}, 'engine_core': {'rpm': 1669, 'load_pct': 34, 'block_temp_c': 66.3, 'lifetime_hours': 45593}} + 3 PE617633 3 {'oil': {'level_pct': 71, 'pressure_kpa': 598, 'temperature_c': 49.2}, 'fuel': {'level_pct': 88, 'temperature_c': -1.7, 'consumption_lph': 30.24, 'rail_pressure_kpa': 421}, 'coolant': {'level_pct': 26, 'temperature_c': 59.5, 'pressure_kpa_or_code': '18.2 hPa'}, 'hydraulic': {'pressure_kpa': 18097.7, 'temperature_c': 38.4, 'fluid_level_pct': 92}, 'engine_core': {'rpm': 2447, 'load_pct': 40, 'block_temp_c': 28.2, 'lifetime_hours': 28148}} +... 
+ + +"CREATE" TABLE "OperationMaintenance" ( +"OP_MAINT_ID" bigint NOT NULL DEFAULT nextval('"OperationMaintenance_OP_MAINT_ID_seq"'::regclass), +"equipRef" text NOT NULL, +"OPER_hours" real NULL, +"maintCycleHrs" real NULL, +"LAST_maint_date" date NULL, +"NEXT_due_date" date NULL, +"OPER_status" text NULL, +"MAINT_COST_usd" real NULL, +"repairCostUsd" real NULL, +operating_cost_usd real NULL, +"crewCertStatus" text NULL, +inspect_status text NULL, +"COMPLIANCE_state" text NULL, +docu_status text NULL, +comm_link bigint NULL, + "PRIMARY" KEY (OP_MAINT_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE) +); + + + +"First" 3 rows: + OP_MAINT_ID equipRef OPER_hours maintCycleHrs LAST_maint_date NEXT_due_date OPER_status MAINT_COST_usd repairCostUsd operating_cost_usd crewCertStatus inspect_status COMPLIANCE_state docu_status comm_link +------------- ---------- ------------ --------------- ----------------- --------------- ------------- ---------------- --------------- -------------------- ---------------- ---------------- ------------------ ------------- ----------- + 1 PE593707 17843 2047 2024-02-25 2025-11-16 Storage 7632.51 3297.13 338.79 Valid Failed Review Updated + 2 PE292528 45000 2269 2024-07-02 2025-04-08 Standby 3608.69 1688.51 483.45 Pending Failed Non-compliant Incomplete + 3 PE617633 49833 2335 2025-01-21 2025-11-21 Standby 6231.56 1855.45 911.76 Valid Passed Review Incomplete +... + + +"CREATE" TABLE "Transmission" ( +"TRANS_ID" bigint NOT NULL DEFAULT nextval('"Transmission_TRANS_ID_seq"'::regclass), +"equipRef" text NOT NULL, +engine_link bigint NULL, +"transTempC" real NULL, +"transPress_kpa" real NULL, +"TRANS_gear" text NULL, +"diffTempC" real NULL, +"axleTempC" real NULL, +opmaint_link bigint NULL, + "PRIMARY" KEY (TRANS_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE), + "FOREIGN" KEY (engine_link) REFERENCES EngineAndFluids(ENGINE_ID), + "FOREIGN" KEY (opmaint_link) REFERENCES OperationMaintenance(OP_MAINT_ID) +); + + + +"First" 3 rows: + TRANS_ID equipRef engine_link transTempC transPress_kpa TRANS_gear diffTempC axleTempC opmaint_link +---------- ---------- ------------- ------------ ---------------- ------------ ----------- ----------- -------------- + 1 PE593707 1 94.5 532.9 3 80.9 22.1 + 2 PE292528 2 30.6 82.8 -1 54.6 90.8 + 3 PE617633 3 68 1632.5 1 -19.8 43.1 +... 
+ + +"CREATE" TABLE "ChassisAndVehicle" ( +"CHASSIS_ID" bigint NOT NULL DEFAULT nextval('"ChassisAndVehicle_CHASSIS_ID_seq"'::regclass), +"equipRef" text NOT NULL, +trans_link bigint NULL, +engine_link bigint NULL, +ground_vehicle_status jsonb NULL, + "PRIMARY" KEY (CHASSIS_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE), + "FOREIGN" KEY (trans_link) REFERENCES Transmission(TRANS_ID), + "FOREIGN" KEY (engine_link) REFERENCES EngineAndFluids(ENGINE_ID) +); + + + +"First" 3 rows: + CHASSIS_ID equipRef trans_link engine_link ground_vehicle_status +------------ ---------- ------------ ------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + 1 PE593707 1 1 {'tires': {'pressure_kpa': 345, 'temperature_c': 28.7, 'tread_depth_mm': 12.8}, 'tracks': {'wear_pct': 35, 'tension_kN': 21.9}, 'suspension': {'ride_height_mm': 351.9}, 'brake_system': {'pad_wear_pct': 69, 'fluid_level_pct': 97, 'pressure_kpa_or_code': '370.8 hPa'}, 'vehicle_motion': {'speed_kmh': '49.30 m/s', 'payload_kg': 9585.3, 'heading_deg': 89.3, 'attitude_angle_deg': 1.6}} + 2 PE292528 2 2 {'tires': {'pressure_kpa': 441, 'temperature_c': -1.6, 'tread_depth_mm': 2.8}, 'tracks': {'wear_pct': None, 'tension_kN': 34.6}, 'suspension': {'ride_height_mm': 219.4}, 'brake_system': {'pad_wear_pct': 78, 'fluid_level_pct': 10, 'pressure_kpa_or_code': '616.0 hPa'}, 'vehicle_motion': {'speed_kmh': '39.60 m/s', 'payload_kg': 2335.2, 'heading_deg': 67.2, 'attitude_angle_deg': 40.1}} + 3 PE617633 3 3 {'tires': {'pressure_kpa': 470, 'temperature_c': -4.3, 'tread_depth_mm': 18.2}, 'tracks': {'wear_pct': 47, 'tension_kN': 30.5}, 'suspension': {'ride_height_mm': 220.7}, 'brake_system': {'pad_wear_pct': 54, 'fluid_level_pct': 56, 'pressure_kpa_or_code': '847.0 hPa'}, 'vehicle_motion': {'speed_kmh': '55.40 m/s', 'payload_kg': None, 'heading_deg': 8.5, 'attitude_angle_deg': 44}} +... + + +"CREATE" TABLE "Communication" ( +"COMM_ID" bigint NOT NULL DEFAULT nextval('"Communication_COMM_ID_seq"'::regclass), +"equipRef" text NOT NULL, +loc_link text NULL, +"GPS_signal" text NULL, +sat_conn_stat text NULL, +"radioSignal_dBm" real NULL, +"radioFreq_mhz" real NULL, +antenna_stat text NULL, +"netLatency_ms" real NULL, +"dataRate_kbps" real NULL, +"wifiSignal_dBm" real NULL, +"btStatus" text NULL, +opmaint_link bigint NULL, + "PRIMARY" KEY (COMM_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE), + "FOREIGN" KEY (loc_link) REFERENCES Location(STATION_name), + "FOREIGN" KEY (opmaint_link) REFERENCES OperationMaintenance(OP_MAINT_ID) +); + + + +"First" 3 rows: + COMM_ID equipRef loc_link GPS_signal sat_conn_stat radioSignal_dBm radioFreq_mhz antenna_stat netLatency_ms dataRate_kbps wifiSignal_dBm btStatus opmaint_link +--------- ---------- ---------- ------------ --------------- ----------------- --------------- -------------- --------------- --------------- ---------------- ---------- -------------- + 1 PE593707 Station-14 Strong Limited -97.8 731.2 Error 1006 389.6 nan On + 2 PE292528 Station-8 Limited -61.7 614.7 Normal 984.1 575.5 nan Error + 3 PE617633 Station-19 Weak Connected -79.6 779.8 Error 1818.1 733.8 -61.7 Error +... 
+ + +"CREATE" TABLE "Location" ( +"STATION_name" text NOT NULL, +"TimeStamp" timestamp without time zone NULL, +"locType" text NULL, +"LAT_deg" real NULL, +"LON_deg" real NULL, +altitude_m real NULL, + "PRIMARY" KEY (STATION_name) +); + + + +"First" 3 rows: +STATION_name TimeStamp locType LAT_deg LON_deg altitude_m +-------------- ------------------- --------- --------- --------- ------------ +Station-14 2024-10-29 17:30:55 Arctic 80.2552 -146.258 2054.5 +Station-8 2024-03-28 10:51:42 Antarctic -61.9982 -153.401 1343.9 +Station-19 2024-02-23 01:26:41 Arctic 76.0172 -10.7953 479.1 +... + + +"CREATE" TABLE "CabinEnvironment" ( +"CABIN_ID" bigint NOT NULL DEFAULT nextval('"CabinEnvironment_CABIN_ID_seq"'::regclass), +"equipRef" text NOT NULL, +loc_link text NULL, +comm_link bigint NULL, +cabin_env_snapshot jsonb NULL, + "PRIMARY" KEY (CABIN_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE), + "FOREIGN" KEY (loc_link) REFERENCES Location(STATION_name), + "FOREIGN" KEY (comm_link) REFERENCES Communication(COMM_ID) +); + + + +"First" 3 rows: + CABIN_ID equipRef loc_link comm_link cabin_env_snapshot +---------- ---------- ---------- ----------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 1 PE593707 Station-14 1 {'hvac': {'vent_state': 'On', 'heater_state': 'Off', 'heater_temp_c': 34.9, 'vent_speed_pct': 13, 'defroster_state': 'On'}, 'access': {'door_state': 'Closed', 'hatch_state': 'Closed', 'window_state': 'Partial'}, 'emergency': 'Active', 'air_metrics': {'o2_pct': 19.9, 'co2_ppm': 557, 'humidity_pct': 58.8, 'pressure_kpa': 99, 'temperature_c': -0.3, 'air_quality_idx': 237}} + 2 PE292528 Station-8 2 {'hvac': {'vent_state': 'Auto', 'heater_state': 'On', 'heater_temp_c': 27, 'vent_speed_pct': 68, 'defroster_state': 'Auto'}, 'access': {'door_state': 'Closed', 'hatch_state': 'Closed', 'window_state': 'Closed'}, 'emergency': 'Standby', 'air_metrics': {'o2_pct': 19.8, 'co2_ppm': 1343, 'humidity_pct': 86.4, 'pressure_kpa': 104, 'temperature_c': 9.9, 'air_quality_idx': 340}} + 3 PE617633 Station-19 3 {'hvac': {'vent_state': 'On', 'heater_state': 'On', 'heater_temp_c': 19.3, 'vent_speed_pct': 55, 'defroster_state': 'On'}, 'access': {'door_state': 'Locked', 'hatch_state': 'Open', 'window_state': 'Partial'}, 'emergency': 'Standby', 'air_metrics': {'o2_pct': 20, 'co2_ppm': 930, 'humidity_pct': 46.1, 'pressure_kpa': 99, 'temperature_c': -15, 'air_quality_idx': 235}} +... 
+ + +"CREATE" TABLE "LightingAndSafety" ( +"LIGHT_ID" bigint NOT NULL DEFAULT nextval('"LightingAndSafety_LIGHT_ID_seq"'::regclass), +"equipRef" text NOT NULL, +"lightingStat" text NULL, +"lightIntensityPct" real NULL, +"extLightStat" text NULL, +"emerLightStat" text NULL, +"fireDetectStat" text NULL, +"smokeDetectStat" text NULL, +"COdetectStat" text NULL, +"gasDetectStat" text NULL, +"emerStopStat" text NULL, +alarm_stat text NULL, +"safetySysStat" text NULL, +"lifeSupportStat" text NULL, +"O2SupplyStat" text NULL, +"medEquipStat" text NULL, +"wasteMgmtStat" text NULL, +"waterSupplyStat" text NULL, + "PRIMARY" KEY (LIGHT_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE) +); + + + +"First" 3 rows: + LIGHT_ID equipRef lightingStat lightIntensityPct extLightStat emerLightStat fireDetectStat smokeDetectStat COdetectStat gasDetectStat emerStopStat alarm_stat safetySysStat lifeSupportStat O2SupplyStat medEquipStat wasteMgmtStat waterSupplyStat +---------- ---------- -------------- ------------------- -------------- --------------- ---------------- ----------------- -------------- --------------- -------------- ------------ --------------- ----------------- -------------- -------------- --------------- ----------------- + 1 PE593707 Off 21 Off On Normal Fault Fault Alert Activated Normal Fault Warning Warning Normal Critical Normal + 2 PE292528 Off 10 Off Alert Alert Alert Fault Activated Critical Fault Warning Normal Normal Critical Warning + 3 PE617633 Off 96 On Off Normal Normal Fault Fault Activated Warning Fault Critical Normal Critical Critical Critical +... + + +"CREATE" TABLE "WaterAndWaste" ( +"WATER_ID" bigint NOT NULL DEFAULT nextval('"WaterAndWaste_WATER_ID_seq"'::regclass), +"equipRef" text NOT NULL, +"waterLevelPct" real NULL, +"waterPress_kpa" real NULL, +"waterTempC" text NULL, +"waterFlow_lpm" real NULL, +"waterQualityIdx" bigint NULL, +"wasteTankPct" real NULL, + "PRIMARY" KEY (WATER_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE) +); + + + +"First" 3 rows: + WATER_ID equipRef waterLevelPct waterPress_kpa waterTempC waterFlow_lpm waterQualityIdx wasteTankPct +---------- ---------- --------------- ---------------- ------------ --------------- ----------------- -------------- + 1 PE593707 77 66.4 58.4 °C 28.6 57 28 + 2 PE292528 24 403.4 30.1 °C 4.1 5 45 + 3 PE617633 3 453.2 20.3 °C 28.7 58 22 +... + + +"CREATE" TABLE "Scientific" ( +"SCI_ID" bigint NOT NULL DEFAULT nextval('"Scientific_SCI_ID_seq"'::regclass), +"equipRef" text NOT NULL, +"sciEquipStat" text NULL, +"dataLogStat" text NULL, +"sensorStat" text NULL, +"calibrStat" text NULL, +"measureAccPct" real NULL, + "PRIMARY" KEY (SCI_ID), + "FOREIGN" KEY ("equipRef") REFERENCES Equipment(EQUIP_CODE) +); + + + +"First" 3 rows: + SCI_ID equipRef sciEquipStat dataLogStat sensorStat calibrStat measureAccPct +-------- ---------- -------------- ------------- ------------ ------------ --------------- + 1 PE593707 Standby Active Error Expired 99.3 + 2 PE292528 Operating Paused Error Valid 96 + 3 PE617633 Active Warning Due 92.3 +... 
+ + +"CREATE" TABLE "WeatherAndStructure" ( +"WEATHER_ID" bigint NOT NULL DEFAULT nextval('"WeatherAndStructure_WEATHER_ID_seq"'::regclass), +loc_link text NULL, +opmaint_link bigint NULL, +"extTempC" real NULL, +"windSpeed_ms" real NULL, +"windDir_deg" real NULL, +"baroPress_hpa" real NULL, +"solarRad_wm2" real NULL, +"snowDepth_cm" bigint NULL, +"iceThick_cm" real NULL, +visibility_km real NULL, +"precipType" text NULL, +"precipRate_mmh" real NULL, +"snowLoad_kgm2" bigint NULL, +"structLoadPct" real NULL, +"structIntegrityStat" text NULL, +"vibrLevel_mms2" real NULL, +"noiseLevel_dB" real NULL, + "PRIMARY" KEY (WEATHER_ID), + "FOREIGN" KEY (loc_link) REFERENCES Location(STATION_name), + "FOREIGN" KEY (opmaint_link) REFERENCES OperationMaintenance(OP_MAINT_ID) +); + + + +"First" 3 rows: + WEATHER_ID loc_link opmaint_link extTempC windSpeed_ms windDir_deg baroPress_hpa solarRad_wm2 snowDepth_cm iceThick_cm visibility_km precipType precipRate_mmh snowLoad_kgm2 structLoadPct structIntegrityStat vibrLevel_mms2 noiseLevel_dB +------------ ---------- -------------- ---------- -------------- ------------- --------------- -------------- -------------- ------------- --------------- ------------ ---------------- --------------- --------------- --------------------- ---------------- --------------- + 1 Station-14 1 -14.9 26.5 71.4 975.3 541 144 254.4 42.8 Blowing Snow 2.4 460 62 Warning 8.61 38.5 + 2 Station-8 2 -57.9 30.6 202.6 931.2 280.6 273 123.6 46.1 Ice 4.7 111 40 Critical 5.85 66 + 3 Station-19 3 -41.7 4.1 259.8 913 370.8 76 282.6 12 Ice 12.3 400 48 Critical 0.31 68 +... + + +"CREATE" TABLE "ThermalSolarWindAndGrid" ( +"THERMAL_ID" bigint NOT NULL DEFAULT nextval('"ThermalSolarWindAndGrid_THERMAL_ID_seq"'::regclass), +comm_link bigint NULL, +batt_link bigint NULL, +"thermalImgStat" text NULL, +"insulationStat" text NULL, +"heatLoss_kwh" real NULL, +"solarPanelStat" text NULL, +"solarOutput_w" real NULL, +"solarEffPct" real NULL, +"solarTempC" real NULL, +"windTurbineStat" text NULL, +"windOutput_w" real NULL, +"windRPM" bigint NULL, +"powerGridStat" text NULL, +"powerQualIdx" real NULL, +"backupPowerStat" text NULL, +"fuelCellStat" text NULL, +"fuelCellOutput_w" real NULL, +"fuelCellEffPct" real NULL, +"H2LevelPct" real NULL, +"O2LevelPct" real NULL, + "PRIMARY" KEY (THERMAL_ID), + "FOREIGN" KEY (comm_link) REFERENCES Communication(COMM_ID), + "FOREIGN" KEY (batt_link) REFERENCES PowerBattery(PWR_BATT_ID) +); + + + +"First" 3 rows: + THERMAL_ID comm_link batt_link thermalImgStat insulationStat heatLoss_kwh solarPanelStat solarOutput_w solarEffPct solarTempC windTurbineStat windOutput_w windRPM powerGridStat powerQualIdx backupPowerStat fuelCellStat fuelCellOutput_w fuelCellEffPct H2LevelPct O2LevelPct +------------ ----------- ----------- ---------------- ---------------- -------------- ---------------- --------------- ------------- ------------ ----------------- -------------- --------- --------------- -------------- ----------------- -------------- ------------------ ---------------- ------------ ------------ + 1 1 1 Warning Fair 1.11 Fault 746.1 2 -14.9 Fault 656.8 130 Connected 95 Fault Standby 185 48.2 95 82 + 2 2 2 Warning Poor 8.47 Inactive 337.7 80.1 -16.1 Operating 4084.9 221 Disconnected 97 Standby 2564 42.1 41 68 + 3 3 3 Warning Good nan Active 268 54.9 14 Stopped 4514.5 210 Connected 61 Active Fault 944 41.3 57 27 +... 
+ + +"CREATE" TABLE "StationEquipmentType" ( +station_name text NOT NULL, +equip_type text NOT NULL, + "PRIMARY" KEY (station_name, equip_type), + "FOREIGN" KEY (station_name) REFERENCES Location(STATION_name), + "FOREIGN" KEY (equip_type) REFERENCES EquipmentType(EquipType) +); + + + +"First" 3 rows: +station_name equip_type +-------------- ------------ +Station-14 Shelter +Station-8 Scientific +Station-19 Safety +... diff --git a/reverse_logistics/reverse_logistics_column_meaning_base.json b/reverse_logistics/reverse_logistics_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..5f35f023414b390dd5279b2997535b4ad7ab83dd --- /dev/null +++ b/reverse_logistics/reverse_logistics_column_meaning_base.json @@ -0,0 +1,186 @@ +{ + "reverse_logistics|customers|ProfileNum": "VARCHAR(50). Unique identifier for the customer profile. PK.", + "reverse_logistics|customers|Seg_Category": "TEXT. Customer segment category. **NULL means the segment category is not assigned.**. Possible values: Wholesale,Retail,Individual", + "reverse_logistics|customers|GEOGRAPHYZONE": "VARCHAR(100). Geographic location zone of the customer. example: Cambodia", + "reverse_logistics|products|ItemCode": "VARCHAR(50). Unique identifier for the product. PK.", + "reverse_logistics|products|ItemCategory": "VARCHAR(50). Category of the product (e.g., apparel, electronics, Home Goods, Accessories).", + "reverse_logistics|products|SubCat": "VARCHAR(50). Subcategory of the product. Possible values: Laptops, Shirts, Shoes, Smartphones.", + "reverse_logistics|products|UNIT_VALUE": "REAL. Unit value/price of the product.", + "reverse_logistics|orders|TxnNum": "VARCHAR(50). Unique identifier for the order transaction. PK.", + "reverse_logistics|orders|BuyerLink": "TEXT. Link to the buyer's account or details. Example: CUS00181.", + "reverse_logistics|orders|transaction_value": "REAL. Total value of the order transaction.", + "reverse_logistics|orders|TxnDate": "TEXT. Date when the transaction was made. Example: Feb 16, 2025.", + "reverse_logistics|returns|CaseNum": "VARCHAR(50). Unique identifier for the return case. PK. Example: RL924605.", + "reverse_logistics|returns|LogTime": "VARCHAR(50). Timestamp when the return was logged. Example: 2024/11/04.", + "reverse_logistics|returns|SrcTxn": "VARCHAR(50). Foreign key to the orders table (TxnNum). FK to orders. Example: ORD89293.", + "reverse_logistics|returns|ItemLink": "VARCHAR(50). Foreign key to the products table (ItemCode). FK to products. Example: PRD00023.", + "reverse_logistics|returns|RevDate": "TEXT. Date when the return was processed. Example: 2025-01-20.", + "reverse_logistics|returns|DaysLapsed": "BIGINT. Number of days lapsed since the transaction. Example: 33.", + "reverse_logistics|returns|Return_Channel": "VARCHAR(50). Channel through which the return was made.", + "reverse_logistics|quality_assessment|InspectRef": "VARCHAR(50). Foreign key to the returns table (CaseNum). PK, FK to returns.", + "reverse_logistics|return_processing|LocCode": "VARCHAR(50). Location code for the return processing. PK. Example: LOC008.", + "reverse_logistics|return_processing|ProcPrio": "VARCHAR(50). Processing priority for the return. Possible values: Bulk, Express, Standard.", + "reverse_logistics|return_processing|ProcState": "ProcessingStatus_enum. Current state of the return processing. Possible values: Completed, Inspecting, Processing, Received.", + "reverse_logistics|return_processing|ProcTime": "REAL. Time taken for processing the return. 
Example: 15.0.", + "reverse_logistics|return_processing|HandReq": "HandlingRequirements_enum. Handling requirements for the return. Possible values: Fragile, Hazardous, Special, Standard.", + "reverse_logistics|return_processing|NeedsQuar": "VARCHAR(20). Indicates if quarantine is needed for the return. Possible values: No, Yes.", + "reverse_logistics|return_processing|QuarDays": "BIGINT. Number of days the return item will be quarantined. Example: 14.", + "reverse_logistics|return_processing|Handling_Notes": "TEXT. Notes related to the handling of the return item.", + "reverse_logistics|return_processing|DispAction": "TEXT. Disposition action for the return item. Possible values: Refurbish, Repair, Resell, Scrap.", + "reverse_logistics|return_processing|DispReason": "DispositionReason_enum. Reason for the return disposition. **NULL means no disposition reason provided.**. Possible values: Good Condition, Repairable, Too Costly, Unsalvageable.", + "reverse_logistics|return_processing|NeedsRelabel": "TEXT. Indicates if relabeling is required for the return item. Possible values: 0, 1, F, False, N, NO, P, True, Y, YES.", + "reverse_logistics|return_processing|RepairFeas": "RepairFeasibility_enum. Feasibility of repairing the return item. Possible values: High, Low, Medium, Not Feasible.", + "reverse_logistics|return_processing|EstRepairHrs": "REAL. Estimated hours required for repair. Example: 39.8.", + "reverse_logistics|return_processing|PartsAvail": "PartsAvailability_enum. Availability of parts for repairing the return item. Possible values: Available, Partial, Unavailable.", + "reverse_logistics|return_processing|PolicyComp": "TEXT. Policy compliance status of the return item. Possible values: Compliant, Non-compliant.", + "reverse_logistics|return_processing|ExceptMade": "VARCHAR(20). Indicates if an exception was made for the return. Possible values: No, Yes.", + "reverse_logistics|return_processing|ExceptType": "ExceptionReason_enum. Type of exception for the return. **NULL means no exception type provided.**. Possible values: Customer Value, Error, Goodwill.", + "reverse_logistics|return_processing|ApprLevel": "ApprovalLevel_enum. Level of approval required for processing the return. Possible values: Automatic, Manager, Supervisor.", + "reverse_logistics|financial_management|CreditRef": "TEXT. Unique identifier for the financial record. PK. Example: CM78914.", + "reverse_logistics|financial_management|CaseTag": "VARCHAR(50). Foreign key to the returns table (CaseNum). FK to returns.", + "reverse_logistics|financial_management|DispCost": "REAL. Disposal cost for the return item. Example: $86.84.", + "reverse_logistics|case_management|CaseTie": "VARCHAR(50). Foreign key to the returns table (CaseNum). PK, FK to returns.", + "reverse_logistics|case_management|SatisfScore": "BIGINT. Satisfaction score for the case resolution. Possible values: 1, 2, 3, 4, 5.", + "reverse_logistics|case_management|CommState": "CustomerCommunicationStatus_enum. Current state of communication with the customer. Possible values: In Progress, Initial, Resolved.", + "reverse_logistics|case_management|RespTime": "REAL. Response time for handling the case. Example: 14.8.", + "reverse_logistics|case_management|ResolSatis": "ResolutionSatisfaction_enum. Satisfaction level of the customer after case resolution. Possible values: Dissatisfied, Neutral, Satisfied.", + "reverse_logistics|case_management|HasFeedback": "YesNo_enum. Indicates if feedback was provided by the customer. 
Possible values: No, Yes.", + "reverse_logistics|case_management|FeedbackType": "FeedbackCategory_enum. Type of feedback provided by the customer. **NULL means no feedback type provided.**. Possible values: Process, Product, Service.", + "reverse_logistics|case_management|VendorNotice": "YesNo_enum. Indicates if the vendor was notified about the case. Possible values: No, Yes.", + "reverse_logistics|case_management|VendorAction": "SupplierCorrectiveAction_enum. Action taken by the vendor regarding the case. **NULL means no vendor action reported.**. Possible values: Completed, Initiated.", + "reverse_logistics|case_management|PreventOpp": "ReturnPreventionOpportunity_enum. Opportunity for preventing future returns. Possible values: High, Low, Medium.", + "reverse_logistics|case_management|ActionIdent": "YesNo_enum. Indicates if an action was identified for the case. Possible values: No, Yes.", + "reverse_logistics|case_management|ActionState": "PreventiveActionStatus_enum. Status of the preventive action. Possible values: Completed, In Progress, Planned.", + "reverse_logistics|case_management|KBUpdated": "YesNo_enum. Indicates if the knowledge base was updated after the case. Possible values: No, Yes.", + "reverse_logistics|case_management|TrainIdent": "TEXT. Identifies if training was required after the case. Possible values: 0, 1, F, False, N, NO, P, True, Y, YES.", + "reverse_logistics|case_management|NeedsProcImprove": "YesNo_enum. Indicates if process improvements are needed. Possible values: No, Yes.", + "reverse_logistics|case_management|NeedsDocUpdate": "YesNo_enum. Indicates if documentation updates are needed. Possible values: No, Yes.", + "reverse_logistics|case_management|NeedsSysUpdate": "YesNo_enum. Indicates if system updates are needed. Possible values: No, Yes.", + "reverse_logistics|case_management|ReportState": "ReportGenerationStatus_enum. State of the report generation for the case. Possible values: Generated, Pending, Reviewed.", + "reverse_logistics|case_management|AnalysisState": "DataAnalysisStatus_enum. State of the data analysis for the case. Possible values: Completed, In Progress, Not Started.", + "reverse_logistics|case_management|HasTrendAnalysis": "YesNo_enum. Indicates if trend analysis was performed for the case. Possible values: No, Yes.", + "reverse_logistics|case_management|HasCostAnalysis": "YesNo_enum. Indicates if cost analysis was performed for the case. Possible values: No, Yes.", + "reverse_logistics|case_management|RecState": "RecommendationStatus_enum. State of the recommendation for the case. **NULL means no recommendation state provided.**. Possible values: Approved, Draft.", + "reverse_logistics|case_management|ActionCount": "BIGINT. Number of actions taken for the case. Example: 0.", + "reverse_logistics|case_management|NeedsFollowUp": "YesNo_enum. Indicates if follow-up is needed for the case. Possible values: No, Yes.", + "reverse_logistics|case_management|NextReview": "TEXT. Date for the next review of the case. Example: 2025-04-28.", + "reverse_logistics|case_management|CloseState": "CaseClosureStatus_enum. Current closure state of the case. Possible values: Closed, Open, Pending.", + "reverse_logistics|case_management|CloseDate": "TEXT. Date when the case was closed. Example: 2025-02-19.", + "reverse_logistics|case_management|CloseNotes": "TEXT. Notes related to the closure of the case. Example: Above suggest statement likely sound..", + "reverse_logistics|customers|return_behavior_profile": { + "column_meaning": "JSONB column. 
Captures the customer's historical return behavior, including frequency and similarity of returns.", + "fields_meaning": { + "total_returns": "BIGINT. Total number of returns made by the customer. example: 10", + "similar_previous_returns": "BIGINT. Simulated number of returns. **NULL means no previous simulated returns data.**. Possible values: 0.0, 1.0, 2.0, 3.0, 4.0, 5.0.", + "return_frequency_score": "BIGINT. Customer's frequency score based on return behavior." + } + }, + "reverse_logistics|products|product_traceability": { + "column_meaning": "JSONB column. Encapsulates product traceability and compliance metadata including batch, lot, and serial tracking.", + "fields_meaning": { + "trace": { + "batch_reference": "TEXT. Batch reference for the product. example: BT1693, 6730", + "lot_reference": "VARCHAR(50). Lot reference for the product. Example: 4119LO.", + "serial_number": "TEXT. Serial number reference for the product. Example: SN258151.", + "manufacture_date": "TEXT. Manufacture date of the product." + }, + "compliance": { + "regulatory_compliance": "VARCHAR(50). Regulatory compliance status of the product. **NULL means no regulatory compliance status available.**. Possible values: Compliant, Non-compliant.", + "hazardous_material": "VARCHAR(50). Hazardous material information for the product.", + "recall_flag": "YesNo_enum. Indicates if the product has been recalled. Possible values: No, Yes." + } + } + }, + "reverse_logistics|returns|return_details": { + "column_meaning": "JSONB column. Groups together metadata about the return reason, authorization, and shipping logistics.", + "fields_meaning": { + "reasoning": { + "primary_reason": "VARCHAR(50). Primary reason for the return. Possible values: Changed Mind, Quality Issue, Size/Fit, Wrong Item.", + "secondary_reason": "VARCHAR(50). Secondary reason for the return. Possible values: Better Price, Damaged, Defective, Not as Described.", + "reason_notes": "TEXT. Notes related to the reasons for the return. Example: Yard which quickly step since half part..", + "client_notes": "TEXT. Notes from the client regarding the return. Example: Chance building four loss study. Response actually miss everybody such.." + }, + "authorization": { + "auth_status": "TEXT. Authorization status of the return. Possible values: Approved, Pending, Rejected.", + "warranty_status": "VARCHAR(50). Warranty status of the product being returned. Possible values: Expired, Not Applicable, Valid.", + "warranty_claim": "TEXT. Warranty claim information. **NULL means no warranty claim information available.**. Example: WC8668." + }, + "shipping": { + "carrier": "VARCHAR(50). Shipping vendor used for the return.", + "fee": "REAL. Shipping fee for the return. Example: 64.1.", + "insurance_amount": "REAL. Insurance amount associated with the return. Example: 502.73.", + "estimated_arrival": "TEXT. Estimated arrival date for the returned item. Example: 2025-03-03.", + "tracking_reference": "TEXT. Tracking reference for the return shipment. Example: 83." + }, + "fraud": { + "risk_level": "VARCHAR(50). Fraud risk level for the return. **NULL means no fraud risk level assessed.**", + "fraud_flags": "BIGINT. Fraud flags indicating potential issues with the return. Possible values: 0, 1, 2, 3, 4, 5." + } + } + }, + "reverse_logistics|quality_assessment|assessment_summary": { + "column_meaning": "JSONB column. Summarizes quality inspection, usage, and documentation state for a returned product.", + "fields_meaning": { + "condition": { + "item_condition": "TEXT. 
Condition of the item during inspection. Possible values: Damaged, Like New, New, Used.", + "package_condition": "VARCHAR(50). Condition of the packaging during inspection.", + "completeness": "VARCHAR(50). Completeness state of the item during inspection. Possible values: Accessories Missing, Complete, Missing Parts.", + "usage_signs": "VARCHAR(50). Signs of usage observed on the item. **NULL means no usage signs observed.**. Possible values: Heavy, Minor, Significant." + }, + "defects": { + "defect_type": "TEXT. Type of defect found during inspection. **NULL means no defect type reported.**. Possible values: Manufacturing, Shipping, Usage.", + "defect_severity": "VARCHAR(50). Severity level of the defect. **NULL means no defect severity reported.**. Possible values: Critical, Major, Minor." + }, + "results": { + "qa_result": "VARCHAR(50). Quality assessment result (e.g., pass, fail).", + "functional_test_result": "TEXT. Functional test result for the item. Possible values: Fail, Partial, Pass.", + "aesthetic_score": "REAL. Aesthetic score assigned to the item based on inspection. Example: 5.0.", + "tech_review_status": "VARCHAR(20). Technical review status of the item. Possible values: Completed, Not Required, Pending." + }, + "documentation": { + "documentation_status": "VARCHAR(50). Completeness status of the product documentation.", + "has_photos": "VARCHAR(20). Indicates if the item has supporting photos. Possible values: 0, 1, F, False, N, NO, P, True, Y, YES.", + "qa_alert": "VARCHAR(20). Quality assurance alert status. Possible values: No, Yes.", + "needs_investigation": "VARCHAR(20). Indicates if further investigation is required. Possible values: No, Yes." + } + } + }, + "reverse_logistics|financial_management|cost_breakdown": { + "column_meaning": "JSONB column. Captures the financial breakdown of refund, recovery, and sustainability-related fees for a return case.", + "fields_meaning": { + "refund": { + "refund_amount": "REAL. Refund amount for the return. Example: $1,337.51.", + "method": "RefundMethod_enum. Method used for refunding the customer. Possible values: Bank Transfer, Original Payment, Store Credit.", + "status": "RefundStatus_enum. Current state of the refund. Possible values: Completed, Pending, Processed.", + "processing_days": "REAL. Number of days taken to process the refund. Example: 12.8." + }, + "fees": { + "restocking_fee": "REAL. Restocking fee for the return item.", + "repackaging_cost": "REAL. Repackaging cost for the return item. Example: $19.53.", + "relabeling_cost": "REAL. Relabeling cost for the return item. Example: 10.26.", + "qa_fee": "REAL. Quality assurance fee for the return item. Example: 12.33." + }, + "repair_costs": { + "repair_estimate": "REAL. Estimated repair cost for the return item. Example: 490.99.", + "parts_fee": "REAL. Parts fee for the return item. Example: 136.63.", + "labor_fee": "REAL. Labor fee for the return item. Example: 85.38." + }, + "disposal": { + "disposal_cost": "REAL. Disposal fee for the return item. Example: 83.49.", + "disposal_method": "DisposalMethod_enum. Method used to dispose of the return item. Possible values: Hazardous Waste, Landfill, Recycle, Return to Vendor.", + "environmental_impact": "EnvironmentalImpact_enum. Environmental impact of the disposal method. Possible values: High, Low, Medium.", + "recycling_category": "RecyclingCategory_enum. Category of material recycled during disposal. Possible values: Electronics, Metal, Mixed, Plastic." + }, + "valuation": { + "recovery_value": "REAL. 
Amount recovered from the return. Example: 19.18.", + "value_loss_pct": "REAL. Total value loss from the return. Example: 72.3." + }, + "sustainability": { + "sustainability_score": "REAL. Sustainability score for the return item. **NULL means no sustainability score provided.**. Example: 46.0.", + "carbon_footprint": "REAL. Carbon footprint of the return item. Example: 32.28.", + "efficiency_score": "REAL. Efficiency score for the return item. Example: 98.1.", + "cost_efficiency": "REAL. Cost efficiency score for the return item. Example: 45.4." + } + } + } +} \ No newline at end of file diff --git a/reverse_logistics/reverse_logistics_kb.jsonl b/reverse_logistics/reverse_logistics_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..de960059570b9ae1d843c2cbd34f25cce02ca415 --- /dev/null +++ b/reverse_logistics/reverse_logistics_kb.jsonl @@ -0,0 +1,30 @@ +{"id": 0, "knowledge": "Total Return Cost (TRC)", "description": "Aggregated monetary outlay incurred to handle a single return from transport to final disposition.", "definition": "The Total Return Cost is calculated as the sum of Shipping Fee ($S_f$), Restocking Fee ($R_f$), Relabeling Cost ($L_c$), Disposal Cost ($D_c$) and Repair Estimate ($R_e$): $$TRC = S_f + R_f + L_c + D_c + R_e$$.", "type": "calculation_knowledge", "children_knowledge": [10, 11, 12, 13, 14]} +{"id": 1, "knowledge": "Return Profit Impact (RPI)", "description": "Net financial impact of a return after accounting for cost and recovery value.", "definition": "$$RPI = R_v - TRC$$ where $R_v$ is Recovery Value and $TRC$ is Total Return Cost.", "type": "calculation_knowledge", "children_knowledge": [0, 15]} +{"id": 2, "knowledge": "Recovery Rate per Day (RRD)", "description": "Daily efficiency metric showing value recovered per day elapsed since sale.", "definition": "$$RRD = \\frac{R_v}{D_l}$$ where $R_v$ is Recovery Value and $D_l$ is Days Lapsed.", "type": "calculation_knowledge", "children_knowledge": [15, 16]} +{"id": 3, "knowledge": "Customer Return Frequency Index (CRFI)", "description": "Rate at which a customer initiates returns relative to their tenure.", "definition": "$$CRFI = \\frac{T_r}{T_y}$$ where $T_r$ is Total Returns and $T_y$ is Customer Tenure in years.", "type": "calculation_knowledge", "children_knowledge": [17, 18]} +{"id": 4, "knowledge": "Sustainability-Adjusted Loss (SAL)", "description": "Cost of a return adjusted for environmental impact factors.", "definition": "$$SAL = TRC + 0.5 \\times C_f - R_v$$ where $C_f$ is Carbon Footprint.", "type": "calculation_knowledge", "children_knowledge": [0, 19, 15]} +{"id": 5, "knowledge": "Average Processing Time (APT)", "description": "Mean calendar time spent processing a batch of returns at a given facility.", "definition": "$$APT = \\frac{\\sum_{i=1}^{n} PT_i}{n}$$ where $PT_i$ is the processing time of each case.", "type": "calculation_knowledge", "children_knowledge": []} +{"id": 6, "knowledge": "Warranty Claim Ratio (WCR)", "description": "Proportion of returns that include a valid warranty claim.", "definition": "$$WCR = \\frac{N_{claims}}{T_r}$$ where $N_{claims}$ is the number of authorised warranty claims.", "type": "calculation_knowledge", "children_knowledge": [17]} +{"id": 7, "knowledge": "Fraud Flag Severity Score (FFS)", "description": "Composite score that weights fraud flags by qualitative risk level.", "definition": "$$FFS = F_f \\times w_{risk}$$ where $F_f$ is number of fraud flags and $w_{risk}$ is weight based on Fraud Risk Level.", "type": 
"calculation_knowledge", "children_knowledge": [21]} +{"id": 8, "knowledge": "Regulatory Compliance Penalty (RCP)", "description": "Additional proportional cost applied when a product is non-compliant with regulations.", "definition": "If the Regulatory Compliance Status is *Non-compliant* the penalty is $$RCP = 0.2 \\times TRC$$, else $$RCP = 0$$.", "type": "calculation_knowledge", "children_knowledge": [0, 29]} +{"id": 9, "knowledge": "Return Channel Cost Index (RCCI)", "description": "Relative shipping expense of a return channel compared to its historical average.", "definition": "$$RCCI = \\frac{S_f}{\\overline{S_{channel}}}$$ where $S_f$ is Shipping Fee and $\\overline{S_{channel}}$ is average Shipping Fee for the same Return Channel.", "type": "calculation_knowledge", "children_knowledge": [10, 20]} +{"id": 10, "knowledge": "Shipping Fee", "description": "Cost charged by the logistics provider for transporting a returned item.", "definition": "A monetary amount reflecting transportation cost incurred for a single return shipment.", "type": "domain_knowledge", "children_knowledge": [20]} +{"id": 11, "knowledge": "Restocking Fee", "description": "Service charge for reintegrating a returned item into inventory.", "definition": "Fee imposed to cover administrative and handling work required to make the item sellable again.", "type": "domain_knowledge", "children_knowledge": [20]} +{"id": 12, "knowledge": "Relabeling Cost", "description": "Expense of printing and applying new labels or packaging identifiers to a returned item.", "definition": "Direct cost of material and labor associated with correcting item labeling deficiencies.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 13, "knowledge": "Disposal Cost", "description": "Fee incurred when discarding a returned item that cannot be resold or refurbished.", "definition": "Monetary cost of disposing the item through methods such as recycling, landfill, or hazardous-waste processing.", "type": "domain_knowledge", "children_knowledge": [23]} +{"id": 14, "knowledge": "Repair Estimate", "description": "Projected expense required to bring a defective return back to sellable condition.", "definition": "Estimated sum of parts, labor and overhead needed for repair work, agreed before work commences.", "type": "domain_knowledge", "children_knowledge": [25]} +{"id": 15, "knowledge": "Recovery Value", "description": "Amount of monetary value recaptured from a return via resale, refurbish, or parts harvesting.", "definition": "Net revenue expected after processing the return, excluding initial cost.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 16, "knowledge": "Days Lapsed", "description": "Number of calendar days between original transaction and completion of return processing.", "definition": "Difference in days between transaction date and point at which funds/refund are issued.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 17, "knowledge": "Total Returns", "description": "Lifetime count of items a customer has returned to the business.", "definition": "Cumulative tally of processed return cases associated with a single customer account.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 18, "knowledge": "Customer Tenure (Years)", "description": "Length of time, in years, a customer has maintained an active account before current date.", "definition": "Calculated as the difference between current date and the customer's first recorded transaction, divided by 365.", "type": 
"domain_knowledge", "children_knowledge": []} +{"id": 19, "knowledge": "Carbon Footprint", "description": "Estimated kilograms of CO₂-equivalent generated through return processing activities.", "definition": "Quantifies greenhouse-gas emissions attributable to transportation, handling, and disposal of a single return.", "type": "domain_knowledge", "children_knowledge": [23]} +{"id": 20, "knowledge": "Return Channels", "description": "Enumerates the pathways through which customers send items back.", "definition": "Marked as 'Store' when customer returns item in person at a retail outlet; 'Courier' when customer arranges a courier service for doorstep pickup; 'Mail' when item is posted through the national mail system; 'Locker' when customer deposits parcel in an automated locker or drop-box; 'Pickup' when retailer schedules a home or office collection for oversized or fragile goods.", "type": "value_illustration", "children_knowledge": []} +{"id": 21, "knowledge": "Fraud Risk Levels", "description": "Qualitative grading of suspected fraud severity in a return.", "definition": "There are 3 different levels: 'Low', 'Medium' and 'High'.", "type": "value_illustration", "children_knowledge": []} +{"id": 22, "knowledge": "Item Condition States", "description": "Standardised terms describing physical state of returned merchandise.", "definition": "There are 4 different states: 'New', 'Like New', 'Used' and 'Damaged'.", "type": "value_illustration", "children_knowledge": []} +{"id": 23, "knowledge": "Disposal Methods", "description": "Approved pathways for discarding unsellable returns.", "definition": "'Recycle' for materials separated for recycling streams; 'Hazardous Waste' for components containing batteries, chemicals, or e-waste; 'Landfill' for non-recyclable residuals; 'Return to Vendor' for supplier agrees to take back for specialised processing.", "type": "value_illustration", "children_knowledge": []} +{"id": 24, "knowledge": "Refund Methods", "description": "Modalities available for reimbursing customers.", "definition": "'Original Payment' is when credit card/PayPal reversal within payment gateway; 'Bank Transfer' is when direct ACH/SEPA transfer when gateway refund blocked; 'Store Credit' is when applied to customer account as gift card or voucher on customer request or policy.", "type": "value_illustration", "children_knowledge": []} +{"id": 25, "knowledge": "Approval Levels", "description": "Hierarchy of sign-off required for exceptional processing steps.", "definition": "There 3 levels: 'Automatic', 'Manager' and 'Supervisor'.", "type": "value_illustration", "children_knowledge": []} +{"id": 26, "knowledge": "Satisfaction Scores", "description": "5-point ordinal scale gauging customer contentment with return resolution.", "definition": "Use 1-5 to quantify customer' contentment. 
A larger number means more satisfied: 1 - Very Dissatisfied; 2 - Dissatisfied; 3 - Neutral; 4 - Satisfied; 5 - Very Satisfied.", "type": "value_illustration", "children_knowledge": []} +{"id": 27, "knowledge": "Warranty Statuses", "description": "Indicates whether a product remains under warranty coverage.", "definition": "'Valid' means the item is within the manufacturer or retailer warranty period; 'Expired' means the warranty term has lapsed; 'Not Applicable' means no warranty was offered or proof is unavailable.", "type": "value_illustration", "children_knowledge": []} +{"id": 28, "knowledge": "Processing Priorities", "description": "Queue categories dictating speed of return handling.", "definition": "Use 'Bulk' for palletised or batch returns processed during low-demand periods; 'Standard' for the default SLA (e.g., 3-5 days); 'Express' for expedited handling for VIP customers, high resale value, or a paid premium.", "type": "value_illustration", "children_knowledge": []} +{"id": 29, "knowledge": "Regulatory Compliance Statuses", "description": "Declares adherence of product to applicable regulations.", "definition": "'Compliant' means meeting all relevant safety/environmental standards; 'Non-compliant' means failing certification, recall notice, or missing documentation, triggering quarantine or disposal penalty.", "type": "value_illustration", "children_knowledge": []} \ No newline at end of file diff --git a/reverse_logistics/reverse_logistics_schema.txt b/reverse_logistics/reverse_logistics_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..82b6d73534a3ca36abefc9fca562702adc2437cc --- /dev/null +++ b/reverse_logistics/reverse_logistics_schema.txt @@ -0,0 +1,180 @@ +CREATE TABLE "customers" ( +profilenum character varying NOT NULL, +seg_category text NULL, +geographyzone character varying NULL, +return_behavior_profile jsonb NULL, + PRIMARY KEY (profilenum) +); + +First 3 rows: +profilenum    seg_category    geographyzone    return_behavior_profile +------------  --------------  ---------------  --------------------------------------------------------------------------------- +CUS00181      Individual      Cambodia         {'total_returns': 10, 'return_frequency_score': 5, 'similar_previous_returns': 1} +CUS00009      Wholesale       Burkina Faso     {'total_returns': 8, 'return_frequency_score': 5, 'similar_previous_returns': 3} +CUS00042                      Bermuda          {'total_returns': 5, 'return_frequency_score': 8, 'similar_previous_returns': 4} +... 
+ + +CREATE TABLE "products" ( +itemcode character varying NOT NULL, +itemcategory character varying NULL, +subcat character varying NULL, +unit_value real NULL, +product_traceability jsonb NULL, + PRIMARY KEY (itemcode) +); + +First 3 rows: +itemcode itemcategory subcat unit_value product_traceability +---------- -------------- -------- ------------ --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +PRD00023 Apparel Shoes 186.05 {'trace': {'lot_reference': '4119LO', 'serial_number': 'SN258151', 'batch_reference': '6730', 'manufacture_date': '2024-04-21'}, 'compliance': {'recall_flag': 'Yes', 'hazardous_material': 'Yes', 'regulatory_compliance': 'Compliant'}} +PRD00058 Home Goods Laptops 151.37 {'trace': {'lot_reference': '6279LO', 'serial_number': 'SN310365', 'batch_reference': 'BT1693', 'manufacture_date': '2023-04-07'}, 'compliance': {'recall_flag': 'No', 'hazardous_material': 'No', 'regulatory_compliance': 'Non-compliant'}} +PRD00079 Electronics Shoes 752.89 {'trace': {'lot_reference': 'lot8331', 'serial_number': 'SN774661', 'batch_reference': 'BT7204', 'manufacture_date': '2024-10-17'}, 'compliance': {'recall_flag': 'No', 'hazardous_material': 'No', 'regulatory_compliance': None}} +... + + +CREATE TABLE "returns" ( +casenum character varying NOT NULL, +logtime character varying NULL, +srctxn character varying NULL, +itemlink character varying NULL, +revdate text NULL, +dayslapsed bigint NULL, +return_channel character varying NULL, +return_details jsonb NULL, + PRIMARY KEY (casenum), + FOREIGN KEY (srctxn) REFERENCES orders(txnnum), + FOREIGN KEY (itemlink) REFERENCES products(itemcode) +); + +First 3 rows: +casenum logtime srctxn itemlink revdate dayslapsed return_channel return_details +--------- ------------------- -------- ---------- ---------- ------------ ---------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +RL781345 Jun 15, 2024 ORD67804 PRD00357 2025-02-14 56 Store {'fraud': {'risk_level': 'Low', 'fraud_flags': 5}, 'shipping': {'fee': 99.57, 'carrier': 'Local', 'insurance_amount': 314.86, 'estimated_arrival': '2025-03-03', 'tracking_reference': '84825'}, 'reasoning': {'client_notes': 'Late impact process.', 'reason_notes': 'Ok light model fish country.', 'primary_reason': 'Quality Issue', 'secondary_reason': 'Not as Described'}, 'authorization': {'auth_status': 'Pending', 'warranty_claim': 'WC2710', 'warranty_status': 'Not Applicable'}} +RL862996 February 18th, 2025 ORD46009 PRD00219 2025-02-18 40 Courier {'fraud': {'risk_level': 'High', 'fraud_flags': 5}, 'shipping': {'fee': 29.78, 'carrier': 'UPS', 'insurance_amount': 720.27, 'estimated_arrival': '2025-02-22', 'tracking_reference': '486848'}, 'reasoning': {'client_notes': 'Keep team of could.', 'reason_notes': 'Tell carry degree true.', 'primary_reason': 'Wrong Item', 'secondary_reason': 'Defective'}, 'authorization': {'auth_status': 'Rejected', 'warranty_claim': 'WC8592', 
'warranty_status': 'Expired'}} +RL253528 2024-08-13 ORD93572 PRD00053 2025-02-10 43 Mail {'fraud': {'risk_level': None, 'fraud_flags': 1}, 'shipping': {'fee': 19.57, 'carrier': 'Local', 'insurance_amount': 788.41, 'estimated_arrival': '2025-03-02', 'tracking_reference': 'rt921107'}, 'reasoning': {'client_notes': 'Get use shake rise. Address future hit current scientist.', 'reason_notes': 'Art nice budget for.', 'primary_reason': 'Wrong Item', 'secondary_reason': 'Damaged'}, 'authorization': {'auth_status': 'Approved', 'warranty_claim': None, 'warranty_status': 'Not Applicable'}} +... + + +CREATE TABLE "orders" ( +txnnum character varying NOT NULL, +buyerlink text NULL, +transaction_value real NULL, +txndate text NULL, + PRIMARY KEY (txnnum) +); + +First 3 rows: +txnnum buyerlink transaction_value txndate +-------- ----------- ------------------- --------- +ORD89293 CUS00181 0 +ORD66774 CUS00009 0 +ORD57926 CUS00042 0 +... + + +CREATE TABLE "quality_assessment" ( +inspectref character varying NOT NULL, +assessment_summary jsonb NULL, + PRIMARY KEY (inspectref), + FOREIGN KEY (inspectref) REFERENCES returns(casenum) +); + +First 3 rows: +inspectref assessment_summary +------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +RL924605 {'defects': {'defect_type': None, 'defect_severity': 'Minor'}, 'results': {'qa_result': 'Fail', 'aesthetic_score': 5, 'tech_review_status': 'Pending', 'functional_test_result': 'Partial'}, 'condition': {'usage_signs': 'Significant', 'completeness': 'Complete', 'item_condition': 'Used', 'package_condition': 'Original'}, 'documentation': {'qa_alert': 'Yes', 'has_photos': 'True', 'needs_investigation': 'No', 'documentation_status': 'Missing'}} +RL382759 {'defects': {'defect_type': None, 'defect_severity': 'Major'}, 'results': {'qa_result': 'Pass', 'aesthetic_score': 2.7, 'tech_review_status': 'Completed', 'functional_test_result': 'Fail'}, 'condition': {'usage_signs': 'Minor', 'completeness': 'Complete', 'item_condition': 'New', 'package_condition': 'Damaged'}, 'documentation': {'qa_alert': 'Yes', 'has_photos': 'Y', 'needs_investigation': 'Yes', 'documentation_status': 'Partial'}} +RL818285 {'defects': {'defect_type': 'Manufacturing', 'defect_severity': 'Critical'}, 'results': {'qa_result': 'Pass', 'aesthetic_score': 1, 'tech_review_status': 'Completed', 'functional_test_result': 'Pass'}, 'condition': {'usage_signs': 'Significant', 'completeness': 'Accessories Missing', 'item_condition': 'Damaged', 'package_condition': 'Original'}, 'documentation': {'qa_alert': 'No', 'has_photos': '1', 'needs_investigation': 'Yes', 'documentation_status': 'Complete'}} +... 
+ + +CREATE TABLE "return_processing" ( +loccode character varying NOT NULL, +procprio character varying NULL, +procstate USER-DEFINED NULL, +proctime real NULL, +handreq USER-DEFINED NULL, +needsquar character varying NULL, +quardays bigint NULL, +handling_notes text NULL, +dispaction text NULL, +dispreason USER-DEFINED NULL, +needsrelabel text NULL, +repairfeas USER-DEFINED NULL, +estrepairhrs real NULL, +partsavail USER-DEFINED NULL, +policycomp text NULL, +exceptmade character varying NULL, +excepttype USER-DEFINED NULL, +apprlevel USER-DEFINED NULL, + PRIMARY KEY (loccode) +); + +First 3 rows: +loccode procprio procstate proctime handreq needsquar quardays handling_notes dispaction dispreason needsrelabel repairfeas estrepairhrs partsavail policycomp exceptmade excepttype apprlevel +--------- ---------- ----------- ---------- --------- ----------- ---------- ----------------------------- ------------ ------------ -------------- ------------ -------------- ------------ ------------- ------------ ------------ ----------- +LOC008 Bulk Received 15 Hazardous No 14 Kind he you let. Repair Repairable Y High 39.8 Partial Non-compliant Yes Error Supervisor +LOC026 Express Processing 24.9 Fragile Yes 4 Employee now star size out. Resell Repairable True High 19.8 Available Non-compliant No Goodwill Automatic +LOC013 Standard Received 38 Special Yes 13 Away course challenge spring. Repair 0 Not Feasible 14.1 Available Compliant No Error Manager +... + + +CREATE TABLE "financial_management" ( +creditref text NOT NULL, +casetag character varying NULL, +dispcost real NULL, +cost_breakdown jsonb NULL, + PRIMARY KEY (creditref), + FOREIGN KEY (casetag) REFERENCES returns(casenum) +); + +First 3 rows: +creditref casetag dispcost cost_breakdown +----------- --------- ---------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +CM78914 RL924605 0 {'fees': {'qa_fee': 12.33, 'restocking_fee': 89.34, 'relabeling_cost': 10.26, 'repackaging_cost': 0}, 'refund': {'method': 'Store Credit', 'status': 'Completed', 'refund_amount': 0, 'processing_days': 12.8}, 'disposal': {'disposal_cost': 83.49, 'disposal_method': 'Recycle', 'recycling_category': 'Plastic', 'environmental_impact': 'High'}, 'valuation': {'recovery_value': 19.18, 'value_loss_pct': 72.3}, 'repair_costs': {'labor_fee': 85.38, 'parts_fee': 136.63, 'repair_estimate': 490.99}, 'sustainability': {'cost_efficiency': 45.4, 'carbon_footprint': 32.28, 'efficiency_score': 98.1, 'sustainability_score': 0}} +CM98655 RL818285 0 {'fees': {'qa_fee': 26.53, 'restocking_fee': 50.98, 'relabeling_cost': 16.87, 'repackaging_cost': 0}, 'refund': {'method': 'Store Credit', 'status': 'Pending', 'refund_amount': 0, 'processing_days': 6.4}, 'disposal': {'disposal_cost': 78.35, 'disposal_method': 'Recycle', 'recycling_category': 'Mixed', 'environmental_impact': 'Low'}, 'valuation': {'recovery_value': 554.38, 'value_loss_pct': 82.4}, 'repair_costs': {'labor_fee': 118.58, 'parts_fee': 19.29, 'repair_estimate': 387.6}, 
'sustainability': {'cost_efficiency': 1.4, 'carbon_footprint': 76.05, 'efficiency_score': 39, 'sustainability_score': 0}} +CM36391 RL381491 0 {'fees': {'qa_fee': 46, 'restocking_fee': 35.36, 'relabeling_cost': 18.95, 'repackaging_cost': 0}, 'refund': {'method': 'Bank Transfer', 'status': 'Completed', 'refund_amount': 0, 'processing_days': 9.2}, 'disposal': {'disposal_cost': 127.9, 'disposal_method': 'Hazardous Waste', 'recycling_category': 'Metal', 'environmental_impact': 'Low'}, 'valuation': {'recovery_value': 963.61, 'value_loss_pct': 85.3}, 'repair_costs': {'labor_fee': 129.57, 'parts_fee': 31.74, 'repair_estimate': 156.98}, 'sustainability': {'cost_efficiency': 99.6, 'carbon_footprint': 97.66, 'efficiency_score': 3.9, 'sustainability_score': 0}} +... + + +CREATE TABLE "case_management" ( +casetie character varying NOT NULL, +satisfscore bigint NULL, +commstate USER-DEFINED NULL, +resptime real NULL, +resolsatis USER-DEFINED NULL, +hasfeedback USER-DEFINED NULL, +feedbacktype USER-DEFINED NULL, +vendornotice USER-DEFINED NULL, +vendoraction USER-DEFINED NULL, +preventopp USER-DEFINED NULL, +actionident USER-DEFINED NULL, +actionstate USER-DEFINED NULL, +kbupdated USER-DEFINED NULL, +trainident text NULL, +needsprocimprove USER-DEFINED NULL, +needsdocupdate USER-DEFINED NULL, +needssysupdate USER-DEFINED NULL, +reportstate USER-DEFINED NULL, +analysisstate USER-DEFINED NULL, +hastrendanalysis USER-DEFINED NULL, +hascostanalysis USER-DEFINED NULL, +recstate USER-DEFINED NULL, +actioncount bigint NULL, +needsfollowup USER-DEFINED NULL, +nextreview text NULL, +closestate USER-DEFINED NULL, +closedate text NULL, +closenotes text NULL, + PRIMARY KEY (casetie), + FOREIGN KEY (casetie) REFERENCES returns(casenum) +); + +First 3 rows: +casetie satisfscore commstate resptime resolsatis hasfeedback feedbacktype vendornotice vendoraction preventopp actionident actionstate kbupdated trainident needsprocimprove needsdocupdate needssysupdate reportstate analysisstate hastrendanalysis hascostanalysis recstate actioncount needsfollowup nextreview closestate closedate closenotes +--------- ------------- ----------- ---------- ------------ ------------- -------------- -------------- -------------- ------------ ------------- ------------- ----------- ------------ ------------------ ---------------- ---------------- ------------- --------------- ------------------ ----------------- ---------- ------------- --------------- ------------ ------------ ----------- ------------------------------------- +RL924605 2 In Progress 14.8 Dissatisfied Yes Process Yes Initiated Medium Yes Completed No True No No Yes Reviewed Completed No Yes Draft 0 No 2025-04-28 Closed 2025-02-19 Above suggest statement likely sound. +RL818285 5 Resolved 67.6 Neutral No Process Yes Initiated Medium No Completed Yes N Yes Yes Yes Generated In Progress Yes Yes Draft 7 Yes 2025-04-25 Open 2025-03-18 Kid week half. +RL381491 5 In Progress 59.3 Dissatisfied No Product No Initiated High Yes Completed Yes 1 No Yes Yes Reviewed In Progress No Yes Approved 9 No 2025-02-21 Closed 2025-03-12 Experience near front opportunity. +... 
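
Both `return_details` in `returns` and `assessment_summary` in `quality_assessment` above are JSONB columns whose sample rows show nested objects (`fraud`, `shipping`, `reasoning`, `authorization`; `defects`, `results`, `condition`, `documentation`). A minimal PostgreSQL sketch of how such nested fields can be reached — table, column, and key names are taken from the dump above, while the join shape and the `'High'` filter are illustrative only, not a gold query from the benchmark:

```sql
-- Illustrative only: drill into nested JSONB fields of the returns-management
-- schema shown above (-> keeps JSONB, ->> extracts text).
SELECT r.casenum,
       r.return_details -> 'reasoning' ->> 'primary_reason' AS primary_reason,
       r.return_details -> 'fraud'     ->> 'risk_level'     AS fraud_risk,
       qa.assessment_summary -> 'results' ->> 'qa_result'   AS qa_result
FROM returns r
LEFT JOIN quality_assessment qa
       ON qa.inspectref = r.casenum   -- quality_assessment.inspectref is a FK to returns.casenum
WHERE r.return_details -> 'fraud' ->> 'risk_level' = 'High';
```
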
diff --git a/robot_fault_prediction/robot_fault_prediction_column_meaning_base.json b/robot_fault_prediction/robot_fault_prediction_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..3f0b9514b36135ce848fb9cea2f4ab8c7fd8df94 --- /dev/null +++ b/robot_fault_prediction/robot_fault_prediction_column_meaning_base.json @@ -0,0 +1,210 @@ +{ + "robot_fault_prediction|robot_record|RecReg": "TEXT. Unique record registration identifier. PK. Example: RF100725.", + "robot_fault_prediction|robot_record|RecTS": "TIMESTAMP. Timestamp of the record creation. Not nullable. Possible values: 2025-02-18, 2025-02-19.", + "robot_fault_prediction|robot_record|BotCode": "TEXT. Unique robot code identifier. PK. Example: RB2073.", + "robot_fault_prediction|robot_details|BotDetReg": "TEXT. Foreign key to the robot_record table (BotCode). PK, FK to robot_record.", + "robot_fault_prediction|robot_details|MfgNameVal": "TEXT. Manufacturer name of the robot. Possible values: ABB, FANUC, KUKA, Universal Robots, Yaskawa.", + "robot_fault_prediction|robot_details|ModelSeriesVal": "TEXT. Model series of the robot. Example: Series_784.", + "robot_fault_prediction|robot_details|BotTypeVal": "CHAR(15). Type of the robot (e.g., industrial, collaborative). Possible values: Articulated, Cartesian, Collaborative, Delta, SCARA.", + "robot_fault_prediction|robot_details|PayloadCapKG": "REAL. Payload capacity of the robot in kilograms. Possible values: 3, 5, 10, 20, 50, 100, 200.", + "robot_fault_prediction|robot_details|ReachMMVal": "BIGINT. Reach distance of the robot in millimeters. Example: 1592.", + "robot_fault_prediction|robot_details|InstDateVal": "DATE. Installation date of the robot. Example: 10 Jun 2023.", + "robot_fault_prediction|robot_details|FWVersionVal": "TEXT. Firmware version of the robot. Example: 9.6.6.", + "robot_fault_prediction|robot_details|CtrlTypeVal": "TEXT. Control type of the robot (e.g., manual, automatic). Example: Controller_C2.", + "robot_fault_prediction|operation|OperReg": "TEXT. Unique operation record identifier. PK.", + "robot_fault_prediction|operation|OperRecRef": "TEXT. Foreign key to the robot_record table (BotCode). FK to robot_record.", + "robot_fault_prediction|operation|TotOpsHrVal": "REAL. Total operational hours of the robot. **NULL means no operational hours data available.**. Example: 20009.0.", + "robot_fault_prediction|operation|AppTypeVal": "TEXT. Application type for the operation. **NULL means no application type specified.**. Possible values: Assembly, Material Handling, Painting, Palletizing, Welding.", + "robot_fault_prediction|operation|OperModeVal": "CHAR(25). Operating mode of the robot (e.g., automatic, manual). Possible values: MANU.", + "robot_fault_prediction|operation|CurrProgVal": "TEXT. Current program being executed by the robot. **NULL means no program is active.**. Example: PRG_4901.", + "robot_fault_prediction|operation|ProgCycleCount": "BIGINT. Number of cycles executed by the robot. Example: 177681.", + "robot_fault_prediction|operation|CycleTimeSecVal": "REAL. Cycle time of the robot operation in seconds. **NULL means no cycle time data available.**. Example: 211.82.", + "robot_fault_prediction|operation|AxisCountVal": "BIGINT. Number of axes in the robot. Possible values: 4, 5, 6, 7.", + "robot_fault_prediction|joint_performance|JPerfID": "BIGSERIAL. Unique identifier for the joint performance record. PK.", + "robot_fault_prediction|joint_performance|JPerfOperRef": "TEXT. Foreign key to the operation table (OperReg). 
FK to operation.", + "robot_fault_prediction|joint_performance|JPerfDetRef": "TEXT. Foreign key to the robot_details table (BotDetReg). FK to robot_details.", + "robot_fault_prediction|joint_condition|JCondID": "BIGSERIAL. Unique identifier for the joint condition record. PK.", + "robot_fault_prediction|joint_condition|JCondOperRef": "TEXT. Foreign key to the operation table (OperReg). FK to operation.", + "robot_fault_prediction|joint_condition|JCDetRef": "TEXT. Foreign key to the robot_details table (BotDetReg). FK to robot_details.", + "robot_fault_prediction|actuation_data|ActReg": "TEXT. Unique actuation record identifier. PK.", + "robot_fault_prediction|actuation_data|ActOperRef": "TEXT. Foreign key to the operation table (OperReg). FK to operation.", + "robot_fault_prediction|actuation_data|ActRecRef": "TEXT. Foreign key to the robot_record table (BotCode). FK to robot_record.", + "robot_fault_prediction|actuation_data|TCPXVal": "REAL. TCP X-coordinate in mm. **NULL means TCP pose data unavailable.**. Example: 1275.23.", + "robot_fault_prediction|actuation_data|TCPYVal": "REAL. TCP Y-coordinate in mm. **NULL means TCP pose data unavailable.**. Example: 873.0.", + "robot_fault_prediction|actuation_data|TCPZVal": "REAL. TCP Z-coordinate in mm. **NULL means TCP pose data unavailable.**. Example: 1618.63.", + "robot_fault_prediction|actuation_data|TCP_RxVal": "REAL. TCP rotation around the X-axis in radians. Example: -156.28.", + "robot_fault_prediction|actuation_data|TCP_RyVal": "REAL. TCP rotation around the Y-axis in radians. Example: -150.3.", + "robot_fault_prediction|actuation_data|TCP_RzVal": "REAL. TCP rotation around the Z-axis in radians. Example: -6.26.", + "robot_fault_prediction|actuation_data|TCPSpeedVal": "REAL. Speed of the TCP in mm/s. Example: 1231.14.", + "robot_fault_prediction|actuation_data|TCPAccelVal": "REAL. Acceleration of the TCP in mm/s². Example: 6.65.", + "robot_fault_prediction|actuation_data|PathAccMMVal": "REAL. Path accuracy in mm. Example: 0.797.", + "robot_fault_prediction|actuation_data|PosErrMMVal": "REAL. Position error in mm. **NULL means no position error data available.**. Example: 0.069.", + "robot_fault_prediction|actuation_data|OrientErrDegVal": "REAL. Orientation error in degrees. **NULL means no orientation error data available.**. Example: 0.471.", + "robot_fault_prediction|actuation_data|PayloadWVal": "REAL. Payload weight in kg. **NULL means no payload data available.**. Example: 144.85.", + "robot_fault_prediction|actuation_data|PayloadIVal": "REAL. Payload current in Amps. Example: 1.78.", + "robot_fault_prediction|actuation_data|M1CurrVal": "REAL. Motor 1 current in Amps. **NULL means no motor 1 data available.**. Example: 6.58.", + "robot_fault_prediction|actuation_data|M2CurrVal": "REAL. Motor 2 current in Amps. **NULL means no motor 2 data available.**. Example: 8.61.", + "robot_fault_prediction|actuation_data|M3CurrVal": "REAL. Motor 3 current in Amps. **NULL means no motor 3 data available.**. Example: 3.31.", + "robot_fault_prediction|actuation_data|M4CurrVal": "REAL. Motor 4 current in Amps. **NULL means no motor 4 data available.**. Example: 14.16.", + "robot_fault_prediction|actuation_data|M5CurrVal": "REAL. Motor 5 current in Amps. **NULL means no motor 5 data available.**. Example: 6.16.", + "robot_fault_prediction|actuation_data|M6CurrVal": "REAL. Motor 6 current in Amps. **NULL means no motor 6 data available.**. Example: 1.81.", + "robot_fault_prediction|actuation_data|M1VoltVal": "REAL. Motor 1 voltage in Volts. 
Example: 1.26.", + "robot_fault_prediction|actuation_data|M2VoltVal": "REAL. Motor 2 voltage in Volts. Example: 15.66.", + "robot_fault_prediction|actuation_data|M3VoltVal": "REAL. Motor 3 voltage in Volts. Example: 10.6.", + "robot_fault_prediction|actuation_data|M4VoltVal": "REAL. Motor 4 voltage in Volts. Example: 9.13.", + "robot_fault_prediction|actuation_data|M5VoltVal": "REAL. Motor 5 voltage in Volts. Example: 28.47.", + "robot_fault_prediction|actuation_data|M6VoltVal": "REAL. Motor 6 voltage in Volts. Example: 32.12.", + "robot_fault_prediction|mechanical_status|MechActRef": "TEXT. Foreign key to the actuation_data table (ActReg). FK to actuation_data.", + "robot_fault_prediction|mechanical_status|MechOperRef": "TEXT. Foreign key to the operation table (OperReg). PK, FK to operation.", + "robot_fault_prediction|mechanical_status|MechDetRef": "TEXT. Foreign key to the robot_details table (BotDetReg). FK to robot_details.", + "robot_fault_prediction|system_controller|SystemOverseerActuation": "TEXT. Foreign key to the actuation_data table (ActReg). PK, FK to actuation_data.", + "robot_fault_prediction|system_controller|SystemOverseerOperation": "TEXT. Foreign key to the operation table (OperReg). FK to operation.", + "robot_fault_prediction|system_controller|OverseerLoadValue": "REAL. Load value of the system overseer. Example: 0.99.", + "robot_fault_prediction|system_controller|MemUseVal": "REAL. Memory usage of the system overseer. Example: 32.07.", + "robot_fault_prediction|system_controller|OverseerThermalLevel": "TEXT. Thermal level of the system overseer. Example: 38.5°C.", + "robot_fault_prediction|system_controller|CabTempVal": "REAL. Cabinet temperature of the system overseer. Example: 33.84.", + "robot_fault_prediction|system_controller|CabHumidityLevel": "TEXT. Humidity level in the cabinet of the system overseer. Example: 47%.", + "robot_fault_prediction|maintenance_and_fault|UpkeepActuation": "TEXT. Foreign key to the actuation_data table (ActReg). PK, FK to actuation_data.", + "robot_fault_prediction|maintenance_and_fault|UpkeepOperation": "TEXT. Foreign key to the operation table (OperReg). FK to operation.", + "robot_fault_prediction|maintenance_and_fault|FaultCodeVal": "TEXT. Fault code identifier. **NULL means no fault code provided.**. Example: E8902.", + "robot_fault_prediction|maintenance_and_fault|IssueCategoryVal": "TEXT. Category of the issue in the fault. Possible values: COM, ELE, MEC, NON, SOF.", + "robot_fault_prediction|maintenance_and_fault|IssueLevelVal": "TEXT. Level of the issue in the fault. Possible values: Critical level, High level, Low level, Medium level, None level.", + "robot_fault_prediction|maintenance_and_fault|FaultPredScore": "REAL. Fault prediction score. **NULL means no fault prediction score available.**. Example: 0.021.", + "robot_fault_prediction|maintenance_and_fault|FaultTypeEstimation": "TEXT. Fault type estimation. **NULL means no fault type estimation available.**. Possible values: Controller, Gearbox, Joint, Motor.", + "robot_fault_prediction|maintenance_and_fault|RULHours": "BIGINT. Remaining useful life in hours. Example: 1601.", + "robot_fault_prediction|maintenance_and_fault|UpkeepDueDays": "BIGINT. Days until the next maintenance is due. Example: 16.", + "robot_fault_prediction|maintenance_and_fault|UpkeepCostEst": "TEXT. Estimated upkeep cost. Example: $7,299.59.", + "robot_fault_prediction|performance_and_safety|EffectivenessActuation": "TEXT. Foreign key to the actuation_data table (ActReg). 
PK, FK to actuation_data.", + "robot_fault_prediction|performance_and_safety|EffectivenessRobot": "TEXT. Foreign key to the robot_details table (BotDetReg). FK to robot_details.", + "robot_fault_prediction|performance_and_safety|ConditionIndexVal": "REAL. Condition index value of the robot. **NULL means no condition index data available.**. Example: 0.152.", + "robot_fault_prediction|performance_and_safety|EffectivenessIndexVal": "REAL. Effectiveness index value of the robot. **NULL means no effectiveness index data available.**. Example: 0.603.", + "robot_fault_prediction|performance_and_safety|QualityMeasureVal": "REAL. Quality measure value for the robot. **NULL means no quality measure data available.**. Example: 0.337.", + "robot_fault_prediction|performance_and_safety|EnergyUseKWHVal": "TEXT. Energy use in KWH. **NULL means no energy usage data available.**. Example: 70.54 kWh.", + "robot_fault_prediction|performance_and_safety|PwrFactorVal": "TEXT. Power factor value. **NULL means no power factor data available.**. Example: PF=0.82.", + "robot_fault_prediction|performance_and_safety|AirPressVal": "REAL. Air pressure value in the system. Example: 6.69.", + "robot_fault_prediction|performance_and_safety|SafetyStateVal": "TEXT. Safety state value of the robot. Possible values: Warning, ✓ Normal, ✗ Emergency.", + "robot_fault_prediction|performance_and_safety|ZoneViolNum": "BIGINT. Number of zone violations. Example: 1.", + "robot_fault_prediction|performance_and_safety|EmergencyStopCount": "BIGINT. Count of emergency stops. Possible values: 0, 1, 2, 3, 4, 5.", + "robot_fault_prediction|performance_and_safety|CollisionCount": "BIGINT. Count of collisions. Possible values: 0, 1, 2, 3.", + "robot_fault_prediction|performance_and_safety|OverloadCnt": "BIGINT. Count of overloads. Possible values: 0, 1, 2, 3, 4, 5.", + "robot_fault_prediction|performance_and_safety|SpeedViolNum": "BIGINT. Number of speed violations. Example: 5.", + "robot_fault_prediction|performance_and_safety|CalibStateVal": "CHAR(20). Calibration state value of the robot. Possible values: 0, N, Y.", + "robot_fault_prediction|performance_and_safety|ToolChangeCount": "BIGINT. Number of tool changes. **NULL means no tool change data available.**. Example: 940.0.", + "robot_fault_prediction|joint_performance|joint_metrics": { + "column_meaning": "JSONB column. Captures the kinematic performance of every joint (angle, speed, and torque) in a single hierarchical JSON structure for quick retrieval.", + "fields_meaning": { + "J1": { + "angle_deg": "REAL. Joint 1 angle value in degrees. Example: -37.72.", + "speed_dps": "REAL. Joint 1 speed value in radians per second. Example: 36.74.", + "torque_nm": "REAL. Joint 1 torque value in Nm. Example: 12.0." + }, + "J2": { + "angle_deg": "REAL. Joint 2 angle value in degrees. Example: 177.36.", + "speed_dps": "REAL. Joint 2 speed value in radians per second. Example: 65.06.", + "torque_nm": "REAL. Joint 2 torque value in Nm. Example: 79.88." + }, + "J3": { + "angle_deg": "REAL. Joint 3 angle value in degrees. Example: 83.27.", + "speed_dps": "REAL. Joint 3 speed value in radians per second. Example: 174.45.", + "torque_nm": "REAL. Joint 3 torque value in Nm. Example: 55.91." + }, + "J4": { + "angle_deg": "REAL. Joint 4 angle value in degrees. Example: -151.1.", + "speed_dps": "REAL. Joint 4 speed value in radians per second. Example: 96.65.", + "torque_nm": "REAL. Joint 4 torque value in Nm. Example: 75.22." + }, + "J5": { + "angle_deg": "REAL. Joint 5 angle value in degrees. 
Example: -162.94.", + "speed_dps": "REAL. Joint 5 speed value in radians per second. Example: 167.12.", + "torque_nm": "REAL. Joint 5 torque value in Nm. Example: 2.14." + }, + "J6": { + "angle_deg": "REAL. Joint 6 angle value in degrees. Example: -72.08.", + "speed_dps": "REAL. Joint 6 speed value in radians per second. Example: 69.94.", + "torque_nm": "REAL. Joint 6 torque value in Nm. Example: 45.07." + } + } + }, + "robot_fault_prediction|joint_condition|joint_health": { + "column_meaning": "JSONB column. Consolidates thermal, vibration, and backlash indicators that describe the health of each joint at the time of capture.", + "fields_meaning": { + "J1": { + "temperature_C": "REAL. Joint 1 temperature value in Celsius. Example: 20.57.", + "vibration_mmps": "REAL. Joint 1 vibration value in mm/s. Example: 1.14.", + "backlash_deg": "REAL. Joint 1 backlash value in mm. Example: 0.0352." + }, + "J2": { + "temperature_C": "REAL. Joint 2 temperature value in Celsius. Example: 39.54.", + "vibration_mmps": "REAL. Joint 2 vibration value in mm/s. Example: 1.636.", + "backlash_deg": "REAL. Joint 2 backlash value in mm. Example: 0.0272." + }, + "J3": { + "temperature_C": "REAL. Joint 3 temperature value in Celsius. Example: 42.16.", + "vibration_mmps": "REAL. Joint 3 vibration value in mm/s. Example: 1.687.", + "backlash_deg": "REAL. Joint 3 backlash value in mm. Example: 0.0946." + }, + "J4": { + "temperature_C": "REAL. Joint 4 temperature value in Celsius. Example: 34.88.", + "vibration_mmps": "REAL. Joint 4 vibration value in mm/s. Example: 3.264.", + "backlash_deg": "REAL. Joint 4 backlash value in mm. Example: 0.056." + }, + "J5": { + "temperature_C": "REAL. Joint 5 temperature value in Celsius. Example: 70.15.", + "vibration_mmps": "REAL. Joint 5 vibration value in mm/s. Example: 2.052.", + "backlash_deg": "REAL. Joint 5 backlash value in mm. Example: 0.0907." + }, + "J6": { + "temperature_C": "REAL. Joint 6 temperature value in Celsius. Example: 64.39.", + "vibration_mmps": "REAL. Joint 6 vibration value in mm/s. Example: 6.422.", + "backlash_deg": "REAL. Joint 6 backlash value in mm. Example: 0.046." + } + } + }, + "robot_fault_prediction|mechanical_status|component_status": { + "column_meaning": "JSONB column. Bundles together the real-time status of brakes, encoders, and gearbox health for easier diagnostics and alerting.", + "fields_meaning": { + "brakes": { + "J1": "TEXT. Brake 1 status. **NULL means no brake 1 status data available.**. Possible values: Error, Normal, Warning.", + "J2": "TEXT. Brake 2 status. Possible values: Error, Normal, Warning.", + "J3": "TEXT. Brake 3 status. Possible values: Error, Normal, Warning.", + "J4": "TEXT. Brake 4 status. Possible values: Error, Normal, Warning.", + "J5": "TEXT. Brake 5 status. Possible values: Error, Normal, Warning.", + "J6": "TEXT. Brake 6 status. Possible values: Error, Normal, Warning." + }, + "encoders": { + "J1": "TEXT. Encoder 1 status. **NULL means no encoder 1 status data available.**. Possible values: Error, Normal, Warning.", + "J2": "TEXT. Encoder 2 status. Possible values: Error, Normal, Warning.", + "J3": "TEXT. Encoder 3 status. Possible values: Error, Normal, Warning.", + "J4": "TEXT. Encoder 4 status. Possible values: Error, Normal, Warning.", + "J5": "TEXT. Encoder 5 status. Possible values: Error, Normal, Warning.", + "J6": "TEXT. Encoder 6 status. Possible values: Error, Normal, Warning." + }, + "gearboxes": { + "J1": { + "temperature_C": "REAL. Gearbox 1 temperature in Celsius. 
Example: 24.92.", + "vibration_mmps": "REAL. Gearbox 1 vibration in mm/s. Example: 3.273." + }, + "J2": { + "temperature_C": "REAL. Gearbox 2 temperature in Celsius. Example: 79.14.", + "vibration_mmps": "REAL. Gearbox 2 vibration in mm/s. Example: 1.912." + }, + "J3": { + "temperature_C": "REAL. Gearbox 3 temperature in Celsius. Example: 56.76.", + "vibration_mmps": "REAL. Gearbox 3 vibration in mm/s. Example: 1.361." + }, + "J4": { + "temperature_C": "REAL. Gearbox 4 temperature in Celsius. Example: 35.39.", + "vibration_mmps": "REAL. Gearbox 4 vibration in mm/s. Example: 5.302." + }, + "J5": { + "temperature_C": "REAL. Gearbox 5 temperature in Celsius. Example: 60.72.", + "vibration_mmps": "REAL. Gearbox 5 vibration in mm/s. Example: 7.001." + }, + "J6": { + "temperature_C": "REAL. Gearbox 6 temperature in Celsius. Example: 48.18.", + "vibration_mmps": "REAL. Gearbox 6 vibration in mm/s. Example: 5.74." + } + } + } + } +} \ No newline at end of file diff --git a/robot_fault_prediction/robot_fault_prediction_kb.jsonl b/robot_fault_prediction/robot_fault_prediction_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..43656329e7ffc39b8bebe83b95a1d55c24e4c7e5 --- /dev/null +++ b/robot_fault_prediction/robot_fault_prediction_kb.jsonl @@ -0,0 +1,65 @@ +{"id": 0, "knowledge": "Robot Age", "description": "Calculates the operational age of a robot in days since its commissioning date.", "definition": "The duration in days from the robot's installation date to the present. Formula: CURRENT_DATE - installation_date.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Payload Utilization Ratio (PUR)", "description": "Measures the ratio of the robot's actual working payload to its maximum rated payload capacity, indicating how heavily it is loaded.", "definition": "The proportion of the robot's maximum lifting capacity being used. Formula: actual_payload_weight / payload_capacity_kg.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Mean Time Between Stoppages (MTBS)", "description": "Calculates the average operating hours between emergency stop events, a key indicator of operational reliability.", "definition": "The total operational hours divided by the number of emergency stop events. Formula: total_operational_hours / emergency_stop_count.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Supply Sufficiency (Days)", "description": "Estimates for how many days the current supplies can last. This is determined by the most limiting resource (either food or water).", "definition": "Calculates the minimum number of days supplies can last, assuming a daily consumption of 2kg of food and 3 liters of water per person. Formula: LEAST((food_tons * 1000) / (affected_population * 2), water_liters / (affected_population * 3)).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Average Joint Temperature", "description": "Calculates the average temperature across all of the robot's joints.", "definition": "The mean temperature across all six joints (J1 to J6). Formula: (J1_temp + ... 
+ J6_temp) / 6.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Average Joint Vibration", "description": "Calculates the average vibration level across all of the robot's joints.", "definition": "The mean vibration level (in mm/s) across all of a robot's joints, calculated by averaging the 'vibration_mmps' values from the 'joint_health' JSONB field.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Controller Stress Score", "description": "A combined score representing the current processing and memory load on the system controller.", "definition": "A composite score representing controller load. Formula: (overseer_load_value + memory_usage_value) / 2.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Safety Incident Rate", "description": "Calculates the number of safety incidents (collisions, zone, and speed violations) per 1000 hours of operation.", "definition": "The frequency of safety violations per 1000 operational hours. Formula: (collision_count + zone_violations + speed_violations) * 1000 / total_operational_hours.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Relative Positional Error", "description": "Normalizes the Cartesian position error against the robot's maximum reach to provide a scale-independent accuracy metric.", "definition": "The robot's positional error as a fraction of its maximum reach. Formula: positional_error_mm / reach_mm.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, "knowledge": "Joint Temperature Differential", "description": "Measures the temperature difference between the hottest and coolest joints, which can indicate localized issues.", "definition": "The difference between the maximum and minimum temperature across all six joints (J1 to J6). Formula: GREATEST(J1_temp, ..., J6_temp) - LEAST(J1_temp, ..., J6_temp).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Remaining Useful Life in Days", "description": "Converts the remaining useful life from hours to days for easier planning.", "definition": "The robot's projected operational lifespan in days. Formula: remaining_useful_life_hours / 24.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Budget Runway (Days)", "description": "Estimates the number of days until the allocated budget is depleted, based on the current burn rate.", "definition": "The estimated number of days an operation can continue before its budget is exhausted. Formula: (allocated_budget - total_operational_costs) / Financial Burn Rate.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 12, "knowledge": "Throughput Rate", "description": "Calculates the number of cycles completed per minute, as a direct measure of production throughput.", "definition": "The number of program cycles a robot completes per minute. Formula: 60 / cycle_time_seconds.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Overload Frequency", "description": "Calculates the frequency of overload events per 1000 operating hours.", "definition": "The number of overload events per 1000 operational hours. 
Formula: overload_count / (total_operational_hours / 1000.0).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Response Effectiveness Score (RES)", "description": "A composite score measuring the overall effectiveness of the response operation.", "definition": "A weighted score of an operation's success. Formula: (0.4 * success_rate) + (0.3 * bene_feedbackscore * 10) + (0.3 * distequityidx * 100).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Average Backlash", "description": "Calculates the average mechanical backlash across all of the robot's joints, an indicator of gear wear.", "definition": "The mean backlash (in degrees) across all of a robot's joints.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Maintenance Cost Per Hour", "description": "Estimates the maintenance cost per hour of operation based on the next scheduled upkeep.", "definition": "The robot's estimated upkeep cost divided by its total operational hours. Formula: upkeep_cost_est / total_ops_hours.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 17, "knowledge": "Predictive Maintenance Urgency (PMU)", "description": "A score that quantifies the urgency of performing maintenance based on predictive models.", "definition": "A risk score combining remaining useful life (RUL) and fault prediction. Formula: (1000 / (rulhours + 1)) + (faultpredscore * 10).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Mechanical Wear Score", "description": "A calculated indicator that combines vibration and backlash data to estimate overall mechanical degradation.", "definition": "A weighted score of mechanical wear. Formula: (0.6 * Average Joint Vibration) + (0.4 * Average Backlash).", "type": "calculation_knowledge", "children_knowledge": [5, 15]} +{"id": 19, "knowledge": "Health Degradation Rate (HDR)", "description": "Calculates the rate at which the robot's composite health index is declining relative to its age.", "definition": "The rate of decline in a robot's health index per day. Formula: (1.0 - condition_index) / robot_age_in_days.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 20, "knowledge": "Financial Health Score (FHS)", "description": "An index assessing the financial stability of an operation based on budget and spending.", "definition": "A score assessing financial stability, calculated by averaging the budget remaining percentage and funding sufficiency. 
Formula: 0.5 * (100 - funds_util_pct) + 0.5 * ((budgetallotusd - COALESCE(resource_gaps_usd, 0)) / NULLIF(budgetallotusd, 0)) * 100.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Safety State Meanings", "description": "Illustrates the operational meaning of the robot's various safety states.", "definition": "Key safety states are: '✓ Normal' (normal operation), 'Protective Stop' (database value 'Warning'), and '✗ Emergency' (critical stop, database value '✗ Emergency').", "type": "value_illustration", "children_knowledge": -1} +{"id": 22, "knowledge": "Calibration State Meanings", "description": "Clarifies the status of the robot's calibration.", "definition": "Calibration states are: 'Success' (database value 'Y'), 'Failed' (database value '0'), or 'Pending' (database value 'N').", "type": "value_illustration", "children_knowledge": -1} +{"id": 23, "knowledge": "Imminent Supply Depletion Alert", "description": "A critical warning that essential life-sustaining supplies will run out within 48 hours.", "definition": "An 'Imminent Supply Depletion Alert' is triggered when the calculated Supply Sufficiency (Days) is less than 2.", "type": "domain_knowledge", "children_knowledge": [3]} +{"id": 24, "knowledge": "Fault Prediction Score Tiers", "description": "Categorizes the raw fault prediction probability score into actionable tiers.", "definition": "A Fault Prediction Score is 'Low' if < 0.3 (normal operation), 'Medium' if 0.3-0.7 (monitoring required), or 'High' if > 0.7 (high probability of impending fault).", "type": "value_illustration", "children_knowledge": -1} +{"id": 25, "knowledge": "Condition Index Tiers", "description": "Translates the composite health index score into qualitative ratings of the robot's condition.", "definition": "A Condition Index is 'Excellent' if > 0.9, 'Good' if 0.7-0.9, 'Fair' if 0.5-0.7, or 'Poor' if < 0.5 (significant wear or risk).", "type": "value_illustration", "children_knowledge": -1} +{"id": 26, "knowledge": "Public Health Emergency", "description": "A state where the risk of disease is high and the capacity to respond is critically low.", "definition": "A 'Public Health Emergency' exists if the Public Health Risk Score (PHRS) exceeds a critical threshold of 70.", "type": "domain_knowledge", "children_knowledge": [19]} +{"id": 27, "knowledge": "CPU Load Levels", "description": "Categorizes the controller's CPU load to identify processing strain.", "definition": "CPU load is 'Normal' if < 70%, 'High' if 70%-90%, or 'Critical' if > 90% (risks performance degradation).", "type": "value_illustration", "children_knowledge": -1} +{"id": 28, "knowledge": "Highly Effective Operation", "description": "An operation that demonstrates excellence across coordination, logistics, and beneficiary satisfaction.", "definition": "An operation is 'Highly Effective' if its Response Effectiveness Score (RES) is greater than 85 AND its Coordination Quality Index (CQI) is greater than 85.", "type": "domain_knowledge", "children_knowledge": [14, 15]} +{"id": 29, "knowledge": "Cabinet Humidity Risk", "description": "Defines risk levels based on the humidity inside the control cabinet.", "definition": "Humidity is 'Safe' if < 60%, a 'Warning' if 60%-80% (corrosion risk), or 'Danger' if > 80% (risk of electrical faults).", "type": "value_illustration", "children_knowledge": -1} +{"id": 30, "knowledge": "Payload Capacity Class", "description": "Classifies robots into weight categories based on their rated payload capacity.", "definition": 
"Classes are: 'Light-Duty' (< 20 kg), 'Medium-Duty' (20-150 kg), or 'Heavy-Duty' (>= 150 kg).", "type": "value_illustration", "children_knowledge": -1} +{"id": 31, "knowledge": "Overwhelmed Logistical Hub", "description": "Flags a distribution hub that is under extreme pressure and likely a bottleneck in the supply chain.", "definition": "A hub is 'Overwhelmed' if its Hub Strain Index (HSI) is greater than 1,000,000.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 32, "knowledge": "Failing Operation", "description": "An operation characterized by poor logistical performance and low effectiveness.", "definition": "An operation is 'Failing' if its Logistical Throughput per Vehicle (LTV) is below 1.0 OR its Response Effectiveness Score (RES) is below 40.", "type": "domain_knowledge", "children_knowledge": [6, 14]} +{"id": 33, "knowledge": "Air Pressure Status", "description": "Defines the status of the pneumatic system based on air pressure readings.", "definition": "Air pressure is 'Normal' if 5.5-7.0 bar, 'Low' if < 5.5 bar, or 'High' if > 7.0 bar.", "type": "value_illustration", "children_knowledge": -1} +{"id": 34, "knowledge": "Backlash Severity", "description": "Categorizes joint backlash measurements into severity levels.", "definition": "Backlash is 'Low' if < 0.01 degrees, 'Medium' if 0.01-0.05 degrees (early wear), or 'High' if > 0.05 degrees (significant wear). This applies to the average backlash across all joints.", "type": "value_illustration", "children_knowledge": [15]} +{"id": 35, "knowledge": "High Cost, Low Impact Operation", "description": "An operation that is financially expensive but is having little positive effect on the ground.", "definition": "An operation is 'High Cost, Low Impact' if its Cost Per Person Affected (CPPA) is above average AND its Response Effectiveness Score (RES) is below average.", "type": "domain_knowledge", "children_knowledge": [5, 14]} +{"id": 36, "knowledge": "Brake Status Interpretation", "description": "Defines the meaning of brake status codes.", "definition": "Key statuses are: 'Released' (disengaged for movement), 'Applied' (engaged to hold position), 'Replaced' (newly installed component), or 'Slipping' (fault condition).", "type": "value_illustration", "children_knowledge": -1} +{"id": 37, "knowledge": "Firmware Lifecycle Stage", "description": "Categorizes firmware versions to manage upgrades and support.", "definition": "Stages are: 'Current' (latest stable release), 'Supported' (previous but supported), or 'End-of-Life' (unsupported and a potential risk). All firmware versions starting with '3.' 
are considered End-of-Life.", "type": "value_illustration", "children_knowledge": -1} +{"id": 38, "knowledge": "Robot Architectures", "description": "An enumeration of the primary robot family types based on their kinematic structure.", "definition": "Core robot architectures include: 'Articulated', 'SCARA', 'Delta', and 'Cartesian'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 39, "knowledge": "Common Application Groups", "description": "An enumeration of common business applications grouped by process type.", "definition": "Application groups include: 'Material Handling' (e.g., Picking, Packing, Palletizing), 'Processing' (e.g., Grinding, Polishing), 'Assembly', 'Welding', and 'Standby'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 40, "knowledge": "High-Wear Applications", "description": "An enumeration of robot applications known to cause high levels of mechanical wear and tear.", "definition": "High-Wear Applications include 'Welding', 'Grinding', 'Deburring', 'Palletizing', and 'Machine Tending'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 41, "knowledge": "Precision-Critical Applications", "description": "An enumeration of robot applications that demand the highest degree of positional and orientational accuracy.", "definition": "Precision-Critical Applications include 'Assembly', 'Electronics-Handling', 'Inspection', 'Dispensing', and 'Laser Cutting'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 42, "knowledge": "Payload Overload Condition", "description": "A rule to determine if a robot is operating beyond its rated payload capacity.", "definition": "A Payload Overload Condition is active if the Payload Utilization Ratio (PUR) is greater than 1.0.", "type": "domain_knowledge", "children_knowledge": [1]} +{"id": 43, "knowledge": "Thermal Anomaly", "description": "Identifies a potential thermal issue in a robot's joints based on temperature deviations.", "definition": "A Thermal Anomaly is flagged if the Average Joint Temperature exceeds 75°C or if the Joint Temperature Differential is greater than 15°C.", "type": "domain_knowledge", "children_knowledge": [4, 9]} +{"id": 44, "knowledge": "Mechanical Wear Score Tiers", "description": "Categorizes the Mechanical Wear Score into severity levels.", "definition": "A Mechanical Wear Score is 'Low' if < 0.5, 'Medium' if 0.5 - 0.8, and 'High' if > 0.8.", "type": "value_illustration", "children_knowledge": -1} +{"id": 45, "knowledge": "Underutilized Asset", "description": "Identifies a robot that is significantly underused relative to its age.", "definition": "A robot is considered an Underutilized Asset if its Robot Age is greater than 365 days and its cumulative operating hours are less than 2000.", "type": "domain_knowledge", "children_knowledge": [0]} +{"id": 46, "knowledge": "Reliability Risk", "description": "Assesses the robot's operational reliability based on its history of unplanned stops.", "definition": "A robot is at 'High Reliability Risk' if its Mean Time Between Stoppages (MTBS) is less than 500 hours.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 47, "knowledge": "Mechanical Degradation Alert", "description": "A composite alert for significant mechanical wear based on multiple indicators.", "definition": "A Mechanical Degradation Alert is triggered if the robot's Average Backlash corresponds to a 'High' Backlash Severity, or its Average Joint Vibration exceeds 3.0 mm/s.", "type": "domain_knowledge", "children_knowledge": [5, 
34]} +{"id": 48, "knowledge": "Predictive Maintenance Trigger", "description": "A rule that triggers a high-priority maintenance work order based on predictive data.", "definition": "A Predictive Maintenance Trigger is activated if the Predictive Maintenance Urgency (PMU) score is greater than 50.", "type": "domain_knowledge", "children_knowledge": [17]} +{"id": 49, "knowledge": "High Safety-Risk Unit", "description": "Identifies a robot with a history of frequent or severe safety incidents.", "definition": "A robot is classified as a High Safety-Risk Unit if its Safety Incident Rate is greater than 2.0 OR its cumulative collision count is greater than 5.", "type": "domain_knowledge", "children_knowledge": [7]} +{"id": 50, "knowledge": "Precision Performance Degradation", "description": "Evaluates if a robot's accuracy is suitable for its task.", "definition": "Precision Performance Degradation is flagged if the Relative Positional Error exceeds 0.0001.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 51, "knowledge": "Catastrophic Disaster", "description": "Classifies a disaster as 'Catastrophic' based on the DSI, indicating a need for maximum-level international response.", "definition": "A 'Catastrophic Disaster' is any event where the Overall Disaster Severity Index (DSI) is greater than 15,000.", "type": "domain_knowledge", "children_knowledge": [16]} +{"id": 52, "knowledge": "Workhorse Robot", "description": "Identifies highly utilized, mature robots that are critical to production.", "definition": "A robot is classified as a 'Workhorse' if its Robot Age is greater than 1,825 days (5 years) AND its cumulative operating hours exceed 40,000.", "type": "domain_knowledge", "children_knowledge": [0]} +{"id": 53, "knowledge": "Urgent Maintenance Required", "description": "Identifies a robot that requires immediate maintenance based on its predicted remaining life or scheduled due date.", "definition": "A robot requires 'Urgent Maintenance' if its Remaining Useful Life in Days is less than 7 OR its next scheduled maintenance is overdue (UpkeepDueDays <= 0).", "type": "domain_knowledge", "children_knowledge": [10]} +{"id": 54, "knowledge": "Legacy Robot", "description": "Identifies a robot that is old and may be a candidate for replacement or major overhaul.", "definition": "A robot is considered a 'Legacy Robot' if its Robot Age is greater than 3650 days (10 years).", "type": "domain_knowledge", "children_knowledge": [0]} +{"id": 55, "knowledge": "Medical Staff Ratio", "description": "The number of medical staff available per 1000 affected people.", "definition": "A ratio indicating medical staff availability. 
Formula: (medical_staff_count / affected_population) * 1000.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 56, "knowledge": "Imminent Failure Warning", "description": "Issues a high-priority warning when multiple indicators suggest an impending failure.", "definition": "An 'Imminent Failure Warning' is active if a robot has a 'High' Fault Prediction Score Tier AND is also flagged for Urgent Maintenance Required.", "type": "domain_knowledge", "children_knowledge": [24, 53]} +{"id": 57, "knowledge": "High Burn Rate Alert", "description": "An alert for operations that have consumed a significant portion of their budget.", "definition": "A 'High Burn Rate' alert is triggered for an operation if its total costs exceed 20% of its total allocated budget.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 58, "knowledge": "Intensive Workload", "description": "Identifies a robot that is consistently operating at a high intensity level.", "definition": "A robot has an 'Intensive Workload' if its Payload Utilization Ratio is > 0.8 AND its Throughput Rate is in the top 20th percentile (quintile 1) for its application.", "type": "domain_knowledge", "children_knowledge": [1, 12]} +{"id": 59, "knowledge": "Degrading Robot", "description": "Flags a robot whose health is currently deteriorating.", "definition": "A robot is considered to be 'Degrading' if its Health Degradation Rate (HDR) is greater than 0.", "type": "domain_knowledge", "children_knowledge": [19]} +{"id": 60, "knowledge": "Top-Tier Usage Model", "description": "Identifies robot models that are in the top 25% of the fleet based on their cumulative program cycle count, indicating they are the most heavily used models.", "definition": "A robot model is a 'Top-Tier Usage Model' if its total program cycle count is in the 75th percentile or higher of all models.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 61, "knowledge": "Anomalous Controller Stress", "description": "Identifies a controller whose current stress score is disproportionately high compared to the average stress score of its specific model series.", "definition": "A controller is under 'Anomalous Stress' if its current real-time Controller Stress Score is more than 20% higher than the average stress score for its model series.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 62, "knowledge": "Critical Condition Robot", "description": "A robot that is simultaneously at risk from both thermal and mechanical issues.", "definition": "A robot is in 'Critical Condition' if it has both a 'Thermal Anomaly' AND a 'Mechanical Degradation Alert'.", "type": "domain_knowledge", "children_knowledge": [43, 47]} +{"id": 63, "knowledge": "Throughput Rate (Cycles per Hour)", "description": "Calculates the number of cycles completed per hour.", "definition": "The number of program cycles a robot completes per hour. Formula: 3600 / cycle_time_seconds.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 64, "knowledge": "High Safety-Risk", "description": "Identifies robots that pose a significant safety hazard based on their history of incidents, considering both absolute counts and incident rates.", "definition": "A robot is classified as 'High Safety-Risk' if it meets either of the following criteria: 1. It has experienced more than 5 total collisions. 2. 
Its combined rate of major incidents (collisions, zone violations, and speed violations) is greater than 2.0 per 1000 hours of operation.", "type": "domain_knowledge", "children_knowledge": -1} \ No newline at end of file diff --git a/robot_fault_prediction/robot_fault_prediction_schema.txt b/robot_fault_prediction/robot_fault_prediction_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..2f97cafe96c7dfc6a8ca3b1c3f313283c42428cd --- /dev/null +++ b/robot_fault_prediction/robot_fault_prediction_schema.txt @@ -0,0 +1,240 @@ +CREATE TABLE "joint_performance" ( +jperfid bigint NOT NULL DEFAULT nextval('joint_performance_jperfid_seq'::regclass), +jperfoperref text NULL, +jperfdetref text NULL, +joint_metrics jsonb NULL, + PRIMARY KEY (jperfid), + FOREIGN KEY (jperfoperref) REFERENCES operation(operreg), + FOREIGN KEY (jperfdetref) REFERENCES robot_details(botdetreg) +); + +First 3 rows: + jperfid jperfoperref jperfdetref joint_metrics +--------- -------------- ------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 5 OP_MCRH5P RB8530 {'J1': {'angle_deg': -83.54, 'speed_dps': 157.73, 'torque_nm': 59.95}, 'J2': {'angle_deg': 42.09, 'speed_dps': 68.7, 'torque_nm': 47.43}, 'J3': {'angle_deg': 135.47, 'speed_dps': 124.51, 'torque_nm': 55.59}, 'J4': {'angle_deg': -148.11, 'speed_dps': 68.99, 'torque_nm': 83.57}, 'J5': {'angle_deg': -79.32, 'speed_dps': 103.06, 'torque_nm': 21.06}, 'J6': {'angle_deg': 129.08, 'speed_dps': 85.43, 'torque_nm': 76.07}} + 6 OP_TZVO4Q RB4962 {'J1': {'angle_deg': 104.59, 'speed_dps': 171.76, 'torque_nm': 92.66}, 'J2': {'angle_deg': -74.8, 'speed_dps': 139.99, 'torque_nm': 25.65}, 'J3': {'angle_deg': 40.27, 'speed_dps': 55.57, 'torque_nm': 76.81}, 'J4': {'angle_deg': 98.29, 'speed_dps': 148.12, 'torque_nm': 98.86}, 'J5': {'angle_deg': 72.37, 'speed_dps': 64.7, 'torque_nm': 86.5}, 'J6': {'angle_deg': -124.89, 'speed_dps': 73.48, 'torque_nm': 82.62}} + 7 OP_R5HF3P RB6554 {'J1': {'angle_deg': -40.88, 'speed_dps': 122.45, 'torque_nm': 14.58}, 'J2': {'angle_deg': 119.78, 'speed_dps': 135.41, 'torque_nm': 6.19}, 'J3': {'angle_deg': -40.12, 'speed_dps': 109.32, 'torque_nm': 53.91}, 'J4': {'angle_deg': 139.65, 'speed_dps': 33.3, 'torque_nm': 20.42}, 'J5': {'angle_deg': 87.57, 'speed_dps': 55.34, 'torque_nm': 51.54}, 'J6': {'angle_deg': -51.28, 'speed_dps': 117.68, 'torque_nm': 90.76}} +... 
+ + +CREATE TABLE "joint_condition" ( +jcondid bigint NOT NULL DEFAULT nextval('joint_condition_jcondid_seq'::regclass), +jcondoperref text NULL, +jcdetref text NULL, +joint_health jsonb NULL, + PRIMARY KEY (jcondid), + FOREIGN KEY (jcondoperref) REFERENCES operation(operreg), + FOREIGN KEY (jcdetref) REFERENCES robot_details(botdetreg) +); + +First 3 rows: + jcondid jcondoperref jcdetref joint_health +--------- -------------- ---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 5 OP_MCRH5P RB8530 {'J1': {'backlash_deg': 0.0066, 'temperature_C': 54.8, 'vibration_mmps': 2.237}, 'J2': {'backlash_deg': 0.0619, 'temperature_C': 45.67, 'vibration_mmps': 5.44}, 'J3': {'backlash_deg': 0.0677, 'temperature_C': 47.96, 'vibration_mmps': 5.23}, 'J4': {'backlash_deg': 0.0066, 'temperature_C': 43.31, 'vibration_mmps': 4.054}, 'J5': {'backlash_deg': 0.0484, 'temperature_C': 38.88, 'vibration_mmps': 6.088}, 'J6': {'backlash_deg': 0.017, 'temperature_C': 25.62, 'vibration_mmps': 3.564}} + 6 OP_TZVO4Q RB4962 {'J1': {'backlash_deg': 0.079, 'temperature_C': 45.31, 'vibration_mmps': 2.637}, 'J2': {'backlash_deg': 0.014, 'temperature_C': 49.25, 'vibration_mmps': 9.917}, 'J3': {'backlash_deg': 0.0565, 'temperature_C': 47.23, 'vibration_mmps': 8.334}, 'J4': {'backlash_deg': 0.0673, 'temperature_C': 79.19, 'vibration_mmps': 9.775}, 'J5': {'backlash_deg': 0.0379, 'temperature_C': 22.43, 'vibration_mmps': 1.046}, 'J6': {'backlash_deg': 0.0841, 'temperature_C': 21.33, 'vibration_mmps': 3.324}} + 7 OP_R5HF3P RB6554 {'J1': {'backlash_deg': 0.0314, 'temperature_C': 64.21, 'vibration_mmps': 0.795}, 'J2': {'backlash_deg': 0.0007, 'temperature_C': 26.71, 'vibration_mmps': 4.509}, 'J3': {'backlash_deg': 0.0732, 'temperature_C': 36.36, 'vibration_mmps': 7.401}, 'J4': {'backlash_deg': 0.07, 'temperature_C': 55.33, 'vibration_mmps': 6.667}, 'J5': {'backlash_deg': 0.0148, 'temperature_C': 32.92, 'vibration_mmps': 4.557}, 'J6': {'backlash_deg': 0.04, 'temperature_C': 31.53, 'vibration_mmps': 8.408}} +... + + +CREATE TABLE "robot_record" ( +recreg text NULL, +rects timestamp without time zone NOT NULL, +botcode text NOT NULL, + PRIMARY KEY (botcode) +); + +First 3 rows: +recreg rects botcode +-------- ------------------- --------- +RF100725 2025-02-19 00:00:00 RB2073 +RF506310 2025-02-18 00:00:00 RB9067 +RF422033 2025-02-18 00:00:00 RB2996 +... 
+ + +CREATE TABLE "robot_details" ( +botdetreg text NOT NULL, +mfgnameval text NULL, +modelseriesval text NULL, +bottypeval character NULL, +payloadcapkg real NULL, +reachmmval bigint NULL, +instdateval date NULL, +fwversionval text NULL, +ctrltypeval text NULL, + PRIMARY KEY (botdetreg), + FOREIGN KEY (botdetreg) REFERENCES robot_record(botcode) +); + +First 3 rows: +botdetreg mfgnameval modelseriesval bottypeval payloadcapkg reachmmval instdateval fwversionval ctrltypeval +----------- ------------ ---------------- ------------- -------------- ------------ ------------- -------------- ------------- +RB2073 FANUC Series_784 Delta 5 1592 2023-06-10 9.6.6 Controller_C2 +RB9067 Yaskawa Series_892 Collaborative 5 1160 2022-09-14 3.3.7 Controller_C4 +RB2996 Yaskawa Series_525 Cartesian 200 2374 2022-11-19 4.6.6 Controller_B5 +... + + +CREATE TABLE "operation" ( +operreg text NOT NULL, +operrecref text NULL, +totopshrval real NULL, +apptypeval text NULL, +opermodeval character NULL, +currprogval text NULL, +progcyclecount bigint NULL, +cycletimesecval real NULL, +axiscountval bigint NULL, + PRIMARY KEY (operreg), + FOREIGN KEY (operrecref) REFERENCES robot_record(botcode) +); + +First 3 rows: +operreg operrecref totopshrval apptypeval opermodeval currprogval progcyclecount cycletimesecval axiscountval +--------- ------------ ------------- ------------ ------------- ------------- ---------------- ----------------- -------------- +OP_ES8D6H RB2073 MANU 177681 nan 7 +OP_0FUE4V RB9067 Painting MANU 498231 211.82 6 +OP_BNMLPS RB2996 MANU PRG_4901 508274 nan 5 +... + + +CREATE TABLE "actuation_data" ( +actreg text NOT NULL, +actoperref text NULL, +actrecref text NULL, +tcpxval real NULL, +tcpyval real NULL, +tcpzval real NULL, +tcp_rxval real NULL, +tcp_ryval real NULL, +tcp_rzval real NULL, +tcpspeedval real NULL, +tcpaccelval real NULL, +pathaccmmval real NULL, +poserrmmval real NULL, +orienterrdegval real NULL, +payloadwval real NULL, +payloadival real NULL, +m1currval real NULL, +m2currval real NULL, +m3currval real NULL, +m4currval real NULL, +m5currval real NULL, +m6currval real NULL, +m1voltval real NULL, +m2voltval real NULL, +m3voltval real NULL, +m4voltval real NULL, +m5voltval real NULL, +m6voltval real NULL, + PRIMARY KEY (actreg), + FOREIGN KEY (actoperref) REFERENCES operation(operreg), + FOREIGN KEY (actrecref) REFERENCES robot_record(botcode) +); + +First 3 rows: +actreg actoperref actrecref tcpxval tcpyval tcpzval tcp_rxval tcp_ryval tcp_rzval tcpspeedval tcpaccelval pathaccmmval poserrmmval orienterrdegval payloadwval payloadival m1currval m2currval m3currval m4currval m5currval m6currval m1voltval m2voltval m3voltval m4voltval m5voltval m6voltval +--------- ------------ ----------- --------- --------- --------- ----------- ----------- ----------- ------------- ------------- -------------- ------------- ----------------- ------------- ------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- +AC_DJOHX8 OP_ES8D6H RB2073 nan nan -156.28 -150.3 -6.26 1231.14 6.65 0.797 0.069 0.471 nan 1.78 nan 8.61 3.31 14.16 6.16 nan 1.26 15.66 10.6 9.13 28.47 32.12 +AC_U95O0H OP_0FUE4V RB9067 nan nan -153.02 153.21 -130.1 1923.65 2.02 0.835 nan 0.365 nan 8.7 6.58 nan 10.97 nan nan nan 46.58 43.21 38.36 13.53 40.07 30.29 +AC_HPP9RV OP_BNMLPS RB2996 873 1618.63 -133.16 -85.31 166.64 191.04 8.43 0.07 nan 0.234 144.85 8.85 nan nan 19.18 1.31 2.38 1.81 40.66 5.33 14.07 45.05 19.58 11.39 +... 
+ + +CREATE TABLE "mechanical_status" ( +mechactref text NULL, +mechoperref text NOT NULL, +mechdetref text NULL, +component_status jsonb NULL, + PRIMARY KEY (mechoperref), + FOREIGN KEY (mechactref) REFERENCES actuation_data(actreg), + FOREIGN KEY (mechoperref) REFERENCES operation(operreg), + FOREIGN KEY (mechdetref) REFERENCES robot_details(botdetreg) +); + +First 3 rows: +mechactref mechoperref mechdetref component_status +------------ ------------- ------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +AC_U95O0H OP_0FUE4V RB9067 {'brakes': {'J1': 'Warning', 'J2': 'Warning', 'J3': 'Error', 'J4': 'Warning', 'J5': 'Warning', 'J6': 'Warning'}, 'encoders': {'J1': None, 'J2': 'Warning', 'J3': 'Error', 'J4': 'Error', 'J5': 'Normal', 'J6': 'Warning'}, 'gearboxes': {'J1': {'temperature_C': 70.89, 'vibration_mmps': 9.278}, 'J2': {'temperature_C': 79.14, 'vibration_mmps': 5.005}, 'J3': {'temperature_C': 25.94, 'vibration_mmps': None}, 'J4': {'temperature_C': 24, 'vibration_mmps': 9.881}, 'J5': {'temperature_C': 60.1, 'vibration_mmps': 0.224}, 'J6': {'temperature_C': 41.26, 'vibration_mmps': 4.942}}} +AC_4CU2EX OP_BSYY54 RB3783 {'brakes': {'J1': None, 'J2': 'Warning', 'J3': 'Warning', 'J4': 'Warning', 'J5': 'Normal', 'J6': 'Warning'}, 'encoders': {'J1': 'Normal', 'J2': 'Warning', 'J3': 'Normal', 'J4': 'Warning', 'J5': 'Normal', 'J6': 'Error'}, 'gearboxes': {'J1': {'temperature_C': None, 'vibration_mmps': 9.911}, 'J2': {'temperature_C': None, 'vibration_mmps': 7.029}, 'J3': {'temperature_C': 25.69, 'vibration_mmps': None}, 'J4': {'temperature_C': 48.65, 'vibration_mmps': 2.817}, 'J5': {'temperature_C': 43.47, 'vibration_mmps': 4.913}, 'J6': {'temperature_C': 69.7, 'vibration_mmps': 9.421}}} +AC_CL809K OP_CNBZV4 RB7520 {'brakes': {'J1': 'Error', 'J2': 'Warning', 'J3': 'Error', 'J4': 'Normal', 'J5': 'Error', 'J6': 'Error'}, 'encoders': {'J1': 'Error', 'J2': 'Warning', 'J3': 'Warning', 'J4': 'Normal', 'J5': 'Warning', 'J6': 'Error'}, 'gearboxes': {'J1': {'temperature_C': None, 'vibration_mmps': 4.784}, 'J2': {'temperature_C': None, 'vibration_mmps': 9.827}, 'J3': {'temperature_C': 54.51, 'vibration_mmps': None}, 'J4': {'temperature_C': 60.29, 'vibration_mmps': 4.103}, 'J5': {'temperature_C': 77.06, 'vibration_mmps': 4.097}, 'J6': {'temperature_C': 68.85, 'vibration_mmps': 1.658}}} +... 
+ + +CREATE TABLE "system_controller" ( +systemoverseeractuation text NOT NULL, +systemoverseeroperation text NULL, +overseerloadvalue real NULL, +memuseval real NULL, +overseerthermallevel text NULL, +cabtempval real NULL, +cabhumiditylevel text NULL, + PRIMARY KEY (systemoverseeractuation), + FOREIGN KEY (systemoverseeractuation) REFERENCES actuation_data(actreg), + FOREIGN KEY (systemoverseeroperation) REFERENCES operation(operreg) +); + +First 3 rows: +systemoverseeractuation systemoverseeroperation overseerloadvalue memuseval overseerthermallevel cabtempval cabhumiditylevel +------------------------- ------------------------- ------------------- ----------- ---------------------- ------------ ------------------ +AC_DJOHX8 OP_ES8D6H 0.99 32.07 33.84 +AC_U95O0H OP_0FUE4V 1.31 7.01 23.05 +AC_HPP9RV OP_BNMLPS 58.24 96.98 28.68 +... + + +CREATE TABLE "maintenance_and_fault" ( +upkeepactuation text NOT NULL, +upkeepoperation text NULL, +faultcodeval text NULL, +issuecategoryval text NULL, +issuelevelval text NULL, +faultpredscore real NULL, +faulttypeestimation text NULL, +rulhours bigint NULL, +upkeepduedays bigint NULL, +upkeepcostest text NULL, + PRIMARY KEY (upkeepactuation), + FOREIGN KEY (upkeepactuation) REFERENCES actuation_data(actreg), + FOREIGN KEY (upkeepoperation) REFERENCES operation(operreg) +); + +First 3 rows: +upkeepactuation upkeepoperation faultcodeval issuecategoryval issuelevelval faultpredscore faulttypeestimation rulhours upkeepduedays upkeepcostest +----------------- ----------------- -------------- ------------------ --------------- ---------------- --------------------- ---------- --------------- --------------- +AC_DJOHX8 OP_ES8D6H E8902 NON Low level 0.021 Motor 1601 16 +AC_X062CP OP_82TO6O E4278 COM High level 0.793 Controller 3167 46 +AC_HLO6GZ OP_RQ18FZ E6585 SOF High level 0.343 Gearbox 3106 22 +... + + +CREATE TABLE "performance_and_safety" ( +effectivenessactuation text NOT NULL, +effectivenessrobot text NULL, +conditionindexval real NULL, +effectivenessindexval real NULL, +qualitymeasureval real NULL, +energyusekwhval text NULL, +pwrfactorval text NULL, +airpressval real NULL, +safetystateval text NULL, +zoneviolnum bigint NULL, +emergencystopcount bigint NULL, +collisioncount bigint NULL, +overloadcnt bigint NULL, +speedviolnum bigint NULL, +calibstateval character NULL, +toolchangecount bigint NULL, +toolwearpct text NULL, + PRIMARY KEY (effectivenessactuation), + FOREIGN KEY (effectivenessactuation) REFERENCES actuation_data(actreg), + FOREIGN KEY (effectivenessrobot) REFERENCES robot_details(botdetreg) +); + +First 3 rows: +effectivenessactuation effectivenessrobot conditionindexval effectivenessindexval qualitymeasureval energyusekwhval pwrfactorval airpressval safetystateval zoneviolnum emergencystopcount collisioncount overloadcnt speedviolnum calibstateval toolchangecount toolwearpct +------------------------ -------------------- ------------------- ----------------------- ------------------- ----------------- -------------- ------------- ---------------- ------------- -------------------- ---------------- ------------- -------------- --------------- ----------------- ------------- +AC_DJOHX8 RB2073 0.152 0.603 0.337 6.69 ✗ Emergency 1 2 1 3 5 Y 940 +AC_U95O0H RB9067 0.537 0.751 nan 5.51 ✓ Normal 3 2 3 3 9 0 nan +AC_X062CP RB4545 0.708 0.625 nan 5.11 ✓ Normal 4 3 1 5 10 Y nan +... 
diff --git a/solar_panel/solar_panel_column_meaning_base.json b/solar_panel/solar_panel_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..a6d4ee301e2a23149b5fbdfae4ff3cd9c48d82ae --- /dev/null +++ b/solar_panel/solar_panel_column_meaning_base.json @@ -0,0 +1,160 @@ +{ + "solar_panel|panel_models|ModKey": "TEXT. Unique identifier for the solar panel model. Example: Model-102.", + "solar_panel|panel_models|MakerTag": "TEXT. Manufacturer of the solar panel. NULL means the encoder does not report status or data are missing. Possible values: Canadian Solar, JA Solar, JinkoSolar, Longi, Trina.", + "solar_panel|panel_models|PnlKind": "TEXT. Type of solar panel technology used. Possible values: Bifacial, HJT, Mono-PERC, Poly-PERC, TOPCon.", + "solar_panel|panel_models|Rated_W": "REAL. Maximum power output in watts under standard test conditions. Possible values: 450.0, 500.0, 550.0, 600.0, 650.0.", + "solar_panel|panel_models|EffPct": "REAL. Percentage efficiency of energy conversion. Example: 20.86.", + "solar_panel|panel_models|DegYrRate": "REAL. Annual power output degradation rate. Example: 0.36.", + "solar_panel|panel_models|tempCoeff": "REAL. Temperature coefficient of power. NULL means the encoder does not report status or data are missing. Example: -0.389.", + "solar_panel|panel_models|NomOpTempC": "TEXT. Normal operating temperature range. Example: 45.7.", + "solar_panel|plants|SiteKey": "TEXT. Unique identifier for the solar power plant. Example: SP9227.", + "solar_panel|plants|SiteLabel": "TEXT. Name or description of the solar facility. Example: Solar Plant West Davidport.", + "solar_panel|plants|Cap_MW": "REAL. Total installed capacity in megawatts. Example: 257.58.", + "solar_panel|plants|GoLiveOn": "DATE. Date when the plant became operational. Example: 43159.", + "solar_panel|plants|ModHook": "TEXT. Foreign-key referencing panel_models.ModKey. Links to panel model details.", + "solar_panel|plants|TiltDeg": "REAL. Angle of panel tilt from horizontal. Example: 1.2.", + "solar_panel|plants|AzmDeg": "REAL. Panel orientation relative to true north. Example: 169.8.", + "solar_panel|plants|RecycleNote": "TEXT. Status of panel recycling program. NULL means the encoder does not report status or data are missing. Possible values: Available, In Development.", + "solar_panel|plants|DocState": "TEXT. Completeness status of plant documentation. Possible values: Complete, Missing, Partial.", + "solar_panel|plants|WarrState": "TEXT. Current status of equipment warranty. NULL means the encoder does not report status or data are missing. Possible values: Active, Claimed, Expired.", + "solar_panel|plants|WarrClaims": "BIGINT. Number of warranty claims made. NULL means the encoder does not report status or data are missing. Possible values: 0.0, 1.0, 2.0, 3.0, 4.0, 5.0.", + "solar_panel|plants|InsurState": "TEXT. Current insurance coverage status. Possible values: Covered, Expired, Partial.", + "solar_panel|plants|ComplyFlag": "TEXT. Compliance status with regulations. Possible values: Compliant, Non-Compliant, Under Review.", + "solar_panel|plants|EnvTag": "TEXT. Environmental impact classification. Possible values: High, Low, Medium.", + "solar_panel|plant_panel_model|SiteLink": "TEXT. FK → plants.SiteKey. Reference to associated solar plant.", + "solar_panel|plant_panel_model|ModLink": "TEXT. FK → panel_models.ModKey. Reference to panel model used.", + "solar_panel|plant_record|SnapKey": "TEXT. Snapshot identifier. Unique ID for performance snapshot. 
Example: PV937101.", + "solar_panel|plant_record|SiteTie": "TEXT. FK → plants.SiteKey. Links to parent solar plant.", + "solar_panel|plant_record|SnapTS": "TIMESTAMP. Timestamp when snapshot was taken. Example: 43315.51482.", + "solar_panel|electrical_performance|SnapLink": "TEXT. PK & FK → plant_record.SnapKey. Links to performance snapshot record.", + "solar_panel|environmental_conditions|SnapRef": "TEXT. PK & FK → plant_record.SnapKey. Links to environmental snapshot.", + "solar_panel|mechanical_condition|SnapMk": "TEXT. PK & FK → plant_record.SnapKey. Links to mechanical condition snapshot.", + "solar_panel|operational_metrics|SnapOps": "TEXT. PK & FK → plant_record.SnapKey. Links to operational metrics snapshot.", + "solar_panel|operational_metrics|MTBFh": "REAL. Mean time between failures in hours. NULL means the encoder does not report status or data are missing. Example: 3713.0.", + "solar_panel|operational_metrics|MTTRh": "REAL. Mean time to repair in hours. Example: 8.8.", + "solar_panel|operational_metrics|MaintCost": "REAL. Maintenance cost amount. Example: 178.82.", + "solar_panel|operational_metrics|CleanCost": "REAL. Cleaning cost amount. Example: 1034.89.", + "solar_panel|operational_metrics|ReplCost": "REAL. Component replacement cost. Example: 31529.13.", + "solar_panel|operational_metrics|RevLoss": "TEXT. Revenue loss amount. Example: 13375.59.", + "solar_panel|operational_metrics|OptPot": "TEXT. Optimization potential assessment. Possible values: High, Low, Medium.", + "solar_panel|inspection|InspectMode": "TEXT. Inspection method used. Possible values: EL Imaging, IR Thermal, IV Curve, Visual.", + "solar_panel|inspection|InspectRes": "TEXT. Inspection results summary. Possible values: Major Issues, Minor Issues, Pass.", + "solar_panel|inspection|InspectDt": "DATE. Date of inspection. Example: 45528.", + "solar_panel|inspection|MaintSched": "TEXT. Maintenance schedule status. Possible values: Delayed, On Schedule, Overdue.", + "solar_panel|inspection|DQscore": "REAL. Data quality score. Example: 97.7.", + "solar_panel|alert|SnapAlrt": "TEXT. PK & FK → plant_record.SnapKey. Links to alert snapshot.", + "solar_panel|alert|AlrtState": "TEXT. Current alert status. NULL means the encoder does not report status or data are missing. Possible values: Critical, Warning.", + "solar_panel|alert|AlrtCnt": "TEXT. Count of active alerts. Example: 6.0.", + "solar_panel|alert|MaintPrio": "TEXT. Maintenance priority level. NULL means the encoder does not report status or data are missing. Possible values: High, Low, Medium.", + "solar_panel|alert|ReplPrio": "TEXT. Replacement priority level. Possible values: High, Low, Medium.", + "solar_panel|electrical_performance|elec_perf_snapshot": { + "column_meaning": "JSONB column. Stores all IV-curve parameters, inverter & grid metrics, efficiency losses and energy-yield KPIs captured for a single timestamp, so analytics engines can fetch the entire electrical-health view from one JSONB column.", + "fields_meaning": { + "efficiency": { + "instant_eff_pct": "REAL. Current operational efficiency percentage. Example: 18.54.", + "eff_loss_pct": "REAL. Efficiency loss since installation. Example: 2.32.", + "cumulative_deg_pct": "REAL. Cumulative degradation percentage. Example: 14.69.", + "soil_loss_pct": "REAL. Power loss due to soiling. Example: 13.41.", + "spectral_mismatch": "TEXT. Specification mismatch factor. Example: 1.005." + }, + "power": { + "power_now_w": "REAL. Current power output in watts. 
NULL means the encoder does not report status or data are missing. Example: 554.51.", + "power_loss_w": "REAL. Power loss since installation. NULL means the encoder does not report status or data are missing. Example: 95.49." + }, + "iv_curve": { + "isc_initial_a": "REAL. Initial short-circuit current. Example: 9.09.", + "isc_now_a": "REAL. Current short-circuit current. Example: 8.51.", + "voc_initial_v": "REAL. Initial open-circuit voltage. NULL means the encoder does not report status or data are missing. Example: 49.74.", + "voc_now_v": "REAL. Current open-circuit voltage. Example: 43.32.", + "imp_initial_a": "REAL. Initial current at maximum power. Example: 9.17.", + "imp_now_a": "REAL. Current at maximum power. NULL means the encoder does not report status or data are missing. Example: 7.21.", + "vmp_initial_v": "REAL. Initial voltage at maximum power. Example: 38.48.", + "vmp_now_v": "REAL. Current voltage at maximum power. Example: 36.98.", + "fill_factor_initial": "REAL. Initial fill factor. Example: 0.773.", + "fill_factor_now": "REAL. Current fill factor. Example: 0.71.", + "series_res_ohm": "TEXT. Series resistance in ohms. Example: 0.174.", + "shunt_res_ohm": "REAL. Shunt resistance in ohms. Example: 437.3." + }, + "inverter": { + "inverter_eff_pct": "REAL. Inverter efficiency percentage. Example: 98.43.", + "power_factor": "REAL. Inverter power factor. NULL means the encoder does not report status or data are missing. Example: 0.979.", + "inverter_temp_c": "REAL. Inverter operating temperature. Example: 49.1." + }, + "grid": { + "grid_voltage_v": "REAL. Grid voltage measurement. NULL means the encoder does not report status or data are missing. Example: 226.9.", + "grid_frequency_hz": "REAL. Grid frequency measurement. Example: 49.73.", + "power_quality_idx": "REAL. Power quality index. Example: 0.467.", + "harmonic_distortion_pct": "REAL. Harmonic distortion percentage. Example: 4.01.", + "reactive_power_var": "REAL. Reactive power measurement. NULL means the encoder does not report status or data are missing. Example: 33.0." + }, + "energy_yield": { + "energy_yield_kwh": "REAL. Energy yield in watt-hours. Example: 15.64.", + "performance_ratio": "REAL. Performance ratio. NULL means the encoder does not report status or data are missing. Example: 0.846.", + "specific_yield_kwh_kw": "REAL. Specific yield in kWh/kWp. Example: 5.18.", + "capacity_factor_pct": "REAL. Capacity factor percentage. Example: 27.47.", + "availability_pct": "REAL. System availability percentage. Example: 97.73." + } + } + }, + "solar_panel|environmental_conditions|env_snapshot": { + "column_meaning": "JSONB column. Bundles ambient weather, plane-of-array irradiance and soiling/atmospheric conditions measured at the plant into one JSONB object for performance-normalisation models.", + "fields_meaning": { + "temperatures": { + "cell_temp_c": "REAL. Solar cell temperature. NULL means the encoder does not report status or data are missing. Example: 47.3.", + "ambient_temp_c": "REAL. Ambient air temperature. Example: 41.2." + }, + "irradiance": { + "ghi_w_m2": "REAL. Solar irradiance measurement. NULL means the encoder does not report status or data are missing. Example: 530.4.", + "dni_w_m2": "REAL. Direct irradiance measurement. Example: 169.3.", + "dhi_w_m2": "REAL. Diffuse irradiance measurement. Example: 44.1.", + "poa_irr_w_m2": "REAL. Plane-of-array irradiance. Example: 135.1." + }, + "atmospheric": { + "relative_humidity_pct": "REAL. Relative humidity percentage. 
Example: 76.1.", + "air_pressure_hpa": "REAL. Atmospheric pressure measurement. Example: 1099.2.", + "uv_index": "REAL. UV index measurement. Example: 1.1.", + "cloud_cover_pct": "REAL. Cloud cover percentage. Example: 12.4.", + "dust_density_kg_m3": "REAL. Dust density measurement. Example: 0.056." + }, + "wind_rain_snow": { + "wind_speed_m_s": "REAL. Wind speed measurement. Example: 11.6.", + "wind_dir_deg": "REAL. Wind direction in degrees. Example: 249.3.", + "rain_mm": "REAL. Rainfall measurement in millimeters. Example: 41.3.", + "snow_cover_pct": "REAL. Snow cover percentage. Example: 24.8." + } + } + }, + "solar_panel|mechanical_condition|mech_health_snapshot": { + "column_meaning": "JSONB column. Captures tracker status, glass / back-sheet health, electrical connections and cleaning history for a panel string or block at a given snapshot, allowing O&M teams to query a single JSONB field for mechanical diagnostics.", + "fields_meaning": { + "tracker": { + "tracker_state": "TEXT. Tracking system operational status. Possible values: Error, Maintenance, Normal.", + "tracker_angle_deg": "REAL. Tracking system deviation angle. Example: 20.5." + }, + "module_surface": { + "backsheet_condition": "TEXT. Backsheet material condition assessment. Possible values: Fair, Good, Poor.", + "glass_condition": "TEXT. Front glass condition assessment. Possible values: Clear, Damaged, Dusty.", + "encapsulant_yellowing": "TEXT. Encapsulant yellowing status. NULL means the encoder does not report status or data are missing. Possible values: Mild, Severe.", + "delamination_status": "TEXT. Delamination status. NULL means the encoder does not report status or data are missing. Possible values: Major, Minor.", + "busbar_corrosion": "TEXT. Busbar corrosion status. NULL means the encoder does not report status or data are missing. Possible values: Severe, Visible.", + "hotspot_count": "BIGINT. Number of detected hot spots. NULL means the encoder does not report status or data are missing. Example: 1.0.", + "microcrack_count": "BIGINT. Count of microcracks detected. Example: 7.0.", + "snail_trail_severity": "TEXT. Snail trail severity. NULL means the encoder does not report status or data are missing. Possible values: Heavy, Light.", + "pid_severity": "TEXT. Potential induced degradation severity. NULL means the encoder does not report status or data are missing. Possible values: High, Low.", + "lid_status": "TEXT. Light-induced degradation status. Possible values: Ongoing, Stabilized, Unknown." + }, + "electrical_integrity": { + "bypass_diode_status": "TEXT. Bypass diode functionality status. Possible values: Failed, Normal, Partial.", + "junction_box_condition": "TEXT. Junction box condition assessment. Possible values: Fair, Good, Poor.", + "cable_condition": "TEXT. Cable condition assessment. NULL means the encoder does not report status or data are missing. Possible values: Fair, Good, Poor.", + "connector_condition": "TEXT. Connector condition assessment. Possible values: Fair, Good, Poor.", + "grounding_status": "TEXT. Grounding system status. Possible values: Check Required, Failed, Normal." + }, + "mount_cleaning": { + "mount_structure_status": "TEXT. Mounting structure condition. Possible values: Check Required, Stable, Unstable.", + "cleaning_cycle_days": "BIGINT. Number of cleaning cycles performed. Example: 38.", + "last_clean_date": "DATE. Date of last cleaning. Example: 45671." 
+ } + } + } +} \ No newline at end of file diff --git a/solar_panel/solar_panel_kb.jsonl b/solar_panel/solar_panel_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..e7a80990ead1cf86816aff79d8f03f16f1f53172 --- /dev/null +++ b/solar_panel/solar_panel_kb.jsonl @@ -0,0 +1,50 @@ +{"id": 0, "knowledge": "Specific Yield", "description": "Measures the energy output of a solar plant relative to its rated power capacity over a period.", "definition": "A measure of energy productivity, calculated as: $Y_{S} = \\frac{E_{out}}{P_{rated}}$, where $E_{out}$ is the total energy yielded (in kWh) and $P_{rated}$ is the plant's rated power capacity (in kW).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Power Capacity Factor", "description": "Calculates the ratio of a plant's actual energy output over a period to its potential maximum output.", "definition": "The ratio of the plant's actual generated energy to its maximum possible energy output over the same period, expressed as a percentage: $CF = \\frac{E_{out}}{P_{rated} \\times T_{period}} \\times 100\\%$, where $T_{period}$ is the number of hours in the period.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Soiling Loss Index", "description": "Quantifies the percentage of power loss due to the accumulation of dirt, dust, and other particulates on panel surfaces.", "definition": "The percentage reduction in power output caused by soiling. A value of 10% indicates that the current power is 10% lower than it would be if the panels were perfectly clean.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Thermal Loss Factor", "description": "Calculates the efficiency loss of a solar panel due to its operating temperature exceeding the reference temperature.", "definition": "The reduction in efficiency caused by cell temperature, calculated as: $L_{T} = (T_{cell} - T_{ref}) \\times |\\gamma|$, where $T_{cell}$ is the current cell temperature, $T_{ref}$ is the reference temperature (typically 25°C), and $\\gamma$ is the panel's temperature coefficient of power.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "System Unavailability", "description": "Calculates the proportion of time a system is non-operational or in a state of failure.", "definition": "The probability that a system will not be operational when needed, calculated using mean time to repair (MTTR) and mean time between failures (MTBF): $U = \\frac{MTTR}{MTBF + MTTR}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Fill Factor Degradation", "description": "Measures the reduction in the IV curve's fill factor from its initial value, indicating a decline in solar cell quality.", "definition": "The absolute difference between the initial fill factor and the current fill factor: $FF_{deg} = FF_{initial} - FF_{now}$. 
A higher value indicates greater degradation.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Operational Expenditure Index", "description": "Provides a normalized measure of a plant's total operational costs relative to its power capacity.", "definition": "An index representing the cost per unit of capacity, calculated as: $OEI = \\frac{C_{maint} + C_{clean} + C_{replace}}{P_{rated}}$, where C represents costs and $P_{rated}$ is the plant's rated power.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Annual Degradation Rate (ADR)", "description": "Calculates the average yearly percentage loss in a solar plant's performance or efficiency since its commissioning.", "definition": "The annualized rate of performance loss, calculated as: $ADR = \\frac{\\text{Cumulative Degradation Pct}}{\\text{Age in Years}}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "System Availability", "description": "Calculates the proportion of time a system is operational and available to perform its required function.", "definition": "The probability that a system is operational, calculated as: $A = 1 - \\text{System Unavailability}$. A value of 99% implies the system is non-operational for 1% of the time.", "type": "calculation_knowledge", "children_knowledge": [4]} +{"id": 9, "knowledge": "Performance Ratio (PR)", "description": "Measures the overall efficiency of a solar plant by comparing its actual energy output to its theoretically possible output under given climatic conditions.", "definition": "A quality factor for a PV plant, calculated as the ratio of the final plant Specific Yield to the reference yield from irradiance: $PR = \\frac{\\text{Specific Yield}}{H_{POA} / G_{ref}}$, where $H_{POA}$ is the total in-plane irradiance (in kWh/m²) and $G_{ref}$ is the reference irradiance (1 kW/m²).", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 10, "knowledge": "Temperature-Corrected Performance", "description": "Estimates the performance of a solar asset if thermal losses were eliminated.", "definition": "The asset's power output adjusted for temperature effects, providing a clearer view of performance irrespective of thermal conditions. Calculated as: $P_{corrected} = \\frac{P_{actual}}{1 - \\text{Thermal Loss Factor}}$.", "type": "calculation_knowledge", "children_knowledge": [3]} +{"id": 11, "knowledge": "Lifetime Revenue Loss Projection", "description": "Estimates the total potential revenue loss over a plant's expected lifetime due to its ongoing annual degradation.", "definition": "A projection of future losses based on the Annual Degradation Rate (ADR). 
A simplified model is: $LRL = \\text{ADR} \\times \\text{Annual Energy Production} \\times \\text{Energy Price} \\times \\text{Remaining Lifetime}$.", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 12, "knowledge": "Inverter Efficiency Loss", "description": "Calculates the percentage of DC power that is lost as heat during the conversion to AC power within the inverter.", "definition": "The energy loss occurring within the inverter, calculated as: $L_{inv} = 100\\% - E_{inv\\%}$, where $E_{inv\\%}$ is the reported inverter efficiency percentage.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "System Power Loss Ratio", "description": "Measures the proportion of power that is generated but lost within the system before reaching the grid.", "definition": "The ratio of power lost to the total power generated, calculated as: $R_{loss} = \\frac{P_{loss}}{P_{output} + P_{loss}}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Voltage Degradation Factor", "description": "Quantifies the decline in a panel's maximum power point voltage (Vmp) over time.", "definition": "The fractional loss of voltage capability, calculated as: $V_{deg} = \\frac{V_{initial} - V_{now}}{V_{initial}}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Current Degradation Factor", "description": "Quantifies the decline in a panel's maximum power point current (Imp) over time.", "definition": "The fractional loss of current-producing capability, calculated as: $I_{deg} = \\frac{I_{initial} - I_{now}}{I_{initial}}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Effective Power Output", "description": "Calculates the net power delivered by the system after accounting for internal power losses.", "definition": "The actual usable power from a system, derived from its gross output and its System Power Loss Ratio. It is calculated as: $P_{eff} = P_{gross} \\times (1 - \\text{System Power Loss Ratio})$.", "type": "calculation_knowledge", "children_knowledge": [13]} +{"id": 17, "knowledge": "Degradation-Adjusted Capacity", "description": "Estimates the current effective power capacity of a plant, accounting for cumulative performance degradation since it began operation.", "definition": "The plant's nameplate capacity adjusted for long-term wear and aging. The calculation is: $P_{adj} = P_{rated} \\times (1 - (\\text{Annual Degradation Rate} \\times \\text{Age}))$.", "type": "calculation_knowledge", "children_knowledge": [7]} +{"id": 18, "knowledge": "Maintenance Cost to Revenue Impact Ratio", "description": "Compares the cost of maintenance activities to the revenue lost during downtime for those repairs.", "definition": "A ratio indicating the financial efficiency of maintenance, calculated as $R_{M/R} = \\frac{\\text{Total Maintenance Cost}}{\\text{Total Revenue Loss}}$. A value < 1 suggests that the cost of repair was less than the revenue saved by returning the system to operation.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 19, "knowledge": "Mean Repair Cost", "description": "Calculates the average cost associated with a single repair event.", "definition": "The total maintenance cost divided by the number of failure events (which can be inferred from the total operational time and MTBF). 
$C_{repair} = \\frac{\\text{Total Maintenance Cost}}{\\text{Total Time} / MTBF}$.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 20, "knowledge": "High Degradation Panel Model", "description": "Identifies a solar panel model that degrades at a faster rate than the industry average.", "definition": "A panel model is considered to have high degradation if its officially specified annual power degradation rate exceeds a certain threshold, for example, 0.7%.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Tracker Malfunction", "description": "Indicates that a solar tracking system is not functioning correctly and may be stuck, in maintenance, or reporting an error.", "definition": "A condition defined by the tracker's operational state being 'Error' or 'Maintenance', indicating it is not actively tracking the sun to optimize energy capture.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Data Quality Concern", "description": "Flags a dataset or record when its quality score falls below an acceptable threshold, suggesting the data may be unreliable.", "definition": "A state triggered when a data quality assessment score is below a predefined minimum standard, for instance, a score of less than 50 out of 100.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Structural Integrity Warning", "description": "Indicates that the physical mounting or support structure of the equipment may be compromised.", "definition": "A warning state is triggered if the reported status of a mounting structure is 'Unstable' or 'Check Required'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 24, "knowledge": "High Maintenance Priority", "description": "A flag indicating that an asset requires immediate or urgent maintenance attention.", "definition": "A status assigned to an asset when its maintenance priority level is designated as 'High', signifying a need for prompt corrective action.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 25, "knowledge": "High Replacement Priority", "description": "A flag indicating that a component or asset is recommended for immediate or urgent replacement.", "definition": "A status assigned to an asset when its replacement priority level is designated as 'High', signifying that the component has reached the end of its life or is critically failing.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 26, "knowledge": "Severe Soiling Condition", "description": "Indicates that panel surfaces are heavily soiled, causing a significant loss of energy production.", "definition": "A condition met when the Soiling Loss Index for a plant is greater than a significant threshold, for example, 15%.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 27, "knowledge": "High Environmental Risk Site", "description": "Identifies a plant located in an area with significant environmental challenges that could impact performance or safety.", "definition": "A classification assigned to a site based on its environmental risk tag being 'High'. 
This may be due to factors like extreme weather, high pollution, or other external risks.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Warranty Claim Risk", "description": "Indicates a high likelihood of future warranty claims based on past activity and current status.", "definition": "A risk level defined for plants that have a 'Claimed' warranty status and have logged a high number of claims (e.g., more than 2), suggesting recurring issues.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 29, "knowledge": "Critical Alert", "description": "An urgent notification indicating a severe issue that requires immediate maintenance and potential component replacement.", "definition": "A state is defined as a Critical Alert when an asset has both a High Maintenance Priority and a High Replacement Priority.", "type": "domain_knowledge", "children_knowledge": [24, 25]} +{"id": 30, "knowledge": "Electrical Integrity Failure", "description": "Indicates a failure in the fundamental electrical safety or operational components of the system.", "definition": "A failure state is declared if the electrical grounding status is 'Failed' or if a critical component like a bypass diode reports a non-normal status.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 31, "knowledge": "Major Module Degradation", "description": "A composite indicator of severe physical aging and damage to solar modules.", "definition": "A condition is met when multiple signs of severe physical wear are present, such as 'Severe' busbar corrosion, 'Major' delamination, and a high count of microcracks (e.g., > 5).", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 32, "knowledge": "Underperforming Asset", "description": "An asset whose energy conversion efficiency is significantly below its expected benchmark.", "definition": "An asset is classified as underperforming if its Performance Ratio (PR) falls below a minimum acceptable threshold, such as 0.75.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 33, "knowledge": "Chronic Downtime Asset", "description": "An asset that suffers from frequent failures and low operational uptime.", "definition": "An asset is classified as having chronic downtime if its System Availability is below a critical threshold, for example, 95%.", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 34, "knowledge": "High-Cost Asset", "description": "An asset with operational expenditures that are excessively high relative to its capacity.", "definition": "An asset is flagged as high-cost when its Operational Expenditure Index exceeds a predefined value, indicating it is disproportionately expensive to run.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 35, "knowledge": "Critical Mechanical Health", "description": "Indicates that an asset is suffering from severe and concurrent mechanical failures.", "definition": "A state of critical mechanical health is reached when there is an active Structural Integrity Warning and a simultaneous Tracker Malfunction.", "type": "domain_knowledge", "children_knowledge": [23, 21]} +{"id": 36, "knowledge": "Accelerated Aging Asset", "description": "An asset showing signs of rapid performance decline and physical deterioration.", "definition": "An asset is considered to be aging acceleratedly if it exhibits a high Annual Degradation Rate (ADR) and also shows signs of Major Module Degradation.", "type": "domain_knowledge", "children_knowledge": [7, 31]} 
+{"id": 37, "knowledge": "Overdue Critical Maintenance", "description": "A high-priority maintenance task that has not been completed by its scheduled date.", "definition": "A condition triggered when an inspection result is 'Major Issues' and the corresponding maintenance schedule status is 'Overdue'.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 38, "knowledge": "Neglected Asset", "description": "An asset that is in a critical state but is not receiving the required attention, indicated by data quality issues and overdue maintenance.", "definition": "An asset is defined as neglected if it has an active Critical Alert, is subject to Overdue Critical Maintenance, and also has a Data Quality Concern.", "type": "domain_knowledge", "children_knowledge": [29, 37, 22]} +{"id": 39, "knowledge": "Decommissioning Candidate", "description": "Identifies an asset that may be a candidate for retirement due to extremely poor performance, high costs, and neglect.", "definition": "A plant is flagged as a decommissioning candidate if it is classified as both a Neglected Asset and a High-Cost Asset.", "type": "domain_knowledge", "children_knowledge": [38, 34]} +{"id": 40, "knowledge": "Power Factor", "description": "Illustrates the value of the power factor.", "definition": "A dimensionless ratio ranging from -1 to 1 that represents the ratio of real power (doing work) to apparent power (flowing in the circuit). A value of 1.0 indicates perfect efficiency, while a value of 0.95 is typical for high-efficiency systems. Lower values indicate more reactive power, which does not perform work.", "type": "value_illustration", "children_knowledge": -1} +{"id": 41, "knowledge": "Temperature Coefficient of Power", "description": "Illustrates the value of a panel's temperature coefficient.", "definition": "Indicates how much a panel's power output decreases for every degree Celsius increase in temperature above a reference point (usually 25°C). A typical value is -0.39%/°C, meaning the power output drops by 0.39% for each 1°C rise in temperature.", "type": "value_illustration", "children_knowledge": -1} +{"id": 42, "knowledge": "IV Curve Fill Factor", "description": "Illustrates the value of the IV curve fill factor.", "definition": "A measure of the 'squareness' of the current-voltage (IV) curve, indicating the quality of the solar cell. It is the ratio of the maximum actual power to the theoretical maximum power. A higher value (e.g., >0.8) indicates a higher quality cell with low internal power losses.", "type": "value_illustration", "children_knowledge": -1} +{"id": 43, "knowledge": "UV Index", "description": "Illustrates the value of the UV Index.", "definition": "An international standard measurement of the strength of the sunburn-producing ultraviolet (UV) radiation at a particular place and time. The scale is open-ended, where 1-2 is Low, 3-5 is Moderate, 6-7 is High, 8-10 is Very High, and 11+ is Extreme.", "type": "value_illustration", "children_knowledge": -1} +{"id": 44, "knowledge": "Tracker System State", "description": "Illustrates the different operational states for a solar tracker.", "definition": "Describes the current status of the solar tracking system. 
Common states include: 'Normal' (actively tracking the sun), 'Maintenance' (undergoing service), 'Error' (malfunctioning), or 'Stow' (locked in a safe, fixed position due to high winds).", "type": "value_illustration", "children_knowledge": -1} +{"id": 45, "knowledge": "Data Quality Score (DQS)", "description": "Illustrates the meaning of a Data Quality Score.", "definition": "A metric, typically on a scale of 0 to 100, that assesses the quality, completeness, and consistency of a set of data. A score of 95-100 indicates high-quality data, while a score below 50 might suggest the data is unreliable for analysis.", "type": "value_illustration", "children_knowledge": -1} +{"id": 46, "knowledge": "Busbar Corrosion Severity", "description": "Illustrates the different levels of busbar corrosion on a solar module.", "definition": "Describes the extent of corrosion on the metallic strips (busbars) that conduct electricity within a solar panel. Levels can range from 'None' to 'Light' (minor discoloration), 'Visible' (clear corrosion affecting small areas), and 'Severe' (widespread corrosion that can disrupt conductivity and reduce power output).", "type": "value_illustration", "children_knowledge": -1} +{"id": 47, "knowledge": "Mean Time Between Failures (MTBF)", "description": "Illustrates the meaning of the MTBF value.", "definition": "A reliability metric that represents the average time a system operates before a failure occurs. It is typically measured in hours. A high MTBF (e.g., >8000 hours) indicates a highly reliable system, while a low MTBF (e.g., <1000 hours) indicates a system that fails frequently.", "type": "value_illustration", "children_knowledge": -1} +{"id": 48, "knowledge": "Nominal Operating Cell Temperature (NOCT)", "description": "Illustrates the meaning of Nominal Operating Cell Temperature.", "definition": "The temperature reached by open-circuited cells in a module under specific reference conditions (800 W/m² irradiance, 20°C ambient temperature, 1 m/s wind speed). A typical NOCT is around 45°C. It provides a more realistic measure of a panel's operating temperature in the field than standard test conditions.", "type": "value_illustration", "children_knowledge": -1} +{"id": 49, "knowledge": "Total Harmonic Distortion (THD)", "description": "Illustrates the meaning of the Total Harmonic Distortion percentage.", "definition": "A measurement of the harmonic distortion present in a signal, defined as the ratio of the sum of the powers of all harmonic components to the power of the fundamental frequency. 
In power systems, a low THD (e.g., <5%) is desirable for good power quality, while higher values can indicate potential problems for connected equipment.", "type": "value_illustration", "children_knowledge": -1} \ No newline at end of file diff --git a/solar_panel/solar_panel_schema.txt b/solar_panel/solar_panel_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..1682ddbb39d1d788bf7101c001426725c45e9ac6 --- /dev/null +++ b/solar_panel/solar_panel_schema.txt @@ -0,0 +1,188 @@ +CREATE TABLE "electrical_performance" ( +snaplink text NOT NULL, +elec_perf_snapshot jsonb NULL, + PRIMARY KEY (snaplink), + FOREIGN KEY (snaplink) REFERENCES plant_record(snapkey) +); + +First 3 rows: +snaplink elec_perf_snapshot +---------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +PV937101 {'grid': {'grid_voltage_v': None, 'grid_frequency_hz': 49.73, 'power_quality_idx': 0.467, 'reactive_power_var': 33, 'harmonic_distortion_pct': 4.01}, 'power': {'power_now_w': None, 'power_loss_w': None}, 'inverter': {'power_factor': 0.979, 'inverter_temp_c': 49.1, 'inverter_eff_pct': 98.43}, 'iv_curve': {'imp_now_a': 7.21, 'isc_now_a': 8.51, 'vmp_now_v': 36.98, 'voc_now_v': 43.32, 'imp_initial_a': 9.17, 'isc_initial_a': 9.09, 'shunt_res_ohm': 437.3, 'vmp_initial_v': 38.48, 'voc_initial_v': 49.74, 'series_res_ohm': '0.17 Ω', 'fill_factor_now': 0.71, 'fill_factor_initial': 0.773}, 'efficiency': {'eff_loss_pct': 2.32, 'soil_loss_pct': 13.41, 'instant_eff_pct': 18.54, 'spectral_mismatch': '100.50%', 'cumulative_deg_pct': 14.69}, 'energy_yield': {'availability_pct': 97.73, 'energy_yield_kwh': 15.64, 'performance_ratio': None, 'capacity_factor_pct': 27.47, 'specific_yield_kwh_kw': 5.18}} +PV945724 {'grid': {'grid_voltage_v': None, 'grid_frequency_hz': 49.6, 'power_quality_idx': 0.245, 'reactive_power_var': 65.1, 'harmonic_distortion_pct': 4.61}, 'power': {'power_now_w': 639.47, 'power_loss_w': 10.53}, 'inverter': {'power_factor': 0.985, 'inverter_temp_c': 51.2, 'inverter_eff_pct': 98.37}, 'iv_curve': {'imp_now_a': 7.3, 'isc_now_a': 8.41, 'vmp_now_v': 36.73, 'voc_now_v': 43.1, 'imp_initial_a': 9.93, 'isc_initial_a': 10.85, 'shunt_res_ohm': 880.7, 'vmp_initial_v': 38.25, 'voc_initial_v': None, 'series_res_ohm': '0.35 Ω', 'fill_factor_now': 0.788, 'fill_factor_initial': 0.775}, 'efficiency': {'eff_loss_pct': 1.7, 'soil_loss_pct': 11.27, 'instant_eff_pct': 19.95, 'spectral_mismatch': '99.60%', 'cumulative_deg_pct': 1.62}, 'energy_yield': {'availability_pct': 96.91, 'energy_yield_kwh': 340.32, 'performance_ratio': None, 'capacity_factor_pct': 25.37, 'specific_yield_kwh_kw': 6}} +PV596868 {'grid': {'grid_voltage_v': 220.8, 'grid_frequency_hz': 50.19, 'power_quality_idx': 0.26, 
'reactive_power_var': None, 'harmonic_distortion_pct': 3.17}, 'power': {'power_now_w': None, 'power_loss_w': None}, 'inverter': {'power_factor': None, 'inverter_temp_c': 49, 'inverter_eff_pct': 97.74}, 'iv_curve': {'imp_now_a': None, 'isc_now_a': 8.39, 'vmp_now_v': 37.07, 'voc_now_v': 47.94, 'imp_initial_a': 9.49, 'isc_initial_a': 9.86, 'shunt_res_ohm': 998, 'vmp_initial_v': 36.44, 'voc_initial_v': None, 'series_res_ohm': '0.49 Ω', 'fill_factor_now': 0.764, 'fill_factor_initial': 0.812}, 'efficiency': {'eff_loss_pct': 3.5, 'soil_loss_pct': 10.87, 'instant_eff_pct': 17.82, 'spectral_mismatch': '98.70%', 'cumulative_deg_pct': 2.37}, 'energy_yield': {'availability_pct': 97.88, 'energy_yield_kwh': 1357.4, 'performance_ratio': None, 'capacity_factor_pct': 20.83, 'specific_yield_kwh_kw': 5.46}} +... + + +CREATE TABLE "environmental_conditions" ( +snapref text NOT NULL, +env_snapshot jsonb NULL, + PRIMARY KEY (snapref), + FOREIGN KEY (snapref) REFERENCES plant_record(snapkey) +); + +First 3 rows: +snapref env_snapshot +--------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +PV937101 {'irradiance': {'dhi_w_m2': 44.1, 'dni_w_m2': 169.3, 'ghi_w_m2': None, 'poa_irr_w_m2': 135.1}, 'atmospheric': {'uv_index': 1.1, 'cloud_cover_pct': 12.4, 'air_pressure_hpa': 1099.2, 'dust_density_kg_m3': 0.056, 'relative_humidity_pct': 76.1}, 'temperatures': {'cell_temp_c': 47.3, 'ambient_temp_c': 41.2}, 'wind_rain_snow': {'rain_mm': 41.3, 'wind_dir_deg': 249.3, 'snow_cover_pct': 24.8, 'wind_speed_m_s': 11.6}} +PV945724 {'irradiance': {'dhi_w_m2': 81.9, 'dni_w_m2': 686.2, 'ghi_w_m2': 556.4, 'poa_irr_w_m2': 500.1}, 'atmospheric': {'uv_index': 3.6, 'cloud_cover_pct': 93.8, 'air_pressure_hpa': 958.3, 'dust_density_kg_m3': 3.519, 'relative_humidity_pct': 50}, 'temperatures': {'cell_temp_c': 36.1, 'ambient_temp_c': 22.7}, 'wind_rain_snow': {'rain_mm': 36.2, 'wind_dir_deg': 29.8, 'snow_cover_pct': 59.7, 'wind_speed_m_s': 22.6}} +PV617932 {'irradiance': {'dhi_w_m2': 21.4, 'dni_w_m2': 346.6, 'ghi_w_m2': None, 'poa_irr_w_m2': 224.8}, 'atmospheric': {'uv_index': 11, 'cloud_cover_pct': 43.7, 'air_pressure_hpa': 965.8, 'dust_density_kg_m3': 3.875, 'relative_humidity_pct': 70.8}, 'temperatures': {'cell_temp_c': None, 'ambient_temp_c': -1.6}, 'wind_rain_snow': {'rain_mm': 28.7, 'wind_dir_deg': 231.5, 'snow_cover_pct': 37.5, 'wind_speed_m_s': 11.3}} +... + + +CREATE TABLE "panel_models" ( +modkey text NOT NULL, +makertag text NULL, +pnlkind text NULL, +rated_w real NULL, +effpct real NULL, +degyrrate real NULL, +tempcoeff real NULL, +nomoptempc text NULL, + PRIMARY KEY (modkey) +); + +First 3 rows: +modkey makertag pnlkind rated_w effpct degyrrate tempcoeff nomoptempc +--------- ---------- --------- --------- -------- ----------- ----------- ------------ +Model-102 Longi Mono-PERC 650 20.86 0.36 -0.389 45.7 °C +Model-892 HJT 650 21.65 0.88 -0.446 45.5 °C +Model-677 Longi Poly-PERC 450 21.93 0.94 nan 46.7 °C +... 
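The `fill_factor_initial` / `fill_factor_now` pair inside `elec_perf_snapshot` maps directly onto the Fill Factor Degradation entry (id 5) in `solar_panel_kb.jsonl` above, $FF_{deg} = FF_{initial} - FF_{now}$. A minimal PostgreSQL sketch, assuming the `electrical_performance` table above is loaded (illustrative only, not a gold-standard solution), computes it from the nested JSONB:

```sql
-- Minimal sketch: Fill Factor Degradation (kb id 5) per snapshot,
-- read from the iv_curve object of elec_perf_snapshot.
SELECT snaplink,
       (elec_perf_snapshot #>> '{iv_curve,fill_factor_initial}')::real
     - (elec_perf_snapshot #>> '{iv_curve,fill_factor_now}')::real AS ff_degradation
FROM electrical_performance
WHERE elec_perf_snapshot #>> '{iv_curve,fill_factor_initial}' IS NOT NULL
  AND elec_perf_snapshot #>> '{iv_curve,fill_factor_now}'     IS NOT NULL
ORDER BY ff_degradation DESC;
```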
+ + +CREATE TABLE "plants" ( +sitekey text NOT NULL, +sitelabel text NULL, +cap_mw real NULL, +goliveon date NULL, +modhook text NULL, +tiltdeg real NULL, +azmdeg real NULL, +recyclenote text NULL, +docstate text NULL, +warrstate text NULL, +warrclaims bigint NULL, +insurstate text NULL, +complyflag text NULL, +envtag text NULL, + PRIMARY KEY (sitekey), + FOREIGN KEY (modhook) REFERENCES panel_models(modkey) +); + +First 3 rows: +sitekey sitelabel cap_mw goliveon modhook tiltdeg azmdeg recyclenote docstate warrstate warrclaims insurstate complyflag envtag +--------- -------------------------- -------- ---------- --------- --------- -------- -------------- ---------- ----------- ------------ ------------ ------------- -------- +SP9227 Solar Plant West Davidport 257.58 2018-02-28 Model-102 1.2 169.8 Missing Claimed 2 Covered Compliant High +SP6740 Solar Plant Dillonmouth 437.71 2023-08-06 Model-892 13.1 158.8 In Development Missing Active 4 Covered Non-Compliant Low +SP7738 Solar Plant North Xavier 397.96 2022-06-18 Model-677 27.1 223 In Development Missing Claimed 2 Expired Non-Compliant High +... + + +CREATE TABLE "mechanical_condition" ( +snapmk text NOT NULL, +mech_health_snapshot jsonb NULL, + PRIMARY KEY (snapmk), + FOREIGN KEY (snapmk) REFERENCES plant_record(snapkey) +); + +First 3 rows: +snapmk mech_health_snapshot +-------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +PV937101 {'tracker': {'tracker_state': 'Maintenance', 'tracker_angle_deg': 20.5}, 'module_surface': {'lid_status': 'Unknown', 'pid_severity': None, 'hotspot_count': 1, 'glass_condition': 'Clear', 'busbar_corrosion': 'Severe', 'microcrack_count': 7, 'backsheet_condition': 'Poor', 'delamination_status': 'Minor', 'snail_trail_severity': 'Light', 'encapsulant_yellowing': 'Severe'}, 'mount_cleaning': {'last_clean_date': '2025-01-14', 'cleaning_cycle_days': 38, 'mount_structure_status': 'Stable'}, 'electrical_integrity': {'cable_condition': 'Good', 'grounding_status': 'Normal', 'bypass_diode_status': 'Partial', 'connector_condition': 'Fair', 'junction_box_condition': 'Good'}} +PV945724 {'tracker': {'tracker_state': 'Error', 'tracker_angle_deg': -37.5}, 'module_surface': {'lid_status': 'Stabilized', 'pid_severity': 'Low', 'hotspot_count': 3, 'glass_condition': 'Damaged', 'busbar_corrosion': 'Severe', 'microcrack_count': 3, 'backsheet_condition': 'Fair', 'delamination_status': 'Major', 'snail_trail_severity': 'Light', 'encapsulant_yellowing': None}, 'mount_cleaning': {'last_clean_date': '2025-01-23', 'cleaning_cycle_days': 52, 'mount_structure_status': 'Unstable'}, 'electrical_integrity': {'cable_condition': 'Fair', 'grounding_status': 'Failed', 'bypass_diode_status': 'Normal', 'connector_condition': 'Poor', 'junction_box_condition': 'Fair'}} +PV227567 {'tracker': {'tracker_state': 'Normal', 'tracker_angle_deg': 21.3}, 'module_surface': {'lid_status': 'Stabilized', 'pid_severity': 'Low', 
'hotspot_count': None, 'glass_condition': 'Damaged', 'busbar_corrosion': 'Visible', 'microcrack_count': 5, 'backsheet_condition': 'Good', 'delamination_status': 'Major', 'snail_trail_severity': 'Light', 'encapsulant_yellowing': None}, 'mount_cleaning': {'last_clean_date': '2025-02-05', 'cleaning_cycle_days': 17, 'mount_structure_status': 'Check Required'}, 'electrical_integrity': {'cable_condition': None, 'grounding_status': 'Check Required', 'bypass_diode_status': 'Partial', 'connector_condition': 'Fair', 'junction_box_condition': 'Good'}} +... + + +CREATE TABLE "plant_panel_model" ( +sitelink text NOT NULL, +modlink text NOT NULL, + PRIMARY KEY (sitelink, modlink), + FOREIGN KEY (sitelink) REFERENCES plants(sitekey), + FOREIGN KEY (modlink) REFERENCES panel_models(modkey) +); + +First 3 rows: +sitelink modlink +---------- --------- +SP9227 Model-102 +SP6740 Model-892 +SP7738 Model-677 +... + + +CREATE TABLE "plant_record" ( +snapkey text NOT NULL, +sitetie text NULL, +snapts timestamp without time zone NULL, + PRIMARY KEY (snapkey), + FOREIGN KEY (sitetie) REFERENCES plants(sitekey) +); + +First 3 rows: +snapkey sitetie snapts +--------- --------- ------------------- +PV937101 SP9227 2018-08-03 12:21:20 +PV945724 SP6740 2023-03-05 14:10:48 +PV227567 SP7738 2023-01-30 02:55:15 +... + + +CREATE TABLE "operational_metrics" ( +snapops text NOT NULL, +mtbfh real NULL, +mttrh real NULL, +maintcost real NULL, +cleancost real NULL, +replcost real NULL, +revloss text NULL, +optpot text NULL, + PRIMARY KEY (snapops), + FOREIGN KEY (snapops) REFERENCES plant_record(snapkey) +); + +First 3 rows: +snapops mtbfh mttrh maintcost cleancost replcost revloss optpot +--------- ------- ------- ----------- ----------- ---------- ---------- -------- +PV937101 nan 8.8 178.82 1034.89 31529.1 $13,375.59 Medium +PV945724 3172.1 31.5 9549.83 2957.08 15984.1 $17,065.19 High +PV227567 9686.8 21.7 9298.61 1934.67 3604.59 $14,035.91 Low +... + + +CREATE TABLE "inspection" ( +inspectmode text NOT NULL, +inspectres text NULL, +inspectdt date NULL, +maintsched text NULL, +dqscore real NULL, + PRIMARY KEY (inspectmode) +); + +First 3 rows: +inspectmode inspectres inspectdt maintsched dqscore +------------- ------------ ----------- ------------ --------- +Visual Minor Issues 2024-08-24 Delayed 97.7 +IR Thermal Major Issues 2024-12-23 Overdue 19.9 +IV Curve Pass 2025-02-13 Delayed 71.1 +... + + +CREATE TABLE "alert" ( +snapalrt text NOT NULL, +alrtstate text NULL, +alrtcnt text NULL, +maintprio text NULL, +replprio text NULL, + PRIMARY KEY (snapalrt), + FOREIGN KEY (snapalrt) REFERENCES plant_record(snapkey) +); + +First 3 rows: +snapalrt alrtstate alrtcnt maintprio replprio +---------- ----------- --------- ----------- ---------- +PV937101 6 events High High +PV945724 Warning 2 events High High +PV227567 9 events High +... diff --git a/sports_events/sports_events_column_meaning_base.json b/sports_events/sports_events_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..8ad083b791b98d6d0e8899c29dfa5d257fea7458 --- /dev/null +++ b/sports_events/sports_events_column_meaning_base.json @@ -0,0 +1,142 @@ +{ + "sports_events|circuits|CCTKEY": "INTEGER. Unique identifier for the circuit. PK. Example: 1.", + "sports_events|constructors|CSTR_Key": "INTEGER. Unique identifier for the constructor. PK. Example: 1.", + "sports_events|constructors|refCod": "TEXT. Constructor reference code. Example: mclaren.", + "sports_events|drivers|DRV_MAIN": "INTEGER. Unique identifier for the driver. PK. 
Example: 1.", + "sports_events|races|RAK_ID": "INTEGER. Unique identifier for the race. PK. Example: 1.", + "sports_events|races|Yr": "INTEGER. Year of the race. Example: 2009.", + "sports_events|races|rNUM": "INTEGER. Race number. Example: 1.", + "sports_events|races|trkBind": "INTEGER. Foreign key to the circuits table (CCTKEY). FK to circuits. Example: 1.", + "sports_events|constructor_results|CResRef": "INTEGER. Unique result reference. PK. Example: 15178.", + "sports_events|constructor_results|matchRef": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 943.", + "sports_events|constructor_results|unitNode": "INTEGER. Foreign key to the constructors table (CSTR_Key). FK to constructors. Example: 9.", + "sports_events|constructor_results|ST_mark": "TEXT. Status mark for the result. Possible values: D, \\N.", + "sports_events|constructor_standings|CSTNDS": "INTEGER. Unique standings reference. PK. Example: 25335.", + "sports_events|constructor_standings|RRef": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 898.", + "sports_events|constructor_standings|contUnit": "INTEGER. Foreign key to the constructors table (CSTR_Key). FK to constructors. Example: 9.", + "sports_events|driver_standings|DRV_STND": "INTEGER. Unique standings reference. PK. Example: 50065.", + "sports_events|driver_standings|rlink": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 752.", + "sports_events|driver_standings|Drive_Link": "INTEGER. Foreign key to the drivers table (DRV_MAIN). FK to drivers. Example: 531.", + "sports_events|lap_times|rc_index": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 1093.", + "sports_events|lap_times|wheel_unit": "INTEGER. Foreign key to the drivers table (DRV_MAIN). FK to drivers. Example: 4.", + "sports_events|lap_times|lapVal": "INTEGER. Lap value in the race. Example: 55.", + "sports_events|lap_times|pp": "INTEGER. Position for the lap. Example: 7.", + "sports_events|lap_times|msec_val": "INTEGER. Lap time in milliseconds. Example: 103848.", + "sports_events|pit_stops|matchIDX": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 911.", + "sports_events|pit_stops|wUnit": "INTEGER. Foreign key to the drivers table (DRV_MAIN). FK to drivers. Example: 3.", + "sports_events|pit_stops|pause_no": "INTEGER. Unique identifier for the pit stop. Possible values: 1, 2, 3, 4, 5, 6.", + "sports_events|pit_stops|moment": "INTEGER. Moment during the race when the pit stop occurs. Example: 19.", + "sports_events|pit_stops|durTXT": "TEXT. Duration of the pit stop. Example: 22.936.", + "sports_events|pit_stops|ms_count": "INTEGER. Millisecond count for the pit stop. Example: 22936.", + "sports_events|qualifying|QualKey": "INTEGER. Unique qualifying key. PK. Example: 6815.", + "sports_events|qualifying|rbind": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 954.", + "sports_events|qualifying|pilotRec": "INTEGER. Foreign key to the drivers table (DRV_MAIN). FK to drivers. Example: 1.", + "sports_events|qualifying|corpTag": "INTEGER. Foreign key to the constructors table (CSTR_Key). FK to constructors. Example: 131.", + "sports_events|qualifying|regNo": "INTEGER. Registration number of the driver. Example: 44.", + "sports_events|qualifying|PX_Pos": "INTEGER. Position in qualifying. Example: 1.", + "sports_events|qualifying|Q1_R": "TEXT. Q1 qualifying result. **NULL means no Q1 result.**. Example: 1:14.121.", + "sports_events|qualifying|Q2_R": "TEXT. 
Q2 qualifying result. **NULL means no Q2 result.**. Example: 1:13.076.", + "sports_events|qualifying|Q3_R": "TEXT. Q3 qualifying result. **NULL means no Q3 result.**. Example: 1:12.812.", + "sports_events|sprint_results|sResCode": "INTEGER. Unique sprint result code. PK. Example: 1.", + "sports_events|sprint_results|matchRef": "INTEGER. Foreign key to the races table (RAK_ID). FK to races. Example: 1061.", + "sports_events|sprint_results|unitDrive": "INTEGER. Foreign key to the drivers table (DRV_MAIN). FK to drivers. Example: 830.", + "sports_events|sprint_results|makeRef": "INTEGER. Foreign key to the constructors table (CSTR_Key). FK to constructors. Example: 9.", + "sports_events|sprint_results|Rno": "INTEGER. Sprint result number. **NULL means no sprint result number available.**. Example: 33.", + "sports_events|constructors|NameLabel": "VARCHAR. Full team name. **NULL means the constructor name has not been resolved or released.**. Example: McLaren.", + "sports_events|constructors|naty": "VARCHAR. Nationality of the constructor. **NULL means the team's nationality is not specified in the dataset.**. Example: British.", + "sports_events|constructors|linkPage": "VARCHAR. URL to the constructor's page. **NULL means the reference link is missing.**. Example: http://en.wikipedia.org/wiki/BMW_Sauber.", + "sports_events|lap_times|tmDesc": "VARCHAR. Recorded lap-time string. **NULL means the lap time was not captured or is invalid.**. Example: 1:43.848.", + "sports_events|pit_stops|atTime": "VARCHAR. Timestamp of the pit stop. **NULL means the exact pit-stop time wasn't logged.**. Example: 14:41:14.", + "sports_events|constructor_results|scoreVal": "REAL. Points scored by constructor in the race. **NULL means the constructor was unclassified or data are missing.**. Example: 22.0.", + "sports_events|constructor_standings|scr_tot": "REAL. Constructor's cumulative season points. **NULL means the constructor has not yet scored or the tally is not updated.**. Example: 553.0.", + "sports_events|driver_standings|acc_pt": "REAL. Driver's cumulative season points. **NULL means the driver has no points or totals not updated.**. Example: 0.0.", + "sports_events|constructor_standings|posNo": "INTEGER. Constructor's rank in standings. **NULL means no position computed.**. Example: 1.", + "sports_events|driver_standings|PX": "INTEGER. Driver's rank in standings. **NULL means ranking not yet calculated.**. Example: 56.0.", + "sports_events|constructor_standings|posLab": "VARCHAR. Position text label. **NULL means no label available.**. Example: 1.", + "sports_events|driver_standings|PX_Desc": "VARCHAR. Descriptive rank label. **NULL means label not assigned.**. Example: 56.0.", + "sports_events|constructor_standings|trophy_W": "INTEGER. Total wins by constructor. **NULL means the constructor has no wins or the count is pending update.**. Example: 12.", + "sports_events|driver_standings|TopMark": "INTEGER. Total wins by driver. **NULL means the driver has not won or data have not been entered.**. Example: 0.0.", + "sports_events|circuits|location_metadata": { + "column_meaning": "JSONB column. Encapsulates all geographical and identification data related to the circuit’s location.", + "fields_meaning": { + "name": "TEXT. Name of the circuit. Example: Albert Park Grand Prix Circuit.", + "reference_code": "TEXT. Circuit reference code. Example: albert_park.", + "location": { + "city": "VARCHAR. City or locality of the circuit. **NULL means the locality was not supplied or is unknown in the source feed.**. 
Example: Melbourne.", + "country": "TEXT. Name of the country where the circuit is located. Example: Australia." + }, + "coordinates": { + "latitude": "REAL. Latitude coordinate of the circuit. **NULL means the latitude is not recorded or the circuit's exact position is uncertain.**. Example: -37.8497.", + "longitude": "REAL. Longitude coordinate of the circuit. **NULL means longitude is missing or unavailable.**. Example: 144.968.", + "elevation_m": "INTEGER. Circuit elevation in metres. **NULL means elevation data were not provided for this circuit.**. Example: 10.0." + }, + "external_link": "VARCHAR. URL with more information about the circuit. **NULL means no official or stable link is on record.**. Example: http://en.wikipedia.org/wiki/Melbourne_Grand_Prix_Circuit." + } + }, + "sports_events|drivers|driver_identity": { + "column_meaning": "JSONB column. Consolidates driver’s identity, nationality, and references.", + "fields_meaning": { + "reference": "TEXT. Driver reference code. Example: hamilton.", + "racing_number": "VARCHAR. Permanent racing number. **NULL means the driver has no permanent number or it is not known.**. Example: 14.0.", + "code": "VARCHAR. Three-letter identifier code. **NULL means an FIA code has not been assigned.**. Example: HAM.", + "name": { + "first_name": "VARCHAR. Driver's given name. **NULL means the given name is not stored.**. Example: Lewis.", + "surname": "VARCHAR. Driver's surname. **NULL means the surname is missing.**. Example: Hamilton." + }, + "birth_date": "VARCHAR. Date of birth. **NULL means the birth date is unknown or undisclosed.**. Example: 1985-01-07.", + "nationality": "VARCHAR. Driver's nationality. **NULL means nationality is not recorded.**. Example: British.", + "info_link": "VARCHAR. Link to the driver's info page. **NULL means no profile link is available.**. Example: http://en.wikipedia.org/wiki/Nick_Heidfeld." + } + }, + "sports_events|races|event_schedule": { + "column_meaning": "JSONB column. Groups all date and time-related fields for a race weekend schedule including practice, qualifying, and sprint sessions.", + "fields_meaning": { + "event_name": "VARCHAR. Official event name. **NULL means the event name is not yet finalised or entered.**. Example: Australian Grand Prix.", + "date_set": "TEXT. Date set for the race. Example: 2009/03/29.", + "start_time": "VARCHAR. Scheduled race start time. **NULL means the start time is TBD or not recorded.**. Example: 06:00:00.", + "sessions": { + "fp1": { + "date": "VARCHAR. Free Practice 1 date. **NULL means FP1 is not scheduled or date not published.**. Example: 2022/03/18.", + "time": "VARCHAR. Free Practice 1 time. **NULL means FP1 time is not available.**. Possible values: 02:30:00, 04:30:00, 09:30:00, 10:00:00, 11:30:00, 12:00:00, 13:30:00, 16:30:00, 17:30:00, 18:00:00." + }, + "fp2": { + "date": "VARCHAR. Free Practice 2 date. **NULL means FP2 is not planned or date is missing.**. Example: 2021/04/16.", + "time": "VARCHAR. Free Practice 2 time. **NULL means FP2 time is not available.**. Example: 15:00:00." + }, + "fp3": { + "date": "VARCHAR. Free Practice 3 date. **NULL means FP3 does not occur or date not provided.**. Example: 2021/04/17.", + "time": "VARCHAR. Free Practice 3 time. **NULL means FP3 time is not recorded.**. Example: 12:00:00." + }, + "qualifying": { + "date": "VARCHAR. Qualifying date. **NULL means qualifying date is unset.**. Example: 2021/04/17.", + "time": "VARCHAR. Qualifying time. **NULL means qualifying time is unknown.**. Example: 06:00:00." 
+ }, + "sprint": { + "date": "VARCHAR. Sprint-race date. **NULL means no sprint race for this event or date not released.**. Possible values: 2021/07/17, 2021/09/11, 2022/04/23, 2022/07/09, 2022/11/12, 2023/10/07, 2023/11/04, 2024/05/04, 2024/11/02.", + "time": "VARCHAR. Sprint-race time. **NULL means sprint time is not applicable or missing.**. Possible values: 13:00:00, 13:30:00, 14:30:00, 18:00:00, 18:30:00, 22:00:00." + } + }, + "details_url": "VARCHAR. URL to additional race details. **NULL means no external reference is available.**. Example: http://en.wikipedia.org/wiki/2009_Chinese_Grand_Prix." + } + }, + "sports_events|sprint_results|sprint_performance": { + "column_meaning": "JSONB column. Captures performance metrics in sprint races including lap data, timing, position, and score.", + "fields_meaning": { + "grid": "INTEGER. Starting grid position. Example: 2.", + "final_position": "INTEGER. Sprint-race finishing position. **NULL means the driver did not finish / classify.**. Example: 1.", + "position_label": "VARCHAR. Sprint position label. **NULL means label not provided.**. Example: 1.", + "ranking_order": "INTEGER. Order mark for the sprint result. Example: 1.", + "points": "REAL. Points scored in the sprint. **NULL means no points were awarded or result is pending.**. Possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8.", + "laps_completed": "INTEGER. Number of loops in the sprint. Example: 17.", + "timing": { + "final_time": "VARCHAR. Final total time / status marker for the sprint race. **NULL means the finisher had no classified time or data are not available.**. Example: 25:38.426.", + "duration_ms": "INTEGER. Milliseconds for the sprint. Example: 1538426." + }, + "fastest_lap": { + "lap_number": "INTEGER. Fastest lap in the sprint. Example: 14.", + "lap_time": "TEXT. Fast lap time for the sprint. Example: 1:30.013." + }, + "status_code": "INTEGER. Sprint code. Possible values: 1, 3, 10, 23, 31, 43, 76, 130." + } + } +} \ No newline at end of file diff --git a/sports_events/sports_events_kb.jsonl b/sports_events/sports_events_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..da905e4dacd23473c48ace4c9067dcd30c89e944 --- /dev/null +++ b/sports_events/sports_events_kb.jsonl @@ -0,0 +1,56 @@ +{"id": 0, "knowledge": "Race Weekend Structure", "description": "Illustrates the standard sequence of sessions that constitute a championship race weekend.", "definition": "A typical race weekend consists of several sessions: up to three Free Practice sessions for teams to tune their cars, a Qualifying session to determine the starting order for the main race, and the Grand Prix Race itself. Some weekends also include a Sprint session.", "type": "value_illustration", "children_knowledge": -1} +{"id": 1, "knowledge": "Qualifying Format Explained", "description": "Illustrates the multi-stage knockout system used in qualifying to set the race starting grid.", "definition": "Qualifying is divided into three parts: Q1, Q2, and Q3. In Q1, all drivers compete, and the slowest are eliminated. The remaining drivers proceed to Q2, where more are eliminated. The final top drivers advance to Q3 to compete for Pole Position.", "type": "value_illustration", "children_knowledge": -1} +{"id": 2, "knowledge": "Sprint Session Explained", "description": "Illustrates the concept of a Sprint session within a race weekend.", "definition": "A Sprint is a shorter race held on some race weekends. 
It has its own abbreviated qualifying and awards fewer championship points than the main Grand Prix. Its result determines the starting grid for the main race. The inclusion of a Sprint session modifies the standard Race Weekend Structure.", "type": "value_illustration", "children_knowledge": [0]} +{"id": 3, "knowledge": "Data Unavailability for Circuit Location", "description": "Clarifies the meaning of unavailable geographical data for a circuit.", "definition": "When a circuit's city, latitude, or longitude are not provided, it indicates that this information was not supplied or is unknown in the source data feed.", "type": "value_illustration", "children_knowledge": -1} +{"id": 4, "knowledge": "Data Unavailability for Circuit Elevation", "description": "Clarifies the meaning of unavailable elevation data for a circuit.", "definition": "When a circuit's elevation in meters is not provided, it signifies that this specific data point was not recorded for the circuit.", "type": "value_illustration", "children_knowledge": -1} +{"id": 5, "knowledge": "Data Unavailability for Driver Identification", "description": "Clarifies the meaning of a missing permanent racing number or identification code for a driver.", "definition": "When a driver's permanent number or three-letter identifier code is unavailable, it indicates that one has not been officially assigned or it is not present in the dataset.", "type": "value_illustration", "children_knowledge": -1} +{"id": 6, "knowledge": "Data Unavailability for Constructor Nationality", "description": "Clarifies the meaning of unavailable nationality data for a constructor.", "definition": "When a constructor's nationality is not specified, it means the team's country of origin is not recorded in the dataset.", "type": "value_illustration", "children_knowledge": -1} +{"id": 7, "knowledge": "Indeterminate Event Timings", "description": "Clarifies the meaning of unavailable date or time information for any race weekend session.", "definition": "When the date or time for any session (practice, qualifying, sprint, or race) is not provided, it signifies that the schedule for that session is To Be Determined (TBD), not applicable for the event, or not yet published.", "type": "value_illustration", "children_knowledge": -1} +{"id": 8, "knowledge": "Championship Points System (Race)", "description": "Defines the standard points awarded for the top ten finishing positions.", "definition": "Points are awarded to the top 10 finishers as follows: 1st place - 25 points, 2nd - 18, 3rd - 15, 4th - 12, 5th - 10, 6th - 8, 7th - 6, 8th - 4, 9th - 2, 10th - 1.", "type": "value_illustration", "children_knowledge": -1} +{"id": 9, "knowledge": "Championship Points System (Sprint)", "description": "Defines the points awarded for top finishing positions in a Sprint session.", "definition": "Points are awarded to the top 8 finishers in a Sprint session as follows: 1st place - 8 points, 2nd - 7, 3rd - 6, 4th - 5, 5th - 4, 6th - 3, 7th - 2, 8th - 1. 
This is a key feature of the Sprint Session Explained.", "type": "value_illustration", "children_knowledge": [2]} +{"id": 10, "knowledge": "Pole Position", "description": "Defines the premier starting position for a race.", "definition": "A driver achieves Pole Position by setting the fastest lap time during the final stage of the Qualifying Format Explained.", "type": "domain_knowledge", "children_knowledge": [1]} +{"id": 11, "knowledge": "Podium Finish", "description": "Defines a top-tier race/season result for a driver.", "definition": "A Podium Finish is achieved when a driver's final rank is 1st, 2nd, or 3rd.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Points Finish", "description": "Defines a race result that earns championship points.", "definition": "A Points Finish is a race classification within the top positions that are awarded points, as specified by the Championship Points System (Race).", "type": "domain_knowledge", "children_knowledge": [8]} +{"id": 13, "knowledge": "Fastest Lap Award", "description": "Defines the conditions for being awarded the fastest lap of a race.", "definition": "The Fastest Lap Award is given to the driver who achieves the single quickest lap time during a race, under the condition that they must also secure a Points Finish.", "type": "domain_knowledge", "children_knowledge": [12]} +{"id": 14, "knowledge": "High-Altitude Circuit", "description": "Defines a circuit with specific environmental characteristics that impact vehicle performance.", "definition": "A circuit is considered a High-Altitude Circuit if its elevation is greater than 800 meters above sea level. These circuits pose unique challenges for aerodynamics and power unit performance.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Sprint Winner", "description": "Defines the winner of a sprint session.", "definition": "A Sprint Winner is the driver who is classified in 1st place at the conclusion of a Sprint Session.", "type": "domain_knowledge", "children_knowledge": [2]} +{"id": 16, "knowledge": "Race Winner", "description": "Defines the winner of the main race event.", "definition": "A Race Winner is the driver who is classified in 1st place at the conclusion of the main Grand Prix race.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 17, "knowledge": "Efficient Pit Stop", "description": "Defines a benchmark for an exceptionally fast pit stop.", "definition": "An Efficient Pit Stop is a pit stop where the total time the car is stationary is less than 2.5 seconds, indicating outstanding performance by the pit crew.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 18, "knowledge": "Hat Trick", "description": "Defines a collection of three key achievements in a single race weekend.", "definition": "A driver achieves a Hat Trick by securing Pole Position, being the Race Winner, and receiving the Fastest Lap Award all in the same event.", "type": "domain_knowledge", "children_knowledge": [10, 13, 16]} +{"id": 19, "knowledge": "Constructor's Double Podium", "description": "Defines a top-tier race result for a constructor (team).", "definition": "A Constructor's Double Podium occurs when both drivers from the same team achieve a Podium Finish in the same race.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 20, "knowledge": "Driver Age", "description": "Calculates the current age of a driver based on their birth date.", "definition": "Age = \\lfloor \\frac{Date_{current} - 
Date_{birth}}{365.25} \\rfloor", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 21, "knowledge": "Lap Time in Seconds", "description": "Converts lap time measurements from milliseconds to seconds.", "definition": "T_{seconds} = \\frac{T_{milliseconds}}{1000}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 22, "knowledge": "Pit Stop Duration in Seconds", "description": "Converts pit stop duration from milliseconds to a more human-readable seconds format.", "definition": "D_{seconds} = \\frac{D_{milliseconds}}{1000}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 23, "knowledge": "Driver's Average Lap Time", "description": "Calculates a driver's average lap time over the course of a race.", "definition": "\\bar{T}_{lap} = \\frac{\\sum_{i=1}^{n} T_{lap_i}}{n}, \\text{where } T_{lap_i} \\text{ is the Lap Time in Seconds for each lap } i \\text{, and } n \\text{ is the total number of laps completed by the driver.}", "type": "calculation_knowledge", "children_knowledge": [21]} +{"id": 24, "knowledge": "Position Gain / Loss", "description": "Calculates the number of positions a driver gained or lost between the start and end of a race.", "definition": "G_{pos} = P_{start} - P_{finish}, \\text{where } P_{start} \\text{ is the starting grid position and } P_{finish} \\text{ is the final race position. A positive value indicates a gain.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 25, "knowledge": "Constructor's Total Race Points", "description": "Calculates the total points a constructor scores in a single race from both of its drivers.", "definition": "P_{constructor} = \\sum P_{driver}, \\text{where the sum includes the points from all drivers of a constructor who had a Points Finish.}", "type": "calculation_knowledge", "children_knowledge": [12]} +{"id": 26, "knowledge": "Driver's Points Per Race (PPR)", "description": "Calculates a driver's average points accumulation per race.", "definition": "PPR = \\frac{P_{total}}{R_{completed}}, \\text{where } P_{total} \\text{ is the driver's cumulative points and } R_{completed} \\text{ is the number of races the driver has participated in.}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 27, "knowledge": "Constructor Reliability Rate", "description": "Calculates the percentage of times a constructor's cars have finished the races they started.", "definition": "R_{reliability} = \\frac{N_{finishes}}{N_{starts}} \\times 100, \\text{where } N_{finishes} \\text{ is the total number of times the constructor's cars were classified as finishers, and } N_{starts} \\text{ is the total number of times they started a race.} A race is considered 'finished' if the status mark is not specially marked (null).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 28, "knowledge": "Qualifying Time Deficit to Pole", "description": "Calculates the time difference between a driver's qualifying lap and the pole sitter's lap.", "definition": "\\Delta_{qualifying} = T_{driver} - T_{pole}, \\text{where } T_{driver} \\text{ is the driver's best Lap Time in Seconds during qualifying, and } T_{pole} \\text{ is the time achieved by the driver in Pole Position.}", "type": "calculation_knowledge", "children_knowledge": [10, 21]} +{"id": 29, "knowledge": "Race Time Delta to Winner", "description": "Calculates the time gap between a driver's final race time and that of the winner.", "definition": "\\Delta_{race} = T_{driver} - T_{winner}, \\text{where } 
T_{driver} \\text{ is the driver's total race time in seconds and } T_{winner} \\text{ is the total race time of the Race Winner.}", "type": "calculation_knowledge", "children_knowledge": [15]} +{"id": 30, "knowledge": "Race Performance Index (RPI)", "description": "Calculates a driver's overall performance in a race, rewarding both a high finishing position and positions gained.", "definition": "RPI = (21 - P_{finish}) + G_{pos}, \\text{where } P_{finish} \\text{ is the final position of the Race Winner (or other driver) and } G_{pos} \\text{ is the driver's Position Gain / Loss.}", "type": "calculation_knowledge", "children_knowledge": [16, 24]} +{"id": 31, "knowledge": "Constructor's Performance Score (CPS)", "description": "Calculates a constructor's overall seasonal performance by weighting their points-scoring ability with their finishing reliability.", "definition": "CPS = (\\text{Season Total Points}) \\times \\frac{\\text{Constructor Reliability Rate}}{100}, \\text{where total points are derived from their Constructor's Total Race Points over the season.}", "type": "calculation_knowledge", "children_knowledge": [25, 27]} +{"id": 32, "knowledge": "Lap Time Consistency", "description": "Measures the stability of a driver's lap times during a race, calculated as the standard deviation.", "definition": "LTC = \\sqrt{\\frac{\\sum (T_{lap_i} - \\bar{T}_{lap})^2}{n}}, \\text{where } \\bar{T}_{lap} \\text{ is the driver's Driver's Average Lap Time.}", "type": "calculation_knowledge", "children_knowledge": [21, 23]} +{"id": 33, "knowledge": "Adjusted Race Time Delta", "description": "Calculates the race time difference to the winner after accounting for the total time the driver spent stationary during pit stops.", "definition": "\\Delta_{Adjusted} = \\Delta_{Race} - \\sum D_{seconds}, \\text{where } \\Delta_{Race} \\text{ is the Race Time Delta to Winner and } D_{seconds} \\text{ is each Pit Stop Duration in Seconds.}", "type": "calculation_knowledge", "children_knowledge": [22, 29]} +{"id": 34, "knowledge": "Qualifying to Race Pace Differential", "description": "Compares a driver's raw qualifying pace to their average race pace to analyze performance drop-off or improvement.", "definition": "QRD = \\bar{T}_{lap} - T_{qualifying}, \\text{where } \\bar{T}_{lap} \\text{ is the Driver's Average Lap Time and } T_{qualifying} \\text{ is their fastest qualifying lap from which the Qualifying Time Deficit to Pole is measured.}", "type": "calculation_knowledge", "children_knowledge": [23, 28]} +{"id": 35, "knowledge": "High-Altitude Performance Factor", "description": "Quantifies a driver's relative performance at circuits with unique atmospheric conditions.", "definition": "HAPF = \\frac{\\text{Driver's Average Lap Time at High-Altitude Circuit}}{\\text{Driver's Season Average Lap Time}}, \\text{which evaluates performance specifically at a High-Altitude Circuit.}", "type": "calculation_knowledge", "children_knowledge": [14, 23]} +{"id": 36, "knowledge": "Sprint Performance Index", "description": "Calculates a driver's overall performance in a Sprint session, combining their finishing result with points scored.", "definition": "SPI = (9 - P_{sprint}) + S_{pts}, \\text{where } P_{sprint} \\text{ is the finishing position of the Sprint Winner (or other driver) and } S_{pts} \\text{ is points awarded per the Championship Points System (Sprint).}", "type": "calculation_knowledge", "children_knowledge": [9, 15]} +{"id": 37, "knowledge": "Driver Performance Value", "description": "Calculates a value 
metric for a driver by comparing their points-scoring record to their age.", "definition": "DPV = \\frac{\\text{Driver's Points Per Race (PPR)}}{\\text{Driver Age}}, \\text{providing a measure of success relative to experience.}", "type": "calculation_knowledge", "children_knowledge": [20, 26]} +{"id": 38, "knowledge": "Team's Combined Race Result", "description": "Calculates a score for a team in a single race based on the collective finishing positions of their drivers.", "definition": "TCRR = P_{finishing position of driver 1} + P_{finishing result of driver 2}. \\text{A lower score is better, reaching its minimum when a team achieves a Constructor's Double Podium with a Race Winner.}", "type": "calculation_knowledge", "children_knowledge": [16, 19]} +{"id": 39, "knowledge": "Tyre Management Index", "description": "Estimates a driver's ability to manage tyre degradation, by comparing their lap time consistency against the number of pit stops made.", "definition": "TMI = \\frac{1}{\\text{Lap Time Consistency} \\times (1 + N_{stops})}, \\text{ where a higher value is better. Uses the concept of pit stops from Efficient Pit Stop.}", "type": "calculation_knowledge", "children_knowledge": [17, 32]} +{"id": 40, "knowledge": "Clutch Performer", "description": "Defines a driver who excels under pressure by gaining many positions to secure a top result.", "definition": "A driver is a Clutch Performer if their Position Gain / Loss is greater than 5 and they achieve a Podium Finish in the same race.", "type": "domain_knowledge", "children_knowledge": [11, 24]} +{"id": 41, "knowledge": "Qualifying Specialist", "description": "Defines a driver who excels in qualifying but may not maintain the same relative pace during the race.", "definition": "A driver is a Qualifying Specialist if their Qualifying Time Deficit to Pole is less than 0.2 seconds.", "type": "domain_knowledge", "children_knowledge": [28]} +{"id": 42, "knowledge": "Dominant Victory", "description": "Defines a particularly commanding win, characterized by a large margin over the competition.", "definition": "A Dominant Victory is when a Race Winner's final Race Time Delta to Winner over the second-place driver is greater than 5 seconds.", "type": "domain_knowledge", "children_knowledge": [15, 29]} +{"id": 43, "knowledge": "Strategic Masterclass", "description": "Defines a race won through superior strategy, often involving pit stops.", "definition": "A Strategic Masterclass is when a Race Winner also achieves one or more Efficient Pit Stops, demonstrating that flawless strategy contributed to the victory.", "type": "domain_knowledge", "children_knowledge": [16, 17]} +{"id": 44, "knowledge": "Grand Chelem", "description": "Defines the 'Grand Slam' of a race weekend, the most complete individual performance possible.", "definition": "A Grand Chelem is achieved when a driver successfully completes a Hat Trick and also leads every lap of the race from start to finish.", "type": "domain_knowledge", "children_knowledge": [18]} +{"id": 45, "knowledge": "Reliable and Performing Constructor", "description": "Defines a team that demonstrates both exceptional reliability and strong on-track performance.", "definition": "A team is a Reliable and Performing Constructor if their Constructor Reliability Rate is above 95% and their Constructor's Performance Score (CPS) is in the top three for the season.", "type": "domain_knowledge", "children_knowledge": [27, 31]} +{"id": 46, "knowledge": "High-Altitude Ace", "description": "Defines a driver who shows 
exceptionally strong performance at high-altitude venues compared to their own baseline.", "definition": "A driver is a High-Altitude Ace if their Race Performance Index (RPI) at a High-Altitude Circuit is at least 20% higher than their seasonal average RPI.", "type": "domain_knowledge", "children_knowledge": [14, 30]} +{"id": 47, "knowledge": "Underdog Win", "description": "Defines a surprise victory by a driver who is not a typical front-runner based on their performance.", "definition": "An Underdog Win occurs when a Race Winner has a Driver's Points Per Race (PPR) of less than 5 prior to the event.", "type": "domain_knowledge", "children_knowledge": [16, 26]} +{"id": 48, "knowledge": "Flawless Team Weekend", "description": "Defines a weekend of perfect execution from both the driver and the pit crew.", "definition": "A Flawless Team Weekend is when a driver secures Pole Position and is the Race Winner, and every service for their car is an Efficient Pit Stop.", "type": "domain_knowledge", "children_knowledge": [10, 16, 17]} +{"id": 49, "knowledge": "Veteran's Podium", "description": "Defines a significant achievement for an experienced, older driver.", "definition": "A Veteran's Podium is when a driver with a Driver Age of 35 years or more successfully achieves a Podium Finish.", "type": "domain_knowledge", "children_knowledge": [11, 20]} +{"id": 50, "knowledge": "Constructors with Significant Participation", "description": "A threshold criterion for constructors with Significant Participation in races.", "definition": "Constructors are considered to have significant participation if they have started more than 10 races, ensuring statistical validity for reliability rate calculations.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 51, "knowledge": "Pole-Based Race Win Probability", "description": "Defines the likelihood of a driver winning a race based on their starting position being pole.", "definition": "Assume a race win probability of 35% if the driver started from Pole Position, and 5% otherwise.", "type": "domain_knowledge", "children_knowledge": [10, 16]} +{"id": 52, "knowledge": "Pole-Based Fastest Lap Probability", "description": "Defines the likelihood of a driver setting the fastest lap based on their starting position being pole.", "definition": "Assume a fastest lap probability of 25% if the driver started from Pole Position, and 8% otherwise.", "type": "domain_knowledge", "children_knowledge": [10, 13]} +{"id": 53, "knowledge": "Qualifying Performance Cluster", "description": "Categorizes drivers based on their average qualifying deficit to pole position.", "definition": "Three tiers: 'Pole Threat' (<0.15s), 'Mid Gap' (0.15s-0.4s), 'Backmarker' (≥0.4s)", "type": "domain_knowledge", "children_knowledge": [28]} +{"id": 54, "knowledge": "Average Stops Per Car", "description": "Calculates the average number of pit stops undertaken by each competing car in a single race.", "definition": "For a given race, this is calculated as: (Total Number of Pit Stops) / (Number of Unique Cars that made a Pit Stop).", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 55, "knowledge": "Pit Strategy Cluster", "description": "Classifies races based on average number of pit stops per car.", "definition": "Three categories: 'Single-Stop Race' (<1.5 stops), 'Standard Two-Stop' (1.5-2.5 stops), 'High-Strategy Event' (≥2.5 stops)", "type": "domain_knowledge", "children_knowledge": [54]} \ No newline at end of file diff --git 
a/sports_events/sports_events_schema.txt b/sports_events/sports_events_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..4189644f28affdc4c826ea00da521184f56e39ac --- /dev/null +++ b/sports_events/sports_events_schema.txt @@ -0,0 +1,219 @@ +CREATE TABLE "circuits" ( +cctkey integer NOT NULL, +location_metadata jsonb NULL, + PRIMARY KEY (cctkey) +); + +First 3 rows: + cctkey location_metadata +-------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 3 {'name': 'Bahrain International Circuit', 'location': {'city': 'Sakhir', 'country': 'Bahrain'}, 'coordinates': {'latitude': 26.0325, 'longitude': 50.5106, 'elevation_m': None}, 'external_link': None, 'reference_code': 'bahrain'} + 4 {'name': 'Circuit de Barcelona-Catalunya', 'location': {'city': 'Montmeló', 'country': 'Spain'}, 'coordinates': {'latitude': 41.57, 'longitude': 2.26111, 'elevation_m': None}, 'external_link': 'http://en.wikipedia.org/wiki/Circuit_de_Barcelona-Catalunya', 'reference_code': 'catalunya'} + 5 {'name': 'Istanbul Park', 'location': {'city': None, 'country': 'Turkey'}, 'coordinates': {'latitude': 40.9517, 'longitude': 29.405, 'elevation_m': None}, 'external_link': 'http://en.wikipedia.org/wiki/Istanbul_Park', 'reference_code': 'istanbul'} +... + + +CREATE TABLE "drivers" ( +drv_main integer NOT NULL, +driver_identity jsonb NULL, + PRIMARY KEY (drv_main) +); + +First 3 rows: + drv_main driver_identity +---------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 2 {'code': 'HEI', 'name': {'surname': 'Heidfeld', 'first_name': 'Nick'}, 'info_link': 'http://en.wikipedia.org/wiki/Nick_Heidfeld', 'reference': 'heidfeld', 'birth_date': '1977-05-10', 'nationality': 'German', 'racing_number': None} + 3 {'code': None, 'name': {'surname': 'Rosberg', 'first_name': 'Nico'}, 'info_link': 'http://en.wikipedia.org/wiki/Nico_Rosberg', 'reference': 'rosberg', 'birth_date': None, 'nationality': 'German', 'racing_number': None} + 4 {'code': 'ALO', 'name': {'surname': None, 'first_name': None}, 'info_link': None, 'reference': 'alonso', 'birth_date': '1981-07-29', 'nationality': 'Spanish', 'racing_number': '14.0'} +... 
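+
+-- Illustrative sketch only, not part of the released schema dump: one way the
+-- driver_identity JSONB above can be unpacked and "Driver Age" (kb id 20)
+-- approximated in PostgreSQL, assuming birth_date strings follow the
+-- YYYY-MM-DD format shown in the sample rows.
+SELECT d.drv_main,
+       d.driver_identity -> 'name' ->> 'surname'   AS surname,
+       d.driver_identity ->> 'nationality'         AS nationality,
+       FLOOR((CURRENT_DATE - (d.driver_identity ->> 'birth_date')::date) / 365.25) AS age_years
+FROM drivers d
+WHERE d.driver_identity ->> 'birth_date' IS NOT NULL;
+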
+ + +CREATE TABLE "races" ( +rak_id integer NOT NULL, +yr integer NULL, +rnum integer NULL, +trkbind integer NULL, +event_schedule jsonb NULL, + PRIMARY KEY (rak_id), + FOREIGN KEY (trkbind) REFERENCES circuits(cctkey) +); + +First 3 rows: + rak_id yr rnum trkbind event_schedule +-------- ---- ------ --------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 2 2009 2 2 {'date_set': '2009/04/05', 'sessions': {'fp1': {'date': None, 'time': None}, 'fp2': {'date': None, 'time': None}, 'fp3': {'date': None, 'time': None}, 'sprint': {'date': None, 'time': None}, 'qualifying': {'date': None, 'time': None}}, 'event_name': None, 'start_time': '09:00:00', 'details_url': None} + 3 2009 3 17 {'date_set': '2009/04/19', 'sessions': {'fp1': {'date': None, 'time': None}, 'fp2': {'date': None, 'time': None}, 'fp3': {'date': None, 'time': None}, 'sprint': {'date': None, 'time': None}, 'qualifying': {'date': None, 'time': None}}, 'event_name': 'Chinese Grand Prix', 'start_time': None, 'details_url': 'http://en.wikipedia.org/wiki/2009_Chinese_Grand_Prix'} + 4 2009 4 3 {'date_set': '2009/04/26', 'sessions': {'fp1': {'date': None, 'time': None}, 'fp2': {'date': None, 'time': None}, 'fp3': {'date': None, 'time': None}, 'sprint': {'date': None, 'time': None}, 'qualifying': {'date': None, 'time': None}}, 'event_name': 'Bahrain Grand Prix', 'start_time': '12:00:00', 'details_url': None} +... + + +CREATE TABLE "constructor_results" ( +cresref integer NOT NULL, +matchref integer NULL, +unitnode integer NULL, +scoreval real NULL, +st_mark text NULL, + PRIMARY KEY (cresref), + FOREIGN KEY (matchref) REFERENCES races(rak_id), + FOREIGN KEY (unitnode) REFERENCES constructors(cstr_key) +); + +First 3 rows: + cresref matchref unitnode scoreval st_mark +--------- ---------- ---------- ---------- --------- + 15178 943 9 22 + 13224 549 64 0 + 14394 863 10 8 +... + + +CREATE TABLE "constructors" ( +cstr_key integer NOT NULL, +refcod text NULL, +namelabel text NULL, +naty text NULL, +linkpage text NULL, + PRIMARY KEY (cstr_key) +); + +First 3 rows: + cstr_key refcod namelabel naty linkpage +---------- ---------- ----------- ------- --------------------------------------- + 1 mclaren McLaren British + 2 bmw_sauber BMW Sauber http://en.wikipedia.org/wiki/BMW_Sauber + 3 williams Williams +... + + +CREATE TABLE "constructor_standings" ( +cstnds integer NOT NULL, +rref integer NULL, +contunit integer NULL, +scr_tot real NULL, +posno integer NULL, +poslab text NULL, +trophy_w integer NULL, + PRIMARY KEY (cstnds), + FOREIGN KEY (rref) REFERENCES races(rak_id), + FOREIGN KEY (contunit) REFERENCES constructors(cstr_key) +); + +First 3 rows: + cstnds rref contunit scr_tot posno poslab trophy_w +-------- ------ ---------- --------- ------- -------- ---------- + 25335 898 9 553 1 1 12 + 10478 451 53 16 7 7 0 + 8849 359 42 0 18 18 0 +... 
+ + +CREATE TABLE "driver_standings" ( +drv_stnd integer NOT NULL, +rlink integer NULL, +drive_link integer NULL, +acc_pt real NULL, +px integer NULL, +px_desc text NULL, +topmark integer NULL, + PRIMARY KEY (drv_stnd), + FOREIGN KEY (rlink) REFERENCES races(rak_id), + FOREIGN KEY (drive_link) REFERENCES drivers(drv_main) +); + +First 3 rows: + drv_stnd rlink drive_link acc_pt px px_desc topmark +---------- ------- ------------ -------- ---- --------- --------- + 50065 752 531 nan 56 56 0 + 54588 651 362 0 nan 0 + 16188 329 92 nan 36 0 +... + + +CREATE TABLE "lap_times" ( +rc_index integer NOT NULL, +wheel_unit integer NOT NULL, +lapval integer NOT NULL, +pp integer NULL, +tmdesc text NULL, +msec_val integer NULL, + PRIMARY KEY (rc_index, wheel_unit, lapval), + FOREIGN KEY (rc_index) REFERENCES races(rak_id), + FOREIGN KEY (wheel_unit) REFERENCES drivers(drv_main) +); + +First 3 rows: + rc_index wheel_unit lapval pp tmdesc msec_val +---------- ------------ -------- ---- -------- ---------- + 1093 4 55 7 1:43.848 103848 + 64 2 7 19 1:21.522 81522 + 928 830 51 8 1:44.392 104392 +... + + +CREATE TABLE "pit_stops" ( +matchidx integer NOT NULL, +wunit integer NOT NULL, +pause_no integer NOT NULL, +moment integer NULL, +attime text NULL, +durtxt text NULL, +ms_count integer NULL, + PRIMARY KEY (matchidx, wunit, pause_no), + FOREIGN KEY (matchidx) REFERENCES races(rak_id), + FOREIGN KEY (wunit) REFERENCES drivers(drv_main) +); + +First 3 rows: + matchidx wunit pause_no moment attime durtxt ms_count +---------- ------- ---------- -------- -------- -------- ---------- + 911 3 2 19 14:41:14 22.936 22936 + 1084 840 2 48 16:01:18 21.952 21952 + 1081 844 1 9 15:20:26 23.212 23212 +... + + +CREATE TABLE "qualifying" ( +qualkey integer NOT NULL, +rbind integer NULL, +pilotrec integer NULL, +corptag integer NULL, +regno integer NULL, +px_pos integer NULL, +q1_r text NULL, +q2_r text NULL, +q3_r text NULL, + PRIMARY KEY (qualkey), + FOREIGN KEY (rbind) REFERENCES races(rak_id), + FOREIGN KEY (pilotrec) REFERENCES drivers(drv_main), + FOREIGN KEY (corptag) REFERENCES constructors(cstr_key) +); + +First 3 rows: + qualkey rbind pilotrec corptag regno px_pos q1_r q2_r q3_r +--------- ------- ---------- --------- ------- -------- -------- -------- -------- + 6815 954 1 131 44 1 1:14.121 1:13.076 1:12.812 + 8587 1041 832 1 55 10 1:27.378 1:26.361 1:26.709 + 6452 933 831 15 12 9 1:12.001 1:09.652 1:09.713 +... 
+ + +CREATE TABLE "sprint_results" ( +srescode integer NOT NULL, +matchref integer NULL, +unitdrive integer NULL, +makeref integer NULL, +rno integer NULL, +sprint_performance jsonb NULL, + PRIMARY KEY (srescode), + FOREIGN KEY (matchref) REFERENCES races(rak_id), + FOREIGN KEY (unitdrive) REFERENCES drivers(drv_main), + FOREIGN KEY (makeref) REFERENCES constructors(cstr_key) +); + +First 3 rows: + srescode matchref unitdrive makeref rno sprint_performance +---------- ---------- ----------- --------- ----- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + 5 1061 846 1 4 {'grid': 6, 'points': 0, 'timing': {'final_time': '+24.111', 'duration_ms': 1562537}, 'fastest_lap': {'lap_time': '1:30.566', 'lap_number': 16}, 'status_code': 1, 'ranking_order': 5, 'final_position': 5, 'laps_completed': 17, 'position_label': '5'} + 6 1061 817 1 3 {'grid': 7, 'points': 0, 'timing': {'final_time': '+30.959', 'duration_ms': 1569385}, 'fastest_lap': {'lap_time': '1:30.640', 'lap_number': 17}, 'status_code': 1, 'ranking_order': 6, 'final_position': 6, 'laps_completed': 17, 'position_label': '6'} + 7 1061 4 214 14 {'grid': 11, 'points': 0, 'timing': {'final_time': '+43.527', 'duration_ms': 1581953}, 'fastest_lap': {'lap_time': '1:31.773', 'lap_number': 17}, 'status_code': 1, 'ranking_order': 7, 'final_position': 7, 'laps_completed': 17, 'position_label': '7'} +... diff --git a/virtual_idol/virtual_idol_column_meaning_base.json b/virtual_idol/virtual_idol_column_meaning_base.json new file mode 100644 index 0000000000000000000000000000000000000000..da7eed361814290badec38b5149b560625930ffc --- /dev/null +++ b/virtual_idol/virtual_idol_column_meaning_base.json @@ -0,0 +1,231 @@ +{ + "virtual_idol|fans|user_registry": "TEXT. Unique identifier for fan user account registration in the virtual_idol idol platform. PK = Fans(User_Registry).", + "virtual_idol|fans|nick_label": "TEXT. Display nickname or username chosen by the fan for public identification.", + "virtual_idol|fans|reg_moment": "TEXT. Date and time when the fan account was first registered on the platform.", + "virtual_idol|fans|tier_step": "SMALLINT. Fan loyalty tier level indicating status progression within the platform hierarchy.", + "virtual_idol|fans|pts_val": "BIGINT. Accumulated loyalty points earned through various platform activities and interactions.", + "virtual_idol|fans|status_tag": "TEXT. Current account status classification for the fan user.", + "virtual_idol|fans|lang_pref": "TEXT. Fan's preferred language for platform interface and content. **NULL means language preference not specified or using platform default.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|virtual_idolidols|entity_reg": "TEXT. Unique identifier for virtual_idol idol entity registration. PK = virtual_idolIdols(Entity_Reg).", + "virtual_idol|virtual_idolidols|name_tag": "TEXT. Official name or stage name of the virtual_idol idol character.", + "virtual_idol|virtual_idolidols|kind_tag": "TEXT. Type or category classification of the virtual_idol idol (e.g., vtuber, AI idol).", + "virtual_idol|virtual_idolidols|deb_date": "TEXT. Official debut date when the virtual_idol idol first appeared publicly.", + "virtual_idol|virtual_idolidols|assoc_group": "TEXT. 
Associated agency, company, or group managing the virtual_idol idol.", + "virtual_idol|virtual_idolidols|genre_tag": "TEXT. Primary genre or content category the virtual_idol idol specializes in.", + "virtual_idol|virtual_idolidols|prim_lang": "TEXT. Primary language used by the virtual_idol idol for content and communication.", + "virtual_idol|interactions|activity_reg": "TEXT. Unique identifier for individual fan-idol interaction activity session. PK = Interactions(Activity_Reg).", + "virtual_idol|interactions|time_mark": "TIMESTAMP. Precise timestamp when the interaction activity occurred.", + "virtual_idol|interactions|interact_fan_pivot": "TEXT. Reference to the fan user participating in this interaction. FK to Fans.", + "virtual_idol|interactions|interact_idol_pivot": "TEXT. Reference to the virtual_idol idol involved in this interaction. FK to virtual_idolIdols.", + "virtual_idol|interactions|act_kind": "TEXT. Type of interaction activity performed. **NULL means interaction type classification failed or is ambiguous.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|interactions|act_plat": "TEXT. Platform where the interaction took place. **NULL means platform identification failed or cross-platform activity.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|interactions|plat_used": "TEXT. Specific platform or service used for the interaction. **NULL means platform usage tracking incomplete or privacy mode enabled.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|interactions|dev_type": "TEXT. Device type used by fan for the interaction. **NULL means device detection failed or privacy settings block tracking.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|interactions|app_ver": "TEXT. Application version used during the interaction session.", + "virtual_idol|interactions|sess_dur_min": "SMALLINT. Duration of the interaction session in minutes.", + "virtual_idol|interactions|live_att": "SMALLINT. Number of live stream sessions attended during this interaction.", + "virtual_idol|interactions|watch_hrs": "REAL. Total hours of content watched during the interaction session.", + "virtual_idol|interactions|engagement_rate": "TEXT. Fan engagement efficiency per time unit during interaction sessions. Example: 15 actions/hr.", + "virtual_idol|interactions|gift_rate": "TEXT. Virtual gift-giving frequency and speed during interaction sessions. Example: 6 gifts/hr.", + "virtual_idol|interactions|message_rate": "TEXT. Chat message sending frequency during interaction sessions. Example: 45 msgs/hr.", + "virtual_idol|interactions|content_consumption_rate": "TEXT. Content viewing and consumption efficiency per session. Example: 489.68182650479406 mins/session.", + "virtual_idol|membershipandspending|member_reg": "BIGSERIAL. Auto-generated unique identifier for membership and spending record. PK = MembershipAndSpending(Member_Reg).", + "virtual_idol|membershipandspending|member_fan_pivot": "TEXT. Reference to the fan user for this membership record. FK to Fans.", + "virtual_idol|membershipandspending|memb_kind": "TEXT. Type or tier of membership subscription held by the fan.", + "virtual_idol|membershipandspending|memb_days": "SMALLINT. 
Duration of current membership in days since activation.", + "virtual_idol|membershipandspending|spend_usd": "REAL. Total amount spent by fan on platform in USD.", + "virtual_idol|membershipandspending|spend_freq": "TEXT. Frequency pattern of spending behavior by the fan.", + "virtual_idol|membershipandspending|pay_method": "TEXT. Preferred payment method used for transactions.", + "virtual_idol|membershipandspending|spend_rate": "TEXT. Daily spending velocity and financial engagement rate. Example: 32.296185365657216 USD/day.", + "virtual_idol|membershipandspending|value_per_day": "TEXT. Monthly spending projection based on current patterns. Example: 208.84866536458333 USD/month.", + "virtual_idol|membershipandspending|cost_efficiency": "TEXT. Value-to-cost ratio for membership and spending efficiency. Example: 85 value/USD.", + "virtual_idol|engagement|engage_reg": "BIGSERIAL. Auto-generated unique identifier for engagement metrics record. PK = Engagement(Engage_Reg).", + "virtual_idol|engagement|engage_activity_pivot": "TEXT. Reference to the interaction activity for engagement analysis. FK to Interactions.", + "virtual_idol|engagement|engage_member_pivot": "BIGINT. Reference to membership record for engagement correlation. FK to MembershipAndSpending.", + "virtual_idol|engagement|soc_int_score": "REAL. Social interaction score measuring fan's community participation level.", + "virtual_idol|engagement|eng_rate": "REAL. Overall engagement rate calculated from various interaction metrics.", + "virtual_idol|engagement|act_freq": "TEXT. Frequency classification of fan's activity on the platform.", + "virtual_idol|engagement|peak_time": "TEXT. Time period when fan shows highest activity levels.", + "virtual_idol|engagement|act_days_wk": "SMALLINT. Number of days per week fan is active on platform.", + "virtual_idol|engagement|avg_sess_count": "SMALLINT. Average number of sessions per active day.", + "virtual_idol|engagement|cont_pref": "TEXT. Fan's preferred content type or category.", + "virtual_idol|engagement|lang_pref": "TEXT. Fan's preferred language for content consumption.", + "virtual_idol|engagement|trans_use": "TEXT. Usage pattern of translation features by the fan.", + "virtual_idol|engagement|interaction_efficiency": "TEXT. Social interaction frequency and community participation rate. Example: 26.200000762939453 interactions/hr.", + "virtual_idol|engagement|session_productivity": "TEXT. Platform usage frequency and session completion rate. Example: 21 sessions/week, 14 sessions/week, 35 sessions/week.", + "virtual_idol|commerceandcollection|commerce_reg": "BIGSERIAL. Auto-generated unique identifier for commerce and collection record. PK = CommerceAndCollection(Commerce_Reg).", + "virtual_idol|commerceandcollection|commerce_engage_pivot": "BIGINT. Reference to engagement record for commerce behavior analysis. FK to Engagement.", + "virtual_idol|commerceandcollection|commerce_member_pivot": "BIGINT. Reference to membership record for commerce correlation. FK to MembershipAndSpending.", + "virtual_idol|commerceandcollection|merch_buy": "SMALLINT. Number of merchandise items purchased by the fan.", + "virtual_idol|commerceandcollection|merch_spend_usd": "REAL. Total amount spent on merchandise in USD.", + "virtual_idol|commerceandcollection|dig_own": "BIGINT. Number of digital items owned by the fan.", + "virtual_idol|commerceandcollection|phys_own": "BIGINT. Number of physical collectible items owned by the fan.", + "virtual_idol|commerceandcollection|coll_comp_rate": "REAL. 
Collection completion rate as percentage of available items.", + "virtual_idol|commerceandcollection|trade_level": "TEXT. Trading activity level classification. **NULL means trading activity tracking incomplete or user has not participated in trading.**", + "virtual_idol|socialcommunity|social_reg": "BIGSERIAL. Auto-generated unique identifier for social community record. PK = SocialCommunity(Social_Reg).", + "virtual_idol|socialcommunity|social_engage_pivot": "BIGINT. Reference to engagement record for social behavior analysis. FK to Engagement.", + "virtual_idol|socialcommunity|social_commerce_pivot": "BIGINT. Reference to commerce record for social influence on purchasing. FK to CommerceAndCollection.", + "virtual_idol|socialcommunity|comm_contrib": "TEXT. Type and level of contribution to community activities. **NULL means contribution assessment incomplete or minimal community participation.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|socialcommunity|cont_create_stat": "TEXT. Status of content creation activities by the fan. **NULL means content creation tracking incomplete or user has not created content.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|socialcommunity|content_creation_rate": "TEXT. User-generated content production frequency and creativity output. Example: 3 posts/week, 7 posts/week, 1 posts/week.", + "virtual_idol|socialcommunity|community_growth_rate": "TEXT. Social network expansion and follower acquisition rate. Example: 25 followers/month, 45 followers/month, 12 followers/month.", + "virtual_idol|socialcommunity|influence_rate": "TEXT. Social influence expansion and network connection growth speed. Example: 15 connections/week, 28 connections/week, 8 connections/week.", + "virtual_idol|eventsandclub|events_reg": "BIGSERIAL. Auto-generated unique identifier for events and fan club record. PK = EventsAndClub(Events_Reg).", + "virtual_idol|eventsandclub|events_social_pivot": "BIGINT. Reference to social community record for event participation analysis. FK to SocialCommunity.", + "virtual_idol|eventsandclub|events_member_pivot": "BIGINT. Reference to membership record for event access correlation. FK to MembershipAndSpending.", + "virtual_idol|eventsandclub|evt_part": "TEXT. Level and type of event participation by the fan. **NULL means event participation tracking incomplete or no events attended.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|eventsandclub|camp_part": "TEXT. Participation level in campaigns and promotional activities. **NULL means campaign participation tracking incomplete or no campaigns joined.**", + "virtual_idol|eventsandclub|club_stat": "TEXT. Current status within official fan club membership.", + "virtual_idol|eventsandclub|club_j_date": "TEXT. Date when fan joined the official fan club.", + "virtual_idol|eventsandclub|club_contrib": "TEXT. Type and level of contribution to fan club activities. **NULL means fan club contribution assessment incomplete or minimal participation.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case) virtual_idol.**", + "virtual_idol|loyaltyandachievements|loyalty_reg": "BIGSERIAL. Auto-generated unique identifier for loyalty and achievements record. PK = LoyaltyAndAchievements(Loyalty_Reg).", + "virtual_idol|loyaltyandachievements|loyalty_events_pivot": "BIGINT. 
Reference to events record for loyalty program correlation. FK to EventsAndClub.", + "virtual_idol|loyaltyandachievements|loyalty_engage_pivot": "BIGINT. Reference to engagement record for loyalty calculation. FK to Engagement.", + "virtual_idol|loyaltyandachievements|loy_pts": "BIGINT. Total loyalty points accumulated through platform activities.", + "virtual_idol|loyaltyandachievements|rew_tier": "TEXT. Current reward tier level in the loyalty program.", + "virtual_idol|loyaltyandachievements|repute_lv": "TEXT. Reputation level classification within the community.", + "virtual_idol|loyaltyandachievements|trust_val": "REAL. Trust score calculated from community interactions and behavior.", + "virtual_idol|moderationandcompliance|mod_reg": "BIGSERIAL. Auto-generated unique identifier for moderation and compliance record. PK = ModerationAndCompliance(Mod_Reg).", + "virtual_idol|moderationandcompliance|moderation_interact_pivot": "TEXT. Reference to interaction record for moderation context. FK to Interactions.", + "virtual_idol|moderationandcompliance|moderation_social_pivot": "BIGINT. Reference to social community record for moderation analysis. FK to SocialCommunity.", + "virtual_idol|moderationandcompliance|rpt_count": "SMALLINT. Number of reports filed against this fan's content or behavior.", + "virtual_idol|moderationandcompliance|warn_count": "SMALLINT. Number of warnings issued to this fan for policy violations.", + "virtual_idol|moderationandcompliance|viol_hist": "TEXT. History of policy violations committed by the fan. **NULL means no violation history recorded or clean conduct record.**", + "virtual_idol|moderationandcompliance|mod_stat": "TEXT. Current moderation status and any active restrictions on the account.", + "virtual_idol|moderationandcompliance|cont_comp": "TEXT. Content compliance rating and adherence to platform guidelines.", + "virtual_idol|moderationandcompliance|age_verif": "TEXT. Age verification status for access to age-restricted content.", + "virtual_idol|moderationandcompliance|pay_verif": "TEXT. Payment method verification status. **NULL means payment verification not completed or not required for current membership tier.**", + "virtual_idol|moderationandcompliance|id_verif": "TEXT. Identity verification status for enhanced security features. **NULL means identity verification not completed or not required for current account type.**", + "virtual_idol|preferencesandsettings|pref_reg": "BIGSERIAL. Auto-generated unique identifier for preferences and settings record. PK = PreferencesAndSettings(Pref_Reg).", + "virtual_idol|preferencesandsettings|preferences_member_pivot": "BIGINT. Reference to membership record for preference correlation. FK to MembershipAndSpending.", + "virtual_idol|preferencesandsettings|preferences_social_pivot": "BIGINT. Reference to social community record for social preference analysis. FK to SocialCommunity.", + "virtual_idol|preferencesandsettings|priv_set": "TEXT. Privacy settings configuration chosen by the fan.", + "virtual_idol|preferencesandsettings|ds_consent": "TEXT. Data sharing consent status for analytics and personalization.", + "virtual_idol|preferencesandsettings|notif_pref": "TEXT. Notification preferences for various platform activities. **NULL means notification preferences not configured or using platform defaults.**", + "virtual_idol|preferencesandsettings|comm_pref": "TEXT. Communication preferences for interactions with other users. 
**NULL means communication preferences not specified or using standard settings.**", + "virtual_idol|preferencesandsettings|mark_pref": "TEXT. Marketing communication preferences and opt-in status.", + "virtual_idol|preferencesandsettings|lang_set": "TEXT. Language settings for platform interface and content.", + "virtual_idol|preferencesandsettings|access_set": "TEXT. Accessibility settings and accommodations enabled.", + "virtual_idol|preferencesandsettings|dev_count": "SMALLINT. Number of devices registered for account access.", + "virtual_idol|preferencesandsettings|log_freq": "TEXT. Login frequency pattern and regularity classification.", + "virtual_idol|preferencesandsettings|last_log_dt": "TEXT. Date and time of most recent login to the platform.", + "virtual_idol|preferencesandsettings|conn_qual": "TEXT. Connection quality classification based on technical performance.", + "virtual_idol|supportandfeedback|support_reg": "BIGSERIAL. Auto-generated unique identifier for support and feedback record. PK = SupportAndFeedback(Support_Reg).", + "virtual_idol|supportandfeedback|support_interact_pivot": "TEXT. Reference to interaction record for support context. FK to Interactions.", + "virtual_idol|supportandfeedback|support_pref_pivot": "BIGINT. Reference to preferences record for support correlation. FK to PreferencesAndSettings.", + "virtual_idol|supportandfeedback|surv_part": "TEXT. Participation level in surveys and research studies.", + "virtual_idol|supportandfeedback|beta_part": "TEXT. Participation status in beta testing programs.", + "virtual_idol|retentionandinfluence|ret_reg": "BIGSERIAL. Auto-generated unique identifier for retention and influence record. PK = RetentionAndInfluence(Ret_Reg).", + "virtual_idol|retentionandinfluence|retain_engage_pivot": "BIGINT. Reference to engagement record for retention analysis. FK to Engagement.", + "virtual_idol|retentionandinfluence|retain_loyalty_pivot": "BIGINT. Reference to loyalty record for retention correlation. FK to LoyaltyAndAchievements.", + "virtual_idol|retentionandinfluence|churn_flag": "TEXT. Churn risk classification and prediction status.", + "virtual_idol|retentionandinfluence|ref_count": "SMALLINT. Number of referrals made by this fan to bring new users.", + "virtual_idol|additionalnotes|notes_reg": "BIGSERIAL. Auto-generated unique identifier for additional notes record. PK = AdditionalNotes(Notes_Reg).", + "virtual_idol|additionalnotes|notes_retain_pivot": "BIGINT. Reference to retention record for contextual notes. FK to RetentionAndInfluence.", + "virtual_idol|additionalnotes|note_info": "TEXT. Free-form notes and additional information about the fan. **NULL means no additional notes recorded or information not available.**", + "virtual_idol|fans|Demo_Profile": { + "column_meaning": "JSONB column. Demographic profile information including age, gender, location, and personal background", + "fields_meaning": { + "Age_Count": "SMALLINT. Fan's age in years for demographic analysis and content targeting.", + "Gender_Type": "TEXT. Fan's gender classification for demographic segmentation.", + "Loc_Nation": "TEXT. Fan's country of residence for geographic analysis.", + "Loc_Town": "TEXT. Fan's city or local area for detailed geographic segmentation.", + "Occu_Path": "TEXT. Fan's occupation or professional background. **NULL means occupation information not provided or privacy setting enabled.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).**", + "Interest_Set": "TEXT. 
Fan's declared interests and hobbies for content personalization. **NULL means interest profile incomplete or not disclosed.** **DATA NOISE: Values have random case variations (lowercase, uppercase, mixed case).**" + } + }, + "virtual_idol|interactions|Gift_Metrics": { + "column_meaning": "JSONB column. Comprehensive gift-giving behavior metrics including frequency, quantities, values, and preferences", + "fields_meaning": { + "Gift_Freq": "TEXT. Frequency classification of gift-giving behavior during interaction.", + "Gift_Tot": "BIGINT. Total number of virtual gifts sent during this interaction.", + "Gift_Val_Usd": "REAL. Total monetary value in USD of gifts sent during interaction.", + "Fav_Gift_Tag": "TEXT. Most frequently used or preferred gift type during interaction." + } + }, + "virtual_idol|interactions|Chat_Activity": { + "column_meaning": "JSONB column. Chat and messaging activity data including message counts, language, sentiment, and emoji usage", + "fields_meaning": { + "Chat_Msg": "BIGINT. Number of chat messages sent by the fan during this interaction.", + "Chat_Lang": "TEXT. Primary language used in chat messages during the interaction.", + "Msg_Tone": "TEXT. Sentiment or tone analysis result of messages sent during interaction.", + "Emoji_Count": "BIGINT. Total number of emojis used in messages during the interaction.", + "Stk_Count": "BIGINT. Total number of stickers used during the interaction session." + } + }, + "virtual_idol|socialcommunity|Network_Stats": { + "column_meaning": "JSONB column. Social network statistics including connections, followers, and community involvement", + "fields_meaning": { + "Soc_Net_Sz": "BIGINT. Size of fan's social network within the platform.", + "Foll_Count": "BIGINT. Number of other users following this fan.", + "Fing_Count": "BIGINT. Number of users this fan is following.", + "Friend_Con": "BIGINT. Number of mutual friend connections within the platform.", + "Grp_Memb": "SMALLINT. Number of groups or communities the fan belongs to.", + "Grp_Role": "TEXT. Role or position held within community groups. **NULL means no specific group role assigned or participating as regular member.**" + } + }, + "virtual_idol|socialcommunity|Content_Creation": { + "column_meaning": "JSONB column. User-generated content creation metrics including submissions, performances, and quality ratings", + "fields_meaning": { + "Art_Subs": "BIGINT. Number of fan art submissions made by the fan.", + "Fic_Subs": "BIGINT. Number of fan fiction submissions made by the fan.", + "Cover_Perf_Cnt": "BIGINT. Number of cover performances or tribute content created.", + "UGC_Val": "BIGINT. Total user-generated content value or count created by fan.", + "Cont_Qual_Rate": "REAL. Quality rating of content created by the fan.", + "Collab_Count": "SMALLINT. Number of collaborative projects participated in by the fan." + } + }, + "virtual_idol|eventsandclub|Evt_Participation": { + "column_meaning": "JSONB column. Event participation data including attendance counts for different event types and voting behavior", + "fields_meaning": { + "Off_Evt_Att": "SMALLINT. Number of offline events attended by the fan.", + "On_Evt_Att": "SMALLINT. Number of online events attended by the fan.", + "Meet_Att": "SMALLINT. Number of meet-and-greet events attended.", + "Conc_Att": "SMALLINT. Number of concert or performance events attended.", + "Vote_Part_Rate": "REAL. Participation rate in voting activities as percentage." 
+ } + }, + "virtual_idol|loyaltyandachievements|Achiev_Stats": { + "column_meaning": "JSONB column. Achievement and recognition statistics including counts of achievements, badges, titles, and rankings", + "fields_meaning": { + "Ach_Count": "BIGINT. Total number of achievements unlocked by the fan.", + "Badge_Coll": "BIGINT. Number of badges collected through various activities.", + "Spec_Titles": "BIGINT. Number of special titles earned by the fan.", + "Rank_Pos": "BIGINT. Current ranking position among all platform users.", + "Infl_Score": "REAL. Influence score measuring fan's impact on community and platform." + } + }, + "virtual_idol|preferencesandsettings|Usage_Metrics": { + "column_meaning": "JSONB column. Platform usage metrics including session data, time spent, and activity consistency measurements", + "fields_meaning": { + "Sess_Count": "BIGINT. Total number of login sessions since account creation.", + "Time_Hrs": "BIGINT. Total time spent online on the platform in hours.", + "Avg_Daily_Min": "SMALLINT. Average daily time spent on platform in minutes.", + "Peak_Sess": "SMALLINT. Maximum concurrent sessions recorded for this user.", + "Int_Consist": "REAL. Interaction consistency score measuring regular engagement patterns.", + "Plat_Stable": "REAL. Platform stability score based on session reliability and connection quality." + } + }, + "virtual_idol|supportandfeedback|Feedback_Data": { + "column_meaning": "JSONB column. Support and feedback engagement data including issue reports, submissions, and satisfaction ratings", + "fields_meaning": { + "Tech_Issue_Rpt": "SMALLINT. Number of technical issues reported by the fan.", + "Supp_Tix": "SMALLINT. Number of support tickets opened by the fan.", + "Fb_Subs": "SMALLINT. Number of feedback submissions made by the fan.", + "Feat_Req_Subs": "SMALLINT. Number of feature requests submitted by the fan.", + "Bug_Subs": "SMALLINT. Number of bug reports submitted by the fan.", + "Sat_Rate": "REAL. Overall satisfaction rating provided by the fan.", + "NPS_Val": "SMALLINT. Net Promoter Score value indicating likelihood to recommend platform." + } + }, + "virtual_idol|retentionandinfluence|Infl_Impact": { + "column_meaning": "JSONB column. User influence and viral impact metrics including content reach, viral content, and trending participation", + "fields_meaning": { + "Cont_Reach": "BIGINT. Reach and visibility metrics for fan-created content.", + "Viral_Cont": "SMALLINT. Number of viral content pieces created or shared.", + "Trend_Part": "SMALLINT. Number of trending topics or events participated in.", + "React_Count": "SMALLINT. Number of reactivation attempts or successful returns.", + "Hash_Use": "SMALLINT. Number of hashtags used for content discovery and engagement." 
+ } + } +} \ No newline at end of file diff --git a/virtual_idol/virtual_idol_kb.jsonl b/virtual_idol/virtual_idol_kb.jsonl new file mode 100644 index 0000000000000000000000000000000000000000..19a01c524fe8b09bd6d2239da1d54d252ee6390c --- /dev/null +++ b/virtual_idol/virtual_idol_kb.jsonl @@ -0,0 +1,64 @@ +{"id": 0, "knowledge": "Fan Monetization Index (FMI)", "description": "Measures a fan's direct financial contribution per minute of engagement, indicating their propensity for in-session spending.", "definition": "FMI = \\frac{\\text{Total Gift Value in USD}}{\\text{Session Duration in Minutes}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 1, "knowledge": "Content Consumption Index (CCI)", "description": "Quantifies a fan's level of passive engagement by measuring the hours of content they watch relative to their session time.", "definition": "CCI = \\frac{\\text{Total Watch Hours}}{\\text{Session Duration in Minutes}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 2, "knowledge": "Chattiness Score (CS)", "description": "Measures a fan's active communication during a session, reflecting their level of social interaction.", "definition": "CS = \\frac{\\text{Number of Chat Messages Sent}}{\\text{Session Duration in Minutes}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 3, "knowledge": "Fan Financial Value (FFV)", "description": "Estimates a fan's monthly financial contribution to the platform, serving as a proxy for their lifetime value.", "definition": "FFV = \\frac{\\text{Total Spending in USD}}{\\text{Membership Duration in Days} / 30.44}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 4, "knowledge": "Merchandise Affinity Score (MAS)", "description": "Calculates the proportion of a fan's total spending that is dedicated to merchandise, indicating their interest in physical goods.", "definition": "MAS = \\frac{\\text{Total Merchandise Spend in USD}}{\\text{Total Spending in USD}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 5, "knowledge": "Follower-to-Following Ratio (FFR)", "description": "Measures a fan's social influence by comparing the number of users who follow them to the number of users they follow.", "definition": "FFR = \\frac{\\text{Follower Count}}{\\text{Following Count}}, \\text{ where Following Count is > 0}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 6, "knowledge": "Content Creator Score (CCS)", "description": "Evaluates a fan's contribution as a content creator by combining the volume and quality of their user-generated content.", "definition": "CCS = (\u03A3 \\text{Content Submissions} + \\text{Collaborations}) \\times \\text{Average Content Quality Rating}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 7, "knowledge": "Community Influence Index (CII)", "description": "A composite score that measures a fan's overall influence within the community, based on their social network reach and recognized influence.", "definition": "CII = \\sqrt{\\text{Follower Count} \\times \\text{Content Reach}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 8, "knowledge": "Achievement Density (AD)", "description": "Measures the rate at which a fan earns achievements, indicating their level of engagement with platform goals.", "definition": "AD = \\frac{\\text{Total Achievements Unlocked}}{\\text{Membership Duration in Days}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 9, 
"knowledge": "Support Load Score (SLS)", "description": "Quantifies the amount of support resources a fan utilizes, based on their submission of tickets and issue reports.", "definition": "SLS = \\text{Support Tickets} + \\text{Bug Reports} + \\text{Technical Issue Reports}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 10, "knowledge": "Viral Potential Score (VPS)", "description": "Assesses a fan's ability to create and participate in viral trends.", "definition": "VPS = \\text{Viral Content Pieces} \\times (1 + \\text{Trend Participation Count})", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 11, "knowledge": "Event Participation Score (EPS)", "description": "A weighted score that measures a fan's engagement with platform events, prioritizing in-person attendance.", "definition": "EPS = (\\text{Online Events} \\times 1) + (\\text{Offline Events} \\times 1.5) + (\\text{Meet-and-Greets} \\times 2)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 12, "knowledge": "Platform Stickiness Score (PSS)", "description": "Measures a fan's loyalty and connection to the platform based on their interaction consistency and session stability.", "definition": "PSS = \\frac{\\text{Interaction Consistency Score} + \\text{Platform Stability Score}}{2}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 13, "knowledge": "Loyalty Progression Rate (LPR)", "description": "Measures the efficiency of a fan's progression through loyalty tiers based on points accumulated.", "definition": "LPR = \\frac{\\text{Total Loyalty Points}}{\\text{Current Tier Level}}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 14, "knowledge": "Social Capital Score (SCS)", "description": "Represents the size and strength of a fan's social network on the platform.", "definition": "SCS = \\text{Social Network Size} + (\\text{Friend Connections} \\times 1.5)", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 15, "knowledge": "Collection Rate (CR)", "description": "Measures the total number of digital and physical collectible items a fan owns.", "definition": "CR = \\text{Digital Items Owned} + \\text{Physical Items Owned}", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 16, "knowledge": "Fan Age Category", "description": "Categorizes fans into distinct age groups for demographic analysis.", "definition": "Fan age is categorized as: Teen (<=18), Young Adult (19-29), Adult (30-49), Senior (>=50).", "type": "value_illustration", "children_knowledge": -1} +{"id": 17, "knowledge": "NPS Score Tiers", "description": "Classifies fans into categories based on their Net Promoter Score, indicating their likelihood to recommend the platform.", "definition": "Net Promoter Score (NPS) is categorized as: Promoters (score 9-10), Passives (score 7-8), and Detractors (score 0-6).", "type": "value_illustration", "children_knowledge": -1} +{"id": 18, "knowledge": "Premium Member", "description": "Identifies fans who have a paid membership subscription.", "definition": "A fan is considered a Premium Member if their membership type is not 'Free'. 
This includes tiers like 'Basic', 'Elite', etc.", "type": "value_illustration", "children_knowledge": -1} +{"id": 19, "knowledge": "Whale Fan", "description": "A fan who demonstrates extremely high in-session spending, representing a top-tier financial contributor.", "definition": "A fan is classified as a Whale Fan if their Fan Monetization Index (FMI) is greater than 20.", "type": "domain_knowledge", "children_knowledge": [0]} +{"id": 20, "knowledge": "Engaged Lurker", "description": "A fan who actively consumes a large amount of content but participates minimally in social chat.", "definition": "A fan is an Engaged Lurker if their Content Consumption Index (CCI) is greater than 0.5 and their Chattiness Score (CS) is less than 0.5.", "type": "domain_knowledge", "children_knowledge": [1, 2]} +{"id": 21, "knowledge": "High-Value Member", "description": "A fan with a paid subscription who consistently contributes significant financial value to the platform on a monthly basis.", "definition": "A fan is a High-Value Member if they are a Premium Member and their Fan Financial Value (FFV) is greater than $50 per month.", "type": "domain_knowledge", "children_knowledge": [18, 3]} +{"id": 22, "knowledge": "Collector Fan", "description": "A fan who is highly focused on acquiring platform collectibles, both digital and physical.", "definition": "A fan is a Collector Fan if their Collection Rate (CR) is greater than 50 and their collection completion rate is above 75%.", "type": "domain_knowledge", "children_knowledge": [15]} +{"id": 23, "knowledge": "Community Influencer", "description": "A fan with significant social capital, characterized by a large and engaged follower base.", "definition": "A fan is a Community Influencer if their Follower-to-Following Ratio (FFR) is greater than 2.0 and their Community Influence Index (CII) is greater than 10000.", "type": "domain_knowledge", "children_knowledge": [5, 7]} +{"id": 24, "knowledge": "Super Creator", "description": "A fan who is a prolific and high-quality content creator, contributing significantly to the platform's user-generated content.", "definition": "A fan is a Super Creator if their Content Creator Score (CCS) is greater than 200.", "type": "domain_knowledge", "children_knowledge": [6]} +{"id": 25, "knowledge": "At-Risk Fan", "description": "A fan who shows signs of disengagement or has been flagged as having a high probability of churning.", "definition": "A fan is considered At-Risk if their Platform Stickiness Score (PSS) is below 0.4 or their churn flag is 'High'.", "type": "domain_knowledge", "children_knowledge": [12]} +{"id": 26, "knowledge": "Loyal Achiever", "description": "A fan who is deeply engaged with the platform's gamification systems, rapidly earning achievements and progressing through loyalty tiers.", "definition": "A fan is a Loyal Achiever if their Achievement Density (AD) is greater than 0.2 and their Loyalty Progression Rate (LPR) is greater than 500.", "type": "domain_knowledge", "children_knowledge": [8, 13]} +{"id": 27, "knowledge": "High-Maintenance Fan", "description": "A fan who frequently requires customer support, indicating potential issues with their user experience or platform stability.", "definition": "A fan is classified as High-Maintenance if their Support Load Score (SLS) is greater than 10.", "type": "domain_knowledge", "children_knowledge": [9]} +{"id": 28, "knowledge": "Platform Promoter", "description": "A fan who is highly likely to recommend the platform to others, as indicated by a top-tier NPS 
score.", "definition": "A fan is a Platform Promoter if they are classified in the 'Promoters' category based on NPS Score Tiers.", "type": "domain_knowledge", "children_knowledge": [17]} +{"id": 29, "knowledge": "Social Butterfly", "description": "A fan who is exceptionally well-connected and active within the platform's social communities.", "definition": "A fan is a Social Butterfly if their Social Capital Score (SCS) is greater than 1000 and they are a member of more than 5 community groups.", "type": "domain_knowledge", "children_knowledge": [14]} +{"id": 30, "knowledge": "Event Enthusiast", "description": "A fan who actively and frequently participates in a wide range of platform events, especially high-value ones like meet-and-greets.", "definition": "A fan is an Event Enthusiast if their Event Participation Score (EPS) is greater than 25.", "type": "domain_knowledge", "children_knowledge": [11]} +{"id": 31, "knowledge": "Professional Fan", "description": "An adult fan who identifies with a professional occupation, representing a key demographic segment.", "definition": "A fan is a Professional Fan if they are in the 'Adult' or 'Senior' Fan Age Category and their occupation path is 'Professional'.", "type": "domain_knowledge", "children_knowledge": [16]} +{"id": 32, "knowledge": "Idol Superfan", "description": "An elite fan who combines high spending with high event participation, representing the most dedicated segment of the user base.", "definition": "A fan is an Idol Superfan if they are classified as both a Whale Fan and an Event Enthusiast.", "type": "domain_knowledge", "children_knowledge": [19, 30]} +{"id": 33, "knowledge": "Rising Star Influencer", "description": "A fan with a demonstrated ability to create viral content who is on the cusp of becoming a major community influencer.", "definition": "A fan is a Rising Star Influencer if their Viral Potential Score (VPS) is greater than 50 and their Community Influence Index (CII) is between 1000 and 10000.", "type": "domain_knowledge", "children_knowledge": [10, 7]} +{"id": 34, "knowledge": "Power User", "description": "A highly engaged premium member who uses the platform extensively across multiple devices.", "definition": "A fan is a Power User if they are a Premium Member, have a total session count greater than 400, and use more than 2 devices.", "type": "domain_knowledge", "children_knowledge": [18]} +{"id": 35, "knowledge": "Fan Status Tiers", "description": "Illustrates the meaning of different fan account statuses on the platform.", "definition": "Fan status indicates their platform standing. 'Active' means regular recent activity. 'Inactive' means no recent activity. 'VIP' is a special high-value status granted based on contribution and spending.", "type": "value_illustration", "children_knowledge": -1} +{"id": 36, "knowledge": "Interaction Tone", "description": "Explains the sentiment classification of a fan's chat messages.", "definition": "Indicates the overall sentiment of a fan's chat messages. Common values are 'Positive', 'Negative', 'Neutral', or 'Mixed'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 37, "knowledge": "Community Contribution Level", "description": "Defines the different levels of a fan's positive impact on the community.", "definition": "Measures a fan's positive impact on the community. 
Levels include 'High', 'Medium', and 'Low', often associated with group roles and content creation.", "type": "value_illustration", "children_knowledge": -1} +{"id": 38, "knowledge": "Reputation Level", "description": "Describes the hierarchy of fan reputation and trustworthiness within the community.", "definition": "Represents a fan's standing and trustworthiness in the community. Common levels are 'Respected', 'Established', and 'Elite'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 39, "knowledge": "Gift Frequency", "description": "Clarifies the categories used to classify a fan's gift-giving behavior.", "definition": "Classifies how often a fan gives virtual gifts during interactions. Categories include 'Often', 'Frequently', 'Occasionally', and 'Rarely'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 40, "knowledge": "Idol Type Classification", "description": "Explains the different types of virtual idols on the platform.", "definition": "Describes the nature of the virtual idol. '2D' refers to traditionally animated characters. '3D' refers to computer-generated models. 'AI Generated' refers to idols created and/or operated by artificial intelligence.", "type": "value_illustration", "children_knowledge": -1} +{"id": 41, "knowledge": "Churn Risk Flag", "description": "Defines the predictive flags for a fan's likelihood of leaving the platform.", "definition": "A predictive classification of a fan's likelihood to stop using the platform. 'High' indicates a strong probability of churning soon. 'Medium' indicates a moderate risk. 'Low' indicates a loyal and stable user.", "type": "value_illustration", "children_knowledge": -1} +{"id": 42, "knowledge": "Spending Frequency", "description": "Illustrates the different patterns of fan spending behavior.", "definition": "Describes a fan's purchasing behavior pattern. 'Weekly' or 'Daily' indicates consistent spending. 'Occasional' indicates sporadic purchasing.", "type": "value_illustration", "children_knowledge": -1} +{"id": 43, "knowledge": "Community Role Hierarchy", "description": "Defines the various roles a fan can hold within a community group.", "definition": "Defines a fan's position within a community group. 'Member' is a standard participant. 'Moderator' helps enforce rules. 'Leader' manages the group.", "type": "value_illustration", "children_knowledge": -1} +{"id": 44, "knowledge": "Loyalty Reward Tiers", "description": "Explains the progression of tiers within the fan loyalty program.", "definition": "Represents the levels in the loyalty program, such as 'Bronze', 'Silver', 'Gold', and 'Platinum', each unlocking different rewards.", "type": "value_illustration", "children_knowledge": -1} +{"id": 45, "knowledge": "Moderation Status", "description": "Defines the account standings related to compliance with platform rules.", "definition": "Reflects a fan's account standing. 'Good Standing' means no issues. 'Warning' indicates a minor violation. 'Restricted' indicates temporary limitations due to a serious violation.", "type": "value_illustration", "children_knowledge": -1} +{"id": 46, "knowledge": "Privacy Setting Levels", "description": "Explains the different profile privacy options available to fans.", "definition": "Indicates the level of information a fan chooses to share. 'Public' allows anyone to see their profile. 
'Private' restricts profile visibility to approved connections.", "type": "value_illustration", "children_knowledge": -1} +{"id": 47, "knowledge": "Trade Activity Level", "description": "Classifies a fan's engagement in the trading of collectibles.", "definition": "Classifies a fan's engagement in the trading of digital or physical collectibles. 'High', 'Medium', and 'Low' levels reflect the frequency and volume of trades.", "type": "value_illustration", "children_knowledge": -1} +{"id": 48, "knowledge": "Language Preference Setting", "description": "Describes the language settings a fan can choose for content consumption.", "definition": "Defines the fan's preference for content language. 'Original' means content is shown in the idol's primary language. 'Translated' means the fan prefers translated versions. 'Both' indicates a willingness to consume content in either form.", "type": "value_illustration", "children_knowledge": -1} +{"id": 49, "knowledge": "Occupation Path", "description": "Categorizes the professional background of fans.", "definition": "Represents the declared occupation category of a fan, such as 'Professional', 'Student', 'Creative', or 'Technical'.", "type": "value_illustration", "children_knowledge": -1} +{"id": 50, "knowledge": "Fan Segments", "description": "Defines four distinct fan categories by segmenting the user base into a 2x2 grid based on their relative financial value and support load.", "definition": "A classification of fans based on their rank for Fan Financial Value (FFV) and Support Load Score (SLS). Fans are split into top/bottom 50% for each metric to create four quadrants: 'Ideal Fans', 'At-Risk VIPs', 'Quiet Majority', and 'Resource Drain'.", "type": "domain_knowledge", "children_knowledge": [3, 9]} +{"id": 51, "knowledge": "Data Archival Process", "description": "A routine data management task that involves moving old or inactive data from primary operational tables to a secondary storage table to improve performance, followed by the deletion of the original data.", "definition": "A transactional process that first copies a specific subset of data (defined by age and user status) from a source table to an archive table, and then deletes that same subset of data from the source table.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 52, "knowledge": "Composite Index", "description": "A database index that includes multiple columns, allowing for faster data retrieval when queries filter on all or the leading columns of the index.", "definition": "A database object created using CREATE INDEX on two or more columns of a table.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 53, "knowledge": "Upsert Operation", "description": "A database operation that will INSERT a new row, or UPDATE an existing row if a conflict (like a duplicate key) occurs.", "definition": "An INSERT statement that includes an ON CONFLICT DO UPDATE clause to handle potential primary key violations by updating the existing record instead of failing.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 54, "knowledge": "Data Integrity Constraint", "description": "A rule enforced on a database column to ensure the accuracy and consistency of data. 
It can limit the type, format, or range of values.", "definition": "A database rule, often implemented with a CHECK clause in a CREATE TABLE or ALTER TABLE statement, that validates data upon insertion or update.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 55, "knowledge": "Monetary Domain", "description": "A custom, reusable data type created to enforce specific rules for columns that store monetary values, such as format and non-negativity.", "definition": "A custom data type created using CREATE DOMAIN that is based on a numeric type and includes a CHECK constraint to ensure the value is non-negative.", "type": "domain_knowledge", "children_knowledge": []} +{"id": 56, "knowledge": "Peak Monetization Index", "description": "The highest Fan Monetization Index (FMI) a fan has achieved in any single interaction session.", "definition": "The maximum value of the Fan Monetization Index (FMI) calculated across all of a fan's interaction sessions.", "type": "calculation_knowledge", "children_knowledge": [0]} +{"id": 57, "knowledge": "Conversion Funnel Analysis", "description": "Defines the key metrics for analyzing the fan journey to premium membership, including the critical estimation for the subscription date.", "definition": "This analysis requires calculating Time to Conversion and Interactions Before Conversion. As the subscription date is not explicitly stored, it must be estimated using the formula: (current date - membership duration in days).", "type": "domain_knowledge", "children_knowledge": [18]} +{"id": 58, "knowledge": "Lone Wolf", "description": "A fan who is highly engaged with platform goals (achievements) but has low social connectivity.", "definition": "A fan is a 'Lone Wolf' if their Achievement Density (AD) is greater than 0.5 AND their Social Capital Score (SCS) is less than 100.", "type": "domain_knowledge", "children_knowledge": [8, 14]} +{"id": 59, "knowledge": "Cumulative Spending", "description": "The running total of a fan's spending up to a specific point in time.", "definition": "A value calculated using a window function (SUM OVER PARTITION BY fan ORDER BY date) on time-ordered interaction data, specifically summing the Total Gift Value in USD from interaction gifts.", "type": "calculation_knowledge", "children_knowledge": -1} +{"id": 60, "knowledge": "Spending Velocity Cohort Analysis", "description": "An analysis that measures the average Cumulative Spending for a defined group of fans at specific time milestones after their registration.", "definition": "This analysis identifies a fan cohort, calculates each fan's daily Cumulative Spending trajectory, and then computes the average of these daily cumulative totals for all days up to each milestone (e.g., 7, 30, and 90 days).", "type": "domain_knowledge", "children_knowledge": [59]} +{"id": 61, "knowledge": "Average Time to First Gift", "description": "Measures the average number of days between a fan's registration and their first gift-giving interaction.", "definition": "For a given fan cohort, this is calculated by first finding the minimum time difference between the registration date and the date of any interaction with a gift value greater than zero for each fan, and then averaging these minimum time differences across the entire cohort.", "type": "domain_knowledge", "children_knowledge": -1} +{"id": 62, "knowledge": "Ripple Effect Analysis", "description": "Measures the impact of a specific user segment (e.g., influencers) on the behavior of other users within the same interaction 
session.", "definition": "This analysis identifies all interactions involving the target user segment. For each such interaction, it defines a 'session' based on a Session Time Window. It then aggregates the behaviors (e.g., chat sentiment) of all non-target users within that session to measure the influence.", "type": "domain_knowledge", "children_knowledge": [23, 36, 63]} +{"id": 63, "knowledge": "Session Time Window", "description": "Defines the time duration used to group related interactions into a single session for analysis.", "definition": "A session consists of all interactions with the same idol that occur within a 5-minute window (before and after) of a key interaction.", "type": "calculation_knowledge", "children_knowledge": -1} diff --git a/virtual_idol/virtual_idol_schema.txt b/virtual_idol/virtual_idol_schema.txt new file mode 100644 index 0000000000000000000000000000000000000000..3c2e3d6699c3e0305e323b6474f591eb26b3028d --- /dev/null +++ b/virtual_idol/virtual_idol_schema.txt @@ -0,0 +1,336 @@ +CREATE TABLE "fans" ( +user_registry text NOT NULL, +nick_label text NULL, +reg_moment text NULL, +tier_step smallint NULL, +pts_val bigint NULL, +status_tag text NULL, +lang_pref text NULL, +demo_profile jsonb NULL, + PRIMARY KEY (user_registry) +); + +First 3 rows: +user_registry nick_label reg_moment tier_step pts_val status_tag lang_pref demo_profile +--------------- ------------ ------------ ----------- --------- ------------ ----------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +FAN55719 brownandrew 2024-07-22 52 93976 Inactive multiple {'Loc_Town': 'Patelbury', 'Age_Count': 55, 'Occu_Path': 'Professional', 'Loc_Nation': 'United States Minor Outlying Islands', 'Gender_Type': 'Other', 'Interest_Set': 'technology'} +FAN83471 ymoore 2023-07-19 3 53540 Active CHINESE {'Loc_Town': 'Juliefort', 'Age_Count': 50, 'Occu_Path': None, 'Loc_Nation': 'Niue', 'Gender_Type': 'Female', 'Interest_Set': None} +FAN75581 lauren67 2024-01-23 41 72104 VIP Korean {'Loc_Town': 'Virginiabury', 'Age_Count': 38, 'Occu_Path': 'professional', 'Loc_Nation': 'United States Minor Outlying Islands', 'Gender_Type': 'Male', 'Interest_Set': 'Anime'} +... 
+ + +CREATE TABLE "socialcommunity" ( +social_reg bigint NOT NULL DEFAULT nextval('socialcommunity_social_reg_seq'::regclass), +social_engage_pivot bigint NULL, +social_commerce_pivot bigint NULL, +comm_contrib text NULL, +cont_create_stat text NULL, +network_stats jsonb NULL, +content_creation jsonb NULL, +content_creation_rate text NULL, +community_growth_rate text NULL, +influence_rate text NULL, + PRIMARY KEY (social_reg), + FOREIGN KEY (social_commerce_pivot) REFERENCES commerceandcollection(commerce_reg), + FOREIGN KEY (social_engage_pivot) REFERENCES engagement(engage_reg) +); + +First 3 rows: + social_reg social_engage_pivot social_commerce_pivot comm_contrib cont_create_stat network_stats content_creation content_creation_rate community_growth_rate influence_rate +------------ --------------------- ----------------------- -------------- ------------------ ---------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------- ----------------------- ----------------------- ------------------ + 1 6 3 High active {'Grp_Memb': 15, 'Grp_Role': 'Moderator', 'Fing_Count': 626, 'Foll_Count': 3668, 'Friend_Con': 406, 'Soc_Net_Sz': 568} {'UGC_Val': 0, 'Art_Subs': 35, 'Fic_Subs': 4, 'Collab_Count': 0, 'Cont_Qual_Rate': 0.8, 'Cover_Perf_Cnt': 2} 5 posts/week 122 followers/month 5 connections/week + 2 7 4 MEDIUM Active {'Grp_Memb': 4, 'Grp_Role': 'Leader', 'Fing_Count': 779, 'Foll_Count': 1206, 'Friend_Con': 223, 'Soc_Net_Sz': 972} {'UGC_Val': 73, 'Art_Subs': 20, 'Fic_Subs': 3, 'Collab_Count': 16, 'Cont_Qual_Rate': 0.4, 'Cover_Perf_Cnt': 23} 2 posts/week 40 followers/month 9 connections/week + 3 9 5 Medium {'Grp_Memb': 2, 'Grp_Role': 'Leader', 'Fing_Count': 17, 'Foll_Count': 3524, 'Friend_Con': 302, 'Soc_Net_Sz': 112} {'UGC_Val': 9, 'Art_Subs': 39, 'Fic_Subs': 7, 'Collab_Count': 11, 'Cont_Qual_Rate': 1.2, 'Cover_Perf_Cnt': 4} 5 posts/week 117 followers/month 1 connections/week +... + + +CREATE TABLE "supportandfeedback" ( +support_reg bigint NOT NULL DEFAULT nextval('supportandfeedback_support_reg_seq'::regclass), +support_interact_pivot text NULL, +support_pref_pivot bigint NULL, +surv_part text NULL, +beta_part text NULL, +feedback_data jsonb NULL, + PRIMARY KEY (support_reg), + FOREIGN KEY (support_interact_pivot) REFERENCES interactions(activity_reg), + FOREIGN KEY (support_pref_pivot) REFERENCES preferencesandsettings(pref_reg) +); + +First 3 rows: + support_reg support_interact_pivot support_pref_pivot surv_part beta_part feedback_data +------------- ------------------------ -------------------- ----------- ----------- ----------------------------------------------------------------------------------------------------------------------- + 1 FI814576 1 Active Yes {'Fb_Subs': 30, 'NPS_Val': 6, 'Bug_Subs': 2, 'Sat_Rate': 2, 'Supp_Tix': 9, 'Feat_Req_Subs': 8, 'Tech_Issue_Rpt': 2} + 2 FI648876 3 Active Yes {'Fb_Subs': 2, 'NPS_Val': 10, 'Bug_Subs': 0, 'Sat_Rate': 1.2, 'Supp_Tix': 5, 'Feat_Req_Subs': 19, 'Tech_Issue_Rpt': 14} + 3 FI817373 4 Occasional Yes {'Fb_Subs': 2, 'NPS_Val': 8, 'Bug_Subs': 12, 'Sat_Rate': 1.7, 'Supp_Tix': 6, 'Feat_Req_Subs': 6, 'Tech_Issue_Rpt': 5} +... 
+ + +CREATE TABLE "eventsandclub" ( +events_reg bigint NOT NULL DEFAULT nextval('eventsandclub_events_reg_seq'::regclass), +events_social_pivot bigint NULL, +events_member_pivot bigint NULL, +evt_part text NULL, +camp_part text NULL, +club_stat text NULL, +club_j_date text NULL, +club_contrib text NULL, +evt_participation jsonb NULL, + PRIMARY KEY (events_reg), + FOREIGN KEY (events_member_pivot) REFERENCES membershipandspending(member_reg), + FOREIGN KEY (events_social_pivot) REFERENCES socialcommunity(social_reg) +); + +First 3 rows: + events_reg events_social_pivot events_member_pivot evt_part camp_part club_stat club_j_date club_contrib evt_participation +------------ --------------------- --------------------- ---------- ----------- ----------- ------------- -------------- -------------------------------------------------------------------------------------------- + 1 2 8 rare Selective Elite 2023/03/04 Outstanding {'Conc_Att': 23, 'Meet_Att': 8, 'On_Evt_Att': 17, 'Off_Evt_Att': 0, 'Vote_Part_Rate': 39.1} + 2 3 10 Non-member 2023/09/20 Low {'Conc_Att': 20, 'Meet_Att': 1, 'On_Evt_Att': 52, 'Off_Evt_Att': 5, 'Vote_Part_Rate': 51.1} + 3 4 11 Active Elite 2023/09/21 Outstanding {'Conc_Att': 24, 'Meet_Att': 0, 'On_Evt_Att': 98, 'Off_Evt_Att': 11, 'Vote_Part_Rate': 91.1} +... + + +CREATE TABLE "virtualidols" ( +entity_reg text NOT NULL, +name_tag text NULL, +kind_tag text NULL, +deb_date text NULL, +assoc_group text NULL, +genre_tag text NULL, +prim_lang text NULL, + PRIMARY KEY (entity_reg) +); + +First 3 rows: +entity_reg name_tag kind_tag deb_date assoc_group genre_tag prim_lang +------------ ---------------- ------------ ---------- ---------------------------- ----------- ----------- +VI1517 Brandon Buck 2D 01/02/2025 Archer, Martinez and Jimenez Electronic English +VI8705 Brittney Freeman AI Generated 20/05/2022 Carpenter and Sons Electronic Chinese +VI6535 Anita Snyder 3D 24/04/2020 Tran, Aguirre and Jenkins Dance English +... + + +CREATE TABLE "retentionandinfluence" ( +ret_reg bigint NOT NULL DEFAULT nextval('retentionandinfluence_ret_reg_seq'::regclass), +retain_engage_pivot bigint NULL, +retain_loyalty_pivot bigint NULL, +churn_flag text NULL, +ref_count smallint NULL, +infl_impact jsonb NULL, + PRIMARY KEY (ret_reg), + FOREIGN KEY (retain_engage_pivot) REFERENCES engagement(engage_reg), + FOREIGN KEY (retain_loyalty_pivot) REFERENCES loyaltyandachievements(loyalty_reg) +); + +First 3 rows: + ret_reg retain_engage_pivot retain_loyalty_pivot churn_flag ref_count infl_impact +--------- --------------------- ---------------------- ------------ ----------- ------------------------------------------------------------------------------------------- + 1 12 3 High 16 {'Hash_Use': 265, 'Cont_Reach': 90332, 'Trend_Part': 2, 'Viral_Cont': 4, 'React_Count': 4} + 2 16 4 Medium 11 {'Hash_Use': 724, 'Cont_Reach': 94612, 'Trend_Part': 49, 'Viral_Cont': 4, 'React_Count': 1} + 3 21 5 High 3 {'Hash_Use': 38, 'Cont_Reach': 53260, 'Trend_Part': 3, 'Viral_Cont': 10, 'React_Count': 3} +... 
+ + +CREATE TABLE "interactions" ( +activity_reg text NOT NULL, +time_mark timestamp without time zone NULL, +interact_fan_pivot text NULL, +interact_idol_pivot text NULL, +act_kind text NULL, +act_plat text NULL, +plat_used text NULL, +dev_type text NULL, +app_ver text NULL, +sess_dur_min smallint NULL, +live_att smallint NULL, +watch_hrs real NULL, +gift_metrics jsonb NULL, +chat_activity jsonb NULL, +engagement_rate text NULL, +gift_rate text NULL, +message_rate text NULL, +content_consumption_rate text NULL, + PRIMARY KEY (activity_reg), + FOREIGN KEY (interact_fan_pivot) REFERENCES fans(user_registry), + FOREIGN KEY (interact_idol_pivot) REFERENCES virtualidols(entity_reg) +); + +First 3 rows: +activity_reg time_mark interact_fan_pivot interact_idol_pivot act_kind act_plat plat_used dev_type app_ver sess_dur_min live_att watch_hrs gift_metrics chat_activity engagement_rate gift_rate message_rate content_consumption_rate +-------------- -------------------------- -------------------- --------------------- ---------- ------------ ----------- ---------- --------- -------------- ---------- ----------- ------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------- ----------------- ----------- -------------- ------------------------------- +FI537855 2024-12-10 10:48:19.557855 FAN55719 VI1517 vote YouTube tablet WINDOWS 3.4.0 44 53 359.1 {'Gift_Tot': 373, 'Gift_Freq': 'Often', 'Fav_Gift_Tag': 'Limited', 'Gift_Val_Usd': 663.63} {'Chat_Msg': 905, 'Msg_Tone': 'Negative', 'Chat_Lang': 'Mixed', 'Stk_Count': 29, 'Emoji_Count': 323} 106 actions/hr 4 gifts/hr 132 msgs/hr 489.68182650479406 mins/session +FI528045 2024-11-22 11:17:12.560133 FAN75581 VI8705 COMMENT Twitter mobile iOS 4.6.7 79 21 169.9 {'Gift_Tot': 867, 'Gift_Freq': 'Often', 'Fav_Gift_Tag': 'Custom', 'Gift_Val_Usd': 4710.36} {'Chat_Msg': 539, 'Msg_Tone': 'Negative', 'Chat_Lang': 'Translation', 'Stk_Count': 104, 'Emoji_Count': 461} 42 actions/hr 7 gifts/hr 237 msgs/hr 129.03797004796283 mins/session +FI137526 2024-08-19 10:13:29.560133 FAN27370 VI6535 vote OFFICIAL APP Console android 3.9.2 14 73 388.1 {'Gift_Tot': 494, 'Gift_Freq': 'Rarely', 'Fav_Gift_Tag': 'Custom', 'Gift_Val_Usd': 3270.78} {'Chat_Msg': 424, 'Msg_Tone': 'Positive', 'Chat_Lang': 'Mixed', 'Stk_Count': 122, 'Emoji_Count': 306} 146 actions/hr 1 gifts/hr 42 msgs/hr 1663.2857404436384 mins/session +... 
+ + +CREATE TABLE "loyaltyandachievements" ( +loyalty_reg bigint NOT NULL DEFAULT nextval('loyaltyandachievements_loyalty_reg_seq'::regclass), +loyalty_events_pivot bigint NULL, +loyalty_engage_pivot bigint NULL, +loy_pts bigint NULL, +rew_tier text NULL, +repute_lv text NULL, +trust_val real NULL, +achiev_stats jsonb NULL, + PRIMARY KEY (loyalty_reg), + FOREIGN KEY (loyalty_engage_pivot) REFERENCES engagement(engage_reg), + FOREIGN KEY (loyalty_events_pivot) REFERENCES eventsandclub(events_reg) +); + +First 3 rows: + loyalty_reg loyalty_events_pivot loyalty_engage_pivot loy_pts rew_tier repute_lv trust_val achiev_stats +------------- ---------------------- ---------------------- --------- ---------- ----------- ----------- ------------------------------------------------------------------------------------------ + 1 1 7 7209 Bronze Respected 94.3 {'Rank_Pos': 1494, 'Ach_Count': 83, 'Badge_Coll': 35, 'Infl_Score': 1.7, 'Spec_Titles': 4} + 2 4 11 10961 Gold Established 29.5 {'Rank_Pos': 618, 'Ach_Count': 78, 'Badge_Coll': 44, 'Infl_Score': 94.9, 'Spec_Titles': 3} + 3 5 12 6490 Platinum Elite 79.5 {'Rank_Pos': 2856, 'Ach_Count': 87, 'Badge_Coll': 3, 'Infl_Score': 24.5, 'Spec_Titles': 4} +... + + +CREATE TABLE "additionalnotes" ( +notes_reg bigint NOT NULL DEFAULT nextval('additionalnotes_notes_reg_seq'::regclass), +notes_retain_pivot bigint NULL, +note_info text NULL, + PRIMARY KEY (notes_reg), + FOREIGN KEY (notes_retain_pivot) REFERENCES retentionandinfluence(ret_reg) +); + +First 3 rows: + notes_reg notes_retain_pivot note_info +----------- -------------------- ----------------------- + 1 1 Body better piece drug. + 2 2 + 3 3 +... + + +CREATE TABLE "membershipandspending" ( +member_reg bigint NOT NULL DEFAULT nextval('membershipandspending_member_reg_seq'::regclass), +member_fan_pivot text NULL, +memb_kind text NULL, +memb_days smallint NULL, +spend_usd real NULL, +spend_freq text NULL, +pay_method text NULL, +spend_rate text NULL, +value_per_day text NULL, +cost_efficiency text NULL, + PRIMARY KEY (member_reg), + FOREIGN KEY (member_fan_pivot) REFERENCES fans(user_registry) +); + +First 3 rows: + member_reg member_fan_pivot memb_kind memb_days spend_usd spend_freq pay_method spend_rate value_per_day cost_efficiency +------------ ------------------ ----------- ----------- ----------- ------------ -------------- -------------------------- ---------------------------- ----------------- + 132 FAN39666 Free 0 7221.69 Occasional Mobile Payment 240.722998046875 USD/month 100 value/USD + 1 FAN55719 Free 194 6265.46 Occasional Credit Card 32.296185365657216 USD/day 208.84866536458333 USD/month 81 value/USD + 2 FAN75581 Basic 798 9993.63 Weekly Mobile Payment 12.52334571781015 USD/day 333.12099609375 USD/month 21 value/USD +... 
+ + +CREATE TABLE "preferencesandsettings" ( +pref_reg bigint NOT NULL DEFAULT nextval('preferencesandsettings_pref_reg_seq'::regclass), +preferences_member_pivot bigint NULL, +preferences_social_pivot bigint NULL, +priv_set text NULL, +ds_consent text NULL, +notif_pref text NULL, +comm_pref text NULL, +mark_pref text NULL, +lang_set text NULL, +access_set text NULL, +dev_count smallint NULL, +log_freq text NULL, +last_log_dt text NULL, +conn_qual text NULL, +usage_metrics jsonb NULL, + PRIMARY KEY (pref_reg), + FOREIGN KEY (preferences_member_pivot) REFERENCES membershipandspending(member_reg), + FOREIGN KEY (preferences_social_pivot) REFERENCES socialcommunity(social_reg) +); + +First 3 rows: + pref_reg preferences_member_pivot preferences_social_pivot priv_set ds_consent notif_pref comm_pref mark_pref lang_set access_set dev_count log_freq last_log_dt conn_qual usage_metrics +---------- -------------------------- -------------------------- ---------- ------------ ------------ ----------- ----------- ---------- ------------ ----------- ---------- ------------- ----------- -------------------------------------------------------------------------------------------------------------------- + 1 14 7 Public Minimal All Push Opted Out Translated Standard 1 Rare 2025.01.20 Excellent {'Time_Hrs': 1533, 'Peak_Sess': 3, 'Sess_Count': 164, 'Int_Consist': 0.82, 'Plat_Stable': 0.4, 'Avg_Daily_Min': 289} + 2 17 8 Private Minimal Push Opted In Original Standard 4 Weekly 2025.02.04 Fair {'Time_Hrs': 4552, 'Peak_Sess': 1, 'Sess_Count': 297, 'Int_Consist': 0.49, 'Plat_Stable': 0.92, 'Avg_Daily_Min': 61} + 3 19 10 Public Minimal All Opted Out Auto Standard 3 Daily 2025.02.14 Fair {'Time_Hrs': 2775, 'Peak_Sess': 5, 'Sess_Count': 477, 'Int_Consist': 0.43, 'Plat_Stable': 0.17, 'Avg_Daily_Min': 39} +... + + +CREATE TABLE "engagement" ( +engage_reg bigint NOT NULL DEFAULT nextval('engagement_engage_reg_seq'::regclass), +engage_activity_pivot text NULL, +engage_member_pivot bigint NULL, +soc_int_score real NULL, +eng_rate real NULL, +act_freq text NULL, +peak_time text NULL, +act_days_wk smallint NULL, +avg_sess_count smallint NULL, +cont_pref text NULL, +lang_pref text NULL, +trans_use text NULL, +interaction_efficiency text NULL, +session_productivity text NULL, + PRIMARY KEY (engage_reg), + FOREIGN KEY (engage_activity_pivot) REFERENCES interactions(activity_reg), + FOREIGN KEY (engage_member_pivot) REFERENCES membershipandspending(member_reg) +); + +First 3 rows: + engage_reg engage_activity_pivot engage_member_pivot soc_int_score eng_rate act_freq peak_time act_days_wk avg_sess_count cont_pref lang_pref trans_use interaction_efficiency session_productivity +------------ ----------------------- --------------------- --------------- ---------- ---------- ----------- ------------- ---------------- ----------- ----------- ----------- ---------------------------------- ---------------------- + 1 FI537855 1 13.1 0.522 Weekly Afternoon 7 20 Music Both Always 26.200000762939453 interactions/hr 140 sessions/week + 2 FI528045 2 86.2 0.64 Monthly Afternoon 6 9 Dance Original Sometimes 172.39999389648438 interactions/hr 54 sessions/week + 3 FI137526 3 98.8 0.878 Weekly Evening 1 13 Gaming Translated Sometimes 197.60000610351562 interactions/hr 13 sessions/week +... 
+ + +CREATE TABLE "moderationandcompliance" ( +mod_reg bigint NOT NULL DEFAULT nextval('moderationandcompliance_mod_reg_seq'::regclass), +moderation_interact_pivot text NULL, +moderation_social_pivot bigint NULL, +rpt_count smallint NULL, +warn_count smallint NULL, +viol_hist text NULL, +mod_stat text NULL, +cont_comp text NULL, +age_verif text NULL, +pay_verif text NULL, +id_verif text NULL, + PRIMARY KEY (mod_reg), + FOREIGN KEY (moderation_interact_pivot) REFERENCES interactions(activity_reg), + FOREIGN KEY (moderation_social_pivot) REFERENCES socialcommunity(social_reg) +); + +First 3 rows: + mod_reg moderation_interact_pivot moderation_social_pivot rpt_count warn_count viol_hist mod_stat cont_comp age_verif pay_verif id_verif +--------- --------------------------- ------------------------- ----------- ------------ ----------- ------------- ----------- ------------ ----------- ---------- + 1 FI648876 10 8 4 Minor Good Standing Compliant Not Required Verified + 2 FI202186 18 0 5 Restricted Violation Pending Pending Pending + 3 FI156375 25 1 0 Minor Warning Compliant Pending Verified +... + + +CREATE TABLE "commerceandcollection" ( +commerce_reg bigint NOT NULL DEFAULT nextval('commerceandcollection_commerce_reg_seq'::regclass), +commerce_engage_pivot bigint NULL, +commerce_member_pivot bigint NULL, +merch_buy smallint NULL, +merch_spend_usd real NULL, +dig_own bigint NULL, +phys_own bigint NULL, +coll_comp_rate real NULL, +trade_level text NULL, + PRIMARY KEY (commerce_reg), + FOREIGN KEY (commerce_engage_pivot) REFERENCES engagement(engage_reg), + FOREIGN KEY (commerce_member_pivot) REFERENCES membershipandspending(member_reg) +); + +First 3 rows: + commerce_reg commerce_engage_pivot commerce_member_pivot merch_buy merch_spend_usd dig_own phys_own coll_comp_rate trade_level +-------------- ----------------------- ----------------------- ----------- ----------------- --------- ---------- ---------------- ------------- + 1 2 2 0 626.15 69 34 77.9 Low + 2 5 5 38 838.82 52 27 39.9 High + 3 6 7 6 578.34 17 42 60.4 High +...