Dataset schema (each record below lists these twelve fields, in this order):
instance_id: string (length 11 to 37)
selected_database: string (22 distinct values)
query: string (length 36 to 847)
normal_query: string (length 41 to 892)
preprocess_sql: list (length 0 to 2)
clean_up_sqls: list (length 0 to 2)
sol_sql: list (length 0)
external_knowledge: list (length 0)
test_cases: list (length 0)
category: string (2 distinct values)
high_level: bool (2 classes)
conditions: dict
planets_data_19
planets_data
For how many planets do we have a size measurement, but we know it's just a 'less-than-or-equal-to' kind of number because it's marked as an upper limit?
How many planets have a value for planetary radius, but this value is not a confirmed measurement and is instead flagged as an upper boundary?
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_20
planets_data
For the star that looks brightest in our night sky, what's its standard gravitational parameter, μ? Give me the answer in scientific notation, with three digits of precision. You'll need to use G = 6.67430E-11 and convert the star's mass from solar masses to kg using the factor 1.98847E30.
Calculate the gravitational parameter (μ) for the star that appears brightest from Earth. Provide the result in scientific notation with 3 digits of precision. Note that the Gravitational constant 'G' is 6.67430E-11 and the conversion factor for solar mass to kilograms is 1.98847E30.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
planets_data_M_1
planets_data
Let's clean up the discovery methods because they're a mess. Can you make a new view called `v_discovery_method_summary`? It should list every planet's id, its original discovery method from the table, and a new column with a neat, standardized category like 'radial velocity' or 'transit' that works no matter how the original method is capitalized.
Create a view named `v_discovery_method_summary`. This view should contain the planet's reference id, its original discovery method string, and a new `discovery_category` column. The new column should perform a case-insensitive standardization of the various discovery method names into unified categories such as 'radial velocity', 'transit', and 'imaging', based on the known variations.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_2
planets_data
I need a new table called `planet_properties_earth_units` to keep track of planet sizes in a way that's easier to compare to home. It should have a reference to the planet, its mass in earths, and its radius in earths. Once you've made the table, go ahead and fill it up with all the planets we have the right data for.
Create a table named `planet_properties_earth_units`. The table should store the planet's reference id (as a foreign key to the `planets` table), the planet mass in earth units, and the planet radius in earth units. After creating the table, populate it with data for all planets that have known jupiter-mass and jupiter-radius values.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_3
planets_data
Let's create a summary view for all the star systems; call it `v_system_overview`. For each star, I want to see its name, how big it is, how hot it is, and then two numbers: how many of its planets were found by the 'wobble' method and how many were found by the 'dimming' method (ignoring capitalization for both).
I need a new view called `v_system_overview`. This view should list each host star and include its name, its stellar radius, its temperature, and two separate counts of its planets: one for discoveries via radial velocity and one for discoveries via the transit method (both matched case-insensitively).
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_4
planets_data
Make me a table called `high_precision_params` that flags planets with super-accurate measurements. It needs to link to the planet and then have three true/false columns: one for a high-precision mass, one for radius, and one for period. Then, fill the table with every planet for which we can calculate at least one of these uncertainty values, even if all flags end up being false.
Create a table `high_precision_params`. The table should contain a reference to the planet and boolean flags indicating if its mass, radius, and period are high-precision. Populate this table for all planets that have at least one valid, non-null uncertainty measurement for either mass, radius, or period, regardless of whether it qualifies as high-precision.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_5
planets_data
I want to categorize all the stars with a known mass by how big they are. Can you make a new table called `star_classifications` for this? It needs to link to the star and give it a label. Call them 'massive' if they're huge (over 3 solar masses), 'intermediate' if they're a fair bit bigger than the sun, 'sun-like' if they're in the same ballpark as ours (down to about 0.8 solar masses), and 'low-mass' for all the little ones. Then fill the table with these labels.
Create a new table `star_classifications` with columns for `stellarref` and `class_name`. The `stellarref` should be a foreign key to the `stars` table. Then, for all stars with a known stellar mass value, populate this table by assigning a `class_name` based on that mass: 'massive' for stars more than three times the sun's mass, 'intermediate' for those between that and one-and-a-half solar masses, 'sun-like' for those down to eighty percent of the sun's mass, and 'low-mass' for anything smaller.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_6
planets_data
Can you make me a pre-compiled list of all the super puffy gas planets where we can actually calculate their temperature? Let's call it `v_inflated_giants_report`. The list should only include planets where the star's temperature and radius and the planet's orbital distance are known. For those planets, show their name, star, mass and radius in jupiter units, density, and estimated temperature, with the temperature as a whole number.
Please create a materialized view called `v_inflated_giants_report`. This view should contain all planets classified as inflated gas giants for which a planetary equilibrium temperature can be calculated (i.e., the host star's temperature and radius, and the planet's semi-major axis are all known). For each such planet, include its name, its host star, its mass in jupiter units, its radius in jupiter units, its density, and its planetary equilibrium temperature, rounded to the nearest integer.
[]
[]
[]
[]
[]
Management
false
{ "decimal": 0, "distinct": false, "order": false }
planets_data_M_7
planets_data
Let's make a new table called `planet_types` to label every planet. It should have the planet's id and its type. Use our standard rulebook to classify them: check first for the special types like 'hot jupiter' and 'super-earth', then for the basic 'rocky' or 'gas giant' types. If a planet doesn't fit any of those buckets, just label it 'unknown'. Go ahead and fill the table after you create it.
Create a new table `planet_types` that contains the planet's reference id and a string representing its type. Then, populate this table by classifying each planet according to the established hierarchical definitions for 'hot jupiter', 'super-earth', 'rocky planet', and 'gas giant', assigning 'unknown' to any that do not fit a category.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_8
planets_data
I want a quick way to see which planets have dodgy star measurements. Make a view called `v_uncertain_measurements`. It should list any planet where the star's mass, size, or temperature reading might be mixed with another star's light. Show me the planet's name, its star, and then three flags telling me 'yes' or 'no' for whether the mass is blended, the radius is blended, and the temp is blended.
Create a view called `v_uncertain_measurements`. This view should list all planets that have a blended measurement for their stellar mass, radius, or temperature. Include the planet's letter, host star name, and boolean flags indicating which of the three measurements (mass, radius, temperature) are blended.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_9
planets_data
Let's make a table for studying tightly-packed solar systems, call it `system_period_ratios`. It should have the star's id, the inner planet's orbital period, the outer planet's orbital period, and the ratio between them. Go ahead and fill it up with this info for every neighboring pair of planets that have known periods, in any system that has more than one planet.
Create a table `system_period_ratios` to analyze compact systems. It should store `hostlink`, the `inner_planet_period`, `outer_planet_period`, and the calculated orbital period ratio. Populate this table for all adjacent planet pairs with known orbital periods in systems where the star has more than one planet.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
planets_data_M_10
planets_data
I need a pre-calculated summary of how we're finding planets. Please create a materialized view called `v_discovery_stats`. It should list every discovery technique after you've cleaned them up by trimming spaces and ignoring case. For each one, show how many planets we found with it, the average distance to those planets in light-years (calculated only from planets with known distances and shown with two decimal points), and the very last date any record for that method was updated.
Create a materialized view `v_discovery_stats`. The view should list each distinct discovery method, after cleaning the text by trimming whitespace and converting to lowercase. It should also provide the total count of planets discovered with that method, the average stellar distance in light-years for those discoveries (only for planets with a known distance) rounded to two decimal places, and the most recent update timestamp associated with any planet of that discovery method.
[]
[]
[]
[]
[]
Management
false
{ "decimal": 2, "distinct": false, "order": false }
museum_artifact_1
museum_artifact
I'm worried about our items that are most at risk. Can you pull a list of artifacts that are both considered high-value (let's say historical significance over 8 and cultural score over 20) and are also officially listed with a 'High' or 'Medium' risk level? For each one, show its ID, title, its actual risk level, and both of those scores.
Generate a report of artifacts that are both high-risk (level 'High' or 'Medium') and high-value (historical significance > 8 and cultural score > 20). The report should include the artifact's ID, title, risk level, historical significance, and cultural score.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_2
museum_artifact
I'm concerned about environmental damage. Can we find the artifacts most at risk by calculating a score based on their average sensitivity to the environment? Only show me the ones where the score is over 4. For those, I need the artifact's ID, name, its exact score, and a list of all specific sensitivities rated 'High'. Please sort the list to show the highest-risk items first.
Identify artifacts with dangerously high environmental risks by calculating their Environmental Risk Factor (ERF). The report should include the artifact's ID, its name, the calculated ERF score, and a JSON summary of all its 'High' sensitivity ratings. Only include artifacts where the ERF score exceeds 4, and sort the results from highest to lowest risk.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_3
museum_artifact
To help us plan conservation, I need to focus on artifacts from the 'Ming' and 'Qing' dynasties. Please calculate a priority score for each one, and then assign a priority level. I'd like a report showing the artifact's ID, title, dynasty, the calculated score, and the final priority level. Please sort it to show the highest scores at the top.
Generate a conservation priority report for artifacts from the 'Ming' and 'Qing' dynasties. For each artifact, calculate its Conservation Priority Index (CPI) and determine its Priority Level. The report should include the Artifact ID, Title, Dynasty, CPI Score, and Priority Level, sorted by CPI Score in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_4
museum_artifact
I need a report on how well we're funding artifact conservation across different dynasties. For each artifact, show its dynasty, its priority score, the specific budget we've assigned, and whether that funding is 'Sufficient' or 'Insufficient'.
For each artifact with a known dynasty, create a report showing its dynasty, its CPI score, its specific budget allocation, and its generalized budget status.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_5
museum_artifact
Can you whip up a fast list for me? I want to see if any artifacts are deteriorating too quickly. For each one, give me the ID, name, current temp and humidity, a count of its high sensitivities, and a 'Yes' or 'No' on whether it's in this danger zone. Don't skip any artifacts, even if data's missing. Sort it all by artifact ID.
Check if artifacts are in an Accelerated Deterioration Scenario. The report should show each artifact's ID, its name, the current temperature and humidity in its display case, how many high sensitivities it has, and a 'Yes' or 'No' for the scenario. Include all artifacts and sort by artifact ID.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_6
museum_artifact
Can you whip up a fast rundown for me? I need all the showcase IDs that have unstable environmental conditions. Get every unique ID and line them up in alphabetical order.
Generate a list of all unique showcase IDs that are experiencing an Environmental Instability Event. The list should be sorted alphabetically by ID.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": true, "order": true }
museum_artifact_7
museum_artifact
Can you sniff out all the showcase IDs that could be heading toward a problem? We'll say a showcase is a problem if its environmental stability score drops below 5, OR if it has at least three major maintenance issues like a poor seal, overdue status, or a filter/silica needing replacement. Show me just the showcase IDs for these problem cases, lined up alphabetically.
Identify all showcase IDs that are at risk of failure. A showcase is considered 'At Risk' if its calculated environmental stability score is less than 5, OR if it has at least three major maintenance issues ('Poor' seal state, 'Overdue' maintenance status, 'Replace Now' filter, or 'Replace Now' silica). The report should list the IDs of the at-risk showcases, sorted alphabetically.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": true, "order": true }
museum_artifact_8
museum_artifact
Can you pull together a rundown of artifacts with 'Medium' or 'High' humidity sensitivity? I need the registry number, name, material type, and that sensitivity level for each. Plus, check if they're 'Over Exposure' or 'Within Limits' using a more cautious humidity threshold, and line them up by registry number.
List all artifacts with 'Medium' or 'High' Humidity Sensitivity. For each, show the registry number, title, material type, and sensitivity level. Also, determine if they are 'Over Exposure' or 'Within Limits' based on a secondary, more cautious humidity threshold, and sort by registry number.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_9
museum_artifact
We need a priority list based on comprehensive risk. First, calculate a total environmental threat score for all artifacts. I'm only interested in those officially at the second-highest risk level. From that group, find the top 10 with the highest threat scores. Give me their registration IDs and their scores, sorted from highest to lowest.
Identify the top 10 artifacts in greatest danger by calculating their Total Environmental Threat Level (TETL). The report should only consider artifacts at the second-highest risk level and list their registration IDs and TETL scores, sorted from highest to lowest.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_10
museum_artifact
Let's figure out our next rotation plan. For all items currently in a resting state, I need to see their ID, name, material, and how many months it's been on display. Then, calculate its maximum safe display time. Using our standard method, compute its final rotation priority score. To make things clear, add a final column that flags it for 'Immediate Rotation' if needed, otherwise just label it as 'Monitor'.
Generate a rotation schedule based on the Exhibition Rotation Priority Score (ERPS). The report should only include artifacts in 'Resting' state and show their ID, name, material, current display duration, and Display Safety Duration (DSD) limit. It must also include the ERPS value and a final recommendation ('Immediate Rotation' or 'Monitor').
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_11
museum_artifact
I'm trying to figure out our data storage plan. Can you analyze the environmental readings for 2023, 2024, and 2025? First, for each year, find the average temperature. Then, for each of those years, count how many days the temperature was off by a specific amount—a deviation that's more than zero but no more than 4 degrees from that year's average. Give me a table with the year, its average temperature, and the total count of these anomaly days.
To assist with data partitioning strategy, generate a report showing the annual environmental anomaly count for the years 2023-2025. An anomaly is defined as a daily reading where the temperature deviation from that year's annual average is greater than 0°C and less than or equal to 4°C. The report should show the year, the calculated average temperature for that year, and the total count of anomaly days.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": true, "order": true }
museum_artifact_12
museum_artifact
I'm trying to find our most valuable artifacts (let's define that as a cultural score over 80) that might be at risk due to a weird environment. Can you find any of them that are in a showcase where the average temperature is more than 1 degree different from the average temperature of all other showcases in that same hall? For any you find, list the artifact's ID, its title, material, its showcase's temperature, what the hall's average temp is, and the exact deviation.
Identify artifacts that are both a 'High-Value Artifact' (cultural score > 80) and an 'Environmental Outlier'. An outlier is defined as an artifact whose showcase has an average temperature that deviates by more than 1°C from the average temperature of all showcases in its hall. For these artifacts, show their ID, title, material, their showcase's temperature, the hall's average temperature, and the temperature deviation.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_13
museum_artifact
Figure out the conservation priority score for every artifact, showing their ID, name, and score, ordered from highest to lowest priority.
Calculate the Conservation Priority Index (CPI) for each artifact. The report should include the artifact ID, title, and the CPI score, sorted in descending order by CPI.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
museum_artifact_14
museum_artifact
Find all really valuable artifacts, group them by their historical period, and show the period, how many valuable artifacts there are, and their average cultural score, ordered from highest to lowest count.
Identify all High-Value Artifacts and group them by dynasty. The report should show the dynasty, a count of high-value artifacts, and their average cultural score, sorted by the count in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
museum_artifact_15
museum_artifact
I want to find artifacts that we've put in the wrong place. Can you make a list of any artifact that has a 'Medium' sensitivity to something (like light, temp, or humidity) and is in a showcase that we've classified as having a 'Medium' level of that same thing? Show me the artifact's name, material, the showcase it's in, which specific sensitivity is the problem, what the sensitivity level is, and what the showcase's environment profile is.
Identify environmental mismatches by finding artifacts whose specific environmental sensitivities are incompatible with the typical environment of their showcase. A 'mismatch' occurs if an artifact with 'Medium' sensitivity to an environmental factor (e.g., humidity) is in a showcase classified with a 'Medium' level of that same factor. The report should list the artifact's title, its material, the showcase ID, the mismatched sensitivity type, the sensitivity level, and the environment profile.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_16
museum_artifact
Group artifacts by how vulnerable their organic material is and their environmental risk score. Show me what they're made of, their vulnerability status, their average environmental risk, and a count for each group, ordered by the highest average risk.
Cluster artifacts by their Organic Material Vulnerability and Environmental Risk Factor (ERF). The report should show material type, vulnerability status, average ERF, and artifact count, sorted by average ERF descending.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
museum_artifact_17
museum_artifact
I'm looking for ticking time bombs in our collection. Can you find all the artifacts that are currently listed as 'Good' or 'Excellent', but are at a high risk of light damage? Let's say 'high risk' means they have 'High' light sensitivity and they've already soaked up more than 70,000 lux-hours of light. For each one you find, show me its name, dynasty, its exact light sensitivity, the current lux level, and its total light exposure. Please list the ones with the most total exposure first.
Generate a report of artifacts with a 'Good' or 'Excellent' conservation status that are at high risk of light damage. A high-risk artifact is defined as having 'High' light sensitivity AND has a cumulative light exposure exceeding 70,000 lxh. The report should include the artifact's title, dynasty, light sensitivity level, current lux, and cumulative exposure (visLxh), sorted by cumulative exposure in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_18
museum_artifact
What's the single highest risk score for any artifact, considering both its conservation priority and environmental sensitivity?
What is the maximum Artifact Vulnerability Score (AVS) found among all artifacts in the collection?
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_19
museum_artifact
How well do our showcases generally suit the artifacts from the Ming dynasty? I'm looking for a single number that tells me the average compatibility.
Calculate the average Artifact Exhibition Compatibility (AEC) for all artifacts from the 'Ming' dynasty.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_20
museum_artifact
Can you give me a total count of display cases that are at risk of failing? I'm talking about any case with a very unstable environment or at least three major maintenance problems.
Calculate the total number of showcases that are currently considered a Showcase Failure Risk.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 0, "distinct": true, "order": false }
museum_artifact_M_1
museum_artifact
I need to add a maintenance alert. Find every artifact that has a condition report on file, and for each one, append a timestamped alert saying 'Alert (Conservation Emergency): Immediate action recommended' to its maintenance log.
For all artifacts that have an existing condition assessment record, append a timestamped alert with the text 'Alert (Conservation Emergency): Immediate action recommended' to their maintenance log.
[]
[]
[]
[]
[]
Management
true
{ "decimal": 0, "distinct": false, "order": false }
museum_artifact_M_2
museum_artifact
Write a function called 'calculate_cpi' that figures out how important an artifact is for conservation, using its historical, research, and cultural value, plus its current condition, and gives back a final score.
Create a function named 'calculate_cpi' to compute a conservation priority score. This score should be based on historical significance, research value, cultural importance, and conservation condition, returning a numeric value.
[]
[ "DROP FUNCTION IF EXISTS calculate_cpi(SMALLINT, INT, SMALLINT, VARCHAR);" ]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_M_3
museum_artifact
Put a rule on the artifact ratings table to make sure the historical importance score stays between 1 and 10.
Add a check constraint named 'hist_sign_rating_check' to the 'ArtifactRatings' table. This constraint should ensure the historical significance score is between 1 and 10, inclusive.
[ "UPDATE \"ArtifactRatings\" SET \"HIST_sign\" = 11 WHERE \"ART_link\" = 'ART54317';" ]
[ "ALTER TABLE \"ArtifactRatings\" DROP CONSTRAINT IF EXISTS hist_sign_rating_check;", "UPDATE \"ArtifactRatings\" SET \"HIST_sign\" = 7 WHERE \"ART_link\" = 'ART54317';" ]
[]
[]
[]
Management
false
{ "decimal": 0, "distinct": false, "order": false }
museum_artifact_M_4
museum_artifact
We need to decide which artifacts to put into storage next. Figure out the five most urgent candidates for rotation based on a priority score, but with a twist: if an artifact is made of organic material like textile, wood, or paper, increase its final score by multiplying it by 1.2 because it's more fragile. I just need the artifact titles and their final adjusted scores, with the most urgent one (the one with the lowest score) first.
Create a database view named 'V_Top5_Rotation_Priority' that provides a priority list of the top 5 artifacts for exhibition rotation. The standard Exhibition Rotation Priority Score (ERPS) calculation needs to be adjusted: for artifacts made of 'Organic' materials ('Textile', 'Wood', 'Paper'), their final ERPS score should be multiplied by 1.2 (a 20% vulnerability factor). The view should include the artifact's title and its final, adjusted ERPS score, sorted by the adjusted ERPS in ascending order (lowest score is highest priority).
[]
[ "DROP VIEW IF EXISTS V_Top5_Rotation_Priority;", "DROP VIEW IF EXISTS V_Rotation_Priority_Analysis;" ]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_M_5
museum_artifact
Can you find all the artifacts made of materials like textile or paper that are both extremely valuable and highly sensitive to their environment? For each one, show its name and its exact sensitivity levels for light, temperature, and humidity.
Create a database view named 'V_Precious_Vulnerable_Organic_Items' to identify all artifacts that are both a High-Value Artifact and meet the Organic Material Vulnerability criteria. The view should display each artifact's title and its specific sensitivity ratings for light, temperature, and humidity.
[]
[ "DROP VIEW IF EXISTS V_Precious_Vulnerable_Organic_Items;", "DROP VIEW IF EXISTS V_High_Light_Risk_Artifact_Status;" ]
[]
[]
[]
Management
true
{ "decimal": 0, "distinct": false, "order": false }
museum_artifact_M_6
museum_artifact
I need to know which of our halls are a security nightmare. Find any hall that has a high visitor impact score (say, over 15) but at the same time has a low security score (less than 8). For each of those problem halls, show me the hall ID and both of those scores so I can see what's going on.
Create a view named 'V_High_Threat_Halls' to identify 'High Threat' exhibition halls, defined as those with a Visitor Impact Risk (VIR) score greater than 15 and a calculated Security Score below 8. The view should show the hall's ID, its VIR score, and its Security Score.
[]
[ "DROP VIEW IF EXISTS V_High_Threat_Halls;", "DROP VIEW IF EXISTS V_Critical_Artifacts_In_High_Threat_Halls;" ]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
museum_artifact_M_7
museum_artifact
Out of all our textiles, how many are in a 'Poor' state of environmental compliance? Assume the ideal is 20 degrees Celsius and 50 percent humidity to make the call.
Create a function named 'get_poor_compliance_count_by_material' that accepts a material type (e.g., 'textile') and calculates the number of artifacts of that material with a 'Poor' Compliance Level. The calculation should be based on the Environmental Compliance Index (ECI) with an ideal temperature of 20°C and ideal humidity of 50%.
[]
[ "DROP FUNCTION IF EXISTS get_poor_compliance_count_by_material(TEXT);", "DROP VIEW IF EXISTS V_Compliance_Report_By_Material;" ]
[]
[]
[]
Management
true
{ "decimal": 0, "distinct": false, "order": false }
museum_artifact_M_8
museum_artifact
Can you tell me the exact number of our important dynasty artifacts (from Ming, Han, or Tang) that are aging too quickly and are considered at risk?
Create a function named 'get_dynasty_artifacts_at_risk_count' to calculate the total number of artifacts classified as a Dynasty Artifact at Risk.
[]
[ "DROP FUNCTION IF EXISTS get_dynasty_artifacts_at_risk_count();", "DROP VIEW IF EXISTS V_Dynasty_Risk_Report;" ]
[]
[]
[]
Management
true
{ "decimal": 0, "distinct": false, "order": false }
museum_artifact_M_9
museum_artifact
I need a list of our most neglected showcases. Find every single one that has all three of these problems at once: the filter needs replacing now, the silica needs replacing now, and its general maintenance is overdue. For that list of problem showcases, tell me their ID, which hall they're in, and a count of how many high-sensitivity artifacts are stuck inside them.
Create a view named 'V_Chronic_Maintenance_Backlog_Showcases' to identify showcases with chronic maintenance backlogs. A 'chronic backlog' is defined as a showcase where the filter is overdue ('Replace Now'), the silica is exhausted ('Replace Now'), and the general maintenance status is 'Overdue'. For these showcases, the view should list their ID, hall ID, and the number of high-sensitivity artifacts they contain.
[]
[ "DROP VIEW IF EXISTS V_Chronic_Maintenance_Backlog_Showcases;", "DROP FUNCTION IF EXISTS get_worst_backlog_showcase_env();" ]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": true }
museum_artifact_M_10
museum_artifact
We've got a $25,000 budget to catch up on overdue cleanings. I need a priority list. Figure out the risk score for every late artifact, and also estimate a treatment cost based on its material and complexity. Then, tell me which artifacts we can afford to fix, starting with the highest risk ones, without going over our budget. List the artifact name, its risk score, and its estimated cost.
Create a materialized view named 'MV_Prioritized_Maintenance_Plan' to generate a prioritized maintenance plan for overdue cleanings within a simulated budget of $25,000. Calculate the 'cost of treatment' for each overdue artifact based on its material and treatment complexity. The view should list the artifacts that can be treated within the budget, ordered by their Conservation Backlog Risk (CBR), and show their title, CBR score, and calculated cost.
[]
[ "DROP MATERIALIZED VIEW IF EXISTS MV_Prioritized_Maintenance_Plan;", "DROP FUNCTION IF EXISTS get_remaining_maintenance_budget();" ]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": true }
fake_account_1
fake_account
Which accounts are growing their networks the fastest? Give me that blended growth score rounded to three decimal places and sort from fastest to slowest.
Compute the Network Growth Velocity (NGV) for every account using its follower and following growth rates, rounded to three decimal places, and list accounts with the highest NGV first.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_2
fake_account
Show me the top 10 accounts by their logins-per-day score. For each account, take the biggest lifecycle login total you've ever seen for it, divide by its age in days (skip age missing or ≤ 0), round the score to 3 decimals, and show just the account ID plus that score.
Compute each account's Account Activity Frequency (AAF) per the domain definition. For each account, use the highest recorded lifecycle session total across all session snapshots, exclude anyone with age in days missing or ≤ 0, then return the ten accounts with the greatest AAF in descending order, showing the account identifier and AAF rounded to three decimals.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_3
fake_account
List each account together with its attempts to evade detection. Calculate that sneakiness score, rounded to 3 decimal places, and rank them from most to least sneaky.
List each account along with its Technical Evasion Index (TEI), rounded to three decimal places, and sort them descending.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_4
fake_account
Bucket every user into four sneakiness tiers based on how much they rely on tricks like VPNs, and list each account with the tier number.
Divide all accounts into quartiles based on their TEI values and return each account id with its TEI quartile.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_5
fake_account
How many accounts land in each risk level based on their attempts to evade detection?
Count how many accounts fall into each TEI Risk Category (low, moderate, high, very high).
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
fake_account_6
fake_account
List the top twenty accounts by the highest overall exposure metric based on multiple signals. Only include users whose required inputs are all present; sort high to low; show the user ID and the metric rounded to three decimals.
Show the twenty accounts with the highest Security Risk Score (SRS). Compute the score only for accounts where all three inputs (risk value, trust value, impact value) are present; sort by the unrounded score descending; display the account identifier and the score rounded to three decimals.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_7
fake_account
Show every user that needs immediate attention: they must have the top severity label and an ongoing detection. Rank them by their overall exposure metric (rounded to 3 decimals) from high to low, include the severity label, and only show those with a score above 0.7.
List all High-Risk Accounts: compute SRS per its definition, keep only those with SRS > 0.7, whose overall severity is Critical, and that have at least one ongoing detection. Return the account ID, the score rounded to three decimals, and the severity, sorted by the unrounded score descending.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_8
fake_account
For each account, combine multiple bot-detection metrics into a single score, rounded to three decimal places, and display the highest value.
Calculate the Bot Behavior Index (BBI) for each account, rounded to three decimal places, and show the top one.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_9
fake_account
Identify all accounts exhibiting systematic VPN usage and report the total count of distinct countries from which they have authenticated.
Identify all VPN Abuser accounts and show how many different login countries they have used.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
fake_account_10
fake_account
Estimate the average authenticity level for content on each platform, rounded to three decimal places and sorted in a descending order.
Compute the average Content Authenticity Score (CAS) for each platform, rounded to three decimal places and sorted from highest to lowest.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_M_1
fake_account
Update the StateFlag to 'Suspended' for every high-risk account, according to the High-Risk Account rule, excluding those already in a suspended state.
Suspend every High-Risk Account by setting its StateFlag to "Suspended", according to the High-Risk Account rule. Make sure accounts that are already suspended are not updated again.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_11
fake_account
Show the top 10 accounts that exhibit the most content manipulation patterns. Evaluate their scores, rounded to 3 decimal places, and sort in descending order.
Retrieve the ten accounts with the highest Content Manipulation Score (CMS), sorted in a descending order and rounded to three decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_12
fake_account
Find all accounts pumping out loads of near-duplicate posts, list their posts-per-day, sorted from most to least.
List all accounts classified as Content Farms along with their daily post frequency, ordered by frequency from highest to lowest.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
fake_account_M_2
fake_account
Create active, low-priority watch items for accounts that heavily rely on traffic-masking tools and have logged in from at least three different countries, skipping anyone who already has an active watch item. Tag the new items as coming from an automated scan and timestamp them with the current time.
Insert a low-priority monitoring entry for every account that qualifies under the VPN-abuse rule (TEI > 0.8 and login countries ≥ 3), skipping any account that already has an active monitoring entry. Use a random unique ID, the current timestamp, mark the source as Algorithm, and set the entry state to Active.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_13
fake_account
For each account, build one overall risk score by combining an automation-likeness signal, a coordination signal, and the group size; round to three decimals, list those above 0.8, and sort high to low.
Compute Coordinated Bot Risk (CBR) for each account using the defined BBI and CAS formulas; round to three decimals, return those with CBR greater than 0.8, and sort in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_14
fake_account
What's the overall average trust score of everyone's connections, rounded to three decimals?
Determine the overall average Network Trust Score (NTS) across all accounts, rounded to three decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": false }
fake_account_15
fake_account
Generate a report of every cluster whose role is “SocialGroup”. For each cluster, show its identifier, the number of unique member accounts, and the cluster's maximum coordination score. Quantify each member account's position and influence in the interaction network if the data are available, otherwise NULL, and identify whether the cluster represents a sophisticated influence campaign.
Generate a report of every cluster whose role is “SocialGroup”. For each cluster, show its identifier, the number of unique member accounts, the average Network Influence Centrality (NIC) of those members if NIC data are available, otherwise NULL, the cluster's maximum coordination score, and a “Yes/No” flag that is “Yes” when the cluster satisfies the Coordinated Influence Operation definition.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
fake_account_16
fake_account
Estimate each account's authentication-related risk, rounded to 3 decimal places. List accounts with a score above 0.7, sorted from highest to lowest.
Identify accounts with an Authentication Risk Score (ARS) greater than 0.7, round the score to 3 decimal places and order from highest to lowest.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_17
fake_account
For each account, show the most recent system estimate of automation likelihood using the latest detection event.
Return each account's Latest Bot Likelihood Score (LBS).
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_18
fake_account
For each account, measure how much its hourly activity over a day diverges from its usual behavior, round to 3 decimals and sort descending. Show those above 0.7.
Identify accounts whose Temporal Pattern Deviation Score (TPDS) exceeds 0.7, rounded to 3 decimal places and sorted in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_19
fake_account
List all accounts that exert strong influence while posting at a high daily rate, acting as key amplifiers in coordinated networks. List their influence score and daily post frequency, sorted by influence score in a descending order.
List all High-Impact Amplifier accounts together with their influence score and daily post frequency, sorted by influence score in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
fake_account_20
fake_account
Measure account-reputation stability, round to 3 decimals, and show whoever has the highest score.
Show the account with the highest Reputation Volatility Index (RVI), rounded to 3 decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_21
fake_account
Retrieve accounts with elevated engagement levels based on the number of sessions or total posting frequency. Show their account ID and the daily post number.
Retrieve accounts classified as High-Activity Accounts, showing their account ID and the daily post number.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_22
fake_account
Group results by platform type and show the average of the 0-1 score indicating how real the interactions feel, keep three decimals, and list them from high to low.
Compute the average engagement authenticity score for each platform type, rounded to 3 decimal places and sort in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_23
fake_account
How many accounts are currently inactive and also classified as automated?
Count the number of accounts that are both in the inactive status and belong to the automated category.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_M_3
fake_account
If an account meets our trust checks, shows no recent detections for 180 days, and has been quiet for at least 90 days based on the latest activity proxy, mark its monitoring priority as "Review_Inactive_Trusted".
For accounts that pass the trust threshold, have had no detections in the last 180 days, and whose most recent activity proxy is older than 90 days, set their monitoring priority to "Review_Inactive_Trusted".
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_24
fake_account
By platform kind, average a score built as: how manipulated the content looks, times how urgently it should be reviewed, times how central it is in the network.
For each platform kind, compute the average Content Impact Score, where the score equals manipulation intensity multiplied by moderation priority and by network influence centrality.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 3, "distinct": false, "order": true }
fake_account_M_4
fake_account
Make a materialized view that shows all accounts with a credibility value above 0.9.
Create a materialized view listing all accounts whose built-in credibility value is greater than 0.9.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_M_5
fake_account
Analyze the table that keeps monitoring snapshots so the database updates its size and stats.
Run ANALYZE on the table that stores monitoring snapshots to refresh its statistics.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
fake_account_M_6
fake_account
Write a helper called pct_to_numeric(text) that turns strings like '85%' into decimals like 0.85 and returns a numeric result.
Create a utility function named pct_to_numeric(text) that converts a percentage string (e.g., '85%') into a numeric value (e.g., 0.85), returning a numeric type.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": false }
cold_chain_pharma_compliance_1
cold_chain_pharma_compliance
Can you check our cold chain data and tell me how long temperature problems typically last on our riskiest shipping routes? I'm looking for the average time in minutes that temperatures went outside acceptable ranges, but only for the shipments marked as high risk. Just round to two decimal places for me.
I want to find the average Temperature Excursion Duration for shipments on High Risk routes only. Please show me the route type label and the average excursion duration in minutes, rounded to two decimal places.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_2
cold_chain_pharma_compliance
Would you check what percentage of our shipments actually stayed in the right temperature range the whole time? I need a simple number showing how many shipments had zero temperature problems compared to our total shipments. Just round it to two decimal places so it's easy to report.
What is our overall Cold Chain Compliance Rate for all shipments? Please calculate the percentage of compliant shipments out of the total shipments monitored, and round the result to two decimal places.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_3
cold_chain_pharma_compliance
Will you help me figure out how our shipments are performing in terms of timing? I want to see how many shipments are arriving early, on-time, or running late. Can you count up our shipments by these delivery timing categories? Just give me each category and its count, with the biggest numbers at the top.
I plan to analyze our cold chain delivery performance using Delivery Performance Classification. Show me the performance category and the number of shipments in each category, sorted by shipment count in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": true }
cold_chain_pharma_compliance_4
cold_chain_pharma_compliance
I want to figure out if our shipping performance is better when our GPS tracking is working actively versus when it's in the intermittent location tracking state. Can you compare the average on-time performance between these two categories? Just give me the two tracking categories and their average performance scores rounded to two decimal places.
I am working on comparing the On-Time Delivery Performance between shipments with different Location Tracking States. Specifically, analyze the average OTDP for shipments that have either 'Active' or 'Intermittent' tracking states. Show me each tracking state and its corresponding average OTDP value rounded to two decimal places.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_5
cold_chain_pharma_compliance
I am trying to figure out if our quality agreements are working properly. Can you check how many shipments failed compliance checks for each type of agreement we have? Just show me each agreement status and how many shipments were flagged as non-compliant under that status.
I hope to analyze how different Quality Agreement Status types relate to non-compliance issues. Could you count the number of shipments that were classified as 'Non-compliant' for each quality agreement status category? Please show each agreement status with its corresponding count of non-compliant shipments.
[]
[]
[]
[]
[]
Query
true
{ "decimal": -1, "distinct": false, "order": false }
cold_chain_pharma_compliance_6
cold_chain_pharma_compliance
Our quality team needs to flag any high-risk shipments for immediate review. Could you pull a list of all shipments falling into the red quality risk zone? Just show me the shipment ID, what percentage of time it stayed in the acceptable range, and how many total minutes it was out of range. Round the percentage value to 2 decimal places.
I am going to identify shipments in the Red Zone of our Quality Risk Zones for further investigation. For each shipment in the Red Zone, show me the shipment ID, calculated TIRP percentage (rounded to 2 decimal places), and the total excursion duration in minutes.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_7
cold_chain_pharma_compliance
Can you figure out what the average temperature impact was for all our shipments where the temperature went outside the acceptable range? We need that special calculation that accounts for how temperature fluctuations affect products over time. Just give me one number that summarizes this across all problematic shipments.
I would like to calculate the Mean Kinetic Temperature for all shipments that have experienced temperature excursions. Please provide me with the average MKT value across these shipments.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_8
cold_chain_pharma_compliance
Please help me find shipment routes where our risk labels don't match what's actually happening. I want to see where we've mislabeled routes compared to what the temperature excursion data shows. Make sure you check all our routes, even if some might be missing monitoring data. Just show me the top 3 most problematic routes according to average excursion count.
We need to identify where our shipping route risk classifications don't match reality. Using the Route Risk Classification and High-Risk Shipping Origin-Destination Pairs knowledge, compare our documented risk notes against calculated risk levels, even those without environmental monitoring data. Show me only routes where the documented risk level differs from the calculated risk level, ordered by average excursion count from highest to lowest. Limit results to the top 3 discrepancies.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 0, "distinct": false, "order": true }
cold_chain_pharma_compliance_9
cold_chain_pharma_compliance
Just give me a rough idea of how our shipments did in terms of temperature control. You can group them into risk categories like green/yellow/red using time-in-range and total out-of-range time. It’s fine to use a simplified method—doesn’t have to be perfect—as long as we get a general sense.
Please provide an approximate analysis of cold chain shipment quality by grouping them into Quality Risk Zones. Use Time In Range Percentage (TIRP) and total temperature excursion duration as approximate indicators for classification. A proxy-based zoning approach is acceptable where exact excursion details are unavailable. Return the number and percentage of shipments in each zone (Green, Yellow, Red), sorted from lowest to highest risk.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": true }
cold_chain_pharma_compliance_10
cold_chain_pharma_compliance
Can you help me figure out how strong our supply chain is overall? Let’s turn the inputs into scores like this: route risk gets 8 if it’s low, 5 for medium, 2 for high; carrier certification gets 10 for both types, 8 if it’s just 'gdp' or 'ceiv pharma', 4 otherwise; vehicles score 9 if validated, 7 if qualified, and 5 otherwise; and compliance gets 9 for full, 6 for partial, 3 otherwise. Use the 0.4/0.3/0.2/0.1 weighting and round the final number to two decimal places.
I want to calculate the overall Supply Chain Resilience Score using a weighted average of several proxy indicators. For this, please map: 'low' route risk to 8, 'medium' to 5, and 'high' to 2; for carrier certification, use 10 for 'both', 8 for 'gdp' or 'ceiv pharma', and 4 otherwise; for vehicle qualification, use 9 for 'validated', 7 for 'qualified', and 5 otherwise; and for GDP compliance, use 9 for 'full', 6 for 'partial', and 3 otherwise. Then apply weights: 0.4 for route risk, 0.3 for carrier certification, 0.2 for vehicle, and 0.1 for compliance. Round to two decimals.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_11
cold_chain_pharma_compliance
I wonder, for each type of GDP certification, how many different shipping companies hold that certification, and out of those, how many actually have at least one fully validated vehicle. Please show the certification type, the total number of unique certified companies, and how many of those companies have validated vehicles. Put the ones with the most companies using validated vehicles at the top.
For each GDP Certification Status, I want to know how many distinct carriers hold that certification, and among them, how many distinct carriers also have at least one vehicle with Validated Cold Chain Vehicle Qualification Status. Please display the GDP certification level, the total number of distinct GDP-certified carriers, and the number of those distinct carriers with validated vehicles. Sort the results by the number of carriers with validated vehicles in descending order.
[]
[]
[]
[]
[]
Query
true
{ "decimal": 0, "distinct": true, "order": true }
cold_chain_pharma_compliance_14
cold_chain_pharma_compliance
I need to know, on average, how much of their allowed stability budget temperature-sensitive shipments are using up. Only count shipments where there actually was a temperature excursion. Give me the average percentage used, rounded to two decimal places.
Calculate the average Stability Budget Consumption Rate for all shipments of temperature-sensitive products. Return the average SBCR as a percentage, rounded to two decimal places and only count shipments where there actually was a temperature excursion.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_15
cold_chain_pharma_compliance
Out of all the biologics product batches, what percent need to be kept at ultra-low temperatures? Round it to two decimal places.
Please calculate the percentage of biologics products that require ultra-low temperature storage. The result should be rounded to two decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_16
cold_chain_pharma_compliance
I want to know, on average, how much carbon was produced by shipments that were way behind schedule—specifically, those whose delivery performance category counted as severely delayed. Round this value to two decimal places.
Calculate the average carbon footprint for all shipments that are classified as 'Severely Delayed' based on the Delivery Performance Classification standard. Please return a value rounded to two decimal places.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_17
cold_chain_pharma_compliance
Could you figure out the overall reliability score for all our temperature data loggers? Treat any logger with a missing recording interval as having a reading failure, and count a calibration failure if the calibration date is before June 26, 2024.
Estimate the overall Data Logger Reliability Score for all monitoring devices in the fleet. Assume a reading failure if the recording interval is missing, and a calibration failure if the calibration date is before 2024-06-26.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": false }
cold_chain_pharma_compliance_18
cold_chain_pharma_compliance
Which three people have had the most shipments rejected when making product release decisions? I just want a list of their names and how many times each had a rejection, sorted so the person with the most rejections is at the top.
Using the Product Release Decision Framework, identify the top 3 responsible persons with the highest number of 'Rejected' product releases. Please list each responsible person and their count of rejected shipments, ordered from highest to lowest count.
[]
[]
[]
[]
[]
Query
false
{ "decimal": -1, "distinct": false, "order": true }
cold_chain_pharma_compliance_19
cold_chain_pharma_compliance
Can you show me, for each type of regulatory compliance status, what the average number of temperature and humidity excursions is? And sort the results so the highest average temperature excursions come first.
For each regulatory compliance status, calculate the average number of temperature excursions and average number of humidity excursions. Display a table with compliance status, average temperature excursions, and average humidity excursions, sorted by temperature excursions in descending order.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
cold_chain_pharma_compliance_20
cold_chain_pharma_compliance
Can you show me the three riskiest shipping routes based on temperature excursions and shipping delays? Just give me the route, how many shipments went that way, and the risk score. Only count routes with more than one shipment.
I want to identify the top 3 riskiest shipping lanes by calculating the Lane Risk Potential for each route. Only include temperature excursions and shipping delays in the risk score. Please provide a report listing the route string, total number of shipments, and the calculated lane risk potential, sorted by lane risk potential in descending order. Only include lanes with more than one shipment.
[]
[]
[]
[]
[]
Query
false
{ "decimal": 2, "distinct": false, "order": true }
cold_chain_pharma_compliance_M_1
cold_chain_pharma_compliance
Can you make a function calculate_ted that, when I give you a shipment's ID, tells me how many minutes it spent outside the right temperature range?
Create a function calculate_ted that calculates the Temperature Excursion Duration for a given shipment. The input is the shipment's ID, and the output is the TED value as an integer.
[]
[]
[]
[]
[]
Management
false
{ "decimal": 0, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_2
cold_chain_pharma_compliance
For each shipment, I want you to include two things in the summary. First, tell me if it managed to stay in the right temperature the whole way. Second, show what percent of all shipments got that right. Even if some shipments don’t have temperature info, still include both pieces of info in their summary.
Please update every shipment so that its summary includes two clear insights. First, show whether that shipment stayed within the correct temperature range the entire time. Second, include the percentage of all shipments that successfully stayed within the proper temperature range from start to finish. This information should be added for every shipment, even for those where no temperature data is available.
[]
[]
[]
[]
[]
Management
false
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_3
cold_chain_pharma_compliance
Can you make a view called v_product_temperature_analysis that breaks down our products by how sensitive they are to temperature changes and by their storage type? For each kind of product and storage group, show how many batches there are, what the average temperature range is, a list of the different sensitivity levels, and the lowest and highest storage temps. Please sort the results by product type and storage method.
Create a view named v_product_temperature_analysis that analyzes pharmaceutical products by both Temperature Sensitivity Tiers and Product Storage Classifications. For each product category and storage classification, display the batch count, average temperature range, all unique sensitivity descriptions, and the minimum and maximum storage temperatures. Order the results by product category and storage classification.
[]
[]
[]
[]
[]
Management
false
{ "decimal": -1, "distinct": false, "order": true }
cold_chain_pharma_compliance_M_4
cold_chain_pharma_compliance
Can you show, for every delivery, how well it matched the expected timing — like how close it was to being on time? And also include something simple that tells whether it was early, late, or arrived just right. Keep everything else as it is.
Please enhance each delivery record by adding two insights. The first is the On-Time Delivery Performance, which shows how closely the actual delivery matched the planned schedule, expressed as a percentage. The second is the Delivery Performance Classification, which gives a simple label describing the delivery’s overall timeliness. These additions should be included along with the original delivery details.
[]
[]
[]
[]
[]
Management
false
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_5
cold_chain_pharma_compliance
Whenever we add or change a shipment’s temperature data, I want the system to automatically figure out how much of the time the temperature stayed where it should, and also give it a simple color rating based on that. These two things should be added back into the same record.
Whenever a new or updated environmental monitoring entry is recorded, the system should automatically assess the Quality Risk Zones for that shipment. It should use the temperature data to calculate Time In Range Percentage based on a standard 72-hour journey, then assign the appropriate risk level. Both the quality risk zone and time in range percentage should be added back into the same record.
[]
[]
[]
[]
[]
Management
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_6
cold_chain_pharma_compliance
I need to find all the product batches that count as top-level cold chain products. For each one, list its batch tag, product code, name, and value, and make sure it’s clearly marked as tier 1 in the records. Also, let me know how many you found, and if anything goes wrong, just keep going and let me know about the problem.
I want to batch identify all Tier 1 Cold Chain Products in our database. For each product batch, check if it meets the Tier 1 Cold Chain Products criteria. For every qualifying batch, record its batch tag, product code, product label, and value in USD, and flag it as Tier 1. Also, update the product batch records to append a '[TIER1]' flag to the value field for all identified Tier 1 Cold Chain Products. Please ensure the process logs the number of Tier 1 products found and handles any errors gracefully.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_7
cold_chain_pharma_compliance
I need to refresh our monitoring device reliability records using the data logger value for reliability assessment. For every device, figure out its reliability value. Show all the details and scores in a temp table first. Then, save a backup of the current device list, wipe it clean, and fill it back up with the updated info from the results. Make sure to include all the device details from the original record, the made-up failure rates, the reliability score, and when you did the analysis.
I plan to recalculate and rebuild the reliability tracking for all monitoring devices in our system using the Data Logger Reliability Score. For each device, calculate DLRS and store the results in a staging table. Then, back up the current monitoringdevices table and repopulate it with the original device columns from the staging results. Please include the device reference, calibration timestamp, device accuracy, recording interval, temperature points, all simulated failure rates, the calculated DLRS, and the analysis timestamp in the staging output.
[]
[]
[]
[]
[]
Management
true
{ "decimal": 2, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_8
cold_chain_pharma_compliance
I'm looking to see, for every shipment, what's the biggest number of shock events it went through, basically the highest shock count for each shipment, even if some shipments didn't have any shock data. Just show me the shipment code and that top shock number for each one.
For each shipment in the cold chain database, I want to determine the maximum shock event count observed, using the concept of Shock Event Significance Levels. Please provide the shipment identifier and its corresponding maximum shock event count, ensuring that all shipments are included in the results even if no shock data is present, just use 0 to represent null data. The output should display the shipment code alongside the highest shock event count recorded for that shipment.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": true }
cold_chain_pharma_compliance_M_9
cold_chain_pharma_compliance
If I give you a shipment, would you tell me what its regulatory compliance status is, and also just let me know with a true or false if it’s considered compliant?
For a specified shipment, I require a summary of its regulatory compliance status according to the concept of Regulatory Compliance Status Definitions. The output should include both the compliance status and an indicator of whether the shipment is compliant, with the indicator expressed as a boolean value.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }
cold_chain_pharma_compliance_M_10
cold_chain_pharma_compliance
Whenever a shipment’s product release status is set to rejected, go ahead and mark its insurance claim as rejected too, and don’t bother updating claims that are already marked as rejected.
For all shipments, please update the insurance claim status to 'Rejected' in the insuranceclaims table for every case where the product release status is 'Rejected', strictly following the Product Release Decision Framework. Ensure that only claims not already marked as 'Rejected' are updated.
[]
[]
[]
[]
[]
Management
true
{ "decimal": -1, "distinct": false, "order": false }