query | description
---|---
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals "RUNNING" and resourceLabels.goog-composer-version does not start with "composer-1" and ((workloadIdentityConfig[*] does not exist) or (workloadIdentityConfig[*] exists and (nodePools[?any(config.workloadMetadataConfig does not contain GKE_METADATA)] exists)))``` | GCP Kubernetes Engine cluster workload identity is disabled
This policy identifies GCP Kubernetes Engine clusters for which workload identity is disabled. Manual approaches to authenticating Kubernetes workloads violate the principle of least privilege on a multi-tenanted node: one pod may need access to a service while every other pod on the node using the same service account does not. Enabling Workload Identity manages the distribution and rotation of service account keys for the workloads to use.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to service 'Kubernetes Engine'(Left Panel)\n3. Select the reported cluster from the available list\n4. Under section 'Security', click on edit icon for 'Workload Identity'\n5. Click on the checkbox 'Enable Workload Identity'\n6. Ensure that the Workload Identity Namespace is set to the namespace of the GCP\nproject containing the cluster, e.g: $PROJECT_ID.svc.id.goog\n7. Click on 'SAVE CHANGES'\n8. After enabling, go to tab 'NODES'\n9. To investigate each node pool, Click on 'Edit', In section 'Security', select the 'Enable GKE Metadata Server' checkbox\n10. Click on 'SAVE'. |
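The same change can be scripted with the gcloud CLI; the following is a minimal sketch, assuming placeholder cluster, node pool, region, and project names:
```bash
# Enable Workload Identity at the cluster level (all names are placeholders).
gcloud container clusters update my-cluster \
  --region=us-central1 \
  --workload-pool=my-project.svc.id.goog

# Switch an existing node pool to the GKE metadata server so pods receive
# Workload Identity credentials instead of the node's credentials.
gcloud container node-pools update default-pool \
  --cluster=my-cluster \
  --region=us-central1 \
  --workload-metadata=GKE_METADATA
```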
```config from cloud.resource where api.name = 'azure-machine-learning-datastores' AND json.rule = properties.datastoreType equal ignore case AzureBlob as X; config from cloud.resource where api.name = 'azure-storage-account-list' as Y; filter ' $.X.properties.accountName equal ignore case $.Y.name ' ; show Y;``` | Azure Blob Storage utilized for Azure Machine Learning training job data
This policy identifies Azure Blob Storage accounts used for storing data utilized in Azure Machine Learning training jobs. This policy provides visibility into storage utilization for Machine Learning workloads but does not indicate a security or compliance risk.
Azure Blob Storage serves as a robust storage solution for large-scale Machine Learning training data. This policy emphasizes the importance of securing stored data by employing encryption and additional security parameters like firewalls, private endpoints, and access policies to safeguard sensitive information.
As a security best practice, it is recommended to properly configure Azure Blob Storage utilized in Azure Machine Learning training jobs.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: For configuring Azure Blob Storage used in Azure Machine Learning training jobs, refer to the following link for Blob storage security recommendations:\nhttps://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = "clusterGroupsDetails[*].parameters[?(@.parameterName=='require_ssl')].parameterValue is false"``` | AWS Redshift does not have require_ssl configured
This policy identifies Redshift databases for which data connections to and from the cluster occur over an insecure channel. SSL connections ensure the security of data in transit.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to AWS and navigate to the 'Amazon Redshift' service.\n2. Expand the identified 'Redshift' cluster and make a note of the 'Cluster Parameter Group'\n3. In the navigation panel, click on the 'Parameter group'.\n4. Select the identified 'Parameter Group' and click on 'Edit Parameters'.\n5. Review the require_ssl flag. Update the parameter 'require_ssl' to true and save it.\nNote: If the current parameter group is a Default parameter group, it cannot be edited. You will need to create a new parameter group and point it to the affected cluster. |
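If you prefer the AWS CLI, a minimal sketch of the same remediation (the parameter group and cluster names are placeholders):
```bash
# Default parameter groups cannot be edited, so create a custom one.
aws redshift create-cluster-parameter-group \
  --parameter-group-name my-ssl-params \
  --parameter-group-family redshift-1.0 \
  --description "Require SSL connections"

# Set require_ssl to true in the new group (static parameters apply on reboot).
aws redshift modify-cluster-parameter-group \
  --parameter-group-name my-ssl-params \
  --parameters ParameterName=require_ssl,ParameterValue=true,ApplyType=static

# Attach the parameter group to the affected cluster.
aws redshift modify-cluster \
  --cluster-identifier my-cluster \
  --cluster-parameter-group-name my-ssl-params
```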
```config from cloud.resource where cloud.type = 'alibaba_cloud' and api.name = 'alibaba-cloud-vpc' AND json.rule = vpcFlowLogs[*].flowLogId does not exist and status equal ignore case Available``` | Alibaba Cloud VPC flow log not enabled
This policy identifies Virtual Private Clouds (VPCs) where flow logs are not enabled.
VPC flow logs capture information about the traffic entering and exiting network interfaces in the VPC. Without VPC flow logs, there’s limited visibility into network traffic, making it challenging to detect and investigate suspicious activities, potential data breaches, or security policy violations. Enabling VPC flow logs enhances network monitoring, improves threat detection, and supports compliance requirements.
As a security best practice, it is recommended to enable VPC flow logs.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Navigate to VPC console\n3. Under 'O&M and Monitoring', click on 'Flow Log'\n4. Create and configure a new flow log for the reported VPC, specifying the required traffic filters and log storage destination\n5. Enable and save the configuration. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = gceSetup.metadata.notebook-upgrade-schedule does not exist``` | GCP Vertex AI Workbench Instance auto-upgrade is disabled
This policy identifies GCP Vertex AI Workbench Instances that have auto-upgrade disabled.
Auto-upgrading Google Cloud Vertex environments ensures timely security updates, bug fixes, and compatibility with APIs and libraries. It reduces security risks associated with outdated software, enhances stability, and enables access to new features and optimizations.
It is recommended to enable auto-upgrade to minimize maintenance overhead and mitigate security risks.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Select 'INSTANCES' tab\n5. Click on the reported notebook\n6. Go to 'SYSTEM' tab\n7. Enable 'Environment auto-upgrade'\n8. Configure upgrade schedule as required\n9. Click 'SUBMIT'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = 'webhooks[*] contains config and webhooks[*].config.serviceUri starts with http://'``` | Azure ACR HTTPS not enabled for webhook
Ensure you send container registry webhooks only to an HTTPS endpoint. This policy checks your container registry webhooks and alerts if it finds a URI that uses HTTP.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: Update your container registry webhook URI to use HTTPS.\n\n1. Sign in to the Azure portal.\n2. Navigate to the container registry in which you want to modify the webhook.\n3. Under Services, select Webhooks.\n4. Select your existing webhook.\n5. Near the top of the next window pane, select Configure.\n6. Under Service URI in the next window, modify your URI to use https:// and click Save. |
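The same fix can be applied with the Azure CLI; a minimal sketch with placeholder registry, webhook, and endpoint names:
```bash
# Re-point the webhook at an HTTPS endpoint (all names are placeholders).
az acr webhook update \
  --registry myregistry \
  --name mywebhook \
  --uri https://example.com/webhook
```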
```config from cloud.resource where api.name = 'aws-ec2-client-vpn-endpoint' and json.rule = authorizationRules[*].accessAll exists and authorizationRules[*].accessAll equals "True" ``` | Detect Unrestricted Access to EC2 Client VPN Endpoints
This policy helps you identify AWS EC2 Client VPN endpoints that have been configured to allow access for all clients, which could potentially expose your VPN to unauthorized users. By detecting such configurations, the policy enables you to take necessary actions to secure your VPN endpoints, ensuring that only authorized clients can access your cloud resources and maintain a strong security posture in your public cloud environment.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy sizbn
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-ec2-describe-images' AND json.rule = image.platform contains windows and image.imageId contains ami-1e542176``` | AWS Amazon Machine Image (AMI) infected with mining malware
This policy identifies Amazon Machine Images (AMIs) that are infected with mining malware. Research has identified an AWS Community AMI for Windows 2008, hosted by an unverified vendor, that contains malicious code running an unidentified Monero crypto miner. It is recommended to delete such AMIs to protect against malicious activity and limit the blast radius of an attack.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MALWARE'].
Mitigation of this issue can be done as follows: To delete reported AMI follow below mentioned URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html. |
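For CLI-based cleanup, a minimal sketch (the snapshot ID is a placeholder; the AMI ID is the one flagged by the query):
```bash
# Deregister the infected AMI.
aws ec2 deregister-image --image-id ami-1e542176

# Optionally delete any backing snapshot created from it (placeholder ID).
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
```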
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-organization-asset-group-member' as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/editor or roles[*] contains roles/owner or roles[*] contains roles/appengine.* or roles[*] contains roles/browser or roles[*] contains roles/compute.networkAdmin or roles[*] contains roles/cloudtpu.serviceAgent or roles[*] contains roles/composer.serviceAgent or roles[*] contains roles/composer.ServiceAgentV2Ext or roles[*] contains roles/container.serviceAgent or roles[*] contains roles/dataflow.serviceAgent)' as Y; filter '($.X.groupKey.id contains $.Y.user)'; show Y;``` | pcsup-13966-policy
This is applicable to gcp cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(22,22) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on SSH port (22)
This policy identifies GCP Firewall rules which allow all inbound traffic on SSH port (22). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended to restrict SSH port (22) access to specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule should not allow unrestricted traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'. |
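The source range can also be narrowed with the gcloud CLI; a minimal sketch, assuming a placeholder rule name and trusted CIDR:
```bash
# Replace 0.0.0.0/0 with a specific trusted range (placeholders shown).
gcloud compute firewall-rules update allow-ssh \
  --source-ranges=203.0.113.0/24
```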
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)``` | Azure storage account has a blob container with public access
This policy identifies blob containers within an Azure storage account that allow anonymous/public access ('CONTAINER' or 'BLOB'). As a best practice, do not allow anonymous/public access to blob containers unless you have a very good reason. Instead, you should consider using a shared access signature token for providing controlled and time-limited access to blob containers.
'Public access level' allows you to grant anonymous/public read access to a container and the blobs within Azure blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a shared access signature.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Storage Accounts' dashboard\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the blob container you need to modify\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'. |
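The equivalent Azure CLI commands, as a minimal sketch with placeholder account, container, and resource group names:
```bash
# Disable anonymous access on a single container.
az storage container set-permission \
  --account-name mystorageaccount \
  --name mycontainer \
  --public-access off

# Or block public blob access for the whole account.
az storage account update \
  --resource-group my-rg \
  --name mystorageaccount \
  --allow-blob-public-access false
```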
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy sklde
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-lambda-list-functions' AND json.rule = policy.Statement[?any(Effect equals Allow and Principal equals "*" and Condition does not exist and (Action equals "*" or Action equals lambda:*))] exists``` | AWS Lambda Function resource-based policy is overly permissive
This policy identifies Lambda Functions that have an overly permissive resource-based policy. Lambda functions with overly permissive policies could enable lateral movement within the account or privilege escalation when compromised. It is highly recommended to apply a least-privilege access policy to protect Lambda Functions from unauthorized access.
For more details:
https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: To modify permission from AWS Lambda Function resource-based policy\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to Configuration tab\n7. Select Permissions\n8. Scroll to the \"Resource-based policy\" area\n9. For each policy statement, use fine-grained and restrictive permissions instead of using wildcards (Lambda:* and Resource:*) OR add in appropriate conditions with least privilege access.\n10. Click on \"Edit\" button to modify the statement\n11. When you finish configuring the statement, choose 'Save'.\n\nTo remove permission from AWS Lambda Function resource-based policy\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to Configuration tab\n7. Select Permissions\n8. Scroll to the \"Resource-based policy\" area\n9. Identify the policy statement that grants overly permissive access\n10. Click on \"Delete\" button to delete the statement\n11. In Delete statement dialog box, click on \"Delete\" button. |
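For CLI-based remediation, a minimal sketch assuming a placeholder function name and statement ID:
```bash
# Inspect the resource-based policy to find the offending statement ID.
aws lambda get-policy --function-name my-function

# Remove the overly permissive statement (statement ID is a placeholder).
aws lambda remove-permission \
  --function-name my-function \
  --statement-id public-invoke
```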
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-event-subscriptions' AND json.rule = 'sourceType equals db-instance and ((status does not equal active or enabled is false) or (status equals active and enabled is true and (sourceIdsList is not empty or eventCategoriesList is not empty)))'``` | AWS RDS Event subscription All event categories and All instances disabled for DB instance
This policy identifies AWS RDS event subscriptions for DB instances for which 'All event categories' and 'All instances' are disabled. As a best practice, enabling 'All event categories' for 'All instances' helps ensure you are notified when an event occurs for any DB instance.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS Dashboard\n4. Click on 'Event subscriptions' (Left Panel)\n5. Choose the reported Event subscription\n6. Click on 'Edit'\n7. On 'Edit event subscription' page, Under 'Details' section; Select 'Yes' for 'Enabled' and Make sure you have subscribed your DB to 'All instances' and 'All event categories'\n8. Click on 'Edit'. |
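A minimal AWS CLI sketch of the same change, with a placeholder subscription name and source identifier:
```bash
# Re-enable the subscription for DB instance events.
aws rds modify-event-subscription \
  --subscription-name my-db-events \
  --source-type db-instance \
  --enabled

# Drop an instance-specific filter so the subscription covers all instances.
aws rds remove-source-identifier-from-subscription \
  --subscription-name my-db-events \
  --source-identifier my-db-instance
```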
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = binaryAuthorization.evaluationMode does not exist or binaryAuthorization.evaluationMode equal ignore case EVALUATION_MODE_UNSPECIFIED or binaryAuthorization.evaluationMode equal ignore case DISABLED``` | asasas23
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'AWS' AND finding.type = 'AWS GuardDuty IAM' AND finding.name = 'Impact:IAMUser/AnomalousBehavior'``` | GuardDuty IAM Impact: AnomalousBehavior
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy nrnqu
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = purpose equal ignore case "ENCRYPT_DECRYPT" and primary.state equals "ENABLED" and (rotationPeriod does not exist or rotationPeriod greater than 7776000)``` | GCP KMS Symmetric key not rotating in every 90 days
This policy identifies GCP KMS Symmetric keys that are not rotating every 90 days. A key is used to protect some corpus of data. A collection of files could be encrypted with the same key and people with decrypt permissions on that key would be able to decrypt those files. It's recommended to make sure the 'rotation period' is set to a specific time to ensure data cannot be accessed through the old key.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure automatic rotation for GCP KMS Symmetric keys, please refer to the URL given below and configure "Rotation period" to less than or equal to 90 days:\nhttps://cloud.google.com/kms/docs/rotating-keys#automatic. |
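A minimal gcloud sketch of the same configuration, with placeholder key, keyring, and location values (the `date` invocation assumes GNU coreutils):
```bash
# Rotate every 90 days, starting 90 days from now.
gcloud kms keys update my-key \
  --keyring=my-keyring \
  --location=global \
  --rotation-period=90d \
  --next-rotation-time=$(date -u -d '+90 days' +%Y-%m-%dT%H:%M:%SZ)
```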
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-datacatalog-catalogs' AND json.rule = lifecycleState equal ignore case ACTIVE and (attachedCatalogPrivateEndpoints is empty or attachedCatalogPrivateEndpoints does not exist)``` | OCI Data Catalog configured with overly permissive network access
This policy identifies Data Catalogs configured with overly permissive network access.
The OCI Data Catalog service provides a centralized repository to manage and govern data assets, including their metadata. When network access settings are too permissive, it can expose sensitive metadata to unauthorized users or malicious actors, potentially leading to data breaches and compliance issues.
As a best practice, it is recommended to configure the Data Catalog with private endpoints so that the Data Catalog is accessible only to restricted entities.
This is applicable to oci cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: To configure private endpoint to your Data catalog, follow the below URL:\nhttps://docs.oracle.com/en-us/iaas/data-catalog/using/private-network.htm. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = 'policyName equals AWSSupportAccess and policyArn contains arn:aws:iam::aws:policy/AWSSupportAccess and (isAttached is false or (isAttached is true and entities.policyRoles[*].roleId is empty))'``` | AWS IAM support access policy is not associated to any role
This policy identifies the IAM support access policy ('AWSSupportAccess') when it is not attached to any role in the account. AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNUSED_PRIVILEGES'].
Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Go to service IAM under Services panel\n3. From the left panel, click on 'Policies'\n4. Search for the existence of a support policy 'AWSSupportAccess'\n5. Create an IAM role\n6. Attach 'AWSSupportAccess' managed policy to the created IAM role. |
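The same remediation via the AWS CLI, as a minimal sketch; the role name and trust policy file are placeholders you would define:
```bash
# Create a role that support staff can assume (trust policy is a placeholder file).
aws iam create-role \
  --role-name support-access \
  --assume-role-policy-document file://support-trust-policy.json

# Attach the AWS-managed support policy to the new role.
aws iam attach-role-policy \
  --role-name support-access \
  --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess
```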
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and settings.databaseFlags[*].name contains "user options"'``` | GCP SQL server instance database flag user options is set
This policy identifies GCP SQL Server instances for which the database flag 'user options' is set. The user options option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. A user can override these defaults by using the SET statement. It is recommended that the 'user options' database flag not be configured for SQL Server instances.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate SQL Instances page\n3. Click on the reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', go to the flag 'user options' and click on delete icon\n6. Click on SAVE \n7. If 'Changes requires restart' pop-up appears, click on 'SAVE AND RESTART'. |
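The flag can also be removed with the gcloud CLI; a minimal sketch with a placeholder instance name. Note that `--database-flags` replaces the entire flag set, so list any flags you want to keep:
```bash
# If 'user options' is the only flag set, clearing all flags removes it.
gcloud sql instances patch my-sqlserver-instance --clear-database-flags

# Otherwise, re-specify only the flags you want to keep (placeholder example).
gcloud sql instances patch my-sqlserver-instance \
  --database-flags="max degree of parallelism=2"
```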
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'Policies[*] size > 0'``` | Alibaba Cloud RAM policy attached to users
This policy identifies Resource Access Management (RAM) policies that are attached to users. By default, RAM users, groups, and roles have no access to Alibaba Cloud resources. RAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that RAM policies be applied directly to groups and roles but not users.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Click on the reported RAM user\n5. Under the 'Permissions' tab, In 'Individual' sub-tab\n6. Click on 'Remove Permission' for the reported user\n7. On 'Remove Permission' popup window, Click on 'OK'\n\nIf a group with a similar policy already exists, put the user in that group. If such a group does not exist, create a new group with relevant policy and assign the user to the group. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vpn-connection-list' AND json.rule = 'ipsecPolicies is empty and connectionType does not equal ExpressRoute'``` | Azure VPN is not configured with cryptographic algorithm
This policy identifies Azure VPN connections that are not configured with a custom cryptographic algorithm. The IPsec and IKE protocol standards support a wide range of cryptographic algorithms in various combinations. If you do not request a specific combination of cryptographic algorithms and parameters, Azure VPN gateways use a set of default proposals. Typically due to compliance or security requirements, you can configure your Azure VPN gateways to use a custom IPsec/IKE policy with specific cryptographic algorithms and key strengths rather than the Azure default policy sets. It is therefore recommended to use custom policy sets and choose strong cryptography.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Follow Microsoft Azure documentation and setup your respective VPN connections using strong recommended cryptographic requirements.\nFMI: https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-compliance-crypto#cryptographic-requirements. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(10250,10250) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp or IPProtocol contains "all")))] exists as X; config from cloud.resource where api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING as Y; filter '$.X.network contains $.Y.networkConfig.network' ; show X;``` | GCP Firewall rule exposes GKE clusters by allowing all traffic on port 10250
This policy identifies GCP Firewall rules allowing all traffic on port 10250, which grants full GKE node access. Port 10250 on the kubelet is used by the kube-apiserver (running on hosts labelled as Orchestration Plane) for exec and logs. As a security best practice, port 10250 should not be exposed to the public.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: As port 10250 exposes sensitive information of GKE pod configuration it is recommended to disable this firewall rule. \nOtherwise, remove the overly permissive source IPs following the below steps,\n\n1. Login to GCP Console\n2. Navigate to 'VPC Network' (Left Panel)\n3. Go to the 'Firewall' section (Left Panel)\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE``` | PCSUP-16458-CLI-Test
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'. |
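A one-line fix is also possible with the AWS CLI; a minimal sketch with a placeholder function name:
```bash
# Require IAM authentication on the function URL.
aws lambda update-function-url-config \
  --function-name my-function \
  --auth-type AWS_IAM
```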
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running and sku.tier does not equal Basic and properties.networkProfile.serviceRuntimeSubnetId does not exist``` | Azure Spring Cloud service is not configured with virtual network
This policy identifies Azure Spring Cloud services that are not configured with a virtual network. Spring Cloud configured with a virtual network isolates apps and service runtime from the internet on your corporate network and provides control over inbound and outbound network communications for Azure Spring Cloud. As a security best practice, it is recommended to deploy Spring Cloud services in a virtual network.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: You can select your Azure virtual network only when you create a new Azure Spring Cloud service instance. You cannot change to use another virtual network after Azure Spring Cloud has been created. \nTo resolve this alert, create a new Spring Cloud service configured with a virtual network, migrate all data to the newly created Spring Cloud service, and then delete the reported Spring Cloud service.\n\nTo create a new Spring Cloud service with virtual network, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/spring-cloud/how-to-deploy-in-azure-virtual-network?tabs=azure-portal \n\nNOTE: Azure Virtual network feature is not available to Basic tier Spring Cloud services. |
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = 'instanceNetworkType does not equal vpc or vpcAttributes is empty'``` | Alibaba Cloud ECS instance is not using VPC network
This policy identifies ECS instances which are still using the ECS classic network instead of the VPC network that enables you to leverage enhanced infrastructure security controls.
Note: If you purchased an ECS instance after 17:00 (UTC+8) on June 14, 2017, you cannot choose the classic network type.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: You can select the VPC network only when you create a new ECS instance. So to fix this alert, create a new ECS instance with VPC network and then migrate all required ECS instance data from the reported ECS instance to this newly created ECS instance.\n\nTo set up the new ECS instance with VPC network, perform the following:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. On the Instances list page, click Create Instance.\n5. Complete the Basic Configurations\n6. Click 'Next: Networking', Select a 'Network Type' as 'VPC'. Select the desired VPC and a VSwitch.\n7. Complete the System Configurations, Grouping and Preview the configurations.\n8. Click on 'Create Order'\n\nTo delete reported ECS instance, perform the following:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Click on the reported ECS instance\n5. Click on 'Stop', It will be auto-released. |
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy lgwpn
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.items[?any(key contains "enable-oslogin" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = (metadata.items[?any(key exists and key contains "enable-oslogin" and (value contains "False" or value contains "N" or value contains "No" or value contains "false" or value contains "FALSE" or value contains "0"))] exists and name does not start with "gke-" and status equals RUNNING) as Y;filter'$.Y.zone contains $.X.name';show Y;``` | GCP VM instance OS login overrides Project metadata OS login configuration
This policy identifies GCP VM instances on which the OS Login configuration is disabled, overriding the enabled project-level OS Login configuration. Enabling OS Login ensures that SSH keys used to connect to instances are mapped to IAM users. Revoking access for an IAM user revokes all SSH keys associated with that user. It facilitates centralized and automated SSH key pair management, which is useful in handling cases like responding to compromised SSH key pairs.
Note: Enabling OS Login on instances disables metadata-based SSH key configurations on those instances. Disabling OS Login restores SSH keys that you have configured in a project or instance metadata.
Reference: https://cloud.google.com/compute/docs/instances/managing-instance-access
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to the VM instances\n4. Select the alerted VM instance\n5. Click on the 'EDIT' button\n6. Go to 'Custom metadata'\n7. Remove the metadata entry where the key is 'enable-oslogin' and the value is 'FALSE' or 'false' or 0. (For more information on adding boolean values, refer: https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#boolean)\n8. Click on 'Save' to apply the changes. |
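The instance-level override can also be removed with the gcloud CLI; a minimal sketch with placeholder instance and zone values:
```bash
# Drop the per-instance key so the project-wide OS Login setting applies.
gcloud compute instances remove-metadata my-instance \
  --zone=us-central1-a \
  --keys=enable-oslogin
```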
```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains "aws-emr-studio-" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "aws-emr-studio-" as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;``` | AWS EMR Studio using the shadow resource bucket for workspace storage
This policy identifies AWS EMR Studios whose workspace storage bucket is not managed from the current account, which could indicate that a shadow resource bucket is being used for workspace storage.
AWS EMR enables data processing and analysis using big data frameworks like Hadoop, Spark, and Hive. To create an EMR Studio, the EMR service automatically generates an S3 bucket. This S3 bucket follows the naming pattern 'aws-emr-studio-{Account-ID}-{Region}'. An attacker can create an unclaimed bucket with this predictable name and wait for the victim to deploy a new EMR Studio in a new region. This can result in multiple attacks, including cross-site scripting (XSS) when the user opens the compromised notebook in EMR Studio.
It is recommended to verify the expected bucket owner, update the AWS EMR Studio storage location, and enforce the aws:ResourceAccount condition in the policy of the service role used by AWS EMR to check that the AWS account ID of the S3 bucket used by EMR Studio matches your business requirements.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To update an EMR Studio with the new workspace storage, Follow the below actions:\n\n1. Sign in to the AWS Management Console\n2. Move the required script to a new S3 bucket as per your requirements.\n3. Open the Amazon EMR console at https://console.aws.amazon.com/emr.\n4. Under EMR Studio on the left navigation, choose Studios.\n5. Select the reported studio from the Studios list and Click the 'Edit' button on the right corner to edit the Studio details.\n6. Verify that the 'Workspace storage' is authorized and managed according to your business requirements. \n7. On the Edit studio page, Update 'Workspace storage' by selecting 'Browse S3', and select the 'Encrypt Workspace files with your own AWS KMS key' as per your organization's requirements.\n8. Click 'Save Changes'. |
```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals "RUNNING" as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-workbench-instance' as Y; filter ' $.Y.labels.resource-name equals $.X.labels.resource-name '; show X;``` | GCP VM instance used by Vertex AI Workbench Instance
This policy identifies GCP VM instances used by Vertex AI Workbench.
Vertex AI Workbench relies on GCP Compute Engine VM instances for backend processing. The selection of the appropriate VM instance type, size, and configuration directly impacts the performance and security of the Workbench. Proper configuration of these VM instances is critical to ensuring the security of the associated Vertex AI environment.
It is recommended to regularly identify and assess the VM instances supporting Vertex AI Workbench to maintain a strong security posture and ensure compliance with best practices.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Review and validate the GCP VM instances used by Vertex AI Workbench Instances. Verify the VM instance is configured as per organizational needs. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_validate_compliance_hyperion_policy_ss_finding_1
Description-d84c12b2-384e-429e-967a-2e9ea515846d
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.allowedToCreateSecurityGroups is true ``` | Azure user not restricted to create Microsoft Entra Security Group
This policy identifies instances in the Microsoft Entra ID configuration where security group creation is not restricted to administrators only.
When the ability to create security groups is enabled, all users in the directory can create new groups and add members to them. Unless there is a specific business need for this broad access, it is best to limit the creation of security groups to administrators only.
As a best practice, it is recommended to restrict the ability to create Microsoft Entra Security Groups to administrators only.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under 'Manage' select 'Groups'\n4. Under 'Settings' select 'General'\n5. Under 'Security Groups' section, set 'Users can create security groups in Azure portals, API or PowerShell' to No\n6. Select 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-route-tables' AND json.rule = "routes[*].vpcPeeringConnectionId exists and routes[?(@.destinationCidrBlock=='0.0.0.0/0' || @.destinationIpv6CidrBlock == '::/0')].vpcPeeringConnectionId starts with pcx"``` | AWS route table with VPC peering overly permissive to all traffic
This policy identifies VPC route tables with VPC peering connections that are overly permissive to all traffic. Being highly selective with peering route tables is a very effective way of minimizing the impact of a breach, as resources outside of these routes are inaccessible to the peered VPC.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'VPC' dashboard from 'Services' dropdown\n4. From left menu, select 'Route Tables'\n5. Click on the alerted route table\n6. From top click on 'Action' button\n7. From the Action menu dropdown, select 'Edit routes'\n8. From the list of destinations, remove the overly permissive destination by clicking the cross symbol available for that destination\n9. Add a destination with 'least access'\n10. Click on 'Save Routes'. |
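The same route change via the AWS CLI, as a minimal sketch; the route table, CIDR, and peering connection IDs are placeholders:
```bash
# Delete the overly permissive route...
aws ec2 delete-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0

# ...and replace it with a least-access destination for the peering connection.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.10.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0
```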
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_error_verbosity or settings.databaseFlags[?any(name contains log_error_verbosity and value contains verbose)] exists)"``` | GCP PostgreSQL instance database flag log_error_verbosity is not set to default or stricter
This policy identifies PostgreSQL database instances in which database flag log_error_verbosity is not set to default. The flag log_error_verbosity controls the amount of detail written in the server log for each message that is logged. Valid values are TERSE, DEFAULT, and VERBOSE. It is recommended to set log_error_verbosity to default or terse.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_error_verbosity' from the drop-down menu and set the value as 'default' or 'terse'\nOR\nIf the flag has been set to other than default or terse, Under 'Customize your instance', In 'Flags' section choose the flag 'log_error_verbosity' and set the value as 'default' or 'terse'\n6. Click on 'DONE' and then 'SAVE'. |
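The flag can also be set with the gcloud CLI; a minimal sketch with a placeholder instance name. `--database-flags` replaces the existing set, so include any other flags the instance needs:
```bash
# Set log_error_verbosity to 'default' (or 'terse').
gcloud sql instances patch my-postgres-instance \
  --database-flags=log_error_verbosity=default
```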
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-table-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;``` | Azure Storage account diagnostic setting for table is disabled
This policy identifies Azure Storage account tables that have diagnostic logging disabled.
By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account tables. These logs provide valuable insights into the operations, performance, and security of the storage account tables.
As a best practice, it is recommended to enable diagnostic logs on all storage account tables.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the table resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'. |
```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = (profile equals MODERN or profile equals CUSTOM) and minTlsVersion does not equal "TLS_1_2" as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter "$.X.selfLink contains $.Y.sslPolicy"; show Y;``` | Check BC
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = "policies[*].policyAttributeDescriptions[?(@.attributeName=='DHE-RSA-AES128-SHA'|| @.attributeName=='DHE-DSS-AES128-SHA' || @.attributeName=='CAMELLIA128-SHA' || @.attributeName=='EDH-RSA-DES-CBC3-SHA' || @.attributeName=='DES-CBC3-SHA' || @.attributeName=='ECDHE-RSA-RC4-SHA' || @.attributeName=='RC4-SHA' || @.attributeName=='ECDHE-ECDSA-RC4-SHA' || @.attributeName=='DHE-DSS-AES256-GCM-SHA384' || @.attributeName=='DHE-RSA-AES256-GCM-SHA384' || @.attributeName=='DHE-RSA-AES256-SHA256' || @.attributeName=='DHE-DSS-AES256-SHA256' || @.attributeName=='DHE-RSA-AES256-SHA' || @.attributeName=='DHE-DSS-AES256-SHA' || @.attributeName=='DHE-RSA-CAMELLIA256-SHA' || @.attributeName=='DHE-DSS-CAMELLIA256-SHA' || @.attributeName=='CAMELLIA256-SHA' || @.attributeName=='EDH-DSS-DES-CBC3-SHA' || @.attributeName=='DHE-DSS-AES128-GCM-SHA256' || @.attributeName=='DHE-RSA-AES128-GCM-SHA256' || @.attributeName=='DHE-RSA-AES128-SHA256' || @.attributeName=='DHE-DSS-AES128-SHA256' || @.attributeName=='DHE-RSA-CAMELLIA128-SHA' || @.attributeName=='DHE-DSS-CAMELLIA128-SHA' || @.attributeName=='ADH-AES128-GCM-SHA256' || @.attributeName=='ADH-AES128-SHA' || @.attributeName=='ADH-AES128-SHA256' || @.attributeName=='ADH-AES256-GCM-SHA384' || @.attributeName=='ADH-AES256-SHA' || @.attributeName=='ADH-AES256-SHA256' || @.attributeName=='ADH-CAMELLIA128-SHA' || @.attributeName=='ADH-CAMELLIA256-SHA' || @.attributeName=='ADH-DES-CBC3-SHA' || @.attributeName=='ADH-DES-CBC-SHA' || @.attributeName=='ADH-RC4-MD5' || @.attributeName=='ADH-SEED-SHA' || @.attributeName=='DES-CBC-SHA' || @.attributeName=='DHE-DSS-SEED-SHA' || @.attributeName=='DHE-RSA-SEED-SHA' || @.attributeName=='EDH-DSS-DES-CBC-SHA' || @.attributeName=='EDH-RSA-DES-CBC-SHA' || @.attributeName=='IDEA-CBC-SHA' || @.attributeName=='RC4-MD5' || @.attributeName=='SEED-SHA' || @.attributeName=='DES-CBC3-MD5' || @.attributeName=='DES-CBC-MD5' || @.attributeName=='RC2-CBC-MD5' || @.attributeName=='PSK-AES256-CBC-SHA' || @.attributeName=='PSK-3DES-EDE-CBC-SHA' || @.attributeName=='KRB5-DES-CBC3-SHA' || @.attributeName=='KRB5-DES-CBC3-MD5' || @.attributeName=='PSK-AES128-CBC-SHA' || @.attributeName=='PSK-RC4-SHA' || @.attributeName=='KRB5-RC4-SHA' || @.attributeName=='KRB5-RC4-MD5' || @.attributeName=='KRB5-DES-CBC-SHA' || @.attributeName=='KRB5-DES-CBC-MD5' || @.attributeName=='EXP-EDH-RSA-DES-CBC-SHA' || @.attributeName=='EXP-EDH-DSS-DES-CBC-SHA' || @.attributeName=='EXP-ADH-DES-CBC-SHA' || @.attributeName=='EXP-DES-CBC-SHA' || @.attributeName=='EXP-RC2-CBC-MD5' || @.attributeName=='EXP-KRB5-RC2-CBC-SHA' || @.attributeName=='EXP-KRB5-DES-CBC-SHA' || @.attributeName=='EXP-KRB5-RC2-CBC-MD5' || @.attributeName=='EXP-KRB5-DES-CBC-MD5' || @.attributeName=='EXP-ADH-RC4-MD5' || @.attributeName=='EXP-RC4-MD5' || @.attributeName=='EXP-KRB5-RC4-SHA' || @.attributeName=='EXP-KRB5-RC4-MD5')].attributeValue equals true"``` | AWS Elastic Load Balancer (Classic) SSL negotiation policy configured with insecure ciphers
This policy identifies Elastic Load Balancers (Classic) which are configured with SSL negotiation policy containing insecure ciphers. An SSL cipher is an encryption algorithm that uses encryption keys to create a coded message. SSL protocols use several SSL ciphers to encrypt data over the Internet. As many of the other ciphers are not secure, it is recommended to use only the ciphers recommended in the following AWS link: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-ssl-security-policy.html.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On 'Listeners' tab, Change the cipher for the 'HTTPS/SSL' rule\nFor a 'Predefined Security Policy', change 'Cipher' to 'ELBSecurityPolicy-TLS-1-2-2017-01' or latest\nFor a 'Custom Security Policy', select from the secure ciphers as recommended in the below AWS link:\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-ssl-security-policy.html\n6. 'Save' your changes. |
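A minimal AWS CLI sketch of switching a listener to a current predefined security policy (the load balancer and policy names are placeholders):
```bash
# Create a policy that references a recommended predefined security policy.
aws elb create-load-balancer-policy \
  --load-balancer-name my-classic-elb \
  --policy-name tls-1-2-policy \
  --policy-type-name SSLNegotiationPolicyType \
  --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-TLS-1-2-2017-01

# Apply it to the HTTPS listener on port 443.
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name my-classic-elb \
  --load-balancer-port 443 \
  --policy-names tls-1-2-policy
```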
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-bigquery-dataset-list' AND json.rule = defaultEncryptionConfiguration.kmsKeyName does not exist``` | GCP BigQuery Dataset not configured with default CMEK
This policy identifies BigQuery Datasets that are not configured with default CMEK. Setting a Default Customer-managed encryption key (CMEK) for a data set ensures any tables created in the future will use the specified CMEK if none other is provided. It is recommended to configure all BigQuery Datasets with default CMEK.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure default Customer-managed encryption key (CMEK), use following command for "bq" utility\nbq update --default_kms_key=<CMEK> <DATASET_ID>\n\nPlease refer to URL mentioned below for more details on the bq update command:\nhttps://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_update. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(445,445) or destinationPortRanges[*] contains _Port.inRange(445,445) ))] exists``` | Azure Network Security Group allows all traffic on Windows SMB (TCP Port 445)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows SMB TCP port 445. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SMB access solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
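If denying the rule is the chosen option, the Azure CLI equivalent is a one-liner; a minimal sketch with placeholder resource group, NSG, and rule names:
```bash
# Flip the offending inbound rule from Allow to Deny.
az network nsg rule update \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name allow-smb \
  --access Deny
```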
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = 'policy.Statement[?any((Condition.StringNotEquals contains aws:SourceVpce and Effect equals Deny and (Action contains s3:* or Action[*] contains s3:*)) or (Condition.StringEquals contains aws:SourceVpce and Effect equals Allow and (Action contains s3:* or Action[*] contains s3:*)))] exists'``` | AWS S3 bucket having policy overly permissive to VPC endpoints
This policy identifies S3 buckets whose bucket policy is overly permissive to VPC endpoints. It is recommended to follow the principle of least privilege, ensuring that VPC endpoints have only the necessary permissions instead of full permissions on S3 operations.
NOTE: When applying the Amazon S3 bucket policies for VPC endpoints described in this section, you might block your access to the bucket without intending to do so. Bucket permissions that are intended to specifically limit bucket access to connections originating from your VPC endpoint can block all connections to the bucket. The policy might disable console access to the specified bucket because console requests don't originate from the specified VPC endpoint. So remediation should be done very carefully.
For details refer https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, click on the 'Bucket Policy'\n5. Update the S3 bucket policy for the VPC endpoint so that it has only required permissions instead of full S3 permission.\nRefer for example: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and policies.default.Statement[?any(Principal.AWS contains * and Effect equal ignore case allow and Condition does not exist)] exists``` | AWS KMS Key policy overly permissive
This policy identifies KMS Keys that have an overly permissive key policy. Key policies are the primary way to control access to customer master keys (CMKs) in AWS KMS. It is recommended to follow the principle of least privilege, ensuring that the KMS key policy does not grant permissions that could enable a malicious action.
For more details:
https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html#overview-policy-elements
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop-down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. Select reported KMS Customer managed key\n6. Click on the 'Key policy' tab\n7. Click on 'Edit',\nReplace the 'Everyone' grantee (i.e. '*') from the Principal element value with an AWS account ID or an AWS account ARN.\nOR\nAdd a Condition clause to the existing policy statement so that the KMS key is restricted.\n8. Click on 'Save Changes'. |
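For CLI-based review and update, a minimal sketch; the key ID is a placeholder, and policy.json is a corrected policy document you prepare:
```bash
# Dump the current key policy for review.
aws kms get-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default \
  --output text > policy.json

# After editing out the '*' principal or adding a Condition, upload it.
aws kms put-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default \
  --policy file://policy.json
```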
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isDotnetcoreVersionLatest exists and config.isDotnetcoreVersionLatest equals false'``` | Azure App Service Web app doesn't use latest .Net Core version
This policy identifies App Service web apps that are not configured with the latest .NET Core version. Newer versions of .NET Core are released periodically, either to fix security flaws or to include additional functionality. It is recommended to use the latest .NET Core version for web apps in order to take advantage of security fixes, if any.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under the 'Settings' section, click on 'Configuration'\n5. Click on the 'General settings' tab, and ensure that Stack is set to .NET and Minor version is set to the latest version.\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals "null") and keyMetadata.customerMasterKeySpec equals SYMMETRIC_DEFAULT``` | AWS Customer Master Key (CMK) rotation is not enabled
This policy identifies Customer Master Keys (CMKs) that are not enabled with key rotation. AWS KMS (Key Management Service) allows customers to create master keys to encrypt sensitive data in different services. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop-down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. Select reported KMS Customer managed key\n6. Under the 'Key Rotation' tab, Enable 'Automatically rotate this CMK every year'\n7. Click on Save. |
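Step 6 can also be done with boto3, as in the sketch below; the key ID is a placeholder. Automatic rotation applies only to symmetric customer-managed keys.

```python
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Turn on automatic annual rotation for the symmetric CMK,
# then confirm the new rotation status.
kms.enable_key_rotation(KeyId=key_id)
status = kms.get_key_rotation_status(KeyId=key_id)
print(status["KeyRotationEnabled"])  # expected: True
```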
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty``` | Azure Database for MariaDB not configured with private endpoint
This policy identifies Azure MariaDB database servers that are not configured with a private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configuring a private endpoint enables access only to traffic coming from known networks and prevents access from malicious or unknown IP addresses, including IP addresses within Azure. It is recommended to create a private endpoint for secure communication with your Azure MariaDB database.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure private endpoint for MariaDB, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/mariadb/howto-configure-privatelink-portal. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and ['sqlServer'].['properties.privateEndpointConnections'] is empty and firewallRules[*] is empty``` | Azure SQL server public network access setting is enabled
This policy identifies Azure SQL servers that have the public network access setting enabled.
Publicly accessible SQL servers are vulnerable to external threats, with the risk of unauthorized access, and attackers may remotely exploit any vulnerabilities.
It is recommended to configure the SQL servers with IP-based strict server-level firewall rules or virtual-network rules or private endpoints so that servers are accessible only to restricted entities.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure IP-based strict server-level firewall rules on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/firewall-create-server-level-portal-quickstart\n\nTo configure virtual-network rules on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/vnet-service-endpoint-rule-overview\n\nTo configure private endpoints on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/private-endpoint-overview\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting. |
```config from cloud.resource where api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equal ignore case "ACTIVE" and backendSets.*.backends is empty OR backendSets.*.backends equals "[]"``` | OCI Load Balancer not configured with backend set
This policy identifies OCI Load Balancers that have no backend set configured.
A backend set is a crucial component of a Load Balancer, comprising a load balancing policy, a health check policy, and a list of backend servers. Without a backend set, the Load Balancer lacks the necessary configuration to distribute incoming traffic and monitor the health of backend servers.
As a best practice, it is recommended to properly configure the backend set so that the Load Balancer can function effectively, distribute incoming traffic, and maintain the reliability of backend services.
This is applicable to oci cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure the OCI Load Balancers with backend sets, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets_topic-Creating_Backend_Sets.htm#top. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Tcp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists``` | Azure overly permissive HTTP(S) access
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = (serverSecurityAlertPolicy.properties.state equal ignore case Disabled) or (serverSecurityAlertPolicy.properties.state equal ignore case Enabled and vulnerabilityAssessments[*].type does not exist)``` | Azure SQL Server ADS Vulnerability Assessment is disabled
This policy identifies Azure SQL Servers that have the ADS Vulnerability Assessment setting disabled. The Advanced Data Security - Vulnerability Assessment service scans SQL databases for known security vulnerabilities and highlights deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data. It is recommended to enable the ADS - VA service.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on 'Enable Microsoft Defender for SQL' if Azure Defender is not enabled for SQL already\n5. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n6. Ensure that 'MICROSOFT DEFENDER FOR SQL' status is 'ON'\n7. 'Save' your changes. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.accountId) and (Action contains "s3:Put*" or Action contains "s3:Delete*" or Action equals "*" or Action contains "s3:*" or Action is member of ('s3:DeleteBucketPolicy','s3:PutBucketAcl','s3:PutBucketPolicy','s3:PutEncryptionConfiguration','s3:PutObjectAcl') ))] exists``` | AWS S3 bucket with cross-account access
This policy identifies AWS S3 buckets whose bucket policy allows one or more of the actions (s3:DeleteBucketPolicy, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutEncryptionConfiguration, s3:PutObjectAcl) for a principal in another AWS account.
An S3 bucket policy defines permissions and conditions for accessing an Amazon S3 bucket and its objects. Granting permissions like s3:DeleteBucketPolicy, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutEncryptionConfiguration, and s3:PutObjectAcl to other AWS accounts can lead to unauthorized access and potential data breaches.
It is recommended to review and remove permissions from the S3 bucket policy by deleting statements that grant access to restricted actions for other AWS accounts.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Choose Permissions, and then choose Bucket Policy.\n5. In the Bucket policy editor text box, do one of the following:\n 5a. Remove the statements that grant the restricted actions to other AWS accounts\n or\n 5b. Remove the restricted actions from the statements\n6. Choose Save. |
```config from cloud.resource where cloud.type = 'ibm' and api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id is member of ("crn:v1:bluemix:public:cloud-object-storage::::serviceRole:ObjectReader","crn:v1:bluemix:public:cloud-object-storage::::serviceRole:ContentReader") )] exists and resources[?any( attributes[?any( name equal ignore case "resourceType" and value equal ignore case "bucket" and operator is member of ("stringEquals", "stringMatch") )] exists )] exists and subjects[?any( attributes[?any( name contains "access_group_id" and value contains "AccessGroupId-PublicAccess")] exists )] exists as X; config from cloud.resource where api.name = 'ibm-object-storage-bucket' as Y; filter ' $.X.resources[*].attributes[*].value intersects $.Y.name and $.X.resources[*].attributes[*].value intersects $.Y.service_instance_id '; show Y;``` | IBM Cloud Object Storage bucket is publicly readable through an access group
This policy identifies an IBM Cloud Object Storage bucket that is publicly readable by 'ObjectReader' or 'ContentReader' roles via the public access group.
The IBM public access group is a predefined group that manages public permissions and access control for resources and services. Assigning an access policy for a resource to the public access group provides access to that resource to anyone, whether or not they are a member of your account, because authentication is no longer required. With this configuration, you risk compromising critical data by leaving the IBM Cloud Object Storage bucket public.
As a best security practice, avoid adding policies to the public access group to make sure buckets are not publicly accessible.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: To remove the public access policy for a bucket,\n\n1. Log in to the IBM Cloud console\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Access groups' in the left panel\n3. Click on the 'Public Access' access group\n4. Click on the three dots in the right corner of a row for the policy that has the reported resource or bucket name in the Resources section\n5. Click on 'Remove' to delete the public access policy in the reported resource\n6. Review the policy details that you're about to remove, and confirm by clicking 'Remove'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = authenticationType equals "API_KEY" or additionalAuthenticationProviders[?any( authenticationType equals "API_KEY" )] exists``` | AWS AppSync GraphQL API is authenticated with API key
This policy identifies AWS AppSync GraphQL APIs that use an API key as the primary or an additional authentication method.
AWS AppSync GraphQL API is a fully managed service by Amazon Web Services for building scalable and secure GraphQL APIs. An API key is a hard-coded value in your application, generated by the AWS AppSync service when you create an unauthenticated GraphQL endpoint. Using API keys for authentication can pose security risks such as exposure to unauthorized access and limited control over access privileges, potentially compromising sensitive data and system integrity.
It is recommended to use authentication methods other than API Keys like IAM, Amazon Cognito User Pools, or OpenID Connect providers for securing AWS AppSync GraphQL APIs, to ensure enhanced security and access control.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: Changing the API authorization mode from API key to other methods could cause potential disruptions to existing clients or applications relying on API key authentication. It may require updates to client configurations and authentication workflows for your applications.\n\nTo update the Primary authorization mode option for your AWS AppSync GraphQL API, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Front-end Web & Mobile', select 'AWS AppSync'\n4. Under the 'APIs' section, select the AppSync API that is reported\n5. Navigate to the 'Settings page' from the left panel, Click 'Edit' on the 'Primary authorization mode' section\n6. In the 'Primary authorization mode' window, change the 'Authorization mode' from 'API key' to other authentication methods and configure it according to your business requirements\n7. Click 'Save'\n\nTo update the Additional authorization modes for your AWS AppSync GraphQL API, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Front-end Web & Mobile', select 'AWS AppSync'\n4. Under the 'APIs' section, select the AppSync API that is reported\n5. Navigate to the 'Settings page' from the left panel, and click 'Add' in the 'Additional authorization modes' section.\n6. In the 'Additional authorization mode' window, select any 'Authorization mode' except 'API key' and configure according to your business requirements, and click 'Add'\n7. Navigate to the 'Settings page' from the left panel, select the 'API key' in the 'Authorization mode' column from the 'Additional authorization modes' section, and click 'Delete' to remove the API key authorization mode. |
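A minimal boto3 sketch of the primary-mode change is shown below; the API ID is a placeholder. Note the same caveat as above: clients still authenticating with the API key will break once the mode changes.

```python
import boto3

appsync = boto3.client("appsync")

api_id = "abcdefghijklmnopqrstuvwxyz"  # placeholder API ID

# Look up the current API (update_graphql_api requires the name),
# then switch the primary authorization mode from API_KEY to IAM.
api = appsync.get_graphql_api(apiId=api_id)["graphqlApi"]
appsync.update_graphql_api(
    apiId=api_id,
    name=api["name"],
    authenticationType="AWS_IAM",
)
```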
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false``` | AWS EKS cluster endpoint access publicly enabled
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'. |
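The same change can be applied with boto3, as in the sketch below; the cluster name is a placeholder.

```python
import boto3

eks = boto3.client("eks")

# Disable the public API endpoint and enable the private one.
# The update is asynchronous; poll describe_update or the console for status.
response = eks.update_cluster_config(
    name="my-cluster",  # placeholder cluster name
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
print(response["update"]["status"])
```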
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = 'customerSecretKeys[?any(lifecycleState equals ACTIVE and (_DateTime.ageInDays(timeCreated) > 90))] exists'``` | OCI users customer secret keys have aged more than 90 days without being rotated
This policy identifies IAM user customer secret keys that have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect access through OCI customer secret keys, whether used directly or via SDKs or the OCI CLI.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Select Identity & Security from the Services menu.\n3. Select Users from the Identity menu.\n4. Click on an individual user under the Name heading.\n5. Click on Customer Secret Keys in the lower left-hand corner of the page.\n6. Delete any customer secret keys with a date of 90 days or older under the Created column of the Customer Secret Keys. |
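For auditing at scale, a minimal sketch using the OCI Python SDK is shown below; the user OCID is a placeholder and a configured ~/.oci/config is assumed. Create and distribute a replacement key before deleting the old one.

```python
from datetime import datetime, timezone

import oci

config = oci.config.from_file()  # assumes a configured ~/.oci/config
identity = oci.identity.IdentityClient(config)

user_id = "ocid1.user.oc1..example"  # placeholder user OCID

# List the user's customer secret keys and remove any active key
# older than 90 days (after a replacement key has been rolled out).
keys = identity.list_customer_secret_keys(user_id).data
for key in keys:
    age_days = (datetime.now(timezone.utc) - key.time_created).days
    if key.lifecycle_state == "ACTIVE" and age_days > 90:
        identity.delete_customer_secret_key(user_id, key.id)
```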
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = settings[?any( name equals MCAS and properties.enabled is false )] exists ``` | Azure Microsoft Defender for Cloud MCAS integration Disabled
This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) subscriptions that have the Microsoft Defender for Cloud Apps (MCAS) integration disabled. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable the MCAS integration.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Integrations'\n6. Check/Enable option 'Allow Microsoft Defender for Cloud Apps to access my data'\n7. Select 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-fsx-file-system' AND json.rule = FileSystemType equals "WINDOWS" and ( WindowsConfiguration.AuditLogConfiguration.FileAccessAuditLogLevel equals "DISABLED" AND WindowsConfiguration.AuditLogConfiguration.FileShareAccessAuditLogLevel equals "DISABLED")``` | AWS FSX Windows filesystem is not configured with file access auditing
This policy identifies AWS FSx Windows file systems that lack configuration for FileAccessAuditLogLevel and FileShareAccessAuditLogLevel.
Amazon FSx for Windows File Server offers the capability to audit user access to files, folders, and file shares. The settings for FileAccessAuditLogLevel and FileShareAccessAuditLogLevel can be adjusted to record successful access attempts, failed attempts, both, or none, based on your auditing needs. Failing to configure these audit logs may result in unrecognized unauthorized access and potential non-compliance with security standards.
It is advisable to set up logging for both file and folder access as well as file share access in alignment with your business needs. This ensures thorough logging, enhances visibility and accountability, supports compliance, and facilitates effective monitoring and incident response.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To change the file access auditing configuration, Perform the following actions:\n\n1. Sign into the AWS console and Open the Amazon FSx console at https://console.aws.amazon.com/fsx/\n2. Navigate to 'File systems', and choose the Windows file system that is reported\n3. Choose the 'Administration' tab\n4. On the 'File Access Auditing' panel, choose 'Manage'\n5. On the 'Manage file access auditing settings dialog', change the desired settings\n 5a. For 'Log access to files and folders', select the 'Log successful attempts' and/or 'Log failed attempts'\n \n or\n\n 5b. For 'Log access to file shares', select the 'Log successful attempts' and/or 'Log failed attempts'\n6. For 'Choose an audit event log destination', choose 'CloudWatch Logs' or 'Kinesis Data Firehose'. Then choose an existing log or delivery stream or create a new one\n7. Choose 'Save'. |
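A hedged boto3 sketch of the same change follows; the file system ID and log group ARN are placeholders. Adjust the audit levels to match your auditing needs.

```python
import boto3

fsx = boto3.client("fsx")

# Enable auditing of both successful and failed attempts for file access
# and file share access, delivering events to a CloudWatch Logs group.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    WindowsConfiguration={
        "AuditLogConfiguration": {
            "FileAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            "FileShareAccessAuditLogLevel": "SUCCESS_AND_FAILURE",
            # placeholder log group ARN
            "AuditLogDestination": "arn:aws:logs:us-east-1:111122223333:log-group:/fsx/audit",
        }
    },
)
```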
```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-bedrock-custom-model' as Y; filter ' $.Y.outputDataConfig.bucketName equals $.X.bucketName'; show X;``` | AWS S3 bucket used for storing AWS Bedrock Custom model training artifacts
This policy identifies the AWS S3 bucket used for storing AWS Bedrock Custom model training job output.
S3 buckets hold the results and artifacts generated from training models in AWS Bedrock. Ensuring proper configuration and access control is crucial to maintaining the security and integrity of the training output. Improperly secured S3 buckets used for storing AWS Bedrock training output can lead to unauthorized access and potential exposure of model information.
It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Bedrock training job output and ensure compliance.
NOTE: This policy is designed to identify the S3 buckets utilized for storing results and storing artifacts generated from training custom models in AWS Bedrock. It does not signify any detected misconfiguration or security risk.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the AWS Bedrock Custom model training results data, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.systemUpdatesMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'``` | Azure Microsoft Defender for Cloud system updates monitoring is set to disabled
This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have system updates monitoring set to disabled. It retrieves a daily list of available security and critical updates from Windows Update or Windows Server Update Services. The retrieved list depends on the service that's configured for that virtual machine and recommends that the missing updates be applied. For Linux systems, the policy uses the distro-provided package management system to determine packages that have available updates. It also checks for security and critical updates from Azure Cloud Services virtual machines.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'System updates should be installed on your machines' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = lifecycleState equal ignore case ACTIVE and capabilities.canUseConsolePassword is true and isMfaActivated is false``` | OCI MFA is disabled for IAM users
This policy identifies Identity and Access Management (IAM) users for whom Multi-Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection and increase the security of your OCI users' identities and sign-in process.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MFA'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from Services menu\n3. Select Users from Identity menu.\n4. Click on each non-compliant user.\n5. Click on Enable Multi-Factor Authentication.\n\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL. |
```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-model' as Y; filter ' $.Y.artifactUri contains $.X.id '; show X;``` | GCP Storage Bucket storing Vertex AI model
This policy identifies GCS buckets that are used to store GCP Vertex AI models.
GCP Vertex AI models (except AutoML models) are stored in Storage buckets. Vertex AI models are considered sensitive and confidential intellectual property, and their storage location should be checked regularly. The storage location should meet your organization's security and compliance requirements.
It is recommended to monitor, identify, and evaluate the storage location of GCP Vertex AI models regularly to prevent unauthorized access and AI model theft.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Review and validate that the Vertex AI models are stored in the right Storage buckets. Move and/or delete the model and other related artifacts if they are found in an unexpected location. Review how the model was uploaded to an unauthorized/unapproved storage bucket. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(27017,27017) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on MongoDB port (27017)
This policy identifies GCP Firewall rules which allow all inbound traffic on MongoDB port (27017). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the MongoDB port (27017) should be allowed to specific IP addresses.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule does indeed need to be restricted, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty``` | Azure Database for MySQL server not configured with private endpoint
This policy identifies Azure MySQL database servers that are not configured with a private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configuring a private endpoint enables access only to traffic coming from known networks and prevents access from malicious or unknown IP addresses, including IP addresses within Azure. It is recommended to create a private endpoint for secure communication with your Azure MySQL database.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for MySQL servers'\n3. Click on the reported MySQL server instance you want to modify\n4. Select 'Networking' under 'Settings' from the left panel\n5. Under 'Private endpoint', click on 'Add private endpoint' to add a private endpoint\n\nRefer to the below link for the step-by-step process:\nhttps://learn.microsoft.com/en-us/azure/mysql/single-server/how-to-configure-private-link-cli. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-group-settings' and json.rule = values[?any(name equals LockoutDurationInSeconds and (value less than 60 or value does not exist))] exists``` | Azure Microsoft Entra ID account lockout duration less than 60 seconds
This policy identifies if the account lockout duration for Microsoft Entra ID (formerly Azure AD) accounts is configured to be less than 60 seconds. The lockout duration determines how long the account remains locked after exceeding the lockout threshold.
A lockout duration of less than 60 seconds increases the risk of brute-force or password spray attacks. Malicious actors can exploit a short lockout period to attempt multiple logins more frequently, increasing the likelihood of gaining unauthorized access. Configuring the lockout duration to be at least 60 seconds helps reduce the frequency of repeated login attempts during a brute-force attack, improving protection against such attacks while ensuring a reasonable delay for legitimate users after exceeding the threshold.
As a security best practice, it is recommended to configure the account lockout duration to greater than or equal to 60 seconds.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under Manage, select Security\n4. Under Manage, select Authentication methods\n5. Under Manage, select Password protection\n6. Set the 'Lockout duration in seconds' to 60 or higher\n7. Click 'Save'. |
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.AuthenticationConfiguration.KerberosConfiguration does not exist)' ; show X;``` | AWS EMR cluster is not configured with Kerberos Authentication
This policy identifies EMR clusters which are not configured with Kerberos Authentication. Kerberos uses secret-key cryptography to provide strong authentication so that passwords or other credentials aren't sent over the network in an unencrypted format.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration\n7. Under the section 'Enable Kerberos authentication' select the check box\n8. Follow below link for configuration steps,\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-kerberos.html\n9. Click on 'Create' button\n10. On the left menu of EMR dashboard, click 'Clusters'\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu\n12. In the Cloning popup, choose 'Yes' and click 'Clone'\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'\n15. Once the new cluster is set up, verify it is working, and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard, click 'Clusters', from the list of clusters select the source cluster which is alerted\n17. Click on the 'Terminate' button from the top menu\n18. On the 'Terminate clusters' pop-up, click 'Terminate'. |
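Steps 4-9 can be scripted with boto3, as in the sketch below. The configuration name and ticket lifetime are placeholders; the JSON shape follows the cluster-dedicated KDC example in the AWS EMR Kerberos documentation linked above.

```python
import json
import boto3

emr = boto3.client("emr")

# A cluster-dedicated KDC configuration, per the AWS EMR Kerberos docs.
kerberos_security_config = {
    "AuthenticationConfiguration": {
        "KerberosConfiguration": {
            "Provider": "ClusterDedicatedKdc",
            "ClusterDedicatedKdcConfiguration": {"TicketLifetimeInHours": 24},
        }
    }
}

emr.create_security_configuration(
    Name="kerberos-security-config",  # placeholder name
    SecurityConfiguration=json.dumps(kerberos_security_config),
)
# Reference this configuration (plus KerberosAttributes such as Realm and
# KdcAdminPassword) in run_job_flow when launching the replacement cluster.
```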
```config from cloud.resource where api.name = 'azure-container-registry' AND json.rule = (skuName contains Standard or skuName contains Premium) and properties.provisioningState equal ignore case Succeeded and properties.anonymousPullEnabled is false``` | Azure Container Registry with anonymous authentication enabled
This policy identifies Azure Container Registries with anonymous authentication enabled, allowing unauthenticated access to the registry.
Allowing anonymous pull or access to container registries poses a significant security risk, exposing them to unauthorized users who may retrieve or manipulate container images. To enhance security, disable anonymous access and require authentication through Azure Active Directory (Azure AD). Additionally, local authentication methods such as admin user, repository-scoped access tokens, and anonymous pull should be turned off to ensure authentication relies solely on Azure AD, providing improved control and accountability.
As a security best practice, it is recommended to disable anonymous authentication for Azure Container Registries.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Currently, the Azure UI does not support disabling anonymous authentication for Azure Container Registries. To disable anonymous authentication, refer to the following link:\nhttps://learn.microsoft.com/en-us/azure/container-registry/anonymous-pull-access#disable-anonymous-pull-access. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/write" as X; count(X) less than 1``` | Azure Activity log alert for Create or update network security group rule does not exist
This policy identifies the Azure accounts in which an activity log alert for 'Create or update network security group rule' does not exist. Creating an activity log alert for Create or update network security group rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Security Rule (Microsoft.Network/networkSecurityGroups/securityRules)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where api.name = 'aws-rds-db-cluster' as X; config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = (((DBParameterGroupFamily starts with "postgres" or DBParameterGroupFamily starts with "aurora-postgresql") and (['parameters'].['rds.force_ssl'].['ParameterValue'] does not equal 1 or ['parameters'].['rds.force_ssl'].['ParameterValue'] does not exist)) or ((DBParameterGroupFamily starts with "aurora-mysql" or DBParameterGroupFamily starts with "mysql") and (parameters.require_secure_transport.ParameterValue is not member of ("ON", "1") or parameters.require_secure_transport.ParameterValue does not exist))) as Y; filter '$.X.dBclusterParameterGroupArn equals $.Y.DBClusterParameterGroupArn' ; show X;``` | AWS RDS cluster encryption in transit is not configured
This policy identifies AWS RDS database clusters that are not configured with encryption in transit. This covers MySQL, PostgreSQL, and Aurora clusters.
Enabling encryption is crucial to protect data as it moves through the network and enhances the security between clients and storage servers. Without encryption, sensitive data transmitted between your application and the database is vulnerable to interception by malicious actors. This could lead to unauthorized access, data breaches, and potential compromises of confidential information.
It is recommended that data be encrypted while in transit to ensure its security and reduce the risk of unauthorized access or data breaches.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable the in-transit encryption feature for your Amazon RDS cluster, perform the following actions:\nDefault cluster parameter groups for RDS DB clusters cannot be modified. Therefore, you must create a custom parameter group, modify it, and then attach it to your RDS cluster. Changes to parameters in a customer-created DB cluster parameter group are applied to all DB clusters that are associated with the DB cluster parameter group.\nFollow the below links to create and associate a DB parameter group with a DB cluster,\nTo create a DB cluster parameter group, refer to the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBClusterParamGroups.html#USER_WorkingWithParamGroups.CreatingCluster\nTo modify parameters in a DB cluster parameter group,\n1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.\n2. In the navigation pane, choose 'Parameter Groups'.\n3. In the list, choose the parameter group that is associated with the reported RDS DB Cluster.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the values of the parameters that you want to modify. You can scroll through the parameters using the arrow keys at the top right of the dialog box.\n6. In the 'Modifiable parameters' section, enter 'rds.force_ssl' in the Filter Parameters search box for PostgreSQL and Aurora PostgreSQL databases, and type 'require_secure_transport' in the search box for MySQL and Aurora MySQL databases.\n a. For the 'rds.force_ssl' database parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature.\n or\n b. For the 'require_secure_transport' parameter, enter '1' for MySQL Databases or 'ON' for Aurora MySQL databases based on allowed values in the Value configuration box to enable the Transport Encryption feature.\n7. Choose Save changes.\n8. Reboot the primary (writer) DB instance in the cluster to apply the changes to it.\n9. Then reboot the reader DB instances to apply the changes to them. |
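A minimal boto3 sketch of steps 5-7 follows; the parameter group name is a placeholder. Pick the parameter that matches the cluster's engine family, and remember the reboots in steps 8-9.

```python
import boto3

rds = boto3.client("rds")

# For PostgreSQL/Aurora PostgreSQL clusters: force SSL connections.
# 'pending-reboot' defers the change until the instances are rebooted.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="custom-aurora-postgresql14",  # placeholder group
    Parameters=[
        {
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",
        }
    ],
)
# For MySQL/Aurora MySQL clusters, set 'require_secure_transport' instead,
# with a value of '1' (RDS MySQL) or 'ON' (Aurora MySQL).
```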
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-spanner-database' AND json.rule = state equal ignore case ready AND encryptionConfig.kmsKeyNames does not exist``` | GCP Spanner Databases not encrypted with CMEK
This policy identifies GCP Spanner databases that are not encrypted with a Customer-Managed Encryption Key (CMEK).
Google Cloud Spanner is a scalable, globally distributed, and strongly consistent database service. By using CMEK with Spanner, you retain complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in certain industries.
It is recommended to encrypt Spanner database data using a Customer-Managed Encryption Key (CMEK).
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Encryption configuration can only be updated during spanner database creation. Follow the below steps to create a new spanner database with a customer-managed encryption key:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the Cloud Spanner page\n2. Under instances, select the instance under which the reported database exists\n3. Under databases, select the 'CREATE DATABASE' option\n4. Under the create database page, under the 'SHOW ENCRYPTION OPTIONS' section, select 'Cloud KMS Key'\n5. Select the KMS key you prefer\n6. Click on 'CREATE'.\n\nNote: It is recommended to migrate data from the old database to the newly created database. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudformation-describe-stacks' AND json.rule = "(($.stackResources[?( @.resourceType == 'AWS::EC2::SecurityGroup' || @.resourceType == 'AWS::EC2::SecurityGroupIngress' || @.resourceType == 'AWS::EC2::NetworkAclEntry')].resourceStatus any equal CREATE_COMPLETE) or ($.stackResources[?( @.resourceType == 'AWS::EC2::SecurityGroup' || @.resourceType == 'AWS::EC2::SecurityGroupIngress' || @.resourceType == 'AWS::EC2::NetworkAclEntry')].resourceStatus any equal UPDATE_COMPLETE)) and (($.cloudFormationTemplate.Resources.{}.SecurityGroupIngress[*].CidrIp any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.SecurityGroupIngress[*].CidrIpv6 any equal ::/0 or $.cloudFormationTemplate.Resources.{}.Properties.CidrIp any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.Properties.CidrIpv6 any equal ::/0) or ($.cloudFormationTemplate.Resources.{}.Properties.CidrBlock any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.Properties.Ipv6CidrBlock any equal ::/0 or $.cloudFormationTemplate.Resources.{}.Properties.Protocol any equal -1))"``` | AWS CloudFormation template contains globally open resources
This alert triggers if a CloudFormation template that when launched will result in resources allowing global network access. Below are three common causes:
- Security Group with a {0.0.0.0/0, ::/0} rule
- Network Access Control List with a {0.0.0.0/0, ::/0} rule
- Network Access Control List with -1 IpProtocol
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: Prisma Cloud encourages you to review the template and ensure this is the intended behavior.\n\n1. Go to the AWS CloudFormation dashboard.\n2. Click on the Stack you want to modify.\n3. Select the Template tab and then View in Designer.\n4. Make your template modifications.\n5. Check for syntax errors in your template by choosing Validate template near the top of the page.\n6. Select Save from the file (icon) menu.\n7. Choose Amazon S3 bucket, name your template and Save.\n8. Copy the bucket URL and click OK.\n9. Select Close to close Designer.\n10. Click on the Stack you want to modify.\n11. From the Actions pull down menu, select Update stack\n12. Choose Replace current template and paste the URL from Designer into the Amazon S3 URL field. Then click on Next.\n13. Specify stack details, then click on Next.\n14. Configure stack options, then click on Next.\n15. Review, then select Update stack near the bottom of the page. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Authorization/policyAssignments/write" as X; count(X) less than 1``` | Azure Activity log alert for Create policy assignment does not exist
This policy identifies the Azure accounts in which an activity log alert for 'Create policy assignment' does not exist. Creating an activity log alert for Create policy assignment gives insight into changes made to Azure policy assignments and may reduce the time it takes to detect unsolicited changes.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create policy assignment (Microsoft.Authorization/policyAssignments)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = enabled is true and origins.items[*] contains customOriginConfig and origins.items[?any(customOriginConfig.originProtocolPolicy does not contain https-only and ( domainName contains ".data.mediastore." or domainName contains ".mediapackage." or domainName contains ".elb." ))] exists``` | AWS CloudFront origin protocol policy does not enforce HTTPS-only
This policy identifies AWS CloudFront distributions that have an origin protocol policy which does not enforce HTTPS-only. Enforcing an HTTPS-only protocol policy between the origin and CloudFront will encrypt all communication and is more secure. As a security best practice, enforce HTTPS-only traffic between a CloudFront distribution and its origin.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: Communication between CloudFront and your Custom Origin should enforce HTTPS-only traffic. Modify the CloudFront Origin's Origin Protocol Policy to HTTPS only.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Origins' tab.\n4. Check the origin you want to modify then select Edit.\n5. Change the Origin Protocol Policy to 'https-only'.\n6. Select 'Yes, Edit'. |
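The console steps above can also be scripted; the boto3 sketch below (the distribution ID is a placeholder) switches every custom origin in the distribution to 'https-only'.

```python
import boto3

cf = boto3.client("cloudfront")

dist_id = "E1ABCDEF2GHIJK"  # placeholder distribution ID

# Fetch the current distribution config together with its ETag,
# switch every custom origin to https-only, and push the update back.
resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

for origin in config["Origins"]["Items"]:
    if "CustomOriginConfig" in origin:
        origin["CustomOriginConfig"]["OriginProtocolPolicy"] = "https-only"

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)
```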
```config from cloud.resource where api.name = 'oci-networking-loadbalancer' AND json.rule = listeners.*.protocol equals HTTP and lifecycleState equals ACTIVE and isPrivate is false as X; config from cloud.resource where api.name = 'oci-loadbalancer-waf' AND json.rule = lifecycleState equal ignore case ACTIVE and (webAppFirewallPolicyId exists and webAppFirewallPolicyId does not equal "null") as Y; filter 'not ($.X.id equals $.Y.loadBalancerId) '; show X;``` | OCI Load balancer not configured with Web application firewall (WAF)
This policy identifies OCI Load balancers that are not configured with a Web application firewall (WAF).
A Web Application Firewall (WAF) helps protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. Without WAF, load balancers are vulnerable to various web-based attacks, including SQL injection, cross-site scripting (XSS), and other common exploits. This can lead to unauthorized access, data breaches, and other security incidents.
As a best practice, it is recommended to configure Web Application Firewall (WAF) for OCI Load Balancers to enhance security.
This is applicable to oci cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure an OCI Load Balancer with a Web Application Firewall (WAF), refer to the following documentation:\nhttps://docs.oracle.com/en/learn/oci-waf-flex-lbaas/index.html#introduction. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-describe-cluster' AND json.rule = status.state does not contain TERMINATING and terminationProtected is false``` | AWS EMR cluster is not enabled with termination protection
This policy identifies the AWS EMR Cluster that is not enabled with termination protection.
Termination protection serves as a safeguard against unintentional termination of your clusters. When this feature is enabled, any efforts to terminate the cluster via the AWS Management Console, CLI, or API will be prevented unless the protection is deliberately disabled beforehand. This feature is particularly beneficial for long-running or essential clusters, as accidental termination could lead to data loss or considerable downtime.
It is advisable to activate termination protection on AWS EMR clusters to prevent accidental terminations.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To turn termination protection on for the AWS EMR cluster with the console, Perform the following actions:\n\n1. Sign in to the AWS Management Console, and open the Amazon EMR console at https://console.aws.amazon.com/emr\n2. Under EMR on EC2 in the left navigation pane, choose 'Clusters'\n3. Click on the cluster that is reported\n4. On the 'Properties' tab on the cluster details page, Under 'Cluster termination and node replacement' section click 'Edit'\n5. Select to use 'Termination protection' check box to turn the feature on or off\n6. Select 'Save changes' to confirm. |
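The same change is a one-liner with boto3; the cluster ID below is a placeholder.

```python
import boto3

emr = boto3.client("emr")

# Turn termination protection on for the reported cluster.
emr.set_termination_protection(
    JobFlowIds=["j-1ABCDEFGHIJKL"],  # placeholder cluster ID
    TerminationProtected=True,
)
```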
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case Ready and firewallRules[*] is empty and properties.network.publicNetworkAccess equal ignore case Enabled``` | Azure Database for MySQL flexible server public network access setting is enabled
This policy identifies Azure Database for MySQL flexible servers which have public network access setting enabled.
Publicly accessible MySQL servers are vulnerable to external threats, with the risk of unauthorized access, and attackers may remotely exploit any vulnerabilities.
As a best security practice, it is recommended to configure the MySQL servers with IP-based strict server-level firewall rules or virtual-network rules or private endpoints so that servers are accessible only to restricted entities.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure IP-based strict server-level firewall rules on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-manage-firewall-portal\n\nTo configure virtual-network rules on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-manage-virtual-network-portal\n\nTo configure private endpoints on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-networking-private-link-portal\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(25,25) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on SMTP port (25)
This policy identifies GCP Firewall rules which allow all inbound traffic on SMTP port (25). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the SMTP port (25) should be allowed to specific IP addresses.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule does indeed need to be restricted, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = secrets[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is true``` | Azure Key Vault secret has no expiration date (RBAC Key vault)
This policy identifies Azure Key Vault secrets that do not have an expiry date for the RBAC Key vaults. As a best practice, set an expiration date for each secret and rotate the secret regularly.
Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist
Alternatively, run the following command on the Azure cloud shell:
az keyvault list | jq '.[].id' | xargs -I {} az role assignment create --assignee "<Object ID of Prisma Cloud Principal>" --role "Key Vault Reader" --scope {}
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Key vaults'.\n3. Select the Key vault instance where the secrets are stored.\n4. Select 'Secrets', and select the secret that you need to modify.\n5. Select the current version.\n6. Set the expiration date.\n7. 'Save' your changes. |
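Expiration can also be set from the CLI with `az keyvault secret set-attributes`; the vault name, secret name, and date below are placeholders.

```bash
# Set an expiration date on the reported secret
# (vault, secret name, and date are placeholders).
az keyvault secret set-attributes \
  --vault-name my-keyvault \
  --name my-secret \
  --expires "2026-01-01T00:00:00Z"
```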
```config from cloud.resource where api.name = 'aws-code-build-project' AND json.rule = environment.privilegedMode exists and environment.privilegedMode is true``` | AWS CodeBuild project environment privileged mode is enabled
This policy identifies CodeBuild projects where privileged mode is enabled. Privileged mode grants unrestricted access to all devices and runs the Docker daemon inside the container. Enable this mode only when building Docker images; otherwise, it is recommended to disable privileged mode to prevent unintended access to Docker APIs and container hardware, reducing the risk of potential tampering or critical resource deletion.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To disable the Privileged mode for the CodeBuild project:\n\n1. Log in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Navigate to 'Developer Tools' from the 'Services' dropdown and select 'CodeBuild'.\n4. In the navigation pane, choose 'Build projects'.\n5. Select the reported build project and choose Edit, then click 'Environment'.\n6. On the Edit Environment page, expand the configuration by clicking the 'Override image' button.\n7. Uncheck the checkbox 'Enable this flag if you want to build Docker images or want your builds to get elevated privileges.' under the 'Privileged' section.\n8. When you have finished changing your CodeBuild environment configuration, click 'Update environment'. |
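A hedged AWS CLI sketch follows; note that `aws codebuild update-project` replaces the whole environment block, so the image and compute type here are placeholder assumptions that must match the project's current settings.

```bash
# Disable privileged mode; re-specify the full environment because
# update-project replaces it (all values shown are placeholders).
aws codebuild update-project \
  --name my-build-project \
  --environment 'type=LINUX_CONTAINER,image=aws/codebuild/standard:7.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=false'
```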
```config from cloud.resource where api.name='gcloud-sql-instances-list' AND json.rule='$.settings.backupConfiguration.binaryLogEnabled is false and $.databaseVersion contains MYSQL'``` | GCP SQL MySQL DB instance point-in-time recovery backup (Binary logs) is not enabled
This policy identifies Cloud SQL MySQL DB instances whose point-in-time recovery backup is not enabled. In case of an error, point-in-time recovery helps you recover an instance to a specific point in time. It is recommended to enable automated backups with point-in-time recovery to prevent any data loss in case of an unwanted scenario.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable point-in-time recovery backup (Binary logs) for the reported MySQL instance:\n\nhttps://cloud.google.com/sql/docs/mysql/backup-recovery/pitr. |
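A minimal gcloud sketch is shown below; binary logging requires automated backups, and the instance name and backup window are placeholders.

```bash
# Enable automated backups (required) and binary logging
# (instance name and backup window are placeholders).
gcloud sql instances patch my-mysql-instance \
  --backup-start-time=02:00 \
  --enable-bin-log
```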
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-elastic-address' AND json.rule = associationId does not exist``` | AWS Elastic IP not in use
This policy identifies unused Elastic IP (EIP) addresses in your AWS account. An Elastic IP that is not associated with any resource still adds charges to your monthly bill. As a best practice, it is recommended to associate Elastic IPs with resources or release the ones you no longer need; this also helps you avoid unexpected charges on your bill.
For more details:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-eips-associating
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the VPC dashboard\n4. Go to 'Elastic IPs', from the left panel\n5. Select the reported Elastic IP\n- If the Elastic IP is not required, release it by selecting 'Release Elastic IP address' from the 'Actions' dropdown.\n- If the Elastic IP is required, associate it by selecting 'Associate Elastic IP address' from the 'Actions' dropdown. |
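Unused addresses can also be found and released from the CLI; a sketch is shown below, where the allocation ID is a placeholder and the JMESPath filter assumes unassociated addresses carry no AssociationId.

```bash
# List Elastic IPs with no association, then release one
# (allocation ID is a placeholder).
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==`null`].[PublicIp,AllocationId]' \
  --output table
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```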
```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = metadataOptions.httpEndpoint exists and metadataOptions.httpEndpoint equals "enabled" and metadataOptions.httpPutResponseHopLimit greater than 1 as X; config from cloud.resource where api.name = 'aws-describe-auto-scaling-groups' as Y; filter ' $.X.launchConfigurationName equal ignore case $.Y.launchConfigurationName'; show X;``` | AWS Auto Scaling group launch configuration configured with Instance Metadata Service hop count greater than 1
This policy identifies Auto Scaling group launch configurations where the Instance Metadata Service (IMDS) response hop count is set to greater than 1. A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. With a metadata response hop limit greater than 1, the PUT response that contains the secret token can travel outside the EC2 instance. It is recommended to keep the metadata response hop limit at 1 for all your EC2 instances.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: You cannot modify a launch configuration after you create it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration with an IMDS hop count equal to 1.\n\nTo update the Auto Scaling group to use the new launch configuration, follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration, choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the 'Advanced details', go to the 'Metadata response hop limit' section.\n6. Edit the text box and set the value to 1.\n7. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n8. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n9. Select the check box next to the Auto Scaling group.\n10. A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n11. On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n12. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n13. When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances:\n\n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Refer to the 'Configure instance metadata options for existing instances' section at the following URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html\n\nTo delete the reported Auto Scaling group launch configuration, follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration, choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the Auto Scaling group launch configuration. |
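A hedged CLI equivalent of the copy-and-swap procedure is sketched below; the AMI, instance type, and names are placeholder assumptions, and `HttpTokens=required` is an extra hardening choice beyond the hop-limit fix itself.

```bash
# Create a replacement launch configuration with a hop limit of 1
# (names, AMI, and instance type are placeholders).
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-lc-imds-hop1 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --metadata-options 'HttpEndpoint=enabled,HttpTokens=required,HttpPutResponseHopLimit=1'

# Point the Auto Scaling group at the new launch configuration.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-lc-imds-hop1
```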
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains AuthorizeSecurityGroupIngress and $.X.filterPattern contains AuthorizeSecurityGroupEgress and $.X.filterPattern contains RevokeSecurityGroupIngress and $.X.filterPattern contains RevokeSecurityGroupEgress and $.X.filterPattern contains CreateSecurityGroup and $.X.filterPattern contains DeleteSecurityGroup) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Security group changes are not monitored
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_parser_stats or settings.databaseFlags[?any(name contains log_parser_stats and value contains on)] exists)"``` | GCP PostgreSQL instance database flag log_parser_stats is not set to off
This policy identifies PostgreSQL database instances in which the database flag log_parser_stats is not set to off. The PostgreSQL planner/optimizer is responsible for parsing and verifying the syntax of each query received by the server. The log_parser_stats flag enables a crude profiling method for logging parser performance statistics. Even though it can be useful for troubleshooting, it may significantly increase the volume of logs and add performance overhead. It is recommended to set log_parser_stats to off.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_parser_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_parser_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'. |
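The flag can also be set with gcloud, as sketched below; note that `--database-flags` replaces every flag on the instance, so list all flags you want to keep (instance name is a placeholder).

```bash
# WARNING: --database-flags overwrites ALL existing flags;
# include every flag you intend to keep (instance is a placeholder).
gcloud sql instances patch my-postgres-instance \
  --database-flags=log_parser_stats=off
```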
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and config.http20Enabled is false``` | Azure Logic App does not utilize HTTP 2.0 version
This policy identifies Azure Logic apps that are not utilizing HTTP 2.0.
An Azure Logic app using HTTP 1.0 for its connections is considered less secure, as HTTP 2.0 resolves the head-of-line blocking problem of the older HTTP version and adds header compression and prioritisation of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming.
As a security best practice, it is recommended to configure HTTP 2.0 version for Logic apps connections.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under 'Setting' section, click on 'Configuration'\n5. Under 'General settings' tab, Set 'HTTP version' to '2.0'\n6. Click on 'Save'. |
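Because Logic App Standard apps run on the App Service platform, the generic `az webapp config set` call below is assumed to apply; treat the command choice as an assumption and the names as placeholders.

```bash
# Enable HTTP 2.0 on the reported Logic app
# (assumes App Service config applies; names are placeholders).
az webapp config set \
  --resource-group my-rg \
  --name my-logic-app \
  --http20-enabled true
```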
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_system_policy_as_child_policies_ss_finding_1
Description-d2b8d109-2e3d-4743-8da0-41e105b5cecc
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (copyTagsToSnapshot is false or copyTagsToSnapshot does not exist) and engine does not contain aurora and engine does not contain docdb and engine does not contain neptune``` | AWS RDS instance with copy tags to snapshots disabled
This policy identifies RDS instances that have copy tags to snapshots disabled. Copy tags to snapshots copies all the user-defined tags from the DB instance to its snapshots. Copying tags allows you to add metadata and apply access policies to your Amazon RDS resources.
NOTE: Setting copy tags to snapshots for an Aurora DB instance has no effect on the DB setting, so Aurora DB instances are excluded from the policy check.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to the Amazon RDS console\n4. Choose DB Instances, and then select the reported DB instance\n5. Click on 'Modify'\n6. In the 'Additional Configuration' section, in the 'Backup' sub-section, select 'Copy tags to snapshots'\n7. Click on 'Continue'\n8. On the 'Summary of Modifications' panel, review the configuration changes. From the 'Scheduling of Modifications' section, select whether to 'Apply immediately' or 'Apply during the next scheduled maintenance window'.\n9. On the confirmation page, review the changes and click on 'Modify DB Instance' to save your changes. |
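The equivalent AWS CLI call is sketched below; the instance identifier is a placeholder, and `--apply-immediately` is optional.

```bash
# Turn on copy-tags-to-snapshot for the reported instance
# (identifier is a placeholder).
aws rds modify-db-instance \
  --db-instance-identifier my-db-instance \
  --copy-tags-to-snapshot \
  --apply-immediately
```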
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code contains active and listeners[?any(protocol equals HTTP and defaultActions[?any(type equals redirect and redirectConfig.protocol equals HTTPS)] does not exist )] exists``` | AWS Elastic Load Balancer v2 (ELBv2) listener that allow connection requests over HTTP
This policy identifies Elastic Load Balancer v2 (ELBv2) listeners that are configured to accept connection requests over HTTP instead of HTTPS. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the application load balancer.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. Click on 'Listeners' tab\n7.'Edit' the 'Listener ID' rule that uses HTTP\n8. Select 'HTTPS' in the 'Protocol : port' section, Choose appropriate Default action, Security policy and Default SSL certificate parameters as per your requirement.\n9. Click on 'Update'. |
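One common fix matching this policy's check is to make the HTTP listener redirect to HTTPS; a hedged CLI sketch follows, with a placeholder listener ARN.

```bash
# Change the HTTP listener's default action to an HTTPS redirect
# (listener ARN is a placeholder).
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
```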
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case Ready and properties.network.publicNetworkAccess equal ignore case Enabled and firewallRules[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists``` | Azure Database for MySQL flexible server firewall rule allow access to all IPv4 address
This policy identifies Azure Database for MySQL flexible servers which have firewall rule allowing access to all IPV4 address.
A MySQL server with a firewall rule whose start IP is 0.0.0.0 and end IP is 255.255.255.255 (i.e., all IPv4 addresses) allows access to the server from any host on the internet. Allowing access to all IPv4 addresses expands the potential attack surface and exposes the MySQL server to increased threats.
As a best security practice, it is recommended to configure the MySQL servers with restricted IP-based server-level firewall rules so that servers are accessible only to restricted entities.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Azure Database for MySQL flexible servers dashboard\n3. Click on the reported MySQL server\n4. Under 'Settings', click on 'Networking'.\n5. Under the 'Firewall rules' section, delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255. Add specific IPs as per your business requirement.\n6. Click on 'Save'\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting. |
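A CLI sketch for removing the offending rule is shown below; the resource group, server, and rule names are placeholders.

```bash
# List rules, then delete the 0.0.0.0-255.255.255.255 rule
# (names are placeholders).
az mysql flexible-server firewall-rule list \
  --resource-group my-rg --name my-mysql-server -o table
az mysql flexible-server firewall-rule delete \
  --resource-group my-rg --name my-mysql-server \
  --rule-name AllowAllIPv4 --yes
```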
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case normal and features.pullSecretApplied is false``` | IBM Cloud Kubernetes cluster has Image pull secrets disabled
This policy identifies IBM Cloud Kubernetes clusters with image pull secrets disabled. Image pull secrets store the registry credentials used to connect to the container registry. It is recommended to enable the image pull secrets feature for proper protection of personal information.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To enable image pull secrets feature on a Kubernetes cluster, refer following URLs:\nhttps://cloud.ibm.com/docs/containers?topic=containers-registry#imagePullSecret_migrate_api_key\nhttps://cloud.ibm.com/docs/containers?topic=containers-registry#update-pull-secret. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-rds-db-cluster' AND json.rule = status contains available and (engine contains postgres or engine contains mysql) and iamdatabaseAuthenticationEnabled is false``` | AWS RDS cluster not configured with IAM authentication
This policy identifies RDS clusters that are not configured with IAM authentication. If you enable IAM authentication, you don't need to store user credentials in the database, because authentication is managed externally using IAM. With IAM database authentication, network traffic to and from database clusters is encrypted using Secure Sockets Layer (SSL), you can centrally manage access to your database resources, and you use profile credentials instead of a password, for greater security.
For details:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable IAM authentication on your RDS cluster follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Enabling.html. |
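A minimal CLI sketch is shown below; the cluster identifier is a placeholder.

```bash
# Enable IAM database authentication on the reported cluster
# (identifier is a placeholder).
aws rds modify-db-cluster \
  --db-cluster-identifier my-rds-cluster \
  --enable-iam-database-authentication \
  --apply-immediately
```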
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.privateEndpointConnections[*] does not exist``` | Azure Cosmos DB Private Endpoint Connection is not configured
This policy identifies Cosmos DBs that are not configured with a private endpoint connection. Azure Cosmos DB private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Cosmos account from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure Private Endpoint Connection to Cosmos DB.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Refer to the following URL to configure Private endpoints on your Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints. |
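A hedged sketch of creating a private endpoint with the Azure CLI follows; all names are placeholders, and `--group-id Sql` assumes a core (SQL) API account.

```bash
# Create a private endpoint to the reported Cosmos DB account
# (names are placeholders; group-id Sql assumes the SQL API).
COSMOS_ID=$(az cosmosdb show --resource-group my-rg \
  --name my-cosmos-account --query id -o tsv)
az network private-endpoint create \
  --resource-group my-rg \
  --name my-cosmos-pe \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --private-connection-resource-id "$COSMOS_ID" \
  --group-id Sql \
  --connection-name my-cosmos-pe-conn
```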
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-user' AND json.rule = userType equals Guest``` | Azure Active Directory Guest users found
This policy identifies Azure Active Directory guest users. Azure Active Directory allows B2B collaboration, which lets you invite people from outside your organisation to be guest users in your cloud account. Avoid creating guest users in your cloud account unless you have a business need. Guest users are usually added outside your employee on-boarding/off-boarding process and could potentially be overlooked, leading to a potential vulnerability.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to 'Azure Active Directory' (Left Panel)\n3. Click on 'Users' under 'Manage'\n4. Search for reported user in search pane\n5. Select on check box for the reported user\n6. Click on 'Delete user' in top pane\n7. Select 'OK' to confirm\n\nNote: Verify impact caused by deleting Guest user. |
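Guest accounts can also be enumerated and removed from the CLI; a sketch with a placeholder object ID is shown below. Review the impact before deleting any account.

```bash
# List guest users, then delete a specific one after review
# (object ID is a placeholder).
az ad user list --filter "userType eq 'Guest'" \
  --query "[].{name:displayName, upn:userPrincipalName, id:id}" -o table
az ad user delete --id 00000000-0000-0000-0000-000000000000
```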
```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = name does not start with "gke-" and status equals RUNNING as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' as Y; filter '($.X.serviceAccounts[*].email equals $.Y.user) and not($.Y.roles[*] contains projects or $.Y.roles[*] all equal roles/viewer)'; show X;``` | GCP VM instances with excessive service account permissions
This policy identifies VM instances with service accounts that have excessive permissions beyond viewer/reader access. It is recommended that each instance that needs to call a Google API run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process:
- Create a new service account rather than using the Compute Engine default service account.
- Grant IAM roles to that service account for only the resources that it needs.
- Configure the instance to run as that service account.
- Configure the VM instance with the least permissive service account, granting only a viewer/reader role until more access is necessary.
Avoid granting more access than necessary and regularly check your service account permissions to make sure they are up-to-date.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: Note: To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance.\n\nTo change service account of the stopped instance:\n1. Login to GCP portal \n2. Go to Compute Engine\n3. Choose VM instances\n4. Click on the reported VM instance for which you want to change the service account\n5. If the instance is not stopped, click the Stop button. Wait for the instance to be stopped\n6. Next, click the Edit button\n7. Scroll down to the Service Account section, From the drop-down menu, select the desired service account\nNote: To fix this alert either you have to associate service account which has only viewer access or if VM has desired service account and access then dismiss the alert for particular VM instance.\n8. Click the Save button. |
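The stop/change/start flow maps to gcloud as sketched below; the instance, zone, and service account email are placeholders.

```bash
# Swap the instance onto a least-privilege service account
# (instance, zone, and SA email are placeholders).
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance \
  --zone=us-central1-a \
  --service-account=least-priv-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
gcloud compute instances start my-instance --zone=us-central1-a
```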
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_ss_update_child_policy_finding_1
Description-81f1240b-8ec0-4626-86af-79a0b93913f4
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-queue-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;``` | Azure Storage account diagnostic setting for queue is disabled
This policy identifies Azure Storage account queues that have diagnostic logging disabled.
By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account queues. These logs provide valuable insights into the operations, performance, and security of the storage account queues.
As a best practice, it is recommended to enable diagnostic logs on all storage account queues.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the queue resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'. |
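Diagnostic settings for the queue service can be scripted as sketched below; the account, names, and Log Analytics workspace are placeholders, and the category list assumes the standard StorageRead/StorageWrite/StorageDelete log categories.

```bash
# Enable queue-service diagnostic logs to a Log Analytics workspace
# (account, workspace, and names are placeholders).
STORAGE_ID=$(az storage account show --resource-group my-rg \
  --name mystorageacct --query id -o tsv)
az monitor diagnostic-settings create \
  --name queue-logs \
  --resource "${STORAGE_ID}/queueServices/default" \
  --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]' \
  --workspace my-log-analytics-workspace
```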
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isSpecialCharactersRequired isFalse'``` | OCI IAM password policy for local (non-federated) users does not have a symbol
This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not require a symbol in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Log in to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 SPECIAL CHARACTER.\n\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL. |
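An OCI CLI sketch is included below as an assumption-heavy illustration: the `oci iam authentication-policy update` command and the JSON key names are assumed to mirror the UpdateAuthenticationPolicy API model, so verify them against your CLI version before use; the tenancy OCID is a placeholder.

```bash
# Require a special character in the tenancy password policy
# (tenancy OCID is a placeholder; verify key names for your CLI).
oci iam authentication-policy update \
  --compartment-id ocid1.tenancy.oc1..exampleplaceholder \
  --password-policy '{"isSpecialCharactersRequired": true}'
```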