---
title: "Nirmata Control Hub"
description: "Cloud Control Point — continuous posture management and admission control for AWS, GCP, and Azure."
diataxis: how-to
applies_to:
  product: "nirmata-control-hub"
audience: ["platform-engineer","devsecops"]
last_updated: 2026-04-16
url: https://docs.nirmata.io/docs/controllers/nch-cloud/
---


## Introduction
Cloud Controller is an innovative admission controller designed for cloud environments, introduced by Nirmata to bring robust governance and security capabilities to any cloud or cloud service. Inspired by Kubernetes admission controllers like Kyverno, Cloud Controller fills a critical gap in cloud-native operations by enforcing policy-as-code standards directly in cloud resource configurations. This capability enables organizations to prevent misconfigurations from reaching production environments, ensuring that resources adhere to defined policies for security and compliance.

As a core component of the Nirmata Control Hub, Cloud Controller provides a unified solution for managing security and governance across pipelines, clusters, and cloud environments. With admission control, continuous background scanning, and event-driven reporting, Cloud Controller helps teams maintain a consistent and secure posture across their entire cloud infrastructure.

## Key Features

* **Cloud Admission Control:** Cloud Controller introduces admission control for cloud environments, allowing you to prevent misconfigurations before they impact production. It enforces policies at the moment resources are created or modified, ensuring compliance from the start.
* **Comprehensive Multi-Cloud Compatibility:** Designed to work with any cloud provider and service, Cloud Controller offers flexibility for diverse environments. Its policies can be applied universally, giving organizations consistent security and governance across all cloud platforms.
* **Continuous Background Scanning:** Beyond initial admission control, Cloud Controller performs ongoing scans of cloud resources, identifying and alerting teams to misconfigurations and potential vulnerabilities as environments evolve. This continuous monitoring enhances long-term compliance and security.
* **Event-Driven Reporting:** Cloud Controller generates detailed reports based on events, similar to Kyverno's report formats, and integrates with the working group policy API. These reports provide insights into policy compliance, security posture, and operational effectiveness.
* **Integration with Nirmata Control Hub:** As part of the Nirmata Control Hub, Cloud Controller enables centralized visibility into pipeline, cluster, and cloud security. By consolidating governance data in one platform, it empowers teams to proactively manage their security and compliance postures across all stages of the deployment lifecycle.

## AWS Asset Discovery

### AWS Organisation and Account Discovery

The AWS Organisation and Account Discovery feature introduces a new custom resource called `AWSOrgConfig`. This feature allows users to create an `AWSOrgConfig` for an Organisation Unit or root Org. The cloud controller will then discover all the child OUs for the configured org, create an `AWSOrgConfig` for them, and discover the AWS accounts within those OUs, creating `AWSAccountConfig` for them. The discovery process is recursive, ensuring that all child orgs and child accounts at all levels are discovered.

#### Example `AWSOrgConfig`

```yaml
apiVersion: nirmata.io/v1alpha1
kind: AWSOrgConfig
metadata:
  name: root
spec:
  customAssumeRoleName: DevTestAccountAccessRole
  orgID: r-zyre
  orgName: Root
  regions:
  - us-west-1
  roleARN: arn:aws:iam::<account-id>:role/<role-name>
  scanInterval: 1h
  services:
  - EKS
  - ECS
  - EC2
  - Lambda
  - RDS
```

#### Field Descriptions
- **orgID**: The ID of the organisation unit or root to be configured, assigned by AWS.
- **orgName**: The name of the organisation as desired by the user. It is recommended to keep it the same as the AWS assigned name.
- **regions**: The regions from which resources need to be scanned in the discovered child AWS accounts.
- **scanInterval**: The frequency of the scan.
- **services**: The services in which resources need to be scanned.
- **roleARN**: The ARN of the role that must be created in the management account. This role must have permissions to list and describe accounts and OUs, and it must be assumable by the IAM role that is bound to the cloud scanner's service account through the Pod Identity agent.
- **customAssumeRoleName**: The name of the IAM role that must be present in each discovered account, with permissions to fetch resources in the specified services. Its permissions are similar to those of the scanner role.

## Licensing

Nirmata Control Hub is **commercial software** available under a paid Nirmata subscription. Use is governed by the [Nirmata Terms of Use](https://nirmata.com/terms-of-use/). See the [Licensing]({{< relref "/docs/reference/licensing/" >}}) page for details.

## Pricing Information
Contact [Nirmata Customer Support](https://nirmata.com/contact-us) for pricing details.


---

## Getting Started


This section provides quick start guides for using the Cloud Controller, including its core features like the cloud admission controller and cloud scanner.

## Prerequisites

Before you begin, ensure you have the following prerequisites:

### Amazon EKS Cluster Setup

Ensure you have an Amazon EKS cluster running. If you don't have one, create an EKS cluster by following the steps in the [EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).

### Enable Pod Identity Addon

Enable the **EKS Pod Identity Addon** to allow seamless access to AWS resources without using explicit AWS credentials. You can enable this addon through either the AWS Management Console or the AWS CLI.

Refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-agent-setup.html) for more details.

- **Via AWS Management Console**

  - When creating a new EKS cluster, select the **Pod Identity Addon** during the setup process.  
  - If your cluster is already created, you can add the addon by navigating to the **EKS Clusters** section in the AWS Management Console:
    - Select your cluster.
    - Go to the **Add-ons** tab.
    - Choose **Add Add-on**, search for **Pod Identity**, and select it.
    - Follow the prompts to complete the setup.

- **Via AWS CLI**

  Run the following command to enable the Pod Identity Addon:

  ```bash
  aws eks create-addon --cluster-name <EKS_CLUSTER_NAME> --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1
  ```

  Replace `<EKS_CLUSTER_NAME>` with your cluster name and use the latest addon version available.

Ensure the worker node IAM role allows the `eks-auth:AssumeRoleForPodIdentity` action. If you are using the managed policy `AmazonEKSWorkerNodePolicy`, no additional configuration is needed.
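If you manage the node role with a custom policy instead of the managed one, the permission can be granted with a minimal inline policy. This is a sketch, assuming only the Pod Identity action named above is required:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks-auth:AssumeRoleForPodIdentity",
      "Resource": "*"
    }
  ]
}
```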

### Create an IAM Role for Scanner Pods

- Create an IAM role in the same account as the EKS cluster. Attach the following trust policy to allow the EKS Pod Identity Agent to assume the role:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
        "Effect": "Allow",
        "Principal": {
          "Service": "pods.eks.amazonaws.com"
        },
        "Action": [
          "sts:AssumeRole",
          "sts:TagSession"
        ]
      }
    ]
  }
  ```

- Attach a policy to the IAM role with the permissions required for scanning AWS resources.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:ListResources",
        "cloudformation:GetResource"
      ],
      "Resource": "*"
    }
  ]
}
```

Depending on which services you want to scan, grant the necessary read access (`List*` and `Get*` actions). For example, to scan Lambda functions, grant the cloud controller `lambda:List*` and `lambda:Get*`; add similar permissions for S3, SQS, EKS, and so on.
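As an illustration, a policy granting such read access for Lambda might look like this (a sketch, following the `List*`/`Get*` wildcard pattern described above; adjust the service prefix for each additional service you scan):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:List*",
        "lambda:Get*"
      ],
      "Resource": "*"
    }
  ]
}
```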

### Create Pod Identity Association

Associate the IAM role created earlier with the existing service account `cloud-controller-scanner` in the `nirmata` namespace. Use the AWS CLI:

```bash
aws eks create-pod-identity-association \
  --cluster-name <EKS_CLUSTER_NAME> \
  --role-arn <IAM_ROLE_ARN> \
  --namespace nirmata \
  --service-account cloud-controller-scanner
```

Replace `<EKS_CLUSTER_NAME>` with your cluster name and `<IAM_ROLE_ARN>` with the ARN of the IAM role created earlier.

> **Important:** The pod-identity association must be done before installing the Helm chart. If the Helm chart is already installed, restart the pods to ensure Pod Identity works correctly.


## Deploy the Cloud Control Helm Chart

Deploy the cloud controller Helm chart into your EKS cluster:

Create a `values.yaml` file with the following information:
```yaml
scanner:
  primaryAWSAccountConfig:
    accountID: "your-account-id"
    accountName: "cloud-control-demo"
    regions: ["us-west-1","us-east-1"] # insert any other regions
    services: ["EKS","ECS","Lambda"] # insert services to scan
```

Refer to the complete list of fields for Helm values [here](https://github.com/nirmata/kyverno-charts/tree/main/charts/cloud-controller#values).

> **Note:** The services listed in `values.yaml` must be accessible to the cloud controller. Make sure to attach the appropriate IAM policy when creating the IAM role above.

```bash
helm repo add nirmata https://nirmata.github.io/kyverno-charts
helm repo update nirmata
helm install cloud-controller nirmata/cloud-controller \
  --create-namespace \
  --namespace nirmata \
  -f values.yaml
```

## Verify Installation

Verify that the cloud controller pods are running in the `nirmata` namespace:

```bash
kubectl get pods -n nirmata
```

The output should display the running pods:

```text
NAME                                                  READY   STATUS    RESTARTS   AGE
cloud-control-admission-controller-dfd7f69fd-jjhn5   1/1     Running   0          17d
cloud-control-reports-controller-7954bb477d-lb2ld    1/1     Running   0          17d
cloud-control-scanner-7756dc6ddf-qljbb               1/1     Running   0          17d
```

## Cloud Admission Controller

This section provides a step-by-step guide on how to use the admission controller to intercept AWS requests and apply policies to them.

### Setting up a Proxy

To intercept AWS requests, you need to create a proxy server that listens on a specific port. The proxy server will apply the policies to the requests and forward them to the AWS cloud if they are compliant.

In this example, we will create a proxy server that listens on port `8443`. It intercepts all requests destined for AWS. It then checks these requests against defined policies, specifically those labeled `app: kyverno`. Only compliant requests are forwarded to AWS.

```yaml
apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: proxy-sample
spec:
  port: 8443
  caKeySecret:
    name: cloud-controller-admission-controller-svc.nirmata.svc.tls-ca
    namespace: nirmata
  urls:
    - ".*.amazonaws.com"
  policySelectors:
    - matchLabels:
        app: kyverno
```

The admission controller automatically generates self-signed CA certificates. These certificates are stored as a Secret in the `nirmata` Namespace.

To retrieve the Secret name, run the following command:

```bash
kubectl get secrets -n nirmata
```

The output should show the generated secret:

```text
NAME                                                        TYPE                 DATA   AGE
cloud-controller-admission-controller-svc.nirmata.svc.tls-ca   kubernetes.io/tls    2      4m28s
```

The `cloud-controller-admission-controller-svc.nirmata.svc.tls-ca` Secret contains the required CA certificate. As shown in the above Proxy configuration, the `spec.caKeySecret` field references this Secret.

The proxy server is now running within your Kubernetes cluster, listening on port 8443. To use this proxy from your local machine, you need to establish a connection between your local port 8443 and the proxy server's port 8443 within the cluster. This is achieved using port forwarding.

```bash
kubectl port-forward svc/cloud-controller-admission-controller-svc 8443:8443 -n nirmata
```

By running this command, any traffic sent to `localhost:8443` on your machine will be forwarded to the proxy server in the cluster. 
This allows you to interact with the proxy and, consequently, enforce your policies on AWS requests as if the proxy server was running locally.

### ValidatingPolicies

We will create a ValidatingPolicy to ensure that ECS clusters include the `group` tag. The policy will be labeled `app: kyverno` to align with the policy selector specified in the Proxy configuration. Operating in `Enforce` mode, this policy will block and prevent non-compliant requests from being forwarded to AWS.

```yaml
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: ecs-cluster
  labels:
    app: kyverno
spec:
  failureAction: Enforce
  admission: true
  rules:
    - name: check-tags
      identifier: payload.clusterName
      match:
        all:
        - (metadata.provider): "AWS"
        - (metadata.service): "ecs"
        - (metadata.action): "CreateCluster"
      assert:
        all:
        - message: A 'group' tag is required
          check:
            payload:
              (tags[?key=='group'] || `[]`):
                (length(@) > `0`): true
```
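The assertion above uses JMESPath: it filters `tags` for entries whose key is `group` (defaulting to an empty list) and requires a non-empty result. The same check can be sketched in plain Python (a hypothetical helper for illustration only; the controller itself evaluates JMESPath):

```python
def has_group_tag(payload: dict) -> bool:
    """Mirror of the policy assertion: the request payload must carry
    at least one tag whose key is 'group'."""
    # Mirrors (tags[?key=='group'] || `[]`): missing tags become [].
    tags = payload.get("tags") or []
    return any(tag.get("key") == "group" for tag in tags)

# A CreateCluster payload without the tag fails the check:
print(has_group_tag({"clusterName": "bad-cluster"}))  # False
# One with the tag passes:
print(has_group_tag({"clusterName": "good-cluster",
                     "tags": [{"key": "group", "value": "test"}]}))  # True
```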

### Using the AWS CLI

You need to configure your AWS CLI to route requests through the proxy server.
This involves setting two environment variables:

1. `HTTPS_PROXY`: This tells the AWS CLI to send all requests through the controller acting as a local proxy.

    ```bash
    export HTTPS_PROXY=http://localhost:8443
    ```

2. `AWS_CA_BUNDLE`: The controller uses a self-signed security certificate. This variable tells the AWS CLI to trust that certificate. 
   
    First, download the certificate:
    
    ```bash
    kubectl get secrets -n nirmata cloud-controller-admission-controller-svc.nirmata.svc.tls-ca -o jsonpath="{.data.tls\.crt}" | base64 --decode > ca.crt
    ```
    
    Then, set the environment variable:

    ```bash
    export AWS_CA_BUNDLE=ca.crt
    ```

    Setting `AWS_CA_BUNDLE` tells the AWS CLI which Certificate Authority (CA) to trust when verifying the proxy's TLS certificate. Because the cloud admission controller uses a self-signed certificate rather than one issued by a publicly trusted CA, the AWS CLI would otherwise reject the connection; pointing `AWS_CA_BUNDLE` at `ca.crt` marks that certificate as trusted.

Once configured, your AWS CLI commands will be checked against the defined policies before being sent to AWS.

### Example: Creating an ECS Cluster

The following examples demonstrate how the admission controller enforces a policy requiring all ECS clusters to have a `group` tag.

1. Create an ECS cluster without the `group` tag:

    ```bash
    aws ecs create-cluster --cluster-name bad-cluster
    ```

    The output should be similar to the following:

    ```
    An error occurred (406) when calling the CreateCluster operation: ecs-cluster.check-tags bad-cluster: -> A 'group' tag is required
    -> all[0].check.data.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true
    ```

    As expected, the request was blocked since it violates the ValidatingPolicy that requires all ECS clusters to have the `group` tag.

2. Create an ECS cluster with the `group` tag:

    ```bash
    aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=test key=owner,value=test
    ```

    The output should be similar to the following:

    ```
    {
        "cluster": {
            "clusterArn": "arn:aws:ecs:us-east-1:844333597536:cluster/good-cluster",
            "clusterName": "good-cluster",
            "status": "ACTIVE",
            "registeredContainerInstancesCount": 0,
            "runningTasksCount": 0,
            "pendingTasksCount": 0,
            "activeServicesCount": 0,
            "statistics": [],
            "tags": [
                {
                    "key": "owner",
                    "value": "test"
                },
                {
                    "key": "group",
                    "value": "test"
                }
            ],
            "settings": [
                {
                    "name": "containerInsights",
                    "value": "disabled"
                }
            ],
            "capacityProviders": [],
            "defaultCapacityProviderStrategy": []
        }
    }
    ```

    The request was successful since it complies with the ValidatingPolicy that requires all ECS clusters to have the `group` tag.

## Cloud Scanner


### ECS Clusters and Task Definitions

To test the scanner, we will create ECS clusters and task definitions, some with and some without the required `group` tag, giving the ValidatingPolicy both compliant and non-compliant resources to evaluate.

1. Create an ECS cluster named `bad-cluster` without the `group` tag:

    ```bash
    aws ecs create-cluster --cluster-name bad-cluster
    ```

2. Register a task definition named `bad-task` without the `group` tag:

    ```bash
    aws ecs register-task-definition \
    --family bad-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc
    ```

3. Create an ECS cluster named `good-cluster` with the `group` tag:

    ```bash
    aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=development
    ```

4. Register a task definition named `good-task` with the `group` tag:

    ```bash
    aws ecs register-task-definition \
    --family good-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc \
    --tags '[{"key": "group", "value": "production"}]'
    ```


### View Reports

In this example, the scanner will generate four ClusterPolicyReports, one each for the `bad-cluster`, `bad-task`, `good-cluster`, and `good-task` resources. The reports show the compliance status of each resource based on the ValidatingPolicy.

To view the generated reports, run the following command:

```bash
kubectl get clusterpolicyreports
```

The output should show the generated reports:

```text
NAME                                                              KIND                NAME             PASS   FAIL   WARN   ERROR   SKIP   AGE
1a468eba2818db9333ede8428bf6c910d467db5d5fc1b36adc535ce32cea2c5   ECSCluster          good-cluster     1      0      0      0       0      4s
1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9   ECSCluster          bad-cluster      0      1      0      0       0      4s
91696bc8dbb327de99c4d34c579de8bd71e2ef45ad325d10d39d690ad14776c   ECSTaskDefinition   bad-task__2      0      1      0      0       0      4s
cf987d912032e51712ad73a2067a1c5ffee16d8872575166c0739ffedfc0766   ECSTaskDefinition   good-task__2     1      0      0      0       0      4s
```


---

## AWS Asset Discovery Guide


## Introduction

This guide provides a detailed walkthrough for setting up and using the AWS Organisation and Account Discovery feature in Nirmata Cloud Controller. This feature allows for comprehensive discovery of AWS accounts, organisational units (OUs), and EKS clusters within an AWS Organisation. The discovery process follows a hierarchical approach: OUs are discovered first, then accounts within those OUs, and finally EKS clusters within the discovered accounts.

## Prerequisites

Before you begin, ensure you meet the following prerequisites:
- **Access to Nirmata Cloud Controller**: Ensure you have access to the cluster where the Cloud Controller is hosted, along with the necessary RBAC permissions to create an `AWSOrgConfig` custom resource.
- **IAM Role Creation Permissions in the Management Account**: Verify that you have the permissions required to create an IAM role in the management account of the AWS Organisation.
- **Permissions in Discovered Accounts**: Confirm that you have the permissions needed to create IAM roles within the AWS accounts that will be discovered.

## AWSOrgConfig

`AWSOrgConfig` is a custom resource used in Cloud Controller to facilitate the discovery of AWS Organisation Units (OUs) and accounts. 

### Purpose
The primary purpose of `AWSOrgConfig` is to enable the cloud controller to recursively discover all child OUs and AWS accounts within a specified AWS Organisation. For each discovered account, the system automatically creates `AWSAccountConfig` resources, then discovers EKS clusters within those accounts through `ClusterConfig` resources. This ensures that all resources are accounted for.

### Key Fields
- **apiVersion**: Specifies the API version, e.g., `nirmata.io/v1alpha1`.
- **kind**: The type of resource, which is `AWSOrgConfig`.
- **metadata**: Contains metadata about the resource, including:
  - **name**: The name of the `AWSOrgConfig` instance.
- **spec**: Defines the desired state of the resource, including:
  - **orgID**: The ID of the organisation unit or root to be configured, assigned by AWS.
  - **orgName**: The name of the organisation as desired by the user.
  - **regions**: The AWS regions from which resources need to be scanned.
  - **roleARN**: The ARN of the role created in the management account. This role must have the necessary permissions to list AWS accounts and organizational units.
  - **customAssumeRoleName**: The name (not the ARN) of the IAM role created in the discovered AWS accounts. This role should have permissions to access resources in the configured services.
  - **scanInterval**: The frequency of the scan.
  - **services**: The AWS services in which resources need to be scanned.

### Usage
To use `AWSOrgConfig`, create a new instance in the Cloud Controller with the necessary fields filled out. Once configured, the cloud controller will automatically handle the discovery of OUs and accounts, creating additional `AWSOrgConfig` and `AWSAccountConfig` resources as needed. It will then discover EKS clusters within their respective accounts and create `ClusterConfig` resources for each discovered cluster.

### Example
Here is an example of an `AWSOrgConfig`:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: AWSOrgConfig
metadata:
  name: root
spec:
  orgID: r-zyre
  orgName: Root
  regions:
  - us-west-1
  - us-east-1
  roleARN: arn:aws:iam::<account-id>:role/<role-name>
  customAssumeRoleName: <role-name-in-child-accounts>
  scanInterval: 1h
  services:
  - EKS
  - ECS
  - EC2
  - Lambda
  - RDS
```

This configuration will initiate the discovery process for the specified organisation, ensuring all child OUs and accounts are managed effectively.

## AWSAccountConfig

`AWSAccountConfig` is automatically created by the cloud controller for each discovered AWS account. This resource manages the scanning and discovery of resources within individual AWS accounts.

### Purpose
The `AWSAccountConfig` resource enables the cloud controller to scan resources within a specific AWS account and discover EKS clusters. When EKS clusters are found, the system automatically creates `ClusterConfig` resources for each cluster.

### Key Fields
- **apiVersion**: Specifies the API version, e.g., `nirmata.io/v1alpha1`.
- **kind**: The type of resource, which is `AWSAccountConfig`.
- **metadata**: Contains metadata about the resource, including:
  - **name**: The name of the AWS account.
- **spec**: Defines the desired state of the resource, including:
  - **accountID**: The AWS account ID.
  - **accountName**: The name of the AWS account.
  - **roleARN**: The ARN of the IAM role in the account for resource access.
  - **regions**: The AWS regions to scan for resources.
  - **scanInterval**: The frequency of the scan.
  - **services**: The AWS services to scan for resources.

### Example
Here is an example of an `AWSAccountConfig`:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: AWSAccountConfig
metadata:
  name: aws-scan
spec:
  scanInterval: 1h
  accountID: "844333509XXX"
  accountName: "example-account"
  roleARN: "arn:aws:iam::844333509XXX:role/DevTestAccountAccessRole"
  regions:
    - us-east-1
    - us-west-1
  services:
    - EKS
    - ECS
    - EC2
    - Lambda
    - RDS
    - ApiGateway
    - ApiGatewayV2
```

## ClusterConfig

`ClusterConfig` is automatically created by the cloud controller for each discovered EKS cluster within an AWS account.

### Purpose
The `ClusterConfig` resource represents a discovered EKS cluster and contains the necessary information for the cloud controller to interact with and manage the cluster.

### Key Fields
- **apiVersion**: Specifies the API version, e.g., `nirmata.io/v1alpha1`.
- **kind**: The type of resource, which is `ClusterConfig`.
- **metadata**: Contains metadata about the resource, including:
  - **name**: The name of the EKS cluster.
  - **labels**: Contains cloud provider metadata such as account ID and account name.
  - **ownerReferences**: References the parent `AWSAccountConfig` resource.
- **spec**: Defines the cluster configuration, including:
  - **cloudProvider**: The cloud provider (AWS).
  - **clusterEndpoint**: The API endpoint of the EKS cluster.
  - **clusterName**: The name of the EKS cluster.
  - **region**: The AWS region where the cluster is located.
  - **scanInterval**: The frequency of cluster scanning.

### Example
Here is an example of a `ClusterConfig`:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: ClusterConfig
metadata:
  name: example-cluster
  labels:
    cloud.nirmata.io/account-id: "844333509XXX"
    cloud.nirmata.io/account-name: example-account
  ownerReferences:
  - apiVersion: nirmata.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: AWSAccountConfig
    name: example-account
    uid: 808b9ff2-545b-401e-a038-0032e139c7a1
spec:
  cloudProvider: AWS
  clusterEndpoint: https://XXXXX.sk1.us-west-1.eks.amazonaws.com
  clusterName: example-cluster
  region: us-west-1
  scanInterval: 1h0m0s
```

## Step-by-Step Setup

### Step 1: Create an IAM Role in the Management Account

1. **Navigate to the AWS Management Console**: Log in to the AWS management account of the organisation and access the IAM service.
2. **Create a New Policy**:
   - Go to the Policies section and click on 'Create policy'.
   - Choose the JSON tab and enter the following policy document:
     ```json
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "organizations:ListAccounts",
                     "organizations:ListOrganizationalUnitsForParent",
                     "organizations:ListAccountsForParent",
                     "eks:ListClusters",
                     "eks:DescribeCluster"
                 ],
                 "Resource": "*"
             }
         ]
     }
     ```
   - Review and create the policy.
3. **Create a New Role**:
   - Go to the Roles section and click on 'Create role'.
   - Attach the policy you just created.
   - Set up the trust relationship with the following JSON:
     ```json
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Principal": {
                     "AWS": "<ARN of the IAM role bound to the cloud scanner through the Pod Identity agent>"
                 },
                 "Action": [
                     "sts:AssumeRole",
                     "sts:TagSession"
                 ]
             }
         ]
     }
     ```
   - Review and create the role.

This IAM role has the necessary permissions to list AWS accounts, organizational units, and EKS clusters, and it can be assumed by the IAM role attached to the cloud scanner service account through the Pod Identity agent.

### Step 2: Create an IAM Role in Discovered AWS Accounts

1. **Refer to Cross-Account Setup Documentation**: For detailed instructions on setting up cross-account access, please refer to the [Nirmata Cloud Scanner Cross-Account Setup](https://docs.nirmata.io/docs/nch-cloud/cloud-scanner/).
2. **Create IAM Role in Each Discovered Account**:
   - Ensure that each discovered AWS account has an IAM role with the necessary permissions to allow the cloud scanner to access resources in the configured services.

This step ensures that the cloud scanner can effectively access resources across all AWS accounts within the organisation.

### Step 3: Create the AWSOrgConfig Custom Resource

1. **Access the Cloud Controller Cluster**: Ensure you have access to the cluster where the Cloud Controller is deployed.
2. **Create AWSOrgConfig Custom Resource**:
   - Prepare a YAML file with the necessary configuration for the AWSOrgConfig.
   - Example YAML:
     ```yaml
     apiVersion: nirmata.io/v1alpha1
     kind: AWSOrgConfig
     metadata:
       name: root
     spec:
       orgID: r-zyre
       orgName: Root
       regions:
       - us-west-1
       - us-east-1
       roleARN: arn:aws:iam::<account-id>:role/<role-name>
       customAssumeRoleName: DevTestAccountAccessRole
       scanInterval: 1h
       services:
       - EKS
       - ECS
       - EC2
       - Lambda
       - RDS
     ```

This step sets up the AWSOrgConfig directly in the Cloud Controller cluster.

### Step 4: Verify Configuration

1. **Monitor Discovery**: The cloud controller will automatically discover child OUs and accounts.
2. **Check Created Resources**: Verify that `AWSAccountConfig` resources are created for discovered accounts:
   ```bash
   kubectl get awsaccountconfig
   ```
3. **Verify EKS Cluster Discovery**: Check that `ClusterConfig` resources are created for discovered EKS clusters:
   ```bash
   kubectl get clusterconfig
   ```

## Troubleshooting

- **Failure Records**: Any failure during the discovery process is recorded in the status field of the `AWSOrgConfig`. Check this field for detailed error messages and troubleshooting information.

## Conclusion

By following this guide, you should have successfully set up AWS Organisation and Account Discovery in Nirmata Cloud Controller. The system will automatically discover your organizational structure, AWS accounts, and EKS clusters, creating the appropriate configuration resources for comprehensive cloud asset management.

---

## Cloud Admission Controller


## What is an Admission Controller

In Kubernetes, an admission controller is a key component that intercepts requests to the Kubernetes API server, validating or mutating resource configurations before they are persisted in the cluster. Admission controllers are designed to enforce policies, ensuring that any new or modified resources—such as pods, services, or deployments—meet certain compliance and security standards.

For example, an admission controller can prevent a deployment if it doesn't adhere to specified security policies, such as disallowing images with certain vulnerabilities or ensuring that all containers are running with minimal privileges. By catching non-compliant configurations at this stage, admission controllers protect the system from risky or unintended changes, adding an essential layer of governance.

## What an Admission Controller Means for the Cloud

While admission controllers are standard in Kubernetes, there has traditionally been no equivalent for cloud environments. In the cloud, resources are often provisioned dynamically and across multiple providers, making it challenging to enforce consistent governance and prevent misconfigurations before they impact the environment.

Introducing admission controller-like functionality to the cloud, as Nirmata Cloud Controller does, brings the same level of preventive governance to cloud resources. This means that every new resource created in a cloud environment can be evaluated against policies in real time, whether it's a virtual machine, a database instance, or a storage bucket. This capability helps prevent misconfigurations and ensures cloud resources are provisioned securely, adhering to organizational policies and compliance requirements.

## Benefits of Cloud Admission Controller

Implementing an admission controller for the cloud offers several significant benefits:

* **Proactive Prevention of Misconfigurations:** By intercepting and evaluating resources before they are fully created or modified, an admission controller prevents misconfigurations from reaching production. This is critical for avoiding security vulnerabilities, compliance violations, and unintended costs.

* **Consistent Governance Across Cloud Environments:** With an admission controller in place, policies are consistently enforced across multiple cloud providers and services. This ensures that governance standards are met regardless of where resources are hosted.

* **Increased Operational Efficiency and Reduced Risk:** An admission controller reduces the need for reactive fixes or costly rollbacks, catching issues early and minimizing the risk of human error.

## Example Use Case

Imagine an organization that has a policy requiring all cloud storage buckets to be encrypted. Without an admission controller, a team member could accidentally create an unencrypted bucket, which might go unnoticed and expose sensitive data. With Cloud Controller acting as an admission controller, any new storage bucket is evaluated against this policy, and if encryption is missing, the bucket creation is blocked. This preventive control ensures that only compliant configurations are allowed, greatly reducing security risks.
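The bucket-encryption policy described above can be sketched as a `ValidatingPolicy`. This is a minimal illustration, not a tested policy: the `payload` field names (`bucketName`, `bucketEncryption`) and the match metadata values are assumptions modeled on the ECS tag policy shown later in this guide, and would need to be adapted to the actual S3 payload returned by the AWS Cloud Control API.

```yaml
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: require-bucket-encryption
spec:
  # Evaluate this policy during admission so non-compliant buckets are blocked
  admission: true
  rules:
    - name: check-encryption
      # Hypothetical identifier; the real payload field may differ
      identifier: payload.bucketName
      match:
        all:
        - (metadata.provider): "AWS"
        - (metadata.service): "s3"
        - (metadata.resource): "Bucket"
      assert:
        all:
        - message: Server-side encryption must be configured
          check:
            payload:
              # Assumes the bucket payload exposes a bucketEncryption block
              (bucketEncryption != null): true
```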

## Components

### Proxy Controller

The Proxy Controller monitors for the creation of Proxy Custom Resources (CRs), which define configurations for the proxy server. Each Proxy CR specifies settings like target URLs for interception and applicable policies. Upon detecting a new Proxy CR, the Proxy Controller automatically deploys a proxy server instance tailored to these configurations.

Example of a Proxy CR:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: Proxy
metadata:
  name: proxy-sample
spec:
  port: 8443
  caKeySecret:
    name: cloud-admission-controller-service.nirmata.svc.tls-ca
    namespace: nirmata
  urls:
    - ".*.amazonaws.com"
  policySelectors:
    - matchLabels:
        app: kyverno
```

#### Configuration Options

| Field | Description |
| ----- | ----------- |
| `port` | The port on which the proxy listens for HTTPS connections. In this example, the port is set to 8443. |
| `caKeySecret` | Defines the Kubernetes Secret containing the CA certificate and private key for the proxy. |
| `caKeySecret.name` | The name of the Kubernetes Secret holding the Certificate Authority (CA) certificate and key. |
| `caKeySecret.namespace` | The namespace where the `caKeySecret` is located. |
| `urls` | A list of URL patterns to intercept and proxy. In this example, the configuration targets any subdomain of `amazonaws.com`. |
| `policySelectors` | Selectors that match specific policies in the Kubernetes environment. In this example, it matches resources labeled `app: kyverno`. |

#### Explanation of caKeySecret and Its Role in MITM

The caKeySecret field is critical for enabling the MITM functionality of the proxy. This field refers to a Kubernetes Secret containing a Certificate Authority (CA) certificate and its associated private key, allowing the proxy to establish and decrypt HTTPS connections transparently.

The Cloud Admission controller automatically generates the CA certificate and private key when deploying the proxy server; however, you can also use a custom CA by providing the certificate and key in a Kubernetes Secret.

**Certificate Authority (CA):** The CA acts as a trusted intermediary that the proxy uses to issue certificates dynamically for any intercepted HTTPS connections. When a client attempts to connect to a URL that matches one in the urls field, the proxy will generate a temporary certificate signed by the CA, which the client will trust if it recognizes this CA.

**Private Key:** The CA's private key enables the proxy to sign these certificates and establish trusted connections on the fly. Without the private key, the proxy would be unable to mimic the HTTPS connection to the target server.

#### Why a CA is Required

In a man-in-the-middle configuration, the proxy intercepts secure HTTPS traffic, which is encrypted. The CA and its private key allow the proxy to decrypt this traffic securely. By generating signed certificates for each session, the proxy ensures that clients do not reject the connection due to untrusted certificates, maintaining the integrity of the proxy while still allowing full inspection or modification of the traffic.

Note: For security, it is essential to restrict access to the caKeySecret and ensure it is stored securely, as unauthorized access to the CA's private key would compromise the integrity of the proxy and the intercepted traffic.
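When supplying a custom CA instead of the generated one, it can be stored as a standard TLS Secret. The sketch below is a minimal example, assuming the controller accepts the standard `kubernetes.io/tls` key names (`tls.crt`/`tls.key`); verify the expected key names against your deployment before using it.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Matches the caKeySecret reference in the Proxy CR example above
  name: cloud-admission-controller-service.nirmata.svc.tls-ca
  namespace: nirmata
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded CA certificate>
  tls.key: <base64-encoded CA private key>
```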

### Proxy Server

The Proxy Server is the core of the Cloud Admission Controller, acting as the intermediary between clients and cloud providers. It intercepts requests, processes them, and evaluates compliance against relevant policies before forwarding or blocking the request. Key functions include:

* **Request Interception:** Captures API requests before they reach the cloud provider as per the defined URLs in the Proxy CR.
* **Pre-processing:** Extracts necessary details (e.g., provider, service, action, and region) for policy evaluation, creating a structured payload.
* **Policy Evaluation:** Applies policies from the Policy Cache by evaluating the structured payload against these policies using the Policy Engine.

**Policy Action:**

* **Enforce Mode:** Requests that violate policies are blocked and an error response is returned; an event is generated.
* **Audit Mode:** Non-compliant requests are allowed but logged, with an event generated for further review.

Note: You must set the `spec.admission` field to `true` in the Policy CR to enable the admission controller functionality for that policy.


---

## Cloud Scanner


The Cloud Scanner automatically assesses your cloud resources for compliance with predefined policies, helping you maintain security and best practices across your cloud environments. 

It integrates seamlessly with your Kubernetes cluster, leveraging powerful tools like AWS Cloud Control API and Kyverno JSON policies to simplify compliance management.

## Purpose

The Cloud Scanner proactively identifies and reports on policy violations within your cloud infrastructure. 

This helps you:

1. **Enforce security best practices**: Ensure your cloud resources adhere to organizational security standards.

2. **Maintain compliance**: Meet regulatory requirements and industry best practices within your cloud environment.

3. **Prevent misconfigurations**: Detect potentially harmful cloud configurations before they impact your applications.

4. **Gain visibility**: Obtain a comprehensive overview of your cloud resource compliance status.

## Why You Need Cloud Scanner

Manual cloud compliance checks are time-consuming, error-prone, and difficult to scale. 

The Cloud Scanner automates this process, providing:

1. **Proactive monitoring**: Continuous scanning ensures your cloud environment stays compliant.

2. **Automated reporting**: Detailed, actionable reports help you quickly address any compliance violations.

3. **Reduced risk**: Early detection of misconfigurations minimizes security vulnerabilities in your cloud environment.

4. **Improved efficiency**: Automation frees up valuable time for other critical tasks.

## How it Works

The Cloud Scanner operates within your Kubernetes cluster and leverages Kyverno JSON policies to evaluate your cloud resources. 

For AWS, it uses the AWS Cloud Control API to interact with your AWS infrastructure. Here's a simplified workflow:

1. **Configuration**: Define the scope of the AWS scan by specifying your account details, regions, and services to target using the `AWSAccountConfig` custom resource.
This includes providing necessary credentials securely, such as assuming an IAM role.

2. **Resource Retrieval**: The scanner uses the AWS Cloud Control API to fetch the resources defined in your `AWSAccountConfig`.

3. **Policy Evaluation**: The fetched AWS resources are evaluated against predefined Kyverno JSON policies, which define the compliance rules.

4. **Reporting**: The scanner generates `ClusterEphemeralReports` intermediary resources for further processing by the Reports Controller. 
These reports summarize the compliance status of your AWS resources. 
They highlight any violations, providing information about the affected resources and the specific policies that were violated. 
You can easily access these reports within your Kubernetes cluster.

>**NOTE**: Currently, only AWS Cloud is supported. Support for other cloud providers is planned for future releases.

## AWS Cloud

### Leveraging the AWS Cloud Control API

The Cloud Scanner uses the AWS Cloud Control API to fetch resource information. You can use the [aws cloudcontrol](https://docs.aws.amazon.com/cli/latest/reference/cloudcontrol/) CLI to explore resource structures, which helps in crafting precise compliance policies.

#### Fetching Resource Data with the AWS CLI

To retrieve resource data, you can use the AWS Cloud Control CLI. The `aws cloudcontrol list-resources` command provides a list of resources for a given service. 

For example, to list ECS clusters:

```shell
aws cloudcontrol list-resources --type-name AWS::ECS::Cluster
```

This returns a JSON payload with summary information for each cluster:

```json
{
    "ResourceDescriptions": [
        {
            "Identifier": "bad-cluster",
            "Properties": "{\"ClusterSettings\":[],\"DefaultCapacityProviderStrategy\":[],\"CapacityProviders\":[],\"ClusterName\":\"bad-cluster\",\"Arn\":\"arn:aws:ecs:us-east-1:844333597536:cluster/bad-cluster\",\"Tags\":[]}"
        },
        {
            "Identifier": "another-cluster",
            "Properties": "{\"ClusterSettings\":[],\"DefaultCapacityProviderStrategy\":[],\"CapacityProviders\":[],\"ClusterName\":\"another-cluster\",\"Arn\":\"arn:aws:ecs:us-east-1:844333597536:cluster/another-cluster\",\"Tags\":[]}"
        }
    ],
    "TypeName": "AWS::ECS::Cluster"
}
```
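One detail worth noting when consuming this output programmatically: the `Properties` field is itself a JSON-encoded string, not a nested object, so it needs a second parse before its fields can be inspected. A minimal Python sketch (the sample data is trimmed from the output above):

```python
import json

# Response shape returned by `aws cloudcontrol list-resources`
# (trimmed from the sample output above).
response = {
    "TypeName": "AWS::ECS::Cluster",
    "ResourceDescriptions": [
        {
            "Identifier": "bad-cluster",
            "Properties": "{\"ClusterSettings\":[],\"ClusterName\":\"bad-cluster\",\"Tags\":[]}",
        },
    ],
}

# Properties is a JSON-encoded string, so parse it a second time
# before matching its fields in a policy.
for desc in response["ResourceDescriptions"]:
    props = json.loads(desc["Properties"])
    print(desc["Identifier"], "->", props["ClusterName"], "tags:", props["Tags"])
```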

To retrieve the full details for a specific resource, use the `aws cloudcontrol get-resource` command.

For example, to get the detailed configuration of the "bad-cluster":

```shell
aws cloudcontrol get-resource --type-name AWS::ECS::Cluster --identifier bad-cluster
```

This returns a more comprehensive JSON payload for the specified resource:

```json
{
    "TypeName": "AWS::ECS::Cluster",
    "ResourceDescription": {
        "Identifier": "bad-cluster",
        "Properties": "{\"ClusterSettings\":[{\"Value\":\"disabled\",\"Name\":\"containerInsights\"}],\"DefaultCapacityProviderStrategy\":[],\"CapacityProviders\":[],\"ClusterName\":\"bad-cluster\",\"Arn\":\"arn:aws:ecs:us-east-1:844333597536:cluster/bad-cluster\",\"Tags\":[]}"
    }
}
```

#### Writing ValidatingPolicies

Using the sample payload returned by the `get-resource` command, you can write precise ValidatingPolicies. These policies can target specific resources based on their metadata and enforce compliance rules against the payload.

For example, a policy ensuring all ECS clusters have a `group` tag might look like:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-ecs-cluster-tags
spec:
  scan: true
  rules:
    - name: check-tags
      identifier: payload.clusterName
      match:
        all:
        - (metadata.provider): "AWS"
        - (metadata.region): us-east-1
        - (metadata.service): "ecs"
        - (metadata.resource): "Cluster"
      assert:
        all:
        - message: A 'group' tag is required
          check:
            payload:
              (tags[?key=='group'] || `[]`):
                (length(@) > `0`): true
```

This policy checks that all ECS clusters in the `us-east-1` region have a `group` tag. If a cluster is missing this tag, the policy will report a violation.

To apply this policy when scanning AWS resources, set the `spec.scan` field to `true` in the policy definition; otherwise, the policy is not evaluated during the scan.

### AWS Scan Configuration

To configure the Cloud Scanner to scan AWS, you need to define an `AWSAccountConfig` custom resource. This resource specifies the AWS account details, regions, and services to target.

For example, to scan all ECS clusters in the `us-east-1` region, you would create an `AWSAccountConfig` like this:

```yaml
apiVersion: nirmata.io/v1alpha1
kind: AWSAccountConfig
metadata:
  name: aws-account-config
spec:
  # The `scanInterval` field controls how often the scanner runs to scan the services configured in this resource.
  scanInterval: 2h
  # The `accountID` field specifies the AWS account ID to scan.
  accountID: "YOUR_AWS_ACCOUNT_ID"
  # The `accountName` field specifies the AWS account name.
  accountName: "YOUR_AWS_ACCOUNT_NAME"
  # The `regions` field specifies the AWS regions to scan.
  regions:
    - us-east-1
  # The `services` field specifies the AWS services to scan.
  services:
    - ECS
```

The `spec.scanInterval` field in `AWSAccountConfig` is optional. If omitted, it defaults to the value of the `--scanInterval` flag, which is 1 hour by default.

In the provided example configuration, the Cloud Scanner operates on a two-hour cycle. Every two hours, the following steps occur:

1. Resource Discovery: The scanner queries AWS for all ECS clusters within the `us-east-1` region.

2. Policy Filtering: It identifies all ValidatingPolicies where the `spec.scan` field is set to `true`. These are the policies that will be applied to the discovered ECS clusters.

3. Policy Evaluation: Each discovered ECS cluster is evaluated against the filtered policies. This determines whether the cluster complies with each policy's rules.

4. Reporting: The scanner generates `ClusterEphemeralReports` resources. These intermediate reports contain the results of the policy evaluations for each ECS cluster in the `us-east-1` region. The [Reports Controller](../reporting-system/) processes and consolidates them into `ClusterPolicyReports`. These final reports provide a comprehensive overview of the compliance status of your ECS clusters, making it easier to identify and address any policy violations. These reports are accessible within your Kubernetes cluster.

### AWS Cross-Account Scanning

The Cloud Scanner supports scanning AWS resources across multiple accounts. This is useful for organizations with multiple AWS accounts that need to enforce compliance policies across all accounts.

To enable cross-account scanning, you need to assume an IAM role in each account that grants the necessary permissions to the Cloud Scanner.

**Prerequisites**:

1. An EKS cluster with the Cloud Controller Helm chart deployed.

2. An IAM role named `cloud-scanner-source` in your **source** AWS account (where the Cloud Controller runs). This role needs permissions to assume roles in your **target** accounts (the accounts you want to scan).

**High-Level Overview:**

The process involves setting up IAM roles and trust relationships between your source and target accounts, then configuring the Cloud Scanner to use those roles for access.

Here is a breakdown of the steps involved:

1. Configure your target accounts:

    For each AWS account you want to scan (your target accounts), you will need to create an IAM role. We will refer to this role as `cloud-scanner-target-role` in these instructions, but you can choose a different name.

      - **Create the cloud-scanner-target-role**: In your target account's IAM console, create a new role. Choose "Another AWS account" as the trusted entity. In the "Account ID" field, enter your **source** account ID (the account where your EKS cluster and Cloud Controller are running).

      - **Define the Trust Policy:** The trust policy defines which accounts can assume this role. Use the following policy document, replacing `YOUR_SOURCE_ACCOUNT_ID` with your actual **source** account ID:

        ```json
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Statement1",
                    "Effect": "Allow",
                    "Principal": {
                        "AWS": "arn:aws:iam::YOUR_SOURCE_ACCOUNT_ID:role/cloud-scanner-source"
                    },
                    "Action": [
                        "sts:AssumeRole",
                        "sts:TagSession"
                    ]
                }
            ]
        }
        ```

      - **Grant Permissions:** Attach managed policies or create a custom policy to grant the `cloud-scanner-target-role` the necessary permissions to access the resources you want to scan. 
      
        Follow the principle of least privilege: grant only the minimum required permissions.
        For example, to scan ECS services, attach the `AmazonECS_FullAccess` policy.

        >**NOTE**: Every target account must have this role configured, but the permissions attached will vary based on the services being scanned.

        You must attach the `AWSCloudFormationFullAccess` policy as it is required for the scanner to function correctly.

2. Configure your source account:

    - **Grant `sts:AssumeRole` Permission**: In your source account, modify the IAM policy attached to the `cloud-scanner-source` role. Add a statement granting permission to assume the `cloud-scanner-target-role` in your target accounts. Replace `YOUR_TARGET_ACCOUNT_ID` with the actual account ID of the target account:

      ```json
      {
          "Sid": "Statement2",
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": "arn:aws:iam::YOUR_TARGET_ACCOUNT_ID:role/cloud-scanner-target-role"
      }
      ```

      Repeat this step for each target account, adding a separate statement for each `cloud-scanner-target-role` ARN.

3. Define AWSAccountConfig: 

    For each target account, create an `AWSAccountConfig` that specifies the `roleARN` field. This field should contain the ARN of the IAM role you created in the target AWS account.

    For example, to scan ECS clusters in the `us-east-1` region across multiple AWS accounts, you would create an `AWSAccountConfig` like this:

    ```yaml
    apiVersion: nirmata.io/v1alpha1
    kind: AWSAccountConfig
    metadata:
      name: aws-account-config
    spec:
      scanInterval: 2h
      accountID: "AWS_TARGET_ACCOUNT_ID"
      accountName: "AWS_TARGET_ACCOUNT_NAME"
      regions:
        - us-east-1
      services:
        - ECS
      roleARN: "arn:aws:iam::AWS_TARGET_ACCOUNT_ID:role/cloud-scanner-target-role"
    ```

    In this configuration, the Cloud Scanner will assume the IAM role `cloud-scanner-target-role` in the target AWS account to scan ECS clusters in the `us-east-1` region.


---

## Reporting System


Reports are Kubernetes Custom Resources, generated and managed automatically by Cloud Controller, that contain the results of applying matching `ValidatingPolicy` or `ImageVerificationPolicy` resources to existing cloud resources. Reports are created at a fixed interval, defined globally or per `AWSAccountConfig`. If a resource matches multiple rules, the report contains multiple results. When a resource is deleted, its policy report is deleted as well; reports therefore always represent the current state and do not record historical information.

Cloud Controller uses a standard, open format published by the [Kubernetes Policy working group](https://github.com/kubernetes-sigs/wg-policy-prototypes/tree/master/policy-report), which proposes a common policy report format across Kubernetes tools.

Below is a sample `ClusterPolicyReport` generated by Cloud Controller for a given ECS TaskDefinition:
```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  creationTimestamp: "2024-11-04T07:04:25Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: cloud-control-point
    cloud.policies.nirmata.io/account-id: "123456789101"
    cloud.policies.nirmata.io/account-name: accountname
    cloud.policies.nirmata.io/last-modified: "1730703865"
    cloud.policies.nirmata.io/provider: AWS
    cloud.policies.nirmata.io/region: us-east-1
    cloud.policies.nirmata.io/resource-id: cf987d912031e51752ad73a2067a1c5f9ee06d8872575166c0739ffedfc0766
    cloud.policies.nirmata.io/resource-name: good-task
    cloud.policies.nirmata.io/resource-type: TaskDefinition
    cloud.policies.nirmata.io/service: ecs
    cloud.policies.nirmata.io/ttl: 20m0s
  name: cf987d912031e51752ad73a2067a1c5f9ee06d8872575166c0739ffedfc0766
  resourceVersion: "690"
  uid: 3c9da537-d4b1-40c1-9284-bea79f51853a
results:
- message: Validation rule 'check-task-definition-tags' passed.
  policy: check-task-definition-tags
  result: pass
  rule: check-task-definition-tags
  scored: true
  source: cloud-control
  timestamp:
    nanos: 0
    seconds: 1730703865
scope:
  apiVersion: nirmata.io/v1alpha1
  kind: ECSTaskDefinition
  name: good-task-2
summary:
  error: 0
  fail: 0
  pass: 1
  skip: 0
  warn: 0
```

The report consists of multiple components:

1. General information about the owner resource in `metadata.labels`
2. A list of results for the rules that match the resource
3. A scope containing the kind and name of the owner resource
4. A summary containing the count of results for each result type

## Report result logic

Entries in a policy report contain a `result` field which can be either `pass`, `skip`, `error`, or `fail`.

| Result | Description                                                                                                                        |
|--------|------------------------------------------------------------------------------------------------------------------------------------|
| pass   | The resource was applicable to a rule and the pattern passed evaluation.                                                           |
| skip   | Match conditions were not satisfied so further processing was not performed.                                                        |
| fail   | The resource failed the pattern evaluation.                                                                                        |
| error  | An error was encountered while executing the policy.                                                                                      |
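The `summary` block of a report is derived from these per-rule results. A small Python sketch of that aggregation (illustrative only, not the controller's actual code):

```python
from collections import Counter

# Results section of a ClusterPolicyReport, as in the sample above
# (entries trimmed to the fields relevant here).
results = [
    {"policy": "check-task-definition-tags", "result": "pass"},
    {"policy": "check-ecs-cluster-tags", "result": "fail"},
    {"policy": "check-ecs-cluster-tags", "result": "fail"},
]

# The summary block is simply a count of results per result type.
counts = Counter(r["result"] for r in results)
summary = {key: counts.get(key, 0) for key in ("pass", "fail", "warn", "error", "skip")}
print(summary)  # {'pass': 1, 'fail': 2, 'warn': 0, 'error': 0, 'skip': 0}
```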

## Viewing policy report summaries

You can view a summary of the policy reports using the following command:

```sh
kubectl get clusterpolicyreports
```

For example, below are the policy reports in a small test cluster created with kind.

```sh
$ kubectl get cpolr
1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9   ECSCluster          bad-cluster      0      1      0      0       0      78s
42652e303978721aa052cce4c5b33b5c3db1e7c3eb8fb39ca2681c422a5d325   ECSCluster          bad-cluster-4    0      1      0      0       0      78s
71ed4b96c24e21830b15ea11f19abb16b947707ea7f8343aa48a5b25228d559   ECSCluster          bad-cluster-2    0      1      0      0       0      78s
91696bc8dbb327de99c4d34c579de8bd71e2ef45ad325d10d39d690ad14776c   ECSTaskDefinition   bad-task__2      0      1      0      0       0      78s
9bf1d560308032c3ca5987cc7a9b2db756d86fe035a7b59732f1ba3e48deda2   ECSCluster          bad-cluster-11   0      1      0      0       0      78s
cf987d912032e51712ad73a2067a1c5ffee16d8872575166c0739ffedfc0766   ECSTaskDefinition   good-task__2     1      0      0      0       0      78s
d1b1118f087d2e756a437727f8a1e4147c86dd4729381a4451ea944a0701968   ECSCluster          bad-test         0      1      0      0       0      78s
```

## Viewing policy violations

Since the report provides information on all rule and resource execution, returning only select entries requires a filter expression.

Policy reports can be inspected using either `kubectl describe` or `kubectl get`. For example, here is a command, requiring `yq`, to view only failures for the policy report `1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9`:
```sh
$ kubectl get cpolr 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9 -o jsonpath='{.results[?(@.result=="fail")]}' | yq -p json -
```

Output:
```yaml
message: |-
  -> A 'group' tag is required
   -> all[0].check.payload.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true
policy: check-ecs-cluster-tags
result: fail
rule: check-tags
scored: true
source: cloud-control
timestamp:
  nanos: 0
  seconds: 1731055159
```

## Policy reports deletion

Cloud Controller uses a TTL-based approach to deleting reports. If a policy report has not been updated for some time, the policy report is marked as stale and deleted. The TTL duration for a report is the `scanInterval` for the resource. If a report has not been updated for 1 scan interval for any reason (such as the resource being deleted, the configuration being updated, or Cloud Controller no longer being able to access the resource), the report gets deleted.
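The TTL check itself is simple to reason about; here is a Python sketch of the staleness rule (illustrative only, not the controller's implementation):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_modified: datetime, scan_interval: timedelta, now: datetime) -> bool:
    """A report is stale once a full scan interval has passed without an update."""
    return now - last_modified > scan_interval

now = datetime(2024, 11, 4, 12, 0, tzinfo=timezone.utc)
interval = timedelta(hours=2)  # matches spec.scanInterval: 2h

# Updated 90 minutes ago: still within one scan interval, so the report is kept.
print(is_stale(now - timedelta(minutes=90), interval, now))   # False
# Updated 3 hours ago: missed a scan cycle, so the report is deleted.
print(is_stale(now - timedelta(hours=3), interval, now))      # True
```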

## Report internals

The `ClusterPolicyReport` is the final resource, composed of results for the policies matching a resource as determined by Cloud Controller. However, these reports are built from intermediary resources. For scan results, `ClusterEphemeralReport` resources are created; they have the same basic contents as a policy report and are used internally by Cloud Controller to build the final policy report. Cloud Controller merges these results automatically into the appropriate policy report, and no manual interaction is required.



