---
title: "Cluster Onboarding"
description: "Learn how to onboard and manage Kubernetes clusters with Nirmata Control Hub\n"
diataxis: how-to
applies_to:
  product: "nirmata-control-hub"
audience: ["platform-engineer","cluster-admin"]
last_updated: 2026-04-16
url: https://docs.nirmata.io/docs/control-hub/cluster/
---


> **Applies to:** Nirmata Control Hub 4.0 and later

## Prerequisites
Before onboarding your Kubernetes cluster to Nirmata Control Hub, ensure that your cluster is CNCF-compliant. You can onboard both cloud-provided and local Kubernetes clusters, such as kind and minikube clusters.
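If you want to try onboarding against a local cluster first, you can create one with kind before starting the wizard (a sketch; assumes the `kind` CLI is installed, and the cluster name `nirmata-test` is just an example):

```shell
# Create a throwaway local kind cluster for trying out onboarding
kind create cluster --name nirmata-test

# Confirm the cluster is reachable before starting the wizard
kubectl cluster-info --context kind-nirmata-test
```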


## Onboarding Workflow - UI Wizard
### Step 1: Add Cluster
1. Navigate to the Clusters page in Nirmata Control Hub.
1. Click on the Add Cluster button to open the onboarding wizard.
1. Enter Cluster Information:
    1. Provide a name for your cluster.
    1. Optionally, add labels to your cluster for better identification.

### Step 2: Choose Onboarding Method
You have two options for onboarding:
1. NCTL (Nirmata CLI): Recommended for users who want a streamlined process.
1. Helm: For users who prefer to use Helm charts. You can switch to the Helm tab for detailed instructions.

>NOTE: We recommend NCTL if you are just trying out Nirmata. NCTL version 4.7.0 or later is required for a smooth onboarding experience.

Follow the steps in the wizard. Once the commands run successfully, click the `I have run the commands - Verify Kyverno` button.

### Step 3: Verify Kyverno Health
In this stage, we check the health of Kyverno running in the cluster to ensure it is optimally configured:
* No Greenfield Cluster Required: If your cluster is running an older version of Nirmata Enterprise for Kyverno or even open-source Kyverno, it can still be onboarded without issues.
* We will also recommend newer Nirmata Enterprise for Kyverno versions if an update is needed for optimal performance.
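As a quick manual check of what this stage inspects, you can list the images (and therefore versions) of the Kyverno controllers already running in the cluster (a sketch; assumes Kyverno is installed in the `kyverno` namespace):

```shell
# Print each Kyverno pod alongside the image it is running
kubectl -n kyverno get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```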

### Step 4: Select PolicySets
Nirmata provides several built-in policy sets that you can deploy to your cluster:
* Pod Security Standards (17 controls in total) are available by default during onboarding.
* You can choose to deploy these policies immediately or select them later if you prefer to manage policies on your own.

>NOTE: Deploying policy sets during onboarding is optional, and you can skip this step if you already have your own set of policies.

### Step 5: Final Verification
Once the above steps are completed, the final stage ensures that all related components are properly installed and running:
* Kyverno (open-source or enterprise)
* Kyverno Operator, for health monitoring and policy management
* PolicySets (optional; only if you selected policy sets in the previous step)
* Nirmata kube-controller, the agent that communicates with Nirmata SaaS and monitors your cluster
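You can spot-check this end state from the command line (a sketch; the namespaces are the ones used by the Helm instructions later on this page):

```shell
# Kyverno admission and background controllers
kubectl get pods -n kyverno

# Kyverno Operator
kubectl get pods -n nirmata-system

# Nirmata kube-controller and metrics agent
kubectl get pods -n nirmata
```

All pods should reach a Running/Ready state before the cluster shows as connected.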


## Onboarding with the Helm chart

### Add and update Helm repo

Add the Nirmata Helm chart repository.
```bash
helm repo add nirmata https://nirmata.github.io/kyverno-charts/
helm repo update nirmata
```

### Install Nirmata Kube Controller

#### Using a User API Token

```bash
helm install nirmata-kube-controller nirmata/nirmata-kube-controller -n nirmata --create-namespace \
  --set cluster.name=test \
  --set namespace=nirmata \
  --set apiToken=<nirmata-api-token> \
  --set features.policyExceptions.enabled=true \
  --set features.policySets.enabled=true
```

#### Using a Service Account Token (Recommended for Automation)

For GitOps pipelines and automated cluster registration workflows, you can authenticate using a Nirmata Control Hub Service Account token instead of a user API token. The `serviceAccountToken` field replaces `apiToken` and accepts the Service Account secret generated in Nirmata Control Hub.

```bash
helm install nirmata-kube-controller nirmata/nirmata-kube-controller -n nirmata --create-namespace \
  --set cluster.name=<cluster-name> \
  --set serviceAccountToken=<nch-service-account-secret> \
  --set features.policyExceptions.enabled=true \
  --set features.policySets.enabled=true \
  --set clusterOnboardingToken=<onboarding-token> \
  --set nirmataURL=wss://nirmata.io/tunnels
```

To create a Service Account and generate a token:

1. Log in to [Nirmata Control Hub](https://nirmata.io)
2. Navigate to **Identity & Access** from the left sidebar
3. Go to the **Service Accounts** section and create a new Service Account with the appropriate cluster registration permissions
4. Copy the generated secret and use it as the `serviceAccountToken` value

>NOTE: You will have a `clusterOnboardingToken` only if you are installing through the UI wizard. If you are automating onboarding, you can skip this field.
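In automation, the same `--set` flags are often kept in a values file instead. A sketch mirroring the flags above (the placeholder values are assumptions you must replace):

```yaml
# values.yaml -- mirrors the --set flags in the command above
cluster:
  name: my-cluster                               # replace with your cluster name
namespace: nirmata
serviceAccountToken: <nch-service-account-secret> # Service Account secret from Nirmata Control Hub
features:
  policyExceptions:
    enabled: true
  policySets:
    enabled: true
nirmataURL: wss://nirmata.io/tunnels
```

It can then be installed with `helm install nirmata-kube-controller nirmata/nirmata-kube-controller -n nirmata --create-namespace -f values.yaml`.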

### Install Nirmata Enterprise for Kyverno Operator

The enterprise Kyverno operator monitors Kyverno and its policies. It also prevents tampering with the Kyverno configuration and policies in the cluster.

To install the enterprise Kyverno operator, run the following command.
```bash
helm install kyverno-operator nirmata/nirmata-kyverno-operator -n nirmata-system \
  --create-namespace \
  --set enablePolicyset=true
```
>NOTE: To install the reports server along with enterprise Kyverno, follow the documentation [here](../../controllers/n4k/reports-server/#installation). The command below installs **only** enterprise Kyverno (without reports-server).

### Install Nirmata Enterprise for Kyverno
```bash
helm install kyverno nirmata/kyverno -n kyverno --create-namespace \
  --set features.policyExceptions.namespace="kyverno" \
  --set crds.reportsServer.enabled=false \
  --set features.policyExceptions.enabled=true
```
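After the three Helm releases are installed, a quick sanity check (a sketch; assumes the release names and namespaces used in the commands above):

```shell
# Confirm all three Helm releases deployed successfully
helm list -n nirmata
helm list -n nirmata-system
helm list -n kyverno

# Wait for the Kyverno pods to become ready
kubectl -n kyverno wait pod --all --for=condition=Ready --timeout=180s
```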

## Secure Installation Tips
### Configure Nirmata Permissions

See [Cluster Deployment Options](deployment-options/) to choose between Read-Only mode (you manage resources with your own tools) and Read-Write mode (Nirmata deploys Policies and Policy Exceptions directly).



---

## Nirmata Kube Controller


Nirmata Kube Controller is used to register the cluster with the Nirmata platform.

The following resources will be deployed to the target cluster.

### Deployment

<details>
<summary>nirmata-kube-controller</summary>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nirmata-kube-controller
  namespace: nirmata
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nirmata-kube-controller
      nirmata.io/container.type: system
      app.kubernetes.io/name: nirmata
      app.kubernetes.io/instance: nirmata
  template:
    metadata:
      labels:
        app: nirmata-kube-controller
        nirmata.io/container.type: system
        app.kubernetes.io/name: nirmata
        app.kubernetes.io/instance: nirmata
    spec:
      containers:
        - args:
            - -token
            - $(TOKEN)
            - -url
            - $(URL)
            - -event-aggregation
          command:
            - /nirmata-kube-controller
          env:
            - name: TOKEN
              value: 6fcee39e-44dc-43a6-9792-468b82fd5a24
            - name: URL
              value: wss://www.nirmata.io/tunnels
          image: ghcr.io/nirmata/nirmata-kube-controller:v3.9.8
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - /nirmata-kube-controller
          name: nirmata-kube-controller
          readinessProbe:
            exec:
              command:
                - /nirmata-kube-controller
          resources:
            limits:
              memory: 512Mi
            requests:
              memory: 200Mi
              cpu: 250m
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
      hostNetwork: false
      imagePullSecrets:
        - name: nirmata-controller-registry-secret
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: nirmata
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
```
</details>

<details>
<summary>otel-agent</summary>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-agent
  namespace: nirmata
  labels:
    app: opentelemetry
    component: otel-agent
    app.kubernetes.io/instance: nirmata
    app.kubernetes.io/name: nirmata
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
      app.kubernetes.io/instance: nirmata
      app.kubernetes.io/name: nirmata
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
        app.kubernetes.io/instance: nirmata
        app.kubernetes.io/name: nirmata
    spec:
      containers:
        - name: otel-agent
          image: ghcr.io/nirmata/metrics-agent:0.38.3
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 200Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          livenessProbe:
            httpGet:
              path: /metrics
              port: 8888
              scheme: HTTP
          readinessProbe:
            httpGet:
              path: /metrics
              port: 8888
              scheme: HTTP
          volumeMounts:
            - mountPath: /etc/otel/config.yaml
              name: data
              subPath: config.yaml
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: data
          configMap:
            name: otel-agent-config
```
</details>

### ServiceAccount

<details>
<summary>nirmata</summary>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nirmata
  namespace: nirmata
secrets:
  - name: nirmata-sa-secret
```
</details>

<details>
<summary>nirmata-controller</summary>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nirmata-controller
  namespace: nirmata
```
</details>

### ConfigMap

<details>
<summary>nirmata-kube-controller-config</summary>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nirmata-kube-controller-config
  namespace: nirmata
data:
  IgnoreFields: metadata.managedFields
  FilterPatches: |-
    /metadata/resourceVersion
    /metadata/generation
    /results/*/timestamp/*
  IgnoreEvents: Normal.PolicyApplied.*
  WatchedResources: |-
    events.v1.
    policyreports.v1alpha2.wgpolicyk8s.io
    clusterpolicyreports.v1alpha2.wgpolicyk8s.io
    policies.v1.kyverno.io
    clusterpolicies.v1.kyverno.io
    policyexceptions.v2alpha1.kyverno.io
  FilterEvents: Warning.PolicyViolation.*,Normal.PolicySkipped.*
```
</details>

<details>
<summary>otel-agent-config</summary>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-config
  namespace: nirmata
data:
  config.yaml: >-
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: "kyverno"
            scrape_interval: 1m
            static_configs:
            - targets: ["kyverno-svc-metrics.kyverno.svc.cluster.local:8000"]
            metric_relabel_configs:
            - source_labels: [__name__]
              regex: "(kyverno_admission_review_duration_seconds.*|kyverno_policy_execution_duration_seconds.*|kyverno_policy_results_total|kyverno_policy_rule_info_total|kyverno_admission_requests_total|kyverno_controller_reconcile_total|kyverno_controller_requeue_total|kyverno_controller_drop_total)"
              action: keep
    exporters:
      prometheusremotewrite:
        endpoint: https://www.nirmata.io/host-gateway/metrics-receiver
        external_labels:
          clusterId: 6fcee39e-44dc-43a6-9792-468b82fd5a24
        remote_write_queue:
          queue_size: 2000
          num_consumers: 1
        timeout: 300s
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [prometheusremotewrite]
```
</details>


### ClusterRole
<details>
<summary>nirmata:nirmata-privileged</summary>
Note: This ClusterRole is only needed for NDP

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations: {}
  name: nirmata:nirmata-privileged
rules:
  - apiGroups:
      - kyverno.io
      - operator.kyverno.io
      - security.nirmata.io
    nonResourceURLs: []
    resourceNames: []
    resources:
      - policies
      - clusterpolicies
      - reportchangerequests
      - clusterreportchangerequests
      - kyvernooperators/status
      - kyvernooperators
      - imagekeys
      - imagekeys/finalizers
      - imagekeys/status
      - admissionreports
      - clusteradmissionreports
      - backgroundscanreports
      - clusterbackgroundscanreports
      - policyexceptions
      - cleanuppolicies
      - clustercleanuppolicies
      - kyvernoes
      - kyvernoes/status
    verbs:
      - "*"
  - apiGroups: []
    nonResourceURLs:
      - /metrics
    resourceNames: []
    resources: []
    verbs:
      - get
  - apiGroups:
      - "*"
    nonResourceURLs: []
    resourceNames: []
    resources:
      - tokenreviews
      - subjectaccessreviews
    verbs:
      - get
      - create
  - apiGroups:
      - wgpolicyk8s.io/v1alpha1
      - wgpolicyk8s.io/v1alpha2
    nonResourceURLs: []
    resourceNames: []
    resources:
      - policyreports
      - clusterpolicyreports
    verbs:
      - "*"
  - apiGroups:
      - "*"
    nonResourceURLs: []
    resourceNames: []
    resources:
      - policies
      - policies/status
      - clusterpolicies
      - clusterpolicies/status
      - policyreports
      - policyreports/status
      - clusterpolicyreports
      - clusterpolicyreports/status
      - generaterequests
      - generaterequests/status
      - reportchangerequests
      - reportchangerequests/status
      - clusterreportchangerequests
      - clusterreportchangerequests/status
      - updaterequests
      - updaterequests/status
      - admissionreports
      - clusteradmissionreports
      - backgroundscanreports
      - clusterbackgroundscanreports
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
      - deletecollection
  - apiGroups:
      - apiextensions.k8s.io
    nonResourceURLs: []
    resourceNames: []
    resources:
      - customresourcedefinitions
    verbs:
      - delete
      - create
      - get
      - list
      - patch
      - update
      - watch
  - apiGroups:
      - "*"
    nonResourceURLs: []
    resourceNames: []
    resources:
      - namespaces
      - networkpolicies
      - secrets
      - configmaps
      - resourcequotas
      - limitranges
      - deployments
      - services
      - serviceaccounts
      - roles
      - rolebindings
      - clusterroles
      - clusterrolebindings
      - events
      - mutatingwebhookconfigurations
      - validatingwebhookconfigurations
      - certificatesigningrequests
      - certificatesigningrequests/approval
      - poddisruptionbudgets
      - ingresses
      - ingressclasses
    verbs:
      - create
      - update
      - delete
      - list
      - get
      - patch
      - watch
  - apiGroups:
      - "*"
    nonResourceURLs: []
    resourceNames: []
    resources:
      - "*"
    verbs:
      - get
      - list
      - watch
      - update
  - apiGroups:
      - certificates.k8s.io
    nonResourceURLs: []
    resourceNames:
      - kubernetes.io/legacy-unknown
    resources:
      - certificatesigningrequests
      - certificatesigningrequests/approval
      - certificatesigningrequests/status
    verbs:
      - create
      - delete
      - get
      - update
      - watch
  - apiGroups:
      - certificates.k8s.io
    nonResourceURLs: []
    resourceNames:
      - kubernetes.io/legacy-unknown
    resources:
      - signers
    verbs:
      - approve
  - apiGroups:
      - coordination.k8s.io
    nonResourceURLs: []
    resourceNames: []
    resources:
      - leases
    verbs:
      - create
      - delete
      - get
      - patch
      - update
```
</details>

<details>
<summary>nirmata:policyexception-manager</summary>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nirmata:policyexception-manager
rules:
- apiGroups:
  - kyverno.io
  resources:
  - policies
  - clusterpolicies
  - policyexceptions
  verbs:
  - '*'
```
</details>

### ClusterRoleBindings

<details>
<summary>nirmata-cluster-admin-binding</summary>
Note: This ClusterRoleBinding is only needed for NDP

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nirmata-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nirmata:nirmata-privileged
subjects:
  - kind: ServiceAccount
    name: nirmata
    namespace: nirmata
```
</details>

<details>
<summary>nirmata-controller-binding</summary>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nirmata-controller-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: nirmata-controller
    namespace: nirmata
```
</details>

<details>
<summary>nirmata:policyexception-manager</summary>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nirmata:policyexception-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nirmata:policyexception-manager
subjects:
  - kind: ServiceAccount
    name: nirmata
    namespace: nirmata
  - kind: ServiceAccount
    name: kyverno-cleanup-controller
    namespace: kyverno  
```
</details>

### RoleBinding
<details>
<summary>nirmata-admin-binding</summary>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nirmata-admin-binding
  namespace: nirmata
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: nirmata
    namespace: nirmata
```
</details>

### Secret

<details>
<summary>nirmata-sa-secret</summary>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nirmata-sa-secret
  namespace: nirmata
  annotations:
    kubernetes.io/service-account.name: nirmata
type: kubernetes.io/service-account-token
```
</details>


---

## Nirmata Operator


## Overview
The Nirmata Operator is a Kubernetes operator designed to manage Kyverno installations and policies with ease and efficiency. When integrated with Nirmata Control Hub, the Nirmata Operator enables streamlined policy management, security, and compliance for clusters. Key functionalities include managing PolicySets with a GitOps approach, tamper detection and prevention for policies, and continuous monitoring of Kyverno and policies critical to the security of Kubernetes clusters.

## Key Features
* PolicySet Management (GitOps Style)
  * GitOps-based policy management: Enables users to manage PolicySets using Git repositories as the source of truth
  * Automatic Sync: Automatically synchronizes policies from Git repositories, ensuring consistency across clusters
* Tamper Detection and Prevention
  * Policy Integrity: Detects unauthorized changes to policies and alerts users for preventive action
  * Enforcement Mechanisms: Automatically restores policies to their desired state if tampering is detected, ensuring security compliance
* Monitoring and Alerts
  * Kyverno Health Monitoring: Monitors Kyverno's health and performance, alerting when issues arise
  * Policy Status Tracking: Continuously tracks the status of applied policies, providing insights into policy violations and compliance adherence

## Installation
### Prerequisites
* Helm 3.0+ must be installed.
* A Kubernetes cluster with appropriate permissions for installing and managing operators.

#### Step 1: Install Nirmata Operator
To install the Nirmata Operator using Helm, execute the following command:
```bash
helm repo add nirmata https://nirmata.github.io/kyverno-charts/
helm repo update
helm install enterprise-kyverno-operator nirmata/enterprise-kyverno-operator --namespace nirmata-system --create-namespace
```

>Note: To install RC versions of the Operator chart, use the `--devel` flag in the `helm install` command.

#### Step 2: Verify Installation
Check the status of the Nirmata Operator to ensure it is installed and running:
```bash
kubectl get pods -n nirmata-system
```


---

## Cluster Deployment Options


> **Applies to:** Nirmata Control Hub 4.0 and later

Choose whether to allow Nirmata to deploy custom resources directly to your cluster or manage them using your own GitOps and Continuous Delivery tools.

## Read-Only

Nirmata will not deploy Policies or Policy Exceptions to your cluster. You retain complete control and deploy these resources yourself using your own tools (Argo CD, Flux, kubectl, etc.).

Nirmata still provides full visibility: compliance reports, violation dashboards, and monitoring all function normally.

**Best for:** Teams with strict GitOps requirements or existing CD pipelines.
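In Read-Only mode you apply policies with your own tooling, for example with plain kubectl (a sketch; `policies/require-labels.yaml` is a hypothetical path in your own repository):

```shell
# Apply a Kyverno policy managed in your own Git repository / pipeline
kubectl apply -f policies/require-labels.yaml

# Nirmata still surfaces the results reported in the cluster
kubectl get clusterpolicyreports
```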

## Read-Write

Nirmata deploys Policies and Policy Exceptions directly to your cluster. This enables one-click policy set deployment, automated remediations, and full use of the Agent Hub.

> **Note:** We recommend enabling SSO and MFA when using Read-Write mode, since Nirmata has direct write access to cluster resources.

**Best for:** Teams that want to manage policies through the Nirmata UI or AI agents.

## Changing the Permission Mode

You can change the permission mode after onboarding:

1. Navigate to the **Clusters** page in Nirmata Control Hub.
2. Click on the cluster name.
3. Go to **Settings**.
4. Toggle the permission mode and save.


