---
title: "Nirmata Control Hub"
description: "Release notes for Nirmata DevSecOps Platform (NDP)"
diataxis: reference
applies_to:
  product: "nirmata-control-hub"
audience: ["platform-engineer"]
last_updated: 2026-03-25
url: https://docs.nirmata.io/docs/release-notes/control-hub/
---


## Available Versions

- [v4.24.0](#v4240)
- [v4.24.4](#v4244)
- [v4.24.6](#v4246)
---

## v4.24.0


## Introduction

These release notes highlight the updates included in NDP Private Edition v4.24.0. This release introduces key features and enhancements added since version 4.22, along with guidance for upgrading from v4.22 to the latest release.

## New Features

The following features have been introduced since the release of 4.22:

- **[Cluster Policy Report Results](https://docs.nirmata.io/docs/control-hub/policy_reports/)**
- **[Invite new users from approved domain via Share Report dialog](https://docs.nirmata.io/docs/control-hub/policy_reports/)**
- **[Pipeline Scanning](https://docs.nirmata.io/docs/control-hub/how-to/pipelinescanning/)**
- **[Repository Scan Reports](https://docs.nirmata.io/docs/control-hub/repository_scan_reports/)**
- **[Repository Compliance](https://docs.nirmata.io/docs/control-hub/compliance/compliance-per-repository/)**
- **[AI-Powered Remediation](https://docs.nirmata.io/docs/control-hub/remediations/)**
- **[AI-Powered Violation Insights](https://docs.nirmata.io/docs/control-hub/policy_reports/violation_insights/)**
- **[Suppress Policy Violations](https://docs.nirmata.io/docs/control-hub/policy_reports/suppress_policy_reports/)**
- **[GitOps for PolicySets](https://docs.nirmata.io/docs/control-hub/policy-sets/)**
- **[Jira Integration](https://docs.nirmata.io/docs/control-hub/policy_reports/create_jira_tickets/)**

## Installation

Nirmata DevSecOps Platform (NDP) can be deployed using the Helm charts provided in the repository below. Follow the instructions in the repository's README for more details.

### Clone the repository

```bash
git clone https://github.com/nirmata/nch-charts.git
cd nch-charts
git checkout release/4.24
make help
```

## Upgrade from Release 4.22

To upgrade a system from 4.22.x to 4.24.0, you must render the charts provided in [https://github.com/nirmata/nch-charts](https://github.com/nirmata/nch-charts).

### Step 1: Clone the Helm chart repository

```bash
git clone https://github.com/nirmata/nch-charts.git
cd nch-charts
git checkout release/4.24
make help
```

### Step 2: Edit the values file

Edit the values file `./config/values/environments/prod.yml`.

This should be the only values file you need to modify. In it, specify the values for the system you are upgrading:

- **Namespace**
- **Requests/limits for each service**
- **Replicas for each service**
- **Bedrock inference profile**
- **NDP enabled (true/false)**
- **Image registry**
- **MongoDB configuration**: hosted service versus local deployment, credentials, authorization, and encryption parameters
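
For illustration only (these key names are hypothetical; the authoritative schema is the `prod.yml` shipped in the chart repository), the values file might look like this:

```yaml
# Hypothetical sketch of ./config/values/environments/prod.yml; key names are
# illustrative, not the chart's actual schema.
namespace: nirmata
ndp: true
imageRegistry: ghcr.io/nirmata
bedrock:
  inferenceProfileArn: arn:aws:bedrock:<region>:<account>:inference-profile/<profile-name>
mongodb:
  hosted: false                       # hosted service versus local deployment
  credentialsSecret: mongodb-credentials
services:
  policies:
    replicas: 2
    resources:
      requests: { cpu: 500m, memory: 1Gi }
      limits: { cpu: "1", memory: 2Gi }
```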

### Step 3: Render the Kubernetes manifests

#### NDP setup

```bash
make render-all ENV=prod NDP=true OUT=<output-directory>
```

## Manifest Changes

To help you apply all the YAML changes thoroughly and safely, this section highlights the main changes from 4.22.

### Key Changes

1. **Zookeeper Removal**: Zookeeper is no longer required and must be removed

2. **Environment Variables**: 
   - `nirmata.workflow.usecurator` added to catalog and config Deployments

3. **Policies Service Split**: Split into 3 different deployments:
   - **Policies**: exposes the policies API
   - **Policies-processor**: implements background tasks
   - **Policies-event-processor**: processes events coming from the clusters
   
   The deployments share the same image. The identity of each pod is defined by the environment variables `nirmata.policies.api`, `nirmata.policies.processor`, and `nirmata.policies.event.processor` (see the sketch after this list).

4. **New Environment Variables**: Added 2 environment variables:
   - `nirmata.llm.apps.host`
   - `nirmata.datapipeline.enabled`

5. **LLM Apps Service**: All Nirmata services implementing AI features must send their AWS Bedrock requests to the llm-apps service.

6. **Environments Service Split**: Split into 2 deployments:
   - **Environments**: provides the API
   - **Environment-processor**: implements background tasks
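
As a sketch of how these identity flags might appear in a rendered Deployment (illustrative only; the actual manifests are generated by the charts, and the image name is a placeholder):

```yaml
# Illustrative excerpt for the policies-event-processor Deployment; all three
# policies deployments use the same image and differ only in these flags.
containers:
- name: policies
  image: <shared-policies-image>
  env:
  - name: nirmata.policies.api
    value: "false"
  - name: nirmata.policies.processor
    value: "false"
  - name: nirmata.policies.event.processor
    value: "true"
```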

## AI Configuration

This release introduces two new AI-powered features: Summarization & Prioritization, and Remediation.

These features are exclusively compatible with AWS Bedrock, the AI backend service validated by Nirmata. We have validated these features using two models: Anthropic Claude Sonnet 3.7 and Anthropic Claude Sonnet 4.0.

Note that if you intend to use Claude Sonnet 4.0, it is crucial to increase its model quotas to at least match the default quotas of Claude Sonnet 3.7. The default quotas for Sonnet 4.0 are significantly lower, which can lead to frequent request throttling.
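
You can inspect the Bedrock quotas currently applied to your account with the AWS Service Quotas CLI (a standard AWS command, shown here as a sanity check):

```bash
# List the Bedrock quotas applied to your account in the target region
aws service-quotas list-service-quotas \
    --service-code bedrock \
    --region <your-region> \
    --no-cli-pager
```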

The inference profile must be created in the AWS account of the customer.

### Inference Profile Creation

To create an inference profile, first locate the ARN of your desired model (Claude Sonnet 4.0 or Claude Sonnet 3.7). Model ARNs follow the formats shown below. You can then use the ARN to create the profile.

```text
arn:aws:bedrock:<region>:<your-aws-account-number>:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0
```

or

```text
arn:aws:bedrock:<region>:<your-aws-account-number>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0
```

```bash
# List existing inference profiles
aws bedrock list-inference-profiles --region <your-region> --no-cli-pager

# Create an inference profile from the model ARN
aws bedrock create-inference-profile \
    --inference-profile-name <profile-name> \
    --model-source copyFrom='<model-arn>' \
    --region <your-region>
```

### Pod Identity Configuration for EKS Clusters

To establish a trust relationship between Nirmata's llm-apps pods and the AWS Bedrock service, you must first configure an AWS IAM role. This role is assumed by the llm-apps pods, which are responsible for sending all Bedrock API requests, so all pods within this deployment need to be trusted by AWS.

Create an IAM role named `nirmata-bedrock-role` in your AWS Console. Then, attach the `AmazonBedrockFullAccess` policy to this role.

Then select the **Trust relationships** tab and click on **Edit trust policy**. Insert the following JSON:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
```

The next step links the `nirmata-bedrock-role` IAM role to the Kubernetes service account used by the llm-apps pods: `llm-apps`. This association tells EKS that any pod using the `llm-apps` service account should be allowed to assume the `nirmata-bedrock-role`.

```bash
# Replace placeholders with your values
aws eks create-pod-identity-association \
  --cluster-name <YOUR-CLUSTER-NAME> \
  --namespace <NAMESPACE> \
  --service-account llm-apps \
  --role-arn arn:aws:iam::<AWS-ACCOUNT-ID>:role/nirmata-bedrock-role
```

The EKS Pod Identity Agent will automatically handle injecting the necessary AWS credential environment variables into the pod.
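
To confirm the association was created, you can list the pod identity associations on the cluster (a standard AWS CLI call):

```bash
# Verify that the llm-apps service account is linked to the role
aws eks list-pod-identity-associations \
  --cluster-name <YOUR-CLUSTER-NAME> \
  --namespace <NAMESPACE>
```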

### Pod Identity Configuration for Non-EKS Clusters

For a non-EKS Kubernetes cluster, you would configure IAM Roles for Service Accounts (IRSA) by setting up your own OIDC provider.

The principle is the same as with EKS: your pod gets a short-lived token from its service account, and AWS exchanges that token for temporary IAM credentials. The main difference is that you are responsible for setting up the OIDC "trust bridge" that EKS normally manages for you.

The process involves three main parts:

1. **Kubernetes Cluster**: Your cluster's API server acts as an OIDC issuer, creating and signing JWTs (JSON Web Tokens) for your service accounts.
2. **Public OIDC Endpoint**: You expose the cluster's OIDC discovery documents to the internet so AWS can verify the JWTs.
3. **AWS IAM**: You configure IAM to trust your cluster's OIDC endpoint, allowing it to exchange the JWT for temporary role credentials.

Here is the step-by-step guide to setting this up.

#### Step 1: Expose the Cluster's OIDC Endpoint

Your Kubernetes API server already has an OIDC issuer, but it's typically not publicly accessible. You must expose the discovery endpoint (`/.well-known/openid-configuration`) and the JSON Web Key Set (JWKS) URL to the public internet so AWS can reach them.

This is often done using an Ingress controller or a dedicated proxy service. The public URL will be your OIDC provider URL (e.g., `https://oidc.your-domain.com`).
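
Before wiring up the Ingress or proxy, you can check what the cluster itself serves at these paths; both are standard Kubernetes endpoints, queried here through the API server with `kubectl`:

```bash
# Discovery document that the public endpoint must serve
kubectl get --raw /.well-known/openid-configuration

# JWKS that AWS uses to verify service account JWTs
kubectl get --raw /openid/v1/jwks
```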

#### Step 2: Create an OIDC Identity Provider in IAM

Next, you need to tell AWS to trust your cluster's public OIDC endpoint.

**Get the Root CA Thumbprint**: You need the thumbprint of the certificate chain for your OIDC endpoint. You can get this with `openssl`.

```bash
# Replace oidc.your-domain.com with your public OIDC URL
openssl s_client -servername oidc.your-domain.com -showcerts \
  -connect oidc.your-domain.com:443 < /dev/null 2>/dev/null \
  | openssl x509 -fingerprint -noout
```

This outputs a fingerprint in the form `SHA1 Fingerprint=9E:99:A4:...`. Use the hex hash with the colon separators removed (e.g., `9E99A48A9960D1492597E0D9C9287EE1D16652C5`).

**Create the IAM OIDC Provider**: Use the public URL and the thumbprint to create the provider in AWS.

```bash
aws iam create-open-id-connect-provider \
  --url https://oidc.your-domain.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list <YOUR_THUMBPRINT>
```

#### Step 3: Create the IAM Role and Trust Policy

This step is similar to the EKS configuration, but the trust policy is different. It will trust the OIDC provider you just created instead of the EKS service.

Create a file named `non-eks-trust-policy.json`:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.your-domain.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.your-domain.com:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
                }
            }
        }
    ]
}
```

**Key Differences**:
- **Principal**: The Federated principal points to the OIDC provider you created.
- **Action**: The action is `sts:AssumeRoleWithWebIdentity`.
- **Condition**: The condition checks the JWT's `sub` (subject) claim to ensure it matches the specific namespace and service account name.

Create the role using this policy and attach the permissions your pods need (for the llm-apps pods, `AmazonBedrockFullAccess`, as in the EKS setup).
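
As a sketch of that step (the role name is illustrative; the attached policy mirrors the EKS setup above):

```bash
# Create the role using the web identity trust policy
aws iam create-role \
  --role-name nirmata-bedrock-role-noneks \
  --assume-role-policy-document file://non-eks-trust-policy.json

# Attach the Bedrock permissions the llm-apps pods need
aws iam attach-role-policy \
  --role-name nirmata-bedrock-role-noneks \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess
```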

#### Step 4: Configure the Pod

Your application's AWS SDK needs to be told which role to assume and where to find the JWT. This is done by injecting specific environment variables into the pod.

The service account token is automatically mounted into the pod at `/var/run/secrets/kubernetes.io/serviceaccount/token`.

Here is an example of the environment variables for the llm-apps pods:

```yaml
env:
# Tells the AWS SDK which role to assume
- name: AWS_ROLE_ARN
  value: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/MyNonEksRole"
# Tells the AWS SDK where to find the token for exchange
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/token
# Standard AWS environment variables
- name: AWS_REGION
  value: "us-west-2"
```
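
One caveat, depending on your cluster configuration: AWS only accepts a JWT whose audience matches the client ID registered on the IAM OIDC provider (`sts.amazonaws.com`), and the default mounted token carries the cluster's internal audience. If token exchange fails for that reason, a projected service account token with an explicit audience is one option; a minimal sketch:

```yaml
# Hypothetical projected-token volume; mount it into the pod and point
# AWS_WEB_IDENTITY_TOKEN_FILE at the mounted path instead.
volumes:
- name: aws-token
  projected:
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com   # must match the IAM OIDC provider client ID
        expirationSeconds: 3600
        path: token
```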

## Known Issues

1. **Kube Controller Upgrade**: Upgrading to the latest nirmata-kube-controller doesn't work out of the box. It requires manual YAML updates. This will be fixed in a future patch.

2. **PolicySets Configuration**: New PolicySets can only be used with the Kyverno Operator after configuring the secret to access the repository.

---

## v4.24.4


## Introduction

These release notes highlight the bug fixes included in the NDP Private Edition v4.24.4 patch release.

## Bugs Fixed

The following bugs have been fixed in this release:

- Fixed inability to update catalog helm application values.yaml after creation

- Restored broader scrollbar width in UI (regression fix)

- Backported fabric8 version change in nirmata kube controller to fix connectivity issues for k8s versions 1.32 and above

- Fixed NullPointerException preventing users from exporting files from environments

- Fixed HPA max replica count not syncing from cluster to Nirmata UI

- Fixed cronjobs not visible at environment level (only showed at namespace level)

- Fixed Axios 500 error preventing nirmata kube controller upgrade on Diamanti clusters

- Fixed non-admin users being unable to import secrets at the environment level (catalog import worked)

- Fixed GitOps owners not visible to non-admin users with edit permissions

- Added logging for timeseries data conversion status during upgrade from 4.22

- Fixed error when sorting nodes by CPU(%) or Memory(%) columns

- Fixed application deployment failure when GitHub directory path contains whitespace

- Added support for teams as GitOps owners (in addition to individual users)

- Fixed deleted teams still appearing in environment access control

- Fixed UI incorrectly showing new nirmata kube controller version available after upgrade

- Added timezone support for pod logs (respects cluster timezone instead of GMT)

- Fixed NullPointerException causing deployment validation errors for persistent volumes


---

## v4.24.6


## Introduction

These release notes highlight the bug fixes included in the NDP Private Edition v4.24.6 patch release.

## Bugs Fixed

The following bugs have been fixed in this release:

- Fixed missing close button on environment list window during catalog app deployment

- Fixed Y-axis truncation in analytics timeseries charts so formatted values for metrics like memory and CPU usage display properly


