---
title: "Nirmata Assistant"
description: "AI-powered personal agent for platform engineers — policy development, testing, and Kubernetes operations from your terminal."
diataxis: how-to
applies_to:
  product: "nctl"
audience: ["developer","platform-engineer"]
last_updated: 2026-04-16
url: https://docs.nirmata.io/docs/ai/nctl-ai/
---


> **Applies to:** nctl 4.0 and later

The Nirmata Personal Agent (`nctl ai`) runs on your workstation and integrates directly into your development workflow, offering specialized guidance and automation without requiring a dedicated server.

`nctl ai` is built with a security-first design: it accesses only the directories you explicitly allow, loads only built-in skills plus any skills you explicitly provide (with `--skills`), and asks for your confirmation before performing any operation. See [Security](security/) for details.

## Step-by-Step Install

Install `nctl` using Homebrew:

```sh
brew tap nirmata/tap
brew install nctl
```

For more installation options, see [nctl installation](https://downloads.nirmata.io/nctl/downloads/).

Run the personal agent in interactive mode:

```sh
nctl ai
```

You will be prompted to enter your business email to:
- sign up for a free trial
- or sign in to your account

```text
Using nctl AI requires authentication with Nirmata Control Hub to access 
AI-enabled services. Please enter your business email to sign up for a 
free trial, or sign in to your account

Enter email: ****@******.com

A verification code has been sent to your email.
Enter verification code: ******

Email verified successfully!
Your credentials have been fetched and successfully saved.

👋 Hi, I am your Nirmata AI Platform Engineering Assistant!

I can help you automate security, compliance, and operational best practices 
across your clusters and pipelines.

💡 Here are some tasks I can do for you, or ask anything:
  ▶ scan clusters
  ▶ generate policies and tests
  ▶ optimize costs

💡 type 'help' to see commands for working in nctl ai

───────────────────────────────────────────────────────────────────────────────────────
>
───────────────────────────────────────────────────────────────────────────────────────
```

Try some sample prompts like:
* scan my cluster
* generate a policy to require pod labels
* summarize violations across my clusters
* perform a Kyverno health check

**Non-Interactive Mode**:

You can also provide a prompt directly for single-shot requests:

```bash
nctl ai --prompt "create a policy that requires all pods to have resource limits"
```

See [Command Reference](/docs/nctl/commands/nctl_ai/) for full details.


## Accessing Nirmata Control Hub

After successful authentication, you can also access the Nirmata Control Hub web interface:

1. Navigate to https://nirmata.io
2. Use the same email address you provided during nctl setup
3. Use the password you created in the authentication process

Alternatively, you can sign up for a [15-day free trial](https://nirmata.io/security/signup-pa.html) and log in manually using the CLI:

```bash
nctl login --userid YOUR_USER_ID --token YOUR_API_TOKEN
```


## Key Capabilities

`nctl ai` is a personal agent specializing in Kubernetes, Policy as Code, and Platform Engineering.
It provides comprehensive support across these domains:

### Policy as Code
- Generate Kyverno policies from natural language descriptions
- Create and execute comprehensive Kyverno CLI and Chainsaw tests
- Generate policy exceptions for failing workloads
- Upgrade Kyverno policies from older versions to CEL
- Convert policies from OPA/Sentinel to Kyverno

### Platform Engineering
- Troubleshoot Kyverno engine, webhook, and controller issues
- Get policy recommendations for your environments
- Manage compliance across clusters
- Manage Nirmata agents across your clusters
- Install and configure Kyverno and other controllers


## Available Tools

The agent has access to tools for command execution, Kyverno and policy workflows, file system operations, Slack and email, and task management. See the [Available Tools](/docs/ai/nctl-ai/tools/) reference for the full list in a searchable table.

**Examples:**

List Slack channels:
```bash
nctl ai --prompt "list my slack channels"
```

Send a message to a channel:
```bash
nctl ai --prompt "scan my cluster and send the report to dev-general channel"
```

## Available Skills

`nctl ai` loads specialized skills dynamically based on your task (policy generation, cluster assessment, troubleshooting, cost management, and more). See the [Available Skills](/docs/ai/nctl-ai/skills/) reference for the full list in a table.

### Skills Safety

Built-in skills are curated for safety: they require only read-only permissions, do not write to external URLs, and follow security best practices.

You can also [add your own skills](extend/#adding-custom-skills) to customize the agent.

## Command Reference

The authoritative reference for `nctl ai` flags and examples is the [nctl ai command documentation](/docs/nctl/commands/nctl_ai/). That page is maintained to match the CLI.

- **In interactive mode:** type `help` for a full list of commands and capabilities.
- **From the terminal:** run `nctl ai --help` for the latest usage, examples, and flags from your installed version.

## More Topics

| Topic | Description |
|-------|-------------|
| [Security](security/) | Filesystem sandboxing, permission checks, and automation flags |
| [Session & Task Management](session-management/) | Sessions, task tracking, execution limits, and plan mode |
| [AI Provider Configuration](providers/) | Nirmata, Anthropic, Gemini, Azure OpenAI, and Bedrock |
| [Extending Nirmata Assistant](extend/) | MCP servers, custom skills, and running as an MCP server |


---

## Available Tools


The following tools are available to `nctl ai`. The agent selects the appropriate tool based on your request.

## Tools by category

| Category | Tool | Description |
|----------|------|-------------|
| **Command execution** | `bash` | Execute a bash command. Use when you need to run a shell command. |
| **Command execution** | `kubectl` | Command-line tool for interacting with Kubernetes clusters. |
| **Policy** | `generate_policy` | Generate a Kyverno policy. |
| **Policy** | `generate_kyverno_tests` | Generate Kyverno CLI tests for a policy. Returns filenames and contents for `kyverno-test.yaml`, `resources.yaml`, and optionally `variables.yaml`. |
| **Policy** | `generate_chainsaw_tests` | Generate or update Chainsaw tests for Kyverno policies. |
| **Policy** | `run_kyverno_tests` | Test Kyverno policies using the Kyverno CLI test command. |
| **Security** | `remediate` | Fix policy violations for a resource. |
| **Security** | `scan_kubernetes_resources` | Scan Kubernetes resource files against policies and return results. |
| **Security** | `scan_kubernetes_cluster` | Scan Kubernetes resources in a cluster against policies and return results. |
| **Security** | `scan_terraform` | Scan Terraform resources against policies and return results. |
| **Security** | `scan_prompt` | Scan LLM prompts against security policies for injection attacks, jailbreak patterns, PII leakage, credential exposure, and other risks. Accepts inline content, file paths, or directories. Returns policy evaluation results with a risk score. |
| **Security** | `skills_scan` | Scan a skill (folder or artifact) against policies and return a signed/hashed receipt with decision and findings. Normalizes the skill directory, applies Kyverno ValidatingPolicies, computes a trust score and decision (Allow/Review/Deny), and produces a receipt for later verification. |
| **Communication** | `email` | Send an email via Nirmata Control Hub. |
| **Communication** | `list_slack_channels` | List all Slack channels the user has access to. |
| **Communication** | `send_slack_message` | Send a message to a Slack channel via Nirmata Control Hub. |
| **File system** | `read_file` | Read the complete contents of a file. |
| **File system** | `read_multiple_files` | Read the contents of multiple files in a single operation. |
| **File system** | `write_file` | Create a new file or overwrite an existing file with new content. |
| **File system** | `modify_file` | Update a file by finding and replacing text. Pattern matching without needing exact character positions. |
| **File system** | `copy_file` | Copy files and directories. |
| **File system** | `move_file` | Move or rename files and directories. |
| **File system** | `delete_file` | Delete a file or directory from the file system. |
| **File system** | `create_directory` | Create a new directory or ensure a directory exists. |
| **File system** | `list_directory` | Get a detailed listing of all files and directories in a specified path. |
| **File system** | `tree` | Return a hierarchical JSON representation of a directory structure. |
| **File system** | `get_file_info` | Retrieve detailed metadata about a file or directory. |
| **File system** | `search_files` | Recursively search for files and directories matching a pattern. |
| **File system** | `search_within_files` | Search for text within file contents. Scans text files for matching substrings; binary files are excluded. Reports file paths and line numbers. |
| **File system** | `list_allowed_directories` | Return the list of directories the server is allowed to access. |
| **File system** | `add_allowed_directory` | Add a directory to the allowed list for filesystem operations. Use when you get errors about directories being outside allowed directories. |
| **Utility** | `todo` | Manage a todo list (add, remove, update, list items). Automatically prevents duplicate items. |
| **Utility** | `worker` | Manage background workers for concurrent task processing. |

## Slack integration

Slack tools (`list_slack_channels`, `send_slack_message`) require [Slack integration configured in Nirmata Control Hub](/docs/control-hub/settings/integrations/). No additional environment variables are needed once configured in Nirmata Control Hub.

## Extending with MCP

You can add more tools by connecting [MCP servers](/docs/ai/nctl-ai/extend/#extending-with-mcp-servers). See [Extending Nirmata Assistant](/docs/ai/nctl-ai/extend/) for configuration.


---

## Available Skills


`nctl ai` loads specialized skills dynamically based on your task. The following built-in skills are available.

## Skills by category

| Category | Skill | Description |
|----------|------|-------------|
| **Design** | brand-guidelines | Applies Nirmata's official brand colors and typography to generated content. Use when creating emails, reports, presentations, Slack/Teams messages, or any artifact requiring Nirmata branding or company design standards. |
| **Policy** | chainsaw-tests | Generate and run Chainsaw E2E integration tests. Use when the user asks for chainsaw tests, e2e tests, or integration tests, or wants to test policies in a real Kubernetes cluster. Creates test manifests and validates admission webhook behavior for ValidatingPolicy, MutatingPolicy, and ClusterPolicy. |
| **Setup** | cluster-setup | Set up a local Kubernetes development environment with Docker, Kind, Kyverno, and testing tools. For developers who can install tools locally. |
| **Policy** | converting-chainsaw-tests | Convert Chainsaw tests from ClusterPolicy (kyverno.io/v1) to ValidatingPolicy (policies.kyverno.io/v1alpha1) format. Use when converting existing test suites to work with new Kyverno ValidatingPolicy resources. |
| **Policy** | converting-policies | Convert any policy to modern Kyverno ValidatingPolicy format. Use when the user asks to convert, migrate, upgrade, or transform a policy. Handles ClusterPolicy to ValidatingPolicy, OPA Rego migration, Gatekeeper constraint templates, Sentinel policies, and cross-engine policy translation. |
| **Cost** | cost-management | Installs, configures, and validates the Nirmata Cost Management Add-on. Deploys OpenCost for cost visibility, Prometheus integration, Grafana dashboards for chargeback, and Kyverno cost guardrails for namespace labeling and resource requests. Supports kind, EKS, GKE, and AKS with real cloud pricing. Use when setting up cost visibility, cost allocation, cost hygiene labels, or troubleshooting OpenCost. |
| **Setup** | installing-remediator-agent | Installs and configures the Remediator Agent for policy violation remediation. Guides through environment selection (ArgoCD Hub, Local Cluster, VCS Target), LLM provider setup (NirmataAI, AWS Bedrock, Azure OpenAI), GitHub auth (App or PAT), action config (CreatePR, CreateIssue), scheduling, and verification. Use when setting up automated AI-powered policy remediation. |
| **Compliance** | cis-benchmark-scan | Scans Kubernetes clusters against CIS Benchmarks using nctl scan compliance and generates a full markdown compliance report. No policies are deployed to the cluster — nctl evaluates them locally with results stored as snapshots. Supports EKS (CIS EKS Benchmark v1.7.0), AKS, GKE, and generic Kubernetes (CIS Kubernetes Benchmark v1.8.0). Covers RBAC and Pod Security controls, plus AWS API checks for Control Plane (Section 2) and cluster networking (Section 5.3–5.5) on EKS. Use when performing CIS compliance audits, generating compliance reports for security teams, or assessing cluster security posture against industry benchmarks. |
| **Compliance** | compliance-evidence | Collects and packages Kubernetes-native compliance evidence for external auditors. Exports RBAC configurations, NetworkPolicies, admission webhooks, Kyverno PolicyReports and PolicyExceptions, and generates a timestamped MANIFEST.md with control-ID mapping and a manual evidence checklist. Supports NSA/CISA, NIST SP 800-53, SOC 2 Type II, ISO/IEC 27001, and PCI-DSS. Use when preparing evidence packages for SOC 2, ISO 27001, NIST, or PCI-DSS auditors, or to document accepted risks via PolicyExceptions. |
| **Compliance** | compliance-scan | Scans Kubernetes clusters against regulatory compliance standards using nctl scan compliance and generates a full markdown report with control-ID mapping. Supports NSA/CISA Kubernetes Hardening Guide, NIST SP 800-53, SOC 2 Type II, ISO/IEC 27001, and PCI-DSS. No policies are deployed — nctl evaluates them locally and stores results as snapshots. Use when performing regulatory audits, generating SOC 2 or ISO 27001 evidence, or assessing Kubernetes security posture against NIST or NSA/CISA frameworks. |
| **Compliance** | kyverno-compliance-management | Install Kyverno or Nirmata Enterprise Kyverno with optional compliance dashboards. Detects if Kyverno is missing and guides installation. Supports Pod Security Standards (PSS Baseline, PSS Restricted), RBAC Best Practices, and Grafana compliance visualization. Use when installing Kyverno/Nirmata Enterprise for Kyverno, setting up Kubernetes compliance, or configuring PSS or RBAC policies. |
| **Policy** | kyverno-policies | Generate and create Kyverno policies from natural language requirements. Use when the user asks to generate, create, or write a policy, or needs help with policy development. Covers ValidatingPolicy, MutatingPolicy, GeneratingPolicy, ClusterPolicy, and other Kyverno policy types. |
| **Policy** | kyverno-tests | Generate and run Kyverno CLI unit tests for fast offline policy validation. Use when the user asks for unit tests, kyverno test, cli tests, or wants to test policies without a cluster. Creates kyverno-test.yaml files and runs the kyverno test command. |
| **Onboarding** | quickstart | First-run cluster assessment: checks cluster maturity, identifies issues, runs security scans, and recommends policy packs. Alias: assessment. Use on first launch, or when assessing a new cluster, running a health check, getting security recommendations, checking policy coverage, or identifying quick wins for Kubernetes governance. |
| **Policy** | recommend-policies | Analyzes Kubernetes clusters to recommend Kyverno policies based on installed workloads and platform type. Detects baseline security gaps (pod-security, RBAC, workload-security), platform-specific needs (EKS, OpenShift), and add-on policies (Istio, Linkerd, Flux, Tekton, Veeam Kasten, KubeVirt, Karpenter, ArgoCD, Crossplane). Use when assessing cluster security posture, implementing policy governance, or ensuring compliance. |
| **Policy** | policy-exception | Generate PolicyExceptions for running workloads so that Enforce or Deny mode does not block existing workloads. Use when migrating policies from Audit to Enforce by creating exceptions for current violations. |
| **Troubleshooting & Operations** | troubleshooting-kyverno | Diagnoses Kyverno issues: webhook timeouts, OOMKilled pods, CrashLoopBackOff, policy failures, permission errors, performance degradation, report accumulation. Use when policies not enforcing, admission controller crashing, context deadline exceeded, client-side throttling, or cloud-specific failures on EKS/GKE/AKS. |
| **Troubleshooting & Operations** | troubleshooting-workloads | Troubleshoot Kubernetes workloads, pods, and applications in any namespace. Diagnoses CrashLoopBackOff, ImagePullBackOff, Pending pods, OOMKilled, failed probes, resource constraints. Use when debugging pods, investigating application failures, pods not starting, containers crashing, high restart counts, or services unreachable. Recommends Kyverno policies to prevent recurrence. |

## Adding custom skills

You can extend the agent with your own skills. See [Adding Custom Skills](/docs/ai/nctl-ai/extend/#adding-custom-skills) for loading custom skill directories and creating `SKILL.md` files.


---

## Security


> **Applies to:** nctl 4.0 and later

`nctl ai` is built with a security-first approach. The agent operates within strict boundaries and always asks for permission before performing operations.

## Allowed Directories

By default, `nctl ai` can only access the current working directory. To grant access to additional directories, use the `--allowed-dirs` flag:

```bash
nctl ai --allowed-dirs "/path/to/policies,/tmp"
```

You can also set the `NIRMATA_AI_ALLOWED_DIRS` environment variable:

```bash
export NIRMATA_AI_ALLOWED_DIRS="/path/to/policies,/tmp"
nctl ai
```

The agent will refuse to read, write, or execute files outside of the allowed directories, ensuring your filesystem remains protected.
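When scripting, it can help to assemble the allowed-directories value defensively, so a mistyped path is noticed rather than silently ignored. A minimal sketch in bash (the `build_allowed_dirs` helper and the example paths are illustrative, not part of nctl):

```shell
#!/usr/bin/env bash
# Build a comma-separated allowed-dirs list from candidate paths,
# keeping only directories that actually exist.
build_allowed_dirs() {
  local result="" dir
  for dir in "$@"; do
    if [ -d "$dir" ]; then
      if [ -z "$result" ]; then
        result="$dir"
      else
        result="$result,$dir"
      fi
    else
      echo "warning: skipping missing directory: $dir" >&2
    fi
  done
  printf '%s\n' "$result"
}

# Example (paths are placeholders):
# export NIRMATA_AI_ALLOWED_DIRS="$(build_allowed_dirs /path/to/policies /tmp)"
```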

## Permission Checks

Before performing any operation that modifies your system (writing files, executing commands, applying Kubernetes resources), `nctl ai` prompts for explicit confirmation. This ensures you remain in control of all changes.

For automated workflows where manual confirmation is not practical, you can disable permission checks:

```bash
nctl ai --skip-permission-checks --prompt "scan my cluster"
```

To allow destructive operations (e.g., deleting resources) in non-interactive mode, combine the `--force` flag with both `--prompt` and `--skip-permission-checks`:

```bash
nctl ai --force --skip-permission-checks --prompt "delete unused configmaps"
```

> **Warning:** Use `--skip-permission-checks` and `--force` with caution. These flags bypass safety prompts and should only be used in trusted automation pipelines.

## Security Summary

| Feature | Default Behavior | Override |
|---------|-----------------|----------|
| File system access | Current working directory only | `--allowed-dirs` |
| Tool execution | Requires user confirmation | `--skip-permission-checks` |
| Destructive operations | Blocked in non-interactive mode | `--force` (requires `--skip-permission-checks` and `--prompt`) |
| Skill loading | Built-in skills only | `--skills` |
| TLS verification | Enforced | `--insecure` (not recommended) |
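In automation, these flags are typically combined so the agent runs unattended inside a narrow sandbox. A sketch of a CI step, assuming `nctl` is installed and authenticated on the runner (the step name and workspace variable are placeholders; the flags are the ones in the table above):

```yaml
# Hypothetical CI step: unattended, read-oriented scan in a restricted sandbox.
- name: Scan cluster with nctl ai
  run: |
    nctl ai \
      --allowed-dirs "$GITHUB_WORKSPACE" \
      --skip-permission-checks \
      --prompt "scan my cluster and summarize violations"
```

Note that `--force` is deliberately omitted, so destructive operations stay blocked.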


---

## Session & Task Management


> **Applies to:** nctl 4.0 and later

## Session Management

Sessions automatically capture your conversation history, tool calls, and results. You can resume any previous session to continue where you left off.

**Interactive commands:**

| Command | Description |
|---------|-------------|
| `sessions` | List all available sessions |
| `save` | Save current session |
| `new` | Create a new session |
| `resume <id>` | Resume a specific session (or `latest`) |
| `exit` / `quit` / `q` | Save session and exit |
| `exit-nosave` | Exit without saving |

**CLI flags:**

```bash
# Start a new session
nctl ai --new-session

# Resume the most recent session
nctl ai --resume-session latest

# Resume a specific session by ID
nctl ai --resume-session 20260210-0206

# List all available sessions
nctl ai --list-sessions

# Delete a session by ID
nctl ai --delete-session 20260210-0206
```

Sessions work with any provider (Nirmata, Anthropic, Bedrock, etc.) and are saved periodically during the conversation. Use `Ctrl+D` to explicitly save and exit, or `Ctrl+C` to exit without saving (the session ID is displayed for later resuming).

## Task Management

`nctl ai` tracks tasks automatically during complex, multi-step operations. The agent creates and updates a task list as it works, giving you visibility into progress.

**Interactive commands:**

| Command | Description |
|---------|-------------|
| `tasks` | Show current todo list and task progress |
| `task <N>` | Show detailed information for task N (including worker output) |

The task list updates in real time as the agent works through multi-step workflows like cluster scanning, policy generation, or compliance assessments.

## Execution Limits

Two flags control how much work the agent is allowed to do in a single run:

| Flag | Default | Description |
|------|---------|-------------|
| `--max-tool-calls` | 200 | Maximum total tool calls before the agent stops |
| `--max-background-workers` | 3 | Maximum parallel background workers spawned per tool call |

These are useful in non-interactive pipelines to cap cost and execution time:

```bash
nctl ai --max-tool-calls 50 --max-background-workers 1 --prompt "scan my cluster"
```

## Plan Mode

Plan mode adds a structured review step before the agent executes any actions. When enabled with `--plan`, the agent must first create a written plan and present it to you for approval before running any tools that modify state.

```bash
nctl ai --plan --prompt "generate pod security policies for my cluster"
```

### How It Works

1. **Plan creation** — The agent analyzes your request and creates a structured `PLAN.md` listing all tasks it intends to execute.
2. **Review** — The plan is displayed in your terminal. You are prompted to approve, reject, or provide feedback.
3. **Approval** — On approval, the plan is converted to a task list and execution begins.
4. **Feedback loop** — If you describe changes instead of approving, the agent updates the plan and prompts again.
5. **Rejection** — Replying with `no` (or similar) discards the plan without executing anything.

### Approval Prompt

After the plan is displayed, you will see:

```text
Does this plan look good? Reply with yes/no to approve or reject, or describe any changes you'd like.
```

- `yes`, `y`, `approve`, `ok`, `looks good` — approve and begin execution
- `no`, `n`, `reject`, `cancel` — discard the plan
- Any other text — treated as feedback; the agent revises the plan and prompts again

### What Runs Without Approval

Read-only and scanning tools are always available so the agent can gather context while building the plan:

- File reads and directory listings
- `scan_kubernetes_cluster`, `scan_kubernetes_resources`, `scan_terraform`
- File search tools

All tools that write files, execute commands, or modify resources are blocked until the plan is approved.

### Interactive Plan Commands

| Command | Description |
|---------|-------------|
| `plan` | Show the current plan and its status |

### When to Use Plan Mode

Plan mode is useful for complex or multi-step requests where you want to review and confirm the agent's intended actions before any changes are made — for example, generating a set of policies across multiple directories, or remediating violations across a cluster.

For simple, single-step requests, the agent handles planning automatically. Use `--plan` to force the review step even for simpler tasks.


---

## AI Provider Configuration


> **Applies to:** nctl 4.0 and later

By default, `nctl ai` uses Nirmata Control Hub as its AI provider. You can configure it to work with other AI providers using the `--provider` flag.

## Nirmata (Default)

The default provider uses Nirmata Control Hub for AI services. Authentication uses `nctl login` — no additional setup needed.

```bash
nctl ai --prompt "generate a policy to require pod labels"
```

## Anthropic Claude

Set the environment variable with your Anthropic API key:

```bash
export ANTHROPIC_API_KEY=<your-api-key>
```

```bash
nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."
```

Get your API key from [Anthropic Console](https://console.anthropic.com/). Uses Claude's latest models by default.

## Google Gemini

Set the environment variable with your Google AI API key:

```bash
export GEMINI_API_KEY=<your-api-key>
```

```bash
nctl ai --provider gemini --prompt "what is 5+5? answer in one word"
```

- Environment variable is `GEMINI_API_KEY` (not `GOOGLE_API_KEY`)
- Default model: `gemini-2.5-pro`
- Free tier rate limit: approximately 2 requests per minute
- Get your API key from [Google AI Studio](https://ai.google.dev/)

## Azure OpenAI

Set the environment variables with your Azure OpenAI endpoint and API key:

```bash
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"
```

```bash
nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"
```

- Requires both endpoint URL and API key
- You must specify the model with `--model` (e.g., `gpt-4o`, `gpt-4`, `gpt-35-turbo`)
- Get your credentials from [Azure Portal](https://portal.azure.com/)

## Amazon Bedrock

Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.

**Step 1:** Login to AWS SSO (if using SSO):

```bash
aws sso login --profile your-profile-name
```

**Step 2:** Set your AWS profile:

```bash
export AWS_PROFILE=your-profile-name
```

**Step 3:** Verify credentials:

```bash
aws sts get-caller-identity
```

Expected output:

```json
{
    "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}
```

**Usage:**

```bash
nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"
```

- Model IDs must start with the `us.` prefix (e.g., `us.anthropic.claude-...`). Without it, you'll get an "on-demand throughput isn't supported" error.
- Supports Claude models from Anthropic available through Bedrock
- For more information, see [Amazon Bedrock Documentation](https://docs.aws.amazon.com/bedrock/)

## Provider Comparison

| Provider | Environment Variables | Model Selection | Notes |
|----------|----------------------|-----------------|-------|
| Nirmata (default) | Authentication via `nctl login` | Automatic | Includes access to Nirmata platform features |
| Anthropic | `ANTHROPIC_API_KEY` | Automatic | Best for Claude-specific features |
| Google Gemini | `GEMINI_API_KEY` | Default: gemini-2.5-pro | Free tier available with rate limits |
| Azure OpenAI | `AZURE_OPENAI_ENDPOINT`<br>`AZURE_OPENAI_API_KEY` | Required via `--model` | Enterprise-ready with Azure integration |
| Amazon Bedrock | `AWS_PROFILE` (or AWS credentials) | Required via `--model` | AWS-native with IAM authentication |
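If you switch providers often, a small shell helper can derive the `--provider` arguments from whichever credentials are present, falling back to the Nirmata default. A sketch (the `pick_provider_args` function is illustrative, not part of nctl; the flags and model IDs come from the sections above):

```shell
# Choose nctl ai provider arguments based on available credentials.
pick_provider_args() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "--provider anthropic"
  elif [ -n "${GEMINI_API_KEY:-}" ]; then
    echo "--provider gemini"
  elif [ -n "${AZURE_OPENAI_API_KEY:-}" ] && [ -n "${AZURE_OPENAI_ENDPOINT:-}" ]; then
    echo "--provider azopenai --model gpt-4o"
  elif [ -n "${AWS_PROFILE:-}" ]; then
    echo "--provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0"
  else
    echo ""  # Nirmata default: no provider flag needed
  fi
}

# Example (not executed here):
# nctl ai $(pick_provider_args) --prompt "scan my cluster"
```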

## Usage Details

To view token consumption for the current session and exit without running any prompts:

```bash
nctl ai --usage-details
```

This prints per-provider token usage statistics, which are useful for monitoring costs across runs.

## Using AI/LLM Proxies

You can configure `nctl ai` to route requests through AI/LLM proxy services. This is useful for:
- Centralizing API key management
- Implementing rate limiting and cost controls
- Adding observability and monitoring
- Load balancing across multiple providers
- Using self-hosted AI gateways

Each provider supports proxy configuration through a base URL environment variable:

**Anthropic with Proxy:**

```bash
export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000

nctl ai --provider anthropic --prompt "Your prompt here"
```

**Google Gemini with Proxy:**

```bash
export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000

nctl ai --provider gemini --prompt "Your prompt here"
```

**Azure OpenAI with Proxy:**

```bash
export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000

nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"
```

- The proxy must be compatible with the provider's API format
- Popular proxy solutions include [LiteLLM](https://github.com/BerriAI/litellm), [OpenLLM](https://github.com/bentoml/OpenLLM), and enterprise gateways
- The base URL should include the protocol (http:// or https://) and port if needed
- When using a proxy, set `AZURE_OPENAI_ENDPOINT` to your proxy URL instead of your Azure endpoint


---

## Extending Nirmata Assistant


> **Applies to:** nctl 4.0 and later

## Extending with MCP Servers

The Model Context Protocol (MCP) allows you to extend `nctl ai` with additional capabilities by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.

### Configuration

To configure MCP servers, create a configuration file at `~/.nirmata/nctl/mcp.yaml`. To use a different path, pass `--mcp-config`:

```bash
nctl ai --mcp-config "/path/to/custom/mcp.yaml"
```

An example `~/.nirmata/nctl/mcp.yaml`:

```yaml
servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true
```

### Configuration Options

- `name`: Unique identifier for the MCP server
- `command`: Executable command to start the server (e.g., `node`, `python`, binary path)
- `args`: Array of command-line arguments passed to the server
- `env`: Environment variables required by the server (API keys, configuration values, etc.)
- `capabilities`: Defines what features the server provides:
  - `tools`: Server provides callable tools/functions
  - `prompts`: Server provides prompt templates
  - `resources`: Server provides data resources
  - `attachments`: Server can handle file attachments

> **Note:** Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.

### Common Use Cases

MCP servers can extend `nctl ai` with capabilities like:
- Sending emails and notifications
- Interacting with external APIs and services
- Accessing databases and data sources
- Integration with cloud platforms
- Custom business logic and workflows
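As a further illustration, an entry for a hypothetical Python-based database MCP server could look like this (the server name, script path, and `DATABASE_URL` value are invented for the example, not a real package):

```yaml
servers:
  - name: postgres-reader
    command: python
    args:
      - /path/to/mcp-postgres/server.py
    env:
      DATABASE_URL: postgres://user:pass@host:5432/db
    capabilities:
      tools: true
      prompts: false
      resources: true
      attachments: false
```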

## Adding Custom Skills

You can extend `nctl ai` with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.

### Loading Custom Skills

Use the `--skills` flag to load skills from any local directory:

```bash
nctl ai --skills "/path/to/custom-skills"
```

You can load multiple skill directories:

```bash
nctl ai --skills "/path/to/team-skills,/path/to/project-skills"
```

You can also set the `NIRMATA_AI_SKILLS` environment variable to always load your custom skills:

```bash
export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai
```

### Default Skills Directory

Skills placed in the `~/.nirmata/nctl/skills` directory are loaded automatically without requiring the `--skills` flag:

```text
~/.nirmata/nctl/skills/
  ├── kyverno-cli-tests/
  │   └── SKILL.md
  └── my-custom-skill/
      └── SKILL.md
```

### Creating a Skill File

Each skill is a Markdown file (named `SKILL.md`) containing domain knowledge, instructions, and best practices. Here's an example:

**Example: `~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md`**

````markdown
# Kyverno Tests (Unit Tests)

Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:

- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.

## Test File Organization

Organize Kyverno CLI test files in a `.kyverno-test` sub-directory where the policy YAML is contained.

```text
pod-security/
  ├── disallow-privileged-containers/
  │   ├── disallow-privileged-containers.yaml
  │   └── .kyverno-test/
  │       ├── kyverno-test.yaml
  │       ├── resources.yaml
  │       └── variables.yaml
  └── other-policies/
```
````

Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
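For instance, a skill enforcing the test layout above might ship a helper script that the agent can run before executing tests. A sketch, assuming the `.kyverno-test` layout described in the skill (the script and function names are illustrative):

```shell
#!/usr/bin/env bash
# check-test-layout.sh (illustrative): verify that every policy directory
# under the given root contains a .kyverno-test/kyverno-test.yaml file.
check_test_layout() {
  local root="$1" missing=0 dir
  for dir in "$root"/*/; do
    [ -d "$dir" ] || continue
    if [ ! -f "${dir}.kyverno-test/kyverno-test.yaml" ]; then
      echo "missing tests: $dir" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: check_test_layout pod-security
```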

### Skill Best Practices

- **Clear Structure**: Use headings and lists to organize information
- **Actionable Guidance**: Provide specific, actionable instructions
- **Examples**: Include code examples and sample outputs
- **Context**: Explain when and why to use specific approaches
- **Avoid Ambiguity**: Be explicit about requirements and expectations
- **Executable Scripts**: Include scripts that can be run locally to automate workflows

### How Skills Work

When you interact with `nctl ai`, the personal agent automatically:
1. Analyzes your request to determine the relevant domain
2. Loads applicable skills from the default directory and any `--skills` paths
3. Applies the guidance and best practices from those skills
4. Provides responses aligned with your custom knowledge base

> **Note:** Skills are loaded dynamically based on context. You don't need to restart `nctl ai` after adding or modifying skill files.

## Running as an MCP Server

Run the agent as an MCP server using stdio transport (default):

```sh
nctl ai --mcp-server
```

For Cursor and Claude Desktop, edit `~/.cursor/mcp.json` or `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}
```

You can also run the MCP server over HTTP for remote or networked setups:

```bash
nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080
```

To enable verbose logging from the MCP server (useful for debugging tool calls):

```bash
nctl ai --mcp-server -v 1
```


