---
title: "Extending Nirmata Assistant"
description: "Add MCP servers, custom skills, and run nctl ai as an MCP server for Cursor and Claude Desktop."
diataxis: how-to
applies_to:
  product: "nctl"
audience: ["developer","platform-engineer"]
last_updated: 2026-04-16
url: https://docs.nirmata.io/docs/ai/nctl-ai/extend/
---


> **Applies to:** nctl 4.0 and later

## Extending with MCP Servers

The Model Context Protocol (MCP) lets you extend `nctl ai` by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.

### Configuration

To configure MCP servers, create a configuration file at `~/.nirmata/nctl/mcp.yaml`. To use a different path, pass `--mcp-config`:

```bash
nctl ai --mcp-config "/path/to/custom/mcp.yaml"
```

An example `~/.nirmata/nctl/mcp.yaml`:

```yaml
servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true
```

### Configuration Options

- `name`: Unique identifier for the MCP server
- `command`: Executable command to start the server (e.g., `node`, `python`, binary path)
- `args`: Array of command-line arguments passed to the server
- `env`: Environment variables required by the server (API keys, configuration values, etc.)
- `capabilities`: Defines what features the server provides:
  - `tools`: Server provides callable tools/functions
  - `prompts`: Server provides prompt templates
  - `resources`: Server provides data resources
  - `attachments`: Server can handle file attachments
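
As a sketch of how these options combine for a different runtime, a Python-based server entry might look like the following (the server name, module name, and connection string are all hypothetical placeholders, not real packages):

```yaml
servers:
  - name: postgres-tools          # hypothetical server name
    command: python
    args:
      - -m
      - mcp_postgres_server       # illustrative module name
    env:
      DATABASE_URL: postgres://user:password@localhost:5432/appdb
    capabilities:
      tools: true
      prompts: false
      resources: true
      attachments: false
```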

> **Note:** Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.
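
One way to act on this note is a quick pre-flight check from the shell. The `check` helper below is our own convenience function, and the entry-point path is the placeholder from the example config:

```shell
# Pre-flight check before registering a server in mcp.yaml.
check() {
  command -v "$1" >/dev/null 2>&1 && echo "$1: ok" || echo "$1: missing"
}
check node   # runtime for the example resend-email server
# Placeholder path from the example config; substitute your real build output.
[ -f /path/to/directory/mcp-send-email/build/index.js ] \
  && echo "entry point: ok" || echo "entry point: missing"
```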

### Common Use Cases

MCP servers can extend `nctl ai` with capabilities like:
- Sending emails and notifications
- Interacting with external APIs and services
- Accessing databases and data sources
- Integration with cloud platforms
- Custom business logic and workflows

## Adding Custom Skills

You can extend `nctl ai` with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.

### Loading Custom Skills

Use the `--skills` flag to load skills from any local directory:

```bash
nctl ai --skills "/path/to/custom-skills"
```

You can load multiple skill directories:

```bash
nctl ai --skills "/path/to/team-skills,/path/to/project-skills"
```

You can also set the `NIRMATA_AI_SKILLS` environment variable to always load your custom skills:

```bash
export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai
```

### Default Skills Directory

Skills placed in the `~/.nirmata/nctl/skills` directory are loaded automatically without requiring the `--skills` flag:

```text
~/.nirmata/nctl/skills/
  ├── kyverno-cli-tests/
  │   └── SKILL.md
  └── my-custom-skill/
      └── SKILL.md
```
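
Scaffolding a new skill is just a matter of creating a directory with a `SKILL.md` inside it (`my-custom-skill` matches the tree above and is only an example name):

```shell
# Create a skill in the default skills directory; nctl ai picks it
# up automatically, no --skills flag required.
mkdir -p ~/.nirmata/nctl/skills/my-custom-skill
cat > ~/.nirmata/nctl/skills/my-custom-skill/SKILL.md <<'EOF'
# My Custom Skill

Domain guidance for the agent goes here.
EOF
```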

### Creating a Skill File

Each skill is a directory containing a Markdown file named `SKILL.md` that holds domain knowledge, instructions, and best practices. Here's an example:

**Example: `~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md`**

````markdown
# Kyverno Tests (Unit Tests)

Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:

- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.

## Test File Organization

Organize Kyverno CLI test files in a `.kyverno-test` sub-directory of the directory that contains the policy YAML:

```text
pod-security/
  ├── disallow-privileged-containers/
  │   ├── disallow-privileged-containers.yaml
  │   └── .kyverno-test/
  │       ├── kyverno-test.yaml
  │       ├── resources.yaml
  │       └── variables.yaml
  └── other-policies/
```
````

Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
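
As a sketch of such a script, a skill might bundle a layout check like the one below. The fixture directories are illustrative; the expected layout comes from the skill above:

```shell
#!/bin/sh
# Illustrative helper a skill could bundle: flag policy directories
# that lack the .kyverno-test sub-directory. A throwaway fixture
# stands in for a real policy tree.
demo=$(mktemp -d)
mkdir -p "$demo/disallow-privileged-containers/.kyverno-test"
mkdir -p "$demo/other-policies"

for dir in "$demo"/*/; do
  if [ ! -d "${dir}.kyverno-test" ]; then
    echo "missing tests: $(basename "$dir")"
  fi
done
# prints: missing tests: other-policies
```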

### Skill Best Practices

- **Clear Structure**: Use headings and lists to organize information
- **Actionable Guidance**: Provide specific, actionable instructions
- **Examples**: Include code examples and sample outputs
- **Context**: Explain when and why to use specific approaches
- **Avoid Ambiguity**: Be explicit about requirements and expectations
- **Executable Scripts**: Include scripts that can be run locally to automate workflows
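
A skeleton that follows these practices might look like the following (the section names are suggestions, not a required schema):

````markdown
# <Skill Name>

One-paragraph summary of when this skill applies.

## Instructions

- Specific, actionable steps the agent should follow.

## Examples

```bash
# commands or sample output that illustrate the instructions
```

## Notes

- Edge cases, constraints, and expectations.
````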

### How Skills Work

When you interact with `nctl ai`, the personal agent automatically:
1. Analyzes your request to determine the relevant domain
2. Loads applicable skills from the default directory and any `--skills` paths
3. Applies the guidance and best practices from those skills
4. Provides responses aligned with your custom knowledge base

> **Note:** Skills are loaded dynamically based on context. You don't need to restart `nctl ai` after adding or modifying skill files.

## Running as an MCP Server

Run the agent as an MCP server using stdio transport (default):

```bash
nctl ai --mcp-server
```

To use the agent from Cursor, edit `~/.cursor/mcp.json`; for Claude Desktop on macOS, edit `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}
```
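
One caveat: GUI applications are often launched without your interactive shell's `PATH`, so the bare `nctl` in the `command` field may fail to resolve. Using the absolute path instead is a safe workaround:

```shell
# Print the absolute path to nctl; paste it into the "command" field
# if the bare name does not resolve in the GUI client.
command -v nctl || echo "nctl is not on this shell's PATH"
```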

You can also run the MCP server over HTTP for remote or networked setups:

```bash
nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080
```

To enable verbose logging from the MCP server (useful for debugging tool calls):

```bash
nctl ai --mcp-server -v 1
```

