Extending Nirmata Assistant
Applies to: nctl 4.0 and later
Extending with MCP Servers
The Model Context Protocol (MCP) allows you to extend nctl ai with additional capabilities by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.
Configuration
To configure MCP servers, create a configuration file at ~/.nirmata/nctl/mcp.yaml. To use a different path, pass --mcp-config:
nctl ai --mcp-config "/path/to/custom/mcp.yaml"
An example ~/.nirmata/nctl/mcp.yaml:
servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true
Configuration Options
- name: Unique identifier for the MCP server
- command: Executable command to start the server (e.g., node, python, or a binary path)
- args: Array of command-line arguments passed to the server
- env: Environment variables required by the server (API keys, configuration values, etc.)
- capabilities: Defines what features the server provides:
  - tools: Server provides callable tools/functions
  - prompts: Server provides prompt templates
  - resources: Server provides data resources
  - attachments: Server can handle file attachments
Note: Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.
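Before adding an entry to mcp.yaml, you can sanity-check it from a shell. A minimal sketch, mirroring the example configuration above (CMD and ENTRYPOINT are illustrative; adjust them to your server):

```shell
# Sanity-check an MCP server entry before adding it to mcp.yaml.
# CMD and ENTRYPOINT mirror the example config above; adjust to your server.
CMD=node
ENTRYPOINT=/path/to/directory/mcp-send-email/build/index.js

if command -v "$CMD" >/dev/null 2>&1; then
  echo "runtime found: $(command -v "$CMD")"
else
  echo "runtime missing: '$CMD' is not on PATH" >&2
fi

if [ -f "$ENTRYPOINT" ]; then
  echo "entrypoint found: $ENTRYPOINT"
else
  echo "entrypoint missing: $ENTRYPOINT" >&2
fi
```

If both checks pass, you can also start the server directly (e.g., `node /path/to/.../index.js`) to confirm it launches before wiring it into nctl ai.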
Common Use Cases
MCP servers can extend nctl ai with capabilities like:
- Sending emails and notifications
- Interacting with external APIs and services
- Accessing databases and data sources
- Integration with cloud platforms
- Custom business logic and workflows
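Because `servers:` is a list, a single mcp.yaml can register several servers at once. A sketch with two servers (the second server, its command, and its environment variables are hypothetical examples, not shipped components):

```yaml
servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
    capabilities:
      tools: true
  - name: postgres-query        # hypothetical database server
    command: python
    args:
      - /path/to/mcp-postgres/server.py
    env:
      DATABASE_URL: postgres://user:pass@localhost:5432/app
    capabilities:
      tools: true
      resources: true
```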
Adding Custom Skills
You can extend nctl ai with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.
Loading Custom Skills
Use the --skills flag to load skills from any local directory:
nctl ai --skills "/path/to/custom-skills"
You can load multiple skill directories:
nctl ai --skills "/path/to/team-skills,/path/to/project-skills"
You can also set the NIRMATA_AI_SKILLS environment variable to always load your custom skills:
export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai
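To make this persist across sessions, append the export to your shell profile. A sketch, assuming a bash profile and an illustrative skills path:

```shell
# Persist the skills path for future sessions.
# ~/.bashrc and the directory are illustrative; use your shell's
# profile file and the real path to your skills.
PROFILE="$HOME/.bashrc"
echo 'export NIRMATA_AI_SKILLS="$HOME/custom-skills"' >> "$PROFILE"
```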
Default Skills Directory
Skills placed in the ~/.nirmata/nctl/skills directory are loaded automatically without requiring the --skills flag:
~/.nirmata/nctl/skills/
├── kyverno-cli-tests/
│   └── SKILL.md
└── my-custom-skill/
    └── SKILL.md
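A quick way to scaffold a new skill in the default directory (the skill name and SKILL.md contents below are placeholders):

```shell
# Create a skill directory and a starter SKILL.md in the default location.
# "my-custom-skill" and the file contents are placeholders.
SKILL_DIR="$HOME/.nirmata/nctl/skills/my-custom-skill"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
# My Custom Skill
- Describe the domain, instructions, and best practices here.
EOF
```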
Creating a Skill File
Each skill is a Markdown file (named SKILL.md) containing domain knowledge, instructions, and best practices. Here’s an example:
Example: ~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md
# Kyverno Tests (Unit Tests)
Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:
- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.
## Test File Organization
Organize Kyverno CLI test files in a `.kyverno-test` sub-directory inside the directory that contains the policy YAML.
```text
pod-security/
├── disallow-privileged-containers/
│   ├── disallow-privileged-containers.yaml
│   └── .kyverno-test/
│       ├── kyverno-test.yaml
│       ├── resources.yaml
│       └── variables.yaml
└── other-policies/
```
Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
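For example, a skill like the one above might ship a small validation script. A sketch (the policy layout it builds and checks is illustrative; a demo tree is created in a temp directory so the check has something to run against):

```shell
# Sketch of a helper script a skill could ship: flag policy directories
# that lack a .kyverno-test/kyverno-test.yaml file.
# Build a small demo layout in a temp dir to run the check against.
workdir=$(mktemp -d)
mkdir -p "$workdir/pod-security/disallow-privileged-containers/.kyverno-test"
touch "$workdir/pod-security/disallow-privileged-containers/.kyverno-test/kyverno-test.yaml"
mkdir -p "$workdir/pod-security/require-non-root-user"   # no tests yet

for policy_dir in "$workdir"/pod-security/*/; do
  if [ ! -f "${policy_dir}.kyverno-test/kyverno-test.yaml" ]; then
    echo "missing tests: $(basename "$policy_dir")"
  fi
done
# prints: missing tests: require-non-root-user
```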
Skill Best Practices
- Clear Structure: Use headings and lists to organize information
- Actionable Guidance: Provide specific, actionable instructions
- Examples: Include code examples and sample outputs
- Context: Explain when and why to use specific approaches
- Avoid Ambiguity: Be explicit about requirements and expectations
- Executable Scripts: Include scripts that can be run locally to automate workflows
How Skills Work
When you interact with nctl ai, the personal agent automatically:
- Analyzes your request to determine the relevant domain
- Loads applicable skills from the default directory and any --skills paths
- Applies the guidance and best practices from those skills
- Provides responses aligned with your custom knowledge base
Note: Skills are loaded dynamically based on context. You don’t need to restart nctl ai after adding or modifying skill files.
Running as an MCP Server
Run the agent as an MCP server using stdio transport (default):
nctl ai --mcp-server
For Cursor and Claude Desktop, edit ~/.cursor/mcp.json or ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}
You can also run the MCP server over HTTP for remote or networked setups:
nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080
To enable verbose logging from the MCP server (useful for debugging tool calls):
nctl ai --mcp-server -v 1