AI Provider Configuration
Applies to: nctl 4.0 and later
By default, nctl ai uses Nirmata Control Hub as its AI provider. You can configure it to work with other AI providers using the --provider flag.
Nirmata (Default)
The default provider uses Nirmata Control Hub for AI services. Authentication happens through nctl login; no additional setup is needed.
```shell
nctl ai --prompt "generate a policy to require pod labels"
```
Anthropic Claude
Set the environment variable with your Anthropic API key:
```shell
export ANTHROPIC_API_KEY=<your-api-key>

nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."
```
Get your API key from the Anthropic Console. The Anthropic provider uses Claude's latest models by default.
Google Gemini
Set the environment variable with your Google AI API key:
```shell
export GEMINI_API_KEY=<your-api-key>

nctl ai --provider gemini --prompt "what is 5+5? answer in one word"
```
- Environment variable is `GEMINI_API_KEY` (not `GOOGLE_API_KEY`)
- Default model: `gemini-2.5-pro`
- Free tier rate limit: approximately 2 requests per minute
- Get your API key from Google AI Studio
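Because the free tier allows only about 2 requests per minute, batch runs benefit from spacing calls out. A minimal sketch; the `gemini_prompt` helper and `GEMINI_GAP` variable are invented here for illustration and are not part of nctl:

```shell
# Hypothetical wrapper: add a fixed delay after each Gemini request so
# batch runs stay under the free tier's ~2 requests/minute limit.
GEMINI_GAP="${GEMINI_GAP:-30}"   # seconds to wait after each request

gemini_prompt() {
  nctl ai --provider gemini --prompt "$1"
  sleep "$GEMINI_GAP"
}

# Usage:
#   gemini_prompt "what is 5+5? answer in one word"
#   gemini_prompt "what is a pod? answer in one sentence"
```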
Azure OpenAI
Set the environment variables with your Azure OpenAI endpoint and API key:
```shell
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"

nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"
```
- Requires both endpoint URL and API key
- You must specify the model with `--model` (e.g., `gpt-4o`, `gpt-4`, `gpt-35-turbo`)
- Get your credentials from the Azure Portal
Amazon Bedrock
Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.
Step 1: Login to AWS SSO (if using SSO):
```shell
aws sso login --profile your-profile-name
```
Step 2: Set your AWS profile:
```shell
export AWS_PROFILE=your-profile-name
```
Step 3: Verify credentials:
```shell
aws sts get-caller-identity
```
Expected output:
```json
{
  "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
  "Account": "123456789012",
  "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}
```
Usage:
```shell
nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"
```
- Model IDs must start with the `us.` prefix (e.g., `us.anthropic.claude-...`). Without the prefix, you'll get an "on-demand throughput isn't supported" error.
- Supports Claude models from Anthropic available through Bedrock
- For more information, see Amazon Bedrock Documentation
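The `us.` prefix requirement can be caught before a call fails. A minimal sketch; the `check_bedrock_model` helper is invented here and is not part of nctl or the AWS CLI:

```shell
# Hypothetical pre-flight check: warn when a Bedrock model ID lacks the
# "us." prefix that nctl expects, before spending a real API call.
check_bedrock_model() {
  case "$1" in
    us.*) echo "ok: $1" ;;
    *)    echo "warning: '$1' lacks the us. prefix; expect an on-demand throughput error" ;;
  esac
}

check_bedrock_model "us.anthropic.claude-sonnet-4-5-20250929-v1:0"
check_bedrock_model "anthropic.claude-sonnet-4-5-20250929-v1:0"
```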
Provider Comparison
| Provider | Environment Variables | Model Selection | Notes |
|---|---|---|---|
| Nirmata (default) | Authentication via nctl login | Automatic | Includes access to Nirmata platform features |
| Anthropic | ANTHROPIC_API_KEY | Automatic | Best for Claude-specific features |
| Google Gemini | GEMINI_API_KEY | Default: gemini-2.5-pro | Free tier available with rate limits |
| Azure OpenAI | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY | Required via --model | Enterprise-ready with Azure integration |
| Amazon Bedrock | AWS_PROFILE (or AWS credentials) | Required via --model | AWS-native with IAM authentication |
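The table's environment-variable requirements can be verified before invoking nctl. A minimal sketch; `check_provider_env` is an invented helper, not an nctl feature:

```shell
# Hypothetical helper: report which environment variables the chosen
# provider still needs, per the comparison table above.
check_provider_env() {
  case "$1" in
    anthropic) required="ANTHROPIC_API_KEY" ;;
    gemini)    required="GEMINI_API_KEY" ;;
    azopenai)  required="AZURE_OPENAI_ENDPOINT AZURE_OPENAI_API_KEY" ;;
    bedrock)   required="AWS_PROFILE" ;;
    *)         required="" ;;   # Nirmata default: nctl login, no env vars
  esac
  for var in $required; do
    eval "val=\${$var:-}"      # indirect lookup of each required variable
    [ -n "$val" ] || echo "missing: $var"
  done
}

check_provider_env azopenai
```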
Usage Details
To view token consumption for the current session and exit without running any prompts:
```shell
nctl ai --usage-details
```
This prints per-provider token usage stats, which is useful for monitoring costs across runs.
Using AI/LLM Proxies
You can configure nctl ai to route requests through AI/LLM proxy services. This is useful for:
- Centralizing API key management
- Implementing rate limiting and cost controls
- Adding observability and monitoring
- Load balancing across multiple providers
- Using self-hosted AI gateways
Each provider supports proxy configuration through a base URL environment variable:
Anthropic with Proxy:
```shell
export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000

nctl ai --provider anthropic --prompt "Your prompt here"
```
Google Gemini with Proxy:
```shell
export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000

nctl ai --provider gemini --prompt "Your prompt here"
```
Azure OpenAI with Proxy:
```shell
export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000

nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"
```
- The proxy must be compatible with the provider’s API format
- Popular proxy solutions include LiteLLM, OpenLLM, and enterprise gateways
- The base URL should include the protocol (http:// or https://) and port if needed
- When using a proxy, set `AZURE_OPENAI_ENDPOINT` to your proxy URL instead of your Azure endpoint
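A missing protocol in the base URL is a common misconfiguration. A minimal sketch of a sanity check; `check_base_url` is an invented helper, not part of nctl:

```shell
# Hypothetical check: a proxy base URL must start with http:// or https://
# (and include the port if the proxy is not on 80/443).
check_base_url() {
  case "$1" in
    http://*|https://*) echo "ok: $1" ;;
    *) echo "error: '$1' must start with http:// or https://" ;;
  esac
}

check_base_url "http://your-proxy:8000"
check_base_url "your-proxy:8000"
```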