---
title: "AI Provider Configuration"
description: "Configure nctl ai to use Nirmata, Anthropic Claude, Google Gemini, Azure OpenAI, or Amazon Bedrock as the AI backend."
diataxis: reference
applies_to:
  product: "nctl"
audience: ["developer","platform-engineer"]
last_updated: 2026-04-16
url: https://docs.nirmata.io/docs/ai/nctl-ai/providers/
---
> **Applies to:** nctl 4.0 and later

By default, `nctl ai` uses Nirmata Control Hub as its AI provider. You can configure it to work with other AI providers using the `--provider` flag.

## Nirmata (Default)

The default provider uses Nirmata Control Hub for AI services. Authentication is handled by `nctl login`; no additional setup is needed.

```bash
nctl ai --prompt "generate a policy to require pod labels"
```

## Anthropic Claude

Set the environment variable with your Anthropic API key:

```bash
export ANTHROPIC_API_KEY=<your-api-key>
```

```bash
nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."
```

Get your API key from [Anthropic Console](https://console.anthropic.com/). By default, `nctl ai` uses the latest Claude models.
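A missing or empty key only surfaces as an authentication error at request time. Wrapper scripts can fail fast instead; this is a hedged sketch (the `require_env` helper is illustrative, not part of nctl):

```shell
# require_env: fail fast if any named environment variable is unset or empty.
# Hypothetical helper for wrapper scripts; not part of nctl.
require_env() {
  local v
  for v in "$@"; do
    if [ -z "${!v:-}" ]; then
      echo "error: $v is not set" >&2
      return 1
    fi
  done
}

# Usage:
# require_env ANTHROPIC_API_KEY && nctl ai --provider anthropic --prompt "..."
```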

## Google Gemini

Set the environment variable with your Google AI API key:

```bash
export GEMINI_API_KEY=<your-api-key>
```

```bash
nctl ai --provider gemini --prompt "what is 5+5? answer in one word"
```

- Environment variable is `GEMINI_API_KEY` (not `GOOGLE_API_KEY`)
- Default model: `gemini-2.5-pro`
- Free tier rate limit: approximately 2 requests per minute
- Get your API key from [Google AI Studio](https://ai.google.dev/)
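Because of the free-tier limit above, back-to-back prompts can be throttled. One way to stay under the limit is to pace requests from a wrapper function; this is a hedged sketch (the `run_gemini_paced` helper and its 30-second default delay are illustrative, not part of nctl):

```shell
# run_gemini_paced: send each prompt through the gemini provider, sleeping
# between calls to stay under the free tier's ~2 requests/minute limit.
# Hypothetical wrapper; adjust the delay to your actual quota.
run_gemini_paced() {
  local delay="${GEMINI_DELAY_SECONDS:-30}"
  local prompt
  for prompt in "$@"; do
    nctl ai --provider gemini --prompt "$prompt"
    sleep "$delay"
  done
}

# Usage:
# run_gemini_paced "what is a pod?" "what is a deployment?"
```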

## Azure OpenAI

Set the environment variables with your Azure OpenAI endpoint and API key:

```bash
export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"
```

```bash
nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"
```

- Requires both endpoint URL and API key
- You must specify the model with `--model` (e.g., `gpt-4o`, `gpt-4`, `gpt-35-turbo`)
- Get your credentials from [Azure Portal](https://portal.azure.com/)
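A malformed endpoint URL is a common source of connection errors. The check below is a hedged sketch (the `check_azure_endpoint` helper is illustrative; it assumes the usual `https://<resource>.openai.azure.com/` form and does not apply when the endpoint points at a proxy):

```shell
# check_azure_endpoint: warn early if AZURE_OPENAI_ENDPOINT does not look
# like a direct Azure OpenAI endpoint. Hypothetical helper, not part of nctl.
check_azure_endpoint() {
  case "${AZURE_OPENAI_ENDPOINT:-}" in
    https://*.openai.azure.com|https://*.openai.azure.com/*)
      echo "endpoint format looks OK" ;;
    "")
      echo "error: AZURE_OPENAI_ENDPOINT is not set" >&2
      return 1 ;;
    *)
      echo "warning: unexpected endpoint format: $AZURE_OPENAI_ENDPOINT" >&2
      return 1 ;;
  esac
}
```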

## Amazon Bedrock

Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.

**Step 1:** Log in to AWS SSO (if using SSO):

```bash
aws sso login --profile your-profile-name
```

**Step 2:** Set your AWS profile:

```bash
export AWS_PROFILE=your-profile-name
```

**Step 3:** Verify credentials:

```bash
aws sts get-caller-identity
```

Expected output:

```json
{
    "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}
```
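When credentials are missing or expired (common with SSO sessions), the call above fails. A hedged wrapper sketch that scripts can use before invoking `nctl ai` (the `verify_aws_identity` helper is illustrative, not part of nctl):

```shell
# verify_aws_identity: fail fast if AWS credentials are missing or expired.
# Hypothetical helper for wrapper scripts; not part of nctl.
verify_aws_identity() {
  if aws sts get-caller-identity >/dev/null 2>&1; then
    echo "AWS credentials OK (profile: ${AWS_PROFILE:-default})"
  else
    echo "AWS credentials invalid or expired; try 'aws sso login'" >&2
    return 1
  fi
}
```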

**Usage:**

```bash
nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"
```

- Model IDs must start with `us.` prefix (e.g., `us.anthropic.claude-...`). Without the prefix, you'll get an "on-demand throughput isn't supported" error.
- Supports Anthropic Claude models available through Bedrock
- For more information, see [Amazon Bedrock Documentation](https://docs.aws.amazon.com/bedrock/)
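The `us.` prefix requirement above is easy to forget when copying model IDs from the Bedrock console. A hedged convenience sketch (the `ensure_us_prefix` helper is illustrative, not part of nctl):

```shell
# ensure_us_prefix: prepend "us." to a Bedrock model ID if it is missing,
# since nctl ai requires the prefixed form (see the note above).
# Hypothetical helper, not part of nctl.
ensure_us_prefix() {
  case "$1" in
    us.*) printf '%s\n' "$1" ;;
    *)    printf 'us.%s\n' "$1" ;;
  esac
}

# Usage:
# model="$(ensure_us_prefix anthropic.claude-sonnet-4-5-20250929-v1:0)"
# nctl ai --provider bedrock --model "$model" --prompt "Your prompt here"
```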

## Provider Comparison

| Provider | Environment Variables | Model Selection | Notes |
|----------|----------------------|-----------------|-------|
| Nirmata (default) | Authentication via `nctl login` | Automatic | Includes access to Nirmata platform features |
| Anthropic | `ANTHROPIC_API_KEY` | Automatic | Best for Claude-specific features |
| Google Gemini | `GEMINI_API_KEY` | Default: gemini-2.5-pro | Free tier available with rate limits |
| Azure OpenAI | `AZURE_OPENAI_ENDPOINT`<br>`AZURE_OPENAI_API_KEY` | Required via `--model` | Enterprise-ready with Azure integration |
| Amazon Bedrock | `AWS_PROFILE` (or AWS credentials) | Required via `--model` | AWS-native with IAM authentication |

## Usage Details

To view token consumption for the current session and exit without running any prompts:

```bash
nctl ai --usage-details
```

This prints per-provider token usage stats, which is useful for monitoring costs across runs.

## Using AI/LLM Proxies

You can configure `nctl ai` to route requests through AI/LLM proxy services. This is useful for:
- Centralizing API key management
- Implementing rate limiting and cost controls
- Adding observability and monitoring
- Load balancing across multiple providers
- Using self-hosted AI gateways

Each provider supports proxy configuration through a base URL environment variable:

**Anthropic with Proxy:**

```bash
export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000

nctl ai --provider anthropic --prompt "Your prompt here"
```

**Google Gemini with Proxy:**

```bash
export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000

nctl ai --provider gemini --prompt "Your prompt here"
```

**Azure OpenAI with Proxy:**

```bash
export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000

nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"
```

- The proxy must be compatible with the provider's API format
- Popular proxy solutions include [LiteLLM](https://github.com/BerriAI/litellm), [OpenLLM](https://github.com/bentoml/OpenLLM), and enterprise gateways
- The base URL should include the protocol (http:// or https://) and port if needed
- When using a proxy, set `AZURE_OPENAI_ENDPOINT` to your proxy URL instead of your Azure endpoint
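Before pointing `nctl ai` at a proxy, it can help to confirm the base URL is reachable at all. A hedged sketch (the `check_proxy` helper is illustrative and only tests basic HTTP reachability, not API compatibility):

```shell
# check_proxy: verify the proxy base URL answers HTTP within 5 seconds.
# Hypothetical helper; a successful response only proves reachability,
# not that the proxy speaks the provider's API format.
check_proxy() {
  local url="${1:?usage: check_proxy <base-url>}"
  if curl -fsS --max-time 5 -o /dev/null "$url"; then
    echo "proxy reachable: $url"
  else
    echo "proxy unreachable: $url" >&2
    return 1
  fi
}

# Usage:
# check_proxy "$ANTHROPIC_BASE_URL" && nctl ai --provider anthropic --prompt "..."
```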

