Kubernetes Clusters

With Nirmata, you can easily deploy and operate Kubernetes clusters on any cloud. You can compose a cluster from Host Groups. This flexible composition allows you to use heterogeneous resources, and even different pricing strategies, to provide resources for your cluster.

_images/concepts-clusters.png

Nirmata can also easily discover existing Kubernetes clusters to provide complete visibility and management. This capability allows you to use managed Kubernetes services from cloud providers (EKS, AKS, GKE, etc.) to create the clusters, and use Nirmata for policy-based workload management.

Cluster Policies

Before creating a Kubernetes cluster, you need to configure a policy to specify the settings for your cluster. A cluster policy can be reused for multiple clusters and simplifies configuration of the cluster.

Nirmata provides default policies for all major cloud providers. You can use these as is, or customize them to your needs.

To configure a Cluster Policy:

  1. Go to the Policies section in the left navigation and select the Cluster Policies tab.
  2. Click on the Add Cluster Policy button and enter a name.
  3. The policy will be created with the default settings. Click on the policy name in the table to view the details.
  4. On the policy details page, you can change the version and the cloud provider. You can also update the component settings, network plugins, add-ons, and storage classes.
  5. Once the policy is created, you can use it when deploying a Kubernetes cluster.

Note: If no network plugin is specified, the default network plugin for the cloud provider will be used:

AWS: aws-vpc-cni plugin (alpha)
Azure: flannel
Other: flannel

Notes:

When using a self-signed certificate for Nirmata PE, you will need to use an insecure connection for the Nirmata controller. This can be done by setting the isInsecure option in the Controller section to true.

To use an HTTP/HTTPS proxy for your Kubernetes cluster components, update the settings in the Proxy Settings section of the policy. These settings are only used by the apiserver, controller-manager, and kubelet. Proxy settings are usually required when deploying Kubernetes on a cloud (e.g. AWS, Azure), since the Kubernetes components need to access the cloud provider APIs. You will need to specify the IP addresses/CIDRs for all the nodes in the Kubernetes cluster in the No Proxy settings.
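
As an illustration, the values entered in the Proxy Settings section typically mirror the standard proxy environment variables. The proxy address and CIDRs below are placeholders, not defaults; substitute the values for your environment:

HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
# No Proxy should include the node CIDRs plus local and metadata addresses
NO_PROXY=10.10.0.0/16,192.168.0.0/16,127.0.0.1,localhost,169.254.169.254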

Manage an existing Kubernetes cluster

To manage an existing Kubernetes cluster with Nirmata:

  1. Go to the Clusters panel and click on the Add Cluster button.
  2. Select: Yes - I have already installed Kubernetes
_images/create-kubernetes-cluster-2.png
  3. Provide the cluster name and select the provider for your cluster. Leave the provider as Other if your cluster provider is not in the list.
_images/create-kuberenetes-cluster-disc-1.png
  4. Follow the displayed instructions to install the Nirmata Kubernetes controller (a sketch using kubectl is shown after this list), and click the button confirming the installation.
_images/create-kuberenetes-cluster-disc-2.png
  5. Within a few seconds, the controller should connect and the cluster state will show as Connected.
_images/create-kuberenetes-cluster-disc-3.png
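
A minimal sketch of the controller installation step, assuming the controller manifest has been downloaded from Nirmata (the file name nirmata-kube-controller.yaml is illustrative; use the manifest provided in the displayed instructions):

# Apply the Nirmata controller manifest to the existing cluster
kubectl apply -f nirmata-kube-controller.yaml

# Verify the controller pod is running (the namespace depends on the manifest)
kubectl get pods --all-namespaces | grep -i nirmata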

Once the cluster is in Ready state, you can deploy Applications to this cluster.

Note: Only Kubernetes versions 1.8.6 and above are currently supported.

Create a new Kubernetes cluster

To create a new Kubernetes cluster from scratch using Nirmata, follow the steps below. With this option, the Kubernetes cluster will be deployed on an existing Host Group.

Configure firewall/security groups

Please ensure that the following ports are open in the host group:

TCP: 6443, 8080 (from master only), 2379 (for etcd), 12050 (all hosts)
UDP: 4789 (vxlan)
ICMP: all types
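
For example, on hosts that use firewalld, the ports could be opened as follows (adjust for your distribution, or configure equivalent rules in your cloud security groups):

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --permanent --add-port=2379/tcp
sudo firewall-cmd --permanent --add-port=12050/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
# ICMP is typically allowed by default in firewalld; if it has been blocked, remove the block:
# sudo firewall-cmd --permanent --remove-icmp-block-inversion
sudo firewall-cmd --reload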

Create a Managed Kubernetes Cluster:

  1. Go to the Clusters panel and click on the Add Cluster button.
  2. Select: No - Install and manage Kubernetes for me
_images/create-kubernetes-cluster-ins-1.png
  3. Provide the cluster name and add the host groups that you would like to install the cluster on. Also select the cluster policy. Other fields are optional. Click on the Create cluster and start the installation button to proceed with the cluster install.
_images/create-kubernetes-cluster-ins-2.png
  4. Within a few minutes, the cluster will be deployed, the Nirmata controller should connect, and the cluster state will show as Connected.
_images/create-kubernetes-cluster-ins-3.png

Once the cluster is deployed and in Ready state, you can create Environments for this cluster to deploy your applications.

Cloud Provider Integrations

AWS

You can use AWS as a Kubernetes cloud provider to enable EBS, ELB, and EC2 integrations. To do this, you need to configure an IAM role for Kubernetes.

Below are the permissions that need to be enabled for this role. As a best practice, you can customize the role to limit it to a subset of resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:AttachVolume",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DetachVolume",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:AttachNetworkInterface",
                "ec2:DeleteNetworkInterface",
                "ec2:DetachNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeInstances",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:AssignPrivateIpAddresses"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "tag:TagResources",
            "Resource": "*"
        }
    ]
}

To create an AWS Host Group, see the Host Groups section.
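
As a sketch, the role could be created with the AWS CLI. The names and file paths below are illustrative: k8s-cloud-provider for the role, ec2-trust.json holding a standard EC2 trust policy, and k8s-cloud-provider-policy.json holding the policy shown above.

# Create the role with an EC2 trust policy and attach the permissions above
aws iam create-role --role-name k8s-cloud-provider \
    --assume-role-policy-document file://ec2-trust.json
aws iam put-role-policy --role-name k8s-cloud-provider \
    --policy-name k8s-cloud-provider-policy \
    --policy-document file://k8s-cloud-provider-policy.json

# Make the role available to EC2 instances via an instance profile
aws iam create-instance-profile --instance-profile-name k8s-cloud-provider
aws iam add-role-to-instance-profile --instance-profile-name k8s-cloud-provider \
    --role-name k8s-cloud-provider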

vSphere

Prior to deploying Kubernetes clusters on vSphere, see requirements.

Note: For vSphere storage to work, you need to enable the disk.EnableUUID option when creating the VM template. For instructions, see the FAQs.
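
For example, with the govc CLI the option can be set on the template VM as an ExtraConfig entry (the VM path is a placeholder; this is one way to set the option, not the only one):

# Set disk.EnableUUID on the template VM (requires GOVC_URL and credentials to be configured)
govc vm.change -vm /datacenter/vm/k8s-template -e disk.enableUUID=TRUE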

High Availability (HA) Clusters

For a high availability cluster, you will need to do the following prior to creating the cluster:

  1. Identify the master nodes in your host group by adding the following label on each host that the master components should be deployed on:

    key: nirmata.io/cluster.role
    value: control-plane

  2. You will need to set up a network load balancer (e.g. nginx). The load balancer should be set up in SSL passthrough mode to pass SSL traffic received at the load balancer on to the API servers.

    e.g. For nginx, use TCP load balancing as described here:

    stream {
        upstream apiserver {
            server <apiserver-1-ip-address>:6443;
            server <apiserver-2-ip-address>:6443;
            server <apiserver-3-ip-address>:6443;
        }

        server {
            listen 443;
            proxy_pass apiserver;
        }
    }
    
  3. Use the host name of the server running nginx as the Endpoint when creating the cluster (e.g. https://<nginx-address>).

Once the cluster is deployed, you should be able to connect to the cluster by launching the terminal and checking whether the worker nodes have connected (kubectl get nodes).
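
As an optional sanity check from a machine that can reach the load balancer, you can confirm that the endpoint forwards traffic to the API servers; the /version endpoint is usually served without authentication, though this can vary with cluster settings:

# Check the API server through the load balancer; expect a JSON version response
curl -k https://<nginx-address>/version

# From the cluster terminal, confirm the worker nodes have joined
kubectl get nodes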

Monitoring a Kubernetes cluster

Once the cluster is in Ready state, Nirmata collects and displays all the information necessary to monitor the cluster. To view cluster information, click on the cluster card. For each cluster, the following information is displayed:

  1. Cluster Availability
  2. Alarms
  3. User Activity
  4. Statistics
  5. Node State
  6. Cluster Objects such as Pods, Volumes, Namespaces etc.

Additional information for each of the above can be found by clicking on the respective link/panel.

Managing Pods

You can view all the pods deployed in the cluster along with their state. You can also perform the following operations on each pod:

  1. View Details - Lets you view the pod YAML
  2. View Log - Lets you view the logs for any container in the pod
  3. Launch Terminal - Lets you launch the terminal on any container in the pod to execute commands
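
For reference, the same pod operations can be performed with kubectl (for example via the Launch Terminal option or a downloaded kubeconfig); the pod, container, and namespace names below are placeholders:

kubectl get pod <pod-name> -n <namespace> -o yaml                  # view the pod YAML
kubectl logs <pod-name> -n <namespace> -c <container>              # view logs for a container
kubectl exec -it <pod-name> -n <namespace> -c <container> -- sh    # run commands in a container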

Other Cluster Operations

You can also perform other operations on the cluster:

  1. Scale up or down the cluster
  2. Upgrade the cluster
  3. Launch Terminal to connect to the cluster via kubectl CLI
  4. Apply any YAML to the cluster
  5. Download the kubeconfig for the cluster
  6. Download the Nirmata controller yaml
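
For example, after downloading the kubeconfig, the cluster can be managed from a local machine (the file and manifest names below are illustrative):

# Point kubectl at the downloaded kubeconfig
export KUBECONFIG=$PWD/my-cluster-kubeconfig.yaml

kubectl get nodes              # verify cluster access
kubectl apply -f my-app.yaml   # apply any YAML to the cluster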

Deleting a Kubernetes Cluster

To delete your Kubernetes cluster, just click on the Delete Cluster menu option. The cluster will then shut down and all the cluster components will be removed. For Nirmata managed Host Groups, the cloud instances will automatically be deleted for you. For Direct Connect Host Groups, you can manually delete your VMs once the cluster is deleted.

Note: It is not recommended to reuse the VMs/hosts to deploy another cluster, to avoid any installation-related settings that remain on the VMs (e.g. data, NAT rules, etc.). You should shut down the VMs and deploy new VMs for your cluster.

To manually clean up your hosts/VMs, run these commands on each host:

# Stop and remove any running containers
sudo docker stop $(sudo docker ps | grep "flannel" | gawk '{print $1}')
sudo docker stop $(sudo docker ps | grep "nirmata" | gawk '{print $1}')

sudo docker stop $(sudo docker ps | grep "kube" | gawk '{print $1}')
sudo docker rm  $(sudo docker ps -a | grep "Exit" |gawk '{print $1}')

# Remove any cni plugins
sudo rm -rf /etc/cni/*
sudo rm -rf /opt/cni/*

# Clear IP Tables
sudo iptables --flush
sudo iptables -t nat --flush

# Restart docker
sudo systemctl stop docker
sudo systemctl start docker
sudo docker ps

# Delete the cni and flannel interfaces
sudo ifconfig cni0 down
sudo brctl delbr cni0
sudo ifconfig flannel.1 down
sudo ip link delete cni0
sudo ip link delete flannel.1

# Remove cluster database
sudo rm -rf /data