Clusters

A Cluster is a collection of Host Groups. While Host Groups are typically pools of similar resources, such as VMs created from the same template, Clusters can be used to aggregate heterogeneous resources. This organization makes it possible to combine static and elastic resource pools, and to leverage the Cluster as a natural grouping for policy management. Using Nirmata, you can deploy Kubernetes as the container orchestrator for your cluster.

Kubernetes cluster

You can use an existing Kubernetes cluster with Nirmata or create a new Kubernetes cluster from scratch. Once the Kubernetes cluster is created and onboarded into Nirmata, you can monitor and manage your cluster as well as deploy applications to it.

Creating Cluster Policy

Before creating a Kubernetes cluster, you need to create a policy to specify the settings for your cluster. A cluster policy can be reused for multiple clusters and simplifies configuration of the cluster.

  1. Go to the Policies section in the left navigation and select the Cluster Policies tab.
  2. Click on Add Cluster Policy and enter a name for the policy.
  3. The policy will be created with the default settings. Click on the policy name in the table to view the details.
  4. On the policy details page, you can change the version and the cloud provider. You can also update the component settings, network plugins, add-ons and storage classes.
  5. Once the policy is created, you can use it when deploying a Kubernetes cluster.

Note: If no network plugin is specified, the default network plugin for that cloud provider will be used:

AWS: aws-vpc-cni plugin (alpha)
Azure: flannel
Other: flannel

Use existing Kubernetes cluster

To use an existing Kubernetes cluster with Nirmata:

  1. Go to the Clusters panel and click on the Add Cluster button.
  2. Select: Yes - I have already installed Kubernetes
_images/create-kubernetes-cluster-2.png
  3. Provide the cluster name and select the provider for your cluster. Leave the provider as Other in case your cluster provider is not in the list.
_images/create-kuberenetes-cluster-disc-1.png
  4. Follow the displayed instructions to install the Nirmata Kubernetes controller and click on the button confirming the installation (a command sketch follows this list).
_images/create-kuberenetes-cluster-disc-2.png
  5. Within a few seconds, the controller should connect and the cluster state will show as Connected.
_images/create-kuberenetes-cluster-disc-3.png
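
For reference, installing the controller usually amounts to applying a manifest with kubectl. The exact command and manifest are shown in the Nirmata UI; the file name below is a placeholder, not the real one:

    # Apply the controller manifest provided by the Nirmata UI (placeholder file name)
    kubectl apply -f nirmata-kube-controller.yaml

    # Verify that the controller pod is running (the namespace shown in the UI may differ)
    kubectl get pods --all-namespaces | grep -i nirmata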

Once the cluster is in Ready state, you can deploy Applications to this cluster.

Note: Only Kubernetes versions 1.8.6 and above are currently supported.

Create a new Kubernetes cluster

To create a new Kubernetes cluster from scratch using Nirmata, follow the steps below. With this option, the Kubernetes cluster will be deployed on an existing Host Group. Please ensure that the following ports are open in the Host Group:

TCP: 6443, 8080 (from master only), 2379 (for etcd), 12050 (all hosts)
UDP: 4789 (VXLAN)
ICMP
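
If your hosts run firewalld, the sketch below shows one possible way to open these ports on each host; adjust it for your own firewall or cloud security groups:

    # Open the ports required by the Kubernetes cluster (run on each host in the host group)
    sudo firewall-cmd --permanent --add-port=6443/tcp
    sudo firewall-cmd --permanent --add-port=8080/tcp
    sudo firewall-cmd --permanent --add-port=2379/tcp
    sudo firewall-cmd --permanent --add-port=12050/tcp
    sudo firewall-cmd --permanent --add-port=4789/udp
    sudo firewall-cmd --permanent --add-protocol=icmp
    sudo firewall-cmd --reload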

Create a Managed Kubernetes Cluster:

  1. Go to the Clusters panel and click on the Add Cluster button.
  2. Select: No - Install and manage Kubernetes for me
_images/create-kubernetes-cluster-ins-1.png

  3. Provide the cluster name and add the host groups that you would like to install the cluster on. Also select the cluster policy. Other fields are optional. Click on the Create cluster and start the installation button to proceed with the cluster install.

_images/create-kubernetes-cluster-ins-2.png
  4. Within a few minutes, the cluster will be deployed, the Nirmata controller should connect, and the cluster state will show as Connected.
_images/create-kubernetes-cluster-ins-3.png

Once the cluster is deployed and in Ready state, you can create Environments for this cluster to deploy your applications.
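
A quick way to sanity-check the new cluster from the cluster terminal (or any kubectl session pointed at it) is to confirm that all hosts from the host group have joined and that the system pods are running:

    # All hosts in the host group should appear and report Ready
    kubectl get nodes -o wide

    # Core components (network plugin, DNS, etc.) should be Running
    kubectl get pods -n kube-system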

Cloud Integrations

AWS

For requirements to deploy Kubernetes clusters on AWS, see the Host Groups section.

vSphere

Prior to deploying Kubernetes clusters on vSphere, see requirements.

Note: For vSphere storage to work, you need to enable the disk.EnableUUID option when creating the VM template. For instructions, see the FAQs.
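
If you manage the template with VMware's govc CLI, one possible way to set this option is shown below; <template-vm> is a placeholder for the inventory path of your template VM, and you can equally set disk.EnableUUID through the vSphere UI:

    # Enable disk UUIDs on the template VM (required for vSphere volumes)
    govc vm.change -vm <template-vm> -e disk.enableUUID=TRUE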

High Availability Cluster

For a high availability cluster, you will need to do the following prior to creating the cluster:

  1. Identify master nodes in your host group by adding the following label on each host that the master components should be deployed on:

    key: nirmata.io/cluster.role
    value: control-plane

  2. You will need to set up a network load balancer (e.g. nginx). The load balancer should be configured in SSL passthrough mode to pass the SSL traffic it receives on to the API servers.

    e.g. For nginx: Use TCP load balancing as described here

     stream {
        upstream apiserver {
            server <apiserver-1-ip-address>:6443;
            server <apiserver-2-ip-address>:6443;
            server <apiserver-3-ip-address>:6443;
        }
    
        server {
            listen 443;
            proxy_pass apiserver;
        }
    }
    
  3. Use the host name of the server running nginx as the Endpoint when creating the cluster (e.g. https://<nginx-address>).

Once the cluster is deployed, you should be able to connect to it by launching the terminal and checking that the worker nodes have connected (kubectl get nodes).
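
As a rough check that the load balancer is forwarding traffic correctly, you can probe the API server health endpoint through it and then list the nodes (clusters commonly use self-signed certificates, hence -k):

    # The request should reach one of the API servers behind nginx
    curl -k https://<nginx-address>/healthz

    # From the cluster terminal, confirm that the worker nodes have registered
    kubectl get nodes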

Monitor Kubernetes cluster

Once the cluster is in Ready state, Nirmata collects and displays all the information necessary to monitor the cluster. To view cluster information, click on the cluster card. For each cluster, the following information is displayed:

  1. Cluster Availability
  2. Alarms
  3. User Activity
  4. Statistics
  5. Node State
  6. Cluster Objects such as Pods, Volumes, Namespaces etc.

Additional information for each of the above can be found by clicking on the respective link/panel.
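
Much of this information can also be cross-checked from the cluster terminal with standard kubectl commands, for example:

    # Nodes and their conditions
    kubectl get nodes

    # Cluster objects such as pods, namespaces, and volumes
    kubectl get pods --all-namespaces
    kubectl get namespaces
    kubectl get pv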

Manage Pods

You can view all the pods deployed in the cluster along with their state. You can also perform the following operations on each pod:

  1. View Details - Lets you view the pod YAML
  2. View Log - Lets you view the logs for any container in the pod
  3. Launch Terminal - Lets you launch the terminal on any container in the pod to execute commands
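
If you prefer the CLI, the same operations map roughly to the following kubectl commands (the pod, namespace, and container names below are placeholders):

    # View Details: the pod YAML
    kubectl get pod <pod-name> -n <namespace> -o yaml

    # View Log: logs for a specific container in the pod
    kubectl logs <pod-name> -n <namespace> -c <container-name>

    # Launch Terminal: an interactive shell in a container
    kubectl exec -it <pod-name> -n <namespace> -c <container-name> -- /bin/sh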

Other Operations

You can also perform other operations on the cluster:

  1. Scale up or down the cluster
  2. Upgrade the cluster
  3. Launch Terminal to connect to the cluster via kubectl CLI
  4. Apply any YAML to the cluster
  5. Download the kubeconfig for the cluster
  6. Download the Nirmata controller yaml
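
For example, after downloading the kubeconfig you can point kubectl at the cluster directly; the file and manifest names below are placeholders:

    # Use the downloaded kubeconfig to talk to the cluster
    kubectl --kubeconfig ./kubeconfig get nodes

    # Apply any YAML to the cluster
    kubectl --kubeconfig ./kubeconfig apply -f <your-manifest>.yaml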

Delete Kubernetes Cluster

To delete your Kubernetes cluster, click on the Delete Cluster menu option. The cluster will then shut down and all the cluster components will be removed. Once the cluster is deleted, you can shut down your VMs.

Note: Reusing the VMs/hosts to deploy another cluster is not recommended, as settings from the previous installation (e.g. data, NAT rules, etc.) may remain on the VMs. You should shut down the VMs and deploy new VMs for your cluster.