How to Install Nirmata PE

Nirmata PE is installed with nadm on a high-availability (HA) Kubernetes cluster that is set up using kubeadm.

Prerequisites

  1. Install Kubernetes
  2. Install kubeadm

Install and Check Load Balancer

To start, set up a Load Balancer for the Kubernetes API server.
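
As an example, a minimal HAProxy configuration that passes TCP traffic through to the API servers might look like the following. This is an illustrative fragment to append to an existing haproxy.cfg; the server hostnames are examples based on the hosts used in this guide and should be replaced with your master nodes:

# Illustrative fragment for /etc/haproxy/haproxy.cfg
frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master0 nadmtest10.lab.nirmata.io:6443 check
    server master1 nadmtest20.lab.nirmata.io:6443 check
    server master2 nadmtest30.lab.nirmata.io:6443 check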

Next, ensure that the Load Balancer is accessible from all hosts. Verify accessibility with nc and curl using the Check Load Balancer commands.

Check Load Balancer Command (nc):

root@nadmtest30:~# nc -v haproxy0.lab.nirmata.io 6443

Check Load Balancer Command (curl):

root@nadmtest30:~# curl -k https://haproxy0.lab.nirmata.io:6443

If the Load Balancer is accessible, the nc command reports a successful connection:

Connection to haproxy0.lab.nirmata.io 6443 port [tcp/*] succeeded!

If the Load Balancer is reachable but no API server has been started behind it yet (expected before the first master is initialized), the curl command returns a 503 Service Unavailable response:

<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.

Install Kubernetes Components

Install Kubernetes Components using the linked instructions.

Next, disable swap using the Disable Swap command.

Disable Swap Command:

Run: sudo swapoff -a

Remove any swap entries from: /etc/fstab
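
A minimal sketch that does both steps. The sed expression assumes GNU sed and comments out uncommented lines containing a swap entry; review /etc/fstab afterwards to confirm the result:

# Turn off swap for the running system
sudo swapoff -a
# Comment out swap entries so swap stays disabled after a reboot;
# a backup of the original file is written to /etc/fstab.bak
sudo sed -i.bak '/^[^#].*\sswap\s/s/^/#/' /etc/fstab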

Configure Proxy for Docker

Next, configure a proxy for Docker. The Docker daemon uses the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in its start-up environment to configure HTTP or HTTPS proxy behavior.

To configure proxy for Docker:

  1. Create a systemd drop-in directory for the docker service:

    $ sudo mkdir -p /etc/systemd/system/docker.service.d
    
  2. Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that sets the proxy environment variables:

    [Service]
    Environment="HTTP_PROXY=http://nibr1:3128/"  "NO_PROXY=localhost,127.0.0.1,nibr1,nibr2.nibr3"
    Environment="HTTPS_PROXY=http://nibr1:3128/" "NO_PROXY=localhost,127.0.0.1,nibr1,nibr2,nibr3"
    
  3. Flush changes using the Flush Changes command.

    Flush Changes Command:

    $ sudo systemctl daemon-reload
    
  4. Restart Docker using the Restart Docker command.

    Restart Docker Command:

    $ sudo systemctl restart docker
    
  5. After Docker restarts, verify that the configuration loaded using the Verify Configuration command.

    Verify Configuration Command:

    $ systemctl show --property=Environment docker
    Environment=HTTP_PROXY=http://nibr1:3128/ NO_PROXY=localhost,127.0.0.1,nibr1,nibr2,nibr3 HTTPS_PROXY=http://nibr1:3128/
    

Install kubeadm

Install kubeadm using the linked instructions.

Configure kubeadm and Install Kubernetes on First Master

Generate a kubeadm-config.yaml on the first node with the Load Balancer and Endpoint information.

To use flannel, set the pod CIDR (the podSubnet field) as follows:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: 1.13.1
apiServer:
  certSANs:
  - "haproxy0.lab.nirmata.io"
controlPlaneEndpoint: "haproxy0.lab.nirmata.io:6443"
networking:
  podSubnet: 10.244.0.0/16

Then run the Configure command.

Configure Command:

kubeadm init --config=kubeadm-config.yaml --node-name=<FQDN>

Next, set up kubectl using the Setup command.

Setup Command:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
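
To confirm kubectl is configured correctly, list the nodes. The first master should appear, although it may report NotReady until the CNI plugin and flannel are installed in the next steps:

kubectl get nodes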

Install CNI Plugin

To install the CNI plugin, run the Install CNI Plugin command with sudo privileges.

Install CNI Plugin Command:

curl -sSf -L --retry 5 https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz | sudo tar -xz -C /opt/cni/bin
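
If /opt/cni/bin does not exist yet, create it first and then re-run the command above:

sudo mkdir -p /opt/cni/bin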

Check Pod Status

To check pod status, run the Check Pod command.

Check Pod Command:

root@nadmtest10:~# kubectl get pod -n kube-system -w

The Check Pod command should show all pods in the Running state:

NAME                                 READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-cmsq6             1/1     Running   0          9m14s
coredns-86c58d9df4-nxf92             1/1     Running   0          9m14s
etcd-nadmtest10                      1/1     Running   0          8m29s
kube-apiserver-nadmtest10            1/1     Running   0          8m21s
kube-controller-manager-nadmtest10   1/1     Running   0          8m33s
kube-flannel-ds-amd64-cwznh          1/1     Running   0          2m55s
kube-proxy-qw6d7                     1/1     Running   0          9m14s
kube-scheduler-nadmtest10            1/1     Running   0          8m46s

Install Flannel

To install flannel, run the Install Flannel command on all hosts with sudo privileges.

Install Flannel Command:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
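
The sysctl setting above does not survive a reboot. To make it persistent, one common approach is to add it to a sysctl configuration file (the file name here is an example):

echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system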

Then, from any host with kubectl configured, apply the flannel manifest:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
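
To verify that flannel started, watch for one flannel pod per node. The app=flannel label below matches the labels used in the manifest referenced above; adjust it if your manifest differs:

kubectl get pods -n kube-system -l app=flannel -o wide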

Copy Certificates

Copy the certificates from the first master to the other master nodes by running the following script, adjusting USER and CONTROL_PLANE_IPS for your environment:

USER=centos_user # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
   scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
   scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
   scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
   scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
   scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
   scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
   scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
   scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
   scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
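
Before joining the other masters, move the copied files into place on each of them. A minimal sketch, run as root, assuming the home-directory layout produced by the copy script above:

# Run on each additional master node (as root)
USER=centos_user # same value as in the copy script
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf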

Install Kubernetes on Other Nodes

To install Kubernetes on all other master nodes, run the Join command with the --experimental-control-plane argument so each node joins as a control-plane member.

Join Command:

kubeadm join <load-balancer-FQDN>:6443 --token 1234567890 --discovery-token-ca-cert-hash sha256:1234567890 --experimental-control-plane --node-name=<current-node-FQDN>
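
If the original join token has expired or was lost, a new one can be generated on the first master. The command below prints a ready-made join command, to which the --experimental-control-plane and --node-name arguments above can be appended:

kubeadm token create --print-join-command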

Untaint the Master Nodes

To untaint the master nodes, run the Untaint command.

Untaint Command:

kubectl taint nodes --all node-role.kubernetes.io/master-
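
To confirm the taint was removed, inspect a node (replace the node name with one of your masters); the Taints field should no longer list node-role.kubernetes.io/master:

kubectl describe node <master-node-FQDN> | grep Taints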

Download Nirmata

Download the Nirmata install binary and run the appropriate version.

2.5.0

curl -LO

2.5.1

curl -LO

2.6.0

curl -LO

2.6.0-134

curl -LO

Pre-Provision Disks

nadm can automatically provision the Local Storage Provisioner, but it requires that the storage directories be created in advance. Nirmata creates a storage class and uses those directories for Nirmata binaries.

On all nodes that are part of the base cluster, complete the following:

  1. Copy nadm to each node and run "tar -xvf nadm-xx".

  2. Go to the $HOME/nadm-xx directory and run ./nadm.

  3. Go to the configuration folder $HOME/.nirmata-nadm/nadm-util/script and run "sudo ./mount.sh" to create the persistent volume folders.

  4. Verify that the volumes were created by running ls -al /mnt/nirmata-disks. You should see four folders, vol1 through vol4.

Create Certificates

Create certificates by running the Create Certificates command.

Create Certificates Command:

openssl req -subj '/O=Nirmata/CN=nirmata.local/C=US' -new -newkey rsa:2048 -days 3650 -sha256 -nodes -x509 -keyout server.key -out server.crt
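
To inspect the generated self-signed certificate, for example to confirm the subject and the ten-year validity period:

openssl x509 -in server.crt -noout -subject -dates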

Install Nirmata

To install Nirmata, run nadm install on any one of the Kubernetes master nodes.

Check the status of the install from another terminal window using the Check Status command.

Check Status Command:

./nadm status -w -n <namespace>

Uninstall

To uninstall, delete the cluster.

To delete a cluster, run the kubeadm Reset command on all master nodes.

Kubeadm Reset Command:

$ kubeadm reset 

Next, run the Nirmata agent cleanup on each host in the host group using the Nirmata Cleanup command.

Nirmata Cleanup Command:

systemctl stop nirmata-agent.service && systemctl disable nirmata-agent.service && rm -rf /etc/systemd/system/nirmata-agent.service

After running the Nirmata Cleanup command, the symlink is removed and the host moves to an unknown status in the host group.

Finally, cleanup the Kubernetes cluster using the Cleanup Kubernetes Cluster command.

Cleanup Kubernetes Cluster Command:

wget https://raw.githubusercontent.com/nirmata/custom-scripts/master/cleanup-script.sh
chmod 755 cleanup-script.sh
./cleanup-script.sh