Installation Requirements

A minimal Nirmata PE installation consists of 2 VMs (6 VMs for an HA configuration), one for each instance of the architectural components described above. Each VM runs multiple containerized microservices.

For production deployments, each component can be independently sized and scaled for availability and performance.

Compute

The table below provides node counts and specifications for the Nirmata components:

Component        Counts              Specifications
Core Services    PoCs: 1             16 GB RAM
                 Production (HA): 3  4 vCPUs
                                     256 GB SSD
Shared Services  PoCs: 1             16 GB RAM
                 Production (HA): 3  4 vCPUs
                                     256 GB SSD
Network
Ports

The table below details the networking requirements for Nirmata PE:

Core Services

  Inbound Connections
  - HTTPS (443) from the LB / users
  - HTTPS (443) from the Nirmata agent on Container Hosts
  - SSH (22) for secure shell access

  Outbound Connections
  - HTTPS (443) to the image registry

Shared Services

  Inbound Connections
  - TCP (27017) MongoDB, from Nirmata Core Services
  - TCP (9092) Kafka, from Nirmata Core Services
  - TCP (2181) ZooKeeper, from Nirmata Core Services
  - TCP (2888) ZooKeeper, from Nirmata Shared Services
  - TCP (3888) ZooKeeper, from Nirmata Shared Services
  - TCP (9200) Elasticsearch, from Nirmata Core Services
  - TCP (9300) Elasticsearch, from Nirmata Shared Services
  - SSH (22) for secure shell access

  Outbound Connections

Additionally, a well-known address will need to be configured for Nirmata (e.g. nirmata.company-name.com). This address resolves to the load balancer IP address (VIP), or to the Nirmata server IP address if a load balancer is not used.

Note 1: A load balancer is not required for PoC deployments.

Note 2: It is assumed that Container Hosts in the private data center (e.g. Diamanti VMs) and Container Hosts in the public cloud (e.g. Azure) have direct L3 connectivity to the Nirmata Core Services.
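Before installing, it helps to confirm that the well-known address resolves and that the required ports accept connections between nodes. The sketch below uses standard tools; the host name shared-services-host is a placeholder for your environment:

# Confirm the well-known address resolves to the LB VIP (or the server IP)
nslookup nirmata.company-name.com

# From a Core Services node, test the Shared Services ports
# (nc -z only attempts the TCP handshake; no data is sent)
for port in 27017 9092 2181 9200; do
  nc -zv shared-services-host $port
done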

Network Proxy

If a network proxy is required for any external communication, keep the proxy settings available so that they can be used during the installation.

Configure the proxy for Docker Engine by adding the following to /etc/systemd/system/docker.service.d/http-proxy.conf:

[Service]
Environment="HTTP_PROXY=<proxy-address>"
Environment="HTTPS_PROXY=<proxy-address>"
Environment="NO_PROXY=127.0.0.1,localhost,<nirmata-services-host-ip,nirmata-shared-services-host-ip>

You can find more details on configuring HTTP/HTTPS proxy for Docker Engine at https://docs.docker.com/config/daemon/systemd/#httphttps-proxy.
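After creating or editing the drop-in file, reload systemd and restart Docker so the proxy settings take effect:

sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify that the environment variables were picked up
sudo systemctl show --property=Environment docker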

DNS

Host names for all hosts must resolve via DNS.
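A quick sanity check is to resolve each host name from every node; a minimal sketch, with placeholder host names:

# Every name should resolve to a routable IP address from every node
for host in core-1 core-2 core-3 shared-1 shared-2 shared-3; do
  getent hosts $host || echo "FAILED: $host does not resolve"
done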


Storage

There are no additional storage requirements. Nirmata shared services use the storage on the host instances.

Security

Here are some additional security-related considerations:

X.509 Certificates

All Nirmata services use SSL for secure communications. By default, the services use self-signed certificates; a CA-signed certificate can be installed, if available.

For details on generating a self-signed certificate, see Step 5 - Install an x509 Certificate.
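For reference, a self-signed certificate can be generated with OpenSSL. This is a minimal sketch, assuming the well-known address nirmata.company-name.com as the common name; the authoritative steps are in Step 5:

# Create a self-signed certificate and unencrypted private key, valid for 1 year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout nirmata.key -out nirmata.crt \
  -subj "/CN=nirmata.company-name.com"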

Single Sign On with SAML

Nirmata supports Single Sign On (SSO) using SAML 2.0. If SAML SSO is required, Nirmata needs connectivity to the SAML IdP service, for example ADFS 3.0 (Active Directory Federation Services).
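Connectivity to the IdP can be verified from the Nirmata hosts before enabling SSO. A sketch assuming a hypothetical ADFS host name; the path shown is the standard ADFS federation metadata endpoint:

# A 200 response indicates the federation metadata is reachable over HTTPS
curl -sk -o /dev/null -w "%{http_code}\n" \
  https://adfs.company-name.com/FederationMetadata/2007-06/FederationMetadata.xml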

HA Deployment

For production environments, Nirmata should be deployed in an HA configuration. An HA setup requires 6 VMs (nodes): 3 for shared services and 3 for core services.

Note: Some distributed services, such as ZooKeeper, require an odd number of nodes to form a quorum; hence, a 2-node configuration is not supported.

To ensure resiliency against a cloud or data center outage, the nodes should be deployed across at least 2 availability zones (preferably 3). Every node needs access to every other node via a routable IP address or domain name (L3).
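L3 reachability between nodes can be confirmed with a simple loop run on each node; the node names below are placeholders:

# Every peer node should respond; repeat from each of the 6 nodes
for node in core-1 core-2 core-3 shared-1 shared-2 shared-3; do
  ping -c 1 -W 2 $node > /dev/null && echo "OK: $node" || echo "UNREACHABLE: $node"
done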

The diagram below shows the details of the setup:

[Diagram: HA deployment setup]