Nutanix Components – Part 3

In this article, I will introduce the Nutanix cluster components in a simple way; I will explain each of them in more depth in another article.

A Nutanix cluster contains a group of components that work together in harmony and depend on one another. Here I will briefly review the role each component plays in the operation of the cluster.

All components run on multiple nodes in the cluster and depend on connectivity between their peers that also run the component. Most components also depend on other components.


A key element of a distributed system is a method for all nodes to store and update the cluster’s configuration. This configuration includes details about the physical components in the cluster, such as hosts and disks, and logical components, like containers.

The state of these components, including their IP addresses, capacities, and data replication rules, is also stored in the cluster configuration.

Zeus is the Nutanix library that all other components use to access the cluster configuration. It is currently implemented using Apache Zookeeper.


Zookeeper runs on either three or five nodes, depending on the redundancy factor that is applied to the cluster. Using multiple nodes prevents stale data from being returned to other components while having an odd number provides a method for breaking ties if two nodes have different information.

Of these nodes, one Zookeeper node is elected as the leader.

The leader receives all requests for information and confers with the follower nodes. If the leader stops responding, a new leader is elected automatically.

Zookeeper has no dependencies, meaning that it can start without any other cluster components running.
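The role of an odd-sized ensemble can be illustrated with a small sketch. This is not Nutanix or Zookeeper code — the `quorum_value` function and the vote values are hypothetical — it only shows why a strict majority prevents stale answers and breaks ties:

```python
# Illustrative sketch: a value is accepted only when a strict majority of
# nodes agree on it, so a minority of stale nodes can never win, and an
# odd node count guarantees a majority always exists on disagreement.

from collections import Counter

def quorum_value(votes):
    """Return the value agreed on by a strict majority of votes, or None."""
    value, count = Counter(votes).most_common(1)[0]
    majority = len(votes) // 2 + 1
    return value if count >= majority else None

# Three-node ensemble: one stale node cannot outvote two current ones.
print(quorum_value(["v2", "v2", "v1"]))   # -> v2
# No majority: nothing is returned rather than possibly stale data.
print(quorum_value(["v1", "v2", None]))   # -> None
```

With an even number of nodes, a 2–2 split would leave `quorum_value` unable to answer, which is the tie the odd ensemble size avoids.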



Distributed systems that store data for other systems (for example, a hypervisor that hosts virtual machines) must have a way to keep track of where that data is.

In the case of a Nutanix cluster, it is also important to track where the replicas of that data are stored.

Medusa is a Nutanix abstraction layer that sits in front of the database that holds this metadata.

The database is distributed across all nodes in the cluster, using a modified form of Apache Cassandra.


Cassandra is a distributed, high-performance, scalable database that stores all metadata about the guest VM data stored in a Nutanix datastore. 

In the case of NFS datastores, Cassandra also holds small files saved in the datastore. When a file reaches 512K in size, the cluster creates a vDisk to hold the data.
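The size-based promotion rule described above can be sketched as follows. All names here (`place_file`, the store dictionaries) are hypothetical, and Nutanix's actual logic is internal — only the 512K threshold comes from the text:

```python
# Illustrative sketch: small NFS files live alongside the metadata until
# they cross the size threshold, at which point a vDisk is created for them.

SMALL_FILE_LIMIT = 512 * 1024  # the 512K threshold mentioned above

def place_file(name, size_bytes, metadata_store, vdisks):
    """Decide where a file's data lives based on its size."""
    if size_bytes < SMALL_FILE_LIMIT:
        metadata_store[name] = size_bytes   # small file: kept with metadata
        return "metadata"
    vdisks[name] = size_bytes               # large file: promoted to a vDisk
    metadata_store.pop(name, None)
    return "vdisk"

meta, vdisks = {}, {}
print(place_file("notes.txt", 10_000, meta, vdisks))    # -> metadata
print(place_file("notes.txt", 600_000, meta, vdisks))   # -> vdisk
```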

Cassandra runs on all nodes of the cluster. These nodes communicate with each other once a second using the Gossip protocol, ensuring that the state of the database is current on all nodes.
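The once-a-second exchange can be sketched as a gossip round. This mirrors the idea only — node names and the version-counter state are made up; the real protocol is Cassandra's internal Gossiper:

```python
# Illustrative sketch of one gossip round: each node picks a random peer,
# they compare state, and both adopt whichever version is newer. Repeating
# this every tick drives all nodes toward the same (current) view.

import random

def gossip_round(states):
    """states: node -> version counter. Run one round of pairwise exchange."""
    nodes = list(states)
    for node in nodes:
        peer = random.choice([n for n in nodes if n != node])
        newest = max(states[node], states[peer])
        states[node] = states[peer] = newest  # both sides take the newer state

states = {"A": 3, "B": 1, "C": 2}
for _ in range(5):        # a few one-second ticks
    gossip_round(states)
print(states)             # all nodes converge on the newest version
```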

Cassandra depends on Zeus to gather information about the cluster configuration.



A distributed system that presents storage to other systems (such as a hypervisor) needs a unified component for receiving and processing the data sent to it.

The Nutanix cluster has a large software component called Stargate that manages this responsibility.

From the perspective of the hypervisor, Stargate is the main point of contact for the Nutanix cluster. All read and write requests are sent across vSwitchNutanix to the Stargate process running on that node's Controller VM.

Stargate depends on Medusa to gather metadata and Zeus to gather cluster configuration data.

Tip: If Stargate cannot reach Medusa, the log files will include an HTTP timeout. Zeus communication issues will include a Zookeeper timeout.
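One way to act on this tip is to scan the logs for those two signatures. This is a hedged sketch — the sample log lines are invented, and the exact message text and log paths vary by release:

```python
# Illustrative sketch: classify log lines by the two timeout signatures
# described in the tip above. The sample lines below are made up.

def classify_timeout(line):
    """Map a log line to the likely unreachable component, or None."""
    if "HTTP timeout" in line:
        return "Medusa unreachable"
    if "Zookeeper timeout" in line:
        return "Zeus unreachable"
    return None

log = [
    "W0101 HTTP timeout talking to metadata backend",
    "E0101 Zookeeper timeout during config read",
]
print([classify_timeout(line) for line in log])
# -> ['Medusa unreachable', 'Zeus unreachable']
```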


In a distributed system, it is important to have a component that watches over the entire process. Otherwise, metadata that points to unused blocks of data could pile up, or data could become unbalanced, either across nodes or across disk tiers.

In the Nutanix cluster, each node runs a Curator process that handles these responsibilities. A Curator master node periodically scans the metadata database and identifies cleanup and optimization tasks that Stargate or other components should perform. Analysis of the metadata is shared across the other Curator nodes using a MapReduce algorithm.

Curator depends on Zeus to learn which nodes are available, and Medusa to gather metadata. Based on that analysis, it sends commands to Stargate.
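The scan-and-cleanup idea can be sketched as a tiny map-reduce over metadata. This is purely illustrative — the data shapes and function names are hypothetical, not Curator internals:

```python
# Illustrative sketch: the map step marks which extents the metadata still
# references; the reduce step collects extents nothing points to, which a
# component like Stargate could then be told to reclaim.

from itertools import chain

all_extents = {"e1", "e2", "e3", "e4"}
# metadata shards, each listing the extents its vDisks still reference
shards = [{"vm1": ["e1", "e2"]}, {"vm2": ["e2", "e3"]}]

def map_referenced(shard):
    """Map step: the set of extents one metadata shard still references."""
    return set(chain.from_iterable(shard.values()))

def reduce_garbage(mapped):
    """Reduce step: extents referenced by no shard are cleanup candidates."""
    referenced = set().union(*mapped)
    return sorted(all_extents - referenced)

print(reduce_garbage(map(map_referenced, shards)))  # -> ['e4']
```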



A distributed system is worthless if users can't access it. Prism provides a management gateway for administrators to configure and monitor the Nutanix cluster, including the nCLI and the web console.

Prism runs on every node in the cluster, but like other components, it elects a leader. All requests are forwarded from followers to the leader using Linux iptables. This allows administrators to access Prism using any Controller VM IP address. If the Prism leader fails, a new leader is elected.

Prism communicates with Zeus for cluster configuration data and Cassandra for statistics to present to the user. It also communicates with the ESXi hosts for VM status and related information.



Thanks for Reading!
