How to create a nested virtual NutanixCE cluster

In my last post, I configured a Nutanix CE cluster node running on a single nested VM on Workstation. My intention was to create a three-node cluster, but unfortunately I was unable to because I had built the single-node cluster first and then tried to add the extra nodes separately.

It appears that CE behaves the same way as the full version: when you add extra nodes to an existing cluster (expansion), the default behavior is to automatically check for an Intelligent Platform Management Interface (IPMI), which is usually built into the main board of Nutanix Supermicro hardware. Given that my nodes were all virtual, this was a hard issue to work around (I did search long and hard!).

I spoke to @Joep and took his advice on posting to the excellent Nutanix Forum. AdamF helped me out and suggested that I tear down the cluster and start again, but this time build the cluster with all nodes present from the start. So, here is how I did it in my environment:

NutanixCE Cluster node – build and configuration

1) Quite simple: follow my previous post up to step 10. Ensure that your VM has been built and configured with networking, and that KVM and the CVM are running. Then repeat the process until you have three nodes in total.


2) Once they are powered up, check that everything is running by using the following command on each node:
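A minimal sketch of what to check (the exact commands aren't in my notes, so the CVM name and checks below are assumptions for a typical CE node):

```shell
# On each KVM host: confirm the CVM guest is running.
virsh list --all

# On each CVM (SSH as the nutanix user): confirm the node's services are up.
genesis status
```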


3) At this stage, I performed a little maintenance to reduce the overhead that the CVMs would have on my limited resources. I shut down the CVM on each KVM host by doing the following:
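Something along these lines, assuming the CVM is managed through libvirt on the host (the VM name is a placeholder; check yours with `virsh list`):

```shell
# On each KVM host: shut down the CVM guest gracefully.
virsh shutdown NTNX-CVM
```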

4) Once the CVM was shut down, I modified its settings to use 6GB of RAM instead of 12GB:

Using the vi editor, amend the memory sections to reflect the new allocation:
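A sketch of the relevant lines, assuming you are editing the CVM's libvirt domain XML (e.g. via `virsh edit`); libvirt expresses memory in KiB, so 6GB becomes 6291456:

```xml
<!-- Reduce the CVM allocation from 12GB to 6GB -->
<memory unit='KiB'>6291456</memory>
<currentMemory unit='KiB'>6291456</currentMemory>
```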

5) Ensure that the configuration is saved (:w!), then start the VM. Repeat these steps for all three CVM virtual machines:
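Assuming the libvirt setup above, starting the CVM again looks like this (VM name is a placeholder):

```shell
# On each KVM host: start the CVM back up.
virsh start NTNX-CVM
```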

6) Now it is time to create the cluster. Log in to one of the CVM machines via SSH and enter the following:
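The key difference from a single-node build is passing all three CVM IPs up front. A sketch, with placeholder addresses standing in for your own CVM IPs:

```shell
# From any one CVM: create the cluster with all three CVM IPs present.
cluster -s 192.168.1.101,192.168.1.102,192.168.1.103 create

# Then confirm all services come up on every node.
cluster status
```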
7) Now that the cluster is built and running, enter the ncli prompt to set up a few things:
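The original commands aren't recorded here, but typical post-creation housekeeping from ncli looks something like this (the name and DNS server are example values, not from my build):

```shell
ncli
ncli> cluster edit-params new-name=NTNX-CE
ncli> cluster add-to-name-servers servers="8.8.8.8"
ncli> cluster get-params
```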

8) With the cluster started and configured, I then logged in to the master node (the Zeus leader) via the web GUI (step 13 onwards in my previous blog post).



Safely shutdown and start up cluster nodes

I found out that it is not wise simply to power off your KVM nodes through VMware Workstation; it can have pretty disastrous consequences! Therefore I'm just going to quickly run through shutting the cluster down cleanly when you are finished.

1) Using Prism, make sure any non-system VMs are shut down across the cluster.

2) Log in to any CVM node that is part of the cluster (I logged into the master, though I do not believe it matters) and stop the cluster:
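The stop command itself, run from that CVM:

```shell
# Stops all cluster services across every node.
cluster stop
```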



3) At this stage, it is safe to log in to each CVM node and run a shutdown command.
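A sketch of the shutdown, assuming a plain OS shutdown from within each CVM (newer CE builds also ship a `cvm_shutdown` wrapper, which is preferable where available):

```shell
# On each CVM: power the guest OS off cleanly.
sudo shutdown -P now
```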


4) When all of the CVMs are shut down, it is safe to shut down the guest OS through Workstation (i.e. shut down the KVM hosts).

5) Starting the cluster is easier. Power on each KVM virtual host and wait a few minutes for the CVM and services to start. Once services are up, start the cluster!
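Once the CVMs are back up, starting the cluster from any CVM looks like this:

```shell
# Bring all cluster services back up, then verify.
cluster start
cluster status
```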


That is it, really. I hope to post further articles about running some nested VMs within the nested environment, and more about the Nutanix platform and how to use it! Hopefully that will come in time as I gain more exposure to it.


  1. Jason

    I just recently applied for the Beta and I am awaiting my acceptance, but I have been reading documentation online about setting this up in VMware Workstation, which led me here. I know that Nutanix requires 3 IP addresses: 1 for IPMI, 1 for the CVM, and one for the hypervisor. Couldn't you create 2 virtual networks within Workstation, one for IPMI and one for the CVM and hypervisor?

    • Ryan Harris


      Yes, that is a valid way of doing things. You would need to set up the necessary networking to allow two separate subnets for configuration.

      However, the issue for me is that because I virtualized my KVM hosts, they do not have an IPMI interface. In the "real" Nutanix release on actual hardware, you do require 3 IP addresses, but with Nutanix CE in a nested environment you create the cluster manually using the CLI. There is no BMC in the virtual hardware and therefore nothing to connect an extra "IPMI NIC" to.

      Hope this makes sense. The workaround for me was to create the cluster manually as outlined in this post!

  2. Pingback: NutanixCE: Cluster Creation Error - Could not discover all nodes - - Virtualization Blog
  3. Wes Prather

    I’m running this configuration on Windows 10/Workstation 12.1, and still have this nagging issue:
    - On 1st install/setup, I can ping the node and the CVM IP, SSH to the CVM for proper shutdown, etc.
    - Subsequent boot-up of the nested VM results in the node being reachable, but the CVM is not (even from the host).
    - This results in a setup that is rather non-persistent.
    - I'm using the NAT network to be able to allow the cluster to authenticate with NEXT credentials, because it never succeeds NEXT login if I use my routed/Host-Only VMnet. (I route all host-only traffic out of a NAT firewall VM.)

    Testing under Ubuntu 14.04 LTS next to see if it’s Workstation or host OS…

      • Wesley Prather

        I have just deployed the 2016.09.23 version on VMware Workstation 12.5, and have this same issue.
        I created a 1-host cluster, logged into Prism, *glad the DNS script works!*, and was able to create a Virtual Network (vlan.0). I logged into the CVM via SSH, did a “cluster stop”, and then logged into the host and did a “sudo shutdown -P now”. After a minute or so the VM powered off. Upon power-up, the host looks OK, but the CVM isn’t reachable (and hence neither Prism nor SSH works). Anyone know how this should work?

        • Wesley Prather

          Well I guess it’s a local issue, then… I’m using a NAT network interface under Workstation 12.5, with as the associated subnet. I like to be able to use the Web Browser on my local Windows 7 host to access the IP I give the CVM, and I also can reach the CVM & KVM host via SSH. Something in my network stack keeps getting stepped on, and I cannot access from the host anymore. If I connect from a separate Windows 7 client running on the NAT network, things are flawless and repeatable, with shutdown/restart working reliably. Go figure… Let’s see – is it the FortiClient, the Cisco AnyConnect, the Juniper Pulse, or the Palo Alto GlobalProtect VPN that’s getting me? Time for some QA iterations… 😉 **YRMV and YMMV, of course**

  4. VivekS

    Creating a 3-node cluster with 250 GB SSD & 500 GB HDD on vSphere 6 using Community Edition. After creation, under hardware I am unable to see the detected capacity of the 3 x 500 GB HDDs; it is only showing the SSD capacity.

  5. Pingback: Adding Node to Nutanix Cluster – All about Cloud Ecosystem based on VMware
  6. Pingback: Nutanix CE Cluster on MAC Mini - xenappblog
  7. Pingback: Nested Nutanix Community Edition Home Lab Cluster - CCIE44938
