What is Nutanix Community Edition
  • free version of Nutanix AOS
  • designed for test driving its main features
  • on own hardware and infrastructure
  • intended for internal business operations and non-production use only

Nutanix Community Edition is the free version of Nutanix AOS, the operating system running on the commercial hardware from Nutanix. By the way, the Nutanix commercial solution is also available on selected Lenovo, Cisco and HPE hardware. The Nutanix Community Edition is designed for test driving its main features on your own hardware and infrastructure. Please note that the Community Edition is intended for internal business operations and non-production use only!

If you don’t know the basics of the Nutanix technology, the following clip explains the main concept behind their solution from a high-level view. It covers how the commercial solution works, but Community Edition follows the same concept.

What’s In Community Edition
  • Hypervisor (Acropolis hypervisor with management)
  • Single pane of glass control (Prism web console to manage the cluster)
  • Command-line management (Nutanix command line – nCLI)
  • Ability to add nodes to the cluster (One, three, or four nodes can comprise a cluster)
  • Ease of installation and use (boot the hypervisor and AOS environment from a USB device)

So what do you get with Nutanix Community Edition? You get a hypervisor together with a distributed file system: an out-of-the-box HCI solution that can be managed from a single pane of glass, the Prism web console. For advanced configuration and management tasks you get command-line access. You can deploy the Community Edition as a one-, three- or four-node cluster. And of course it’s really easy to install and use.

Recommended Hardware

If you want to build your own CE cluster, here are some hardware recommendations. From the CPU perspective it’s best to have a minimum of 4 cores. There is a way to get CE up and running with only 2 cores; I will show you later how to tweak the installer in that case.

For RAM it’s very simple: more is always better, but you need at least 16 GB. That said, with the minimum you will not have much left to run your VMs, because in the default configuration the Controller VM (CVM) takes 12 GB, so there are only 4 GB left. But there’s also a way to lower the memory consumption of the CVM to 8 GB, which I will show you later too.
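The headroom left for guest VMs is simply the total RAM minus the CVM allocation; a throwaway check with the default numbers from above:

```shell
# Memory left for guest VMs with the 16 GB minimum and the
# default 12 GB CVM allocation (all values in GB):
total=16
cvm=12
echo $((total - cvm))   # prints 4
```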

For storage you need at least two drives per node (1 SSD + 1 HDD), plus a third device as your boot device on top.

Other Recommendations and Tips
  • plan for a three-node or four-node cluster and be aware that a single-node cluster cannot be expanded
  • use static addresses for the hypervisor hosts and Controller VMs
    • do not use the subnet that Nutanix reserves for internal use
  • don’t do nested deployment on ESXi for a real lab
  • spend your bucks on memory and cores, rather than on storage
  • use high quality USB 3.0 thumb drives
  • experience a live Nutanix instance for 2 hours and test drive Nutanix CE in the cloud

To experience the full power, storage features, data protection and HA of the Nutanix solution, you should plan for a three-node cluster or, even better, a four-node cluster. Be aware that you can’t expand a single-node cluster after the initial setup; you will have to destroy the cluster and start over to get a multi-node cluster.


The Nutanix Community Edition is free, so of course there are some limitations compared to the commercial product.

  • your cluster requires internet connectivity (outgoing traffic on tcp/80 and tcp/8443) to send telemetry data and usage statistics to Nutanix through the «Pulse» mechanism
  • you must upgrade within 30 calendar days when an upgrade is available, otherwise access to your cluster will be blocked
  • your hardware (especially HBAs, NICs, NVMe) may or may not work with Nutanix CE
  • a Nutanix Next account is mandatory (registration required)
  • community support only
Preparation and Creation of the USB Installation Media

Now let’s move on to the deployment and setup on your own hardware. First you have to register for a Next account and join the community (https://www.nutanix.com/community-edition/), where you will be able to download the initial installation image from the Nutanix Next Community forum.

Register for a Next account to get access to the Community Edition
Rufus can burn the installation image to your boot media

To prepare your boot media with the installation image on a Windows machine you need a utility. I can recommend «Rufus», which is a nice tool for this and is available for free at https://rufus.akeo.ie.

  1. extract the initial install image from the downloaded file (ce-2017.02.23-stable.img.gz) – 7-zip can do this on Windows
  2. create a bootable disk using the option «DD Image»
  3. select the extracted initial installer image (ce-2017.02.23-stable.img)
  4. boot your system from the USB installation media
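The same DD-style write can be scripted on Linux instead of using Rufus. A minimal sketch, assuming your USB stick shows up as /dev/sdX (verify with lsblk first, because dd will happily overwrite the wrong device); the demo below writes to a scratch file so it is safe to run as-is:

```shell
# Stand-in for the real ce-2017.02.23-stable.img.gz download:
printf 'demo-image' | gzip > ce-demo.img.gz

# Step 1: extract the raw image (-k keeps the .gz file around)
gunzip -kf ce-demo.img.gz

# Step 2: DD-style write; for a real run set OUT=/dev/sdX
OUT=./usb-target.img
dd if=ce-demo.img of="$OUT" bs=4M
sync
```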

During the installation you will have to provide your network configuration. You need two IPs per node (one for the host and one for the Controller VM (CVM)) plus one additional IP as your virtual cluster IP, and of course you have to configure the appropriate subnet mask and gateway address. As already mentioned, do not use the subnet that Nutanix reserves for internal use.
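So an N-node cluster needs 2 × N + 1 addresses in total; a quick check, with an illustrative address plan in the comments (addresses are examples only, use your own subnet):

```shell
# Address count for an N-node cluster: one IP per host, one per
# CVM, plus the virtual cluster IP.
# Example plan for three nodes (illustrative addresses only):
#   host1 10.0.0.11 / cvm1 10.0.0.21 ... cluster IP 10.0.0.30
nodes=3
echo $((2 * nodes + 1))   # prints 7
```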

Tweaking the Minimum Requirements

To lower the minimum requirements for memory and/or cores, boot the installer image, log in as root with the password nutanix/4u, and edit the appropriate values in the COMMUNITY_EDITION section of the /home/install/phx_iso/phoenix/minimum_reqs.py file.
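The edit itself is just changing a constant in a Python dict. Here is a hedged illustration using sed on a mock file; the variable names below are assumptions, since the exact keys inside the COMMUNITY_EDITION section vary between CE releases, so inspect the real minimum_reqs.py with grep or vi first:

```shell
# Mock of the relevant section; 'min_cores' and 'min_memory_gb'
# are assumed names, not necessarily what your CE release uses:
cat > minimum_reqs_mock.py <<'EOF'
COMMUNITY_EDITION = {
    'min_cores': 4,
    'min_memory_gb': 16,
}
EOF

# Drop the core requirement from 4 to 2 in place:
sed -i "s/'min_cores': 4/'min_cores': 2/" minimum_reqs_mock.py
grep min_cores minimum_reqs_mock.py
```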

Lowering the CVM memory

To lower the default CVM memory (pre-install), boot the installer image, log in as root with the password nutanix/4u, and edit the appropriate value in the COMMUNITY_EDITION section of the /home/install/phx_iso/phoenix/sysUtil.py file. You can drop the CVM memory to 8 GB if you’re not using any data services (deduplication, compression).


To adjust the CVM memory to 8 GB after the initial installation, connect as root with the password nutanix/4u to the host running the CVM and execute the following commands.

virsh list --all

virsh shutdown <CVM-Name>

virsh setmem <CVM-Name> 8G --config

virsh setmaxmem <CVM-Name> 8G --config

virsh start <CVM-Name>

virsh list --all

After applying the new memory configuration make sure your CVM is back up and running.

Setup of a 3 or 4 Node Cluster

To set up a three-node or four-node cluster do NOT select the «Create single-node cluster» option. Just install all your nodes one by one, then connect to one of your CVMs (Controller VMs) via SSH. «putty» might be your favorite tool of choice for this (http://www.putty.org).

From the CLI we have to manually create the cluster and configure it with the appropriate DNS and NTP settings.


Execute the following commands to create, start and check your cluster.

Create the cluster

cluster -s [cvm1_ip],[cvm2_ip],[cvm3_ip],[cvm4_ip] create

Start the cluster

cluster start

Check the status of the cluster

cluster status

Configuring a Multi-Node Cluster

To configure your cluster with the appropriate settings use the following commands.

Name the cluster

ncli cluster edit-params new-name=[cluster_name]

Check / add / remove name servers

ncli cluster get-name-servers

ncli cluster add-to-name-servers servers="[dns_server_ip]"

ncli cluster remove-from-name-servers servers="[dns_server_ip]"

Check / add / remove time servers

ncli cluster get-ntp-servers

ncli cluster add-to-ntp-servers servers="[ntp_server]"

ncli cluster remove-from-ntp-servers servers="[ntp_server]"

Set a virtual cluster IP address

ncli cluster set-external-ip-address external-ip-address="[cluster_ip]"

Additional Commands

Stopping a cluster

cluster stop

Destroying a cluster (deletes all cluster and guest VM data!)

cluster -f destroy

Configuring a proxy server

ncli http-proxy add name=[proxy_name] address=[proxy_ip] port=[proxy_port] proxyTypes=http,https

Check proxy configuration

ncli http-proxy ls




