Recently I’ve been trying to get Nutanix Community Edition 5.5 (CE 2018.01.31), including Prism Central (PC), up and running on VMware ESXi 6.5 in a nested configuration. Out of the box this doesn’t work; you have to run through a couple of extra configuration steps that can be found on forums and blogs. I thought it would be a good idea to bundle all this stuff in one blog article, to get you up to speed faster.
I also tried to get Prism Central up and running in the nested environment; here too you will encounter some issues. I could solve some of them, but didn’t succeed for the full 100%. Please read on to learn more.
Download the CE software and create a vmdk
The first step is to download Nutanix CE and build a VM from it. Download the ce-2018.01.31-stable.img.gz image, extract it, rename the .img file to ce-flat.vmdk and upload it to your ESXi host. On the ESXi host, create a new vmdk descriptor file for ce-flat.vmdk following this procedure.
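If you prefer to script this step, the descriptor can also be generated by hand. The sketch below is an assumption of what the linked procedure boils down to: it computes the flat file’s size in 512-byte sectors, derives a classic 255/63 disk geometry, and writes a minimal descriptor pointing at the flat file. The exact ddb values your ESXi build expects may differ, so treat this as illustrative.

```shell
# Sketch: generate a VMDK descriptor for an existing flat file.
# Run on the ESXi host in the datastore folder that holds ce-flat.vmdk
# (after extracting and renaming the downloaded image).
make_descriptor() {
    flat="$1"                                # e.g. ce-flat.vmdk
    out="$2"                                 # e.g. ce.vmdk
    sectors=$(( $(wc -c < "$flat") / 512 ))  # size in 512-byte sectors
    cylinders=$(( sectors / 16065 ))         # 16065 = 255 heads * 63 sectors
    cat > "$out" <<EOF
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

RW ${sectors} VMFS "${flat}"

ddb.virtualHWVersion = "13"
ddb.geometry.cylinders = "${cylinders}"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
EOF
}

# On the host: make_descriptor ce-flat.vmdk ce.vmdk
```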
Create a Nutanix CE VM
Next step is to create a Nutanix CE VM on your ESXi host. My configuration specs are:
Hard disk 1 is the ce.vmdk; hard disks 2 and 3 (not visible) each have a size of 500 GB. Hard disk 1 is connected to SATA controller 0; hard disks 2 and 3 are connected to SCSI controller 0. You can use the VMware Paravirtual SCSI adapter here. Don’t forget to check “Expose hardware assisted virtualization to the guest OS”, otherwise the installation will fail.
Make changes to the installer OS
You have to make a few changes to the Nutanix VM guest OS before you can run the installer in a nested environment. This one comes from this thread on the Nutanix forum, kudos to matmassez. Boot the Nutanix CE VM you’ve just created, log on to the VM as root (password: nutanix/4u) and walk through the following procedure:
Add the pmu state to the SVM template by following these steps:
- Boot CE VM
- Login with root / nutanix/4u
- Navigate to /home/install/phx_iso/phoenix/svm_template/kvm
- Edit default.xml and add pmu state value.
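For reference, the pmu entry goes inside the `<features>` element of default.xml. The snippet below is a sketch; the surrounding feature elements in your copy of the file may differ, and only the pmu line is the actual addition:

```xml
<features>
  <acpi/>
  <apic/>
  <!-- add this line so nested KVM does not expose the performance monitoring unit -->
  <pmu state='off'/>
</features>
```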
Change the following XML file:
- Navigate to /var/cache/libvirt/qemu/capabilities/
- There should be 1 xml file: 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml
- Edit the file; at the bottom, different machine types are defined
- Remove the line that contains ‘pc-i440fx-rhel7.2.0’;
- Change the line that contains ‘pc-i440fx-rhel7.3.0’ to ‘pc-i440fx-rhel7.2.0’;
- Your config should now look like this:
- Save the file and reboot the VM.
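To illustrate the end result: after the edit, the machine-type list should contain a single entry named pc-i440fx-rhel7.2.0 (the line that previously carried the rhel7.3.0 name) and no duplicate rhel7.2.0 entry. The attributes below are illustrative; keep whatever attributes the rhel7.3.0 line had in your file, changing only the name:

```xml
<machine name='pc-i440fx-rhel7.2.0' alias='pc' hotplugCpus='yes' maxCpus='240'/>
```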
I would advise making a template of the VM, so you can easily redeploy it. Deploy the Nutanix CE VM, start the installation and follow the normal procedure. If everything went fine, you will have a Nutanix AHV host up and running (if you chose a one-node cluster), including the Controller VM (CVM).
Shutting down Nutanix CE
When you want to shut down your Nutanix CE cluster, it’s very important to walk through the following procedure:
- Bring down all regular VMs;
- Bring down AFS fileservers (if applicable) through minerva -a stop (to be executed on the CVM);
- Stop the Nutanix cluster: cluster stop (on the CVM);
- Shutdown the CVM: cvm_shutdown -P now (on the CVM);
- Stop the AHV host: poweroff.
Deploying Prism Central on Nutanix CE
The Prism Central software download is also available on the Nutanix CE downloads page. You need the ce-pc-deploy-2018.01.31.tar and the ce-2018.01.31-metadata.zip file. The zip file contains a JSON file (ce-pc-deploy-2018.01.31-metadata.json) that is required when you upload the tar to your Nutanix cluster through Prism Elements.
Upload the Prism Central files to your environment and start the installation; the Prism Central VM is deployed to your environment. After the PC VM starts, it will probably run into a kernel panic, as documented in this forum thread.
Marcrousseau documented the following procedure to resolve this kernel panic:
- deploy PrismCE using ce-pc-2017.07.20-metadata.json and ce-pc-2017.07.20.tar (also applicable to the 2018.01.31 version)
- grab /var/lib/libvirt/NTNX-CVM/svmboot.iso from an AHV host using SCP/SFTP
- upload it as an ISO image in a PrismCE container with name boot_PRISMCE
- edit PrismCE VM settings:
delete DISK scsi.0
delete CDROM ide.0
add new disk type CDROM / Clone from Image service / Bus type=IDE / Image=boot_PRISMCE
select CDROM as Boot Device
- power on PrismCE VM
- the screen stays blank for about 20 seconds, and then everything works
Because you cannot edit the PC properties in the Prism interface, you will need to use the acli for this. The following commands will help you here:
vm.list
vm.off <<prism_vm_name>>
vm.disk_list <<prism_vm_name>>
vm.disk_delete <<prism_vm_name>> disk_addr=ide.0
vm.disk_delete <<prism_vm_name>> disk_addr=scsi.0
vm.disk_create <<prism_vm_name>> cdrom=true clone_from_image=boot_PRISMCE bus=ide
vm.update_boot_device <<prism_vm_name>> disk_addr=ide.0
vm.on <<prism_vm_name>>
Now logon to the Prism Central VM, and configure the IP settings as documented here.
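As a rough sketch of what that IP configuration looks like (the addresses below are hypothetical examples, and your PC image may document a slightly different procedure): edit /etc/sysconfig/network-scripts/ifcfg-eth0 on the PC VM and restart networking.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- example values, adapt to your network
DEVICE="eth0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="192.168.1.50"       # hypothetical static IP for the PC VM
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"       # hypothetical gateway
```

Then apply the change with `sudo service network restart`.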
Next step is to configure Prism Central:
cluster --cluster_function_list="multicluster" -s static_ip_address create
If you receive a Genesis error, you can also try:
cluster --cluster_function_list="multicluster" -s static_ip_address --skip_discovery create
Unfortunately I couldn’t get Prism Central fully up and running; only four services started successfully. The following services couldn’t be started, for reasons that are unclear to me:
If you have any suggestions here, please leave your input as a comment below.
I hope this article was useful for you to get Nutanix CE deployed and running as a VM on ESXi 6.5.