
Nutanix 4.5 Cross-Hypervisor VM Conversion

One of the reasons people stay with a particular hypervisor is that it is too hard (or too costly) to migrate to another one: all the drama of converting, testing and making sure everything is right, and then the risk of having to move back if something goes wrong.

Sure, there are separate software tools you can buy to do the conversion for you… but what if the virtualisation infrastructure itself – the thing that is actually providing your servers and storage – could do it as an in-built function? What if it could be done just by clicking a few buttons?

So in the demo video below, I take a running Windows VM on Nutanix cluster “A” (running vSphere), snapshot it, send the snapshot to a second Nutanix cluster “B” (running Nutanix’s own free hypervisor, AHV) and then start the VM there. Job done. Easy.

Here’s the setup:

[Diagram: cluster setup]

Basic lab setup using a flat L2 network. Production and DR deployments would use L3 networks – which is fine of course

…and here’s the demo:

For brevity, I cut out the initial one-off processes to set up the replication. The full process is below (check out the Nutanix Index for articles describing how to set up replication):

1. Set up a Data Protection Remote Site ‘pair’ of clusters (so that they can replicate to each other) and test the connection.

Site A (ESXi cluster)
Site B (AHV cluster)

2. Set up a Protection Domain policy, add the VM you want to be part of the replication policy and set a schedule (a CLI sketch of steps 1 and 2 follows this list).

3. On the Windows VM on ESXi at Site A that you want to snap to the AHV cluster at Site B, make sure you install the Nutanix VM Mobility drivers MSI from the my.nutanix.com support portal. (These will be included in Nutanix Guest Tools (NGT) from the Nutanix AOS 4.6 release onwards, so installing NGT will automatically give you the VM Mobility drivers.) The Nutanix VM Mobility installer deploys the drivers that are required at the destination AHV cluster. Once the source VMs are prepared, they can be exported (snapped) to the AHV cluster.

4. Run the snapshot and restore operation as per the video. That’s it!
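For those who prefer the command line, here is a rough ncli sketch of steps 1 and 2, run from any CVM on the source cluster. The names used (SiteB, PD-Win-Test, Win2012-Test) are made up for illustration, and you should confirm the exact parameters against the Web Console Guide for your AOS version (the schedule itself is easiest to set in Prism):

  # Define Site B as a replication target and confirm it is reachable
  ncli remote-site create name=SiteB address-list=<SiteB-cluster-IP>
  ncli remote-site list

  # Create a Protection Domain and add the VM to it
  ncli protection-domain create name=PD-Win-Test
  ncli protection-domain protect name=PD-Win-Test vm-names=Win2012-Test
  ncli protection-domain list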

[Image: a word on a keyboard key]

Almost as easy as clicking this button

A few points to note:

In the video I am just taking a crash-consistent snapshot; if you want a clean snap, shut down the source VM first, then snap, then restore. Live application-consistent snapshots are coming in AOS 4.6 for ESXi and AHV.

Obviously, if your VMs have static IPs, or to avoid computer-naming issues, you should take care of these before joining the newly created AHV VM to the network. When you restore the VM on AHV, by default there is no virtual NIC connected (so the risk is minimal if you just want to test). If you want it to connect to the network, attach a NIC to the restored VM via Prism on the AHV cluster (go to the VM page).
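The same thing can also be done with the Acropolis CLI on any CVM of the AHV cluster. A rough example, assuming the restored VM is called Win2012-Test and an AHV network named vlan.0 already exists (both names are illustrative only):

  acli vm.nic_create Win2012-Test network=vlan.0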

Only 64-bit guest operating systems are supported at the time of writing (Nutanix AOS 4.5).

For Windows 7 and Windows 2008 R2 operating systems, you have to install the SHA-2 code signing support patch (KB3033929) before running the Nutanix VM Mobility installer. For more information, see https://technet.microsoft.com/en-us/library/security/3033929.
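A quick way to check whether that hotfix is already present on the source VM is to query the installed updates from an elevated command prompt, for example:

  wmic qfe get hotfixid | findstr 3033929

If nothing is returned, the patch still needs to be installed.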

More info can be found in the Nutanix Prism Web Console Guide under the “Nutanix VM Mobility for Windows” section – which can be found on the Nutanix Support Portal.

Use cases:

A lot of people are trying AHV for the first time, and larger customers usually have a test/dev set of Nutanix nodes. This method would be perfect for snapping production VMs over to AHV and verifying that everything runs OK.

Also, I can see a use case where DR clusters could now use the in-built AHV on Nutanix clusters and save some licensing dollars.

It would also be possible to use Nutanix Community Edition as the AHV target – in case you had some spare hardware and wanted to just try this out without the need for a full Nutanix set of nodes.

Future software plans:

In a few weeks (early 2016), Nutanix will release AOS 4.6, which should include two-way VM conversion (ESXi <-> AHV). A future AOS release is expected to add support for Hyper-V, delta disks and volume groups.

Yes, Nutanix will enable the ability to leave AHV and migrate your VMs *back* to ESXi (for example) should you choose. Put simply, the onus is on Nutanix to keep innovating to maintain your loyalty, rather than any technical or license ‘lock-in’. At the end of the day your workloads are just virtual machines – you should be free to move them wherever you see fit (even away from Nutanix if you choose).

There will of course be lots of improvements and extra features coming in future releases, which you will get by simply doing a standard Nutanix non-disruptive upgrade.

Conclusion: 

In essence, you can see why going hyper-converged makes things like this almost trivial compared to trying to do the same in a traditional 3-tier infrastructure (separate server and storage layers). With each Nutanix release, more features like this will be added and improved, and your life gets a little easier each time. Being 100% in-software is going to be a necessity in the next decade and beyond.

Thanks to @danmoz for letting me borrow his Dell XC cluster… and I treated it badly too (e.g. multiple times I hard-powered it off with no care – and it self-recovered every time).

[Image: shackles]

Hypervisor lock-in is sooo 2007 :)

Upgrading ESXi on Nutanix by CLI

Firstly, you MUST check the requirements for your particular Nutanix hardware model, and whatever the Nutanix upgrade guide (for the version you are using) says to check, before making any changes.

The advantage of the Nutanix platform is that even when the local controller VM (CVM) is offline, the Nutanix NFS datastore is still visible to the host on which that CVM resides.

The below covers just the ESXi upgrade part. If you also need to upgrade the Nutanix NOS software version, consult the appropriate Nutanix upgrade guide.

  1. Download the vSphere depot zip file (offline bundle) from vmware.com
  2. Put it on the NTNX NFS datastore (here it is named NFS1)
  3. Make sure all guest VMs are off or migrated off the host you want to upgrade, then shut down the CVM on that node. “ha.py” will kick in, so the NFS datastore is still visible on that host despite the CVM being off.
  4. Get the image “profile” name (the example below is upgrading to 5.1 Update 1). SSH to the host and run:
     esxcli software sources profile list -d=[NFS1]update-from-esxi5.1-5.1_update01.zip
     Use the ‘standard’ profile name in the upgrade command in the next step.
  5. Run the upgrade on the ESXi host:
     esxcli software profile update --depot=[NFS1]update-from-esxi5.1-5.1_update01.zip --profile=ESXi-5.1.0-20130402001-standard
     Wait about 30 seconds or so for the upgrade to complete. A whole lot of output about vibs etc. fills the screen; scroll up a bit and it should say “reboot required” or similar.
  6. Reboot the host and it should come back upgraded. Once the host restarts, the CVM will power on and re-join the Nutanix cluster. Make sure you check that all is OK with a cluster status on any CVM (see the command sketch below) before repeating the above procedure on the next ESXi host.
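For reference, here is roughly what the CVM shutdown and the post-upgrade checks look like from the command line. This is a sketch only – the cvm_shutdown script runs on the CVM itself, and you should confirm the procedure against the upgrade guide for your NOS version:

  # On the CVM of the node being upgraded – graceful shutdown
  cvm_shutdown -P now

  # On the ESXi host after the reboot – confirm the new version/build
  vmware -v

  # From any other CVM, once the upgraded node's CVM is back up
  cluster status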

Do the same sort of thing for ESXi 5.5 upgrades of course – change the filenames/depot files/profile name as appropriate. Remember to fully check your hardware model and other Nutanix-specific requirements from the upgrade guide before kicking off the upgrade, or things may end badly for you :)

The Nutanix support team is happy to walk you through the above as well.

Basic configuration of the Nutanix 2RU block

Winter is coming for the old SAN.

Thought I’d describe what you get when the shiny new 2RU Nutanix block arrives at your door and how to get it configured to a basic level, from the point of view of a new user of the Nutanix solution. Obviously, talk to your Nutanix partner before diving in, but this should give you a bit more of an idea of what you need to do to prepare.

In my case the block comprised four nodes and I ordered it with 192 GB RAM per node (768 GB in total). I won’t go into the tech specs in detail as you can get those from the Nutanix web site: http://www.nutanix.com/products.html#Specs

The rail kit is a snap-on type, easy to install, but don’t be a goose (like I was) and try to lift/install it into your rack by yourself. It isn’t light as the disks are pre-installed. After a short trip to the emergency room I continued with the config… :)

The next thing I did was send an email to Nutanix support to get registered as a customer, which then gives you access to the knowledge base. Within about 10 minutes I had a reply and a login to their support portal. I searched for ‘setup’ and got the setup guide. If you aren’t sure which version to get, contact support – they are very fast to respond. In my case it was v2.6.2 (the latest for Oct 2012).

Physical cabling was easy: two power cables (split across separate UPS power feeds), and each node has 1 x 10GigE network interface, 2 x 1GigE interfaces for failover of the 10GigE, and one 1GigE IPMI (lights-out management) interface. That’s it. I configured the IPMI ports as access ports and the 10GigE and 1GigE uplink-to-network ports as trunks.

The guide itself is pretty straightforward and easy to follow, and the easiest method is to use IPv6 to configure the cluster. I used a Win7 laptop with IPv6 enabled, Bonjour for Windows, Google Chrome (with the DNSSD extension) and a typical dumb gigabit switch to hook it all up in an isolated environment initially (if you want to isolate it). The cluster uses the 1GigE interfaces as a failover for the 10GigE NICs, which is why it works on a basic 1GigE switch. The setup process assigns IPv4 addresses to the cluster components, so don’t worry about needing to be an IPv6 guru – you don’t. You don’t have to use Win7 either; other OS options are fine, but I didn’t try any other OS/browser combo, so YMMV.

In my case I’ve assigned a complete /24 subnet in the DC for Nutanix infrastructure. It is recommended that all the components are in the same L2 domain but it is not mandatory. Addresses will be split out between the ESXi hosts, IPMI adapters and Nutanix controller virtual machines. Do not use 192.168.5.x/24 as this is reserved for the cluster’s internal communication.

I reserved addresses in segments so that when I get more Nutanix blocks, the expansion is contiguous. You don’t have to follow my example of course, but here it is:

Block/ Node ID  ESXi host   Controller VM    IPMI interface
Block 1 Node A     10.1.1.21     10.1.1.121          10.1.1.201
Block 1 Node B     10.1.1.22     10.1.1.122          10.1.1.202
Block 1 Node C     10.1.1.23     10.1.1.123          10.1.1.203
Block 1 Node D     10.1.1.24     10.1.1.124          10.1.1.204

As you can see, any future blocks can continue the sequence. eg:

Block 2 Node A     10.1.1.25     10.1.1.125          10.1.1.205  … and so on for any future nodes.

I don’t think I’ll ever have 54 nodes, so that sequencing should be OK :)

The controller virtual machine is where all the magic happens. There is one per node, and the key to the whole Nutanix solution is the software and processes that run within the controller VM, keeping everything in check and optimised, even in the event of failure.

The block ships with evaluation versions of the vSphere ESXi hypervisor, vCenter Server, Windows 2008 R2 Enterprise (for vCenter) and MS SQL 2008 Enterprise (for the vCenter database). You can apply your own licenses as appropriate. Each host has its own local datastore (on the SATA SSD), and the distributed NFS datastore is made up of the Fusion-io drive (PCIe SSD) and the SATA disks. There are also ‘diagnostic’ VMs pre-deployed, which are used to benchmark the performance of the cluster.

You do not have to use the included vCenter; you can use a pre-existing one instead (it will save you a license). At this stage I’ll probably keep a separate vCenter for the VDI deployment, but that comes down to your own deployment scenario.

Once the cluster is ‘up’, you can follow the guide to configure NTP and DNS from the CLI, configure the vMA, configure the cluster and hosts in vCenter, install the VAAI plugin and configure the NFS storage.
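As an example, the NTP and DNS part can be done from any controller VM with ncli along these lines (the server addresses are placeholders – check the setup guide for the exact syntax in your NOS version):

  ncli cluster add-to-ntp-servers servers=10.1.1.5
  ncli cluster add-to-name-servers servers=10.1.1.10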

I also added an external, non-Nutanix NFS datastore to all ESXi hosts so that I could use it as a transfer mechanism to move VMs and templates from existing vSphere infrastructure to the Nutanix block should I want to. Note that there is a way to allow ESXi hosts outside the Nutanix cluster to connect to the internal Nutanix NFS datastore, however I think it is easier and safer to keep the Nutanix nodes themselves as the only hosts that can write to the Nutanix NFS datastore.
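Mounting such an external NFS datastore can be done through the vSphere client, or from each host’s CLI with something like the following (the NFS server name, export path and datastore label here are made up for the example):

  esxcli storage nfs add --host=nfs01.example.com --share=/exports/transfer --volume-name=TransferNFS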

When you take into account picking up the box from the loading dock, unpacking, lifting/racking, cabling, getting your Win7 laptop ready, cluster and vSphere configuration, DC network configuration, moving from isolated to production, installing the VAAI plugin, configuring NFS storage and final checks I’d say you were looking at a few hours in total to complete. Obviously adding any more blocks will take significantly less time given most of the clustering components are already done.

The ease of configuration and administration of the Nutanix block has been great. The other thing to keep in mind is that the support team from Nutanix (and their partners) can assist you with the above deployment process if you like.

So, at the end, you have a complete storage-and-compute modular building block that is easy to deploy and to scale out when you require. In the coming weeks I’ll provide updates on the experience of using the block for our VDI project, as well as going into some detail on how the block has been designed from the ground up to handle a lot of different failure scenarios.

Be sure to check out some of the Nutanix YouTube videos as well if you haven’t done so: http://www.youtube.com/user/Nutanix and get ready for a life without a SAN in your DC.

The Nutanix box arrives in Australia