Upgrading ESXi on Nutanix by CLI

Firstly, you MUST check the requirements for your Nutanix hardware model and whatever the Nutanix upgrade guide (for the version you are running) says to check.

The advantage of the Nutanix platform is that even when the local controller VM (CVM) is offline, the Nutanix NFS datastore is still visible to the host on which that CVM resides.
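If you want to see this for yourself, list the mounted filesystems on the host while its CVM is down – the NFS datastore should still show up (this is plain esxcli, nothing Nutanix-specific):

esxcli storage filesystem list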

The below covers just the ESXi upgrade part. If you also need to upgrade the Nutanix NOS software version, consult the appropriate Nutanix upgrade guide.

  1. Download the vSphere depot zip file (offline bundle) from vmware.com
  2. Put it on the Nutanix NFS datastore (here it is named NFS1).
  3. Shut down the CVM on the node you want to upgrade, and make sure all guest VMs are off or migrated off that host. “ha.py” will kick in, so the NFS datastore is still visible on that host despite the CVM being off.
  4. Upgrade the host by doing the following (the example below upgrades to 5.1 Update 1). To get the “profile” name, SSH to the host and run:

     esxcli software sources profile list -d=[NFS1]update-from-esxi5.1-5.1_update01.zip

     …and use the ‘standard’ profile name in the upgrade command in the next step.
  5. Run the upgrade on the ESXi host:

     esxcli software profile update --depot=[NFS1]update-from-esxi5.1-5.1_update01.zip --profile=ESXi-5.1.0-20130402001-standard

     Wait about 30 seconds or so for the upgrade to complete. A whole lot of output about VIBs fills the screen; scroll up a bit and it should say “reboot required” or similar.
  6. Reboot the host and it should be upgraded. Once the host restarts, the CVM will power on and re-join the Nutanix cluster. Make sure everything is healthy with a cluster status check from any CVM (see the example below) before repeating the procedure on the next ESXi host.
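To check cluster health, run cluster status from any CVM. Piping it through grep to hide healthy services is just a convenience I use, not anything official:

nutanix@cvm$ cluster status
nutanix@cvm$ cluster status | grep -v UP

Every CVM should report its services as UP before you touch the next host.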

Do the same sort of thing for ESXi 5.5 upgrades of course – change the filenames/depot files/profile name as appropriate. Remember to fully check your hardware model and the other Nutanix-specific requirements in the upgrade guide before kicking off the upgrade, or things may end badly for you :)
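For example, a 5.5 Update 1 run would look something like this – the bundle and profile names here are my best recollection for that release, so confirm the profile name with the sources profile list command from step 4 rather than trusting me:

esxcli software profile update --depot=[NFS1]update-from-esxi5.5-5.5_update01.zip --profile=ESXi-5.5.0-20140302001-standard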

The Nutanix support team is happy to walk you through the above as well.

ACS 5.x Database Purge

I received an alert:

Purge is successful. The size of records present in view data base is 22.58 GB. The physical size of the view data base on the disk 96.1 GB. If you want to reduce the physical size of the view data base, run acsview-db-compress command from acs-config mode through command line.

Use the acsview-db-compress command to reduce the view database file size. This command compresses the ACS View database by rebuilding each table and releasing the unused space. As a result, the physical size of the database is reduced.

Ok so time to fix the database during a maintenance window.

acsadmin(config-acs)# acsview-db-compress
You chose to compress ACS View database. This operation will take more time if the size 
of the database is big. During this operation, ACS services will be stopped. Services will 
be started automatically when the compression is over. Do you want to continue (y/n)?  y

Please wait till ACS services come back after the view db is compressed. Refer ADE.log 
for more details about the view db compress.
admin#
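Since the prompt points you at ADE.log, you can try tailing it from the CLI while you wait. I have not verified the exact path on ACS 5.x – the command below is what works on other ADE-OS based products, so treat it as a starting point and check your version’s command reference:

admin# show logging system ade/ADE.log tail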

How long to wait? Who knows – so I decided to jump in, run it and see. I pressed ‘y’ (as above) and waited. I could not find any command that shows status or progress, so I had to rely on Nagios and the following command only.

admin# show application status acs

Application initializing...
Status is not yet available.
Please check again in a minute.

Yes ok – thanks Cisco!… Anyway, 3 hours 20 mins later:

admin# show application status acs

ACS role: PRIMARY

Process 'database'                  running
Process 'management'                running
Process 'runtime'                   running
Process 'adclient'                  running
Process 'view-database'             running
Process 'view-jobmanager'           running
Process 'view-alertmanager'         running
Process 'view-collector'            running
Process 'view-logprocessor'         running

…so cleaning up roughly 73 GB of whitespace (the 96.1 GB physical file minus the 22.58 GB of actual records) took about 3 hours 20 minutes – call it somewhere around 20-25 GB an hour. Plan your outage window accordingly.

VLAN groups for FWSM and ACE on 6513

With the data centre move, I took the opportunity to clean up some of the 6513 config which had got out of control.

The original groupings looked all over the place:

svclc vlan-group 1 27,28,40,76
svclc vlan-group 2 44,45,48
svclc vlan-group 42 42
svclc vlan-group 43 43
svclc vlan-group 400 402
svclc vlan-group 427 427,428
svclc vlan-group 500 527,528,544,545,548
svclc vlan-group 600 612,614,618,620,621,622,623,624,625,626,628,632,636,699
svclc vlan-group 602 602
svclc vlan-group 700 720,721,724,725,732
svclc vlan-group 990 996,997
svclc vlan-group 999 9,999
svclc module 2 vlan-group 2,43,400,427,500,600,602,700,999,
firewall module 1 vlan-group 1,42,427,428,600,990,999,

…YUK. However, I decided to break the VLANs visible to the FWSM and ACE into groups with more meaning; specifically:

Group 1 = specific to FWSM
Group 2 = common to both
Group 3 = specific to ACE

In the end the config gets cleaned up and looks like:

svclc vlan-group 1 27,28,40,42,76,612,614,618,622,623,626,628,636,699,996,997
svclc vlan-group 2 9,427,428,620,621,624,625,632,999
svclc vlan-group 3 43,44,45,48,402,527,528,544,545,548,602,720,721,724,725,732

firewall module 1 vlan-group 1,2
svclc module 2 vlan-group 2,3

…much better! A few other commands that got me out of a few dramas:

firewall autostate
firewall multiple-vlan-interfaces
svclc autostate
svclc multiple-vlan-interfaces

(The autostate commands make the supervisor pass VLAN interface state down to the modules, so the FWSM and ACE can detect when the last access link in a VLAN goes down.)
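To sanity-check the final mappings, the usual show commands do the job (output format varies by IOS version):

show firewall vlan-group
show firewall module
show svclc vlan-group
show svclc module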