Category: VMware

vCSA Automated Backup Failure

Recently we have gone through the process of upgrading our Windows 6.0 vCenter Server with external SQL to vCSA 6.5. I must say how smooth the entire process was from start to finish; VMware have really done themselves proud with that tool. Our environment isn’t huge, but it is big enough that we thought we might see problems – but no!

Part of the migration work was to get backups up and running as they were with our Windows vCenter (if slightly different/better). My understanding is that the supported method for backup is to use the VAMI interface and run a full “file dump” backup of the vCSA, which you can restore into any freshly deployed blank vCSA to get back in the game. We have a Rubrik for snapshotting, but using the VMware method is of course supported and preferred.

The Issue

Upon using the VMware provided Bash Script we encountered the following error in the backup.log file that is produced:

{"type":"com.vmware.vapi.std.errors.unauthenticated","value":{"messages":[{"args":[],"default_message":"Unable to authenticate user","id":""}]}}

Further investigation surfaced more errors in the VAPI endpoint log.

We could run a manual backup from the VAMI interface as the root user, just not using the bash script, which essentially uses the VAMI API to curl a request to run a backup. The error above seemed related to “” and being unable to validate the signing chain signature. Without further help, there was no way I was going to modify or poke around in that script on my own on a now-Production vCSA.

I also created a separate master user in the @vsphere.local domain to test running the backups, but still had no luck.

I ran the script manually and the problem occurred at the start of the POST to the appliance’s REST API.
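For context, what the script does at that point can be sketched as two curl calls against the appliance’s documented REST endpoints. This is only an illustration of the flow, not the VMware script itself – the hostname, credentials and FTP target below are all placeholders, and the live calls are left commented out:

```shell
# Sketch of the two REST calls the VMware-provided backup script wraps.
# All values here are placeholders, not taken from our environment.
VCSA="vcsa.example.com"
SSO_USER="administrator@vsphere.local"

# 1) Authenticate against the CIS session endpoint to obtain a session
#    token (this is the step that failed with "Unable to authenticate user"):
#   TOKEN=$(curl -sk -u "$SSO_USER" -X POST \
#     "https://$VCSA/rest/com/vmware/cis/session" | awk -F'"' '{print $4}')

# 2) Build the backup job payload; "parts" selects what to dump
#    ("seat" = stats, events, alarms and tasks), FTP target is illustrative:
PAYLOAD='{"piece":{"location_type":"FTP","location":"ftp://backup.example.com/vcsa","location_user":"ftpuser","location_password":"secret","parts":["seat"]}}'

# 3) Submit the job with the session token as a header:
#   curl -sk -X POST -H "vmware-api-session-id: $TOKEN" \
#     -H "Content-Type: application/json" -d "$PAYLOAD" \
#     "https://$VCSA/rest/appliance/recovery/backup/job"
echo "$PAYLOAD"
```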

The Fix

After speaking with several smart people in the vExpert Slack channel, I raised a case with VMware support. I eventually received a response telling me to edit the following file:

There is a value that needed changing from:

To the following:

Be careful with the amendment: the code is space-indented, and the new line must be indented by exactly eight spaces.

Then a simple stop and start of the applmgmt service to apply the fix:
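The stop/start itself is just the appliance’s service-control tool from the bash shell – sketched here as a function, since it only makes sense to run on the vCSA itself:

```shell
# Run from the vCSA bash shell; service-control is the appliance's
# built-in service manager. Wrapped in a function so nothing runs by
# accident outside the appliance.
restart_applmgmt() {
  service-control --stop applmgmt
  service-control --start applmgmt
}
# restart_applmgmt   # uncomment on the appliance only
```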

Now the script runs perfectly every day to our backup repository. I believe this might become defunct in vSphere 6.7, as I think there is now a GUI way of scheduling backups!

vCSA 6.5 High Availability Configuration Error

Recently I have been experimenting with configuring the built-in vCSA 6.5 HA functionality. After reading the documentation found here, I set about the task of configuring a basic HA deployment.

The error I saw upon completing the wizard was:

“A general system error occurred: Failed to run pre-setup”.

Unfortunately, there wasn’t much to go on in the vCenter logs via the web GUI, so it was time to SSH into the vCSA and go digging for logs with a little more information. After a brief meander, I found the following log.

The interesting contents of the log were spat out as follows:

Looking at the log, it seemed that insufficient privileges were given to the user trying to create the vcha user (root!). I then remembered the recent issues VMware have had with Photon and root passwords expiring after 365 days. I logged into the VAMI for the vCSA and tried to reset the password, but was given an error.

The fix, in this case, was simply to reset the root password via the bash shell.

At this point I was able to log in with the new password, then log in to the VAMI and set the root password to never expire. You can also do it via the command line using the “chage” command on the root user.
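A sketch of that command-line route, assuming the standard Linux passwd and chage tools available in the appliance’s bash shell:

```shell
# Run as root from the vCSA bash shell. Wrapped in a function because it
# changes system state - only run it on the appliance itself.
fix_root_password_expiry() {
  passwd root        # reset the expired password (interactive prompt)
  chage -M -1 root   # maximum password age of -1 = never expire
  chage -l root      # print the ageing policy to verify the change
}
# fix_root_password_expiry   # uncomment on the appliance only
```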

After restarting the deployment the pre-checks ran successfully and the configuration continued!

Hopefully this might help someone who is trying to do something similar!

VMworld USA 2017 – Wednesday Breakdown

Day three at VMworld was a bit of a slow start for me – the Rubrik party was a late one, and with no keynote I decided to rest up a little and try to save my energy.

Hanging out in the community areas, which is the best part of the event, was high on the agenda. Early in the day we swung by to see our favourite CloudCred lady, Noell Grier. I gave her a bit of a hand doing some “booth babe” duty whilst Rob Bishop collected the GoPro 5 he won for completing a CloudCred challenge! Noell is an awesome lady, and if you aren’t familiar with CloudCred then you should go to the site, sign up, follow her on Twitter and get on it!

The main highlight of Wednesday for me was heading to the customer party. Thanks to #LonVMUGWolfPack shenanigans, Gareth Edwards, Rob Bishop and I ended up wearing some very jazzy VMware code t-shirts. The concert was a blast and we had a great time – I really enjoyed Blink 182, despite not being allowed on the main floor. Here are some pics of the event:

(Credit to Gareth for some of these pictures, thanks dude!)

NSX Performance
Speaker: Samuel Kommu

Samuel starts with a show of hands, and it seems that most of the audience are on dual 10GbE for their ESXi host networking.

NSX Overview of overlays
There is not much difference between VXLAN-encapsulated and original Ethernet frames – only the VXLAN header is extra.
With the Geneve frame format there is an additional options field (with a length) that specifies how much extra data can be packed into the header. This is interesting, as you can pack extra information within it, which then helps capture information on particular flows or packets.

Performance tuning
Parameters that matter – an MTU mismatch is a pain to try and figure out. There are two places you can set it: on the ESXi host and at the VM level. From a performance perspective, the MTU on the host doesn’t matter unless you change it at the VM level too.

There is a good chance that changing the MTU will change the performance of your systems. The advice is to change the MTU to the recommended values: the ratio of headers to payload goes down, so you are getting more for your money.

The vDS MTU sets the host MTU, as that is what the host is connected to. The underlying physical network needs the same MTU setting too. Fairly standard stuff, but important to check and consider.
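A quick way to sanity-check this from an ESXi shell – the vSwitch and NIC listings show the configured MTU, and a don’t-fragment vmkping proves the physical path end to end (the target IP is a placeholder; 8972 bytes of ICMP payload plus 28 bytes of headers makes a full 9000-byte frame):

```shell
# Example MTU checks from an ESXi shell. Addresses are placeholders;
# wrapped in a function so it only runs when called on a host.
check_jumbo_mtu() {
  esxcli network vswitch standard list   # shows MTU per standard vSwitch
  esxcli network nic list                # shows MTU per physical uplink
  vmkping -d -s 8972 192.168.10.2        # DF-bit ping: fails if any hop has MTU < 9000
}
```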
Optimisations on TCP/IP: sending a large payload without spending CPU cycles is TSO (TCP Segmentation Offload). The chopping-up of a 1MB file being sent, for example, doesn’t happen within the system – it happens on the pNIC.

With ESXi 6.5 they have brought in LRO in software, rather than having it only in the physical hardware. It is now possible to leverage LRO without the physical capability on NSX 6.5.
When RSS is enabled:
– The network adapter has multiple queues to handle receive traffic
– A 5-tuple based hash gives optimal distribution across the queues
– A kernel thread per receive queue helps

Rx/Tx filters
– Use inner packet headers to queue traffic

Native driver – vmklinux driver data gets translated into vmkernel data structures. A native driver decreases the translation between the two, meaning fewer CPU cycles used.

Each vNIC now has its own queue down to the pNIC, rather than sharing the same queue. This scales throughput accurately through to the pNIC. It is also now possible to have multiple queues per single vNIC to pNIC.

Compatibility guide

The HCL is the obvious place to start: check versions to ensure they are all correct and in support. You can then select the right settings to get the latest, correct drivers to download and install on your hosts.

Traffic Processes
Traffic flows: E/W and N/S. E/W means logical switch communication within the same switch to other VMs. This is usually where the most traffic is; smaller amounts go out as N/S traffic, which also goes through the NSX Edge.

Long flows:
– Designed to maximise bandwidth
– Logs
– Backups
Short flows:
– Databases, specifically in-memory ones or cache layers.

Small packets:
– Keep alive messages

Not all tools are able to test the latest optimisations. Make sure the tools are right for the job – application-level testing is often best, but be aware.
Fast Path
When a packet comes in as a new flow, it triggers different actions depending on the header. This happens throughout the entire stack, regardless of E/W or N/S traffic.

When new flows of a similar type are seen, fast path disregards the per-flow actions and fast-tracks them to the destination, with no hash table lookup. This is for a cluster of packets that arrive together: the flow is hashed and then sent via fast path, using around 75% fewer CPU cycles.

The session got quite deep at times and went way further than my limited NSX experience could take me. I’m also not a network admin by day, so if there are any mistakes in my notes I’ll correct them as I go.

VMworld USA 2017 – Tuesday Breakdown

Tuesday starts with excitement at VMworld 2017 as the keynote begins…


Pat Gelsinger and Michael Dell take the stage for Day 2, the crowds are in anticipation of a great session.

Pat opens up by talking about support and GSS. In recent years it has been the opinion of some that support has been an issue and that standards might not be as high as in previous years. Pat states that he is committed to being the best technology partner, and hopefully this will drive change from the top down through VMware to improve this area. This is fantastic news!

Michael gives his thoughts on machine learning and quantum computing. He talks about the sheer number of devices available now and the IoT trend. Data is growing at an exponential rate, and if we are able to overlay computer science and machine intelligence onto this data, we reach a tremendous age of humans and computers working together for some great possibilities. He believes we are at the dawn of this era.

Pat comes out with a classic line summarising this topic of conversation:

“Today is the slowest day of technical evolution in the rest of your life”.

Pat and Michael Dell have a great rapport – this much is clear from their discussions on stage. There is a small amount of banter between them which gets the crowd laughing. It’s moments like these that make the event more enjoyable to watch, as they show that these are just guys who are passionate about technology, not just multi-billion-dollar CEOs.

VMware and Pivotal are also announced as platinum partners of the Cloud Native Computing Foundation.

Later, Ray O’Farrell joins the stage to big applause – he is a people’s favourite. He jokes that VMworld this year is a bit like a rock concert. He asks the question: how do we build the products that VMware puts forward for us, the customers? The main principles behind this focus are:
1) Customers can take advantage of the most modern infrastructure available.
2) Being pragmatic about how we consume the technologies. Robust, quality delivery.
3) New consumption models and how things can be delivered “as a service”.
4) Developer friendliness, allow devs to leverage the infrastructure and applications via code.

Ray demonstrates using a fictitious company, “Elastic Sky Pizza”, which is using Project Leonardo to push the company forward with digital transformation. The question: how can this company use products from VMware to help that transformation?

The answer is VMware Cloud Infrastructure, a unified SDDC platform – a dive into how this complete stack of technology delivers digital transformation for the business. The most impressive thing to me in the demonstration that follows is the VMware AppDefense product. It is also nice to see end-to-end use of the entire product stack. I highly recommend that people catch up on this day-two keynote!


The rest of my day after the keynote was spent catching up with fellow community members and wandering around the Solutions Exchange. I met up with my buddy and fellow @LonVMUG member Gareth Edwards, and we had a flourish of creativity to try and win the Turbonomic competition. As luck would have it, I was judged the most creative of the day and won an Nvidia Shield. Thank you so much!

Here are my winning entries and me collecting my prize.

In the evening we went to the Pinball Hall of Fame for the vExpert party. It was great fun to chat with fellow dedicated community members! We had several drinks, some excellent food, and played some old- and new-school pinball and video games!

The final end (or start) to the evening was attending the Rubrik Party at the Cosmopolitan Hotel/Casino. Gareth and I attended with our white VIP wristbands and went into a fully booked night club for the evening to watch Ice Cube and mingle!

A great end to the evening (at 4:30am)! Thanks to all who were out – especially a big shout out to my main man Eric Lee, who was on fire the entire evening. Such fun times, and entirely the reason I love being a vExpert and part of the VMware community.

VMworld USA 2017 – vSphere 6.5 Storage Deepdive

So, you want to learn about storage… Who better to drop some knowledge on you than two community legends, Cormac Hogan and Cody Hosterman?

As soon as I saw the session in the content catalogue, I knew it was something I’d be attending! Here are my notes on an excellent breakout session.

A Deep Dive into vSphere 6.5 Core Storage Features and Functionality [SER1143BU]

After a quick intro, Cormac and Cody broke the content down into several sections.


ESXi hosts now support:
– 2k paths (doubled)
– 512 devices (doubled)
– 512e Advanced Format Device Support.

If your storage presents 512n, VMFS defaults to version 5, whereas 4Kn or 512e devices default to VMFS 6.

DSNRO (Disk.SchedNumReqOutstanding), the hypervisor’s per-device queue depth limit, was previously capped at 256 but now matches the HBA device queue depth.
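As a sketch, the per-device value can be checked and raised with esxcli (the naa identifier is a placeholder):

```shell
# Run from an ESXi shell; the naa. identifier is a placeholder. Wrapped
# in a function so nothing runs outside a host.
show_and_set_dsnro() {
  # "No of outstanding IOs with competing worlds" in this output is DSNRO:
  esxcli storage core device list -d naa.0123456789abcdef
  # Raise it per device - the value should not exceed the HBA queue depth:
  esxcli storage core device set -d naa.0123456789abcdef -O 256
}
```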


VMFS-6 has two new internal block sizes: small file block (SFB) and large file block (LFB), 1MB and 512MB respectively. These dictate how a VMFS-6 disk grows. Thin disks are backed by SFBs; thick disks are allocated with LFBs, which also includes vswap files. As a result, the power-on/creation of eager zeroed thick VMs is much quicker.

The system resource files are now extended dynamically for VMFS-6. These were previously static. If the file-system exceeds resources, the resource files can be extended to create more. This now supports millions of files assuming free space on the volume.

Journals are used for VMFS when performing metadata updates on the file-system. Previous versions used regular file blocks for journalling. In VMFS-6 they are tracked separately as system resource files and can be scaled easily as a result.

VM-based block allocation affinity is now a thing! Resources for a VM (blocks, file descriptors) used to be allocated on a per-host basis (host-based). This caused lock contention issues when they were created on one host and the VM was then migrated to another host.

New hosts would allocate blocks to the VM/VMDK, which used to cause locks and contention on the resource files. VM-based block allocation decreases this resource lock contention.

Hot extend support now enables growing VMDKs larger than 2TB whilst the VM remains powered on. Previously, a power-off was required. This is a vSphere 6.5 feature, so as soon as your host is updated it will be able to perform this task on VMFS-5/6 volumes. Massive win!

Finally, the only slight caveat for VMFS-6 is that you have to decommission your old datastore, migrate the data, and reformat to the new file-system. There is no in-place upgrade – however, thanks to SDRS it’s not the end of the world, right?


ATS miscompare handling (Atomic Test & Set) unlocks the scalability of VMFS.

Every so often there used to be a miscompare: between the test and the set there is a comparison against the previous test-and-set operation. Sometimes the timeout was too low and the storage array would take time to respond, which confused the host-versus-storage comparison and caused issues. More often than not, the issues that arose were down to a false miscompare, meaning there was no real issue – just potentially some high latency. These small issues occurred anywhere between vSphere, the storage array and the HBA.

The good news is that now, in the event of a false miscompare, VMFS will not immediately abort IOs. It retries the ATS heartbeat after a short interval (less than 100ms), meaning a smoother process all round.

UNMAP was introduced in 5.0. It enables the host to tell the array that blocks have been moved or deleted on the back-end of a thin-provisioned datastore. This allows the array to reclaim the freed blocks, freeing up space.

Automatic UNMAP is back for VMFS-6. Hosts asynchronously reclaim space by running an UNMAP crawler mechanism. This does require a 1MB reclaim block size on the back-end array. The process is automatic and can take 12-24 hours to fully reclaim the space.
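A sketch of inspecting and tuning that crawler per datastore with esxcli (the datastore label is a placeholder):

```shell
# Run from an ESXi 6.5 shell; "Datastore01" is a placeholder label.
# Wrapped in a function so nothing runs outside a host.
vmfs6_unmap_config() {
  # Show the current automatic UNMAP settings for the volume:
  esxcli storage vmfs reclaim config get --volume-label=Datastore01
  # "low" is the default priority; "none" disables automatic UNMAP:
  esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low
}
```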

In-guest UNMAP was introduced in 6.0. A thin-provisioned guest was able to inform the host of dead space within its file-system, which would then be UNMAPped down to the array.

Linux guest support has now been introduced, thanks to the introduction of SPC-4 support in vSphere 6.5.


Storage policy based management, also known as VAIO filters.

It is a common framework that allows storage and host-related capabilities to be consumed via policies.

Rules are built for data services provided by hosts, such as “I want a VM on storage with dedupe but no encryption, with X availability requirement”. These rule sets can be applied to a provisioned VM, which then receives the policies and rules.

Two new I/O filters have shipped with 6.5: VM Encryption and Storage I/O Control settings.

VM Encryption requires an external KMS. The encryption mechanism is implemented in the hypervisor, making it guest-OS agnostic. It doesn’t just encrypt the VMDK, but the VM home directory contents too (vmx files, etc.).

NFS 4.1

VAAI-NAS primitives have been improved. It is now possible to offload storage tasks to the back-end array.

IPv6 support has been added for NFS 4.1 Kerberos.


iSCSI now supports having the initiator and target residing in different subnets WITH port binding. Apparently VMware had many customers wanting to achieve this!

VMware now supports UEFI iSCSI boot on Dell 13th gen servers with dual NICs.


There is a new virtual NVMe controller option under the hard disk options in the guest VM – a new HBA for all-flash storage.

It supports the NVMe specification v1.0e mandatory admin and I/O commands.

Interoperability for this exists with all vSphere features except SMP-FT.

That is the end of my notes on the session. I highly recommend that anyone come back and re-watch this session when it is publicly available. I had a quick chat with the guys after the session, along with many others, to try and soak up any additional knowledge they had on offer. They were also nice enough to let me grab a picture with them for the blog. Thanks again for an excellent session!