VMworld USA 2017 – vSphere 6.5 Storage Deepdive

So, you want to learn about storage… Who better to drop some knowledge on you than two community legends, Cormac Hogan and Cody Hosterman?

As soon as I saw the session in the content catalogue, I knew it was something I’d be attending! Here are my notes on an excellent breakout session.

A Deep Dive into vSphere 6.5 Core Storage Features and Functionality [SER1143BU]

After a quick intro, Cormac and Cody broke the content down into several sections.

Limits

ESXi hosts now support:
– 2k paths (doubled)
– 512 devices (doubled)
– 512e Advanced Format Device Support.

If your storage presents 512n sectors, new datastores default to VMFS-5, whereas 4Kn or 512e devices default to VMFS-6.

The DSNRO (Disk.SchedNumReqOutstanding) hypervisor queue depth limit used to be capped at 256, but it now matches the HBA device queue depth.

VMFS-6

Has two new internal block sizes: the small file block (SFB) and the large file block (LFB), 1MB and 512MB respectively. These dictate how a VMFS-6 file grows. Thin disks are backed by SFBs, while thick disks are allocated with LFBs, and that also includes vswap files. As a result, the power-on/creation of eager zeroed thick VMs is much quicker.
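Purely to illustrate those two block sizes, here is a rough Python sketch of how a disk's size could translate into SFBs or LFBs. The helper and the clean thick/thin split are my own simplification for the example, not how VMFS-6 actually lays files out on disk.

# Illustrative only: the 1MB SFB / 512MB LFB figures come from the session,
# but this helper is a hypothetical simplification of the allocation logic.
SFB_MB = 1      # small file block
LFB_MB = 512    # large file block

def blocks_needed(vmdk_size_gb, thick=True):
    """Thick disks (and vswap) are backed by LFBs, thin disks by SFBs."""
    size_mb = vmdk_size_gb * 1024
    block_mb = LFB_MB if thick else SFB_MB
    return -(-size_mb // block_mb)  # ceiling division

print(blocks_needed(100, thick=True))   # 200 LFBs for a 100GB eager zeroed thick disk
print(blocks_needed(100, thick=False))  # 102400 SFBs if a thin disk ever fully inflates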

The system resource files are now extended dynamically for VMFS-6; these were previously static. If the file system runs out of resources, the resource files can be extended to create more. This supports millions of files, assuming free space on the volume.

Journals are used for VMFS when performing metadata updates on the file-system. Previous versions used regular file blocks for journalling. In VMFS-6 they are tracked separately as system resource files and can be scaled easily as a result.

VM-based block allocation affinity is now a thing! Resources for a VM (blocks, file descriptors) used to be allocated on a per-host basis (host-based). This caused lock contention when they were created on one host and the VM was then migrated to another host.

The new host would allocate blocks to the VM/VMDK, which used to cause locks and contention on the resource files. VM-based block allocation reduces this resource lock contention.

Hot extend support now enables VMDKs larger than 2TB to be grown while the VM remains powered on. Previously a power-off was required. This is a vSphere 6.5 feature, so as soon as your host is updated it can perform this task on VMFS-5/6 volumes. Massive win!
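For a rough idea of what that looks like when scripted, here is a minimal pyVmomi sketch of growing a virtual disk on a powered-on VM. It assumes you already have a connected service instance and a vm object; the function name is mine, not an official sample.

from pyVmomi import vim

def grow_first_disk(vm, new_capacity_gb):
    """Hot extend the VM's first virtual disk; no power-off needed on 6.5."""
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.capacityInKB = new_capacity_gb * 1024 * 1024  # new size in KB

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))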

Finally, the only slight caveat for VMFS-6 is that you have to decommission your old datastore, migrate the data off, and reformat with the new file system. There is no upgrade option; however, thanks to SDRS it's not the end of the world, right?
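If you are scripting the migration rather than leaving it to SDRS, a Storage vMotion per VM is one way to empty the old datastore. This is a minimal pyVmomi sketch assuming vm and target_ds (the new VMFS-6 datastore) have already been looked up.

from pyVmomi import vim

def move_to_new_datastore(vm, target_ds):
    """Relocate all of the VM's files onto the freshly formatted datastore."""
    spec = vim.vm.RelocateSpec(datastore=target_ds)
    return vm.RelocateVM_Task(spec)  # runs as a vCenter task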

VAAI

ATS (Atomic Test & Set) miscompare handling unlocks the scalability of VMFS.

Every so often there used to be a miscompare: between the test and the set, there is a comparison against the previous test-and-set operation. Sometimes the timeout was too low and the storage array would take time to respond, which confused the host/storage comparison and caused issues. More often than not, the issues that arose were down to a false miscompare, meaning there was no real problem, just potentially some high latency. These small issues occurred anywhere between vSphere, the storage array and the HBA.

The good news is that now, in the event of a false miscompare, VMFS will not immediately abort I/Os. It retries the ATS heartbeat after a short interval (less than 100ms), meaning a smoother process all around.
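The retry behaviour is easier to picture as a sketch. This is illustrative pseudocode only, with made-up function names, and not VMFS internals.

import time

RETRY_INTERVAL_S = 0.1  # the real retry interval is under 100ms

def ats_heartbeat_with_retry(send_ats, max_retries=3):
    """Retry the ATS heartbeat briefly before treating it as a real failure."""
    for _ in range(max_retries):
        if send_ats():                 # test-and-set against the on-disk heartbeat
            return True                # compare matched, heartbeat refreshed
        time.sleep(RETRY_INTERVAL_S)   # likely a false miscompare or a slow array
    return False                       # only now fall back to aborting I/Os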

UNMAP was introduced in 5.0. It enables the host to tell the back-end array that blocks on a thin-provisioned datastore have been moved or deleted, allowing the freed blocks to be reclaimed and the space freed up.

Automatic UNMAP is back for VMFS-6. Hosts asynchronously reclaim space by running an UNMAP crawler mechanism. This does require a 1MB reclaim block size on the back-end array. The process is automatic and can take 12-24 hours to fully reclaim the space.

In-guest UNMAP was introduced in 6.0. A guest on a thin-provisioned disk is able to inform the host of dead space within its file system, which is then UNMAPped down to the array.

Linux guest support has now arrived thanks to the introduction of SPC-4 support in vSphere 6.5.
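In practice this means a Linux guest sitting on a thin disk can simply run fstrim. Here is a minimal sketch of wiring that up from Python; it assumes the guest sees the disk as thin and the mount point supports discard.

import subprocess

def trim_mountpoint(path="/"):
    """Ask the filesystem to report its free blocks so the host can reclaim them."""
    # fstrim -v prints how many bytes were trimmed on the given mount point
    result = subprocess.run(["fstrim", "-v", path],
                            check=True, capture_output=True, text=True)
    return result.stdout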

SPBM

Storage Policy Based Management, which ties in closely with VAIO filters.

It is a common framework that allows storage and host-related capabilities to be consumed via policies.

Rules are built for data services provided by hosts, such as “I want a VM on storage with dedupe but no encryption, and X availability requirement”. These rule sets can be applied to a VM at provisioning time, and it receives the policies and rules.
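To make that concrete, here is a purely illustrative representation of such a rule set. The key names are invented for the example and are not the real SPBM capability schema.

gold_policy = {
    "name": "gold-no-encryption",
    "rules": {
        "deduplication": True,     # "storage dedupe"
        "encryption": False,       # "but no encryption"
        "availability": "RAID-1",  # the "X availability requirement"
    },
}
# At provisioning time the VM is tagged with this policy and inherits its rules.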

Two new I/O filters ship with 6.5: VM Encryption and Storage I/O Control settings.

VM Encryption requires an external KMS. The encryption mechanism is implemented in the hypervisor, making it guest OS agnostic. It doesn’t just encrypt the VMDK but the VM home directory contents too (vmx files, etc).

NFS 4.1

VAAI-NAS primitives have been improved. It is now possible to offload storage tasks to the back-end array.

IPv6 support has been added for NFS 4.1 Kerberos.

iSCSI

Now supports having the iSCSI initiator and target reside in different subnets WITH port binding. Apparently VMware had many customers wanting to achieve this!

VMware now supports UEFI iSCSI boot on Dell 13th gen servers with dual NICs.

NVMe

A new virtual NVMe controller option is available for hard disks in the guest VM, a new HBA for all-flash storage.
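For the curious, adding that controller programmatically looks something like the hedged pyVmomi sketch below; it assumes the VirtualNVMEController class is available in your pyVmomi version, that bus number 0 is free, and that vm is already looked up.

from pyVmomi import vim

def add_nvme_controller(vm):
    """Attach a virtual NVMe HBA (new in vSphere 6.5) to the VM."""
    ctrl = vim.vm.device.VirtualNVMEController(busNumber=0)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=ctrl)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))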

Supports the NVMe specification v1.0e mandatory admin and I/O commands.

Interoperability for this exists within all vSphere features except SMP-FT.

That is the end of my notes on the session. I highly recommend coming back to re-watch this session when it is publicly available. I had a quick chat with the guys after the session, along with many others, to try and soak up any additional knowledge they had on offer. The guys were also nice enough to let me grab a picture with them for the blog. Thanks again for an excellent session!
