Migrating ESXi Management VMkernel

I have been doing a fair amount of work with NSX recently. Before starting that work we had some environment changes to get through, one of which was to the network that contains the VMkernel interfaces for host management traffic. The overall aim was to migrate those interfaces to a new management VLAN (new subnet, gateway, etc.).

Here is how I managed to do it without disrupting existing management access or any running services.

1) The first step was to create a port group on my vDS for the new management VLAN, which had already been trunked to the hosts.
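
For reference, here is a rough PowerCLI sketch of this step. The vCenter FQDN, vDS name "vDS01", port group name "Management-VLAN200" and VLAN ID 200 are all placeholders for your own environment.

Connect-VIServer -Server "vcsa.lab.local"   # placeholder vCenter FQDN

# Create the new management port group on the existing vDS
New-VDPortgroup -VDSwitch (Get-VDSwitch -Name "vDS01") `
                -Name "Management-VLAN200" `
                -VlanId 200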




I would advise configuring the port group further for your environment based on VMware networking best practices for things like traffic shaping, teaming/failover, etc.

2) Now that the port group exists, add a new management VMkernel adapter to each of your hosts. I ended up with three vmks: old management (vmk0), vMotion (vmk1) and new management (vmk2).
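
Again as a hedged sketch rather than the exact commands I ran, the PowerCLI equivalent per host looks something like this (host name, port group name and addressing are placeholders):

$vmhost = Get-VMHost -Name "esxi01.lab.local"        # placeholder host name

# New management VMkernel adapter on the vDS port group (this became vmk2 for me)
New-VMHostNetworkAdapter -VMHost $vmhost `
                         -VirtualSwitch (Get-VDSwitch -Name "vDS01") `
                         -PortGroup "Management-VLAN200" `
                         -IP "10.20.30.11" -SubnetMask "255.255.255.0" `
                         -ManagementTrafficEnabled:$true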


3) From here, I put the hosts I was going to reconfigure into maintenance mode, just to be on the safe side.
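
If you prefer to script this step too, something along these lines does the job (host name is a placeholder):

# Put the host into maintenance mode before reconfiguring it
Get-VMHost -Name "esxi01.lab.local" | Set-VMHost -State Maintenance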

4) At this point, it isn’t possible to remove the existing vmk0 because it is in use. The reason is that the host’s default TCP/IP stack still has the old VMkernel gateway configured. This should be changed to the new management network’s gateway address on each host:
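
One way to do this per host is via PowerCLI, as sketched below; the gateway 10.20.30.1 is a placeholder, and the host client or esxcli work equally well for the same change.

# Point the default TCP/IP stack at the new management gateway
$hostNetwork = Get-VMHostNetwork -VMHost (Get-VMHost -Name "esxi01.lab.local")
Set-VMHostNetwork -Network $hostNetwork -VMKernelGateway "10.20.30.1"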


5) From here, I disconnected the hosts from vCenter.


6) I then changed the DNS host records of my ESXi servers to their new management IP addresses and allowed some time for propagation (in fact, I checked from the vCSA appliance that it had picked up the newest records from my DNS servers).
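
A quick way to sanity-check the records from a Windows admin box is shown below; on the vCSA shell itself, nslookup does the same job. The host name is a placeholder.

# Confirm DNS now returns the new management IP for the host
Resolve-DnsName -Name "esxi01.lab.local" -Type A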

7) Reconnect the host(s) back into vCenter.


8) It is now possible to remove the old management VMkernel adapter (vmk0 in my case).
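
Scripted, the removal looks something like this (again, the host name is a placeholder):

# Remove the old management VMkernel adapter once the new one carries management traffic
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0" |
    Remove-VMHostNetworkAdapter -Confirm:$false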


9) I did follow through the process of rebooting my hosts before exiting maintenance mode, but I do not actually think it matters too much.

There we go! A fairly straightforward process, and one I can’t imagine many people needing to do. I did have a look to see if anyone else had performed a similar process, but they hadn’t moved subnet and gateway. Hopefully this helps someone out there who wants to do the same!

Rubrik – PowerShell/API SLA Backup & Restores

Having been lucky enough to procure a Rubrik Cloud Data Management appliance at work recently, we have had the pleasure of experiencing a fantastic technical solution which has helped us improve our backup/recovery and business continuity planning. The solution, for us, is still in its infancy, but we hope to scale and grow as the business realises the full potential of the service. Until then, we have had fun preparing it for production use, as it is such a joy to work with!

One thing we questioned was how to get a list of our SLA Domains (as we’ve made a fair few) and their contents. This could be useful if someone accidentally deleted policies, or machines out of policies. Another potential use case is needing to ‘rebuild’ our Brik’s SLA configuration in the event of a major failure – highly unlikely, but better to be prepared and to have committed some brain cycles to it, right?

With that in mind, my esteemed colleague @LordMiddlewick has written some PowerShell scripts with the help of @joshuastenhouse’s previous blog posts about using the Rubrik REST APIs.


Backup Script

This script can be scheduled to run at your own convenience. Ensure that you fill in the variables in the top section for your own environment. It is possible to encrypt the password within the file itself; this can be achieved using a methodology described here. For simplicity, in the case below we have only encrypted it for transmission to the Rubrik service.
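
The original script isn’t reproduced here, but to give a flavour of the approach, here is a rough sketch of my own (not @LordMiddlewick’s actual code). It assumes the Rubrik CDM v1 REST endpoints /api/v1/session, /api/v1/sla_domain and /api/v1/vmware/vm, a trusted cluster certificate, and placeholder names throughout; field names can differ between CDM versions, so check your cluster’s API explorer before trusting it.

# Rough sketch only - not the original script. Endpoints and field names are
# assumptions based on the Rubrik CDM v1 API; verify against your version.
$rubrik = "rubrik.lab.local"                       # placeholder cluster FQDN
$path   = "C:\RubrikBackup"                        # placeholder output folder
$cred   = Get-Credential                           # Rubrik API user
New-Item -ItemType Directory -Force -Path $path | Out-Null

# Authenticate and grab an API token
$pair    = "$($cred.UserName):$($cred.GetNetworkCredential().Password)"
$basic   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$session = Invoke-RestMethod -Uri "https://$rubrik/api/v1/session" -Method Post `
                             -Headers @{ Authorization = "Basic $basic" }
$headers = @{ Authorization = "Bearer $($session.token)" }

# Export every SLA domain to its own JSON-formatted .txt file
$slas = (Invoke-RestMethod -Uri "https://$rubrik/api/v1/sla_domain" -Headers $headers).data
foreach ($sla in $slas) {
    $sla | ConvertTo-Json -Depth 10 | Out-File (Join-Path $path "$($sla.name).txt")
}

# Export the VM-to-SLA mapping to VM-SLA.csv
$vms = (Invoke-RestMethod -Uri "https://$rubrik/api/v1/vmware/vm" -Headers $headers).data
$vms | Select-Object name, effectiveSlaDomainName |
    Export-Csv (Join-Path $path "VM-SLA.csv") -NoTypeInformation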

The key takeaways from this script, in whatever fashion you run it, are:

* You receive a .txt file, in JSON format, for each SLA you have defined. These are useful for restoring SLAs. Here is an example:

* Another takeaway is the file “VM-SLA.csv”, which contains a list of all your backed-up VMs and the policy each belongs to. This is really useful for restoring VMs into SLAs or bulk importing VMs into SLAs.

Restore SLA Domain Policies

To reverse the backup process and restore one or all of your SLAs into the Rubrik, use the following script:

This script will take any .txt files (SLA backups) in the designated $path and try to recreate them on your Rubrik.
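
Along the same lines, here is a hedged sketch of the restore side, reusing the variables from the backup sketch and assuming POST /api/v1/sla_domain accepts the exported body (system-generated fields such as id may need stripping on some CDM versions):

# Rough sketch - recreate SLA domains from the exported JSON .txt files.
# Reuses the $rubrik, $headers and $path variables from the backup sketch.
foreach ($file in Get-ChildItem -Path $path -Filter *.txt) {
    $body = Get-Content -Path $file.FullName -Raw
    Invoke-RestMethod -Uri "https://$rubrik/api/v1/sla_domain" -Method Post `
                      -Headers $headers -ContentType "application/json" -Body $body
}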

Restore/Import VMs to SLAs

The final part of this exercise is to restore the list of VMs that was pulled out, against the SLA domain policies that you have. The following script does this by using the above “VM-SLA.csv” to import a list of objects and assign them as per the CSV.

The format for the VM-SLA.csv file is as follows:
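
I won’t second-guess the exact columns of the original file, but an illustrative layout that matches what the backup sketch above exports would be (the VM and SLA names are made up):

name,effectiveSlaDomainName
"web01","Gold"
"sql01","Silver"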

In theory, if you have lots of machines you want to bulk-assign to given policies, you can create your own CSV and use this script to import your VM estate into your predefined policies. We used this several times when assigning 100+ objects to a given policy and it worked a treat! A sketch of the idea follows below.
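
For completeness, here is a hedged sketch of the bulk-assignment idea, assuming the illustrative CSV columns above and the PATCH /api/v1/vmware/vm/{id} endpoint with a configuredSlaDomainId body; check your CDM version’s API explorer before relying on it.

# Rough sketch - assign VMs to SLA domains from VM-SLA.csv.
# Reuses $rubrik, $headers and $path from the earlier sketches.
$slas = (Invoke-RestMethod -Uri "https://$rubrik/api/v1/sla_domain" -Headers $headers).data
$vms  = (Invoke-RestMethod -Uri "https://$rubrik/api/v1/vmware/vm"  -Headers $headers).data

foreach ($row in Import-Csv (Join-Path $path "VM-SLA.csv")) {
    $vm  = $vms  | Where-Object { $_.name -eq $row.name }
    $sla = $slas | Where-Object { $_.name -eq $row.effectiveSlaDomainName }
    if ($vm -and $sla) {
        $body = @{ configuredSlaDomainId = $sla.id } | ConvertTo-Json
        Invoke-RestMethod -Uri "https://$rubrik/api/v1/vmware/vm/$($vm.id)" `
                          -Method Patch -Headers $headers `
                          -ContentType "application/json" -Body $body
    }
}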

Disclaimer: Please fully read and understand the above scripts before implementing them. Test them thoroughly in a development environment before using them in any production sense. I/we do not take any responsibility for rogue administrators’ stupidity.

I’m sure Rubrik will continue to steam ahead with excellent releases; in fact they might even build in some of this functionality, making these scripts redundant. In the meantime, hopefully someone finds these scripts useful – I know we have. Once again, a big shout out to @LordMiddlewick for writing this and giving me permission to post it, and also to @joshuastenhouse for his blog https://virtuallysober.com .

VMworld USA 2017 – Wednesday Breakdown

Day three at VMworld was a bit of a slow start for me; the Rubrik party was a late one and there was no keynote, so I decided to rest up a little and try to save my energy.

Hanging out in the community areas, which is the best part of the event, was high on the agenda. Early on in the day we swung by to see our favourite CloudCred lady, Noell Grier. I gave her a bit of a hand doing some “booth babe” duty whilst Rob Bishop collected the GoPro 5 that he won for completing a CloudCred challenge! Noell is an awesome lady, and if you aren’t familiar with CloudCred then you should go to the site, sign up, follow her on Twitter and get on it!

The main highlight of Wednesday for me was heading to the customer party. Thanks to #LonVMUGWolfPack shenanigans, Gareth Edwards, Rob Bishop and I ended up wearing some very jazzy VMware code t-shirts. The concert was a blast and we had a great time; I really enjoyed Blink-182 despite not being allowed on the main floor. Here are some pics of the event:


(Credit to Gareth for some of these pictures, thanks dude!)

NSX Performance
Speaker: Samuel Kommu
#NET1343BU

Samuel starts with a show of hands, and it seems that most of the audience are on dual 10GbE for their ESXi host networking.

NSX Overview of overlays
There is not much difference between VXLAN-encapsulated frames and the original Ethernet frames; only the VXLAN header is extra.
With the Geneve frame format there is an additional options field (with a length) that specifies how much more data can be packed into the header. This is interesting because you can pack extra information within it, which helps capture information on particular flows or packets.

Performance tuning
Parameters that matter – an MTU mismatch is a pain to figure out. There are two places you can set it: at the ESXi host and at the VM level. From a performance perspective, the MTU on the host doesn’t matter unless you change it at the VM level too.

There is a good chance that if you change the MTU you will change the performance of your systems. The advice is to change the MTU to the recommended values. The reason is that the ratio of header overhead to payload goes down, so you are getting more for your money.

The vDS MTU sets the host MTU as that is what the host is connected to. The underlying physical network needs the same MTU setting too. Fairly standard stuff but important to check and consider.
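
As a quick illustration, checking and (carefully) changing the vDS MTU with PowerCLI looks like this; 9000 is just an example jumbo value, the vDS name is a placeholder, and the physical switch ports must be configured to match.

# Check the current MTU on the vDS, then raise it (physical network must match)
Get-VDSwitch -Name "vDS01" | Select-Object Name, Mtu
Get-VDSwitch -Name "vDS01" | Set-VDSwitch -Mtu 9000
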
Optimisations on TCP/IP: sending a large payload without spending CPU cycles is what TSO (TCP Segmentation Offload) is for. Chopping up a 1MB payload, for example, doesn’t happen in the host’s software stack; it happens on the pNIC.

With ESXi 6.5 they have brought in a software LRO (Large Receive Offload) implementation rather than relying on the physical hardware alone. It is now possible to leverage LRO without physical capability on NSX 6.5.
When RSS is enabled:
– Network adapter has multiple queues to handle receive traffic
– 5 tuple based hash for optimal distribution to queues
– Kernel thread per receive queue helps.

Rx/Tx filters
– Use inner packet headers to queue traffic

Native driver – with vmklinux drivers, driver data gets translated to VMkernel data structures. The native driver reduces this translation, meaning fewer CPU cycles are used.

Each vNIC now has its own queue down to the pNIC, rather than sharing the same queue. This scales throughput accurately through to the pNIC. It is also now possible to have multiple queues from a single vNIC to the pNIC.

Compatibility guide

The HCL is an obvious place to start with checking versions to ensure they are all correct and in support. It is then possible to select the right settings so that you can receive the latest and correct drivers to download and install onto your hosts.

Traffic Processes
Traffic flows: E/W and N/S traffic. E/W means logical switch communication within the same switch to other VMs. This is usually the largest amount of traffic; smaller amounts go out as N/S traffic, which also goes through the NSX Edge.

Long flows:
– Designed to max out bandwidth
– Logs
– Backups
– FTP
Short flows:
– Databases, specifically in-memory ones or cache layers.

Small packets:
– DNS
– DHCP
– TCP ACKs
– Keep alive messages

Not all tools are able to test the latest optimisations. Make sure the tools are right for the job; application-level testing is often best, but be aware.
Fast Path
When packets come in as a new flow, different actions are taken depending on the header. This happens throughout the entire stack, regardless of whether it is E/W or N/S traffic.

When new flows of a similar type are seen, fast path disregards the per-flow actions and fast-tracks packets to the destination without a hash table lookup. For a cluster of packets that arrive together, the flow is hashed and then sent via fast path, using around 75% fewer CPU cycles.

The session got quite deep at times and went way further than my limited NSX experience could take me. I’m also not a network admin by day, so if there are any mistakes in my notes I’ll correct them as I go.

VMworld USA 2017 – Tuesday Breakdown

Tuesday starts with excitement at VMworld 2017 as the keynote begins…

Keynote

Pat Gelsinger and Michael Dell take the stage for Day 2, the crowds are in anticipation of a great session.

Pat opens up by talking about support and GSS. In recent years it has been the opinion of some that support has been an issue and that standards might not be as high as in previous years. Pat states that he is committed to being the best technology partner, and hopefully this will drive change from the top down through VMware to improve this area. This is fantastic news!

Michael gives his thoughts on machine learning and quantum computing topics. He talks about the sheer number of devices available now and the IoT trend. Data is growing at an exponential rate, and if we are able to overlay computer science and machine intelligence on this data, we reach a tremendous age of humans and computers working together, with some great possibilities. He believes we are at the dawn of this era.

Pat comes out with a classic line summarising this topic of conversation:

“Today is the slowest day of technical evolution in the rest of your life”.

Pat and Michael have a great rapport; this much is clear from their discussions on stage. There is a small amount of banter between them which gets the crowd laughing. It’s moments like these that make the event more enjoyable to watch, as it shows they are just guys who are passionate about technology, not just multi-billion-dollar CEOs.

VMware and Pivotal are also announced as platinum partners of the Cloud Native Computing Foundation.

Later, Ray O’Farrell joins the stage to big applause; he is a people’s favourite. He jokes that VMworld this year is a bit like a rock concert. He asks the question: how do they build the products that they put forward for us, the customers? The main principles of this focus are:
1) Customers can take advantage of the most modern infrastructure available.
2) Being pragmatic about how we consume the technologies. Robust, quality delivery.
3) New consumption models and how things can be delivered “as a service”.
4) Developer friendliness, allow devs to leverage the infrastructure and applications via code.

Ray demonstrates using a fictitious company, “Elastic Sky Pizza”. They are using Project Leonardo to push the company forward with digital transformation. The question is: how can this company use products from VMware to help with this transformation?

The answer is VMware Cloud Infrastructure, a unified SDDC platform, and a dive into how this complete stack of technology delivers a full digital transformation for the business. The most impressive thing to me in the demonstration that follows is the VMware AppDefense product. It is also nice to see end-to-end use of the entire product stack. I highly recommend that people catch up on this day-two keynote!

Community

The rest of my day after the keynote was spent catching up with fellow community members and wandering around the Solutions Exchange. I met up with my buddy and fellow @LonVMUG member Gareth Edwards, and we had a flourish of creativity to try and win the Turbonomic competition. As luck would have it, I was judged the most creative of the day and won an Nvidia Shield. Thank you so much!

Here are my winning entries and me collecting my prize.

In the evening, we went to the Pinball Hall of Fame for the vExpert party. It was great fun to chat with fellow dedicated community members! We had several drinks, some excellent food and played some old/new school pinball and video games!

The final end (or start) to the evening was attending the Rubrik Party at the Cosmopolitan Hotel/Casino. Gareth and I attended with our white VIP wristbands and went into a fully booked night club for the evening to watch Ice Cube and mingle!

A great end to the evening (at 4:30am)! Thanks to all who were out, especially a big shout out to my main man Eric Lee, who was on fire the entire evening. Such fun times, and entirely the reason I love being a vExpert and part of the VMware community.

VMworld USA 2017 – vSphere 6.5 Storage Deepdive

So, you want to learn about storage… Who better to drop some knowledge on you than two community legends, Cormac Hogan and Cody Hosterman?

As soon as I saw the session in the content catalogue, I knew it was something I’d be attending! Here are my notes on an excellent breakout session.

A Deep Dive into vSphere 6.5 Core Storage Features and Functionality [SER1143BU]

After a quick intro, Cormac and Cody broke the content down into several sections.

Limits

ESXi hosts now support:
– 2k paths (doubled)
– 512 devices (doubled)
– 512e Advanced Format Device Support.

If your storage presents 512n sectors the VMFS default goes to version 5, whereas 4Kn or 512e devices default to VMFS-6.

The DSNRO (Disk.SchedNumReqOutstanding) hypervisor queue depth limit used to be capped at 256, but it now matches the HBA device queue depth.

VMFS-6

VMFS-6 has two new internal block sizes, small file block (SFB) and large file block (LFB), at 1MB and 512MB respectively. These dictate how a VMFS-6 disk grows. Thin disks are backed by SFBs; thick disks are allocated with LFBs – this also includes vswap files. Therefore the power-on/creation of eager zeroed thick VMs is much quicker.

The system resource files are now extended dynamically for VMFS-6. These were previously static. If the file-system exceeds resources, the resource files can be extended to create more. This now supports millions of files assuming free space on the volume.

Journals are used for VMFS when performing metadata updates on the file-system. Previous versions used regular file blocks for journalling. In VMFS-6 they are tracked separately as system resource files and can be scaled easily as a result.

VM-based block allocation affinity is now a thing! Resources for a VM (blocks, file descriptors) used to be allocated on a per-host basis (host-based). This caused lock contention issues when VMs were created on one host and then migrated to another.

The new host would allocate blocks to the VM/VMDK, and this used to cause locks and contention on the resource files. VM-based block allocation decreases this resource lock contention.

Hot-extend support now enables growing VMDKs beyond 2TB whilst the VM remains powered on. Previously a power-off was required. This is a vSphere 6.5 feature, so as soon as your host is updated it will be able to perform this task on VMFS-5/6 volumes. Massive win!
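
As an illustration, a hot extend past 2TB from PowerCLI is a one-liner; the VM name, disk name and new size are placeholders.

# Grow a VMDK beyond 2TB while the VM stays powered on (vSphere 6.5+)
Get-VM -Name "filesrv01" | Get-HardDisk -Name "Hard disk 2" |
    Set-HardDisk -CapacityGB 3072 -Confirm:$false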

Finally, the only slight caveat for VMFS-6 is that you have to decommission your old datastore, migrate the data and reformat to the new file-system. There is no in-place upgrade option; however, thanks to Storage DRS it’s not the end of the world, right?

VAAI

ATS (Atomic Test & Set) miscompare handling unlocks the scalability of VMFS.

Every so often there used to be a miscompare: between the test and the set there is a comparison against the previous test-and-set operation. Sometimes the timeout was too low and the storage array would take time to respond, which confused the host/storage comparison and caused issues. More often than not, the issues that arose were down to a false miscompare, meaning there was no real problem – just potentially some high latency. These small issues could occur anywhere between vSphere, the storage array and the HBA.

The good news is that now, in the event of a false miscompare, VMFS will not immediately abort I/Os. It retries the ATS heartbeat after a short interval (less than 100ms), meaning a smoother process all round.

UNMAP was introduced in 5.0. It enables the host to tell the back-end array that blocks on a thin-provisioned volume have been moved or deleted, allowing the array to reclaim the freed blocks and give back the space.

Automatic UNMAP is back for VMFS-6. Hosts asynchronously reclaim space by running UNMAP crawler mechanisms. This does require a 1MB reclaim block size on the back-end array. This is an automatic process and can take 12-24 hours to fully reclaim the space.
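
If you want to see what a datastore is set to, the reclaim configuration can be queried via esxcli (wrapped in PowerCLI below). The esxcli namespace shown is my reading of the 6.5 syntax and the host/datastore names are placeholders, so verify on your own build.

# Query the space-reclamation (UNMAP) settings for a VMFS-6 datastore
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.lab.local") -V2
$esxcli.storage.vmfs.reclaim.config.get.Invoke(@{volumelabel = "Datastore01"})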

In-guest UNMAP was introduced in 6.0. A thin provisioned guest was able to inform the host of dead space within the file-system, this would then UNMAP down to the array.

Linux guest support has now been introduced thanks to the introduction of SPC-4 support in vSphere 6.5.

SPBM

Storage Policy Based Management (SPBM), covered alongside VAIO (vSphere APIs for IO Filtering) filters.

It is a common framework to allow storage- and host-related capabilities to be consumed via policies.

Rules are built for data services provided by hosts, such as “I want a VM on storage with dedupe but no encryption, with X availability requirement”. These rule sets can be applied to a provisioned VM, which then receives the policies and rules.

Two new I/O filters have shipped with 6.5: VM Encryption and Storage I/O Control settings.

VM Encryption requires an external KMS. The encryption mechanism is implemented in the hypervisor, making it guest OS agnostic. It doesn’t just encrypt the VMDK but the VM home directory contents too (vmx files, etc).

NFS 4.1

VAAI-NAS primitives have been improved. It is now possible to offload storage tasks to the back-end array.

IPv6 support has been added for NFS 4.1 Kerberos.

iSCSI

iSCSI now supports having the initiator and target residing in different subnets WITH port binding. Apparently VMware had many customers wanting to achieve this!

VMware now supports UEFI iSCSI boot on Dell 13th gen servers with dual NICs.

NVMe

There is a new virtual NVMe controller option for guest VM hard disks – a new HBA aimed at all-flash storage.

It supports the NVMe specification v1.0e mandatory admin and I/O commands.

Interoperability for this exists within all vSphere features except SMP-FT.

That is the end of my notes on the session. I highly recommend anyone come back and re-watch this session when it is publicly available. I had a quick chat with the guys after the session, along with many others, to try and soak up any additional knowledge they had on offer. They were also nice enough to let me grab a picture with them for the blog. Thanks again for an excellent session!