VMworld USA 2017 – Wednesday Breakdown

Day three at VMworld was a bit of a slow start for me: the Rubrik party was a late one and there was no keynote, so I decided to rest up a little and try to save my energy.

Hanging out in the community areas, which is the best part of the event, was high on the agenda. Early in the day we swung by to see our favourite CloudCred lady, Noell Grier. I gave her a bit of a hand doing some “booth babe” duty whilst Rob Bishop collected the GoPro 5 that he won for completing a CloudCred challenge! Noell is an awesome lady, and if you aren’t familiar with CloudCred then you should go to the site, sign up, follow her on Twitter and get on it!

The main highlight of Wednesday for me was heading to the customer party. Thanks to #LonVMUGWolfPack shenanigans, Gareth Edwards, Rob Bishop and I ended up wearing some very jazzy VMware Code t-shirts. The concert was a blast and we had a great time; I really enjoyed Blink-182 despite not being allowed on the main floor. Here are some pics of the event:


(Credit to Gareth for some of these pictures, thanks dude!)

NSX Performance
Speaker: Samuel Kommu
#NET1343BU

Samuel starts with a show of hands, and it seems that most of the audience are on dual 10GbE for their ESXi host networking.

NSX Overview of overlays
There is not much difference between VXLAN encapsulation and the original Ethernet frame; only the VXLAN header is extra.
With the Geneve frame format there is an additional options field, with a length that specifies how much extra data is packed into the header. This is interesting because you can pack extra information within it, which helps capture information on particular flows or packets.
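To make that options field concrete, here is a minimal Python sketch (my own illustration, not VMware code) of packing a Geneve header as defined in RFC 8926. The 6-bit option-length field is what lets the extra per-flow metadata ride along in the encapsulation:

```python
import struct

def geneve_header(vni: int, options: bytes = b"") -> bytes:
    """Pack a minimal Geneve header (RFC 8926), for illustration only.

    The 6-bit Opt Len field counts the variable options in 4-byte
    words -- this is the length field mentioned above, which lets the
    encapsulation carry extra per-flow metadata that VXLAN cannot.
    """
    assert len(options) % 4 == 0, "options must be 4-byte aligned"
    ver = 0                               # version 0
    opt_len = len(options) // 4           # options length in 4-byte words
    byte0 = (ver << 6) | opt_len
    flags = 0                             # O and C bits cleared
    protocol = 0x6558                     # Transparent Ethernet Bridging
    vni_field = (vni & 0xFFFFFF) << 8     # 24-bit VNI + 8 reserved bits
    return struct.pack("!BBHI", byte0, flags, protocol, vni_field) + options

# Example: VNI 5001 with one 8-byte option attached
hdr = geneve_header(5001, options=bytes(8))
print(len(hdr))  # 8 fixed header bytes + 8 option bytes = 16
```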

Performance tuning
Parameters that matter: an MTU mismatch is a pain to try and figure out. There are two places you can set it: on the ESXi host and at the VM level. From a performance perspective, the MTU on the host doesn’t matter unless you change it at the VM level too.

There is a large chance that if you change the MTU you will change the performance of your systems. The advice is to change the MTU to the recommended values: the proportion of each frame taken up by headers rather than payload goes down, so you are getting more for your money.
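As a back-of-envelope illustration of that header-versus-payload trade-off, here is a quick Python comparison (assuming typical IPv4 VXLAN header sizes, which is my assumption rather than anything from the session):

```python
# How much of each frame is headers vs payload at different MTUs?
# Assumes IPv4 VXLAN: outer IP (20) + UDP (8) + VXLAN (8) +
# inner Ethernet (14) + inner IP (20) + inner TCP (20) = 90 bytes
# of headers carried inside the physical MTU.
HEADERS = 20 + 8 + 8 + 14 + 20 + 20

for mtu in (1500, 9000):
    payload = mtu - HEADERS
    print(f"MTU {mtu}: {payload} payload bytes per frame "
          f"({payload / mtu:.1%} efficient)")
# MTU 1500 -> ~94% efficient; MTU 9000 -> ~99%: same data, fewer headers.
```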

The vDS MTU sets the host MTU, as that is what the host is connected to. The underlying physical network needs the same MTU setting too. Fairly standard stuff, but important to check and consider.
Optimizations on TCP/IP: sending a large payload without spending CPU cycles. This is TSO (TCP Segmentation Offload). Sending a 1MB file, for example, doesn’t get chopped up within the system; it happens on the pNIC.
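Here is a toy Python sketch of the segmentation work the pNIC takes over with TSO; the 1460-byte MSS and the omission of per-segment header generation are my simplifications:

```python
# What TSO offloads: chopping one large send into MSS-sized TCP
# segments. With TSO the host hands the pNIC a single large buffer
# and the NIC hardware does this loop instead of burning host CPU.
def segment(payload: bytes, mss: int = 1460) -> list[bytes]:
    """Split a large payload into MSS-sized chunks (per-segment
    TCP/IP header generation is elided for brevity)."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

segments = segment(bytes(1_000_000))   # the 1MB file from the talk
print(len(segments))                   # 685 segments at a 1460-byte MSS
```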

With ESXi 6.5 they have brought in a software LRO, rather than it being available only with supporting physical hardware. It is now possible to leverage LRO without the capability in the pNIC.
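LRO is the receive-side mirror image of TSO: coalesce a burst of back-to-back segments from one flow into a single large packet before the stack processes it. A toy sketch of the idea (my simplification, not the ESXi implementation):

```python
# Coalesce consecutive received TCP segments into one large packet so
# the kernel processes one header instead of many. Illustrative only.
def lro_coalesce(segments: list[bytes], max_size: int = 65535) -> list[bytes]:
    packets, current = [], b""
    for seg in segments:
        if len(current) + len(seg) > max_size:
            packets.append(current)    # flush before exceeding max size
            current = b""
        current += seg
    if current:
        packets.append(current)
    return packets

print(len(lro_coalesce([bytes(1460)] * 100)))  # 100 wire segments -> 3 packets
```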
When RSS is enabled:
– Network adapter has multiple queues to handle receive traffic
– 5-tuple-based hash for optimal distribution to queues
– A kernel thread per receive queue helps (see the sketch below)
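A minimal sketch of that queue selection, using a generic hash where real NICs use a Toeplitz hash (so this is illustrative only):

```python
# Spread receive traffic across NIC queues by hashing each flow's
# 5-tuple; each queue is then drained by its own kernel thread.
NUM_QUEUES = 4

def rss_queue(src_ip, dst_ip, src_port, dst_port, protocol):
    """Map a flow's 5-tuple to a receive queue index."""
    return hash((src_ip, dst_ip, src_port, dst_port, protocol)) % NUM_QUEUES

# Packets from the same flow always land on the same queue, preserving
# in-order delivery while distinct flows spread across queues.
print(rss_queue("10.0.0.1", "10.0.0.2", 49152, 443, "tcp"))
```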

Rx/Tx filters
– Use inner packet headers to queue traffic

Native driver – with a vmklinux driver, data gets translated into vmkernel data structures. A native driver removes that translation step, meaning fewer CPU cycles used.

Each vNIC now has its own queue down to the pNIC, rather than sharing the same queue. This lets throughput scale properly through to the pNIC. It is also now possible to have multiple queues per single vNIC to pNIC.

Compatibility guide

The HCL is an obvious place to start when checking versions, to ensure they are all correct and in support. From there you can select the right settings and download the latest, correct drivers to install onto your hosts.

Traffic Processes
Traffic flows: E/W (east/west) and N/S (north/south). E/W traffic is communication between VMs on the same logical switch, and is usually the bulk of the traffic; smaller amounts go N/S, which also passes through the NSX Edge.

Long flows:
– Designed to max out bandwidth
– Logs
– Backups
– FTP
Short flows:
– Databases, specifically in-memory ones or cache layers.

Small packets:
– DNS
– DHCP
– TCP ACKs
– Keep alive messages

Not all tools are able to test the latest optimizations, so make sure the tools are right for the job. Testing at the application level is often best, but be aware of the limitations.
Fast Path
When packets come in from a new flow, different actions are taken depending on the headers. This happens throughout the entire stack, regardless of E/W or N/S traffic.

When subsequent flows of a similar type are seen, fast path skips the per-packet action lookup and fast-tracks packets to the destination. For a cluster of packets that arrive together, the flow is hashed and then sent via the fast path, using roughly 75% fewer CPU cycles.
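As I understood it, the fast path behaves like a flow cache: the first packet of a flow takes the full lookup, and the cached result short-circuits the rest. A toy Python sketch of that idea (purely illustrative, not NSX’s actual data structures):

```python
# Toy flow cache: the first packet of a flow takes the slow path
# (full evaluation of actions), and the result is cached against the
# flow's hash so subsequent packets skip straight to forwarding.
flow_cache: dict[int, str] = {}

def full_lookup(five_tuple) -> str:
    """Stand-in for the expensive per-packet pipeline (firewall
    rules, routing, encapsulation decisions, ...)."""
    return f"forward-to:{five_tuple[1]}"

def process_packet(five_tuple) -> str:
    key = hash(five_tuple)
    action = flow_cache.get(key)
    if action is None:                 # slow path: first packet of flow
        action = full_lookup(five_tuple)
        flow_cache[key] = action       # cache so later packets fast-path
    return action                      # fast path for the rest of the flow

flow = ("10.0.0.1", "10.0.0.2", 49152, 443, "tcp")
for _ in range(3):
    print(process_packet(flow))
```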

The session got quite deep at times and went way further than my limited NSX experience could take me. I’m also not a network admin by day, so if there are any mistakes in my notes I’ll correct them as I go.
