The final official day of VMworld came around really quickly! When I first left the UK, it seemed I had loads of time to spend going to all these cool sessions. In reality I was so busy that time flew by!
I somehow managed to get up in time for the keynote, although I was a little late arriving at the blogger tables. Luckily there was room, which surprised me a little. I guess lots of others had difficulty getting up in the morning, or perhaps they were watching the stream from the comfort of their hotel rooms? I created a live blog post of the Day 5 keynote, which can be found in the VMworld USA section of my site. I believe it is tradition that the final keynote isn’t about VMware specifically, but is more a look forward at technology and developments for the future. This session was very eye-opening and involved some very smart people giving excellent, imagination-capturing talks. If you haven’t seen that keynote, I urge you to watch it; it can be likened to three 20-minute TED Talks.
After the keynote, I spent some time catching up on writing my blog. I had a session booked, but I felt I had too much to do and wanted to take it easy on the last day and talk with some people in the vBrownbag area. One good thing about attending VMworld is that all the sessions are made available online, so there is no need to kill yourself trying to attend everything. On a side note, the vBrownbag crew were positioned right next to the blogger area and did a great job covering extra content all week. A superb effort by everyone on that team; they pretty much put on their own mini-conference within the conference. Plus the content is made available super quickly, pretty much same day!
My first session was a Next Generation Desktop Architecture overview. This was an important one, as I’m fairly involved with EUC at my workplace and wanted to see what the future might hold. The promise of using VMfork technology to provision desktops is an exciting take-away. My session notes are at the bottom of this post!
The session I was looking forward to most was in fact my last of the day: Jason Nash and Chris Wahl doing a technical deep dive on the vSphere 6.0 Distributed Switch. The thing that struck me most about these two is that they obviously have a good working relationship, having run this technical deep dive together for quite a few years. It showed in the quality of their presenting, their content and their on-stage rapport. The fact that this 13:30–14:30 session was packed on the last day, when some attendees had already started their weary travels back home, is a testament to these distinguished gentlemen. My notes are below!
As the talk came to an end, I realised that my final session was over. Sad panda! Although it was great to end the conference on a high! I went back to the blogger space and said goodbye to a few guys before heading off to start a crash course in San Francisco sightseeing with a well-regimented plan! Then it was time to fly back home and recharge for VMworld Europe 🙂
Next Generation Desktop Architecture Overview – Frank Taylor & Ken Ringdahl
The future platform is going to be:
– Single platform regardless of location of workload
– Ability to burst to the cloud and or migrate workloads based on demand
– Follow me data persona
– Unified management with a common look and feel from a single URL
– Fully managed IaaS monitoring and updates.
– Rapid image update, real-time app delivery.
– Next gen access gateway with identity.
The goals of this platform are going to be:
– Simplify the management of Virtual Desktops and Apps
– Drive down the cost (CapEx and OpEx) of deploying virtual desktops and apps
– Common platform for on prem and cloud deployments that can be globally distributed.
– SaaS platform management experience for the desktop admin
– Resiliency to cloud interruptions
Project ENZO SmartNode
– Self-contained unit of infrastructure capacity built for desktops and applications (a collection of homogeneous infrastructure wrapped up in software)
– A software based management construct.
– All core desktop workloads can be SmartNodes
o On-prem appliances, e.g. EVO, Horizon Air, View
o Could be different capabilities depending on type of SmartNode
– Provides an on-ramp from existing platforms to SmartNode
Converged or hyper-converged types of SmartNode infrastructure.
Horizon 6 SmartNode
– A management layer wrapping Horizon 6 to provide unified technology.
SmartNode technical components:
– 100% Linux based technology stack
– Horizon DaaS platform
– Highly available, self-updating, Linux-based software appliances
– Distinct separation of infrastructure and desktop management
– Next gen remote access gateway
– Hardened SLES appliance
– Provides all features of security server (PSG, BSG, Tunnel Server)
– Supports a hierarchical deployment for DMZ placement.
An alternative, cut-down, lightweight replacement for the Windows-based Security Server, with multiple architecture options available.
Next generation provisioning
– VMFork based provisioning engine enabling rapid desktop creation
Workspace environment management
– AppVolumes manager and UEM embedded.
– Moving away from the monolithic desktop layer
o Decompose desktop into separate entities
– Reduced number of configurations
– More customised user experience
– More easily managed
o Replicate just user data, not entire VM
o Easier to recreate in a different location
The future holds a throw-away-and-rebuild mentality, as it’s easier to manage.
– VMfork Instant clone
o Build a new VM by forking a partially booted parent VM
o Reduces the I/O cost of provisioning
– Reboot-less guest customization
o Machine identity and configuration are injected during the remainder of the post-fork boot.
o No costly reboots
– Low cost content injection
o User specific content hot mounted at login
o Incredibly low I/O cost for injecting content (it’s already been paid)
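The instant-clone flow above can be sketched in plain Python. This is purely illustrative (the class and function names are mine, not a VMware API): the point is that a clone shares the parent’s already-booted state, so only the per-desktop identity injection costs anything at creation time.

```python
import copy

# Conceptual sketch of VMfork/Instant Clone -- illustrative names only,
# not a VMware API. The real mechanism is copy-on-write at the
# hypervisor level; a shared reference stands in for that here.

class PartiallyBootedVM:
    """A parent VM that has already paid the expensive boot I/O once."""
    def __init__(self):
        self.base_state = {"os": "booted", "apps": "loaded"}  # shared, read-only
        self.identity = None  # injected per clone, no reboot required

def instant_clone(parent, machine_name):
    """Fork a child from the partially booted parent."""
    child = copy.copy(parent)                # shallow: shares the parent's state
    child.base_state = parent.base_state     # shared -> near-zero provisioning I/O
    child.identity = {"name": machine_name}  # injected during the post-fork boot
    return child

parent = PartiallyBootedVM()
desktops = [instant_clone(parent, f"desktop-{i}") for i in range(3)]
```

Because every child references the same booted base state, spinning up the third desktop is as cheap as the first; only the identity injection differs per clone, which is why there are no costly reboots.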
Take the persona of your user and abstract the user environment settings, application configuration, personalisation and dynamic configuration.
vSphere Distributed Switch 6.0 – Technical Deep Dive – Jason Nash & Chris Wahl
Granular Network guarantees
– Network IO control version 3
– Setting guarantees on VMs and DPGs
Using multiple TCP/IP stacks
– Setup a supported routed vMotion
– Migrate workloads from one vCenter to another
100% VDS Fuelled Data Center
– How to protect vCenter Server and other dependences
– Toss out the Standard vSwitch completely
– Network IO Control v3.0
– Multicast IGMP snooping
– Multiple TCP/IP Stack for vMotion (It was possible and supported in 5.5 but not commonly known).
– VMware no longer sells 1000v (**cough**YEY**cough**)
– It is supported in 6.0, however.
– 1000v AVS Mode is NOT supported
Build and Upgrade 6:
– vDS 4.0 can no longer be created. vDS 6.0 can be built in both the fat client and the web client; the web client shows more detail in the description.
– The version number of the switch is visible in the Web Client. Once you have upgraded to vSphere 6.0, upgrade the vDS separately (don’t forget!)
Network IO Control v3:
– More guardrails = less fluidity in the DC
– The best designs utilizing it are simple!
– Does it really solve a problem?
Traffic placement engine:
– Places a VM’s network adapter on the optimal NIC
– Must be able to meet reservation
– Still adheres to teaming policies
– Two control points
o Distributed Port Group (all attached VMs)
o Virtual Machine (Per VM)
– vSphere DRS
o It will migrate VMs when reservations exceed host capacity
o It will migrate VMs if a NIC fails
– HA considers the reservations when powering on VMs
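To make the placement idea concrete, here is a small Python sketch of a greedy, reservation-aware placement loop. This is my own toy model, not VMware’s actual engine: each vNIC reservation is placed on the uplink with the most free capacity that can still meet it, and placement fails outright if no uplink can.

```python
# Toy reservation-aware placement, not VMware's engine: greedily place each
# vNIC on the uplink with the most headroom that can meet its reservation.

def place_vnics(nics, vnic_reservations):
    """nics: uplink -> capacity (Mbit/s); vnic_reservations: vNIC -> reservation."""
    used = {name: 0 for name in nics}
    placement = {}
    for vnic, reservation in vnic_reservations.items():
        candidates = [n for n in nics if nics[n] - used[n] >= reservation]
        if not candidates:
            # mirrors the admission-control idea: a reservation that cannot
            # be met is rejected rather than silently oversubscribed
            raise RuntimeError(f"no uplink can meet the reservation for {vnic}")
        best = max(candidates, key=lambda n: nics[n] - used[n])
        used[best] += reservation
        placement[vnic] = best
    return placement

# two 10 GbE uplinks, three VMs with reservations in Mbit/s
placement = place_vnics({"vmnic0": 10_000, "vmnic1": 10_000},
                        {"vm1": 6_000, "vm2": 6_000, "vm3": 3_000})
```

Here vm2 lands on vmnic1 because vmnic0’s remaining headroom (4 Gbit/s) can no longer meet a 6 Gbit/s reservation; in the real feature, DRS similarly migrates a VM when a host’s NICs cannot meet its reservation.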
Cross vSwitch vMotion
– Choose the destination network when vMotioning VMs
– Can go between
o vSS to vSS
o vSS to vDS
o vDS to vDS
– Must be on Enterprise Plus
– vCenters must be in Enhanced Linked Mode
– Good time sync is mandatory
Long Distance vMotion
– You can now vMotion across links with up to 150 ms of latency
– Keep in mind that this can affect VM application performance
o Do this in non-peak hours
Protecting vCenter with a vDS
ESXi hosts contain a cached copy of the vDS configuration.
If vCenter is lost you can’t change anything in the config. What happens if a host fails while vCenter is down and the port group uses static port binding? Invalid Backing errors!
The fix is to build an ephemeral port group configured exactly like the existing port group that vCenter uses (VLAN, security, load balancing, etc.)
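A toy Python model of why the ephemeral port group saves the day (illustrative only, not a VMware API): with static binding, vCenter must assign the dvPort, so a recovery connection fails while vCenter is down; with an ephemeral port group, the host creates the port itself from its cached vDS configuration.

```python
# Toy model of the recovery scenario -- illustrative only, not a VMware API.

class PortGroup:
    def __init__(self, name, binding):
        self.name = name
        self.binding = binding  # "static" or "ephemeral"

def connect_vm(portgroup, vcenter_up):
    """Attempt to attach a VM's vNIC to the given port group."""
    if portgroup.binding == "static" and not vcenter_up:
        # static binding: only vCenter can assign a dvPort
        return "Invalid Backing"
    # ephemeral binding: the host assigns the port from its cached vDS config
    return "connected"

# vCenter is down and we need to bring its VM back up on another host
print(connect_vm(PortGroup("vDS-Mgmt", "static"), vcenter_up=False))          # Invalid Backing
print(connect_vm(PortGroup("vDS-Mgmt-Recovery", "ephemeral"), vcenter_up=False))  # connected
```

The recovery port group only needs to exist before disaster strikes, which is why it’s worth creating it up front as part of the design.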
Chris made a post about this earlier in the year for those of you who want to read up on it.