As soon as I saw the session in the content catalogue, I knew it was something I’d be attending! Here are my notes on an excellent breakout session.
A Deep Dive into vSphere 6.5 Core Storage Features and Functionality [SER1143BU]
After a quick intro, Cormac and Cody broke the content down into several sections.
ESXi hosts now support:
– 2,048 paths (doubled from 1,024)
– 512 devices (doubled from 256)
– 512e Advanced Format Device Support.
If your storage is 512n, VMFS defaults to version 5, whereas 4Kn or 512e devices default to VMFS-6.
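As a quick way to see which sector format your devices report (and therefore which VMFS version you would default to), the following can be run on an ESXi 6.5 host. A sketch only – output varies by array, and this obviously won't run anywhere other than an ESXi host:

```shell
# Run on an ESXi 6.5 host (sketch; not runnable elsewhere).
# Lists logical/physical block sizes and the format type (512n / 512e)
# reported by each attached device:
esxcli storage core device capacity list
```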
DSNRO (Disk.SchedNumReqOutstanding), the hypervisor's per-device queue depth limit, was previously capped at 256; it now matches the HBA device queue depth.
VMFS-6 has two new internal block sizes: small file block (SFB) at 1MB and large file block (LFB) at 512MB. These dictate how a VMFS-6 disk grows. Thin disks are backed by SFBs; thick disks are allocated with LFBs – this also includes vswap files. As a result, power-on/creation of eager zeroed thick VMs is much quicker.
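To illustrate why this speeds up thick provisioning, here is a back-of-the-envelope sketch (not VMware code – just arithmetic using the block sizes above) comparing how many allocations back a 100GB disk thin versus thick:

```shell
# Back-of-the-envelope sketch: allocations needed to back a 100GB VMDK.
# SFB = 1MB (thin disks grow in these), LFB = 512MB (thick disks, vswap).
sfb=$((1024 * 1024))
lfb=$((512 * 1024 * 1024))
disk=$((100 * 1024 * 1024 * 1024))

# Ceiling division: number of block allocations for each disk type.
thin_allocs=$(( (disk + sfb - 1) / sfb ))
thick_allocs=$(( (disk + lfb - 1) / lfb ))

echo "thin:  $thin_allocs x 1MB SFB allocations over the disk's lifetime"
echo "thick: $thick_allocs x 512MB LFB allocations up front"
```

Far fewer metadata operations at creation time is the reason eager zeroed thick creation and power-on got quicker.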
The system resource files are now extended dynamically on VMFS-6. These were previously static. If the file-system exhausts its resources, the resource files can be extended to create more. This now supports millions of files, assuming free space on the volume.
Journals are used for VMFS when performing metadata updates on the file-system. Previous versions used regular file blocks for journalling. In VMFS-6 they are tracked separately as system resource files and can be scaled easily as a result.
VM-based block allocation affinity is now a thing! Resources for a VM (blocks, file descriptors) used to be allocated on a per-host basis. This caused lock contention when a VM was created on one host and then migrated to another: the new host would allocate blocks to the VM/VMDK, causing locks and contention on the resource files. VM-based block allocation reduces this resource lock contention.
Hot extend support now enables growing VMDKs beyond 2TB whilst the VM remains powered on. Previously a power-off was required. This is a vSphere 6.5 feature, so as soon as your host is updated it can perform this task on VMFS-5/6 volumes. Massive win!
Finally, the only slight caveat for VMFS-6 is that there is no in-place upgrade: you have to decommission your old datastore, migrate the data off and reformat with the new file-system. Thanks to Storage DRS, though, it's not the end of the world, right?
ATS (Atomic Test & Set) miscompare handling unlocks the scalability of VMFS.
Every so often a miscompare would occur: each test-and-set operation compares the on-disk value against the value from the previous test-and-set operation. Sometimes the timeout was too low and the storage array would take time to respond, which confused the host-versus-storage comparison and caused issues. More often than not, the issues that arose were down to a false miscompare – meaning there was no real problem, just potentially some high latency. These small issues could occur anywhere between vSphere, the storage array and the HBA.
The good news is that now, in the event of a false miscompare, VMFS will not immediately abort I/Os. It retries the ATS heartbeat after a short interval (less than 100ms), meaning a smoother process all round.
UNMAP was introduced in 5.0. It lets the host tell the back-end array that blocks on a thin-provisioned datastore have been moved or deleted, allowing the freed blocks to be reclaimed.
Automatic UNMAP is back for VMFS-6. Hosts asynchronously reclaim space by running an UNMAP crawler mechanism. This does require a 1MB reclaim block size on the back-end array. The process is automatic and can take 12-24 hours to fully reclaim the space.
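For reference, the reclamation settings can be inspected from the ESXi command line – a sketch to run on a 6.5 host, with the datastore name being a made-up example:

```shell
# Run on an ESXi 6.5 host (sketch; "Datastore01" is an example name).
# Check the automatic space-reclamation (UNMAP) settings on a VMFS-6 datastore:
esxcli storage vmfs reclaim config get -l Datastore01

# On VMFS-5, reclamation remains a manual operation:
esxcli storage vmfs unmap -l Datastore01
```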
In-guest UNMAP was introduced in 6.0. A thin-provisioned guest could inform the host of dead space within its file-system, and this would then be UNMAPped down to the array.
Linux guest support has now been added, thanks to the introduction of SPC-4 support in vSphere 6.5.
Storage policy based management (SPBM) and VAIO (I/O) filters.
It is a common framework that allows storage and host-related capabilities to be consumed via policies.
Rules are built for data services provided by hosts, such as “I want a VM on deduplicated storage, without encryption, with X availability requirement”. These rule sets are applied to a VM at provisioning time, and it receives the policies and rules.
Two new I/O filters ship with 6.5: VM Encryption and Storage I/O Control settings.
VM Encryption requires an external KMS. The encryption mechanism is implemented in the hypervisor, making it guest OS agnostic. It doesn’t just encrypt the VMDK but the VM home directory contents too (vmx files, etc).
VAAI-NAS primitives have been improved, making it possible to offload more storage tasks to the back-end array.
IPv6 support has been added for NFS 4.1 with Kerberos.
iSCSI now supports having the initiator and target residing in different subnets WITH port binding. Apparently VMware had many customers wanting to achieve this!
VMware now supports UEFI iSCSI boot on Dell 13th gen servers with dual NICs.
There is a new virtual NVMe controller option under the hard disk settings of a guest VM – a new HBA aimed at all-flash storage.
It supports the NVMe specification v1.0e mandatory admin and I/O commands.
Interoperability for this exists within all vSphere features except SMP-FT.
That is the end of my notes on the session. I highly recommend coming back to re-watch it when it is publicly available. I had a quick chat with the guys after the session, along with many others, to try and soak up any additional knowledge they had on offer. They were also nice enough to let me grab a picture with them for the blog. Thanks again for an excellent session!
Monday has started and VMworld is in full swing: registrations have happened, VMunderground party has been attended and here I am….ready.
After a vibrant musical start to the first day keynote, Pat Gelsinger comes out on stage full of energy and praise for the VMware community.
(Apologies for the picture quality, camera is a potato)
“Science fiction is becoming every day science fact!”
Pat talks of older films such as Alien, and how there are now cyber suits in Korea offering similar functionality. Dreams are becoming reality thanks to advances in technology, which means we have a very exciting future ahead of us!
The question is: out of all the changes in technology, which is the most profound? Our expectations, it would appear. It is hard to believe how much the next generation's perspective has changed in recent times. He jokes about self-driving cars driving like his old grandma, and how there is a dynamic of tech leaving the nest of tech.
He states VMware’s strategy of “Enabling businesses to create and deliver apps for their businesses”:
– Any device
– Any application
– Intrinsic Security
He talks of how the industry is reaching further than ever before, with companies working together to unleash the potential of users' most valuable resources. He references Fujitsu working with Toyota to bring the next generation of pulse IoT for next-gen user experiences.
Pat later shows footage of an early VMware sales call, and it is full of 90s references: from the Nirvana T-shirt rocker guy to the terribly dressed, over-enthusiastic board-meeting acting. Funny stuff. Since those early sales calls everything has changed: virtualised data centers and public/private clouds. Pat's passion and enthusiasm are clear when he discusses dusting off his old electronics kit and building a computer with his grandson.
Andy Jassy, CEO of AWS, joins Pat on stage to discuss their recent partnership in the public cloud. This partnership, as most know, is quite huge. They discuss the roadmap for the future, which is driven by us… the customers! It is good to know that our collective input is being considered in the platform that is delivered. Geographic expansion is a large part of this roadmap and future, which I will be watching with interest.
The discussion of security arises and how things can be secured for the better:
– Architect security into the infrastructure, making it native.
– Deeply architect the integrated ecosystem
– Return to basics and make sure simple “cyber hygiene” principles are adhered to.
This leads into the announcement that VMware AppDefense is off the production line and ready to go. It will improve security by monitoring applications in a virtual environment to provide reactive, real-time defence against potential malicious attacks.
Later, the Red Cross joins Sanjay Poonen on stage. There is a moving video intro showing all the good work they do across the world. It is quite eye-opening to understand the logistics of the organisation and how IT drives them forward. The examples given really underpin the value of the IT industry in organisations such as this, making a difference in people's lives.
As he wraps up the keynote, Sanjay outlines what's coming up in the forthcoming days for the delegates. It looks to me like there is lots of exciting content on the way, and I'm looking forward to blogging about it. Michael Dell is due on stage tomorrow for a chat with Pat and Sanjay, which will be fantastic. News of a secret announcement is on its way…
VMTN + Blogger Info
I have been hanging out in the VM Village catching up with folks and have had the pleasure of catching up/meeting with Corey, Katie, Elsa and others regarding the latest work coming out of the Digital Marketing social media teams.
A lot of work has gone into VMTN in the last year or so with noticeable upgrades specifically focused on performance. This is down to the great work that Katie Bradley has put in.
There is a combined effort underway with the VMware blogging program to make improvements to VMTN and blogs.vmware.com, which will involve pulling together all the community content and digital assets for community consumption. This is coming soon, so make sure you register as a VMTN member if you haven't already! The main program is launching in beta around the end of September. It's a bi-directional object server that enables bloggers/digital-content-creators to display exceptional content from the community, hosted on the server.
The blogger program will hopefully start with a monthly round-table call. The aim is a 30-minute touch base to understand whether there are any pain points for us and how the blogging community can be further improved. There will be a top-bloggers-of-the-month award with prizes, and potentially VMworld ticket giveaways! Experts will also join the call to discuss topics and provide extra content to blog about.
Finally, there are going to be some improvements around targeted content based on trending subjects, and hopefully additional benefits for those looking to promote their blogs.
It all sounds great and I’m excited to see what comes out in the next few months!
Most of the time, reading my email, be it personal or work, isn't that enthralling. However, not so long ago I opened up an email from VMware's CommunityGuy confirming that I had been awarded a blogger pass to go to VMworld 2017 in Las Vegas!! It is indeed an honour and a privilege to be selected; I know there are many vExperts out there who might not have been so lucky this year, so I'm going to do as I did last time and try my absolute best to bring home some of the conference to those who didn't get to go.
To all those reading who might have never been, I posted in 2015 about why you should consider it and some useful tips if you do register to go!
My previous posts from when I attended VMworld 2015 can also be found here.
“I am shaping the future”
“I am an innovator”
“I am a visionary”
“I am expanding opportunities”
“I am a game changer”
“I am challenging the ordinary”
“We thrive together”
The tag lines from the event marketing this year are bold and perhaps what the cynical among you might expect and groan at. I'm the opposite: working in IT is mostly an invisible, thankless job which can be challenging at times, as most people don't appreciate or understand what it is that we do to make their lives better (for the most part, I hope). These tag lines, whilst a little cheesy, are what many of us are/do without being reminded of it on a frequent basis. We are the pioneers of driving technology forward, using our technical know-how and foresight to shape the IT landscape in front of us for the best possible outcome. Finally, the last line is most important to me – thriving together as a community is the only way I see the aforementioned being achieved. Sharing knowledge and being part of something bigger makes such a massive difference, and I'm glad that I'm a part of the excellent VMware community. I'm thrilled to be going back this year and getting to reacquaint with friends that I made the first time around!
If you haven't already, you should register for the event as it's going to be great! If you are a fan of Blink 182 or Bleachers then you should definitely register, as they have only just been announced as the Wednesday night party acts!
As a side note, here are just a few of the great posts out there from some well-known bloggers about this year's VMworld:
I plan on posting soon with my thoughts on the sessions and what I plan to attend. Until then, see you next time!
In my final post in this series, I'm taking a look at performing a test fail-over of some VMs I have on-premises to an Azure Site Recovery instance.
This is a really easy process and doesn’t take too much documenting. Afterwards I’m going to share a few things I’ve learned along the way and my thoughts on ASR.
1) Navigate to the recovery vault from within your Azure Dashboard.
2) Under the Protected Items navigation pane, select “Replicated Items”.
3) Here you should get a view of all the machines you have protected and their status.
4) Select a VM that you want to test and click Test Fail-over.
5) Choose the recovery point and the network that you want to fail-over to and select OK.
6) Some checks should complete and the fail-over environment is prepared. After around 15 minutes I had my VM running and awaiting user input checks.
As you can see, the fail-over process is easy for a single VM. I'm still in a testing phase at the moment so I haven't performed a mass fail-over, but it doesn't seem to take more than 10 minutes to move.
One issue presented itself during testing: there is no “KVM” option for the VM – unlike other hosting providers. So the only way to check the VM had been placed in the network and come up correctly was to build a management 2012 R2 server in my ASR recovery network with a public IP and enable RDP.
1) There are some limitations on what is possible with your test fail-over VMs, for example no KVM at present (only a screenshot view of the system).
2) All VMs failed over receive a 20GB “free” disk for temporary working. A nice feature, but for us it caused Linux VMs to have their disk devices renamed, e.g. our ‘sdb’ disk became ‘sdc’ because the temporary disk took its label. Not ideal, and I'm not sure what can be done to disable this at present. Microsoft recommend mounting disks by their UUID to work around this, although this is contrary to Red Hat advice around mounting logical volumes.
3) If you have logical volumes which contain thin-pool logical volumes, they go undetected by the Microsoft agent on the Linux VM. This isn't great for our environment, as we use these for DB snapshots, for example.
4) Having a management server built and running to enable communications into your test fail-over network is a good idea!
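The UUID workaround in point 2 can be sketched like this – the device name, mount point and filesystem type below are made-up examples:

```shell
# Sketch only - device, mount point and fs type are made-up examples.
# Find the filesystem UUID of the disk that keeps getting renamed:
blkid /dev/sdb1

# Then mount by UUID in /etc/fstab instead of by device name, e.g.:
# UUID=<uuid-from-blkid>  /data  xfs  defaults  0 0
```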
I think that this service is very good, and for Windows servers very easy. Microsoft seem to be pushing their Azure offering hard, and new features had been implemented in the portal by the time I managed to finish this blog series. There are limitations with the Linux offering and it is slightly more clunky, but that's expected. The important thing to note is that it does work!
I believe that for a greenfield site where you can take into account a lot of the DR/Fail-over caveats and issues it is a great service. If you aren’t greenfield you might come across limitations that might be hard to overcome without having to re-architect a lot of your services (this is true in a lot of environments where DR is being built in as an afterthought). This is the issue I face at my workplace with this service.
For a small/medium size business, being able to replicate your infrastructure out into the cloud and pay a “minimal” fee to have DR capability is almost a no brainer. The simplicity of setup and the safety in knowing you can spin up in the cloud to maintain service is really excellent. It is almost certainly much cheaper than having to buy another DC/Room/Rack to put in a load of other kit and replicate too, especially for those with tighter purse strings.
Really understanding the service and what it will cost you is key: whilst replication costs and cloud storage are “minimal”, in the event of a fail-over you might find the bill coming in from Microsoft to be much higher than normal. I guess that is the roll of the dice you take with an opex “buy now, pay later, should I need it” approach.
I would recommend ASR to people looking at a cloud DR service, obviously adding in caution and heavily advising to do your homework before taking the plunge.
This is the third instalment of my VMware and Azure Site Recovery series. In the last two posts I covered how to prepare for the service and how to install the components within the cloud and private infrastructure, all set up ready to make magic happen!
In this post, I’m going to go through client installation for a Windows and Linux box and setup replication jobs via the ASR interface online.
This process can be achieved in a number of ways: the Process Server can push the client out to the servers you want to protect, you can install it centrally (SCCM, GPO, Puppet, etc.), or you can install it manually. I chose the Process Server push for the Windows server (this requires an account with local admin). For the Linux server I installed manually: I didn't trust the automated mechanism, and the permissions model I have for Linux at work makes it hard to have a non-root account install things. (Easy life.)
1. Within the online ASR portal, via Site Recovery, select a source of replication (vCenter Server and Process Server already configured).
2. For the target, set the right accounts and then select the network that you want to fail over into (as created previously).
3. Select a VM that you want to replicate from your inventory list. This is pulled directly from the vCenter Server via the Process Server.
4. On the configuration properties, select the account that has permissions. In my case, the “AD VC PoC Account” is an account with vCenter permission and is also configured as local admin via GPO for this box. Purely for PoC purposes.
5. Select your replication policy for the VM. For me this was the default policy I created earlier.
6. Important! Before you proceed to step 7, make sure the firewall on your server allows traffic through from the Process Server, or the next steps will fail! You need the “File and Printer Sharing” rule group enabled.
7. Last step is to enable replication.
8. Now the Process Server will contact the server and install the right components (agent) and enable replication.
9. After about 15 minutes, the installation should be complete! Replication should now start up.
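Regarding the firewall change in step 6: if you would rather script it than click through the Windows Firewall UI, the built-in rule group can be enabled from an elevated prompt – a sketch using the standard netsh syntax:

```shell
:: Run in an elevated command prompt on the Windows server being protected.
:: Enables the built-in File and Printer Sharing firewall rule group:
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
```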
The Linux client requires two installs. One is the agent that talks/registers to the Process Server on site, the other is the replication agent.
1. On the process server get the appropriate Linux Agent file from: F:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository and SCP it to the Linux box.
2. Navigate to your Process Server C:\ProgramData\Microsoft Azure Site Recovery\private. Open the “connection” file and copy the phrase down.
3. Unzip the installer and also create a “passphrase.txt” file and insert your passphrase.
4. Install/Register the agent from the zip file:
./install -t both -a host -R Agent -d /usr/local/ASR -i "IP_OF_PROCESS_SERVER" -p 443 -s y -c https -P "passphrase.txt"
5. Download the latest “WALinuxAgent” for replication to your Linux Box.
6. Unzip the file
7. Install and register the replication service:
python setup.py install --register-service
8. Check that services are running:
ps -ef | grep wa
After some private-to-cloud replication time (and a Process Server refresh), it should be possible to select the Linux VM from your inventory and replicate it, similar to how the Windows server above was configured – minus the agent push.
I'm going to stop the post here and leave the replication/test fail-over for another post, which should be the final one. The good thing about ASR is that once configured correctly it provides a “Test DR/Fail-over” option where you can run multiple simulations whilst maintaining replication!
Until next time!