Most of the time, reading my email, be it personal or work, isn’t that enthralling. However, not so long ago I opened an email from VMware’s CommunityGuy confirming that I had been awarded a blogger pass to VMworld 2017 in Las Vegas!! It is indeed an honour and a privilege to be selected. I know there are many vExperts out there who might not have been so lucky this year, so I’m going to do as I did last time and try my absolute best to bring home some of the conference to those who didn’t get to go.
To all those reading who might have never been, I posted in 2015 about why you should consider it and some useful tips if you do register to go!
My previous posts from when I attended VMworld 2015 can also be found here.
“I am shaping the future”
“I am an innovator”
“I am a visionary”
“I am expanding opportunities”
“I am a game changer”
“I am challenging the ordinary”
“We thrive together”
The taglines from the event marketing this year are bold, and perhaps what the cynics among you might expect and groan at. I’m the opposite: working in IT is mostly an invisible, thankless job which can be challenging at times, as most people don’t appreciate or understand what it is that we do to make their lives better (for the most part, I hope). These taglines, whilst a little cheesy, describe what many of us are and do without being reminded of it on a frequent basis. We are the pioneers driving technology forward, using our technical know-how and foresight to shape the IT landscape in front of us for the best possible outcome. Finally, the last line is the most important to me – thriving together as a community is the only way I see the aforementioned being achieved. Sharing knowledge and being part of something bigger makes such a massive difference, and I’m glad that I’m a part of the excellent VMware community. I’m thrilled to be going back this year and reacquainting myself with the friends I made the first time around!
If you haven’t already, you should register for the event as it’s going to be great! If you are a fan of Blink 182 or Bleachers then you should definitely register, as they have only just recently been announced as the Wednesday night party acts!
As a side note, here are just a few of the great posts out there from some well-known bloggers about this year’s VMworld:
I plan on posting soon with my thoughts on the sessions and what I plan to attend. Until then, see you next time!
In my final post in this series, I’m taking a look at performing a test fail-over of some VMs I have on premises to an Azure Site Recovery instance.
This is a really easy process and doesn’t take too much documenting. Afterwards I’m going to share a few things I’ve learned along the way and my thoughts on ASR.
1) Navigate to the recovery vault from within your Azure Dashboard.
2) Under the Protected Items navigation pane, select “Replicated Items”.
3) Here you should get a view of all the machines you have protected and their status.
4) Select a VM that you want to test and click Test Fail-over.
5) Choose the recovery point and the network that you want to fail-over to and select OK.
6) Some checks should complete and the fail-over environment is prepared. After around 15 minutes I had my VM running and awaiting user input checks.
As you can see, the fail-over process is easy for a single VM. I’m still in a testing phase at the moment so haven’t performed a mass failover but it doesn’t seem to take more than 10 minutes to move.
One issue presented itself with testing: there is no “KVM” option for the VM, unlike other hosting providers. So the only way to check that the VM had been put into the network and came up correctly was to build a management 2012 R2 server in my ASR recovery network with a public IP and enable RDP.
1) There are some limitations on what is possible with your test fail-over VMs, an example being no KVM at present (only a screenshot view of the system).
2) All VMs failed over receive a 20GB “free” disk for temporary working. A nice feature, but for us it caused Linux VMs to have their disk devices renamed, e.g. our ‘sdb’ disk became ‘sdc’ because the temporary disk took its label. Not ideal, and I’m not sure what can be done to disable this at present. Microsoft recommend mounting disks by their UUID to work around this, although this is contrary to Red Hat advice on mounting logical volumes.
3) If you have logical volumes which sit in a thinpool, they are undetected by the Microsoft agent on the Linux VM. This isn’t great for our environment, as we use these for DB snapshots, as an example.
4) Having a management server built and running to enable communications into your test fail-over network is a good idea!
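The UUID workaround from point 2 can be sketched as follows. This is a minimal example only; the UUID, device, mount point and filesystem type are hypothetical values, not from my environment:

```shell
# Build a UUID-keyed /etc/fstab line so the mount survives the ASR
# temporary disk taking over a device name (e.g. sdb becoming sdc).
# On a real box, get the UUID with: blkid -s UUID -o value /dev/sdb1
fstab_line() {
  # $1 = UUID, $2 = mount point, $3 = filesystem type
  printf 'UUID=%s %s %s defaults 0 2\n' "$1" "$2" "$3"
}

# Hypothetical values; append the output to /etc/fstab yourself
fstab_line "EXAMPLE-UUID" /data xfs
```

Bear in mind the Red Hat caveat above if the disk in question is actually an LVM logical volume.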
I think that this service is very good and, for Windows Servers, very easy. Microsoft seem to be pushing their Azure offering hard, and new features had been implemented in the portal by the time I managed to finish this blog series. There are limitations with the Linux offering and it is slightly more clunky, but that’s expected. The important thing to note is that it does work!
I believe that for a greenfield site where you can take into account a lot of the DR/Fail-over caveats and issues it is a great service. If you aren’t greenfield you might come across limitations that might be hard to overcome without having to re-architect a lot of your services (this is true in a lot of environments where DR is being built in as an afterthought). This is the issue I face at my workplace with this service.
For a small/medium-sized business, being able to replicate your infrastructure out into the cloud and pay a “minimal” fee to have DR capability is almost a no-brainer. The simplicity of setup, and the safety of knowing you can spin up in the cloud to maintain service, is really excellent. It is almost certainly much cheaper than having to buy another DC/room/rack to put a load of other kit in and replicate to, especially for those with tighter purse strings.
Really understanding the service and what it will cost you is key: whilst replication costs and cloud storage are “minimal”, in the event of a fail-over you might find the bill coming in from Microsoft to be much higher than normal. I guess that is the roll of the dice you take with having an opex “buy now, pay later, should I need it” approach.
I would recommend ASR to people looking at a cloud DR service, obviously adding in caution and heavily advising to do your homework before taking the plunge.
This is the third instalment of my VMware and Azure Site Recovery series. In the last two posts I covered how to prepare for the service and how to install the components within the cloud and private infrastructure, ready to make magic happen!
In this post, I’m going to go through client installation for a Windows and Linux box and setup replication jobs via the ASR interface online.
This process can be achieved in a number of ways: the Process Server can push out the client to servers you want to protect, you can install centrally (SCCM, GPO, Puppet, etc.) or you can install manually. I decided to install via the Process Server for the Windows server (this requires an account with local admin), and manually for the Linux server, as I didn’t trust the automated mechanism and the permissions model I have for Linux at work makes it hard to have a non-root account install things. (Easy life.)
1. Within the online ASR portal, via Site Recovery, select a source of replication (the vCenter Server and Process Server are already configured).
2. For target, set the right accounts and then select the network that you want to fail-over into (as created previously).
3. Select a VM that you want to replicate from your inventory list. This is pulled directly from the vCenter Server via the Process Server.
4. On the configuration properties, select the account that has permissions. In my case, the “AD VC PoC Account” is an account with vCenter permission and is also configured as local admin via GPO for this box. Purely for PoC purposes.
5. Select your replication policy for the VM. For me this was the default policy I created earlier.
6. Important! Before you proceed to step 7, make sure the firewall on your server is set to allow traffic through from the Process Server, or the next steps will fail! You need the “File and Printer Sharing” exception enabled.
7. Last step is to enable replication.
8. Now the Process Server will contact the server and install the right components (agent) and enable replication.
9. After about 15 minutes, the installation should be complete! Replication should now start up.
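The firewall requirement in step 6 can also be scripted rather than clicked through. This is a sketch using netsh on the server to be protected (run from an elevated prompt); my reading of the Microsoft documentation is that the push install also wants the WMI exception, so I have included both rule groups:

```shell
:: Enable the firewall rule groups needed for the Process Server push
:: install. Group names are the built-in English ones; localised
:: Windows installs will differ.
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=Yes
```

This could equally be pushed out via GPO if you are protecting a larger estate.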
The Linux client requires two installs. One is the agent that talks to and registers with the Process Server on site; the other is the replication agent.
1. On the Process Server, get the appropriate Linux agent file from: F:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository and SCP it to the Linux box.
2. Navigate to your Process Server’s C:\ProgramData\Microsoft Azure Site Recovery\private. Open the “connection” file and copy the passphrase down.
3. Unzip the installer and also create a “passphrase.txt” file and insert your passphrase.
4. Install/Register the agent from the zip file:
./install -t both -a host -R Agent -d /usr/local/ASR -i "IP_OF_PROCESS_SERVER" -p 443 -s y -c https -P "passphrase.txt"
5. Download the latest “WALinuxAgent” for replication to your Linux Box.
6. Unzip the file
7. Install and register the replication service:
python setup.py install --register-service
8. Check that services are running:
ps -ef | grep wa
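Step 8’s check can be wrapped in a small helper for scripting. A sketch only; ‘waagent’ is the WALinuxAgent process, and other agent process names may vary by version:

```shell
# Succeed if a WALinuxAgent process is visible in the process table.
# The [w] bracket trick stops grep matching its own command line.
agents_running() {
  ps -ef | grep -E '[w]aagent' >/dev/null
}

if agents_running; then
  echo "agent appears to be running"
else
  echo "no agent process found, check the install logs"
fi
```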
After some private-to-cloud replication time (a Process Server refresh), it should be possible to select the Linux VM from your inventory and replicate it, similar to how the Windows server above was configured – minus the agent push.
I’m going to stop the post here and leave the replication/test fail-over for another post, which should be the final one. The good thing about ASR is that, once configured correctly, it provides a “Test DR/Fail-over” option where you can run multiple simulations whilst maintaining replication!
Until next time!
Following on from my previous post about VMware and Microsoft ASR, I’m going to run through some more of the technical configuration which is required to get your VMware VMs protected in the Microsoft Cloud. This section will mainly deal with setting up the on-site configuration server and the connectivity to the Azure Site Recovery subscription.
To create the link between the cloud components and on-site, you need to create a Recovery Services Vault.
1. From your Azure portal, under Monitoring + Management, select Backup & Site Recovery (OMS).
2. Create a Recovery Services Vault similar to above. I heavily recommend pinning this to your Dashboard to make it easily accessible later!
3. Once created, head to Site Recovery and Prepare Infrastructure. Select your goals (in this case, replicate to Azure from VMware).
4. The on-site configuration server and pre-reqs are required here (as mentioned in part 1). Download the installer and registration key to your server.
5. On your 2012 R2 server, run the installer.
6. Accept the EULA
7. Import the reg key as downloaded in step 4.
8. Select Proxy Server options.
9. Run the prerequisite checks (I had a warning about the 500GB secondary disk; not an issue as this is purely testing).
10. Enter the details of your MySQL passwords (interesting that MySQL is used).
11. Agree to VMware virtual machine protection, and validation of PowerCLI 6.0 takes place. It must be 6.0! I tried with the latest 6.5 and validation failed.
12. Select your ASR install directory
13. Select the NIC on your box you want for replication traffic.
14. Hit Install!
15. Hopefully all goes well and you have some nice green ticks!
16. You will be given a passphrase for your configuration server. This is needed when you connect agents on protected VMs to this server to be replicated. (It can be obtained later.)
17. The Config Server admin console opens for you automatically, and a shortcut is also placed on the Desktop. Enter an account that has sufficient administrator privileges over your vCenter.
18. Back in the portal you can add in the new source configuration server and AD account, select OK.
Note:- Sometimes changes on the config server are not available on the portal straight away. To fix this, you can find your Server from the pinned shortcut and perform a manual refresh!
19. Select your subscription, deployment model, storage account and network as a Target.
20. Create a default replication policy. I left mine as the defaults and came back and tweaked policies later.
21. Complete the infrastructure preparation by running the capacity planner and confirming. I have not done this as I’m only testing a few VM’s in the first instance.
This is a good place to stop. The next post will detail adding some machines to be replicated, but in order to do that you need to either install manually, push centrally or have the configuration server do it. Obviously the deployment method needs to be considered for your organisation (via GPO, DSC, Puppet, etc.).
I’ve had the opportunity of investigating Disaster Recovery in my role recently. I have been looking at costs and methods of bringing our critical systems online in the event of a primary data center outage.
Without going into too much detail on my existing employer, there are many things to review and architecting DR into the existing infrastructure isn’t the easiest thing to do. Given our relationship with Microsoft, I was asked to investigate Azure Site Recovery to see if it was a viable option to provide us with a DR site in the cloud.
I’m going to be blogging a small series on the technical implementation required to fail VMware VMs over from an on-site VMware cluster to an Azure Site Recovery instance. Hopefully, if all goes well, I’ll add to the series as I go, but for now I’m going to keep it simple with a basic deployment.
The entire process that I am following has been documented by Microsoft and gives some good detail on how to achieve VM replication into the cloud.
It is important to read through the checklist of required items before starting the setup. This can be done beforehand or during the actual implementation. I summarised it down to the following:
1) An Azure account, free trial possible (I have MSDN sub)
2) Azure Storage, somewhere to put your data.
3) Azure Network, the VMs’ location after fail-over.
1) Build a new 2012 R2 Process/Management Server with the necessary specification (ready for installing the ASR components)
2) External network connectivity to cloud services.
3) VMware vCenter + ESXi 5.5 or greater.
4) Guest machines that do not exceed certain limitations of the service (e.g. no disks larger than 1TB)
Once the pre-steps are complete, it was on to configuring the magic….
1) Signed in to my MSDN subscription and set up the Azure free trial ($150 a month)
2) Login to https://portal.azure.com
3) Navigate to the market place, Networking, Virtual Network.
4) If this is all new, it’s best to stick with the Resource Manager deployment model as that is the latest and greatest. Click Create.
5) Create your virtual network by filling in your requirements. I went for the large default address space, naming it, and then a small subnet within that for testing. In this instance I also created a new Resource Group for this PoC.
NB:- A handy tip is to pin certain objects to the dashboard so you can see them on your main screen. I found this useful for the on-site Process/Management server.
6) Navigate to the market place, Storage, Storage account.
7) Create a storage account as instructed. I opted for standard performance and keeping the data geographically local as I’m only performing a PoC. Maintaining the resource in the previously created group.
8) Navigate to the market place, Monitoring + management, Backup and Site Recovery. Create a recovery vault to group resources together.
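For repeatability, the network and storage steps above can also be driven from the Azure CLI (az 2.0+) instead of the portal. A sketch only; the resource group, names, region and address ranges are made-up examples, not the ones I used:

```shell
# Create a resource group, a virtual network with a test subnet,
# and a standard locally-redundant storage account for the PoC.
az group create --name asr-poc-rg --location westeurope

az network vnet create \
  --resource-group asr-poc-rg \
  --name asr-poc-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name asr-test-subnet \
  --subnet-prefix 10.0.1.0/24

az storage account create \
  --resource-group asr-poc-rg \
  --name asrpocstorage01 \
  --location westeurope \
  --sku Standard_LRS
```

Scripting these makes it much easier to tear the PoC down and rebuild it cleanly.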
The next thing worth doing is creating an account in your Directory Services that VMware and ASR can use.
1) Create AD Service account
2) Add it to the vCenter/Datacenter object that the VMs will be replicated from.
3) Create a role for “Azure Site Recovery” and give it the following permissions:
Here is a good place to stop with this part of the guide. The hard part of this post was making sure all the pre-configuration bits are done and that you are ready to proceed.
In the next post I’ll run through configuring the actual Site Recovery and making the components communicate. The most notable comment I’d have for all of this is that Microsoft have gone quite a distance in making this process as easy as possible. That doesn’t mean, however, that it goes completely without technical caveats which I’ll cover later on in the series.
Until next time!