
My Rubrik Experience

For those who don’t know, Rubrik is an up-and-coming Cloud Data Management platform which essentially provides a converged, scale-out, clustered backup appliance for all of your infrastructure backup needs. If you have been living under a rock for the last 3 years, then please take a look at Rubrik.com.

Some other good reading on the product can be found in the following blog posts, which explain a lot of what I cover here in much more detail:

vBrain.Info

Penguin Punk

Recently I had the pleasure of having this little beauty on loan for a month of testing:

Specifications:

R348 (Brik) – 1 Appliance
Nodes – 4
Disks – 4 SSD + 12 HDD (1 SSD / 3 HDD per node)
Memory – 256 GB (64 GB per node)
CPU – 4 * Intel 8-Core
Network – 4 * Dual-Port 10GbE, 4 * Dual-Port 1GbE, 1 * 1GbE IPMI

Total Usable Capacity – 59.6 TB

Thoughts:

The reason I had my hands on this device was to test the functionality of Rubrik, pure and simple. I hooked it up to a 6-node vSphere 6.5 cluster with 10 TB of FC-attached storage, covering around 100 virtual machines ranging across Windows 10, Windows Server 2008 R2 to 2016, and Linux (RHEL 6/7, Ubuntu, CentOS). I had around a month of “playtime” with a fairly solid test plan to get through.

Simplicity: We had the appliance delivered ahead of time and the onsite engineer came a few days later after a simple rack and stack. Within 2 hours we had the cluster up and running (it would have been quicker if it wasn’t for our network blocking mDNS!). Beautiful, simple deployment.

Configuration: See simplicity! I’d already created a Rubrik Service account in my domain with the correct vCenter permissions. Adding my test cluster was a breeze and the VM discovery happened within minutes. I could have added all my machines to the built-in SLA Domain Protection policies and that would have me good to go, but I wanted to play in depth!

Usability: The navigation on the system is a beautiful HTML5 interface that is really intuitive. If you haven’t seen it, I suggest you take a look. Whilst we had an engineer present, everything was so simple to drive that it felt natural and elegantly put together. One of the things I was really keen on was replicating some archive VM data out to a cloud provider. It is fair to say that within about 10-15 minutes, before the engineer had time to get me a guide, I had configured an archive target to a fresh Azure Blob store I had created. So easy.
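
For anyone wanting to pre-stage the Azure side, the Blob store only takes a couple of commands to create. This is just a sketch with made-up names; the Rubrik archive target then asks for the storage account name, an access key and the container:

    # Create a storage account and a container for the archive target.
    # Resource group, names and location are all illustrative.
    az storage account create --name rubriklabarchive --resource-group lab-rg --location ukwest --sku Standard_LRS
    az storage container create --name rubrik-archive --account-name rubriklabarchive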


Features: Coming from a legacy backup platform that isn’t very well geared towards a modern data center, I was blown away. We went from having only traditional agent-based backups for Linux/Windows to having some awesome benefits such as:

– Snapshots
– Replication
– Archival (Local and Cloud)
– Live Mounts
– Google-like file system search
– SQL DB Point in Time Recovery
– Physical O/S agent recovery
– Well-documented API to consume (see the sketch after this list)

It was quite a big turnaround for such a swift implementation.
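
As a taster of that API, here’s roughly what authenticating and pulling the VM inventory looks like with curl. The endpoint paths are illustrative, from my reading of the v1 API docs at the time, so check your own cluster’s built-in API explorer before relying on them:

    # Authenticate (returns a token), then list the VMware VMs the cluster knows about.
    # Cluster address and credentials are placeholders.
    curl -k -u admin -X POST https://rubrik.lab.local/api/v1/session
    curl -k -H "Authorization: Bearer <token>" "https://rubrik.lab.local/api/v1/vmware/vm?limit=5"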

Summary:

I understand the buzz around Rubrik and how they are a game changer in the Data Management and Backup/Recovery world. For modern data centers that are largely virtualised, this is a product that really must be considered. Given the new 3.2 release, where they provide the ability to back up your cloud workloads using a Rubrik appliance, it really is starting to become a well-rounded and unique solution.

I would highly recommend that anyone looking into the backup space, for their own benefit, makes sure this vendor is reviewed!

I won the EVO:Rail Challenge – VMworld here I come!!

A while back I entered the EVO:Rail Challenge put on by VMware.


I did this for two reasons:

– I was interested to see their hyper-converged infrastructure appliance product in action.
– The motivation of winning a VMworld Full Conference pass!


I’ve always been impressed by the Hands-On-Labs as they give great exposure to a range of products with very little effort. My only bugbear is that I don’t have enough time to fit them all in, as much as I’d love to!

The challenge takes pieces from the HOL-SDC-1428: Introduction to EVO:Rail lab and modifies it to make you fix the purposefully broken aspects of an EVO:Rail deployment. There are a few extra bits in there you have to complete before grabbing a screenshot of your time and ending the lab.

I was contacted in early June to say that I had been shortlisted, thanks to a good completion time, and was asked for my screenshot, which I quickly submitted. The following week I was emailed by the team to inform me that I’d won. The official announcement was made last night! To say I am excited is an understatement!! It will be the second time I have had the pleasure of attending VMworld and I can’t wait.

I’d recommend that anyone interested in EVO:Rail, or in winning a ticket to VMworld, give the EVO:Rail Challenge a go. You never know, you might be quick enough to win!

I’d like to close this post by thanking all the teams and individuals behind the scenes at VMware for making my week! It is an honor to have won and I look forward to seeing people at the event!

How to enable VM Multicast Traffic across multiple hosts on Cisco UCS

Recently I was asked to investigate an issue occurring on some VMs within the environment I look after. The sysadmin informed me that he was unable to receive multicast traffic on VMs in the same VLAN when they were separated onto different hosts in the cluster. However, when the VMs were on the same host, traffic flowed fine!

After doing some reading I found that the issue is with the UCS Fabric Interconnects. Here is what was needed to fix the problem.

Configure an IGMP querier on Nexus 5K

1. SSH into the first Nexus 5k switch.

2. Back up the running configuration.
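
The exact command isn’t shown here, but on NX-OS a quick copy to bootflash does the job (the filename is my own choice):

    nexus5k# copy running-config bootflash:pre-igmp-change.cfg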

3. Check the IGMP configuration on the VLAN that requires multicast traffic.

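In place of the original screenshot, here is roughly what the check looks like, with VLAN 100 standing in for the multicast VLAN. With no querier configured, the querier information comes back empty:

    nexus5k# show ip igmp snooping vlan 100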

4. As no querier is present, create one.

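Again as a sketch rather than the exact screenshot: on recent NX-OS releases the querier sits under the vlan configuration context (older releases configure it under vlan <id> instead), and the querier address just needs to be a valid, unused IP in that VLAN’s subnet. 192.168.100.2 is a stand-in:

    nexus5k# configure terminal
    nexus5k(config)# vlan configuration 100
    nexus5k(config-vlan-config)# ip igmp snooping querier 192.168.100.2
    nexus5k(config-vlan-config)# end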

5. Check the configuration is now correct.

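Something along these lines confirms the querier is now active for the VLAN:

    nexus5k# show ip igmp snooping querier vlan 100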


6. Repeat the above steps 1-5 on the second 5K switch.

Disable IGMP snooping on Nexus 1000v

1. SSH into your Nexus 1000v.

2. Backup the configuration as above.

3. Check the current IGMP snooping status, which is “Enabled” by default.

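On the 1000v the status check is a one-liner; the same command is used again in step 5 to verify the change:

    n1kv# show ip igmp snooping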

4. Disable IGMP snooping on the 1000v.

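Disabling it globally looks like this; don’t forget to save the running config afterwards:

    n1kv# configure terminal
    n1kv(config)# no ip igmp snooping
    n1kv(config)# end
    n1kv# copy running-config startup-config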

5. Verify that IGMP snooping has been disabled.

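Re-running the show ip igmp snooping command from step 3 should now report snooping as disabled.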

Conclusion

By default, VMs in the same VLAN will only exchange multicast traffic if they reside on the same host. This is due to the way the UCS Fabric Interconnects handle multicast traffic and the fact that they only store MAC address information from the blade servers connected to them directly.

The workaround is to enable an IGMP querier on the upstream switches to facilitate multicast traffic for the specific VLAN in question.

I’m not entirely sure that disabling IGMP snooping on the 1000v as a whole is a good idea; it might be best to do it at a per-VLAN level, only where it is required.
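
For reference, the per-VLAN version would look something like this on the 1000v (VLAN 100 again as a stand-in; I haven’t validated this on every release, so check the syntax on yours):

    n1kv# configure terminal
    n1kv(config)# vlan 100
    n1kv(config-vlan)# no ip igmp snooping
    n1kv(config-vlan)# end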

vCenter “FLUP-grade” Part II: Upgrade vCSA 5.5 to 6.0 on VMware Workstation

The second part of my FLUP-grade was to use the upgrade tool to take my vCenter Server Appliance from 5.5 to 6.0! The issues for me were:

– I use VMware Workstation for a nested lab built with AutoLab.
– The tool relies on your vCSA being hosted on an ESXi server.
– I could only find a few blogs covering the deployment of vCSA 6 on Workstation.

I’d love to have a proper lab with real equipment, but sadly that is not the case at this moment in time; one day perhaps! Given that my home machine is only so fast, I don’t like to run many nested VMs on my already virtualised ESXi servers, as things become a bit slow. On the Workstation layer I have:

– A Domain Controller & VUM server
– 2-3 virtual ESXi hosts
– vCSA 5.5
– Openfiler + FreeNAS storage VMs
– A router

In the ESXi layer, I run small, lightweight VMs, mainly for testing features and deployments.

Here is how I upgraded:

1) I downloaded the latest vCSA 6.0 .ISO from VMware and mounted it. From within that ISO, I installed the VMware Client Integration Plugin 6.0.0.

2) Next, I shut down my existing 5.5 vCSA and exported it as an .OVA.
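
I used the Workstation GUI for the export, but the same thing can be scripted with VMware’s OVF Tool if you have it installed (paths are illustrative):

    # Package the powered-off Workstation VM as a single .ova file.
    ovftool "C:\VMs\vCSA55\vCSA55.vmx" "C:\Export\vcsa55.ova"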

3) Whilst this was happening, I increased the Memory on my ESXi hosts to 12GB each and powered them on. I then used the vSphere client to connect to the hosts directly and imported the vCSA to my first host. I checked the networking of the VM and powered it on.

4) After checking services, I temporarily disabled DRS because I didn’t want any VMs to be moved. I then went back to the vCSA 6 .ISO and opened the vcsa-setup.html file.

5) At this stage, my browser (Chrome) opened, but I needed to enable the plugin that I had installed previously.

6) Once the plugin was sorted, I selected Upgrade from the menu.

7) I checked the warning prompt and confirmed that I was on 5.5 and therefore supported. I also fully read and accepted the EULA.

8) Next, I entered the details of the host that I wanted to deploy the new vCSA 6 appliance to.

9) After accepting the SSL certificate warning, I was able to select my lab information for deployment.

10) Then came another SSL and environment check, along with a warning about the Postgres username/password and the SSH port requirement.

11) I then chose the smallest deployment type, for obvious reasons. At this stage, if I hadn’t increased my hosts’ memory I wouldn’t have been able to proceed, as a warning appears stating insufficient memory; the destination host must have more than 8GB for the VM to live!

12) I chose which storage the VM should temporarily live on.

13) I then assigned a static IP address for the vCSA 6 to take on my network. As stated in the text, it’s only temporary and when the whole operation is complete the new vCSA assumes the identity of the existing vCSA!

14) I reviewed the summary and took a screenshot for your pleasure, before hitting the big “Finish” button.

15) The process took a while and went through several stages: extracting/installing packages, migrating data, starting services, etc.

16) Migration was complete, so I logged into my vCSA as if nothing had changed. Yep! All good!

17) At this stage, it was time to shut down my new vCSA 6 and run an export back out of the nested virtual layer. (I felt like Cobb from Inception, only better looking! ;))

18) Getting close now, I imported the vCSA 6 .ovf file back into VMware Workstation.
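
As with step 2, OVF Tool can handle both hops if you prefer the command line to the GUI (host name and paths are illustrative):

    # Export the upgraded appliance from the nested ESXi host...
    ovftool vi://root@esxi01.lab.local/vCSA6 "C:\Export\"
    # ...then convert the resulting .ovf into a Workstation VM.
    ovftool "C:\Export\vCSA6\vCSA6.ovf" "C:\VMs\vCSA6\vCSA6.vmx"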

19) One final check of the network (my main host’s subnet), and I booted it up and crossed my fingers!

It worked! It might not be the prettiest/recommended/sane method, and it is only for my home lab scenario, but I really wanted to practise the 5.5 to 6.0 vCSA upgrade using the proper methods in case it is ever needed in production (and it will be!).

So that ends the chapter of my vCenter FLUP-grade. A bit of a strange journey, but I got there in the end!

vCenter “FLUP-grade” Part I: Migrate vCenter Server 5.5 to vCenter Server Appliance

What happens when you Fling and then Upgrade? My best guess is a FLUP-grade!

A while ago I thought about making changes to my virtual home lab. I have slowly become frustrated with the amount of “stuff” and “things” running on my Windows 2008 R2 vCenter Server, to the point where the thought of booting it up makes me yawn! It’s not useful for getting quick answers or testing something, which is what I want it for most of the time!

I saw the VCS to VCVA converter fling being pimped out a while back by the genius that is William Lam and thought that I’d really like to give it a go! It would enable me to remove my old vCenter Server, have a cleaner estate and let me get stuck in with the vCenter Appliance!

Here is the FLing part of my UPgrade. The aim is to publish Part II, which will then move my environment forward to vCSA 6! I followed the documentation provided with the converter fling, which was excellent! I’m not going to post the entire process, as you can download the instructions in .docx format, but I’ll post the interesting bits.

Requirements:
– vSphere 5.5 – Check!
– VCS and VCSA same version – Check!
– Same hardware on both – Check!
– All VCS components on same host – Check!
– External MS SQL 2008 R2 database – Partial check: I’m running a SQL Express DB and it’s not supported, but I’ll give it a go anyway (it’s only a simple lab!)
– Web Client Plugins registered with AD User – Check!
– Comms on 22, 443 and 445 enabled – Check!
– All Admin/DB credentials ready – Check!

1) After following the guide (stopping services, deploying/configuring the converter, building and snapshotting an empty vCSA), I entered my existing vCenter Server local admin credentials:

2) The migration appliance then collects what it needs from the existing VCS:

3) When prompted, I powered down my VCS, powered up my new VCSA and continued with the install, accepting the SSL key prompt, entering the root password for my VCSA and entering the DNS information.

4) I then proceeded to log in to the VAMI interface (https://VCSAHOST:5480) and set up the embedded database & SSO. Once that was all complete, I came back to the migration tool and entered my AD details, as I had joined my appliance to a domain.

5) I then had to wait for a while as files copied from the migration tool. Then, I was presented with the SQL database login credentials screen. The first time I attempted the DB migration, I selected “Migrate stats/events/tasks”.

6) This didn’t go so well, and the installation hung:

7) After using “Alt+F2” to get to the console, I took a look in /var/log/migrate.log and found this:
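
For anyone else poking at a hung migration, the log can be followed live from that console (the path comes from the fling’s guide):

    # Watch the migration log as it is written.
    tail -f /var/log/migrate.log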

8) It seems that choosing to migrate the stats/events/tasks might not have been a good idea for a simple home lab migration (167,943 rows! Oops!). This process took a LONG time, and after watching it thrash my poor little 1 vCPU for ages, I got bored and left it running overnight while I went to bed 🙂

9) I came down the next morning and the process had got stuck, so I started again and at Step 5 chose NOT to migrate that information!

That was it! I logged into the VAMI and all was well. I followed the final steps of the guide, re-enabling plugins etc., and then gave the new VCSA a reboot for good measure!

All in all, I was very impressed with this tool; it is a superb effort from the engineers involved. The next part of the plan was to upgrade from 5.5 to 6.0!