For those who don’t know, Rubrik is an up-and-coming Cloud Data Management platform which essentially provides a converged, scale-out, clustered backup appliance for all of your infrastructure backup needs. If you have been living under a rock for the last 3 years then please take a look at Rubrik.com.
Some other good reading on the product can be found in the following blog posts, which explain in detail a lot more than I do in this post:
Recently I had the pleasure of having this little beauty for a month for some testing:
R348 (Brik) – 1 appliance
Nodes – 4
Disks – 4 SSD + 12 HDD (1 SSD / 3 HDD per node)
Memory – 256 GB (64 GB per node)
CPU – 4 × Intel 8-core
Network – 4 × dual-port 10GbE, 4 × dual-port 1GbE, 1 × 1GbE IPMI
Total usable capacity – 59.6 TB
The reason I had my hands on this device was to test the functionality of Rubrik, pure and simple. I hooked it up to a 6-node vSphere 6.5 cluster running 10 TB of FC-attached storage, covering around 100 virtual machines ranging across Windows 10, Windows Server 2008 R2 to 2016, and Linux (RHEL 6/7, Ubuntu, CentOS). I had around a month of “playtime” with a fairly solid test plan to get through.
Simplicity: We had the appliance delivered ahead of time and the onsite engineer came a few days later after a simple rack and stack. Within 2 hours we had the cluster up and running (it would have been quicker if it wasn’t for our network blocking mDNS!). Beautiful, simple deployment.
Configuration: See simplicity! I’d already created a Rubrik Service account in my domain with the correct vCenter permissions. Adding my test cluster was a breeze and the VM discovery happened within minutes. I could have added all my machines to the built-in SLA Domain Protection policies and that would have me good to go, but I wanted to play in depth!
Usability: The system presents a beautiful HTML5 interface that is really intuitive to navigate. If you haven’t seen it, I suggest you take a look. Whilst we had an engineer present, everything was so simple to drive that it felt natural and elegantly put together. One of the things I was really keen on was replicating some archive VM data out to a cloud provider. It is fair to say that within about 10-15 minutes, before the engineer had time to get me a guide, I had configured an archive target against a fresh Azure Blob store I had created. So easy.
Features: Coming from a legacy backup platform that isn’t very well geared towards a modern data center, I was blown away. Going from only having traditional agent based backups for Linux/Windows to having some awesome benefits such as:
– Archival (Local and Cloud)
– Live Mounts
– Google-like file system search
– SQL DB Point in Time Recovery
– Physical O/S agent recovery
– Well documented API to consume
That was quite a big turnaround for such a swift implementation.
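That well-documented API deserves a quick illustration. Purely as a sketch (the endpoint path, bearer-token auth and response shape below are my assumptions about Rubrik’s v1 REST API, not something lifted from my testing), listing the SLA Domains from Python using only the standard library might look like this:

```python
import json
import ssl
import urllib.request

def build_sla_domain_url(cluster_address: str) -> str:
    """Build the (assumed) v1 endpoint for listing SLA Domains."""
    return f"https://{cluster_address}/api/v1/sla_domain"

def list_sla_domains(cluster_address: str, token: str):
    """Fetch SLA Domains and return the parsed 'data' list.

    Assumes bearer-token auth and a JSON body with a 'data' key,
    which matches Rubrik's public API docs but is untested here.
    """
    # Lab appliances often ship with self-signed certificates,
    # so certificate verification is relaxed for this sketch only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    req = urllib.request.Request(
        build_sla_domain_url(cluster_address),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, context=ctx, timeout=30) as resp:
        return json.loads(resp.read())["data"]
```

The cluster address and token are placeholders; in a real script you would pull them from your own environment rather than hard-code them.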
I now understand the buzz around Rubrik and why they are a game changer in the Data Management and Backup/Recovery world. For modern data centers that are largely virtualised, this is a product that really must be considered. Given the new 3.2 release, where they provide the ability to back up your cloud workloads using a Rubrik appliance, it really is starting to become a well-rounded and unique solution.
I would highly recommend that anyone looking into the backup space make sure this vendor is reviewed!
A while back I entered the EVO:Rail Challenge put on by VMware.
I did this for two reasons:
– I was interested to see their hyper-converged infrastructure appliance product in action.
– Motivation of winning a VMworld Full Conference pass!
I’ve always been impressed by the Hands-On-Labs as they give great exposure to a range of products with very little effort. My only bugbear is that I don’t have enough time to fit them all in, as much as I’d love to!
The challenge takes pieces from the HOL-SDC-1428: Introduction to EVO:Rail lab and modifies it to make you fix the purposefully broken aspects of an EVO:Rail deployment. There are a few extra bits in there you have to complete before grabbing a screenshot of your time and ending the lab.
I received contact in early June stating that I had been shortlisted, due to having posted a good completion time, and asking for my screenshot – which I quickly submitted. The following week I was emailed by the team to inform me that I’d won. The official announcement was made last night! To say I am excited is an understatement!! It will be the second time I will have had the pleasure of attending VMworld and I can’t wait.
I’d recommend that anyone interested in EVO:Rail, or in winning a ticket to VMworld, give the EVO:Rail Challenge a go. You never know – you might be quick enough to win a ticket!
I’d like to close this post by thanking all the teams and individuals behind the scenes at VMware for making my week! It is an honor to have won and I look forward to seeing people at the event!
Recently I was asked to investigate an issue occurring on some VMs within the environment I look after. The sysadmin informed me that he was unable to receive multicast traffic between VMs in the same VLAN when they were separated onto different hosts in the cluster. However, when the VMs were on the same host, traffic flowed fine!
After doing some reading I found that the issue is with the UCS Fabric Interconnects. Here is what was needed to fix the problem.
Configure an IGMP querier on Nexus 5K
1. SSH into the first Nexus 5K switch.
2. Back up the configuration:
copy running-config bootflash:FILENAME.txt
3. Check the IGMP configuration on the VLAN that requires multicast traffic:
show ip igmp snooping vlan XXX
4. As no querier is present, create one:
vlan configuration XXX
ip igmp snooping querier X.X.X.X
5. Check the configuration is now correct:
show ip igmp snooping vlan XXX
6. Repeat steps 1-5 on the second 5K switch.
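Pulled together, the steps above can be sketched as a single session. The VLAN ID (100) and querier address (10.0.100.2, an otherwise-unused IP in that VLAN’s subnet) are hypothetical values for illustration – substitute your own:

```
! Hypothetical example - VLAN 100, querier 10.0.100.2
copy running-config bootflash:pre-igmp-change.txt
show ip igmp snooping vlan 100
configure terminal
vlan configuration 100
  ip igmp snooping querier 10.0.100.2
end
show ip igmp snooping vlan 100
```

The querier address does not need to belong to an SVI; it simply has to be a valid, unused address in the VLAN so that the switch can source IGMP general queries from it.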
Disable IGMP snooping on Nexus 1000v
1. SSH into your Nexus 1000v.
2. Back up the configuration as above.
3. Check the current IGMP snooping status, which is enabled by default:
show ip igmp snooping
4. Disable IGMP snooping on the 1000v:
no ip igmp snooping
5. Verify that IGMP snooping has been disabled:
show ip igmp snooping
By default, VMs in the same VLAN will only exchange multicast traffic with each other if they reside on the same host. This is due to the way the UCS Fabric Interconnects handle multicast traffic: they only store MAC address information from the blade servers connected to them directly. The workaround is to enable an IGMP querier on the upstream switches, which facilitates multicast traffic for the specific VLAN in question.
I’m not entirely sure that disabling IGMP snooping on the 1000v as a whole is a good idea; it might be best to do it at a per-VLAN level, only where it is required.
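If you do want to scope the change per VLAN rather than globally, something like the following should work. This is untested on the 1000v in my environment, and the exact syntax varies by NX-OS release (older releases use a global `no ip igmp snooping vlan XXX` form instead), so treat it as a sketch with a hypothetical VLAN 200:

```
! Hypothetical per-VLAN alternative - disables snooping on VLAN 200 only
configure terminal
vlan configuration 200
  no ip igmp snooping
end
show ip igmp snooping vlan 200
```

This limits the flooding behaviour to the one VLAN that actually needs multicast, leaving snooping intact everywhere else.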
The second part of my FLUP-grade was to use the upgrade tool to take my vCenter Server Appliance from 5.5 to 6.0! The issues for me were:
– I use VMware Workstation for a nested lab using AutoLabs.
– The tool relies on your vCSA being hosted on an ESXi Server.
– There are only a few blogs for deploying vCSA 6 on Workstation, that I found.
I’d love to have a proper lab with real equipment, but sadly that is not the case at this moment in time; one day perhaps! Given that my home machine is only so fast, I don’t like to run many nested VMs on my already virtualized ESXi servers as things become a bit slow. On the Workstation layer I have:
– A Domain Controller & VUM Server
– 2-3 Virtual ESXi hosts
– vCSA 5.5
– Openfiler + FreeNAS storage VMs
– A Router
In the ESXi layer, I run small lightweight VM’s mainly for testing features and deployments.
Here is how I upgraded:
2) Next, I shut down my existing 5.5 vCSA and exported it as an .OVA.
3) Whilst this was happening, I increased the memory on my ESXi hosts to 12 GB each and powered them on. I then used the vSphere Client to connect to the hosts directly and imported the vCSA to my first host. I checked the networking of the VM and powered it on.
11) I then chose the smallest deployment type for obvious reasons. At this stage, if I hadn’t increased my hosts’ memory I wouldn’t have been able to proceed, as a warning appears stating insufficient memory. The destination host must have more than 8 GB for the VM to live!
13) I then assigned a static IP address for the vCSA 6 to take on my network. As stated in the text, it’s only temporary and when the whole operation is complete the new vCSA assumes the identity of the existing vCSA!
It worked! It might not be the prettiest/recommended/sane method and is only for my home lab scenario, but I really wanted to upgrade from 5.5 to 6.0 vCSA using proper methods in case this is ever needed in Production (and it will be!).
So that ends the chapter of my vCenter FLUP-grade. A bit of a strange journey, but I got there in the end!
What happens when you Fling and then Upgrade? My best guess is a FLUP-grade!
A while ago I thought of making changes to my virtual home lab. I have become slowly frustrated with the amount of “stuff” and “things” running on my Windows 2008 R2 vCenter Server, to the point where the thought of booting it up makes me yawn! It’s not useful for getting quick answers or for testing something, which is what I need most of the time!!
I saw the VCS to VCVA converter fling being pimped out a while back by the genius that is William Lam and thought that I’d really like to give it a go! It would enable me to remove my old vCenter Server, have a cleaner estate and let me get stuck in with the vCenter Appliance!
Here is the FLing part of my UPgrade. The aim is to publish Part II which will then move my environment forward to VCSA 6! I followed the documentation provided with the converter fling, which was excellent! I’m not going to post the entire process as you can download instructions that are in .docx format, but I’ll post the interesting bits.
– vSphere 5.5 – Check!
– VCS and VCSA same version – Check!
– Same hardware on both – Check!
– All VCS components on same host – Check!
– External MS SQL 2008 R2 Database – Partial Check:- I’m running SQL Express DB and it’s not supported. But I’ll give it a go anyway (It’s only a simple lab!)
– Web Client Plugins registered with AD User – Check!
– Comms on 22, 443 and 445 enabled – Check!
– All Admin/DB credentials ready – Check!
3) When prompted, I powered down my VCS, powered up my new VCSA and continued with the install, accepting the SSL key prompt, entering the root password for my VCSA, and entering DNS information.
4) I then proceeded to log in to the VAMI interface (https://VCSAHOST:5480) and set up the embedded database & SSO. Once that was all complete, I came back to the migration tool and entered my AD details, as I had joined my appliance to a domain.
5) I then had to wait a while as files copied from the migration tool. Then I was prompted for the SQL database login credentials. The first time I attempted the DB migration, I selected “Migrate stats/events/tasks”.
6) This didn’t go so well, and the installation hung:
8) It seems that choosing to migrate the stats/events/tasks might not have been a good choice for a simple home lab migration (167,943 rows! Oops!). This process took a LONG time and, after watching it thrash my poor little 1 vCPU for ages, I got bored, left it running overnight and went to bed 🙂
That was it! I logged into the VAMI and all was well. I followed the final steps of the guide; re-enabling plugins, etc and then gave the new VCSA a reboot for good measure!
All in all, I was very impressed with this tool, it is a superb effort from the engineers involved. The next part of the plan was to upgrade from 5.5 to 6.0!