Category: Hyperconverged

VDI on VSAN Upgrade – Part 2: VUM Server

Following on from my last post, I am continuing to upgrade my VDI environment. Next up is VMware Update Manager, as I’ve already upgraded my Windows vCenter Server.

VUM is nice and straightforward. Before you start, I'd recommend being familiar with your VUM box and how it runs: does it use a service account to run the Update Manager service, and how does it connect to the SQL database? In my case, I use a Windows domain managed service account for the service and a standalone SQL login for the database. I double-checked that I had the passwords and that the service was currently working before proceeding with the upgrade. The final step was to back up the VUM database on the SQL server, just in case!
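For reference, a quick way to take that backup from the SQL side is a one-liner like the one below. This is only a sketch: the server name, instance, database name and backup path are placeholders, so adjust them to match your own environment.

# run from a command prompt on (or with access to) the SQL server
sqlcmd -S SQLSERVER\INSTANCE -Q "BACKUP DATABASE [VUM] TO DISK = N'D:\Backups\VUM_PreUpgrade.bak' WITH INIT"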

Upgrade VMware Update Manager Server 

1) Make sure the vCenter 6.0 U1 media is attached to your VUM server.
VUM1

2) Run the installer, select VMware Update Manager Server and click Install.
VUM2

3) Accept the EULA
VUM3

4) Click OK to agree to the system detection/upgrade message.
VUM4

5) Read the support information carefully. I decided to download updates after installation.
VUM5

6) Fill in your vCenter server information at the prompt.
VUM6

7) I received a prompt about my SQL database recovery model. I noted the warning and clicked OK; I'll review this another day.
VUM7
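If you want to check the recovery model yourself before deciding what to do about that warning, a query along these lines will show it (again, the instance and database names are placeholders for my setup):

sqlcmd -S SQLSERVER\INSTANCE -Q "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'VUM'"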

8) I kept the default settings for my Update Manager Server.
VUM8

9) Key in the correct credentials for your SQL server. For me, this was my SQL server service account.
VUM9

10) Accept the database upgrade and tick the box relating to backing up the database.
VUM10

11) Click Install, and then Finish when done.
VUM11

VUM12


That completes the super easy process. At this point, I also installed the latest vSphere Client from the media and then upgraded the Update Manager plugin to the latest version. That way I was able to use the latest version of VUM through the C# client straight away.

Even more exciting is that you can now manage VUM through the Web Client UI, which I will be covering in my next post in this series!

VDI on VSAN Upgrade – Part 1: vCenter Server 5.5 to 6.0 U1

I have posted before about using the VMware fling appliance to migrate from a Windows vCenter Server running 5.5 to the vCSA 5.5 and then upgrade to the vSphere 6.0 vCSA.

Recently, my work has involved upgrading an existing Windows vCenter Server from 5.5 U2 to 6.0 U1. This system runs a three-node VSAN cluster for a non-production VDI environment. I thought it would be good to document the entire upgrade so that others can see how easy it is and follow the same steps. For this environment, I only really have to consider five main components to upgrade in order to bring everything in line with the latest release versions:

– vCenter Server
– VUM Server
– ESXi Hosts
– VSAN (From v1 to v2)
– View (6.0.1 to 6.2)

I'm going to chunk these into a five-post series, simply detailing how to do these steps. It's certainly not difficult; I just hope it is of use to someone.

vCenter Server Upgrade

I was quite surprised at how easy VMware have made this process, although I admit that this environment is a simple one: embedded PSC, single SSO domain, etc. The PSC topology was a consideration, but as this is all standalone it was just a case of getting it up and running on the latest version. Here are some things I did before starting the upgrade process which I'm not going to document:

– Checked that all services were functioning as intended before starting.
– Backed up the vCenter and VUM SQL databases.
– Snapshotted the vCenter/VUM VMs.
– Tested the administrator@vsphere.local credentials and SQL DB service account details to make sure they worked.

Then it was on with the show:

1) Attach the vSphere 6.0 U1 .ISO to my vCenter VM.

2) Run the installer and choose vCenter Server for Windows.

3) I saw a message asking for a reboot due to a previous install. I'm not sure why, but I rebooted before commencing.
VCUp2

4) Following the reboot, I proceeded with the install. The following is just screenshots, as no explanation is needed:

VCUp1

VCUp3

VCUp4

VCUp6

5) I then received an error relating to SQL permissions.
VCUp7

6) A quick check of this VMware KB gave me the answer I needed. Log in to the SQL server and run the following query:

VCUp8
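The screenshot shows the exact statement from the KB; as a general idea only, the sort of permissions the 6.0 installer checks for look like the grants below. The login name is a placeholder for my environment, and the msdb db_owner membership is typically only needed during the install/upgrade itself.

sqlcmd -S SQLSERVER\INSTANCE -Q "GRANT VIEW SERVER STATE TO [vcdb_user]"
sqlcmd -S SQLSERVER\INSTANCE -d msdb -Q "EXEC sp_addrolemember 'db_owner', 'vcdb_user'"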

7) Return to the installer, re-run it, and continue as necessary. I changed my default install and data export locations and left all ports the same.

VCUp10

VCUp11

VCUp9

VCUp12

VCUp13

Done! I then logged into the web client and checked all my SSO configuration and ensured that authentication was working. vCenter was fine and was managing my 5.5 VSAN hosts without an issue! No alerts and everything running well. Next post will be upgrading VMware Update Manager!

NutanixCE: Live Presentation of a Nested Lab!

I haven’t been blogging for long and I’m not great with “web stuff” in general. The main reason I started was to share my experiences, meet new people and learn new things. I’ve heard that it can be a difficult thing to do: keeping up with content, taking the time to write it, getting views and generally keeping yourself out there and motivated. I understand that Rome wasn’t built in a day and that these things take time…

The amount of support I’ve already received from many communities has been astounding. It seems that since I started this journey things have progressed very quickly and it has been amazing to experience; notably, being selected to be one of the official VMworld USA Bloggers!

When I thought things couldn't get any better, I was contacted by a guy by the name of Angelo Luciani (@AngeloLuciani) via the Nutanix Next forum, informing me that he had seen my earlier blog posts and wondering if I'd like to participate in an online hangout where we would demonstrate how to set up the lab, live! Now, I've not really done much public speaking before and I'm fairly new to the whole "put yourself out there" mentality, so I was both very interested and apprehensive at the same time! Angelo was friendly and very understanding, as well as being a top bloke. I'd like to thank him (again) for giving me the opportunity and for sharing the experience with me!

For D-Day, we got in touch early and ran through some tests using Google Hangouts to broadcast – which is actually a really good tool. After a little time getting cameras/mics/screen sharing to work, we were ready and scheduled in for Wednesday 19th August at 21:00 GMT to do the broadcast. Angelo focussed on running the hangout and media, and I had my lab up and tested (several times, to avoid embarrassment).

The whole thing went by very quickly. I was nervous at first, but as the lab started I got into it a bit more and settled down. Thankfully I had no issues with the lab, which is testament to the software and also to my own pre-broadcast checks. The video is available on YouTube via the magic of Hangouts and can be viewed here:

As I say in the broadcast, I’m a lover of technology.

I'm not the best at everything, nor do I ever pretend to know it all – every day is a school day! Nutanix interested me because it is a great SDDC technology and I wanted to try it in a home lab, as I didn't have the opportunity anywhere else. I am also a big fan of VMware VSAN, have used it in several production environments, and believe that both have their merits. I urge any IT infrastructure advocate to get involved with a range of technology, as it helps you understand more, enables you to form your own opinions and broadens your knowledge, making you a better technologist overall!

Starting a blog or presenting to a user forum of technical peers might seem a daunting task, but it is actually good fun. If you enjoy your subject matter then this hopefully shows through to your audience, and everyone benefits from it. I'd recommend that anyone thinking of doing it take the plunge, and I'd always be happy to answer any questions from people who are thinking of getting involved in any shape or form!

NutanixCE: Cluster Creation Error – Could not discover all nodes

This is just a quick post to detail something I came across today. I decided to rebuild my NutanixCE virtual cluster, as there was a new version released on 16/07/2015 and my old cluster was failing to connect via Pulse (a known bug, apparently).

When I tried to create a cluster, as per my previous guide, I saw the error:

“Could not discover all nodes specified. Please make sure that the SVMs from which you wish to create the cluster are not already part of another cluster”

CLUSTERERROR1

I found this a little odd, as these were brand-new nodes on a fresh install. All I'd done was follow my previous guide! So I checked each node and tried to see if I could create a single-node cluster on each manually, using the following command:
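For context, a single-node cluster create from a CVM looks roughly like this; the IP address is just a placeholder for the CVM's own address, and depending on the CE build there may be extra prompts or options for a one-node cluster:

cluster -s 192.168.1.191 create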

This failed on two of my three CVM nodes with the following error:

“WARNING: genesis_utils.py:512 Failed to reach a node where Genesis is up. Retrying….”
CLUSTERERROR2

After some googling, I found a suggestion to restart the Genesis service and check the logs to find out what was happening.

CLUSTERERROR4
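On a CVM, that boils down to something like the following; the log path shown is the standard Nutanix location for the Genesis log:

genesis restart
genesis status
tail -f /home/nutanix/data/logs/genesis.out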

So it seems that the CVM was trying to contact the KVM host and being unsuccessful. Sure enough, when I tried to SSH to my KVM host from the CVM, it also failed with the error:

FIPS mode initialized
Read from socket failed: Connection reset by peer

CLUSTERERROR5

After some further searching, I found someone on the forum who had fixed the issue by regenerating the SSH keys on the KVM host. They didn't explain how to do this, and I'm a bit of a novice with Linux/KVM (the process is easy on an ESXi host), but after a quick dig I found "sshd-keygen" in the /bin folder, which I ran.
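For anyone else hitting this, the fix on the KVM host was essentially just regenerating the host keys. A minimal sketch is below; the restart line is only my assumption of what to try if the new keys don't seem to take effect immediately, as I didn't need it.

# on the KVM host, as root
sshd-keygen
# only if the new keys don't take effect straight away:
service sshd restart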

I then tried to SSH from my CVM to KVM again, and this time it prompted me for a password! Success!

CLUSTERERROR6

I repeated this process (sshd-keygen and then SSH from CVM) on my other broken node. I then retried my cluster creation and it also worked!
CLUSTERERROR7

I thought it would be interesting to document this, as I came across the issue and a combination of forum posts and a little digging of my own paid off. I hope this helps anyone with this issue!

How to create a nested virtual NutanixCE cluster

In my last post, I configured a Nutanix CE cluster node running as a single nested VM on Workstation. My intention was to create a three-node cluster, but unfortunately I was unable to because I built the single-node cluster first and then the extra nodes separately.

NUTANIXCLUSTER

It appears that CE behaves the same way as the full version in that if you are adding extra nodes to a cluster (expansion), the default behavior is to automatically check for an Intelligent Platform Management Interface (IPMI) that is usually built into the main board of your Nutanix Supermicro hardware. Given that my nodes were all virtual, it was a little hard to work around this issue! (I did search long and hard!)

I spoke to @Joep and took his advice on posting on the excellent Nutanix Forum. AdamF helped me out and suggested that I tear down the cluster and start again, but this time – build the cluster at the start with all nodes present. So, here is how I did it in my environment:

NutanixCE Cluster node – build and configuration

1) Quite simply, follow my previous post up to step 10. Ensure that your VM has been built, configured with networking, and is running KVM and the CVM. Then repeat the process for the other nodes so that you have three in total.

CAP41

2) Once they are powered up, check that everything is running by using the following command on each node:

CAP42
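The screenshot shows the check on my nodes; in general terms, you can confirm from each CVM that the services are up with something like this:

# on each CVM
genesis status    # confirms Genesis and the services it manages are running
cluster status    # before creation this should report that the node is not yet part of a cluster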

3) At this stage, I performed a little maintenance and reduced the overhead that the CVMs would have on my limited resources. I shut down the CVM on each KVM host by doing the following:
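A sketch of one way to do this from the KVM host, using libvirt; the CVM's domain name below is a placeholder, and virsh list shows the real one on your host:

# on the KVM host
virsh list --all                     # note the CVM's domain name
virsh shutdown NTNX-xxxxxxxx-A-CVM   # gracefully shut down the CVM (name is a placeholder)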


4) Once the VM was shut down, I modified its settings to use only 6GB of RAM instead of 12GB:

CAP54
Using the vi editor, amend the memory sections to reflect the new allocation:
CAP55
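For reference, the memory elements in the CVM's libvirt domain XML end up looking something like the lines below for 6GB. libvirt expresses the value in KiB, so the figures here are just an example (6 x 1024 x 1024 KiB):

<memory unit='KiB'>6291456</memory>
<currentMemory unit='KiB'>6291456</currentMemory>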

5) Ensure that the configuration is saved (:w!), and then you can start the VM. Repeat these steps for all three CVM virtual machines:
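Starting the CVM back up from the KVM host is then just the following (domain name again a placeholder):

virsh start NTNX-xxxxxxxx-A-CVM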

6) Now it is time to create the cluster. Log in to one of the CVM machines via SSH and enter the following:

CAP43

CAP44

CAP45
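As a rough idea of what the screenshots above show, cluster creation takes the CVM IPs of all the nodes in one go; the addresses below are placeholders from my lab range:

cluster -s 192.168.1.191,192.168.1.192,192.168.1.193 create
cluster status    # confirm the services come up on every CVM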

7) Now that the cluster is built and running, enter the ncli prompt to set up a few things:

CAP46

CAP25
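The screenshots show the settings I applied; as a general example, things like the cluster name and DNS servers can be set from the ncli prompt along these lines. The name and address are placeholders, and the exact parameter names may vary between releases, so check ncli's built-in help for your build.

ncli
cluster edit-params new-name=NTNX-CE-LAB
cluster add-to-name-servers servers=192.168.1.1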

8) Once the cluster was started and configured, I then logged in to the master node (Zeus leader) via the web GUI (step 13 onwards in my previous blog post).

CAP51

Safely shutdown and start up cluster nodes

I found out that it is not wise to just shut down your KVM nodes through VMware Workstation; it can have pretty disastrous consequences! Therefore, I'm just going to quickly run through shutting down the cluster when you are finished.

1) Using Prism, make sure that any non-system VMs are shut down across the cluster.

2) Log in to any CVM node that is part of the cluster (I logged in to the master, but I do not believe it matters). Stop the cluster:

CAP52

CAP56
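For reference, the stop itself is a single command run from the CVM:

cluster stop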

3) At this stage, it is safe to log in to each CVM node and run a shutdown command.

CAP53
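Nutanix includes a wrapper script for shutting a CVM down cleanly; a minimal sketch of that step, run on each CVM in turn:

cvm_shutdown -P now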

4) When all of the CVMs are shut down, it is safe to shut down the guest O/S through Workstation (shut down the KVM hosts).

5) Starting the cluster is easier. Power on each KVM virtual host and wait a few minutes for the CVM and services to start. Once services are up, start the cluster!

CAP35
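Bringing it all back is the reverse; once the CVMs have booted, from any one of them:

cluster start
cluster status    # wait until every service shows as UP on each CVM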

That is it, really. I hope to post further articles about running some nested VMs within the nested environment, and more about the Nutanix platform and how to use it! Hopefully that will come in time as I gain more exposure to it.