LEAVING THE CLOUD: PORTING AZURE VMS TO IN-HOUSE SERVERS

Moving everything to the cloud has been the ongoing trend these past years. Now, if only I could put my nephew into the cloud whenever he misbehaves, and then download him when he’s learned his lesson. With the influx of cloud migration projects, many IT practitioners have become savvier at migrating physical in-house servers to cloud hosting solutions such as Amazon Web Services (AWS) or Microsoft’s Azure. Far less attention is given to companies opting to pull their servers back from the cloud into their own physically hosted data centers. There are a variety of reasons why companies resort to this, such as wanting more control and insight, centralized management, high costs, and latency issues, among others. This move is more popularly known as cloud repatriation.

Should you finally decide that moving back to your own physical in-house setup is the more viable approach, I am going to show you the procedure here. The on-premises environment we will be using is Windows Server 2016 with a Hyper-V cluster.

There are a couple of ways to move a VM from Point A to Point B. We will focus on a simple but effective method that can be applied to small to medium migrations. We will be doing everything manually, but we can optimize a good deal of the tasks later on.

Keep a detailed and up-to-date server and network configuration of the cloud environment

Consolidate all the details that will be critical during the cloud repatriation process, such as vCPU count, disk space, memory, number of NICs, IP addresses, and other relevant configuration. You will have a bad day if you suddenly forget how many processors or how much disk space your cloud servers were using after the cloud setup has been fully removed and closed!

Step 1

In this article, our goal is to transition the entire SharePoint test environment, and the first task is to check how many VMs are involved in the process. If you have organized your environment using resource groups, this step is a piece of cake: open the resource group and all the VMs in use will be listed. From the list below, we can see that we need to migrate five VMs.
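
If you prefer a scripted inventory, here is a minimal sketch using the Az PowerShell module (assuming it is installed and you are already signed in; the resource group name SPTest-RG is just a placeholder):

    # List every VM in the resource group together with its size,
    # which helps when sizing the on-premises replacements
    Get-AzVM -ResourceGroupName "SPTest-RG" |
        Select-Object Name, @{Name="Size";Expression={$_.HardwareProfile.VmSize}} |
        Format-Table -AutoSize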

Step 2

The second step is to stop all the VMs that will be migrated. Keep in mind that this will bring the service down, so the outage must be planned and, of course, your end users must be informed. To do that, just click on the desired VM, and on the new blade click Stop and then Yes.
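
The same step can be scripted. A minimal sketch, again assuming the Az module and the placeholder resource group name, that stops every VM in the group:

    # Stop (deallocate) every VM in the resource group.
    # -Force skips the confirmation prompt for each machine.
    Get-AzVM -ResourceGroupName "SPTest-RG" |
        ForEach-Object { Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force }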

Step 3

The final step of the assessment is to identify the disk names. This can be done by clicking on Disks. On the new blade, the OS disk and the data disks in use by the current VM will be listed. Click on each entry on the left and copy the disk name shown on the blade on the right side. (There is a copy button available when you select the field.)
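
Those names can also be read straight from the VM objects. A small sketch (same assumptions as above) that prints the OS disk and data disk names for every VM in the group:

    # For each VM, print the OS disk name and any data disk names
    Get-AzVM -ResourceGroupName "SPTest-RG" | ForEach-Object {
        [PSCustomObject]@{
            VM        = $_.Name
            OsDisk    = $_.StorageProfile.OsDisk.Name
            DataDisks = ($_.StorageProfile.DataDisks.Name -join ", ")
        }
    } | Format-Table -AutoSize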

The results of the current assessment should be a table like the one below, where we have all the information required to create the new VMs on-premises and associate the correct disks with them.

Step 4

Copying the VHD files

The second phase consists of using the Microsoft Azure Storage Explorer utility. The installation process is simple and does not require any additional configuration; just use the default values.

To make things easier, we will create a volume to hold the VMs from the SharePoint test environment, which happens to be all the VMs that are in Azure. That volume is being shared among all nodes of the Hyper-V cluster on-premises.

After opening Microsoft Azure Storage Explorer, click on the second icon to configure an account. This process authenticates the tool with the Azure service. Once authenticated, a list of all storage objects will be shown on the left side. Expand the ones related to the VMs being transitioned, then expand Blob Containers, and then vhds. Select each VHD listed on the right side and click Download.
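
If you would rather script this copy as well, the Az.Storage cmdlets can download the same VHD blobs. This is only a sketch: the storage account name, account key, and destination path are placeholders, with the destination being the clustered volume mentioned above:

    # Build a context for the storage account that holds the vhds container
    $ctx = New-AzStorageContext -StorageAccountName "sptestsa" -StorageAccountKey "<account-key>"

    # Download every VHD blob in the vhds container to the clustered volume
    Get-AzStorageBlob -Container "vhds" -Context $ctx |
        Where-Object { $_.Name -like "*.vhd" } |
        ForEach-Object {
            Get-AzStorageBlobContent -Container "vhds" -Blob $_.Name `
                -Destination "C:\ClusterStorage\Volume1\" -Context $ctx
        }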

Step 6

Each download will be listed at the bottom right with its copy progress. Wait for all of them to complete before moving forward to the next phase of the transition.

Finishing the transition

The easiest way to create a VM using an existing disk that is already stored on the volume where the future virtual machine will reside is to use the Failover Cluster Manager. This is the way to go even if you have Virtual Machine Manager implemented.

The process is simple. Log on to the Failover Cluster Manager, right-click on Roles, click on Virtual Machines…, and then click on New Virtual Machine… A new window listing all nodes will be displayed. Click on any available node and click OK.

In the Before you Begin welcome page, just click Next.

Step 7

In the Specify Name and Location page, type in the name of the VM and define the location to be the root folder of the volume that contains all disks. Click Next.

In the Specify Generation page, select Generation 1 for now and click Next. Note: We choose Generation 1 because virtual machines in Azure are created using the VHD format. We could convert the disks to VHDX and then use Generation 2, but for the sake of simplicity we will not change the disk format in this article.

In the Assign Memory page, use the same amount of memory that the VM used to have on Azure and click Next.

In the Configure Networking page, select the Virtual Switch used by the VMs and click Next.

In the Connect Virtual Hard Disk page, select Use an existing virtual hard disk and click Browse. Go to the volume that has the OS Disk that we already copied from Azure and select it. Click Finish. Note: For now, we will configure just the OS Disk.
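
For admins who prefer to skip the wizard, the same VM can be created with the Hyper-V PowerShell module. This is only a sketch: the VM name, memory size, virtual switch name, and paths are placeholders that should be replaced with the values collected during the assessment:

    # Create a Generation 1 VM on the clustered volume and attach
    # the OS VHD that was downloaded from Azure
    New-VM -Name "SPTEST-WFE01" `
           -Generation 1 `
           -MemoryStartupBytes 8GB `
           -Path "C:\ClusterStorage\Volume1\" `
           -VHDPath "C:\ClusterStorage\Volume1\vhds\sptest-wfe01-osdisk.vhd" `
           -SwitchName "External-vSwitch"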

Step 8

A new wizard to configure High Availability will be displayed. Just use the default values to complete this part of the process.
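
If the VM was created with PowerShell as in the earlier sketch, the equivalent of this wizard is a single cmdlet from the FailoverClusters module (the VM name is again a placeholder):

    # Turn the new VM into a highly available clustered role
    Add-ClusterVirtualMachineRole -VirtualMachine "SPTEST-WFE01"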

Step 9

Now, let’s go back to the main page of the Failover Cluster Manager. Right-click on the virtual machine that we have just created and click on Settings. On the new page, select IDE Controller 0, select Hard Drive, and click Add. A new disk will show up underneath the existing IDE Controller. Click on Browse… and select the data disk. You can repeat this process if the VM has additional disks.

Before hitting OK on the VM Settings, click on Processor and match the number of CPUs that this new machine will have with the Azure configuration.
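
Both settings can also be applied from PowerShell. A minimal sketch, with a placeholder VM name, data disk path, and CPU count:

    # Attach the data disk copied from Azure to IDE controller 0
    Add-VMHardDiskDrive -VMName "SPTEST-WFE01" `
                        -ControllerType IDE `
                        -ControllerNumber 0 `
                        -Path "C:\ClusterStorage\Volume1\vhds\sptest-wfe01-datadisk1.vhd"

    # Match the vCPU count the VM had in Azure
    Set-VMProcessor -VMName "SPTEST-WFE01" -Count 4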

Testing and final touches

Now that the VM has been created, it is time to fire it up. After the initial boot, the administrator must take care of a few minor details:

  • Page file
    By default, Azure uses temporary storage on drive D:, and as you may have noticed, we didn’t copy the page file. We need to create a new one or configure it properly.
  • Time zone configuration
    Make sure that the time zone is set according to the site hosting the new VM.
  • IP configuration
    The VM is using DHCP because we created a new VM, which associated a new network adapter with the operating system. Make sure to configure the IP (see the sketch after this list), wait for DNS replication to take place, and then update the entries.
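
The time zone and IP items can be handled from an elevated PowerShell prompt inside the guest. A minimal sketch; the time zone ID, interface alias, IP addresses, and DNS servers are placeholders that must match your site:

    # Set the time zone for the site hosting the VM
    Set-TimeZone -Id "Singapore Standard Time"

    # Assign the static IP planned for this server
    New-NetIPAddress -InterfaceAlias "Ethernet" `
                     -IPAddress "192.168.10.21" `
                     -PrefixLength 24 `
                     -DefaultGateway "192.168.10.1"

    # Point the adapter at the on-premises DNS servers
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
                               -ServerAddresses "192.168.10.5","192.168.10.6"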

One last piece of advice: after testing the entire process and making sure that your VMs are up and running in the on-premises environment, it is time to do some housecleaning. Specifically, this means removing the former VMs from Microsoft Azure.

 
