It’s no secret that one of the many benefits of using vSphere is vMotion, the ability to migrate running workloads (with no downtime) from one physical host to another. In vSphere 6.0 we announced the ability to perform long-distance vMotion and cross vCenter vMotion. This paved the way for customers to perform full-on live vMotions from their on-premises data centers to VMware Cloud on AWS. This is truly application mobility at its finest! Customers can now vMotion workloads bi-directionally, both from the vSphere Client and from tools such as PowerCLI (scripts below).

 

Prerequisites:

  • Direct Connect between Datacenter and VMware Cloud on AWS
  • Layer 2 stretch network (either using NSX on-prem or the free standalone NSX Edge appliance)
  • Layer 3 IPSEC VPN to the VMC Management Network
  • Hybrid Linked Mode (required if you perform vMotions from the UI; can be skipped if you only use PowerCLI)
  • Configure VMC Firewall for vMotion

 

Direct Connect

This is not a big hurdle, as many customers already have Direct Connect in place. As you can see in the picture below, we have our Virtual Interfaces (VIFs) enabled and attached to this SDDC. Once Direct Connect is attached, we can go ahead and enable our Layer 2 stretch network.

 

Management Network Firewall Rules

To allow vMotion traffic to enter and exit the Management Network in VMware Cloud on AWS, we need to establish two firewall rules: ingress and egress. In the ‘Network’ tab of the VMC console, go to the Management Network and expand ‘Firewall Rules’. For the ingress rule, allow vMotion (TCP 8000) in from your on-prem network CIDR and set the destination to ‘ESXi’. For the egress rule, the source will be ‘ESXi’ and the destination will be the on-prem network CIDR. Here I let any traffic back through.
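If you want to sanity-check the rules before attempting a migration, a quick reachability test helps. This is a minimal sketch, assuming you run it from a machine that can route to the VMC management CIDR (for example, over the L3 VPN); the IP address below is a placeholder for one of your SDDC’s ESXi hosts.

# Placeholder ESXi management IP in VMC - replace with a host from your SDDC
Test-NetConnection -ComputerName 10.2.32.4 -Port 8000   # vMotion traffic (TCP 8000)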

 

Layer 2 VPN

As mentioned previously in this post, the L2VPN consists of NSX in VMware Cloud on AWS (this part is pretty much configured for you) and either a standalone NSX Edge appliance or full NSX on-prem. I will not cover the steps to configure L2VPN in NSX here, but a quick Google will give you loads of information on it. You can also check out this doc HERE. Once you’ve enabled the L2 VPN and see the ‘Connected’ status, you’re ready to move on.

Layer 3 VPN

VMC has two separate networks: Management (used by vCenter, ESXi, NSX, etc.) and Compute (where all the customer workloads are placed). The L2VPN allows us to stretch the compute network between on-prem and VMC. However, for Management communication to occur, we still need VPN connectivity to the Management network. This is where the Layer 3 VPN comes into play. Once your VPN has come online, we can move on and configure Hybrid Linked Mode.

 

Hybrid Linked Mode

Hybrid Linked Mode allows users to manage their on-prem environment AS WELL AS their cloud environment, from a single pane of glass. As you can see in the image below, I have my VMware Cloud on AWS SDDC at the top, followed by my on-prem environment below it. This is because I went through the setup steps to add the on-prem identity store to my Cloud SDDC. Now I can log in to my VMC vCenter with my on-prem credentials, see both environments, and even perform vMotions from this console (keep reading!).

TIME TO vMOTION!

Alright! Now that we’ve gone through our prereqs, we can start to use our wonderful vMotion. Within the vSphere Client, right-click the workload you want to vMotion up to VMware Cloud on AWS and click ‘Migrate’. Select ‘Change both compute resource and storage’, select the desired resource pool in VMC, select the WorkloadDatastore, select the desired folder, choose the correct stretched network, and finish. You’ll see the vMotion task begin in the vSphere Client, followed by the VM appearing in the VMC vCenter. Note that once the vMotion is complete, the vSphere Client icon for that VM will look like it is powered off. It is still powered on; a quick click of the refresh icon at the top of the client will show it correctly.

 

So, that was cool: live-migrating workloads across data centers and up to the cloud with just a few clicks of the mouse. I’ve done these migrations back and forth because I find it so exciting *nerd alert*. But what if you want to migrate VMs in bulk? Nobody wants to sit there and migrate workloads one VM at a time. We’ve got a great PowerCLI script for you.

On-Prem to VMC

<<Direct Link to Github Script>>

If you do not have the latest version of PowerCLI installed, open PowerShell as administrator and run ‘Install-Module VMware.PowerCLI’. This will allow you to run the following code.

All you need to do here is edit the Variables section with the values that correspond to your environment. Also, the values in the script are fake, so don’t bother trying to log in to anything with these credentials 🙂
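Since the script itself is linked rather than embedded here, below is a minimal PowerCLI sketch of what a bulk on-prem to VMC migration can look like. It is not the linked script: every server name, credential, VM name, resource pool, datastore, folder, and network name is a placeholder, and the port group lookup assumes the stretched network appears as a distributed port group in your SDDC (adjust the lookup if it does not).

# Minimal sketch of a bulk on-prem -> VMC live migration (placeholder values throughout)
Import-Module VMware.PowerCLI

# --- Variables (edit these) ---
$OnPremVC      = "vcenter.onprem.local"                  # on-prem vCenter
$CloudVC       = "vcenter.sddc-12-345.vmwarevmc.com"     # VMC vCenter
$OnPremCred    = Get-Credential -Message "On-prem vCenter credentials"
$CloudCred     = Get-Credential -Message "VMC cloudadmin credentials"
$VMsToMigrate  = @("App01", "App02", "Web01")            # VMs to move
$DestResPool   = "Compute-ResourcePool"                  # workload resource pool in VMC
$DestDatastore = "WorkloadDatastore"                     # workload datastore in VMC
$DestFolder    = "Workloads"                             # workload folder in VMC
$DestNetwork   = "Stretched-Network"                     # the L2-stretched network

# --- Connect to both vCenters in the same session ---
$src = Connect-VIServer -Server $OnPremVC -Credential $OnPremCred
$dst = Connect-VIServer -Server $CloudVC  -Credential $CloudCred

# --- Look up destination objects in VMC ---
$pool      = Get-ResourcePool -Server $dst -Name $DestResPool
$datastore = Get-Datastore    -Server $dst -Name $DestDatastore
$folder    = Get-Folder       -Server $dst -Name $DestFolder
$portgroup = Get-VDPortgroup  -Server $dst -Name $DestNetwork

# --- Live-migrate each VM across vCenters ---
foreach ($name in $VMsToMigrate) {
    $vm  = Get-VM -Server $src -Name $name
    $nic = Get-NetworkAdapter -VM $vm
    Move-VM -VM $vm -Destination $pool -Datastore $datastore `
            -InventoryLocation $folder -NetworkAdapter $nic -PortGroup $portgroup
}

Disconnect-VIServer -Server $src, $dst -Confirm:$false

Because both vCenters are connected in the same PowerCLI session, Move-VM performs each migration as a live cross vCenter vMotion.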

 

 

VMC to On-Prem

<<Direct Link to Github Script>>

This is the same type of script as the previous one, but the variables are set up for the opposite direction.
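Conceptually, the only change from the sketch above is that source and destination swap; a hedged illustration, again with placeholder names:

# The cloud vCenter is now the source and the on-prem vCenter is the destination
$src = Connect-VIServer -Server $CloudVC  -Credential $CloudCred
$dst = Connect-VIServer -Server $OnPremVC -Credential $OnPremCred
# ...and the Get-ResourcePool / Get-Datastore / Get-Folder / Get-VDPortgroup lookups
# now point at your on-prem cluster, datastore, folder, and stretched port group.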

 

Written by

Brian

Brian Graf is a Sr. Technical Marketing Manager for VMware Cloud on AWS at VMware. He has also worked on ESXi Lifecycle and PowerCLI automation, and has been the Product Manager for vSphere DRS and HA. Brian is co-author of the PowerCLI Deep Dive 2nd edition book and a Microsoft MVP.
