My Experience Upgrading my VCPP Cloud to vSphere 7

I just recently finished upgrading my entire environment to vSphere 7 and wanted to share my experience. Overall, the plan I had worked flawlessly with zero issues. I waited a little longer than some to upgrade my environment, but when VMware released Update 1 for both vCenter and ESXi, I felt it was time to make the move. When planning an upgrade like this, be sure that all the different products being used in the environment are compatible with vSphere 7. Since the environment being upgraded is a multi-tenant cloud, there are several pieces that have to be checked before attempting an upgrade. The goal of this post is to provide a sample blueprint for a VCPP cloud upgrade to vSphere 7. Let's start by identifying all the products we use and verifying which version of each is compatible with vSphere 7.

For this I leveraged the VMware Product Interoperability Matrix site. https://www.vmware.com/resources/compatibility/sim/interop_matrix.php

I won't go into a lot of detail here; I figure we have all used this site before. However, it is invaluable when it comes to planning VMware upgrades. What I'll do next is list each product I use, the version it was running, and the version it needed to be at to be compatible with vSphere 7.
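Before diving into the list, one quick aside: for the vSphere pieces themselves you can pull the running versions straight out of the environment with PowerCLI and check them against the matrix. This is only a sketch, and the vCenter name is a placeholder for your own:

  # Sketch: list the running vCenter and ESXi versions to check against the interoperability matrix
  # (vcenter.example.com is a placeholder for your own vCenter)
  Connect-VIServer -Server vcenter.example.com
  $global:DefaultVIServer | Select-Object Name, Version, Build
  Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name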

vRealize Log Insight was running on version 8.0. I updated it to 8.2. Here is a link to the release notes. https://docs.vmware.com/en/vRealize-Log-Insight/8.2/rn/vRealize-Log-Insight-82.html

vRealize Operations was running on version 8.0. I updated it to 8.2. Here is a link to the release notes. https://docs.vmware.com/en/vRealize-Operations-Manager/8.2/rn/vRealize-Operations-Manager-82.html

vRealize Orchestrator was running version 8.0. I updated it to 8.2. Here is a link to the release notes. https://docs.vmware.com/en/vRealize-Orchestrator/8.2/rn/VMware-vRealize-Orchestrator-82-Release-Notes.html

NSX-V was running version 6.4.6. I updated it to version 6.4.8. Here is a link to the release notes. https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/rn/VMware-NSX-Data-Center-for-vSphere-648-Release-Notes.html

VMware Cloud Director was running version 10.0. I updated it to 10.2. Here is a link to the release notes. https://docs.vmware.com/en/VMware-Cloud-Director/10.2/rn/VMware-Cloud-Director-102-Release-Notes.html

Veeam Backup & Replication was running version 10 GA. I updated it to version 10 Cumulative Patch 2. Here is some info on that patch. https://www.veeam.com/kb3161?ad=in-text-link

After these were all updated, it was time for vCenter and the ESXi hosts. Here are the release notes for those two:

https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-701-release-notes.html

https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-701-release-notes.html

However, for the ESXi 7.0 U1 software I used the HPE customized image. Here are those release notes as well. https://psnow.ext.hpe.com/doc/a00106627en_us

Upgrade Process to vSphere 7.0U1

Here are a couple of other KB articles I used to help plan this upgrade.

Update sequence for vSphere 7.0 and its compatible VMware products (78221)
vSphere 7 Upgrade Best Practices (78205)

As for the update process, I followed the KB article mentioned above and updated the products in the order that VMware recommended.

Here are some quick instructions on how to update each of the products. As always, I take a snapshot of the appliances before starting the updates.
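If you would rather script those pre-upgrade snapshots than take them by hand, a minimal PowerCLI sketch could look something like this. The appliance names are placeholders for your own VMs:

  # Sketch: take a pre-upgrade snapshot of each management appliance (VM names are placeholders)
  Connect-VIServer -Server vcenter.example.com
  $appliances = 'vrli-01', 'vrops-01', 'vro-01', 'nsx-manager-01', 'vcd-cell-01'
  foreach ($name in $appliances) {
      # No memory snapshot; vRealize Orchestrator 8.x in particular does not support memory snapshots
      New-Snapshot -VM (Get-VM -Name $name) -Name 'pre-vSphere7-upgrade' -Memory:$false -Quiesce:$false
  }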

Updating vRealize Log Insight

Log into your My VMware account and download the PAK file. Then log into Log Insight.

  1. Navigate to the Administration tab.
  2. Under Management, click Cluster.
  3. Click Upgrade from PAK to upload the .pak file.
  4. Accept the new EULA to complete the upgrade procedure.

After the master node upgrade process is complete, you can view the remaining upgrade process, which is automatic.

Check for the email sent to the Admin to confirm the upgrade completed successfully.

Updating vRealize Operations

Log into your My VMware account and download the PAK file.

  1. Log into the master node vRealize Operations Manager administrator interface of your cluster at https://master-node-FQDN-or-IP-address/admin.
  2. Click Software Update in the left pane.
  3. Click Install a Software Update in the main pane.
  4. Follow the steps in the wizard to locate and install your PAK file.
  5. This updates the OS on the virtual appliance and restarts each virtual machine. Note: When you upgrade to vRealize Operations Manager 8.2 from a version prior to 8.0, the base OS automatically changes to Photon. Any customization done to the OS, for example files or directories created on the root partition such as ~/.ssh/authorized_keys of the vRealize Operations Manager appliance, is deleted after the upgrade.
  6. Wait for the software update to complete. When it does, the administrator interface logs you out.
  7. Read the End User License Agreement and Update Information, and click Next.
  8. Click Install to complete the installation of software update.
  9. Log back into the master node administrator interface. The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it.
  10. Clear the browser cache, and if the browser page does not refresh automatically, refresh the page. The cluster status changes to Going Online. When the cluster status changes to Online, the upgrade is complete.

Updating vRealize Orchestrator

Download and mount the ISO image:

  1. Download the ISO image from the official VMware download site.
  2. Connect the CD-ROM drive of the vRealize Orchestrator Appliance virtual machine in vSphere. See the vSphere Virtual Machine Administration documentation. Note: After connecting the CD-ROM drive, navigate to your vRealize Orchestrator Appliance VM settings page and verify that Connect At Power On is enabled.
  3. Mount the ISO image to the CD-ROM drive of the vRealize Orchestrator Appliance virtual machine in vSphere. See the vSphere Virtual Machine Administration documentation.
Procedure
  1. Log in to the vRealize Orchestrator Appliance command line as root.
  2. Run the blkid command, and note the device name for the vRealize Orchestrator Appliance CD-ROM drive.
  3. Mount the CD-ROM drive:
     mount /dev/xxx /mnt/cdrom
     Important: For clustered vRealize Orchestrator deployments, you must perform steps 2 and 3 on all nodes in the cluster.
  4. Back up your vRealize Orchestrator deployment by taking a virtual machine (VM) snapshot. Caution: vRealize Orchestrator 8.x does not currently support memory snapshots. Before taking the snapshot of your vRealize Orchestrator deployment, verify that the Snapshot the virtual machine's memory option is disabled.
  5. To finish the upgrade, run the vracli upgrade exec -y --profile lcm --repo cdrom:// command on one of the nodes in your deployment. Note: For vRealize Orchestrator deployments authenticated with vSphere, enter the credentials of the user who registered your deployment with the vCenter Single Sign-On (SSO) service. Alternatively, you can export the SSO password as an environment variable, which is useful when you use an automated script to upgrade multiple vRealize Orchestrator deployments. To export the password, run export VRO_SSO_PASSWORD=your_sso_password.

Updating NSX-V

My environment has multiple vCenters, so for this part I had to follow the cross-vCenter section of the documentation.

To upgrade NSX Data Center for vSphere in a cross-vCenter environment, you must upgrade the NSX components in the following order:

  1. Primary NSX Manager appliance
  2. All secondary NSX Manager appliances
  3. NSX Controller cluster
  4. Host clusters
  5. NSX Edge
  6. Guest Introspection

Procedure

Primary NSX Manager Upgrade
  1. Log in to the NSX Manager virtual appliance.
  2. From the home page, click Upgrade.
  3. Click Upload Bundle, and then click Choose File. Browse to the VMware-NSX-Manager-upgrade-bundle-releaseNumber-NSXbuildNumber.tar.gz file. Click Continue to start the upload. The upload status displays in the browser window.
  4. If you want to start the upgrade later, click Close in the Upgrade dialog box. When you are ready to start the upgrade, navigate to Home > Upgrade and click Begin Upgrade.
  5. In the Upgrade dialog box, select whether you want to enable SSH, and whether you want to participate in VMware's Customer Experience Improvement Program ("CEIP"). Click Upgrade to start the upgrade. The upgrade status displays in the browser window. Note: The Upgrade dialog box displays a message indicating that the automatic backup has been taken.
  6. Wait until the upgrade procedure finishes and the NSX Manager login page appears.
  7. Log in to the NSX Manager virtual appliance again, and from the home page click Upgrade. Confirm that the upgrade state is Complete, and that the version and build number in the top right match the upgrade bundle you just installed.
Secondary NSX Manager Upgrade
  1. Repeat the same procedure on each secondary NSX Manager appliance: log in to the appliance, upload the same VMware-NSX-Manager-upgrade-bundle-releaseNumber-NSXbuildNumber.tar.gz bundle, and start the upgrade from Home > Upgrade.
  2. When the upgrade finishes, log back in and confirm that the upgrade state is Complete and that the version and build number match the upgrade bundle you installed.
NSX Controller Cluster

The NSX Controller upgrade causes an upgrade file to be downloaded to each controller node. The controllers are upgraded one at a time. While an upgrade is in progress, the Upgrade Available link is not clickable, and API calls to upgrade the controller cluster are blocked until the upgrade is complete.

  1. Click Upgrade Available in the Controller Cluster Status column. The controllers in your environment are upgraded and rebooted one at a time. After you initiate the upgrade, the system downloads the upgrade file, upgrades each controller, reboots each controller, and updates the upgrade status of each controller.
  2. Monitor the progress of the upgrade. You can view the cluster upgrade progress in the Controller Cluster Status column in Installation and Upgrade > Management > NSX Managers, and the upgrade progress of each individual controller node in the Upgrade Status column in Installation and Upgrade > Management > NSX Controller Nodes.
Upgrade the Host Clusters
  1. In the vSphere Web Client, navigate to Home > Networking & Security > Installation and Upgrade and select the Host Preparation tab.
  2. For each cluster that you want to upgrade, click Upgrade (in NSX 6.4.0 the link is labeled Upgrade Available; in 6.4.1 and later it is labeled Upgrade). The cluster displays Installing NSX or Installing, and the hosts display In Progress.
  3. If the cluster and host status displays Not Ready, click Not Ready to display more information, and then click Resolve all to attempt to finish the VIB installation. The hosts are put in maintenance mode, and rebooted if necessary, to finish the upgrade. When the upgrade is finished, each host displays a green check mark and the upgraded NSX version.
  4. If the Resolve action fails when DRS is enabled, the hosts might require manual intervention to enter maintenance mode (for example, due to HA requirements or DRS rules). In this case, the upgrade process stops and the cluster displays Not Ready again. Click Not Ready to display more information, check the hosts in the Hosts and Clusters view, make sure that the hosts are powered on, connected, and contain no running VMs, and then retry the Resolve action. When the upgrade is finished, each host displays a green check mark and the upgraded NSX version.
  5. If the Resolve action fails when DRS is disabled and you are upgrading from NSX 6.3.0 or later with ESXi 6.0 or later, you must manually put the hosts into maintenance mode to finish the upgrade: in Hosts and Clusters, right-click each host and select Maintenance Mode > Enter Maintenance Mode.
  6. Navigate to Networking & Security > Installation and Upgrade > Host Preparation to monitor the upgrade. The upgrade starts automatically when the hosts enter maintenance mode. The cluster displays Installing NSX or Installing. If you do not see the status, refresh the page.
  7. When the upgrade is finished, each host displays a green check mark and the upgraded NSX version.
  8. Take the hosts out of maintenance mode: in Hosts and Clusters, right-click each host and select Maintenance Mode > Exit Maintenance Mode.
Upgrade the NSX Edge
  1. In the vSphere Web Client, select Networking & Security > NSX Edges.
  2. For each NSX Edge instance, click UPGRADE next to the current Edge version number. Alternatively, you can click Actions > Upgrade Version.
  3. If the upgrade fails with the error message "Failed to deploy edge appliance," make sure that the host on which the NSX Edge appliance is deployed is connected and not in maintenance mode.
Upgrade Guest Introspection
  1. On the Installation and Upgrade tab, click Service Deployments. The Installation Status column says Upgrade Available.
  2. Select the Guest Introspection deployment that you want to upgrade, and click Upgrade.
  3. Follow the UI prompts to configure the Guest Introspection upgrade.
  4. After Guest Introspection is upgraded, the installation status is Succeeded and service status is Up. Guest Introspection service virtual machines are visible in the vCenter Server inventory.

Upgrade VMware Cloud Director

Be sure on this one to shut down the VMware Cloud Director appliance and take a snapshot before going any further, then power the appliance back on.

  1. In a web browser, log in to the appliance management user interface of a VMware Cloud Director appliance instance at https://appliance_ip_address:5480 to identify the primary appliance, and make a note of its name. You must upgrade the primary appliance before the standby and application cells, and you must use the primary appliance when backing up the database.
  2. Download the update package to the appliance you are upgrading, starting with the primary appliance. VMware Cloud Director is distributed as an executable file with a name of the form VMware_Cloud_Director_v.v.v.v-nnnnnnnn_update.tar.gz, where v.v.v.v represents the product version and nnnnnnnn the build number. For example, VMware_Cloud_Director_10.1.0.4424-14420378_update.tar.gz.
  3. Create the local-update-package directory in which to extract the update package.
     mkdir /tmp/local-update-package
  4. Extract the update package into the newly created directory.
     tar -zxf VMware_Cloud_Director_v.v.v.v-nnnnnnnn_update.tar.gz \
     -C /tmp/local-update-package
  5. Set the local-update-package directory as the update repository.
     vamicli update --repo file:///tmp/local-update-package
  6. Check for updates to verify that you set up the repository correctly. The upgrade release appears as an Available Update.
     vamicli update --check
  7. Shut down VMware Cloud Director by running the following command:
     /opt/vmware/vcloud-director/bin/cell-management-tool -u <admin username> cell --shutdown
  8. Apply the available upgrade.
     vamicli update --install latest
  9. Repeat Step 2 to Step 8 on the remaining standby and application cells.
  10. From the primary appliance, back up the VMware Cloud Director appliance embedded database.
     /opt/vmware/appliance/bin/create-db-backup
  11. From any appliance, run the VMware Cloud Director database upgrade utility.
     /opt/vmware/vcloud-director/bin/upgrade
  12. Reboot each VMware Cloud Director appliance.
     shutdown -r now

Veeam Update

For all the Veeam servers, download and install Cumulative Patch 2 from this link. https://www.veeam.com/kb3161?ad=in-text-link

Be sure to remove all the snapshots from the appliances after the software updates have been tested and verified. Now we move on to the vCenter upgrade.
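Cleaning up those snapshots can be scripted the same way. Another small PowerCLI sketch, using the placeholder snapshot name from earlier:

  # Sketch: remove the pre-upgrade snapshots once the updated appliances have been verified
  # ('pre-vSphere7-upgrade' is the placeholder snapshot name used above)
  Connect-VIServer -Server vcenter.example.com
  Get-VM | Get-Snapshot -Name 'pre-vSphere7-upgrade' | Remove-Snapshot -Confirm:$false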

vCenter Upgrade from 6.7U3 to 7.0U1

The vCenter upgrade is a two-stage process.

The first stage walks you through the deployment wizard to get the deployment type of the source appliance that you want to upgrade and configure the new appliance settings. During this stage, you deploy the new appliance with temporary network settings. This stage finishes the deployment of the OVA file on the target server with the same deployment type as the source appliance and the appliance settings that you provide.

The second stage walks you through the setup wizard to select the data types to transfer from the old to the new appliance. The new appliance uses the temporary network settings until the data transfer finishes. After the data transfer finishes, the new appliance assumes the network settings of the old appliance. This stage finishes the data transfer, starts the services of the new upgraded appliance, and powers off the old appliance.

From here, log in and test vCenter. Be sure all is working well.
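One quick way to sanity check things is a short PowerCLI session that confirms the new vCenter build and that the hosts are all still connected. Again, just a sketch with a placeholder vCenter name:

  # Sketch: post-upgrade sanity check of vCenter and host connectivity
  Connect-VIServer -Server vcenter.example.com
  # vCenter version and build after the upgrade
  $global:DefaultVIServer | Select-Object Name, Version, Build
  # Hosts should be connected and still on their pre-upgrade ESXi build at this point
  Get-VMHost | Select-Object Name, ConnectionState, Version, Build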

Upgrading ESXi

Before doing the ESXi upgrades I verified that I had the latest firmware on all my servers. Since all my servers are HPE, I downloaded the HPE ESXi 7.0 Update 1 customized ISO.

There are several methods for upgrading all my hosts. I chose to use vSphere Lifecycle Manager. Here is how:

  1. Navigate to the vSphere Lifecycle Manager home view. In the vSphere Client, select Menu > Lifecycle Manager.
  2. On the Imported ISOs tab, click Import ISO.
  3. In the Import ISO dialog box, select an image. Click the Browse button to import an ESXi image from your local system.
  4. Click Import.

Once I had the ISO imported, I created an upgrade baseline and attached the baseline to my ESXi hosts.

  • Click the New Baseline Link

This baseline will contain the ESXi ISO we uploaded earlier and be attached to the ESXi hosts which we want to upgrade.

  • Give the baseline a name and description
  • Next
  • Ensure the ESXi ISO which we uploaded earlier is the only option selected
  • Next
  • Review the details and select Finish

After the baseline is created, attach it to the ESXi hosts.

  • Select a host from the Hosts & Clusters view
  • Go to the Updates tab
  • Click the Attach dropdown then Attach Baseline or Baseline Group

From here, the way I chose to do this was to select each host and put it into maintenance mode. After the host entered maintenance mode, I selected the Updates tab, selected the ESXi 7 upgrade baseline I created under Baselines, and then selected Remediate.

The host will update and reboot.

After the reboot I checked compliance on the host and found that new patches were available. As a final step for this host, I remediated it one more time to apply the latest patches, and the ESXi host rebooted one more time. Once the host was back up, I checked compliance one more time and then took the host out of maintenance mode.

From here I went down the list of hosts and used the same procedure to upgrade the rest to ESXi 7.0 U1.
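For anyone who would rather script that per-host loop, the Update Manager cmdlets in PowerCLI can do much the same thing. This is a hedged sketch rather than the exact workflow I clicked through, and the baseline name is a placeholder:

  # Sketch of the per-host upgrade loop using the PowerCLI Update Manager cmdlets
  # ('ESXi 7.0 U1 Upgrade' is a placeholder for the baseline name you created)
  Connect-VIServer -Server vcenter.example.com
  $baseline = Get-Baseline -Name 'ESXi 7.0 U1 Upgrade'
  foreach ($esx in Get-VMHost) {
      # Evacuate the host and enter maintenance mode
      Set-VMHost -VMHost $esx -State Maintenance | Out-Null
      # Attach the upgrade baseline and remediate the host against it
      Attach-Baseline -Baseline $baseline -Entity $esx
      Update-Entity -Baseline $baseline -Entity $esx -Confirm:$false
      # Re-scan to see whether newer patches are available after the upgrade
      Test-Compliance -Entity $esx
      Get-Compliance -Entity $esx
      # Bring the host back into service
      Set-VMHost -VMHost $esx -State Connected | Out-Null
  }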

VMtools Updates

I cheated a little on this part. I wrote a quick script to get a list of VMs and update VMware Tools to the latest version, with no reboot. Here is a screenshot of the script.
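For anyone who wants to roll their own, a minimal PowerCLI sketch of that approach might look something like this. It is not the exact script from the screenshot, and the powered-on filter and Tools status check are my assumptions:

  # Sketch: upgrade VMware Tools on powered-on VMs without rebooting the guests
  # (Not the exact script from the screenshot; adjust the VM filter for your environment)
  Connect-VIServer -Server vcenter.example.com
  $vms = Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' }
  foreach ($vm in $vms) {
      if ($vm.ExtensionData.Guest.ToolsVersionStatus -eq 'guestToolsNeedUpgrade') {
          # -NoReboot upgrades Tools without the automatic guest reboot
          Update-Tools -VM $vm -NoReboot
      }
  }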

This was a much longer blog post than I expected. I just wanted all the steps captured in case someone else needs to use this plan to upgrade their environment. The upgrade is now complete!

Thanks for reading!