vSphere Upgrade

Author: Jackson Chen

vSAN hosts integrated with NSX-T

When upgrading NSX-T host transport nodes, and the ESXi hosts are also vSAN nodes, the upgrade process is more involved.

Preparation

  1. Document each ESXi host network configuration (the example commands after this list can help gather these details)
# ESXi host network configuration
1. vmk0, vmk1 and vmk2, vmk(x) configurations
    a. IP address
    b. Network Mask
    c. Default gateway
    d. MTU size
    e. DNS Servers
    f. VLAN ID
    g. NTP servers (Time synchronization)
    h. iLO | iDRAC IP address and access credentials
    i. DNS entries and reverse DNS entries
    j. syslog.global.logHost    # ESXi advanced settings

2. vSAN disk groups configuration
3. NSX-T configuration
    a. vmk0, vmk1 and vmk2, vmk(s) configurations
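The following ESXi shell commands can help capture most of these settings (a sketch; save the output off-host, for example with WinSCP or by redirecting it to a file):

# Capture host network and service configuration
esxcli network ip interface list                # vmk(x) adapters, MTU, attached port groups
esxcli network ip interface ipv4 get            # IP address and network mask per vmk(x)
esxcli network ip route ipv4 list               # default gateway
esxcli network ip dns server list               # DNS servers
esxcli network vswitch standard list            # standard switches and their uplinks
esxcli network vswitch standard portgroup list  # port groups and VLAN IDs
esxcli system syslog config get                 # syslog.global.logHost value
cat /etc/ntp.conf                               # configured NTP servers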
  2. vSAN configuration
# Verify vSAN configurations
1. Verify and configure vSAN automatic rebalance
    vSAN -> Configuration -> Services -> Advanced Options
        Automatic Rebalance - Enabled      # Enable automatic rebalance
2. VM storage polices
3. ESXi host disk groups configuration
    a. Cache disk
    b. Capacity disk
  3. Verify and note down the physical NICs in use (see the commands below)
Note down the vmnic(x) that are in use, and the corresponding uplink(x)
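To confirm which physical NICs are in use and their link state, the following commands can be used (uplink-to-vmnic mappings for NSX are also visible in NSX Manager):

# Identify physical NICs and their current assignment
esxcli network nic list                 # vmnic(x), link status, speed and MAC address
esxcfg-vswitch -l                       # which vmnic(x) back each vSwitch and port group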

Important - Only upgrade ONE ESXi host at a time

Steps to decommission or replace an ESXi host

  1. Put the ESXi host in maintenance mode (a CLI alternative is sketched after this step)
Select Full data migration (evacuate all data)
    # This ensures all data is evacuated from the ESXi host
      before we decommission or replace the ESXi host
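If maintenance mode has to be entered from the ESXi CLI instead of the vSphere Client, the vSAN data migration mode can be passed explicitly. This is a sketch based on the esxcli maintenance mode options; confirm the --vsanmode flag is available on your build before relying on it.

# Enter maintenance mode with full data evacuation (CLI alternative)
esxcli system maintenanceMode set --enable true --vsanmode=evacuateAllData
esxcli system maintenanceMode get       # should report Enabled once the evacuation completes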
  2. Check and set vSAN automatic rebalance of virtual objects/disks on the vSAN cluster to Enabled

  3. Check vSAN data virtual object compliance

a. Navigate to the vSAN cluster object in vCenter
b. In the right pane, click "Configuration" -> Virtual Objects
    Check and ensure that all objects are in compliance
  4. Remove one disk group at a time (a CLI sketch follows this step)
a. Select the vSAN cluster object -> xxxx -> Disk Group
b. Select the ESXi host (in maintenance mode), then the disk group, and click ... (to select an option)
c. Select Remove

Note: We can only remove ONE disk group at a time.
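A hedged CLI alternative for listing and removing a disk group; removing the cache (SSD) device of a disk group removes the whole group, so double-check the device name first. The naa.xxx device name below is a placeholder.

# List the vSAN disk groups and claimed disks on this host
esxcli vsan storage list

# Remove the disk group whose cache device is the named SSD (placeholder device name)
esxcli vsan storage remove --ssd=naa.xxxxxxxxxxxxxxxx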
  5. Migrate the VMkernel adapters vmk0, vmk1, vmk2 and vmk(x) to the standard vSwitch (verification commands follow these steps)
# Process
a. In vCenter, navigate to the ESXi host (to be decommissioned/replaced)
b. On right pane, Select Configure -> Networking -> Virtual Switches
c. Navigate to "Stanard Switch: vSwitch0, click "..."
d. Select "Migrate VMkernel Adapter" from the drop down list
    i. Migrate VMkernel Adapter
    ii. View Settings
    iii. Remove
e. On the Migrate VMkernel Network Adapter to vSwitch0 popup window
    i. Select VMkernel adapter, then select vmk2, click Next
    ii. Enter the new Network Label
        # Ensure you enter the correct network label, such as vSAN (or vMotion, or Management)
    iii. Enter the required VLAN ID

Note: Repeat steps d & e for all VMkernel adapters
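After the migration, a quick check from the ESXi shell confirms that each vmk(x) now sits on vSwitch0 (optional verification, not a required step):

# Verify the VMkernel adapters are now on the standard switch
esxcli network ip interface list        # the port group for each vmk(x) should be on vSwitch0
esxcfg-vswitch -l                       # confirm vSwitch0 port groups and uplinks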
  6. Make note of the IP configuration of all the vmk(x) adapters, and of the global and advanced settings, such as syslog, NTP, DNS, default gateway and VLAN ID

  7. Remove NSX configuration: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/installation/GUID-9C0927E8-F96B-494F-9799-C45DC1ABD9E4.html

Prerequisites

  1. If there are VMkernel adapters on the host that must be migrated to another switch during uninstallation, ensure that the network uninstall mapping is configured. See Verify Host Network Mappings for Uninstall.
  2. In vCenter Server, put the hosts in maintenance mode and power off VMs running on the hosts if you want to migrate VMkernel adapters during uninstallation.
# Procedure
1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2. Select System > Fabric > Nodes > Host Transport Nodes.
3. From the Managed by drop-down menu, select the vCenter Server.
4. Select the cluster you want to uninstall, and select Remove NSX.
  Note: 
  If NSX Intelligence is also deployed on the host, 
  uninstallation of NSX-T Data Center will fail because all transport nodes become part of a default network security group. 
  To successfully uninstall NSX-T Data Center, 
    you also need to select the Force Delete option before proceeding with uninstallation.
5. Verify that the NSX-T Data Center software is removed from the host.
   a. Log into the host's command-line interface as root.
   b. Run this command to check for NSX-T Data Center VIBs
        esxcli software vib list | grep -E 'nsx|vsipfwlib'
6. (Host on an N-VDS switch) If the host goes into a failed state and NSX-T Data Center VIBs cannot be removed, 
   then run the command to remove NSX-T Data Center from the host.
        del nsx 
   a. Before running the del nsx command, put the ESXi host in maintenance mode. 
      The vCenter Server does not allow the host to be put in maintenance mode, 
        unless all running VMs on the host are in powered off state or moved to a different host.
   b. Log in to the ESXi CLI terminal, run 
        nsxcli -c del nsx
   c. Read the warning message. Enter Yes if you want to go ahead with NSX-T Data Center uninstallation.
   d. Verify that the existing VMkernel and physical NICs on the N-VDS switch are migrated to a new vSwitch. 
      If there is more than one N-VDS switch on the host, each N-VDS switch is migrated to a separate vSwitch. 
   e. In NSX Manager, if a transport node profile (TNP) is attached to the cluster, detach the profile.
   f. Select each host and click Remove NSX.
   g. In the popup window, select Force Delete and begin uninstallation.
   h. On the ESXi host, verify that system message displayed is 
        Terminated
      Note: This message indicates that NSX-T Data Center is completely removed from the host.
7. If the host goes into a failed state and NSX-T Data Center VIBs cannot be removed, 
    then run the del nsx command to remove NSX from the host.
   a. Before running the del nsx command, put the ESXi host in maintenance mode. 
      The vCenter Server does not allow the host to be put in maintenance mode,
      unless all running VMs on the host are in powered off state or moved to a different host.
   b. If there are VMkernel adapters on NSX port groups on the VDS switch, 
      you must manually migrate or remove vmks from NSX port group to DV port groups on the VDS switch. 
      If there are any vmks available on the NSX port groups, del nsx command execution fails.
   c. Log in to the ESXi CLI terminal, run 
        nsxcli -c del nsx
   d. Read the warning message. Enter Yes if you want to go ahead with NSX-T Data Center uninstallation.
   e. In the NSX Manager UI, if a Transport Node Profile is attached to the cluster, detach the profile.
   f. Select each host and click Remove NSX.
   g. In the popup window, select Force Delete and begin uninstallation.
   h. On the ESXi host, verify that system message displayed is Terminated.
  8. Move the ESXi host out of the vSAN cluster to under the datacenter
  9. Shut down the ESXi host via iLO/iDRAC, the vSphere web client or vCenter
    esxcli system shutdown poweroff --reason="ESXi host replacement"
  10. Remove the ESXi host from vCenter inventory

Commission new ESXi host to NSX-T integrated vSAN cluster

  1. Build the new ESXi host and ensure the host is in Maintenance mode (a vim-cmd sketch for step f follows this list)
a. Install vSphere using a vanilla ISO
b. Add the required storage and RAID card drivers
c. Select the valid vmnic(x) for the management vmk0
d. Configure network settings, DNS and domain name
e. Disable IPv6
f. Enable ssh and shell     # Disable them after successful ESXi host commission
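A vim-cmd sketch for step f, to disable SSH and the ESXi shell again once the host has been commissioned (assumes the vim-cmd hostsvc helpers are present on the build; the same can be done from the DCUI Troubleshooting Options or the Host Client Services view):

# Disable SSH and the ESXi shell after successful commissioning
vim-cmd hostsvc/stop_ssh
vim-cmd hostsvc/disable_ssh
vim-cmd hostsvc/stop_esx_shell
vim-cmd hostsvc/disable_esx_shell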

Important:
Ensure the new ESXi host is in Maintenance mode.

# Process to place the ESXi host in Maintenance mode
1. Access the ESXi host shell by pressing Alt + F1
2. vicfg-hostops    # vCLI command to list Connection Options for vCLI Host Management Commands
3. Enter maintenance mode
    esxcli system maintenanceMode get   # verify system in maintenance mode
    esxcli <conn_options> system maintenanceMode set --enable true
        
    Note:
        a. Normally, after all virtual machines on the host have been suspended or migrated, the host enters maintenance mode
        b. To exit maintenance mode
            esxcli <conn_options> system maintenanceMode set --enable false
  2. Add the ESXi host to vCenter under the required datacenter
  3. Important step - verify the vmk0 MAC address https://kb.vmware.com/s/article/1008127
# If vmk0 and the physical NIC vmnic(x) have the same MAC address,
    remove vmk0 and re-add it   # vmk0 should have an automatic MAC starting with 00:50:56

# Verify the interface and associated MAC address
esxcli network ip interface list
esxcli network ip interface ipv4 get    # Verify IP address and subnet mask
esxcli network ip neighbor list     # verify ARP table
esxcli network ip route ipv4 list   # verify routing details

# Remove vmk0
esxcli network ip interface remove --interface-name=vmk0   

# Verify after vmk0 removal
esxcfg-vswitch -l   # verify the vswitch and its associated port groups
esxcli network vswitch standard list    # Verify standard switch
esxcli network vswitch standard portgroup list  # List port groups associated with the standard switch

# If a standard switch does not exist, create one along with the required port group
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=<portgroup> --vswitch-name=vSwitch0

# Re-create vmk0 and attach it to a portgroup on a Standard vSwitch
esxcli network ip interface add --interface-name=vmkX --portgroup-name=<portgroup>
esxcli network ip interface ipv4 set --interface-name=vmkX --ipv4=ipaddress --netmask=netmask --type=static

esxcli network ip interface add --interface-name=vmk0 --dvs-name=<dvs-name> --dvport-id=<port-id>   # If vmk0 resides on a distributed switch

Note: If the vmnics associated with the management network are VLAN trunks, 
        you may need to specify a VLAN ID for the management portgroup.
    esxcli network vswitch standard portgroup set -p portgroup --vlan-id VLAN

# Configure default gateway for vmk0
https://kb.vmware.com/s/article/2001426

esxcfg-route <vmk0-default-gateway>
esxcli network ip route ipv4 list    # verify the default gateway
    esxcfg-route -l     # check default gateway   
    esxcli network ip route ipv4 list   # Check default gateway  
esxcli network ip route ipv4/ipv6 add --gateway IPv4_address_of_router --network IPv4_address
    esxcli network ip route ipv4 add --gateway 192.168.0.1 --network 192.168.100.0/24

Note:
    # To remove a static route, run the command:
    esxcli network ip route ipv4 remove -n network_ip/mask -g gateway_ip
        esxcli network ip route ipv4 remove -n 192.168.100.0/24 -g 192.168.0.1

Alternatively
# Remove vmk0 by its port group name
esxcfg-vmknic -d -p "Management Network"

# Add vmk0 back
esxcfg-vmknic -a -i <vmk0-ip> -n <vmk0-netmask> -p "Management Network"
    Note: esxcfg-vmknic -a -i IP_address -n netmask -p portgroup
esxcfg-vswitch -l   # Verify "Management Network" port group has been created

# Tag vmk0 for Management
esxcli network ip interface tag add -i vmk0 -t Management
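To confirm the tag was applied, this optional check can be run (assuming the tag get subcommand is available on your build):

# Verify vmk0 tags
esxcli network ip interface tag get -i vmk0     # should list Management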
  4. Add the required port groups to the standard switch
esxcli network vswitch standard portgroup add --portgroup-name=<portgroup-name> --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set -p Backend --vlan-id <vlan-id>    # Example: set the VLAN ID on the "Backend" port group

Note:
    Add all the required port groups

# verify standard vswitch and its port groups
esxcli network vswitch standard list    
  5. Format the disks in the new ESXi host. Important: Do NOT format the ESXi boot disk
# Process
a. Access the new ESXi host web client
b. Navigate to Storage -> Devices
c. Select the required disk
d. On the top menu item
    i. New datastore
    ii. Increase capacity
    iii. Rescan
    iv. Refresh
    v. Actions
        - Edit partitions
        - Clear partition table  <------- Select to clear the partition table
        - Rescan

Note: Clear the partition table one disk at a time
  6. Move the ESXi host into the vSAN cluster
# Wait a few minutes to ensure the vSphere HA agent is enabled for the host
1. Select the ESXi host from vCenter
2. Navigate to Configure -> System -> Services
3. Select "vSphere High Availability Agent"
    Ensure
    a. Running
    b. Start and stop with host     # Edit startup policy
  7. Update NTP servers
1. Select the ESXi host from vCenter
2. Navigate to Configure -> System
3. Select Time Configuration
4. Enter the required NTP servers
    Note: IP addresses separated by ","
  8. Update syslog.global.logHost (a CLI alternative follows these steps)
1. Select the ESXi host from vCenter
2. Navigate to Configure -> System
3. Select Advanced System Settings
4. Click Edit, and search syslog.global.logHost
5. Update the value
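The same setting can also be applied from the ESXi shell; a sketch, with the syslog target as a placeholder:

# Set the remote syslog host, reload syslog, and open the outbound firewall rule
esxcli system syslog config set --loghost='udp://<syslog-server>:514'
esxcli system syslog reload
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true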
  9. Install the NSX-T components
# Procedure
1. From a browser, log in with admin privileges to an NSX Manager at https://<nsx-manager-ip-address>.
2. Select System > Fabric > Nodes > Host Transport Nodes.
3. From the Managed by drop-down menu, select the vCenter Server.
4. Select the ESXi host you want to install, and select Configure NSX.
5. In the popup window, configure the required values
    a. NSX switch name      # Enter the required value
        Such as "N-VDS-DMZ-PDC-Servers"     # use "-", not "_" or spaces
    b. Transport Zone
        Select the required transport zone
    c. NIOC Profile
        Select the default "nsx-default-nioc-hostswitch-profile"
    d. Uplink Profile
        Select the required uplink profile, such as nsx-DMZ-uplink-hostswitch-profile
    e. LLDP Profile
        Select the default "LLDP [Send Packet Disabled]"
    f. Under Teaming Policy Uplink Mappings section
        i. Enter "vm" and search the required vmnic, such as vmnic3
        ii. Enter "up" and search for the required uplink, such as "uplink-1"
        Note:
            Repeat step f) to add other vmnics and uplinks

Note:
a. We can migrate the VMkernel adapters to NSX as part of the transport node NSX configuration if required, or
b. Migrate the ESXi VMkernel and physical adapters after successfully configuring the NSX transport node

If the NSX-T transport node shows Unknown for Node Status, run the following command

# Restart netopa service
    /etc/init.d/netopa restart

Note: This issue is resolved in NSX-T v3.1.2
  10. Migrate ESXi VMkernel and physical adapters
# Migrate Management, vMotion and vSAN VMkernel ports to NSX
1. From the NSX-T management console
2. Select System > Fabric > Nodes > Host Transport Nodes.
3. From the Managed by drop-down menu, select the vCenter Server.
4. Select the ESXi host whose VMkernel adapters and physical adapters vmnic(x) you want to migrate
5. In the popup window
    a. Under Select VMkernel Adapters for Migration
        i. Click Add
        ii. Type "vm" in VMkernel Adapter and select "vmk0"
        iii. In Logical Switch, type and select the required NSX logical switch (N-VDS)
    b. Under Edit Physical Adapters in the N-VDS
        i. Click Add
        ii. In Physical Adapter, type "vm" and select the required vmnic,
            such as vmnic3
        iii. In Uplink Name, type "up" and select the required uplink
            such as uplink-1
    Note:
        Repeat steps a) and b) for additional VMkernel ports and physical adapters
6. Click SAVE

Note:
Repeat steps 1) to 6) to migrate the vMotion and vSAN VMkernel ports to the N-VDS
  11. Create or add disk groups for the newly added ESXi host to vSAN
1. From vCenter, select the required vSAN cluster
    The vSAN cluster to which the new ESXi host has been added
2. Click Configure, then navigate to vSAN -> Disk Management
3. Select ESXi host, then select
    a. Claim Unused Disks   <----- Select
    b. Create Disk Group
4. Select cache disk and capacity disks to create the disk group
5. Repeat steps 1) to 4) to create the other disk groups
  12. Take the ESXi host out of Maintenance mode
  13. Check vSAN health and disk capacity after commissioning the new ESXi host

How to build a vanilla ESXi USB boot image - HPE DL390

  1. Connect a Windows 10 desktop and the HPE DL390 via a direct network cable or a switch
  2. Configure the DL390 iLO TCP/IP settings and login credentials
  3. Ensure the Windows 10 desktop can access the DL390 iLO
    https://<iLO-IP>
  4. Connect the DL390 front USB and internal USB with a 2 meter USB cable
  5. Download the VMware ESXi ISO file, and use Rufus to create the bootable ESXi boot USB
  6. Plug the bootable ESXi boot USB into the front USB port cable of the DL390
  7. Plug the new USB into the USB cable which connects to the internal USB port of the DL390
  8. Boot the DL390 from the front bootable ESXi boot USB, and choose to install vSphere to the internal USB
  9. After successfully installing vSphere to the internal USB, remove the front bootable USB
  10. Reboot the DL390 with the internal USB attached
  11. Install the required vSphere patch(es)
# Example
    ESXi670-202201001.zip Build 19195723
  12. Install the required VIBs (an install sketch follows)
# The required VIBs for the required ESXi physical server
a. Smartpqi storage driver
    Example:    Microchip-smartpqi_6.7.4150.119-10EM.670.0.0.8169922_offline_bundle-18384766.zip
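A sketch of installing the driver offline bundle from the ESXi shell; the datastore path is a placeholder:

# Install the smartpqi driver offline bundle and verify it
esxcli software vib install --depot=/vmfs/volumes/<datastore>/Microchip-smartpqi_6.7.4150.119-10EM.670.0.0.8169922_offline_bundle-18384766.zip
esxcli software vib list | grep -i smartpqi     # verify the driver VIB is installed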
  13. Enable the ESXi shell and SSH, as these will be used for initial vSphere installation and troubleshooting
  14. Place the vSphere (ESXi) host into maintenance mode
  15. Configure the management network
# Management vmk0 configuration (Press F2)
a. Select the required/connected vmnic(x)
b. IP address
c. Subnet Mask
d. Default Gateway
e. VLAN ID
f. DNS servers
g. Disable IPv6
  16. Reboot and verify the ESXi host
  17. Shut down the ESXi host, and take out the internal ESXi image USB. Mark the vSphere USB
  18. Place the ESXi image USB in the required DL390 server
  19. Verify the upgraded/new DL390

Patch ESXi host

# Prerequisites

  1. Place the ESXi host in maintenance mode
  2. Instant-Clone hosts
a. For Horizon 7 (7.12 or earlier), you must manually delete the instant-clone parentVMs

https://kb.vmware.com/s/article/2144808

#**** Manually delete the instant clone parentVM
# Method 1. From vSphere Client
1. Select the host that you want to put in maintenance mode. 
    If you are using the vSphere web client, make sure that the plug-in to edit Annotations is installed.
2. Look up Annotations in the host's Summary tab and
    set InstantClone.Maintenance to 1
3. Wait up to 3 minutes and the parent VMs on this host will be deleted. 
    Also, the value for InstantClone.Maintenance will change to 2
4. Put the host in maintenance mode. 
    Note:
        a. This host will no longer be used for provisioning.
        b. The parentVM cp-parentVM-..... will be deleted automatically
5. Perform maintenance
6. Take the host out of maintenance mode
7. Clear the InstantClone.Maintenance annotation value
8. As new provisioning happens, parent VMs and then instant clones will be created on this host.

# Method 2. From Connection Server
From the Connection Server, run IcMaint.cmd to delete the parent VMs and put the host in maintenance mode.
    IcMaint.cmd 
        -vc host name or IP address of vCenter Server 
        -uid vCenter Server user ID
        -hostname  ESXi host name
        -maintenance ON|OFF

b. With Horizon 7.13 and later, you can globally disable all of the instant-clone parentVMs on all ESXi hosts by using the instant-clone utilities.

https://docs.vmware.com/en/VMware-Horizon-7/7.13/virtual-desktops/GUID-6025D684-2E05-4857-9C24-18F16DDC38FD.html#GUID-6025D684-2E05-4857-9C24-18F16DDC38FD

If you are using VMware Update Manager (VUM), you must use the instant-clone maintenance utilities to delete the master image before you can patch the ESXi hosts, regardless of the Horizon 7 version.

# The utilities are located on Connection Server in C:\Program Files\VMware\VMware View\Server\tools\bin
    i. IcMaint.cmd
    ii. IcUnprotect.cmd
    iii. IcCleanup.cmd
IcMaint.cmd
# Command
    IcMaint.cmd -vc hostname_or_IP_address -uid user_ID -hostName ESXi_hostname -maintenance ON|OFF

    Parameters:
        -vc host name or IP address of vCenter Server
        -uid vCenter Server user ID
        -hostname ESXi host name
        -maintenance ON|OFF

Note: 
    The command sets the value of InstantClone.Maintenance to
        1       # after the command is run
        2       # after the golden image VMs (parentVMs) are deleted

When run with -maintenance OFF, it clears the value of InstantClone.Maintenance

After the command is run on the host, the InstantClone.Maintenance annotation value is set to 1 and the golden image VMs are deleted. After the golden image VMs are deleted, the InstantClone.Maintenance annotation value is set to 2 and no more golden image VMs are created on the host. When you run this command again with -maintenance OFF, the InstantClone.Maintenance annotation value is cleared for the host to become available for hosting golden image VMs.

# Patch local ESXi host
1. Access the ESXi shell console (Alt + F1)
2. Upload the ESXi patch file to ESXi host using WinSCP
    Example
        /vmfs/volumes/Patches/esxi670-202201001.zip
3. Patch ESXi host
    esxcli software vib update --depot=/vmfs/volumes/<path|folder>/esxi670-202201001.zip

# Patch remote ESXi host
    esxcli  --server=ESXi_IP_address
            --username=root
            --password=<required password>
            software vib update
            --depot=/vmfs/volumes/Patches/esxi670-202201001.zip
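After the update completes and the host has been rebooted, the result can be checked with the following (a quick verification sketch):

# Verify the patch level after reboot
vmware -vl                                      # ESXi version and build number
esxcli software profile get                     # current image profile
esxcli software vib list | grep -i esx-base     # core VIB build should match the patch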

How to deploy a vSAN Witness Appliance

  1. Deploying a vSAN witness appliance - VMware vSphere 6.7 https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vsan-planning.doc/GUID-05C1737A-5FBA-4AEE-BDB8-3BF5DE569E0A.html

  2. Deploying a vSAN witness appliance - VMware vSphere 7.0 https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-05C1737A-5FBA-4AEE-BDB8-3BF5DE569E0A.html

When you deploy the vSAN witness appliance, you must configure the expected number of VMs supported by the vSAN stretched cluster. Choose one of the following options:

a. Tiny (10 VMs or fewer)
b. Medium (up to 500 VMs)
c. Large (more than 500 VMs)

You also must select a datastore for the vSAN witness appliance. The witness appliance must use a different datastore than the vSAN stretched cluster datastore.

  1. Download the appliance from the VMware website.
  2. Deploy the appliance to a vSAN host or cluster. For more information, see Deploying OVF Templates in the vSphere Virtual Machine Administration documentation.
  3. Configure the vSAN network on the witness appliance.
  4. Configure the management network on the witness appliance.
  5. Add the appliance to vCenter Server as a witness ESXi host. Make sure to configure the vSAN VMkernel interface on the host.
a. Set Up the vSAN Network on the Witness Appliance
    The vSAN witness appliance includes two preconfigured network adapters. 
    You must change the configuration of the second adapter so that the appliance can connect to the vSAN network.
    # Procedure
        1. Navigate to the virtual appliance that contains the witness host.
        2. Right-click the appliance and select Edit Settings.
        3. On the Virtual Hardware tab, expand the second Network adapter.
        4. From the drop-down menu, select the vSAN port group and click OK.

b. Configure Management Network
    Configure the witness appliance, so that it is reachable on the network.
    # Procedure
        1. Power on your witness appliance and open its console.
            Because your appliance is an ESXi host, you see the Direct Console User Interface (DCUI).
        2. Press F2 and navigate to the Network Adapters page.
        3. On the Network Adapters page, verify that at least one vmnic is selected for transport.
        4. Configure the IPv4 parameters for the management network.
            a. Navigate to the IPv4 Configuration section and change the default DHCP setting to static.
            b. Enter the following settings:
                - IP address
                - Subnet mask
                - Default gateway
        5. Configure DNS parameters.
                - Primary DNS server
                - Alternate DNS server
                - Hostname
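Once the witness appliance has been added to vCenter as a host, its vSAN VMkernel tagging can be checked or set from the witness ESXi shell. A sketch, assuming vmk1 is the witness vSAN interface:

# Verify which VMkernel interface carries vSAN traffic on the witness host
esxcli vsan network list

# If required, tag vmk1 (assumed vSAN interface) for vSAN traffic
esxcli vsan network ipv4 add -i vmk1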

How to upgrade ESXi 6.5 to ESXi 6.7

Upgrade path and Interoperability of VMware Products (70785)

https://kb.vmware.com/s/article/70785

https://interopmatrix.vmware.com/Upgrade

https://kb.vmware.com/s/article/67077#esxi_6.5_to_6.7_upgrade_matrix

Check the VMware Product Interoperability Matrix, choose Upgrade Path, and select VMware vSphere Hypervisor (ESXi)

# How to upgrade VMware ESXi 6.5 to ESXi 6.7 using an Offline Bundle
The compatibility matrix shows ESXi 6.5 U3 can be upgraded to ESXi 6.7 U3

1. Download the ESXi 6.7 offline bundle zip file
2. Upload the VMware-ESXi-6.7.0-8169922-depot.zip file to a datastore that is accessible by all required hosts
3. Place the host you plan on upgrading into Maintenance Mode.
4. SSH into your host using PuTTY (or other client/terminal).
5. Type the command
    esxcli software vib upgrade -d /vmfs/volumes/<datastore-name|GUID>/VMware-ESXi-6.7.0-<build number>-depot.zip
6. Verify the upgrade
    tail -f <ESXi upgrade log file>
7. After the successful upgrade, reboot the host
    reboot
8. Verify the host after upgrade, then take the host out of maintenance mode
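Step 8 can also be done from the ESXi shell; a short verification sketch:

# Confirm the host is now on ESXi 6.7, then exit maintenance mode
vmware -vl                                      # should report ESXi 6.7.0 and the new build number
esxcli system maintenanceMode set --enable false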


# Example
Version             Release Name        Release Date    Build Number    Installer Build Number
-------------------------------------------------------------------------------------------------
ESXi 6.5 Update 3   ESXi 6.5 Update 3   2019/07/02      13932383

VMware ESXi Patch Tracker

https://esxi-patches.v-front.de/ESXi-6.5.0.html

Upgrading or Migrating from vSphere 5.x to 6.x (6.5, 6.7), 7.x - best practices & approach

https://sivasankar.org/2018/1288/upgrading-migrating-from-vsphere-5-x-to-6-x-best-practices/

  1. VMware Compatibility Guide

https://www.vmware.com/resources/compatibility/search.php

  2. vSphere Back-in-time release upgrade restriction (67077)

https://kb.vmware.com/s/article/67077#vCenterServer6.5to7.0