Published on: VMware vSphere 7 and vSphere 7 Update 3
Author: Jackson Chen
vSphere with Tanzu Configuration and Management
VMware vSphere 7 Update 3 - Upgrade Guide
ESXi System Storage FAQ
https://core.vmware.com/resource/esxi-system-storage-faq
ESXi System Storage Changes
https://core.vmware.com/resource/esxi-system-storage-changes
vSphere 7 System Storage Changes
We’ve reviewed and changed the layout of the ESXi system storage partitions on the boot device. This was done to be more flexible and to support other VMware and third-party solutions. Prior to vSphere 7, the ESXi system storage layout had several limitations. The partition sizes were fixed and the partition numbers were static, limiting partition management. This effectively restricted support for large modules, debugging functionality, and possible third-party components.
That is why we changed the ESXi system storage partition layout. We have increased the boot bank sizes, consolidated the system partitions, and made them expandable. This article details the changes introduced with vSphere 7 and how they affect the boot media requirements for running vSphere 7. We've collected some of the most common questions around this topic in the ESXi System Storage FAQ resource.
ESXi 7 System Storage Sizes
Depending on the boot media used, and whether it is a fresh installation or an upgrade, the capacity used for each partition varies. The only constant is the system boot partition. If the boot media is larger than 128 GB, a VMFS datastore is created automatically for storing virtual machine data.
ESXi 7 System Storage Contents
The sub-systems that require access to the ESXi partitions do so using symbolic links. For example, the /bootbank and /altbootbank symbolic links are used for accessing the active and alternate boot banks, and the /var/core symbolic link is used to access the core dumps.
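To make the indirection concrete, here is a small sketch that recreates the symlink pattern in a throwaway directory (the BOOTBANK1/BOOTBANK2 names are stand-ins; on a live host the links resolve to partitions under /vmfs/volumes):

```shell
# Recreate the bootbank symlink indirection in a scratch directory.
# On a real ESXi host, /bootbank and /altbootbank point at the two
# boot-bank partitions under /vmfs/volumes/<uuid>.
demo=$(mktemp -d)
mkdir "$demo/BOOTBANK1" "$demo/BOOTBANK2"
ln -s "$demo/BOOTBANK1" "$demo/bootbank"      # active boot bank
ln -s "$demo/BOOTBANK2" "$demo/altbootbank"   # alternate boot bank
active=$(readlink "$demo/bootbank")
alt=$(readlink "$demo/altbootbank")
echo "bootbank    -> $active"
echo "altbootbank -> $alt"
```

On an actual host, `ls -l /bootbank /altbootbank /var/core` shows the same pattern; the active and alternate links swap roles when an image update is applied.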
ESXi 7.0 Hardware Requirements
ESXi 7.0 requires a boot disk of at least 32 GB of persistent storage such as HDD, SSD, or NVMe. Use USB, SD and non-USB flash media devices only for ESXi boot bank partitions. A boot device must not be shared between ESXi hosts.
Storage Requirements for ESXi 7.0 Installation or Upgrade
For best performance of an ESXi 7.0 installation, use a persistent storage device that is a minimum of 32 GB for boot devices. Upgrading to ESXi 7.0 requires a boot device that is a minimum of 4 GB. When booting from a local disk, SAN or iSCSI LUN, at least a 32 GB disk is required to allow for the creation of system storage volumes, which include a boot partition, boot banks, and a VMFS-L based ESX-OSData volume. The ESX-OSData volume takes on the role of the legacy /scratch partition, locker partition for VMware Tools, and core dump destination.
Other options for best performance of an ESXi 7.0 installation are the following:
1. A local disk of 128 GB or larger for optimal support of ESX-OSData. The disk contains the boot partition, the ESX-OSData volume, and a VMFS datastore.
2. A device that supports the minimum of 128 terabytes written (TBW).
3. A device that delivers at least 100 MB/s of sequential write speed.
4. To provide resiliency in case of device failure, a RAID 1 mirrored device is recommended.
Legacy SD and USB devices are supported with the following limitations:
- SD and USB devices are supported for boot bank partitions. For best performance, also provide a separate persistent local device with a minimum of 32 GB to store the /scratch and VMware Tools partitions of the ESX-OSData volume.
The optimal capacity for persistent local devices is 128 GB. The use of SD and USB devices for storing ESX-OSData partitions is being deprecated.
- Starting with ESXi 7.0 Update 3, if the boot device is a USB or SD card with no local persistent storage, such as HDD, SSD, or a NVMe device, the VMware Tools partition is automatically created on the RAM disk. For more information, see Knowledge Base article 83376.
- If you assign the /scratch partition to a USB or SD card with no local persistent storage, you see warnings to prevent you from creating or configuring partitions other than the boot bank partitions on flash media devices. For best performance, set the /scratch partition on the RAM disk. You can also configure and move the /scratch partition to a SAN or NFS. For more information, see Knowledge Base article 1033696.
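As a sketch of the KB 1033696 procedure, the snippet below assembles (but does not execute) the esxcli call that repoints /scratch at persistent storage; the datastore path is a placeholder, and the change takes effect only after a reboot:

```shell
# Placeholder directory on a persistent datastore; adjust for your host.
scratch_dir="/vmfs/volumes/datastore1/.locker-esxi01"
# Advanced option controlling the configured /scratch location.
option="/ScratchConfig/ConfiguredScratchLocation"
# Command to run on the ESXi host (printed here for review), then reboot:
cmd="esxcli system settings advanced set -o $option -s $scratch_dir"
echo "$cmd"
```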
- To install ESXi on an SD flash storage device, you must use an SD flash device that is approved by the server vendor for the particular server model. You can find a list of validated devices on partnerweb.vmware.com.
- See Knowledge Base article 85685 on updated guidance for SD card or USB-based environments.
- To choose a proper SD or USB boot device, see Knowledge Base article 82515.
ESXi System Storage When Upgrading
https://core.vmware.com/resource/esxi-system-storage-when-upgrading
The use of standalone SD cards or USB devices as boot media is deprecated starting with vSphere 7 Update 3. Deprecation means the system will continue to run, but with warnings as described here. You should have, and in the future must have, a locally attached persistent storage device for storing the ESX-OSData partition.
OPTION 1: Long Term (Supported). System boot partition and ESX-OSData partition on the same high-endurance, locally attached persistent storage device. This should be the preferred configuration for the long term.
OPTION 2: Legacy (Supported). System boot partition on an SD card or USB device, and ESX-OSData partition on a high-endurance, locally attached persistent storage device. This configuration is also supported.
OPTION 3: Deprecated. Standalone SD card or USB device storing the system boot partition, with a RAM disk for some portion of the ESX-OSData partition; no high-endurance, locally attached persistent storage device available for the ESX-OSData partition. This configuration is deprecated starting with vSphere 7 Update 3.
# Low-Quality Device
. SD card or USB drive
. Minimum 8 GB
. Device Endurance: Minimum 1 TBW
# High-Quality Device
. Device Endurance: Minimum 100 TBW
. Locally attached devices such as:
a. NVMe (128 GB minimum)
b. SSD (128 GB minimum)
c. M.2 Industrial Grade (128 GB minimum)
d. HDD (32 GB minimum)
e. Managed FCoE/iSCSI LUN (32 GB minimum)
Partition Layout
To quickly recap the previous blog post, let’s look at how the partition layout changed between vSphere 6.x and vSphere 7. The small and large core-dump, locker, and scratch partitions are consolidated into the new ESX-OSData partition.
Whether you freshly install or upgrade to vSphere 7, the partition layout as shown in the diagram above is applied. This partitioning reflects what happens in the vSphere upgrade process when the ESXi system storage media is HDD or SSD. The (system storage related) upgrade steps are:
- Back up potential partner VIBs (kernel modules) and the contents of the active boot bank, locker, and scratch partitions to memory (RAM).
- Clean up all system partitions; datastore partitions are not destroyed.
- If the upgrade media does not have an existing VMFS partition, the upgrade process creates a new GPT partition layout.
- Create the partitions (boot banks and ESX-OSData).
- Restore the contents from RAM to the appropriate partitions.
Upgrade Scenarios
Follow these steps to move away from deprecated configurations that use a standalone SD card or USB device.
Upgrading ESXi 6.7 with a Standalone SD Card or USB Device to ESXi 7.x with an Additional Disk
If there is no persistent storage available for the ESXi 6.7 host, follow these steps:
- Add a high-endurance, locally attached persistent storage device to the ESXi 6.x host.
- Upgrade the ESXi host to ESXi 7.x.
- Set autoPartition=TRUE. This auto-partitions the first unused boot device for use as the ESX-OSData partition. Refer to VMware KB article 77009.
- The SD card or USB device then stores the system boot partition, and the newly added storage device stores the ESX-OSData partition (OPTION 2 in the diagram above).
Partition the Newly Added Storage Device to Be Used as the ESX-OSData Partition
If the ESXi host has already been upgraded to ESXi 7.x and is running with a standalone SD card or USB device:
- Add a high-endurance, locally attached persistent storage device.
- Boot the ESXi host and set autoPartition=TRUE; this auto-partitions the first unused boot device for use as the ESX-OSData partition. Refer to VMware KB article 77009.
- The SD card or USB device then stores the system boot partition, and the newly added storage device stores the ESX-OSData partition (OPTION 2 in the diagram above).
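The autoPartition flag referenced in both scenarios is, to the best of my understanding, a kernel setting that can be toggled with esxcli (KB 77009 is the authoritative procedure). The sketch below only prints the commands, since they must run on the ESXi host itself and be followed by a reboot:

```shell
# Enable auto-partitioning of the first unused local boot device so the
# ESX-OSData partition is created on it at the next boot (per KB 77009).
set_cmd="esxcli system settings kernel set --setting=autoPartition --value=TRUE"
# Verify the setting before rebooting.
list_cmd="esxcli system settings kernel list -o autoPartition"
printf '%s\n' "$set_cmd" "$list_cmd" "reboot"
```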
Move Away Completely from the Use of SD Card or USB Devices
Add a locally attached persistent storage device and re-install ESXi 7.x on it. Refer to VMware KB article 2042141 if you want to back up and restore the ESXi configuration. This ensures that all the partitions are stored on a high-endurance, locally attached storage device.
How to back up ESXi host configuration (2042141)
https://kb.vmware.com/s/article/2042141
#**** vSphere CLI *****
#*** Backing up ESXi host configuration data
Run this command to back up the ESXi configuration:
vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -s output_file_name
For example:
vicfg-cfgbackup --server=10.0.0.1 --username=root -s ESXi_test1_backup.tgz
# In vSphere CLI for Windows:
1. Navigate to C:\Program Files\VMware\VMware vSphere CLI\bin
2. Run this command to back up the ESXi configuration:
vicfg-cfgbackup.pl --server=ESXi_host_IP_address --username=root -s output_file_name
For example:
vicfg-cfgbackup.pl --server=10.0.0.1 --username=root -s ESXi_test1_backup.tgz
Notes:
1. Use the --password=root_password option to skip the password prompt.
2. A backup text file is saved in the current working directory where you run the vicfg-cfgbackup script.
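For several hosts, the same backup can be scripted in a loop. A minimal sketch (the host addresses and file naming are made up; the loop prints each command for review rather than running it):

```shell
# Hypothetical list of ESXi hosts to back up.
hosts="10.0.0.1 10.0.0.2 10.0.0.3"
stamp=$(date +%Y%m%d)
cmds=""
for h in $hosts; do
  # One backup bundle per host, tagged with today's date.
  cmds="$cmds
vicfg-cfgbackup --server=$h --username=root -s backup-$h-$stamp.tgz"
done
echo "$cmds"
```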
#*** Restoring ESXi host configuration data
Restoring the host configuration restores the state of the ESXi along with any vSphere standard switch networking configuration.
# In vSphere CLI
To restore the configuration data for an ESXi host using the vSphere CLI:
1. Put the host that you want to restore into maintenance mode.
2. Log in to a server where the vCLI is installed.
3. Run the vicfg-cfgbackup script with the -l flag to load the host configuration from the specified backup file:
# vSphere CLI
vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -l backup_file
For example:
vicfg-cfgbackup --server=10.0.0.1 --username=root -l ESXi_test1_backup.tgz
vSphere CLI for Windows:
vicfg-cfgbackup.pl --server=ESXi_host_IP_address --username=root -l backup_file
For example:
vicfg-cfgbackup.pl --server=10.0.0.1 --username=root -l ESXi_test1_backup.tgz
Note: Bypass the confirmation to proceed with the -q option.
To restore an ESXi host to the stock configuration settings, run the command:
vicfg-cfgbackup --server=ESXi_host_IP_address --username=root -r
For example:
vicfg-cfgbackup --server=10.0.0.1 --username=root -r
Note: The ESXi host must be rebooted for the configuration changes to take effect.
#******* In vSphere PowerCLI *******
# Backing up ESXi host configuration data
Get-VMHostFirmware -VMHost ESXi_host_IP_address -BackupConfiguration -DestinationPath output_directory
For example:
Get-VMHostFirmware -VMHost 10.0.0.1 -BackupConfiguration -DestinationPath C:\Downloads
Note: For ESXi 6.7, see Reset the System Configuration.
Note: A backup file is saved in the directory specified with the -DestinationPath option.
# Restoring ESXi host configuration data
1. Put the host into maintenance mode by running the command:
Set-VMHost -VMHost ESXi_host_IP_address -State 'Maintenance'
2. Restore the configuration from the backup bundle by running the command:
Set-VMHostFirmware -VMHost ESXi_host_IP_address -Restore -SourcePath backup_file -HostUser username -HostPassword password
For example:
Set-VMHostFirmware -VMHost 10.0.0.1 -Restore -SourcePath c:\bundleToRestore.tgz -HostUser root -HostPassword exampleRootPassword
Note:
- When restoring configuration data, the build number of the host must match the build number of the host in the backup file, and the UUID of the host (which can be obtained with the command "esxcfg-info -u") must match the UUID of the host in the backup file. Use the -f (force) option to override a UUID mismatch.
- However, starting with vSphere 7.0 Update 2, the configuration can be encrypted using a TPM. In that case, the -f option does not work if the host has changed; the same TPM that was used on the host during backup is required to restore.
- In other words, from vSphere 7.0 Update 2 onward, the override does not work if the host has TPM enabled.
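The UUID precondition above can be sketched as a pre-flight check. The UUID values below are invented for illustration; on a real host the current UUID would come from `esxcfg-info -u`, and the command is only printed, not executed:

```shell
# UUID recorded when the backup was taken (invented value for illustration).
backup_uuid="4c4c4544-0042-3010-8058-b8c04f4e3032"
# UUID of the host being restored; on ESXi this comes from `esxcfg-info -u`.
host_uuid="4c4c4544-0042-3010-8058-b8c04f4e3033"
force_flag=""
if [ "$backup_uuid" != "$host_uuid" ]; then
  # UUID mismatch: the restore needs -f, unless the configuration is
  # TPM-encrypted (7.0 U2+), in which case -f does not help and the
  # original TPM is required.
  force_flag="-f "
fi
echo "vicfg-cfgbackup --server=10.0.0.1 --username=root ${force_flag}-l ESXi_test1_backup.tgz"
```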