VMware vSAN

Authors
  • Jackson Chen

VMware vSAN Administration Guide

VMware vSAN 7.0 Administration Guide

VMware vSAN 7.02 Administration Guide

vSAN Plan and Deploy Guide

VMware vSAN 7 Update 2 Plan and Deploy Guide

vSAN Monitoring and Troubleshooting

VMware vSAN Design Guide https://core.vmware.com/resource/vmware-vsan-design-guide#section1

vSAN Network Design

PowerCLI for VMware vSAN

PowerCLI Cookbook for VMware vSAN

2-Node vSAN Cluster Node Guide

vSAN 2-Node Cluster Guide

vSAN Licensing Guide and Features Support

VMware vSAN Licensing Guide and Features

vSAN Data-at-Rest Encryption

vSAN Data at Rest Encryption

Kubernetes on VMware vSAN

Kubernetes on vSAN Brief

vSAN Reference Sites

https://core.vmware.com/vsan

vSAN Operations and Management

https://core.vmware.com/vsan-operations-and-management

VMware Compatibility Guide

https://www.vmware.com/resources/compatibility/search.php

vSAN Compatibility Guide

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan

Backup Products Compatible with vSAN

https://kb.vmware.com/s/article/56975

Troubleshooting vSAN Performance

https://core.vmware.com/resource/troubleshooting-vsan-performance

vSAN Ready Nodes Configurator

https://vsanreadynode.vmware.com/RN/RN

VMware Virtual SAN: Witness Component Deployment Logic

https://blogs.vmware.com/vsphere/2014/04/vmware-virtual-san-witness-component-deployment-logic.html

vRealize Operations and Log Insight in vSAN Environments

https://core.vmware.com/resource/vrealize-operations-and-log-insight-vsan-environments

vSAN Design Guide

https://core.vmware.com/resource/vmware-vsan-design-guide

vSAN Operations Guide

https://core.vmware.com/resource/vsan-operations-guide

vSAN Top 10 Operational Tips

https://core.vmware.com/resource/vsan-top-10-operational-tips

vSAN Hardware Quick Reference Guide

https://www.vmware.com/resources/compatibility/vsan_profile.html

VMware vCloud® Architecture Toolkit™ for Service Providers (vCAT-SP) Documentation Center.

https://download3.vmware.com/vcat/vmw-vcloud-architecture-toolkit-spv1-webworks/index.html#page/Welcome%2FDocumentMap.html

What happens when vCenter goes offline - vSAN

https://blogs.vmware.com/virtualblocks/2018/04/05/vsan-when-vcenter-server-is-offline/

VMware Product Interoperability Matrices

https://sim.esp.vmware.com/#/Interoperability

vSAN Performance Graphs in vSphere Web Client

https://kb.vmware.com/s/article/2144493#vSANDiskGroupPerf

vSAN Data Persistence Platform

https://blogs.vmware.com/virtualblocks/2021/02/04/introduction-vsan-data-persistence-platform/

Cloud Native Storage

https://blogs.vmware.com/virtualblocks/2019/08/14/introducing-cloud-native-storage-for-vsphere/

vSAN

In hyperconverged infrastructure (HCI), vSAN is used as part of a software-defined storage solution.

VMware vSAN is a distributed layer of software that runs natively as a part of the ESXi hypervisor. vSAN aggregates local or direct-attached capacity devices of a host cluster and creates a single storage pool shared across all hosts in the vSAN cluster.

While supporting VMware features that require shared storage, such as HA, vMotion, and DRS, vSAN eliminates the need for external shared storage and simplifies storage configuration and virtual machine provisioning activities.

vSAN is a vSphere cluster feature that you can enable on an existing cluster or when creating a cluster, similar to how you enable the VMware vSphere High Availability (vSphere HA) and VMware vSphere Distributed Resource Scheduler (vSphere DRS) features.

vSAN is integrated directly into the hypervisor.

# vSAN node minimum requirements
1. Check ESXi host/server listed in VMware Compatibility Guide
2. Node requirements
    a. 1 SSD for caching
    b. 1 SSD for capacity (or HDD for hybrid mode)
    c. 10 Gbps NIC - all-flash
        1 Gbps NIC - hybrid mode
    d. SAS/SATA controllers
        RAID controllers must be in passthrough or RAID 0 mode
    e. 8 - 32 GB RAM
        Depending on the number of drives and disk groups, and the workload

Verifying hardware compatibility

The VMware Compatibility Guide has a dedicated section for vSAN. See the VMware Compatibility Guide before upgrading to ensure that new drivers and firmware have been certified and tested for use with VMware vSAN.

Ensure that the hardware devices are compatible with the versions of vSphere and vSAN. Verify the compatibility of the following hardware components:

  1. Driver and firmware version of the storage controller
  2. High-write endurance value and disk firmware details of the cache tier devices
  3. Firmware details of the capacity tier devices

Multiple Storage Controllers

You might need to install additional storage controllers in certain scenarios:

  1. You want to reduce the impact of a potential controller failure by placing disk groups on separate controllers.
  2. A recent scale-up with additional disks requires additional controllers.
  3. The queue depth of the current single controller is inadequate to meet the workload and physical disk configuration.
  4. The business wants better performance, which typically requires multiple controllers.

Configuring BIOS for High Performance

The frequency of the physical CPU is controlled by either the BIOS or by the ESXi Hypervisor (OS-controlled). For high performance, consider the following configurations:

  1. Select the OS-controlled CPU power management mode.
  2. Enable Turbo Boost mode.
  3. Expose BIOS C-states to the hypervisor to enable high performance, as needed.

CPU Power Management

If you do not select a policy, ESXi uses the Balanced power policy by default. For vSAN nodes, set the power policy to High Performance.

1. Select the ESXi host from vSphere client
2. Select Configure -> Hardware -> Overview
3. Scroll down to Power Management and click Edit
    a. High performance     # Select for vSAN nodes
    b. Balanced             # default
    c. Low power
    d. Custom               # User defined power management policy
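
As an alternative to the vSphere Client steps above, the power policy can also be set from PowerCLI through the host PowerSystem API. This is a minimal sketch: the host name is an example, and it assumes the standard policy key mapping (1 = High performance, 2 = Balanced, 3 = Low power, 4 = Custom).

# Sketch: set the High Performance power policy on one host (PowerCLI)
$vmhost   = Get-VMHost -Name "esxi-01.lab.local"            # example host name
$powerSys = Get-View $vmhost.ExtensionData.ConfigManager.PowerSystem
$powerSys.ConfigurePowerPolicy(1)                           # 1 = High performance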

Verifying OS Controlled Mode

You use the vSphere Client to verify OS controlled mode.

Select the ESXi host and navigate to Configure > Hardware > Overview > Power Management.
If ACPI C-states or ACPI P-states appear in the Technology field,
    your power management settings are set to OS Controlled mode.

Manually Updating Drivers and Firmware

Drivers and Firmware can be downloaded from a vendor’s website or from the VMware Compatibility Guide website.

To install the downloaded drivers manually

1. Copy the VMware Installation Bundle (VIB) to a datastore or file system accessible to your host.
2. Run esxcli commands to install the drivers.
    esxcli software vib install -d /tmp/<driver-firmware.zip>
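
If you manage hosts remotely with PowerCLI, the same install can be run through the esxcli interface exposed by Get-EsxCli. A sketch only; the host name and offline bundle path are examples.

# Sketch: install a driver/firmware offline bundle remotely (PowerCLI)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-01.lab.local") -V2
$esxcli.software.vib.install.Invoke(@{ depot = "/vmfs/volumes/datastore1/driver-firmware.zip" })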

Because manually installing drivers on individual hosts in a larger infrastructure becomes complex to manage, use vSphere Lifecycle Manager. For firmware updates, follow the vendor recommendations.

vSphere Lifecycle Manager

vSphere Lifecycle Manager is a unified software and firmware management utility that uses the desired-state model for all life cycle operations:

  1. Monitors compliance (drift)
  2. Remediates back to the desired state

vSphere Lifecycle Manager centralizes automated patch and version management by supporting the following activities:

  1. Upgrading and patching ESXi hosts
  2. Installing and updating third-party software on ESXi hosts
  3. Standardizing ESXi images across hosts in a cluster
  4. Installing and updating ESXi drivers and firmware
  5. Managing VMware Tools and VM hardware upgrades

vSphere Lifecycle Manager Desired Image

The vSphere Lifecycle Manager Desired Image feature merges hypervisor and host life cycle management activities.

An image is created locally from desired-state criteria composed of the hypervisor base image, vendor drivers, and firmware.

vSphere Lifecycle Manager Desired Image defines the exact software stack to run on all ESXi hosts in a cluster, and it includes the following elements:

  1. ESXi hypervisor base image containing software fixes and enhancements
  2. Components (a logical grouping of VIBs)
  3. Vendor add-ons (set of OEM bundled components)
  4. Firmware and driver add-on

To maintain consistency, you apply a single ESXi image to all hosts in a cluster.

Switching from Baselines to vSphere Lifecycle Manager Desired Images

To start using images, the cluster must meet the following requirements:

  1. All ESXi hosts must be vSphere version 7.0 and later.
  2. All ESXi hosts must have a stateful installation.
  3. All ESXi hosts in the cluster must be from the same vendor.

If a host has a version of vSphere earlier than 7.0, you must first use an upgrade baseline to upgrade the host and then you can start using images.

Note
a. vSphere Lifecycle Manager baselines have limited VMware support.
b. vSphere Lifecycle Manager images are fully supported by VMware.
c. After switching a vSAN cluster to images, you cannot revert that cluster back to baselines.

Configuring vSphere Lifecycle Manager Desired Image

Perform the following steps to configure an image:

1. Select the vSAN cluster from vSphere client
2. On the right pane, select Updates -> Image
3. Click Setup Image
4. On the Convert to an Image page:
    a. Select the base ESXi version.
    b. Select the vendor add-on.
    c. Select the firmware and driver add-on.
    d. Include any additional components.
    e. Click Save.

vSAN Resilience and Data Availability Operations

Maintaining a fault-tolerant vSAN environment strengthens the resiliency of the environment and minimizes downtime.

vSAN storage policies, failure handling, and configuring fault domains are key activities for maintaining highly available vSAN data.

vSAN Component States

vSAN components can exist in different states:

Active          # Healthy and functioning correctly
Reconfiguring   # In the process of applying storage policy changes
Absent          # No longer available because of a failure
Stale           # No longer in sync with other components of the same vSAN object
Degraded        # Not expected to return because of a detected failure

vSAN Object Repair Timer

vSAN waits before rebuilding a disk object after a host is either in a failed state or in maintenance mode. Because vSAN is uncertain if the failure is transient or permanent, the repair delay value is set to 60 minutes by default.

# To reconfigure the Object Repair Time
Select the vSAN cluster and select Configure > vSAN > Services > Advanced Options > EDIT
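
The same delay can be inspected or changed per host from PowerCLI through the VSAN.ClomRepairDelay advanced setting. This is a sketch only; the cluster name is an example, and in recent vSAN releases the preferred method is the cluster-level Object Repair Timer option shown above.

# Sketch: set the object repair delay to 90 minutes on every host in the cluster (PowerCLI)
Get-Cluster -Name "vSAN-Cluster" | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "VSAN.ClomRepairDelay" |
        Set-AdvancedSetting -Value 90 -Confirm:$false
}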

The vSAN object health test includes functionality to rebuild components immediately, rather than waiting as specified by the Object Repair Timer.

# To repair objects immediately
1. select the vSAN cluster and select Monitor
2. Navigate to vSAN > Skyline Health > Data > vSAN object health
3. On the right pane, under Overview, select REPAIR OBJECTS IMMEDIATELY

Resynchronizing Components

The resynchronizing of components can be initiated in two ways:

# Failure-initiated resync
1. Cache device failure
2. Capacity device failure
3. Storage controller failure
4. Host network communication failure
5. Host failure 

# User-initiated resync:
1. Policy change
2. User-triggered reconfiguration
3. User placing host into maintenance mode

Impact of vSAN Failure Testing on VMs

When performing failure testing in a vSAN cluster, you must understand the expected VM behavior in several failure scenarios:

  1. VM objects are affected when more failures occur than the storage policy tolerates.
  2. The VM home namespace object of a running VM can become inaccessible.
  3. If that happens, the VM itself might become inaccessible.

Planned Maintenance

When performing maintenance, you must plan your tasks to avoid failures and consider the following recommendations:

a. Unless Full data migration (full evacuation) is selected, components on a host become absent
    when the host enters maintenance mode, which counts as a failure.
b. Data loss can occur if too many unrecoverable failures occur and no backups exist.
c. Never reboot, disconnect, or disable more hosts than the FTT values allow.
d. Never start another maintenance activity before all resyncs are completed.
e. Never put a host into maintenance mode if another failure exists in the cluster.
f. Never use FTT = 0 without application-level protection.

Failure event handling

The following describes some failure events and how each is handled:

1. A VM component, such as a VMDK component, fails, causing an I/O flow failure:
    a. The failure is detected, and the failed component is removed from the active set
    b. Assuming most object components are available, the I/O flow is restored and the VM remains operational
2. Component rebuild:
    a. If the component state is absent, wait 60 minutes before initiating a rebuild
    b. Start rebuilding
3. When a cache device failure causes degraded components:
    a. an instant mirror copy is created if the component is affected
4. When a capacity device fails with error and causes degraded components:
    an instant mirror copy is created if the component is affected.
5. When a capacity device fails without error and causes absent components:
    a new mirror copy is created after 60 minutes.
6. When a storage controller fails and causes degraded components:
    resynchronizing begins immediately.
7. When a host failure causes absent components:
    vSAN waits 60 minutes before rebuilding absent components. 
    If the host returns within 60 minutes, vSAN synchronizes the stale components.
8. When host isolation resulting from a network failure causes absent components:
    vSAN waits 60 minutes before rebuilding absent components. 
    If the network connection is restored within 60 minutes, vSAN synchronizes the stale components.
9. In a four-node vSAN cluster, for example, when a network partition isolates the esxi-01 and esxi-04 hosts:
    vSphere HA restarts the affected VMs on either the esxi-02 or the esxi-03 host. 
    These hosts are still in communication and own more than 50% of the VM components.

Backup and restore

Regardless of the storage system, backup and restore operations are fundamental to a vSAN environment.

vSAN datastore

A datastore is the basic unit of storage in virtualized environments.

Note:
Only one vSAN datastore is created, regardless of the number of storage devices and hosts in the cluster.

The vSAN datastore has the following characteristics:

  1. vSAN provides a single vSAN datastore accessible to all hosts in the cluster.
  2. A single vSAN datastore can provide different service levels for each VM or each virtual disk.
  3. Only capacity devices contribute to datastore capacity

vSAN disk groups

A disk group is a unit of physical storage capacity on a host and a group of physical devices that provide performance and capacity to the vSAN cluster. On each ESXi host that contributes its local devices to a vSAN cluster, devices are organized into disk groups.

Hosts can include a maximum of five disk groups, each of which must have one flash cache device and one or more capacity devices. In vSAN, you can configure a disk group with either all-flash or hybrid configurations.

Note:
1. Maximum of 5 disk groups per host
2. Maximum of 7 capacity disks per disk group
3. A single caching device must be dedicated to a single disk group.

Hybrid Disk Groups

The vSAN hybrid disk group configurations include one flash device for cache and between one and seven magnetic devices for capacity. Cache devices are used for performance.

The cache device should be sized at a minimum of 10% of the disk group capacity:

  1. 70% of the available cache is used for frequently-read drive blocks.
  2. 30% of the available cache is used for write buffering

vSAN Storage Policies

vSAN storage policies define VM storage requirements for performance and availability. Storage policies also define the placement of VM objects and components across the vSAN cluster.

The number of component replicas and copies that are created is based on the VM storage policy. After a storage policy is assigned, its requirements are pushed to the vSAN layer during VM creation. Stored files, such as VMDKs, are distributed across the vSAN datastore to meet the required levels of protection and performance per VM.

The vSAN datastore has a default storage policy configured with standard parameters to protect vSAN data. However, VM home directories and virtual disks can have user-defined custom vSAN storage policies. vSAN storage policies can also be assigned to VM swap objects.

# How to change default vSAN datastore storage policy
1. In vSphere client, navigate to the datastore
2. Select Configure > General > Default Storage Policy
3. Set or change the default vSAN datastore storage policy

vSAN RAID Types

vSAN supports the following common RAID types

RAID       Configuration                Usage
---------------------------------------------------------------------------------------------
RAID 0     Striped                  Fastest performance, no redundancy
RAID 1     Mirrored                 Good performance, full redundancy with 200% capacity usage
RAID 10    Mirrored plus striped    Best performance, redundancy with 200% capacity usage.
RAID 5     Striped plus parity      Good performance with redundancy that has slower drive writes because of parity calculations.
RAID 6     Striped plus double parity   Good performance with redundancy that has the slowest drive writes because of twice the parity calculations as RAID 5

Integrating vSAN with vSphere HA

You can enable both vSAN and vSphere HA on the same cluster. vSphere HA provides as much protection for VMs on a vSAN datastore as it does on a traditional VMFS or NFS datastore.

When enabling vSAN and vSphere HA for the same cluster, vSphere HA agent traffic, such as heartbeats and election packets, flows over the vSAN network rather than the management network.

Note
If vSphere HA is already enabled on a cluster, it must be temporarily disabled before enabling vSAN. 
After vSAN is enabled, vSphere HA can be re-enabled.

Integrate vSAN with vSphere Tanzu and Kubernetes (K8s)

vSAN 7 supports using native file services as persistent volumes (PVs) for Tanzu clusters.

When used with vSphere Tanzu, persistent volumes can support the use of encryption and snapshots. vSAN also enables vSphere Add-on for Kubernetes so that stateful containerized workloads can be deployed on vSAN datastores.

How to configure a vSAN ReadyNode

https://vsanreadynode.vmware.com/RN/RN

vSAN Objects and Components

vSAN Objects

vSAN is an object-based file system. Virtual machines (VMs) stored in the vSAN datastore comprise a series of objects. vSAN stores and manages data as flexible data containers called objects. Each object on the datastore includes data, part of the metadata, and a unique ID.

# VMs include the following objects
a. The VM home namespace
b. VM disks (VMDK)
c. VM Swap object
d. VM snapshots
e. Vmem object (snapshot memory)

# vSAN also stores other types of objects
a. vSAN performance service object
b. vSAN iSCSI and File Services objects

vSAN object components

Objects are made up of components. If objects are replicated, multiple copies of the data are located in replica components. Each object is composed of a set of components, determined by storage policies.

# Example
The VMDK is the object, and each copy is a component of that object.

Witness

https://blogs.vmware.com/vsphere/2014/04/vmware-virtual-san-witness-component-deployment-logic.html

Witness components are part of every storage object. The Virtual SAN witness components contain object metadata, and their purpose is to serve as tiebreakers whenever availability decisions have to be made in the Virtual SAN cluster, in order to avoid split-brain behavior and satisfy quorum requirements.

vSAN Witness components are defined and deployed in three different ways:

  1. Primary Witness
  2. Secondary Witness
  3. Tiebreaker Witness

The behavior and logic of witness placement is 100% transparent to the end user,
and there is nothing to be concerned about regarding the layout of the witness components.
This behavior is managed and controlled by the system.

When needed, vSAN automatically creates witness components. Witness components provide an availability mechanism to VMs by serving as a tiebreaker when a quorum does not exist within a vSAN cluster

  1. The quorum and voting system is in place to preserve data integrity.
  2. Each component has one or more votes.
  3. Quorum is achieved when more than 50 percent of the votes are available.
  4. If a quorum exists, the object is accessible

The witness component is small (only a few megabytes) and contains only metadata, no application data. The purpose of the witness is to serve as a tiebreaker when a partition occurs.

vSAN supports a quorum-based system in which each component might have more than one vote to decide the availability of VMs. More than 50 percent of the votes that make up a VM storage object must be accessible at all times.

The default storage policy is RAID 1, which can sustain at least one component failure. This represents RAID 1 with two replicas on two separate capacity disks in two hosts. A witness is created and placed on a third host. If the system loses a component or access to a component, the system uses the witness. The component that can communicate with the witness is declared to have integrity and is used for all data read/write operations until the broken component is repaired.

Note:
1. The witness count depends on how the components and data are placed and is not determined directly by a given policy.
2. There can be multiple witnesses, depending on the vSAN storage policies.

# Example: three-way mirror across 5 nodes (aka 3-way mirror)
a. 3 x components   (3 copies of data)
b. 2 x witnesses    (primary and secondary witness)
c. A total of 5 hosts is required

If an even number of total components exists after adding primary and secondary witnesses, a tiebreaker witness is added to make the total component count an odd number.

Large vSAN Objects

When a VM disk exceeds 255 GB or the size of individual capacity devices, the object is automatically split into two or more components. When planning the vSAN datastore, consider the size of VMDKs and other objects planned for the datastore.

Any object, such as a VMDK, can be up to 62 TB. The 62 TB limitation is the same for VMFS and NFS so that VMs can be cloned and migrated using vSphere vMotion between vSAN and other datastores.

Objects striped across multiple drives are always striped evenly so that each component is the same size. When an object must grow, vSAN must create a new component to contain the new data. After the data is written and the object is no longer increasing in size, the system rebuilds all components to redistribute the new data evenly.

Note:
After the object is no longer increasing in size:
a. The system rebuilds all components.
b. The system redistributes the new data evenly across the components.

vSAN Storage Policy Resilience

When configuring a VM storage policy, you can select a RAID configuration that is optimized with suitable availability, performance, and capacity for your VM deployments.

# Failure to tolerate
It defines the number of failures tolerated by an object
a. No data redundancy
b. 1 failure - RAID 1 (Mirroring)
c. 1 failure - RAID 5 (Erasure coding)
d. 2 failures - RAID 1 (Mirroring)
e. 2 failures - RAID 6 (Erasure coding)
f. 3 failures - RAID 1 (Mirroring) 

The default storage policy is RAID 1, aka 2 way mirror.

# Default storage policy
FTT = 1, RAID = 1, SW = 1 (SW = stripe width, the number of drives each replica is written to)
    a. No of objects: 1
    b. No of drives that each replica is written to: 1 (SW=1)
    c. No of components, excluding witness: 2
    d. No of witnesses: 1
    e. No of hosts required: 3 (2n+1)

# Storage policy - RAID 1, FTT = 2, SW = 1
    a. No of objects: 1
    b. No of drives that each replica is written to: 1
    c. No of components, excluding witness: 3   (FTT=2)
    d. No of witnesses: 2 (2n+1 - 3 data components)
    e. No of hosts required: 5 (2n+1)

# Storage policy - FTT = 1, RAID = 1, SW = 2
    a. No of objects: 1
    b. No of drives that the replicas are written to in total: 4
        Each replica is striped across 2 drives (stripe width = 2)
    c. No of components, excluding witness: 4   (2 replicas x 2 stripes, FTT=1)
    d. No of witnesses: 1 or more, depending on component placement
    e. No of hosts required: 3 (2n+1)
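
The policies used in these examples can be created with the PowerCLI SPBM cmdlets. A sketch for an FTT = 1, RAID 1, SW = 2 policy; the policy name is an example, and the capability names (VSAN.hostFailuresToTolerate, VSAN.stripeWidth) assume the standard vSAN storage provider.

# Sketch: create a custom vSAN policy with FTT=1 (RAID 1 mirroring) and stripe width 2 (PowerCLI)
$ruleSet = New-SpbmRuleSet -AllOfRules @(
    (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1),
    (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 2)
)
New-SpbmStoragePolicy -Name "vSAN-FTT1-RAID1-SW2" -AnyOfRuleSets $ruleSet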

Storage policy change impact - Component Count Policy Update

When you change a VM storage policy from (FTT=1, FTM=RAID 1, SW=2) to (FTT=1, FTM=RAID 1, SW=3), a new set of components must be built while the RAID layout changes. Until the policy change is complete, two complete sets of components coexist and consume four times the size of the disk object.

# FTT=1, FTM=RAID 1, SW=2
    a. Replica: 2
    b. Each replica is written to drives: 2
    c. No of components
        i. Each replica will have two components
        ii. Total of components: 2 x 2 = 4
    d. Consume storage size: 2 x VM disk size (existing storage required)

# FTT=1, FTM=RAID 1, SW=3
    a. Replica: 2
    b. Each replica is written to drives: 3 (Stripe width 3)
    c. No of components
        i. Each replica will have three components
        ii. Total of components: 2 x 3 = 6
    d. Consume storage size: 2 x VM disk size (new storage required)

# During the storage policy change period
    Total storage required = existing storage required + new storage required
Note:
    After the VM storage policy change has completed, the actual storage required is 2 x the VM disk size
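
To limit the resync impact described in the next section, a changed policy can be applied to one disk at a time from PowerCLI. A sketch only; the VM name, disk selection, and policy name are examples.

# Sketch: apply a new storage policy to a single VM disk and let the resulting resync complete (PowerCLI)
$policy = Get-SpbmStoragePolicy -Name "vSAN-FTT1-RAID1-SW3"
$disk   = Get-HardDisk -VM (Get-VM -Name "app-vm-01") | Select-Object -First 1
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -HardDisk $disk) -StoragePolicy $policy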

Minimizing the Impact of Policy Changes on Clusters

Applying new policies can result in resyncs or the creation of new components that affect the cluster performance.

# To apply new policies successfully, perform the following actions:
1. Ensure that sufficient available storage capacity exists for rebuilds.
2. Apply new policies to one object at a time and inspect the impact.

Verifying Individual vSAN Component States

To verify the individual vSAN component state

select a VM and select Monitor > vSAN > Physical Disk Placement

If the vSAN component state is not Active, but Absent or Degraded, the object is noncompliant with the assigned storage policy.
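
Component and object states can also be queried from the command line through the esxcli vsan debug namespace. A sketch using Get-EsxCli; the host name is an example, and the exact output fields vary by vSAN version.

# Sketch: list vSAN objects with their component state and health from one host (PowerCLI)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-01.lab.local") -V2
$esxcli.vsan.debug.object.list.Invoke(@{ all = $true }) | Select-Object -First 5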

vSAN Fault Domains

vSAN fault domains can spread component redundancy across servers in separate computing racks. In doing so, you can protect the environment from a rack-level failure, such as power or connectivity loss.

vSAN requires a minimum of three fault domains to support FTT=1. If possible, use a minimum of four fault domains.

Each fault domain consists of one or more hosts.

# If fault domains have been created (Explicit fault domains)
    vSAN applies the active VM storage policy to the fault domains instead of to the individual hosts.

vSAN Fault Domains Best Practices

For a balanced storage load and fault tolerance when using fault domains, consider the following guidelines:

  1. Provide enough fault domains to satisfy the failures to tolerate value that is configured in the storage policies.
  2. Assign the same number of hosts to each fault domain.
  3. Use hosts that have uniform configurations.
  4. If possible, dedicate one fault domain with available capacity for rebuilding data after a failure.

Implicit Fault Domains

In a vSAN cluster, each host is an implicit fault domain if:

a. No explicit fault domains have been created, or
b. The host is not a member of any explicit fault domain

Explicit Fault Domains

vSAN supports the creation of explicit fault domains. Explicit fault domains:

  1. Increase availability
  2. Protect against rack-level failures
  3. Ensure that component redundancy of the same object does not exist in the same server rack

vSAN Configuration Minimums and Maximums

vSAN 7 supports a wide array of values for vSAN cluster configuration.

Feature or 
Component             Minimum           Maximum
---------------------------------------------------------------------------------------
ESXi host               3               64
VM                      None            200 VMs per host (8,000 per vSphere HA protected cluster)
Disk group            1 (per host)      5 (per host)  
Cache tier disk       1 (per host)      5 (per host)
Capacity tier disk    1 (per host)      35 (per host) (maximum of 7 capacity disks per disk group x 5 disk groups)

Planning vSAN cluster

vSAN Cluster Requirements

  1. When planning a vSAN cluster deployment, you must verify that all elements of the cluster meet the minimum requirements for vSAN.
  2. All devices, drivers, and firmware versions in your vSAN configuration must be certified and listed in the vSAN section of the VMware Compatibility Guide.
  3. A standard vSAN cluster must contain a minimum of three hosts that contribute to the capacity of the cluster.
     As a best practice, consider designing clusters with a minimum of four nodes.

Check the VMware Product Interoperability Matrix and the vSAN section of the VMware Compatibility Guide:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan

vSAN Host CPU Requirements

When determining CPU requirements for hosts in the vSAN cluster, consider the following information:

1. Number of virtual CPUs required for virtual machines (VMs)
2. Virtual CPU to physical CPU core ratio
3. Cores per socket
4. Sockets per host 
5. ESXi hypervisor CPU overhead 
6. vSAN operational overhead (10%)

The vSAN ReadyNode Sizer is useful for determining CPU requirements for vSAN hosts.

Additional CPUs must be considered for vSAN operational overhead if vSAN deduplication, compression, and encryption capabilities are enabled.

vSAN Host Memory Requirements

When determining memory requirements for hosts in the vSAN cluster, consider the following information:

1. Memory per VM
2. ESXi hypervisor memory overhead 
3. vSAN operational overhead

The memory requirements for vSAN hosts depend on the number of disk groups and devices that the ESXi hypervisor must manage.
Consider at least 32 GB of memory for a fully operational vSAN node with five disk groups and seven capacity devices per disk group.

vSAN Host Network Requirements

When configuring your network for vSAN hosts, consider the following recommendations:

  1. 1 GbE adapters must be dedicated to hybrid vSAN traffic.
  2. 10 GbE adapters can be shared with other network traffic types. If a network adapter is shared with other traffic types, use VLANs to isolate traffic types.

Consider configuring Network I/O Control on a vSphere Distributed Switch to ensure that sufficient bandwidth is guaranteed to vSAN.

vSAN Host Storage Controllers

Storage controller recommendations:

  1. Use controllers that support passthrough mode (also known as HBA mode) to present disks directly to a host.
  2. Use multiple storage controllers to improve performance and to isolate a potential controller failure to only a subset of disk groups.
  3. Consider the storage controller passthrough mode support for easy hot plugging or the replacement of magnetic disks and flash capacity devices on a host.

Configure controllers that do not support passthrough to present each drive as a RAID 0 LUN with caching disabled or set to 100% Read. If a controller works in RAID 0 mode, you must perform additional steps before the host can discover the new drive.

The controller must be configured identically for all disks connected to the controller including those not used by vSAN. Do not mix the controller mode for vSAN disks and disks not used by vSAN to avoid handling the disks inconsistently, which can negatively affect vSAN operation.

The passthrough, also known as HBA mode, implies a storage controller that is not performing any RAID activity on a drive. Storage controllers that support Passthrough or HBA mode present disks directly to a host.

Note: Disable the storage controller cache or, if disabling cache is not possible, set it to 100 percent read.

vSAN Host Boot Device Requirements

You can boot vSAN hosts from a local disk, a USB device, an SD card, or a SATADOM device.

If you choose to boot vSAN hosts from a local disk, using separate storage controllers for boot disks and vSAN disks is the best practice.

When booting vSAN hosts from a USB device, an SD card, or SATADOM devices, log information and stack traces are lost on host reboot. They are lost because the scratch partition is on a RAM drive. Therefore, a best practice is to use persistent storage for logs, stack traces, and memory dumps.

Consider configuring the ESXi Dump Collector and vSphere Syslog Collector.

During installation, the ESXi installer creates a core dump partition on the boot device. The default size of the core dump partition satisfies most installation requirements.

If the ESXi host has 512 GB of memory or less, you can boot the host from a USB, SD, or SATADOM device. When booting a vSAN host from a USB device or SD card, the size of the boot device must be at least 4 GB.

If the ESXi host has more than 512 GB of memory, you can boot the host from a SATADOM or disk device with a minimal size of 16 GB. When you use a SATADOM device, use a single-level cell (SLC) device.

Solid State Devices

In vSAN, SSDs are used in cache tiers to improve performance. They can also be used in both the cache tier and the capacity tier, which is called a vSAN all-flash configuration.

SSDs have a limited number of write cycles before the cell fails, referred to as its write endurance rating. Every time the drive writes or erases, the flash memory cell’s oxide layer deteriorates. The type of cell affects the number of write cycles before failure.

Use single-level cell (SLC) devices, which provide approximately 100,000 write cycles.

vSAN Limitations

When planning to deploy vSAN, you should stay within the limits of what is supported by vSAN.

# vSAN does not support:
a. Hosts that participate in multiple vSAN clusters 
b. vSphere DPM (Distributed Power Management) and Storage I/O Control 
c. SEsparse disks, which are a default format for all delta disks on VMFS6 datastores 
d. RDM (Raw Device Mapping) and diagnostic partition

Capacity Sizing Guidelines

When planning for the storage capacity of the vSAN datastore, you must consider the following factors:

1. Storage space required for VMs
2. Anticipated growth
3. Failures to tolerate
4. vSAN operational overhead

If planning to use advanced vSAN features such as software checksum or deduplication and compression, 
    reserve additional storage capacity to manage the operational overhead.

Plan for additional storage capacity to handle any potential failure or replacement of capacity devices, disk groups, and hosts. Reserve additional storage capacity for vSAN to recover components after a host failure or when a host enters maintenance mode.

Note: 
    Keep at least 30% of storage consumption unused to prevent vSAN from rebalancing the storage load. 
    vSAN rebalances the components across the cluster whenever the consumption on a single capacity device reaches 80% or more. 

Plan extra capacity to handle any potential failure or replacement of capacity devices, disk groups, and hosts. When a capacity device is not reachable, vSAN recovers the components from another device in the cluster. When a flash cache device fails or is removed, vSAN recovers the components from the entire disk group.

Provide enough temporary storage space for changes in the vSAN VM storage policy. When you dynamically change a VM storage policy, vSAN might create a new RAID tree layout of the object. When vSAN instantiates and synchronizes a new layout, the object may consume extra space temporarily. Keep some temporary storage space in the cluster to handle such changes. Enabling deduplication and compression with software checksum features requires additional storage space overhead, approximately 6.2 percent capacity per device.

vSAN Reserved Capacity

vSAN requires free space set aside for operations such as host maintenance mode data evacuation, component rebuilds, and rebalancing. This free space also accounts for the capacity needed for host outages. Activities such as rebuilds and rebalancing can temporarily consume additional raw capacity.

The free space required for these operations is called vSAN reserved capacity and it comprises the following elements

1. Operations reserve
    Reserves storage space for internal vSAN operations, such as object rebuild or repair.
2. Host rebuild reserve
    Reserves storage space to ensure that all objects can be rebuilt if host failure occurs in the cluster.

# Slack space
In all vSAN versions earlier than vSAN 7 U1, the free space required for these transient operations was called slack space. 

# Recommendation
Keep 25-30% of the cluster capacity as free space, regardless of cluster size.

Planning for Failures To Tolerate

When planning the storage capacity of the vSAN datastore, you must consider the Failures To Tolerate (FTT) levels and the Failure Tolerance Method (FTM) attributes of the VM storage policies for the cluster.

vSAN can be configured with one of two fault tolerance methods: RAID 1 (mirroring), which delivers the best performance, and RAID 5/6 (erasure coding), which is optimized for capacity. Erasure coding does not use witness components; instead, it uses data parity.

Planning Capacity for VMs

When planning the storage capacity of the vSAN datastore, consider the space required for the following VM objects:

a. VM home namespace object
b. VM VMDK object
c. VM snapshot object
    The VM snapshot object inherits the storage policy settings from the VM's base VMDK file.
d. VM swap object
    vSAN applies Failures To Tolerate policy of 1 to a swap object
    The VM swap object inherits the storage policy settings from the VM home namespace object.

Because VM VMDK files are thin-provisioned by default, prepare for future capacity growth.

The VM VMDK object holds the user data. Its size depends on the size of virtual disk, defined by the user. However, the actual space required by a VMDK object for storage depends on the applied VM storage policy.

The size of a VM swap object depends on the memory configured on a VM.

vSAN applies the Failures To Tolerate policy of 1 to a VM swap object.
The actual space consumption can be twice as much as the configured VM memory.

VM Home Namespace Objects

A home namespace object does not contain user data. It is a container object that contains various files (such as VMX, log files, and so on) which, compared to other objects, occupies much less space.

# VM home namespace objects only accept the following policies
a. Failures To Tolerate
b. Force Provisioning

vSAN Cache Tiers

A vSAN cache tier must meet the following requirements

1. A solid-state disk (SSD) must be connected.
2. A higher cache-to-capacity ratio can be considered to allow future capacity growth.
3. For hybrid clusters, a flash caching device must provide at least 10% of the anticipated capacity tier storage space.

For best performance, consider a PCIe flash device which is faster than SSD.

In vSAN all-flash configurations, the cache tier is not used for read operations. You can use a small-capacity, high-write-endurance flash device for the cache tier.

Flash Devices for Capacity Tiers

Plan the configuration of flash capacity devices for vSAN all-flash clusters to provide high performance and the required storage space, and to accommodate future growth. Choose SSD flash devices according to requirements for performance, capacity, write endurance, and cost of vSAN storage

  1. For capacity: flash devices with lower write endurance are less expensive and are suitable for the capacity tier.
  2. For balanced performance and predictable behaviour: Use the same type and model of flash capacity device.

Multiple vSAN Disk Groups

An entire disk group can fail if a flash cache device or a storage controller stops responding. vSAN rebuilds all components for a failed disk group from another location in the cluster. Using multiple disk groups, with each providing less capacity, has benefits and disadvantages.

# Benefits
1. Improved performance
    a. The datastore has more aggregated cache and I/O operations are faster.
    b. If a disk group fails, vSAN rebuilds fewer components.
2. Risk of failure is spread among multiple disk groups. 

# Disadvantages
1. Costs are increased because two or more caching devices are required.
2. A vSAN host requires additional memory to manage more disk groups.
3. Multiple storage controllers are required to reduce the risk of a single point of failure.

vSAN Cluster Scaling

vSAN scales up and scales out if you need more compute or storage resources in the cluster. Scaling up adds resources to an existing host:

  1. Capacity disks for storage space
  2. Caching tier devices for performance

Scaling out adds additional nodes to the cluster for compute and storage capacity.

Scale Up

Scaling up a vSAN cluster refers to increasing the storage capacity by adding additional disks to the existing vSAN node.

Always increase capacity uniformly on all cluster nodes to avoid uneven data distribution which can lead to uneven resource utilization.

# Ways to scale up
a. Create new disk groups
b. Add new disks to existing disk groups
c. Replace existing cache and capacity disks with higher-capacity drives 

# Reasons to scale up
a. Poor performance because of undersized cache disk
b. To satisfy stripe width policy compliance
c. To increase vSAN datastore capacity

Scale Out

Scaling out a cluster adds storage and compute resources to the cluster. Reasons to add more nodes to vSAN cluster:

  1. To increase storage and compute resources in the cluster
  2. To increase the number of fault domains to meet FTT compliance
  3. To resolve a cluster-full situation

Designing vSAN Network

Hosts use a VMkernel adapter to access the vSAN network. You can create a vSAN network with standard or distributed switches.

Recommendation: Use distributed switches.

When planning the network for the vSAN cluster, consider the following networking features that vSAN supports, which provide availability, security, and guaranteed bandwidth:

1. Distributed or standard switches
2. NIC teaming and failover
3. Unicast support
4. Network I/O Control
5. Priority tagging and isolating vSAN traffic
6. Jumbo frames
7. Static routes for vSAN traffic

NIC Teaming and Failover

vSAN uses the NIC teaming and failover policy configured on the virtual switch for network redundancy only.

a. vSAN does not use the second NIC for load balancing purposes. 
b. vSAN does not support multiple VMkernel adapters on the same subnet.

Consider configuring Link Aggregation Control Protocol (LACP) or EtherChannel for improved redundancy and bandwidth use.

Unicast Support

Unicast is the supported protocol for a vSAN network. Multicast is no longer required on the physical switches that support vSAN clusters. Reasons vSAN unicast mode was introduced in vSAN 6.6:

  1. To simplify network requirements for vSAN cluster communications for Cluster Membership, Monitoring, and Directory Services (CMMDS) and VM I/O traffic
  2. To verify cluster participation

If hosts in your vSAN cluster are running earlier versions of ESXi, a multicast network is still required.

Network I/O Control

Network I/O Control is available on vSphere distributed switches and provides the following bandwidth controls:

  1. Guarantees a minimum amount of bandwidth for each traffic type
  2. Limits the bandwidth that each traffic type can consume
  3. Controls the proportion of bandwidth allocated to each traffic type during congestion

If you plan to use a shared 10 Gb Ethernet network adapter, place the vSAN traffic on a distributed switch and configure Network I/O Control to guarantee sufficient bandwidth for vSAN traffic.

Isolating vSAN Traffic

Consider isolating vSAN traffic by segmenting it in a VLAN for enhanced security and performance.

Jumbo Frames

Jumbo frames can transmit and receive up to six times more data per frame than the default of 1,500 bytes. This feature reduces the load on host CPUs when transmitting and receiving network traffic.

Note:
Jumbo frames: 9,000 MTU.
    Jumbo frames are supported, but not required, on the vSAN network.

You should verify that jumbo frames are enabled on all network devices and hosts in a cluster.
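
A sketch of an end-to-end MTU change from PowerCLI; the distributed switch, cluster, and VMkernel adapter names are examples, and the physical switch ports must also be configured for jumbo frames.

# Sketch: enable jumbo frames on the vSAN distributed switch and VMkernel adapters (PowerCLI)
Get-VDSwitch -Name "vds-vsan" | Set-VDSwitch -Mtu 9000
Get-Cluster -Name "vSAN-Cluster" | Get-VMHost | ForEach-Object {
    Get-VMHostNetworkAdapter -VMHost $_ -VMKernel -Name "vmk2" |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
}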

By default, the TCP Segmentation Offload (TSO) and Large Receive Offload (LRO) features are enabled on ESXi. These features offload TCP/IP packet processing work onto the NICs. If not offloaded, the host CPU must perform this work.

vSAN Network Requirements

The network infrastructure and configuration on the ESXi hosts must meet the minimum networking requirements for vSAN.

Network Component           Requirements 
------------------------------------------------------------------------------------------------------
Connection between hosts    Each host in the vSAN cluster must have a VMkernel adapter for vSAN traffic exchange. 
Host network                All hosts in the vSAN cluster must be connected to a vSAN layer 2 or layer 3 network.
Network latency             The maximum latency is 1 ms RTT for standard (non-stretched) vSAN clusters between all hosts in the cluster.
IPv4 and IPv6 support       vSAN network supports both IPv4 and IPv6.
vSAN hybrid cluster         Use 10 Gb or faster, but 1 Gb is supported with Latency <1 ms RTT.
vSAN all-flash cluster      A vSAN all-flash cluster requires 10 Gb or faster with latency <1 ms RTT; 1 Gb is not supported.

vSAN Communication Ports

The ports listed are used for vSAN communication.

Port    Protocol    vSAN Service
-------------------------------------------------
12345   UDP         vSAN clustering service
2233    TCP         vSAN transport for storage I/O
8080    TCP         vSAN management service
9080    TCP         vSAN storage I/O filter
3260    TCP         vSAN iSCSI target port
5001    UDP         vSAN network health test
8010    TCP         vSAN observer for live statistics
80      TCP         vSAN performance service

Deploy vSAN Cluster

vSAN Cluster Configuration Types

vSAN is a cluster-based solution and creating a cluster is the first logical step in the deployment of the solution. vSAN clusters can be configured in the following ways:

1. Single-site vSAN cluster
    Clusters are configured on one site to run production workloads. All ESXi servers run on that single site.
2. vSAN stretched cluster
    Clusters span three sites: two data sites and a witness site.
    You typically deploy vSAN stretched clusters in environments where avoiding disasters and downtime is a key requirement.
3. Two-node vSAN cluster
    Used where a minimal configuration is a key requirement, typically running a small number of workloads that require high availability.

Configuring vSAN Cluster

Next, you add hosts to the newly created cluster and configure vSAN.

# There are two ways to configure a vSAN Cluster
1. QuickStart Wizard
    Configure a new vSAN cluster that uses recommended default settings for functions such as networking, storage, and services.
2. Manual Configuration
    The preferred method, providing better control over the configuration.

Comparing the Cluster Quickstart Wizard and Manual Configuration

Cluster Quickstart wizard                       Manual Configuration
=============================================================================================================
You can use the QuickStart wizard               A cluster can always be configured manually, 
only if hosts have ESXi 6.0 Update 2 or later.  regardless of the ESXi version and hardware configuration.
---------------------------------------------------------------------------------------
ESXi hosts should have similar configurations.  This method offers more flexibility while configuring a new or existing cluster.
---------------------------------------------------------------------------------------
Helps configure a vSAN cluster,                 This method provides detailed control over every aspect of cluster configuration.
as per recommendations.
---------------------------------------------------------------------------------------
Available only through the                      This method is available through any version of the vSphere Client.
vSphere Client based on HTML5.

Create vSAN Cluster

To create a vSAN cluster, you must first create the vSphere cluster

# Add Hosts
1. Add hosts to the data center

# Configure vSAN networking
1. Create a distributed switch
2. Create port groups on the newly created distributed switch
    a. pg-<site|function>-vSAN
    b. pg-<site|function>-vMotion

# Set up VMkernel ports for vSAN and vMotion
1. Select each ESXi host
2. Select Configure > VMkernel adapters
3. Create VMkernel adapters for vSAN and vMotion

# Create vSAN cluster
1. Right-click a data center and select New Cluster
2. Enter a name for the cluster in the Name text box
3. Select whether to configure DRS, vSphere HA, and vSAN.
    Do not select vSphere HA; enable it after the vSAN cluster is successfully configured.
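
The same steps can be scripted with PowerCLI. A sketch, assuming the data center, cluster, distributed switch, port group, and IP addressing shown here are examples:

# Sketch: create the cluster (DRS and vSAN enabled, HA left off for now) and tag vSAN VMkernel adapters (PowerCLI)
$dc = Get-Datacenter -Name "DC01"
New-Cluster -Location $dc -Name "vSAN-Cluster" -DrsEnabled -VsanEnabled
$i = 11
foreach ($esx in Get-VMHost -Location $dc) {
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch (Get-VDSwitch -Name "vds-vsan") `
        -PortGroup "pg-prod-vSAN" -IP "192.168.50.$i" -SubnetMask "255.255.255.0" `
        -VsanTrafficEnabled $true
    $i++
}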

Disable the Cluster Quickstart Wizard
1. From the vSphere Client, select the vSAN cluster
2. On the right pane, select Configure > Configuration > Quickstart
3. Disable the Quickstart wizard
    Note:
    Once disabled, the Quickstart wizard cannot be run again for that cluster

vSAN cluster configuration

Configure the vSAN cluster

1. Configure the networking settings, including vSphere distributed switches, port groups, and physical adapters
2. Set up VMkernel ports for vMotion and vSAN traffic. 
3. Configure DRS, vSphere HA, vSAN, and Enhanced vMotion Compatibility. 
4. Claim disks on each host for the cache and capacity tier. 
5. (Optional) Create fault domains for hosts that can fail together. 
6. On the Ready to complete page, verify the cluster settings and then click Finish.

Manually create vSAN

vSAN can be configured manually on a new or existing vSphere cluster. All hosts must have a VMkernel network adapter configured for vSAN traffic, and vSphere HA must be temporarily disabled.

It includes vSAN cluster configuration, vSAN Services, Disk claim, and Fault domain setup.

Manually Creating a vSAN Disk Group

Disks are assigned to disk groups for either cache or capacity purposes. Each drive can only be used in one vSAN disk group.

In a hybrid disk group configuration, the cache device is used by vSAN as both a read cache (70%) and a write buffer (30%). In an all-flash disk group configuration, 100% of the cache device is dedicated as a write buffer.
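
A disk group can also be created with PowerCLI once the devices are visible to the host. A sketch only; the host name and device canonical names (naa.*) are examples and must be replaced with the actual cache and capacity devices.

# Sketch: claim one cache SSD and two capacity devices into a new disk group (PowerCLI)
$esx = Get-VMHost -Name "esxi-01.lab.local"
New-VsanDiskGroup -VMHost $esx `
    -SsdCanonicalName "naa.5000000000000001" `
    -DataDiskCanonicalName "naa.5000000000000002", "naa.5000000000000003"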

vSAN Fault Domains

vSAN fault domains can spread component redundancy across servers in separate computing racks. By doing so, you can protect the environment from a rack-level failure, such as power and network connectivity loss.

vSAN requires a minimum of three fault domains. At least one additional fault domain is recommended to ease data resynchronization in the event of unplanned downtime or planned downtime, such as host maintenance or upgrades. Each fault domain consists of one or more hosts.

If fault domains are enabled, vSAN applies the active VM storage policy to the fault domains instead of to the individual hosts.

# Implicit Fault Domains
Each host in a vSAN cluster is an implicit fault domain by default.

Explicit Fault Domains

vSAN includes the ability to configure explicit fault domains that include multiple hosts. vSAN distributes data across these fault domains to provide resilience against entire server rack failure resulting from rack power supplies and top-of-rack networking switch failure.

Explicit fault domains increase availability and ensure that component redundancy of the same object does not exist in the same server rack.
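
Explicit fault domains can be created per rack with PowerCLI. A sketch; the fault domain names and host name patterns are examples.

# Sketch: create one explicit fault domain per rack (PowerCLI)
New-VsanFaultDomain -Name "Rack-A" -VMHost (Get-VMHost -Name "esxi-01*", "esxi-02*")
New-VsanFaultDomain -Name "Rack-B" -VMHost (Get-VMHost -Name "esxi-03*", "esxi-04*")
New-VsanFaultDomain -Name "Rack-C" -VMHost (Get-VMHost -Name "esxi-05*", "esxi-06*")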

vSAN Fault Domains Best Practices

For a balanced storage load and fault tolerance when using fault domains, consider the following guidelines

Provide sufficient fault domains to satisfy the failures to tolerate value.
1. Configure a minimum of three fault domains in the vSAN cluster. For best results, configure four or more fault domains.
2. Assign the same number of hosts to each fault domain. 
3. Use hosts with uniform configurations.
4. Dedicate one additional fault domain with available capacity for rebuilding data after a failure.
5. A host not included in an explicit fault domain is considered its own fault domain.
6. You do not need to assign every vSAN host to a fault domain. 
    If you decide to use fault domains to protect the vSAN environment, consider creating equal sized fault domains.
7. You can add any number of cluster member hosts to a fault domain. Each fault domain must contain at least one host.

vSphere HA on vSAN Clusters

vSAN, in conjunction with vSphere HA, provides a highly available solution for VM workloads. If a host that is running VMs fails, the VMs are restarted on other available hosts in the cluster by vSphere HA.

# Requirements for vSAN to operate with vSphere HA
1. vSphere HA uses the vSAN network for communication.
2. vSphere HA does not use the vSAN datastore as a datastore heartbeating location.
    External datastores can still be used with this functionality if they exist.
3. vSphere HA must be disabled before configuring vSAN on a cluster. 
    vSphere HA can only be enabled after the vSAN cluster is configured

Note:
 ESXi hosts in the cluster must be version 5.5 U1 or later

When enabling or disabling vSphere HA in a vSAN cluster, the following order is required:

# Create vSAN
1. Disable vSphere HA
2. Enable vSAN
3. Reconfigure vSphere HA
4. Re-enable vSphere HA

# Disable vSAN
1. Disable vSphere HA
2. Disable vSAN
3. Reconfigure vSphere HA
4. Re-enable vSphere HA
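
The same ordering can be followed in PowerCLI. A sketch only: the cluster name is an example, and it assumes the PowerCLI release in use still exposes the -VsanEnabled switch on Set-Cluster (newer releases move vSAN settings to the Set-VsanClusterConfiguration cmdlet).

# Sketch: enable vSAN on an existing cluster while respecting the HA ordering (PowerCLI)
$cluster = Get-Cluster -Name "vSAN-Cluster"
Set-Cluster -Cluster $cluster -HAEnabled:$false -Confirm:$false   # 1. Disable vSphere HA
Set-Cluster -Cluster $cluster -VsanEnabled:$true -Confirm:$false  # 2. Enable vSAN
Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false    # 3-4. Reconfigure and re-enable vSphere HA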

When configuring vSphere HA on a vSAN cluster, use the following recommended values.

vSphere HA Settings                 Recommended Values 
=============================================================================
Host Monitoring                     Enabled
---------------------------------------------------------------------
Host Hardware Monitoring 
    – VM Component Protection       Disabled (disabled by default)
---------------------------------------------------------------------
Virtual Machine Monitoring          Custom preference (disabled by default)
---------------------------------------------------------------------
Host Isolation Response 
    (Remain Powered On by default)  Power off and restart VMs
---------------------------------------------------------------------
Datastore Heartbeats                Disable datastore heartbeats
                                    (Use datastores only from the specified list but do not add any datastores.)
---------------------------------------------------------------------
Host Isolation Addresses            Two isolation addresses

Plan and Design Considerations: vSphere HA

The vSphere HA admission control setting for the number of host failures the cluster tolerates reserves a set amount of CPU and memory resources on all hosts in the cluster. As a result, if a host fails, enough free resources exist on the remaining hosts in the cluster for VMs to restart.

After a host failure, vSAN tries to use the remaining space on the remaining hosts and storage in the cluster to reprotect the VMs, as instructed by their policies. Reprotection might involve the creation of additional replicas and stripes.

Multiple failures in a vSAN cluster might fill the available space in vSAN, as a result of resource overcommitment.

Detecting Host Isolation - Heartbeat Datastore

vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management network partition occurs and continue to respond to failures.

Heartbeat datastores are not necessary for a vSAN cluster but, if available, they can provide additional benefits. A best practice is to provision heartbeat datastores when the benefits justify the additional provisioning costs. A heartbeat datastore can host VM workloads.

Detecting Host Isolation - Host isolation addresses

Consider adding host isolation addresses for the vSAN network when planning a vSAN cluster. Host isolation addresses are a mechanism for a vSAN host to determine whether it has lost network communication to the other hosts in the cluster.

Host isolation addresses should be addresses of devices that are reachable on the vSAN network, such as the vSAN network's default gateway. If a host cannot reach any of its isolation addresses, vSphere HA performs the action defined for an isolated host.

Enabling vSAN Reserved Capacity

vSAN 7 U1 includes a reserve capacity workflow to simplify storage capacity management for vSAN backend operations and maintenance.

1. Operations reserve capacity is used for internal vSAN operations, such as object rebuild or repair. 
2. Host rebuild reserve capacity is used to ensure that all objects can be rebuilt if any hosts fail.

Note:
    To enable the Host Rebuild reserve, you must have a minimum of four hosts in a vSAN cluster.

When vSAN capacity reservation is enabled and the cluster storage capacity usage reaches the limit, new workloads fail to deploy.

# How to enable capacity reserve
1. In the vSphere Client, select the vSAN cluster
2. In the right pane, select Configure -> vSAN Services
3. Navigate to Enable Capacity Reserve
    a. Click Edit
    b. Toggle to enable
        i. Operations reserve
        ii. Host rebuild reserve

Reserving vSAN Storage Capacity for Maintenance Activities

In earlier versions, 30% of the total capacity was kept free as slack space. In vSAN 7.0 U1, Reserved Capacity is used instead of slack space.

Reserved Capacity = Operations Reserve + Host Rebuild Reserve

Planning for Capacity Reserve

The Operations Reserve reserves capacity for internal vSAN operations, such as object rebuild or repair.

The Host Rebuild Reserve reserves capacity to ensure that all objects can be rebuilt if host failure occurs.

Host rebuild reserve is based on N+1. To calculate the host rebuild reserve, divide 100 by the number of hosts in the cluster; the result is the percentage of capacity reserved. For example, in a five-host cluster, 100 / 5 = 20% of capacity is reserved.

vSAN calculates the amount of capacity to reserve based on the host with the highest capacity in the cluster, regardless of how much capacity each host contributes.

VMware Skyline Health

VMware Skyline Health is the primary and most convenient way to monitor vSAN health. VMware Skyline Health provides an end-to-end approach to monitoring and managing the environment. It also helps ensure optimal configuration and operation of your vSAN environment to provide the highest levels of availability and performance.

VMware Skyline Health alerts can typically stem from:

  1. Configuration inconsistency
  2. Exceeding software or hardware limits
  3. Hardware incompatibility
  4. Failure condition

vSAN Logs and Traces

vSAN support logs are contained in the ESXi host support bundle in the form of vSAN traces. vSAN support logs are collected automatically by gathering the ESXi support bundle of all hosts.

Because vSAN is distributed across multiple ESXi hosts, you should gather the ESXi support logs from all the hosts configured for vSAN in a cluster.

1. By default, vSAN traces are saved to the ESXi host system partition path
    /var/log/vsantraces
    They can also be accessed through a symbolic link in /root/vsantraces

2. VMware does not support storing logs and traces on the vSAN datastore.
3. When USB and SD card devices are used as boot devices, 
    the logs and traces reside in RAM disks which are not persistent during reboots.

Consider redirecting logging and traces to other persistent storage when these devices are used as boot devices.

Creating a persistent scratch location for ESXi

https://kb.vmware.com/s/article/1033696

# How to configure a persistent scratch location
1. Log in to vCenter Server using the vSphere Web Client.
2. Click Hosts and Clusters, then select the specific host.
3. Click the Configure tab
4. Under System, click Advanced System Settings.
5. Locate ScratchConfig.ConfiguredScratchLocation.
6. Click Edit and add the path to the scratch directory.
7. Reboot the host.

vSAN Storage Policies

Storage Policy-Based Management

Storage Policy-Based Management (SPBM) helps you ensure that virtual machines (VMs) use storage that guarantees a specified level of capacity, performance, availability, and redundancy. Storage policies help you meet the following goals:

  1. Categorize storage based on certain levels of service
  2. Provision VM disks for optimal configuration
  3. Protect data through object-based fault tolerance

These storage characteristics are defined as sets of rules

Default vSAN Storage Policy

vSAN requires that VMs deployed to a vSAN datastore are assigned at least one VM storage policy. If a storage policy is not explicitly assigned to a VM, the default storage policy of the datastore is applied to the VM. If a custom policy has not been applied to the vSAN datastore, the vSAN default storage policy is used.

vSAN has a default VM storage policy

1. It uses mirroring to make data redundant
2. It cannot be deleted
3. It can be modified.

Defining Storage Policies: Host-Based Rules

When you create a storage policy, you select groups of rules, called rule sets.

# Host-based rules
1. generic for all types of storage and do not depend on datastores
2. Activate data services for the VM
    a. Encryption
    b. Storage I/O control

Host-based rules do not define storage placement for the VM and do not include placement rules. Common rules are generic for all types of storage and do not depend on the datastore. These rules activate data services for the VM. Common rules include rules or storage policy components that describe particular data services, such as encryption, replication, and so on.

# How to create VM Storage Policy
1. In the vSphere Client, select Menu -> Policies and Profiles
2. Select VM Storage Policies
3. Create VM Storage Policy
4. Under Host based services, select Enable host based rules
    a. Select Encryption
        i. Disabled
        ii. Use storage policy component
        iii. Custom
    b. Select Storage I/O Control
        i. Disabled
        ii. Use storage policy component
        iii. Custom

Defining Storage Policies: vSAN Rule Sets

vSAN rule sets

  1. Are specific to vSAN clusters
  2. Include placement rules that describe VM storage requirements
  3. Include advanced policy rules that allow for additional storage requirements

# Configure rule set
1. In the Availability tab
    a. Site disaster tolerance
        i. None - standard cluster
        ii. <other options>
    b. Failures to tolerate
        i. 1 failure - RAID-1 (Mirroring)
            and other options
2. In the Advanced Policy Rules tab
    a. Number of disk stripes per object:
        i.  1 (or other values)
    b. IOPS limit for object
        i.  0 (or other values)
    c. Object space reservation
        i. Thin provisioning (or other values)
    d. Flash read cache reservation
        i.  0 (or other values)
    e. Disable object checksum
        i. Toggle to enable or disable
    f. Force provisioning
        i.  Toggle to enable or disable
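
The same kind of vSAN rule set can be created with the PowerCLI SPBM cmdlets. A minimal sketch, assuming the vSAN capability names exposed by the SPBM provider (policy name, VM name, and values are examples only):

# Build a rule set: FTT=1 with a stripe width of 1
$ftt     = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate') -Value 1
$stripe  = New-SpbmRule -Capability (Get-SpbmCapability -Name 'VSAN.stripeWidth') -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $ftt, $stripe

# Create the policy and assign it to a VM
New-SpbmStoragePolicy -Name 'vSAN-FTT1-SW1' -Description 'FTT=1, stripe width 1' -AnyOfRuleSets $ruleSet
Get-VM 'Web01' | Set-SpbmEntityConfiguration -StoragePolicy (Get-SpbmStoragePolicy -Name 'vSAN-FTT1-SW1')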

Monitoring Storage Policy-Based Management

A storage policy defines a set of capability requirements for VMs:

  1. Storage policies are based on vSAN capabilities.
  2. Storage policies cannot be deleted when in use.
  3. Storage policies are monitored for compliance by vSAN.

VM Storage Policy Capabilities for vSAN

Aside from the objects on the datastore themselves, storage policies are highly influential on vSAN datastore planning. Storage policies are created using one or more vSAN rules.

Storage Policy Capability           Use Case            Potential Planning Impact
--------------------------------------------------------------------------------
Failures to tolerate                Redundancy          High
Number of disk stripes per object   Performance         Low to moderate
Flash Read Cache reservation (%)    Performance         None to low
Force provisioning                  Override policy     None to low
Object space reservation (%)        Capacity planning   None to low
IOPS limit for object               Performance         None to low
Disable object checksum             Performance         None

Failures to Tolerate

The Failures to Tolerate configuration has a significant effect on datastore planning. The number of failures to tolerate and the method used are important in determining how many components are deployed to the datastore, how much capacity is consumed, and how the data is distributed.

# Under Availability -> Failures to tolerate  (Example)
    a. No data redundancy
    b. 1 failure - RAID-1 (Mirroring)
    c. 1 failure - RAID-5 (Erasure Coding)
    d. 2 failures - RAID-1 (Mirroring)
    e. 2 failures - RAID-6 (Erasure Coding)
    f. 3 failures - RAID-1 (Mirroring)

Number of failures to tolerate

The number of failures to tolerate sets a requirement on the storage object to remain available after the specified number of host or drive failures occurs in the cluster. This value requires the configuration to contain at least failures to tolerate + 1 replicas.

Number of Replicas = Number of failures to tolerate + 1

Witnesses ensure that the object data is available even if the specified number of host failures occur.

# If the Number of Failures to Tolerate is configured to 1
    The object cannot persist if it is affected by both a simultaneous drive failure on one host and a network failure on a second host.

Level of Failures to Tolerate

The number of failures tolerated by an object has a direct relationship with the number of vSAN objects and components created.

# Example - FTT1 - RAID-1 (Mirror)
1. vSAN uses RAID 1 to ensure data availability
2. For n failures that are tolerated, n+1 copies of the object are created.
    n + 1 = 1 + 1 = 2 replicas (plus 1 witness, for 3 components in total)
3. For n failures that are tolerated, 2n+1 hosts contributing storage are required.
    2n + 1 = 2 x 1 + 1 = 3 hosts required
4. The default number of failures tolerated is 1
    FTT1 - vSAN default
5. The possible numbers of failures to tolerate are 0 through 3

Large vSAN Objects with Failure Tolerance

The maximum size of a vSAN component is 255 GB. When an object is larger than 255 GB, it is split into multiple components (stripes) per data copy.

Stripes are created as a result of the object size, in addition to whatever storage policy is applied.

Comparing RAID 1 Mirroring and RAID 5/6 Erasure Coding

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-AD408FA8-5898-4541-9F82-FE72E6CD6227.html

The number of failures to tolerate and the method used to tolerate those failures have a direct effect on the architecture of the datastore: how many hosts to use, how many disk groups are on each host, and the overall size of the datastore.

Failures     RAID-1 (Mirroring)                RAID-5/6 (Erasure Coding)               RAID-5/6
to           Min Hosts      Total Capacity     Min Hosts           Total Capacity      Storage
Tolerate     Required       Required           Required            Required            Saving
===================================================================================================
0            3              1 x                N/A                 N/A                 (Not supported)
1            3 (2n+1)       2 x (n+1)          4 (RAID-5: 3+1)     1.33 x              33.3% less
2            5 (2n+1)       3 x (n+1)          6 (RAID-6: 4+2)     1.5 x               50% less
3            7 (2n+1)       4 x (n+1)          N/A                 N/A                 (Not supported)

Note:
1. The highest FTT value supported by RAID-5/6 is 2
2. The maximum FTT for RAID-1 is 3
3. FTT is Failures to Tolerate
4. FTM is the Failure Tolerance Method, for example RAID-5/6 (Erasure Coding)

Erasure coding provides significant capacity savings over mirroring, but erasure coding incurs additional overhead in IOPS and network bandwidth. Erasure coding is only supported in all-flash vSAN configurations.

While mirroring techniques excel in workloads where performance is a critical factor, they are expensive regarding the amount of capacity that is required. RAID 5/6 (erasure coding) can be configured to help ensure the same levels of component availability while consuming less capacity than RAID 1.
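
As a worked example of the capacity multipliers above, consider a single 100 GB object (witness overhead excluded):

FTT=1, RAID-1 (Mirroring)          100 GB x 2    = 200 GB
FTT=1, RAID-5 (Erasure Coding)     100 GB x 1.33 = 133 GB   (33.3% less than RAID-1)
FTT=2, RAID-1 (Mirroring)          100 GB x 3    = 300 GB
FTT=2, RAID-6 (Erasure Coding)     100 GB x 1.5  = 150 GB   (50% less than RAID-1)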

Objects with RAID 5 or 6 applied have additional stripe properties that are not defined by the number of disk stripes rule:
a. A RAID 5 object has a stripe width (SW) of 4
b. A RAID 6 object has a stripe width (SW) of 6

Number of Disk Stripes Per Object

When planning a cluster, stripes are a direct contributor to the number of components that make up a vSAN object and the increased need for capacity devices and disk groups.

# The stripes per object
1. Have a default value of 1
2. Have possible values of 1 through 12
3. Are placed on disparate capacity devices to prevent single points of failure

Stripe Width (SW)

The number of disk stripes per object value determines the number of capacity devices across which each storage object copy is striped. For data resiliency, stripes from different mirrors of the same object are never placed on the same host.

Force Provisioning

Force provisioning allows an object to be created despite not having sufficient resources in the cluster. vSAN makes the object compliant when additional resources are added.

Force provisioning carries additional considerations to be addressed during the planning of the datastore:

  1. Placing hosts in maintenance mode could affect the accessibility of a VM with no failure tolerance.
  2. Placing a host in maintenance mode could affect the performance of the cluster if a large number of force-provisioned machines must be moved.
  3. Consider the resources consumed by noncompliant VMs when adding resources to the cluster.

Force provisioning overrides the following policy rules to provision VMs to a datastore unable to meet the policy requirements.

  1. The Level of Failures to tolerate is 0
  2. The Stripe Width is set to 1
  3. The Flash Read Cache Reservation is set to 0

Object Space Reservation

When planning a vSAN datastore, you must consider whether objects are provisioned thin, thick, or somewhere in between. VMs are thin-provisioned by default.

The level of reservation dictates both actual storage versus logical storage consumption, as well as deduplication and compression:

  1. If deduplication and compression are disabled, objects can be provisioned with 25%, 50%, 75%, and thick configurations.
  2. If deduplication and compression are enabled, space reservation is limited to thick and thin configurations.

Disabling Object Checksums

Software checksums are used to detect data corruption that might be caused by hardware or software components. vSAN includes software checksums with the following benefits

  1. Automatically detect and resolve silent drive errors
  2. Rebuild corrupted data from other mirrors or data/parity stripes
  3. Perform drive scrubbing in the background once per year
  4. Enabled for all objects by default
  5. Disabled per object with VM storage policies

During read/write operations, vSAN checks for the validity of the data based on the checksum.

vSAN has a drive-scrubbing mechanism that periodically checks the data on drives for errors.

By default, the data is checked once a year, controlled by the advanced ESXi host setting
    VSAN.ObjectScrubsPerYear

Note: This value can be changed.
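
A minimal PowerCLI sketch for viewing and changing this setting on a single host (the host name and the value 2 are examples only):

# Check the current scrub frequency, then raise it to twice per year
$esx = Get-VMHost -Name 'esxi01.example.com'
Get-AdvancedSetting -Entity $esx -Name 'VSAN.ObjectScrubsPerYear'
Get-AdvancedSetting -Entity $esx -Name 'VSAN.ObjectScrubsPerYear' | Set-AdvancedSetting -Value 2 -Confirm:$false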

vSAN Cluster Maintenance

The maintenance of computing, network, and storage resources on production systems must be achieved without causing downtime to maximize the availability of services in your environment.

Maintenance Mode Options

ESXi hosts in vSAN clusters provide storage resources in addition to compute resources. You must use appropriate maintenance mode options to maintain data accessibility.

When placing the host into maintenance mode, you can select one of the following vSAN data migration options (a PowerCLI sketch follows this list):

a. Ensure accessibility
    i. Default selection
    ii. Migrates only the components that are required to keep objects accessible
    iii. Affected components are marked as Absent for 60 minutes (default value)
b. Full data migration
    i. Migrates all data to other hosts
c. No data migration
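
A minimal PowerCLI sketch for entering and exiting maintenance mode with a specific data migration option (the host name is a placeholder; -VsanDataMigrationMode accepts Full, EnsureAccessibility, or NoDataMigration):

# Enter maintenance mode, migrating only the components needed to keep objects accessible
Get-VMHost -Name 'esxi01.example.com' | Set-VMHost -State Maintenance -VsanDataMigrationMode EnsureAccessibility

# Exit maintenance mode when the work is complete
Get-VMHost -Name 'esxi01.example.com' | Set-VMHost -State Connected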

Ensure Accessibility Option

The Ensure accessibility option ensures that virtual machines (VMs) with FTT = 0 remain accessible.

Unprotected components on the host, such as objects with FTT = 0, are migrated to other hosts. Components of objects with FTT > 0 are not migrated. If sufficient components to maintain quorum are active on other hosts in the cluster, the objects remain available. However, the objects are noncompliant while the host is in maintenance mode.

Ensure Accessibility - Delta Component

If an extra host with sufficient capacity is available in the vSAN cluster, a temporary delta component is created to capture the new I/O.

The delta component only contains the new data generated after the original component is marked absent. After the absent component is back online, it syncs with the delta component. The delta component is then discarded after the sync operation is complete.

The delta component significantly reduces the overall time required to take a component from Active-Stale to Active state.

# RAID_D  (Delta Component)
The delta component and the original absent component are linked in a special RAID-type structure called RAID_D.
You can view the component layout in the vSphere Client.

# Time consideration
If the host does not return, absent components are rebuilt after 60 minutes.

Object Repair Timer Considerations

You might need to increase the Object Repair Timer value when planned maintenance is likely to take more than 60 minutes but you want to avoid rebuild operations.

Consider the following points:

  1. Rebuild operations are designed to restore redundancy. The higher the Object Repair Timer value, the longer your data is vulnerable to additional failures.
  2. You should reset the Object Repair Timer value to the default value when maintenance is complete.
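
A PowerCLI sketch for temporarily raising the repair delay before planned maintenance, assuming the per-host advanced setting VSAN.ClomRepairDelay (newer vSAN releases also expose the Object Repair Timer as a cluster-level setting in the vSphere Client):

# Raise the delay to 120 minutes on every host in the cluster, then reset it to 60 after maintenance
Get-Cluster -Name 'vSAN-Cluster' | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name 'VSAN.ClomRepairDelay' | Set-AdvancedSetting -Value 120 -Confirm:$false
}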

Full Data Migration Option

The Full data migration option evacuates all components from the disk groups of the host entering maintenance mode onto other available ESXi hosts.

You use this option only when the host is being decommissioned, permanently removed, or put into maintenance mode for a considerably long time.

Note:
1. Must have additional ESXi hosts available in the vSAN cluster.
2. The remaining hosts in the vSAN cluster must be able to satisfy the policy requirements of the objects being evacuated.

Data migration pre-check

You run the data migration precheck before placing a host into maintenance mode. This precheck determines whether the operation can succeed and reports the state of the cluster after the host enters maintenance mode.

# How to run data migration precheck
select vSAN Cluster and select Monitor > vSAN > Data Migration Pre-Check

The data migration precheck provides a list of VMs and objects that might become noncompliant.

No Data Migration Option

If you select the No Data Migration option, vSAN does not evacuate any data from the host. However, some VM objects might become inaccessible. Selecting this option can leave objects in a noncompliant or inaccessible state.

The No Data Migration option is useful when you want to shut down all hosts in a vSAN cluster for maintenance or when data on the hosts is not required.

Selecting the No data migration option does not create delta components.

Changing the Default Maintenance Mode

The default vSAN maintenance mode is Ensure accessibility, which can be changed through an advanced host-level setting. This setting must be identical on all hosts in the cluster.

# How to change default maintenance mode
In the vSphere Client, select the ESXi host and select Configure > Advanced System Settings, then set the default decommission mode to one of:
    a. ensureAccessibility
    b. evacuateAllData
    c. noAction
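
A PowerCLI sketch for applying the same value to every host in the cluster, assuming the advanced setting name VSAN.DefaultHostDecommissionMode (verify the exact setting name in your ESXi build before using it):

Get-Cluster -Name 'vSAN-Cluster' | Get-VMHost | ForEach-Object {
    # Set the default vSAN data migration option used when entering maintenance mode
    Get-AdvancedSetting -Entity $_ -Name 'VSAN.DefaultHostDecommissionMode' |
        Set-AdvancedSetting -Value 'ensureAccessibility' -Confirm:$false
}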

vSAN Disk Balance

The vSAN Disk Balance health check helps to monitor the balance state among disks. By default, automatic rebalance is disabled. The status of this check turns yellow if the imbalance exceeds a system-determined threshold.

When automatic rebalance is enabled, vSAN automatically rebalances the cluster to keep the disk balance status green.

Rebalancing can wait up to 30 minutes to start, giving high-priority tasks, such as entering maintenance mode or object repair, time to use resources before rebalancing begins.

The rebalancing threshold determines when the background rebalancing starts in the system. Rebalancing continues until it is turned off or the variance between disks is less than half of the rebalancing threshold.

Reserving vSAN Storage Capacity

You can reserve vSAN storage capacity for the following maintenance activities:

  1. Operations reserve: Reserves capacity for internal vSAN operations, such as object rebuild or repair.
  2. Host rebuild reserve: Reserves capacity to ensure that all objects can be rebuilt if host failure occurs.

To enable the host rebuild reserve, you must have a minimum of four hosts in the cluster.

# How to enable vSAN capacity reserve
select the vSAN cluster and select Monitor > 
    vSAN > Capacity > 
    Capacity Usage > Configure

When reservation is enabled and capacity usage reaches the limit, new workloads fail to deploy

Shutting Down and Restarting vSAN Clusters

To safely shut down a vSAN cluster:

1. Power off all VMs in the cluster.
2. Place all hosts into maintenance mode, one at a time.
3. Deselect the option to move powered-off VMs to other hosts.
4. Select the No data migration option.

Rebooting vSAN Clusters Without Downtime - Reboot All Hosts

When rebooting a vSAN cluster, you must reboot one host at a time so that the VMs do not incur downtime:

1. Migrate VMs to other hosts.
2. Select the Ensure accessibility data migration option when placing hosts into maintenance mode.
3. Reboot the host. 
4. Exit maintenance mode after the host is running again. 
5. Repeat the process on other hosts, one at a time.

Moving vSAN Clusters to Other vCenter Server Instances

You might be required to move the vSAN cluster from the existing vCenter Server instance to another instance.

# Process
1. Build a new vCenter Server instance using the same or a later version. 
2. Ensure that networking is configured correctly on the new vCenter Server instance. 
3. Create a cluster with only vSAN enabled. 
4. Configure other vSAN data services to match the original cluster.
5. Create vSAN storage policies to match the vSAN policies of the original cluster.
6. Disconnect and remove all hosts from the inventory in the original vCenter Server instance. 
7. Add hosts to the cluster enabled with vSAN in the new vCenter Server instance. 
8. Verify host connectivity and VM existence. 
9. Apply the vSAN storage policies to the VMs

vSAN Logs and Traces

Because vSAN is distributed across multiple ESXi hosts, you should gather the ESXi support logs from all the hosts configured for vSAN in a cluster. VMware does not support storing logs and traces on the vSAN datastore.

# By default, vSAN traces are saved to the ESXi host system partition path
    /var/log/vsantraces 

Redirecting vSAN Logs and Traces

https://kb.vmware.com/s/article/1033696

When USB and SD card devices are used as boot devices, the logs and traces reside in RAM disks which are not persistent during reboots.

# How to configure a persistent scratch location
1. Log in to vCenter Server using the vSphere Web Client.
2. Click Hosts and Clusters, then select the specific host.
3. Click Configure
4. Expand System -> Advanced System Settings.
5. Locate 
    ScratchConfig.ConfiguredScratchLocation
6. Edit the value and change the path to the scratch directory.
7. Reboot the host.
    a. Place the host in maintenance mode
    b. Reboot the host

To redirect vSAN traces to a persistent datastore, use the esxcli vsan trace set command.

https://kb.vmware.com/s/article/2145556

# Find current trace log location
    esxcli vsan trace get

# Redirect vSAN trace log
esxcli vsan trace set
    -l|--logtosyslog        Boolean value to enable or disable logging urgent traces to syslog.
    -f|--numfiles=<long>    Log file rotation for vSAN trace files.
    -p|--path=<str>         Path to store vSAN trace files.
    -r|--reset              When set to true, reset defaults for vSAN trace files.
    -s|--size=<long>        Maximum size of vSAN trace files in MB.

esxcli vsan trace set -f 30 -s 200 -p <path>
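
The same operation can be scripted through PowerCLI using the Get-EsxCli v2 interface; the path and sizes below are examples only:

$esxcli = Get-EsxCli -VMHost 'esxi01.example.com' -V2

# Show the current trace configuration, then redirect traces to persistent storage
$esxcli.vsan.trace.get.Invoke()
$esxcli.vsan.trace.set.Invoke(@{ path = '/vmfs/volumes/datastore1/vsantraces'; numfiles = 30; size = 200 })
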
Configuring Syslog Servers

It is good practice to configure a remote Syslog server to capture all logs from ESXi hosts.

# To configure in the vSphere Client
select the ESXi host and select Configure > Advanced System Settings > 
    Syslog.global.logHost
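
A minimal PowerCLI sketch that points every host in the cluster at a remote syslog server (the server address is a placeholder):

Get-Cluster -Name 'vSAN-Cluster' | Get-VMHost | ForEach-Object {
    # Set the remote syslog target; the syslog service may need to be reloaded afterwards
    Get-AdvancedSetting -Entity $_ -Name 'Syslog.global.logHost' |
        Set-AdvancedSetting -Value 'udp://syslog.example.com:514' -Confirm:$false
}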

vSAN Cluster Scaling and Hardware Replacement

vSAN scales up and scales out if you need more compute or storage resources in the cluster.

Scaling up adds resources to an existing host

  1. Capacity disks for storage space
  2. Caching tier devices for performance

Scaling out adds additional nodes to the cluster

  1. Nodes for compute and storage capacity

To increase storage capacity in a vSAN cluster, you can:

  1. Replace capacity devices in an existing disk group with higher-capacity devices.
  2. Add capacity devices to an existing disk group.

# Prerequisite
Before replacing disks, ensure that the vSAN cluster has sufficient capacity to migrate your data from the existing capacity devices.

Adding New Hosts to vSAN Clusters

You can add an ESXi host to a running vSAN cluster without disrupting any ongoing operations

  1. Check VMware Compatibility Guide for supported hardware.
  2. Create uniformly configured hosts.
  3. Use the vSAN Disk Balance health check to rebalance the disks.

Adding New Capacity Devices to Disk Groups

You can expand the capacity of a disk group by adding disks

  1. Ensure that the disks do not have partitions.
  2. Increase the capacity of all disk groups to maintain a balanced configuration.
  3. Add devices with the same performance characteristics as the existing disks.

Adding capacity devices affects the cache-to-capacity ratio of the disk group.

Disk Claim

Available disks are grouped by either model and size or by host.

# How to claim disks (two methods)
1. Select the vSAN cluster, select Configure > vSAN > Disk Management
    click Claim Unused Disks
2. Select the vSAN cluster, select Configure > vSAN > Disk Management
    select a host, and click Create disk group
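
A minimal PowerCLI sketch for creating a disk group from unclaimed devices (host name and device canonical names are placeholders):

$esx = Get-VMHost -Name 'esxi01.example.com'

# One cache device plus one or more capacity devices per disk group
New-VsanDiskGroup -VMHost $esx `
    -SsdCanonicalName 'naa.5000000000000001' `
    -DataDiskCanonicalName 'naa.5000000000000002', 'naa.5000000000000003'

# Review the resulting disk groups on the host
Get-VsanDiskGroup -VMHost $esx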

Replacing Capacity Tier Disks

If you detect a failure, replace a capacity device:

  1. Select the disk group and remove the capacity disk from the disk group.
  2. Replace the faulty drive and rescan the storage adapter.
  3. Add the new disk to the disk group.
  4. Verify the disk group storage space.

Replacing Capacity Tier Disks When Deduplication and Compression Are Enabled

If deduplication and compression is enabled on the cluster, a capacity device failure affects the entire disk group.

# Pre-requisites
1. Place the affected host in Maintenance Mode
    a. Select Full Data Migration
2. Verify that all VMs and data have been migrated to other hosts

# Process
1. Remove and replace the failed disk
    a. In vSphere client, select vSAN cluster
    b. Select Configure -> Disk Management
    c. Navigate to the ESXi host and select the disk group in the host that contains the failed disk 
    d. Select "..." and select Remove
    e. Replace the faulty disk
2. Re-create the disk group
    a. Navigate to the vSAN cluster.
    b. Click the Configure tab.
    c. Under vSAN, click Disk Management.
    d. Click Claim Unused Disks.
    e. Group by host.
    f. Select disks to claim.
    g. Select the flash device to use for the cache tier.
    h. Select the disks to use for the capacity tier.
    i. Click Create or OK to confirm your selections.
    The new disk group appears in the list.

How to manually remove and recreate a vSAN disk group using esxcli

https://kb.vmware.com/s/article/2150567

Replacing Cache Tier Disks

When replacing a cache tier device, you must take the entire disk group offline

  1. Ensure that adequate capacity is available in the vSAN cluster.
  2. Place the host in maintenance mode.
  3. Delete the disk group
  4. Replace the cache device.
  5. Recreate the disk group.

Removing Disk Groups

When removing a disk group from a vSAN cluster, the vSphere Client describes the impact of a disk group evacuation.

Running a precheck before removing a disk group is a best practice. Prechecks determine if the operation will be successful and report the state of the cluster after the disk group is removed.

Replacing vSAN Nodes

When replacing a host, the replacement should have the same hardware configuration, whenever possible.

# Before removing the host
1. Verify data evacuation is complete.
2. Verify all objects are currently healthy. 

# To replace the host:
1. Place the host in maintenance mode, and select Full Data Migration.
2. Remove the host from the cluster. 
3. Add the new host.

Decommissioning vSAN Nodes

To permanently decommission a vSAN node, you must follow the correct procedure:

1. Ensure that sufficient storage capacity is available in the vSAN cluster. 
2. Place the host in maintenance mode and select Full data migration. 
3. Wait for the data migration to complete and the host to enter maintenance mode. 
4. Delete disk groups that reside on the host that you want to decommission.
5. Use the vSphere Client to move the ESXi host from the cluster to disassociate it from the vSAN cluster.
6. Shut down the ESXi host.
7. Remove the ESXi host from vCenter inventory.

Upgrading and Updating vSAN

The vSAN upgrade process includes several stages. Depending on the vSAN and disk format versions that you are running, an object and disk format conversion might be required. Disk format changes can be time consuming if data evacuation is required. If you upgrade the disk format, you cannot roll back software on the hosts or add incompatible hosts to the cluster.

vSAN Upgrade Process

Before attempting a vSAN upgrade, review the complete vSphere upgrade process to ensure a smooth, uninterrupted, and successful upgrade:

1. Plan the upgrade.
    a. Check hardware compatibility
    b. Check firmware and driver compatibility
    c. Check compatibility of any dependent VMware products, such as NSX-T and Horizon
2. Configure the backup.
3. Patch or upgrade vCenter Server. 
4. Patch or upgrade ESXi hosts. 
5. Perform a VMware Skyline Health check. 
6. Perform a disk format version upgrade. 
7. Complete the upgrade.

Preparing to Upgrade vSAN

Before upgrading to the latest version of vSAN, always verify your current environment. See the following resources for additional reference:

  1. vSAN version-specific release notes
  2. VMware Product Interoperability Matrices

Review the VMware Compatibility Guide to verify support for the following items:

  1. Server, storage controller, SSDs, and disks
  2. Storage controller firmware and disk firmware
  3. Device drivers

vSAN Upgrade Phases

vSAN is upgraded in two phases:

  1. Upgrade vSphere (vCenter Server and ESXi hosts)
  2. Upgrade vSAN objects and disk format conversion (DFC)

vSAN Disk Format

https://kb.vmware.com/s/article/2148493

The disk format upgrade is optional. Your vSAN cluster continues to run smoothly even if you use a previous disk format version. For best results, upgrade disks to use the latest disk format version, which provides the new vSAN feature set.

After you upgrade the on-disk format, you cannot roll back software on the hosts or add certain older hosts to the cluster.

Disk format upgrade is an optional final step when you upgrade a vSAN cluster. You might choose not to upgrade the disk format if you want to maintain backward compatibility with hosts on an earlier version of vSAN. For example, you might want to retain the ability to add hosts to the cluster with vSAN 7.0 GA to provide burst capacity.

Disk format upgrades from v3.0 (vSAN 6.2) to a later version (the current version in vSAN 7 U2 is 13) only update disk metadata and do not require data evacuation.

vSAN Disk Format Upgrade Prechecks

When you initiate an upgrade precheck of the on-disk format, vSAN verifies the following conditions:

  1. All hosts are connected.
  2. All hosts have the same software version.
  3. All disks are healthy.
  4. All objects are accessible. vSAN also verifies that no outstanding issues exist that might prevent upgrade completion.

Note:
1. vSAN on-disk format conversion enables new data services whose impact on your environment must be considered before upgrading.
2. The current on-disk format version is 13.

vSAN System Baselines

vSAN build recommendations are provided through vSAN system baselines for vSphere Lifecycle Manager:

  1. vSAN generates one baseline group for each vSAN cluster.
  2. vSAN system baselines are listed in the baselines pane of the vSphere Lifecycle Manager.
  3. vSAN system baselines can include custom ISO images provided by certified vendors.
  4. vSphere Lifecycle Manager automatically scans each vSAN cluster to verify compliance against the baseline group.

To upgrade your cluster, you must manually remediate the system baseline through vSphere Lifecycle Manager

Advanced vSAN Configurations

Deduplication and Compression

Enabling deduplication and compression can reduce the amount of physical storage consumed by as much as seven times.

# Use case
Environments with highly redundant data, such as full-clone virtual desktops and homogeneous server operating systems, 
benefit the most from deduplication.

You can enable deduplication and compression when you create or edit an existing vSAN all-flash cluster.

Deduplication and compression are enabled as a cluster-wide setting, but they are applied on a per-disk-group basis.

vSAN performs deduplication and compression at the block level to save storage space. Deduplication removes redundant data blocks, whereas compression removes additional redundant data within each data block. Deduplication and compression might not be effective for encrypted VMs because VM encryption encrypts data on the host before it is written to storage.

Consider the following guidelines when managing disks and disk groups in a cluster with deduplication and compression enabled:

  1. You cannot remove a single capacity disk from a disk group.
  2. You must remove the entire disk group to make modifications.
  3. A single disk failure causes the entire disk group to fail.
  4. Consider adding disk groups to increase the cluster storage capacity.

Deduplication and compression occur in-line when data is written back from the cache tier to the capacity tier. The deduplication algorithm uses a fixed block size and is applied within each disk group.

The compression algorithm is applied after deduplication but before the data is written to the capacity tier.

Given the additional compute resource and allocation map overhead of compression, vSAN stores compressed data only if a unique 4K block can be reduced to 2K or less. Otherwise, the block is written uncompressed.

Using Compression-Only Mode

You can enable compression-only mode on an all-flash vSAN cluster to provide storage space efficiency without the overhead of deduplication.

The compression-only mode algorithm moves data from the cache tier to individual capacity disks, which also ensures better parallelism and throughput.

# Compression-only mode also provides the following capabilities
a. Reduces the amount of physical storage consumed by as much as 2x
b. Reduces the failure domain from the entire disk group to only one disk

Note:
Compression-only mode is available from vSAN 7 Update 1.

# How to configure deduplication and compression
1. Select the vSAN cluster
2. Select Configure -> vSAN -> Services -> Space Efficiency
3. Click Edit
4. Select
    a. Compression only
    b. Deduplication and compression
5. Click APPLY
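
A minimal PowerCLI sketch for enabling deduplication and compression cluster-wide (compression-only mode may require a different parameter depending on the PowerCLI release):

# Enable space efficiency (deduplication and compression) on the cluster
Get-Cluster -Name 'vSAN-Cluster' | Set-VsanClusterConfiguration -SpaceEfficiencyEnabled $true

# Confirm the current setting
Get-Cluster -Name 'vSAN-Cluster' | Get-VsanClusterConfiguration | Select-Object Name, SpaceEfficiencyEnabled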

Reclaiming Storage Space Using TRIM/UNMAP

vSAN supports SCSI UNMAP commands directly from a guest OS to reclaim storage space. A TRIM/UNMAP command sent from the guest OS reclaims previously allocated storage as free space. This space-efficiency feature can deliver better storage capacity use in vSAN.

TRIM/UNMAP also offers quicker object repair because reclaimed blocks do not need to be rebalanced or mirrored if a storage device fails. TRIM/UNMAP support is disabled by default.

# Ways to enable TRIM/UNMAP support
1. Using PowerCLI:
    Get-Cluster -name <Cluster_Name> | Set-VsanClusterConfiguration -GuestTrimUnmap:$true
2. Using RVC:
    /localhost/VSAN-DC/computers> vsan.unmap_support <vSAN_Cluster_Name> -e

vSAN Encryption

vSAN encryption is a native HCI encryption solution, built in to the vSAN layer, with the following characteristics

1. Supports various key management server (KMS) vendor solutions
2. Configured at the cluster level
3. Supported on hybrid and all-flash vSAN clusters
4. Compatible with other vSAN features
5. Supports data-at-rest encryption, a vSAN datastore-level method
6. Supports data-in-transit encryption, a vSAN network-level method

vCenter Server requests encryption keys from an external KMS. The KMS generates and stores the keys, and vCenter Server obtains the key IDs from the KMS and distributes them to the ESXi hosts. vCenter Server does not store the KMS keys but keeps a list of key IDs.

vSAN data-at-rest encryption and vSAN data-in-transit encryption features are independent of each other and both these features can be enabled and configured separately.

vSAN Encryption - Design Considerations

Consider the following points when working with vSAN encryption

  1. Do not deploy your key provider on the same vSAN datastore that you plan to encrypt.
  2. Modern processors offload encryption operations to a dedicated portion of the CPU.
  3. The witness host in a two-node or stretched cluster does not participate in vSAN encryption.
  4. vSAN node core dumps are also encrypted
  5. vSAN data-at-rest encryption and vSAN data-in-transit encryption are independent of each other. These features can be enabled and configured independently.

Encryption Operation Permission

In secure environments, only authorized users should be able to perform cryptographic operations:

  1. The No cryptography administrator role limits authorized users.
  2. You should consider assigning this role to a subset of administrators when enabling vSAN encryption.
  3. Custom roles help implement least-privilege management.
  4. You should review and audit role assignments regularly to ensure that access is limited only to authorized users.

Adding a KMS to vCenter Server

Use a supported key provider to distribute the keys to be used with vSAN encryption. To support encryption, add the KMS to vCenter Server and establish the trust. vCenter Server requests encryption keys from the key provider. The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1 standard.

To set up communication between the KMS and vCenter Server, trust must be established. vCenter Server uses KMIP to communicate with the KMS over SSL or TLS.

KMS Server Cluster

Set up the KMS cluster for high availability to avoid a single point of failure. A KMS cluster has the following characteristics:

  1. The KMS cluster is a group of KMIP-based key management servers that replicate keys to one another.
  2. The KMS cluster must have key management servers from the same vendor
If your environment requires KMS solutions from different vendors, you can create KMS clusters for each vendor.

You add a KMS to your vCenter Server system from the vSphere Client. vCenter Server defines a KMS cluster when you add the first KMS instance and sets this cluster as the default.

You can add KMS instances from the same vendor to the cluster and configure all KMS instances to synchronize keys among them. If your environment requires KMS solutions from different vendors, you can create KMS clusters for each vendor.

# How to add KMS to vCenter
1. From vSphere client, select vCenter object
2. Select Configure -> Security -> Key Providers
3. Click Add Standard Key Provider

KMIP Client Certificates

The type of certificate used by vCenter Server (KMIP client) depends on the KMS vendor. Always check with the KMS vendor for their certificate requirements to establish the trust.

When enabling the trust between KMS and vCenter, choose one of the following methods
a. vCenter Root CA Certificate
b. vCenter Certificate
c. KMS certificate and private key
d. New Certificate Signing Request (CSR)

Then, establish the trust

vSAN Data-at-Rest Encryption

The following URL provides detailed information about vSAN data-at-rest encryption:

https://core.vmware.com/resource/vsan-data-rest-encryption

vSAN KEK and KEK_Id

When vSAN data-at-rest encryption is enabled on a vSAN cluster:

  1. vCenter Server requests a key encryption key (KEK) from the KMS. vCenter Server stores only the ID of the KEK.
  2. vCenter Server sends the KEK ID to all hosts.
  3. Hosts use the KEK ID to request the KEK from the KMS.
  4. Hosts create a unique data encryption key (DEK) for each drive.
  5. Each cache and capacity drive is encrypted using its DEK. The DEK is encrypted using the KEK and is stored on the storage device.
  6. vCenter Server requests a single host encryption key (HEK) from the KMS which is then sent to all the hosts in the cluster. This HEK is used to encrypt core dumps.

When data-at-rest encryption is enabled on a new vSAN cluster, disks are encrypted immediately on the creation of disk groups. If data-at-rest encryption is enabled on the existing vSAN cluster, a rolling disk format change is performed. Each disk group is evacuated in turn. The cache and capacity devices are reformatted using the DEKs

Operational Impact When Enabling Encryption

Encrypted and unencrypted vSAN datastores use different disk formats.

On new vSAN clusters, the appropriate disk format is selected automatically, but an existing vSAN cluster requires a disk format change (DFC).

You must consider the following points before enabling encryption on an existing vSAN cluster

  1. Ensure that sufficient capacity is available to perform disk evacuations.
  2. Ensure that no resync operations are running.
  3. Ensure that the cluster is in a healthy state.
  4. Ensure that no congestion exists in the cluster.
  5. Preferably, schedule the DFC outside of production hours.

# Prerequisite
Add a key provider to vCenter Server.

# How to enable vSAN Data-at-Rest encryption
1. Select the vSAN cluster
2. select Configure > vSAN > Services > Data-At-Rest Encryption > EDIT
3. Enable Data-at-Rest encryption. 
    Optionally, select Wipe residual data and Allow reduced redundancy.
4. Select Key Provider
5. Click APPLY
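
A PowerCLI sketch for the same operation, assuming a key provider named KMS-Cluster-01 is already registered in vCenter Server (names are placeholders):

# Enable data-at-rest encryption on the cluster using the registered key provider
$kms = Get-KmsCluster -Name 'KMS-Cluster-01'
Get-Cluster -Name 'vSAN-Cluster' | Set-VsanClusterConfiguration -EncryptionEnabled $true -KmsCluster $kms
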
Allowing Reduced Redundancy

If your vSAN cluster has a limited number of fault domains or hosts, select the Allow reduced redundancy check box. If you allow reduced redundancy, your data might be at risk during the disk reformat operation.

Writing Data to an Encrypted vSAN Datastore

Data is written to the cache tier

  1. Write I/O is broken into chunks.
  2. Checksum is created.
  3. Encryption is performed.
  4. Encrypted data is written to the write buffer. Data is later destaged to the capacity tier.
  5. Decryption is performed.
  6. Deduplication is performed (If configured).
  7. Compression is performed (If configured).
  8. Encryption is performed.

Scaling Out a Data-at-Rest Encrypted vSAN Cluster

When you add a host to the data-at-rest encrypted vSAN cluster, the new host receives the KEK and the HEK from the KMS. The DEK is generated for each cache and capacity device, and disk groups are created using the correct format.

Performing Rekey Operations

As part of security auditing, regularly generate new encryption keys to maintain enterprise security compliance. vSAN supports shallow and deep rekey operations, based on its two-layer key model (KEK and DEK), to keep data well protected.

A shallow rekey operation replaces only the KEK, and the data does not require the re-encryption. A deep rekey operation replaces both the KEK and the DEK and requires a full DFC. In most cases, a shallow rekey is sufficient.

Performing a deep rekey is time-consuming and might temporarily decrease the performance of the cluster.

# How to generate new encryption key
1. Select vSAN cluster -> Configure
2. Navigate to vSAN -> Services
3. Select Data-At-Rest-Encryption
4. Click Generate New Encryption Key

Rotating KMIP Client Certificates

As part of enterprise security auditing, you might be required to periodically rotate the KMIP client certificate on vCenter Server:

  1. All changes should be performed on vCenter Server.
  2. When the client certificate is replaced, you must reconfigure the KMS to trust the new client certificate.

Changing the Key Provider

You can change the key provider. The process of changing the key provider is essentially a shallow rekey operation:

  1. Add a key provider in vCenter
  2. Select an alternate KMS cluster.
  3. The new KMS configuration is pushed to the vSAN cluster.

Verifying Bidirectional Trust

After you change the key provider, you verify that the KMS connection is operational. Communication between the KMS and the KMIP client is temporarily interrupted until bidirectional trust is established.

Encrypted vSAN Node Core Dumps

A core dump is a state of memory that is saved at the time when a system stops responding with a purple error screen.

Core dumps are used by VMware Support representatives for diagnostic and technical support purposes. By default, core dumps are saved to the vmkDiagnostic partition on the ESXi host.

Core dumps for vSAN nodes in crypto-safe mode are always encrypted using the HEK. Set a password to decrypt encrypted core dumps.

# When exporting system logs
1. Select the ESXi host
2. select "Password for encrypted core dumps"
3. Enter the password
4. Click Export Logs

vSAN Data-in-Transit Encryption

vSAN data-in-transit encryption encrypts vSAN traffic exchanged between vSAN nodes. vSAN uses a message authentication code to ensure authentication and integrity of the vSAN traffic. vSAN uses a native proprietary technique to encrypt vSAN traffic and does not rely on the external key provider (KMS cluster).

vSAN enforces encryption on vSAN traffic exchanged between vSAN nodes only when data-in-transit encryption is enabled:

  1. vSAN creates a TLS link between vSAN nodes intended to exchange the traffic.
  2. vSAN nodes create a shared secret and attach it to the current session.
  3. vSAN uses the shared secret to establish an authenticated encryption session between vSAN nodes.

Encryption of data over the wire between vSAN nodes uses the existing FIPS 140-2 validated cryptographic module.

# How to enable vSAN data-in-transit encryption
1. Select the vSAN cluster
2. select Configure > vSAN > Services > Data-In-Transit Encryption > EDIT
3. Enable Data-In-Transit encryption. 
4. From the Rekey interval drop-down menu, select the required interval
    Default: 1 day
    Options: 6 hours, 12 hours, 1 day, 3 days, 7 days
5. Click APPLY

As part of security compliance audits, vSAN initiates the rekey process to generate new keys at the scheduled interval.

By default, the rekey interval is set to one day. Depending on enterprise security compliance, the rekey interval can be adjusted as needed.

vSAN Data-in-Transit Encryption Health Check

Individual vSAN node readiness for data-in-transit encryption is verified, and inconsistent configuration can be remediated.

# To view the status of the vSAN data-in-transit encryption health check
select the vSAN cluster and go to Monitor > 
    vSAN > Skyline Health >
    Data-in-transit-encryption > 
    Configuration check

vSAN Cluster Monitoring

vSphere Client provides several tools to enable you to monitor the health and performance of the vSAN cluster.

vSAN Health Monitoring

Customer Experience Improvement Program (CEIP) collects the following information:

  1. Configuration
  2. Feature use
  3. Performance
  4. Product logs

# To enable CEIP from the vSphere Client
In the vSphere Client, navigate to Menu > Administration > Deployment > Customer Experience Improvement Program > Join

Proactive Tests

You can use proactive tests to check the integrity of your vSAN cluster. These tests are useful to verify that your vSAN cluster is working properly before you place it in production.

1. Select vSAN cluster > Monitor > vSAN > Proactive Tests
2. Select
    a. VM Creation Test
    b. Network Performance Test
3. Click Run
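
The proactive tests can also be started from PowerCLI, assuming the proactive-test cmdlets shipped with recent PowerCLI releases (the cluster name is a placeholder):

$cluster = Get-Cluster -Name 'vSAN-Cluster'

# Run the VM creation and network performance proactive tests
Test-VsanVMCreation -Cluster $cluster
Test-VsanNetworkPerformance -Cluster $cluster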

VMware Skyline Health

VMware Skyline Health is the primary and most convenient way to monitor vSAN health. VMware Skyline Health provides you with findings and recommendations to resolve problems, reducing the time spent on resolving issues

Online Health

Online health includes vSAN Support Insight and Skyline Advisor. vSAN Support Insight helps vSAN users maintain a reliable and consistent compute, storage, and network environment. This feature is available when you join CEIP.

Skyline Advisor, included with your Premier Support contract, enhances your proactive support experience with additional features and functionality, including automatic support log bundle transfer with Log Assist

1. Select vSAN cluster > Monitor > vSAN > Skyline Health
2. Select Online health, then select
    a. vSAN Support Insight
    b. Advisor

vSAN Cluster Partition

To ensure the proper operation of vSAN, all hosts must be able to communicate over the vSAN network. If they cannot, the vSAN cluster splits into multiple partitions. vSAN objects might become unavailable until the network misconfiguration is resolved

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Network > vSAN Cluster Partition

Network Latency Check

The network latency check looks at vSAN hosts and reports warnings based on a threshold of 5 milliseconds.

If this check fails, check VMKNICs, uplinks, VLANs, physical switches, and associated settings to locate the network issue.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Network > Network Latency Check

vSAN Object Health

This check summarizes the health state of all objects in the cluster.

You can immediately initiate a repair object action to override the default absent component rebuild delay of 60 minutes. You can also purge inaccessible VM swap objects.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Disks > vSAN Object Health

Time Synchronization

This check looks at time differences between vCenter Server and hosts. A difference greater than 60 seconds leads this check to fail. If this check fails, you should review the NTP server configuration on vCenter Server and the ESXi hosts.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Cluster > Time is synchronized across hosts and VC

vSAN Disk Balance

This check monitors the balance state among disks. By default, automatic rebalance is disabled. When automatic rebalance is enabled, vSAN automatically rebalances disks if a difference greater than 30% usage is found between capacity devices.

Rebalance can wait up to 30 minutes to start, providing time for high-priority tasks such as Enter maintenance mode and object repair to complet

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Cluster > vSAN Disk Balance

Disk Format Version

This check examines the disk format version. For disks with a format version lower than the expected version, a vSAN on-disk format upgrade is recommended to support the latest vSAN features.

vSAN 7.0 U1 introduces on-disk format version 13, which is the highest version supported by any host in the cluster.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Cluster > Disk Format Version

vSAN Extended Configuration

This check verifies the default settings for the object repair timer, site read locality, customized swap object, and large-scale cluster support. For hosts with inconsistent extended configurations, vSAN cluster remediation is recommended.

The default clusterwide setting for the object repair timer is 60 minutes. The site read locality is enabled, customized swap object is enabled, and large-scale cluster support is disabled.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Cluster > vSAN extended configuration in sync

vSAN Component Utilization

This check examines component utilization for the entire cluster and each host. It displays a warning or error if the utilization exceeds 80% for the cluster or 90% for any host.

The deployment of new VMs and rebuild operations are not allowed if the component limit is reached

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Capacity Utilization > Component

What if the Most Consumed Host Fails

This check simulates a failure of the host with most resources consumed and then displays the resulting cluster resource consumption.

Select vSAN cluster > Monitor > vSAN > Skyline Health
    Capacity Utilization > What if the Most Consumed Host Fails

vSAN Performance Monitoring

vSAN Online Performance Diagnostics

The vSAN online performance diagnostics tool collects performance data and sends it to VMware for diagnostic and benchmarking purposes. VMware analyzes the performance data and provides recommendations

Select vSAN cluster > Monitor > vSAN > Performance Diagnostics

vSAN Performance Service

The vSAN performance service monitors performance-based metrics at the cluster, host, VM, and virtual disk levels.

1. Select vSAN cluster > Configure > vSAN > Services
2. Select Performance Service

Note:
a. Performance service is enabled by default. 
b. The performance history database is created and stored as the StatsDB object in the vSAN datastore
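
A minimal PowerCLI sketch for checking and enabling the performance service (the cluster name is a placeholder):

# Confirm whether the performance service is enabled (it is on by default)
Get-Cluster -Name 'vSAN-Cluster' | Get-VsanClusterConfiguration | Select-Object Name, PerformanceServiceEnabled

# Enable it explicitly if required
Get-Cluster -Name 'vSAN-Cluster' | Set-VsanClusterConfiguration -PerformanceServiceEnabled $true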

I/O Impact on Performance

When analyzing performance, you must consider the sources of I/O traffic.

Front-end storage traffic
    generated by VM storage I/O traffic

Back-end storage traffic
    generated by the following sources
    a. vSphere Replication traffic
    b. vSphere Storage vMotion
    c. vSAN Objects Resynchronization

vSAN Cluster Metrics

In addition to standard cluster performance metrics, vSAN clusters record storage I/O metrics for both VM and vSAN back-end traffic.

# Display vSAN cluster metrics
1. Select vSAN cluster > Monitor > vSAN > Performance
2. Select one of the options
    a. VM
    b. Backend
    c. IOInsight

# VM I/O metrics at the following levels:
    a. Cluster
        The chart displays cluster-level metrics from the perspective of VM I/O traffic.
        i. IOPS
        ii. Throughput
        iii. Latency
        iv. Congestion
        v. Outstanding
    b. Specific VMs
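
These metrics can also be retrieved with PowerCLI, assuming the Performance.* statistic names exposed by the vSAN performance service (cluster and VM names are placeholders):

# Cluster-level read IOPS for the last hour
Get-VsanStat -Entity (Get-Cluster -Name 'vSAN-Cluster') -Name 'Performance.ReadIops' -StartTime (Get-Date).AddHours(-1)

# The same cmdlet also works against an individual VM
Get-VsanStat -Entity (Get-VM -Name 'Web01') -Name 'Performance.ReadLatency' -StartTime (Get-Date).AddHours(-1)
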
Back-End Cluster-Level Metrics

The chart displays cluster-level metrics from the perspective of the vSAN back end.

IOInsight

IOInsight captures I/O traces from ESXi and generates metrics that represent the storage I/O behavior at the VMDK level. The IOInsight report contains no sensitive information about the VM applications.

# To start IOInsight
select the vSAN cluster and go to Monitor > vSAN > Performance > IOINSIGHT > NEW INSTANCE

Preparing an IOInsight Instance

You select a VM or host to monitor all VMDKs associated with them. Name the IOInsight instance, and select the duration to run (default is 10 minutes). The system limits IOInsight monitoring overhead of CPU and memory to less than 1%.

Viewing IOInsight Instance Metrics

After the IOInsight instance completes the collection, you can view detailed disk-related metrics.

# View the IOInsight instance
1. select the vSAN cluster and go to Monitor > vSAN > Performance > IOINSIGHT
2. Select the instance, and select "..." and select View Metrics
3. New top menu options available
    a. Disks
        The Disk tab displays performance metrics at disk and disk group levels.
        Use the Disk Group drop-down menus to select individual cache or capacity disks, or the entire disk group.
    b. Physical Adapters
    c. Host Network

Host-Level Metrics for the Cache Tier

The Write Buffer Free Percentage chart indicates the amount of free capacity on the cache tier. As the buffer starts to fill up, the destaging rate increases. This increase is shown in the Cache Disk De-stage Rate chart.

If the write buffer-free percentage is less than 20%, artificial latency (congestion) is introduced to slow down the incoming data rate.

Host-Level Metrics for Network Performance

Network throughput is important to the overall health of the vSAN cluster. The PHYSICAL ADAPTERS and HOST NETWORK tabs enable you to monitor physical NICs and VMkernel adapters, respectively.

Host-Level Metrics for Resync Operations

Use the Resync IOPS, Resync Throughput, and Resync Latency charts to observe the impact of resync operations on a disk group. The charts display metrics for the following resync operation types:

  1. Policy change
  2. Evacuation
  3. Rebalance

VM Metrics

1. In the vSphere Client, navigate to and select the VM
2. Select Monitor > vSAN > Performance
3. Select the tab
    a. VM
        Shows the IOPS, throughput, and latency statistics of individual VMs.
    b. Virtual Disk
        The Virtual Disk drop-down menu lists all the virtual disks that you can select from.

vSAN Capacity Monitoring

Select the vSAN cluster and select the Monitor tab, then expand vSAN and select Capacity Usage

Capacity Usage Overview

Information in the capacity overview includes:

  1. Used ("actually written") space at the capacity tier
  2. Free space on disks

Capacity Usage with Space Efficiency

Deduplication and compression savings provide an overview of the space savings achieved.

Usable Capacity Analysis

The Usable capacity analysis panel enables you to select a different storage policy and see how this policy affects the available free space on the datastore.

Capacity Usage Breakdown

The capacity usage breakdown section provides detailed information about the type of objects or data that are consuming the vSAN storage capacity.

Capacity History

The CAPACITY HISTORY tab displays changes in the used capacity over a selectable date range. You can use the tab to extrapolate future growth rates and capacity requirements.

vSAN Capacity Reserve

You can enable vSAN capacity reserve for the following use cases:

  1. Operation reserve
  2. Host rebuild reserve

Enabling operation reserve for vSAN helps ensure enough space in the cluster for internal operations to complete successfully. Enabling host rebuild reserve allows vSAN to tolerate one host failure.

When reservations are enabled and capacity usage reaches the limit, new workloads cannot be deployed.

vSAN iSCSI Target Service

With the vSAN iSCSI target service, a remote host with an iSCSI initiator can transport block-level data to an iSCSI target in the vSAN cluster. You can configure one or more iSCSI targets in your vSAN cluster to provide block storage to legacy servers.

You can add one or more iSCSI targets that provide storage objects as logical unit numbers (LUNs). Each iSCSI target is identified by its own unique iSCSI qualified name.

Use the vSAN iSCSI target service to enable hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore.

After configuring the vSAN iSCSI target service, you can discover the vSAN iSCSI targets from a remote host. To discover vSAN iSCSI targets, use the IP address of any host in the vSAN cluster and the TCP port of the iSCSI target.

vSAN iSCSI Target Service Networking

Before enabling the vSAN iSCSI target service, you must configure your ESXi hosts with VMkernel ports and NICs that are connected to the iSCSI network.

iSCSI storage traffic is transmitted in an unencrypted format across the LAN. Therefore, a best practice is to use iSCSI on trusted networks only and to isolate the traffic on separate physical switches or on a dedicated VLAN.

To ensure high availability of the vSAN iSCSI target, configure multipath support for your iSCSI application. You can use the IP addresses of two or more hosts to configure the multipath.
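
As a hedged sketch, the following esxcli commands illustrate verifying the target service on a vSAN host and configuring dynamic discovery with two paths on a remote ESXi initiator; the adapter name (vmhba65) and the IP addresses are placeholders:

# On a vSAN host: confirm the iSCSI target service is enabled and list configured targets
esxcli vsan iscsi status get
esxcli vsan iscsi target list

# On the remote ESXi initiator: add two vSAN host IPs as dynamic (send targets) discovery
# addresses for multipathing, then rescan the adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.10.11:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.10.12:3260
esxcli storage core adapter rescan -A vmhba65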

vSAN File Service

https://core.vmware.com/resource/vsan-file-services-tech-note

https://cormachogan.com/2020/11/11/vsan-7-0u1-file-service-smb-support/

https://4sysops.com/archives/vmware-vsan-7-u1-configure-vsan-file-service/

https://computingforgeeks.com/enable-nfs-file-service-in-vmware-vsan-storage-cluster/

With VMware vSAN File Service, you can provision NFS and SMB file shares on your existing vSAN datastore. You can access these file shares from supported clients, such as VMs and physical workstations or servers.

You can also create a container file volume that can be accessed from Kubernetes (K8s) pods.

vSAN File Service is based on the vSAN Distributed File System, which provides the underlying scalable file system by aggregating vSAN objects. The vSAN Distributed File System provides resilient file server endpoints and a control plane for deployment, management, and monitoring.

vSAN File Service is a layer that sits on top of vSAN to provide file shares. 
It currently supports SMB, NFSv3, and NFSv4.1 file shares.
vSAN File Service Overview
vSAN Stretched Cluster File Service Overview
a. To enable and configure vSAN File Service, ESXi hosts in a vSAN cluster should have a minimum of four CPUs. 
b. You can configure up to 32 file shares per vSAN cluster.

vSAN File Service shares are created as a single file system namespace consisting of vSAN storage objects. By using vSAN objects, you can apply storage policies to your file shares.

To provide network access points to file shares, the vSAN File Service uses special appliance virtual machines (VMs), called file service VMs (FSVMs). Multiple FSVMs are deployed, providing multiple access points and failover capability for your file shares.

You assign a pool of static IP addresses as file service access points during the configuration of the vSAN File Service. FSVMs run a containerized NFS service and each file share is distributed between one or more containers.

Each vSAN cluster that has file services enabled is called a vSAN File Service domain. The vSAN File Service domain maintains the set of IP addresses bound to the file system containers running inside the FSVMs.

vSAN 7 File Services—The Requirements
1. CPU requirements
    vSAN file nodes require 4 vCPU minimum
2. Static IP and DNS records
    Create forward and reverse DNS records for vSAN file services nodes. 
    A static IP address, subnet masks, gateway for file servers are also needed.
3. AD domain - For SMB shares and NFS shares with Kerberos security,
    you also need to provide information about your AD domain and, optionally, an organizational unit.
    In addition, a user account with sufficient delegated permissions is required.

vSAN File Service Considerations

https://core.vmware.com/resource/vsan-frequently-asked-questions-faq

When designing vSAN File Service and file shares, consider the following:

  1. Only IPv4 is supported, and static IP addresses are required for FSVMs
  2. Up to 32 file shares can be created
  3. ESXi hosts in a vSAN cluster should have a minimum of four CPUs
  4. Stretched cluster and two-node cluster configurations are not supported

vSAN File Shares

vSAN file shares are integrated into the existing vSAN Storage Policy-Based Management on a per-share basis for resilience. vSAN File Service uses a set of file service VMs (FSVMs) to provide network access points to the file shares. FSVMs run a containerized NFS service, and each file share is distributed between one or more containers.

When you configure vSAN File Service, a VDFS is created to manage the following activities

  1. File share object placement
  2. Scaling and load balancing
  3. Performance optimization
File Service VMs

FSVMs are preconfigured Photon Linux-based VMs.

Up to 32 FSVMs can be deployed per vSAN cluster to provide multiple access points and failover capability for vSAN file shares.

One FSVM per host is deployed when vSAN File Service is enabled. vSphere ESX Agent Manager is used to provision FSVMs and pin them to a host (affinity)

When you configure vSAN File Service, ESX Agent Manager performs the following tasks:

  1. Verifies vSAN File Service OVF image compatibility
  2. Provisions agent machines onto the vSAN cluster

If vSAN File Service is disabled, the solution sends a Destroy Agent signal to remove all the FSVMs.

File Service Agent Machines Storage Policy

The FSVM is configured with a custom vSAN storage policy called FSVM_Profile_DO_NOT_MODIFY.

This custom storage policy offers no data redundancy and pins FSVM to a host.

Note:
    Do not modify this storage policy setting.

vSAN File Service is disabled by default. You can enable it from vSAN Services.

# How to enable vSAN File Service
1. Select a vSAN cluster
2. select Configure > vSAN > Services > File Service > Enable

When you enable the vSAN File Service, you must provide the following details:
a. Authentication method
b. File service domain name
c. Network port group
d. Network protocol type
e. Subnet mask
f. Default gateway
g. Pool of static IP addresses for FSVMs

vSAN File Service Domain Configuration

A vSAN File Service domain is a namespace for a group of file shares with a common networking and security configuration. You must enter a unique namespace for the vSAN File Service domain when you enable the file service.

Note:
The vSAN File Service domain is not an Active Directory domain.
It is a namespace for a group of file shares.

vSAN File Service Network Configuration

Currently, vSAN File Service supports only IPv4 for file share access. Select a network port group for FSVMs to provide access to file shares.

You should select a distributed port group to ensure consistency across all hosts in a vSAN cluster.

Note:
    All FSVMs should reside on the same network subnet.

You must provide a pool of static IP addresses for FSVMs.

Note:
    Provide the same number of IP addresses as the number of ESXi hosts present in the vSAN cluster during setup.

vSAN File Service Nodes (VMs)

As part of the vSAN File Service configuration, ESX Agent Manager deploys vSAN File Service nodes.

Note:
    Do not rename or make manual changes to the vSAN File Service nodes (VMs).

Creating vSAN File Shares

After vSAN File Service is enabled, you can create a file share to access from NFS clients and a container file volume to access from Kubernetes pods.

# How to create a file share
1. select a vSAN cluster
2. select Configure > vSAN > Services > File Shares > ADD
    a. VSAN File Share
    b. Container File Volume
    c. All
    Note: Select "All"

You configure the following storage properties during file share creation:
a. Storage protocol
b. Capacity of the share
c. Security mode
d. vSAN storage policy

Configuring a vSAN File Share

High-level steps for configuring a vSAN file share:

  1. Choose a suitable name for the file share.
  2. You can select either the NFS or the SMB protocol to access the file share. You also select the required protocol version.
  3. A file share supports both the AUTH_SYS and Kerberos security modes. Based on the selected protocol version, select the supported security mode.
  4. The vSAN default storage policy is assigned for file shares. You can select a policy based on your availability and performance requirements.
  5. Define the file share quota to limit the capacity that the file share can consume on the vSAN datastore. Include a warning threshold.
  6. Labels are key-value pairs that can be used to identify file shares. Labels are useful when assigning file shares to Kubernetes pods.
Configuring Network Access Control

You use network access control to limit which clients can access a file share. A vSAN file share can allow access from any NFS client IP address or from a specific list of client IP addresses. You can also define a custom network access property.

Viewing vSAN File Share Properties

After a file share is created, you can record the file share mount path details to mount it from NFS clients. You can view file share storage usage details, and you can modify the file share storage quota properties by editing the file share.

# How to record the file share path
1. In vSphere client, select the vSAN cluster > Configure
2. Navigate to vSAN > File Shares
3. Select the vSAN file share, click "Copy Path"
    It will display the share path
4. Provide the share path to the NFS client to access the vSAN file share

Note:
The file share properties show the following details:
a. File share name
b. Storage Policy
c. Usage/Quota
d. Actual Usage

Monitoring vSAN File Share Performance Metrics

You can monitor throughput, IOPS, and latency-related information per file share.

# To monitor performance metrics
1. Select a vSAN cluster and go to Monitor > vSAN > Performance
2. Click the File Share tab
3. Select the file share

VMware Skyline Health Details for vSAN File Service

VMware Skyline Health provides detailed information about vSAN File Service infrastructure health, file server health, and share health.

# To view the Skyline health details
Select a vSAN cluster and go to Monitor > vSAN > Skyline Health > File Service
    a. Infrastructure Health
    b. File Server Health
    c. Share Health

HCI Mesh

With the HCI Mesh architecture, you can remotely mount datastores from other vSAN clusters (server clusters) to one or more vSAN clusters (client clusters). The client and server clusters must be within the same vCenter Server datacenter inventory.

With the help of HCI Mesh, a cluster can consume storage capacity from a remote vSAN cluster while using its own compute resources.

HCI Mesh works by using the existing vSAN VMkernel ports and transport protocols end to end. HCI Mesh requires no specialized hardware. HCI Mesh is scalable: it can support up to 64 hosts across clusters in a mesh, and a client cluster can mount up to five remote datastores. Currently, HCI Mesh is available with the vSAN Enterprise and Enterprise Plus editions.

# HCI Mesh benefits
1. Balancing capacity across vSAN clusters
    HCI Mesh can be used for balancing storage capacity by migrating VM data using vSphere Storage vMotion 
    to other vSAN clusters based on capacity usage and the configured threshold.
2. Hardware maintenance
    HCI Mesh can be used to evacuate data from ESXi hosts during patching or maintenance. 
3. Storage-as-a-Service
    Cloud providers can provide a managed pool of storage for multiple tenant consumers by using storage-only cluster topology 
    which is easily scalable. In this model, the cloud provider owns and manages the storage-only clusters.
    The tenant cluster mounts the datastore remotely.

HCI Mesh provides the following advantages over traditional vSAN:

  1. It improves the storage-to-compute ratio.
  2. License optimization: The storage and compute can now be separated. You can use the appropriate licenses as required by the clusters and save costs.
  3. Heterogeneous storage classes: Different types of storage classes provide better efficiency in hyperscale deployments.
  4. It has policy-based performance and availability placement

# How to mount the remote datastore
1. From the client vSAN cluster, go to the Configure tab
2. Select vSAN > Datastore Sharing
3. Click the Mount Remote Datastore link

Considerations when mounting a remote vSAN datastore
a. Server and client clusters are from the same datacenter
b. All hosts in server and client clusters are licensed.
c. Server and client clusters have no connectivity issues.
d. Network latency between server and client hosts is below 5,000 microseconds (5 ms)

HCI Mesh Terminology

Local cluster

  1. The vSAN cluster where storage is hosted and consumed. Each cluster has a subcluster UUID, for example, ssssssss-ssss-ssss-ssss-ssssssssssss.
  2. All vSAN clusters, where HCI Mesh is not used, are considered local vSAN clusters.

Server cluster

  1. The cluster where the storage is locally hosted. This cluster provides storage resources to other clusters.

Local vSAN datastore

  1. From the server cluster viewpoint, the vSAN datastore is considered a local datastore.

Remote vSAN datastore

  1. From the client cluster viewpoint, the vSAN datastore is considered a remote datastore.

Cross-cluster vMotion:

  1. The remote vSAN datastore is mounted on both the client and the server clusters, which means that cross-cluster vSphere vMotion is possible.

HCI Mesh: Common Topologies

Different HCI Mesh topologies can be used, based on the infrastructure requirement. Common HCI Mesh topologies:

  1. Cross-cluster topology
  2. Storage-only cluster topology. As a best practice, the storage-only cluster does not run workloads.

Cross-cluster topology has the following features and use cases:

  1. Provides the most flexibility by mounting multiple remote datastores
  2. VM workloads can run on both client and server clusters
  3. Can be bidirectional
  4. Useful in addressing stranded capacity scenarios

HCI Mesh: Network Requirements and Recommendations

Network connectivity requirements

Both layer 2 and layer 3 are supported for intercluster connectivity. 

Network redundancy recommendations:

1. Use multiple NICs on the ESXi host.
2. Use dual top-of-rack (ToR) switches on the rack.
3. A leaf spine topology is preferred for core redundancy and reduced latency.
4. A single vSAN VMkernel NIC is required; air-gapped network configurations are not supported.

Network performance requirements:
HCI Mesh has the same latency and bandwidth requirements as local vSAN.

HCI Mesh Licensing and Scalability

Licensing requirements

  1. vSAN Enterprise edition is required on all clusters participating in a mesh topology.

Because multiple vSAN clusters can participate in a mesh topology, the scalability limitations mostly apply to the vSAN datastore object:

  1. A single vSAN datastore can be mounted on a maximum of 64 hosts, which include both server and client hosts.
  2. A client cluster can mount a maximum of five remote vSAN datastores.
  3. A server cluster can export its datastore to a maximum of five client clusters.

HCI Mesh Operations

Consider the following when mounting the vSAN datastore of one cluster to another vSAN cluster:

1. You can only mount vSAN datastores from the same vCenter Server instance.
2. Two-node clusters and stretched clusters are not compatible with HCI Mesh.
# How to mount remote datastore
To mount a remote datastore on the client vSAN cluster
1. In the vSphere Client, select the vSAN cluster that will act as the client
2. Select vSAN > Datastore Sharing on the Configure tab
3. Click the Mount Remote Datastore link
4. On the Mount Remote Datastore page,
    select the remote vSAN datastore to be mounted, and click Next
5. In the Check Compatibility section
    Review the check list
6. Click Finish

Client Datastore View

The Datastore Sharing view provides information about shared vSAN datastores and displays client and server cluster information. Local datastores are identified with the (Local) prefix.

How to verify the remote vSAN datastore

A quick way to verify the remote vSAN mount is to perform the vSAN VM creation test.

1. In vSphere client, select the local vSAN cluster
2. Select Monitor > vSAN > 
    Proactive Tests > 
    VM Creation Test and 
3. click Run

Remote Accessible Objects

Remote accessible objects denote the objects on the server cluster that are being accessed by the client cluster.

# To see the remote accessible objects that are placed on the remote vSAN datastore
1. select Client cluster > Monitor > 
    vSAN > 
    Skyline Health > 
    vSAN object health

Server Cluster Partition Health Check

To see the client and server hosts listed, select Monitor > vSAN > Skyline Health > Network > Server Cluster Partition.

Remote VM Performance

To see remote vSAN VM performance metrics, select Monitor > vSAN > Performance > Remote VM

Physical Disk Placement

Object accessibility shows a remote-accessible status.

To see physical disk placement, select the VM and select Monitor > vSAN > Physical disk placement > Remote Objects.

HCI Mesh Interoperability

vSphere HA Failure Scenarios

Common vSphere HA failure scenarios in an HCI Mesh environment include the following:

  1. A network partition in the server cluster
  2. A network partition in the client cluster
  3. Loss of connectivity between the client and server clusters

VM Component Protection

With HCI Mesh using remote vSAN, you can have virtual machine (VM) compute resources allocated from one cluster and storage space allocated from another cluster.

You can configure VM Component Protection (VMCP) to protect VMs if a cross-cluster communication failure occurs.

When the cross-cluster communication fails, an All Paths Down (APD) is declared after 60 seconds of isolation. The APD response is triggered to automatically restart the affected VMs after 180 seconds.

SPBM Integration

A single vSAN VASA provider acts on behalf of all vSAN clusters managed by the same vCenter Server instance:

  1. The vSAN VASA provider dispatches all policy requests targeting a vSAN datastore to one of the hosts in the corresponding vSAN cluster.
  2. The vSAN VASA provider maintains an up-to-date list of hosts capable of satisfying VASA provider API calls to a vSAN datastore, using the datastore property collector:
a. For local vSAN, the list of hosts comprises all hosts mounting the vSAN datastore.
b. For HCI Mesh, the list also includes hosts from client clusters remotely mounting the same datastore.
Note:
1. Only hosts in the server cluster can respond to SPBM commands.
2. The vSAN VASA provider ignores the client cluster hosts.

vSphere vMotion and vSphere Storage vMotion

HCI Mesh is compatible with both vSphere vMotion and Storage vMotion. vSphere vMotion has the following conditions:

  1. VMs can be migrated using vSphere vMotion within the client cluster, regardless of whether they reside on a local or remote vSAN datastore.
  2. VMs are allowed to migrate with vSphere vMotion across client or server clusters, as long as all VM objects reside on a mutually shared remote vSAN datastore.

vSphere Storage vMotion has the following conditions:

  1. Migration between a local vSAN datastore and a remotely mounted vSAN datastore
  2. Migration between a remotely mounted vSAN datastore and a local vSAN datastore
  3. Migration between two remotely mounted vSAN datastores

HCI Mesh with vSphere DRS

HCI Mesh is compatible with vSphere DRS. vSphere DRS rules are applicable on the client cluster in the following cases:

  1. VMs are stored on the local vSAN datastore.
  2. VMs are stored on a remotely mounted vSAN datastore.

Here, vSphere DRS refers to standard DRS rules on the client cluster such as affinity and anti-affinity.

vSAN Direct

vSAN Direct enables administrators to create a vSAN Direct datastore on a single blank hard drive on the ESXi host. With vSAN Direct, administrators can manage local disks by using vSAN management capabilities.

1. vSAN Direct Configuration is suitable for cloud-native applications using Kubernetes.
2. vSAN Direct provides cloud-native storage, such as persistent volumes

vSAN Direct configuration provides an alternative option for modern stateful services to interface directly with the underlying direct attached storage for optimized I/O and storage efficiency.

vSAN Direct features:
1. vSAN Direct extends the HCI management features of vSAN to the local disks formatted with VMFS.
2. vSAN Direct manages and monitors disks formatted with VMFS and provides insights into the health, 
    performance, and capacity of these disks.
3. vSAN Direct enables users to host persistent services on VMFS storage. 
4. vSAN Direct enables users to define placement policies and quotas for the local disks.

vSAN Direct Architecture

vSAN Direct supports cloud-native applications running in a vSphere 7.x Supervisor Kubernetes cluster. Other Kubernetes clusters are not currently supported. Each local disk is mapped to a single vSAN Direct datastore.

These datastores form the vSAN Direct storage pool. This pool can be claimed by Kubernetes in the form of persistent volumes (PVs).

Note:
Currently, vSAN Direct supports only Kubernetes Supervisor Clusters on vSphere.

Cloud Native Storage

https://www.vmware.com/au/products/cloud-native-storage.html

https://blogs.vmware.com/virtualblocks/2019/08/14/introducing-cloud-native-storage-for-vsphere/

Cloud Native Storage (CNS) is a vSphere and Kubernetes (K8s) feature that makes K8s aware of how to provision storage on vSphere on-demand, in a fully automated, scalable fashion as well as providing visibility for the administrator into container volumes through the CNS UI within vCenter. Run, monitor, and manage containers and virtual machines on the same platform – in the same way.

CNS provides K8s with the understanding on how to carry out both storage provisioning and management tasks on vSphere. Additionally, CNS provides the vSphere admin with visibility into container usage on the physical infrastructure. This includes mapping container volumes to backing disks and capacity management – just as if they were a VM volume.

Cloud Native Storage for vSphere

vSAN Cloud Native Storage Components

Key concepts for using vSAN Direct with Kubernetes:

  1. Namespaces are logical entities in Kubernetes.
  2. Kubernetes persistent storage is in the form of PVs.
  3. PVs communicate with vSphere storage through the intermediary cloud-native storage control plane.
  4. PVs can use vSAN Direct storage by employing tag-based policies
Cloud Native Operations Workflow

The following shows the steps taken by the vSphere administrator and the DevOps administrator to use vSAN Direct:

  1. The vSphere administrator provisions vSAN Direct datastores, using unformatted drives on the ESXi hosts.
  2. The vSphere administrator creates policies using tags to map the vSAN Direct datastores.
  3. The DevOps administrator claims the PVs from the vSAN Direct storage pools identified by the tag-based storage policies.
  4. The DevOps administrator creates Kubernetes applications that consume the PVs residing on vSAN Direct.
Claiming Disks for vSAN Direct
# How to claim disks for vSAN Direct
a. Each disk claimed creates a unique vSAN Direct datastore.
b. An unformatted local disk drive is eligible for vSAN Direct.
c. You can use the vSAN claim disk wizard to claim disks for vSAN Direct

# After claiming disks for vSAN Direct
a. vSAN Direct uses one disk per datastore.
b. vSAN Direct can coexist with regular vSAN disk groups

# vSAN Direct tags
a. A set of default vSAN Direct tags and categories is available after claiming disks for vSAN Direct
b. You can create your own tags to use with your storage policies.

# Tag Name: vSANDirect
1. Click the vSAN Direct datastore and go to Tags.
2. Navigate to "Assigned Tags" to view the assigned tags.
3. Navigate to Storage to view the capacity.

Tag-Based Policies - vSAN Direct

vSAN Direct supports tag-based storage policies.

# To create a tag-based vSAN Direct storage policy:
1. In the vSphere Client, go to Menu -> Administration
2. Select Policy and Rules
3. Create VM Storage Policy
4. Enter the Name and description
5. In Policy structure
    a. Select "Enable tag based placement rules"
6. Continue with Tag based placement
7. Verify Storage compatibility
8. Review and finish
Storage Compatibility

The supervisor Kubernetes cluster requires a storage policy to identify datastores to store PVs:

  1. After you create the vSAN Direct storage policy, matching datastores with the vSAN Direct tag are available as compatible storage.
  2. These vSAN Direct datastores can be used by cloud-native applications as a common pool for persistent data storage.

# How to review storage compatibility
1. In the vSphere Client, navigate to the vSAN Direct VM storage policy
2. Select the Storage Compatibility tab
    a. Compatible tab
        Review the compatible vSAN Direct datastores
        i. Name
        ii. Datacenter
        iii. Tags
        iv. Free space
        v. Capacity
    b. Incompatible tab

Capacity Reporting

You can monitor vSAN Direct capacity independently of vSAN capacity.

Select the vSAN cluster > Monitor >
    vSAN > Capacity > 
    CAPACITY USAGE
    click vSAN Direct

Claiming Disks for vSAN Direct Configuration

An unformatted local disk drive is eligible for vSAN Direct Configuration.

Each local disk is mapped to a single vSAN Direct datastore. These datastores form the vSAN Direct storage pool, and this pool can be claimed by the Kubernetes cluster in the form of persistent volumes.

You can make a device ineligible for the regular vSAN datastore and available only for vSAN Direct Configuration by tagging the local storage device:

esxcli vsan storage tag add -d diskName -t vsanDirect 

vSAN Direct Configuration can coexist with regular vSAN disk groups.
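
As a tentative follow-up to the tagging command above, you can check device eligibility and undo the tag from the ESXi host; the disk name is a placeholder and output details differ by release:

# Query disk eligibility for vSAN and vSAN Direct
vdq -q

# Remove the vSAN Direct tag if the device should become eligible for regular vSAN again
esxcli vsan storage tag remove -d diskName -t vsanDirect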

vSAN Cluster Maintenance Mode Options

ESXi hosts in vSAN clusters provide storage resources in addition to compute resources. You must use the appropriate maintenance mode options to maintain data accessibility.

When placing a host in maintenance mode, you can select one of the following vSAN data migration options (see the command sketch after this list):

1. Ensure accessibility
    evacuates enough components to ensure that VMs can continue to run
2. Full data migration
    evacuates all components to other hosts in the cluster
3. No data migration
    Use this option if you plan to shut down all hosts
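
The same options can be selected from the ESXi command line; a minimal sketch (the --vsanmode option and its values are as recollected and may vary slightly by release):

# Enter maintenance mode with the "Ensure accessibility" vSAN data migration option
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Other vsanmode values are evacuateAllData and noAction; exit maintenance mode with:
esxcli system maintenanceMode set --enable false
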
Data Migration Precheck

You run the data migration precheck before placing a host in maintenance mode. This precheck determines whether the operation can succeed and reports the state of the cluster after the host enters maintenance mode.

You can select a host and the type of vSAN data migration to test

# How to run data migration precheck
1. select the cluster
2. select Monitor > vSAN > Data Migration Pre-Check

Maintenance Mode and Absent Components

When a host is placed in maintenance mode without evacuating all data, some components become absent.

As long as the object has a quorum, the object remains functional. When an absent component comes back online, vSAN updates the component with any needed changes and the component becomes healthy again.

# How to view VM components
1. In the vSphere Client, select the required VM
2. On the right pane, select Monitor > vSAN > Physical disk placement

Recommendations for Planned Maintenance

During maintenance, you must plan your tasks to avoid failure and consider the following recommendations:

1. Unless full data migration is selected, components on a host become absent 
    when the host enters maintenance mode, which counts as a failure.
2. Data loss can occur if too many unrecoverable failures occur and no backups exist.
3. Never reboot, disconnect, or disable more hosts than the required failures to tolerate.
4. Never start another maintenance activity before all resyncs are complete.
5. Never put a host in maintenance mode if another failure exists in the cluster.

vSAN Stretched Clusters

vSAN stretched clusters provide site or datacenter failure resilience.

A vSAN stretched cluster spans three sites to protect against site-level failure. If one site goes down, the VMs can be powered on at the other site with minimal downtime.

The third site hosts the witness host.

A vSAN stretched cluster extends the concept of fault domains so that each site represents a fault domain. The distance between the sites is limited, such as in metropolitan or campus environments.

Design of vSAN Stretched Cluster

vSAN stretched cluster spans three sites to protect against site-level failure.

  1. Preferred data site
  2. Secondary (or non-preferred) data site
  3. Witness site

Only the preferred and secondary data sites contribute to the compute and storage resources.

The preferred and secondary data sites can each host a limited number of ESXi hosts, depending on the vSAN version:

# Previous vSAN versions
15 + 15 + 1 = 31 hosts

# vSAN 7 Update 2
20 + 20 + 1 = 41 hosts

vSAN stretched clusters have the following benefits

  1. Planned site maintenance
  2. Preventing service outages resulting from site failure
  3. Automated recovery

vSAN stretched clusters can be used with vSphere Replication and Site Recovery Manager. Replication between vSAN datastores enables a recovery point objective (RPO) as low as 5 minutes.

Witness Hosts

A vSAN stretched cluster requires a witness host to store the witness components for VM objects:

  1. Each stretched cluster must have its own witness host.
  2. The witness host cannot run any VMs.
  3. The witness host stores only witness components to provide cluster quorum.
  4. The witness host is packaged in a virtual appliance.
  5. The witness host includes the embedded license.

The Witness host can be deployed as either a physical ESXi host or a vSAN witness appliance. If a vSAN witness appliance is used for the witness host, it will not consume any of the customer’s vSphere licenses. A physical ESXi host that is used as a witness host will need to be licensed accordingly.

Sizing Witness Hosts

When sizing the witness host, verify the number of VMs in the cluster and the number of two-node clusters that it must support.

Network Requirements - Between Data Sites

A vSAN stretched cluster network requires connectivity across all three sites. It must have independent routing and connectivity between the data sites and the witness host.

Bandwidth between sites hosting VM objects and the witness node is dependent on how many objects reside on the vSAN cluster. You must size the data site to the witness bandwidth appropriately for both availability and growth.

In a vSAN stretched configuration, you size the write I/O according to the inter-site bandwidth requirements. By default, the read traffic is handled by the site on which the VM resides.

Note:
A vSAN stretched cluster requires 10 Gbps or faster connectivity between data sites, with latency < 5 ms RTT

Network Requirements - Between Data Site and Witness Sites

The network bandwidth required between the data sites and the witness site is calculated differently from the inter-site bandwidth required for data sites. Witness sites do not maintain VM data. They contain only component metadata.

vSAN Stretched Cluster Site         Bandwidth                           Latency
--------------------------------------------------------------------------------
Between data sites                  2 Mbps per 1000 vSAN components     < 500 ms RTT (1 host per site)
and witness host                                                        < 200 ms RTT (up to 10 hosts per site)
                                                                        < 100 ms RTT (11 to 15 hosts per site)
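
As a rough sizing sketch based on the guideline above: a data site hosting about 15,000 vSAN components needs roughly 15 x 2 Mbps = 30 Mbps of bandwidth to the witness site, well within a 100 Mbps link.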

Static Routes for vSAN Traffic

By default, vSphere uses a single default gateway. All routed traffic attempts to reach its destination through this common gateway.

You might need to create static routes in your environment to override the default gateway for vSAN traffic in certain situations:

  1. If your deployment has the witness host on a different network
  2. If the stretched cluster deployment has both data sites and the witness host on different networks

You can create a static route, as an alternative to overriding the default gateway on the VMkernel adapter, by using the following esxcli command (a verification example follows the adapter settings below):

esxcli network ip route ipv4 add -g gateway-to-use -n remote-network

# Alternatively, edit vmkX (vSAN VMkernel adapter) to override the default gateway
1. IPv4 settings
    Use static IPv4 settings
    a. IPv4 address
    b. Subnet mask
    c. Default gateway - Override default gateway for this adapter
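
A hedged verification sketch for the static route approach, with placeholder gateway and network values:

# Add a static route toward the witness network, then list the routing table to verify
esxcli network ip route ipv4 add -g 192.168.100.1 -n 192.168.200.0/24
esxcli network ip route ipv4 list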

vSAN Stretched Cluster Site Disaster Tolerance

Use dual site mirroring (stretched cluster) VM storage policy rules to determine the fault tolerance. The site disaster tolerance governs the failures to tolerate across sites. If a data site goes down, the replica on the remaining site and the witness component remain available to continue operations.

In the VM storage policy Availability tab, select the Site disaster tolerance option:
a. None - standard cluster
b. None - standard cluster with nested fault domains
c. Dual site mirroring (stretched cluster)
    Choose this option
d. None - keep data on Preferred (stretched cluster)
e. None - keep data on Non-preferred (stretched cluster)
f. None - stretched cluster

vSAN Stretched Cluster Heartbeats

vSAN designates a master node on the preferred site and a backup node on the secondary site to send and receive heartbeats.

Heartbeats are sent every second

  1. Between the master and backup nodes
  2. Between the master node and the witness host
  3. Between the backup node and the witness host

If communication is lost between the witness host and one of the data sites for five consecutive heartbeats, the witness is considered down until the heartbeats resume.
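
As a quick check of the roles described above, the following command, run on any host in the stretched cluster, reports the host's role (MASTER, BACKUP, or AGENT) and the sub-cluster member count, which should include both data sites and the witness:

esxcli vsan cluster get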

Managing Read and Write Operations

vSAN stretched clusters use a read locality algorithm to read 100% from the data copy on the local site. Read locality reduces the latency incurred during read operations.

However, the writes must be sent to all the available mirrors on both preferred and secondary sites.

Site Disaster Tolerance - Dual Site Mirroring

The dual site mirroring policy in a stretched cluster maintains one replica on each data site. If one data site goes down, the replica on the remaining site and the witness component remain available to continue operations. When choosing this policy, you must ensure that both data sites have sufficient storage capacity to each accommodate a replica. Consider the number of objects and their space requirements when applying the Dual Site Mirroring policy.

Dual Site Mirroring with RAID 1

Dual site mirroring with RAID 1 ensures that the object remains accessible in the event of a site failure, in addition to a node failure on the remaining site. You must ensure that the number of hosts and drives available on each site can satisfy the Failures To Tolerate and Stripe Width policy settings.

Dual Site Mirroring with RAID 5/6

vSAN stretched clusters also support RAID 5/6 erasure coding within the two data sites. Within each site, four or six hosts are required for RAID 5 and RAID 6, respectively. The vSAN object is mirrored between the sites, and if a single host failure occurs within a site, the object can tolerate it by using the RAID 5 or RAID 6 layout within that site.

Keeping Data on a Single Site

You can use the following VM storage policy options to place the components of an object on a single site within the stretched cluster:

  1. None-keep data on Preferred
  2. None-keep data on Non-preferred

No mirroring is performed across the data sites. vSphere fault tolerance for VMs is supported for VMs that are restricted to a single site.

The None-stretched cluster option places the vSAN components across the sites. Use this policy option with VM or host groups and DRS rules to restrict the VM compute resource to a particular site host.

Keeping Data on a Single Site - Planning Considerations

If a VM does not require site-level protection, you can choose to not replicate the data on both sites. You can apply a policy to keep the data on a preferred or nonpreferred site.

If you choose to keep data on any site, you should also ensure that the VM runs on the same site as the data.

If the VM is migrated to the remote site because of vSphere DRS, vSphere HA, or vSphere vMotion, the application performance is affected due to latency. You can create an affinity rule so that the VM always runs on the site where the data is located.

Symmetrical and Asymmetrical Configuration

vSAN 6.1 or later supports symmetrical configurations, where the preferred and nonpreferred data sites contain equal numbers of identical hosts.

vSAN 6.6 or later supports asymmetrical configurations, where the two data sites can have different numbers of hosts with different hardware configurations.

vSAN Two-Node Clusters

A two-node vSAN cluster is a specific configuration implemented in environments where a minimal configuration is a key requirement, typically running a small number of workloads that require high availability.

Ideal use cases for a two-node vSAN cluster include remote branch offices and edge clusters.

A two-node vSAN cluster consists of two hosts at the same location, connected to the same physical network switch or directly connected. A third host acts as a witness host, which can be located remotely from the branch office. Usually, the witness host resides at the main site, where vCenter Server is deployed.

vSAN Two-Node Clusters Network Requirements

vSAN two-node clusters have different networking requirements. The following table covers the requirements for a vSAN two-node cluster:

vSAN Cluster Configuration                  Requirements
--------------------------------------------------------------------------------------------
vSAN two-node cluster                   Use 10 Gb or faster, 
    (between hosts)                     but 1 Gb is supported with latency <1 ms RTT.
vSAN two-node cluster                   1.5 Mbps bandwidth network connectivity with latency <500 ms RTT (round trip)
    (between hosts and witness host)

A vSAN two-node cluster is a good choice for businesses with multiple remote offices. A two-node cluster can be deployed at each of the remote offices with a witness node hosted at a central datacenter.

Note:
A single witness host can be configured as the shared witness host for many remote-site two-node clusters.

Shared vSAN Witness Nodes

Multiple remote or branch office sites can share a common vSAN witness host to store the witness components for their vSAN objects.

A single witness host can support up to 64 two-node vSAN clusters. The number of two-node vSAN clusters supported by a shared witness host is based on the host memory.

A shared vSAN witness node has the following limitations:

  1. Does not support vSAN stretched clusters
  2. Does not support data-in-transit encryption

Witness Node Locations

You can run a vSAN witness node at the following locations

  1. On a vSphere environment with a VMFS datastore, an NFS datastore, or a vSAN cluster
  2. On vCloud Air/OVH backed by a supported storage
  3. Any vCloud Air Network partner-hosted solution
  4. On a vSphere hypervisor (free) installation using any supported storage (VMFS datastore or NFS datastore)
Shared vSAN Witness Node Memory Requirements

The number of witness components that a shared vSAN witness node can support depends on the memory allocated during the appliance deployment. A best practice is to allocate 16 GB or 32 GB of memory when sharing a witness between multiple two-node vSAN clusters.

Allocated Memory   Maximum Number of        Maximum Number of 
                Components Supported        Clusters Supported
-----------------------------------------------------------------
>=32 GB                 64000                   64
16 GB                   32000                   32
8 GB                    750                     1  

Shared vSAN Witness Node for Mixed Environment

The vSAN shared witness node appliance can be shared with multiple two-node vSAN clusters running different versions of ESXi.

If the two-node vSAN cluster version is later than the version of the vSAN shared witness node appliance, the witness node cannot participate in that cluster.

Managing Advanced vSAN Cluster Operations

vSAN Stretched Clusters

vSAN stretched clusters can be used in environments where disaster and downtime avoidance is a key requirement. Stretched clusters protect virtual machines (VMs) across data centers, not only racks. A stretched cluster extends across two active-active data sites and a witness site. The witness site contains the witness host and provides the cluster quorum during a site failure. Use vSphere DRS affinity rules to run VMs on a specific data site

Stretched clusters benefits

  1. Planned site maintenance
  2. Preventing service outages resulting from site failure
  3. Automated recovery

With stretched clusters, you can perform planned maintenance of one site without any service downtime. You prevent production outages before an impending service outage such as power failures.

Stretched Cluster Architecture

Stretched clusters consist of two active data sites and one witness site. Each site has its own fault domain. A maximum of 30 hosts can be configured in a vSAN stretched cluster across the data sites

Stretched Cluster Networking

A stretched cluster has the following network requirements:

  1. Connectivity to the management and vSAN network on all three sites
  2. VM network connectivity between the data sites

Both data sites must be connected to a vSphere vMotion network for VM migration.

Preferred Sites

The preferred site is the data site that remains active when a network partition occurs between the two data sites.

If a failure occurs, VMs on the secondary site are powered off and vSphere HA restarts them on the preferred site.

Read Locality

Read locality reduces the latency incurred on read operations because these I/O operations do not traverse the inter-site link.

vSAN with stretched clusters uses a read locality algorithm to read 100 percent from the data copy on the local site.

The local site is the same site where the VM compute resource resides.

Witness Hosts

The witness host is at the witness site and stores the witness components for VM objects:

1. Each stretched cluster must have its own witness host.
2. The witness host cannot run any VMs.
3. The witness host stores only witness components to provide cluster quorum.
4. The witness host is packaged in a virtual appliance.
5. The witness host includes the embedded license.

Note:
a. Network latency between the primary and secondary sites must be < 5 ms RTT, over a 10/20/40 Gbps link
b. Network latency between the witness site and the primary site,
    and between the witness site and the secondary site,
    must be < 500 ms RTT over a link of at least 100 Mbps

Disaster Recovery Capabilities & SRM

Stretched clusters can be used with vSphere Replication and Site Recovery Manager.

Replication between vSAN datastores enables a recovery point objective (RPO) as low as 5 minutes.

# How to configure a stretched cluster
1. Select the vSAN cluster and select Configure >
    vSAN > Fault Domains >
    Configure Stretched Cluster
2. Group hosts into preferred and secondary fault domains
3. Select a witness host
4. Create a disk group on the witness host

If the witness host fails, a new witness host can easily be added to the stretched configuration to replace the failed one.

# How to replace a Witness Host
1. Select the vSAN cluster and select Configure >
    vSAN > Fault Domain >
2. Click Configure Witness Host
3. Expand vCenter and navigate to the new Witness Host
4. Select the new Witness Host

Stretched Clusters and Maintenance Mode

In a stretched cluster, you can use maintenance mode on data site hosts and on the witness host:

  1. For a data site host, select the required vSAN data migration option.
  2. For the witness host, data migration does not occur.

Monitoring Stretched Clusters

Skyline Health provides a range of tests to verify the health status of stretched clusters.

1. Select vSAN cluster and select Monitor >
    vSAN > Skyline Health
2. Select and expand Stretched Cluster to view the list of health checks (see the esxcli sketch after this list):
    a. Witness host not found
    b. Unexpected number of fault domains
    c. Unicast agent configuration inconsistency
    d. Invalid preferred fault domain on
    e. Preferred fault domain unset
    f. Witness host within vCenter cluster
    g. Unicast agent not configured
    h. No disk claimed on witness host
    i. Unsupported host version
    j. Invalid unicast agent
    k. Site latency health
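
The same health checks can also be queried from an ESXi host; a minimal sketch (the exact test names and option syntax may differ between releases):

# List all vSAN health checks and their current status
esxcli vsan health cluster list

# Show details of a single check, for example the stretched cluster site latency check
esxcli vsan health cluster get -t "Site latency health"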

Site Disaster Tolerance

Use stretched cluster aware storage policies to tolerate a site failure. If one data site fails, the replica on the other site is available to continue VM operations:

  1. Configure Site disaster tolerance to govern the fault tolerance between sites.
# Under Availability -> Site disaster tolerance
a. None - standard cluster
b. None - standard cluster with nested fault domains
c. Dual site mirroring (stretched cluster) <------------- Default
d. None - keep data on Preferred (stretched cluster)
e. None - keep data on Non-preferred (stretched cluster)
f. None - stretched cluster
  2. Configure Failures to tolerate to govern fault tolerance within each site.
# Under Availability -> Failures to tolerate (example)
a. 1 failure - RAID 1 (Mirroring)
b. 1 failure - RAID 5 (Erasure Coding)

Site disaster tolerance situation analysis

# 1 - Dual site mirroring with RAID 1
The VM object will have
a. 2 components, plus 1 witness in preferred site
b. 2 components, plus 1 witness in non-preferred site
c. 1 witness object in witness site

# 2 - Dual site mirroring with RAID 5 erasure coding
The VM object will have
a. 4 components in preferred site
b. 4 components in non-preferred site
c. 1 witness object in witness site

Stretched Cluster Heartbeats

vSAN designates a master node on the preferred site and a backup node on the secondary site to send and receive heartbeats.

Heartbeats are sent every second
a. Between the master and backup nodes
b. Between the master node and the witness host 
c. Between the backup node and the witness host 

If communication is lost between the witness host and one of the data sites for five consecutive heartbeats, the witness is considered down until the heartbeats resume.

Failure scenarios analysis

The following explains some of the common failure situations that could affect the stretched cluster operations.

Situation
a. Site disaster tolerance: Dual Site Mirroring
b. Failures to tolerate: 1 – RAID-1 (Mirroring)
c. vSphere HA – Host Monitoring: enabled

# 1 - Secondary site outage
vSphere HA restarts the VMs (that were running on the secondary site) on the preferred site,
    using the replica component that is available on the preferred site and the witness component

# 2 - How does the system respond to the outage of one host in the preferred site
vSphere HA restarts the affected VMs (hosted in the preferred site) in the secondary site,
    because there are not enough hosts remaining in the preferred site to satisfy FTT=1.

# 3 - How does the cluster respond to the outage of the witness host
The components in the two data sites constitute a quorum, so the object remains available.

# 4 - Network outage between the witness and the preferred site
a. The data sites maintain a quorum for the VM data, and all VMs continue running without issue.
b. Because the witness host does not have connectivity to all hosts,
    it is placed in its own network partition to prevent conflicts.

# 5 - Network outage between preferred and secondary sites, and
    vSphere DRS: enabled
a. All VMs running on the preferred site continue to run uninterrupted.
b. All VMs on the secondary site are automatically powered off, and
    vSphere HA restarts them on the preferred site

What happens after recovery? What does vSphere DRS do?
After the outage is resolved, vSphere DRS migrates VMs based on the defined affinity rules in place.

# 6 - The preferred site has failed, vCenter Server also resides on the failed site, and
    vSphere DRS: enabled
a. vSphere HA restarts all VMs (including vCenter Server) from the failed site on the other data site.
b. vSphere HA recovery is independent of the vCenter Server

Two-Node Clusters

https://core.vmware.com/resource/vsan-2-node-cluster-guide

vSAN two-node clusters are implemented with two ESXi hosts and a witness host.

The two-node architecture is ideal for remote office/branch office (ROBO) use cases:

  1. Remote offices are managed centrally by one vCenter Server instance.
  2. Multiple two-node clusters can share the common witness node.
# Not supported
a. Sharing a witness host between a two-node and a stretched cluster
b. Sharing a witness host between multiple stretched clusters

Configuring Two-Node Clusters

You can use the Cluster Quickstart guide to configure a two-node cluster. However, the witness host must be added to the data center, not the cluster, before starting the wizard.

# How to configure two-node cluster
1. Navigate to an existing host cluster.
2. Click the Configure tab.
3. Under vSAN, select Services.
4. Click Configure vSAN to open the Configure vSAN wizard.
5. Select the deployment type of vSAN cluster to configure, and click Next.
    a. Single site vSAN cluster
    b. Two node vSAN cluster
    c. Stretched cluster
    Select Two node vSAN cluster
6. Configure the vSAN services to use, and click Next.
    a. vSphere HA
        Note: Enable vSphere HA after creating the vSAN cluster
    b. vSphere DRS
    c. Configure data management features, including deduplication and compression
        Note: You can choose to enable compression only
    d. Configure encryption
        Data-at-rest encryption and data-in-transit encryption
    e. Large scale cluster support
        Supports up to 64 nodes
7. Claim disks for the vSAN cluster, and click Next.
    Each host requires at least one flash device in the cache tier, 
    and one or more devices in the capacity tier.
8. Review the configuration, and click Finish.

vSAN Storage Space Efficiency

vSAN storage space efficiency techniques reduce the total storage capacity required to meet your workload needs.

Enable the deduplication and compression on a vSAN cluster to eliminate duplicate data and reduce the amount of space required to store data.

You can set the failure tolerance method policy attribute on VMs to use RAID 5 or RAID 6 erasure coding. Erasure coding can protect your data while using less storage space than the default RAID 1 mirroring. You can use TRIM/UNMAP to reclaim storage space, for example, when files are deleted within a virtual disk.

Deduplication and Compression

Enabling deduplication and compression can reduce the amount of physical storage consumed by as much as seven times.

Environments with highly redundant data, such as full-clone virtual desktops and homogeneous server operating systems, naturally benefit the most from deduplication.

Likewise, compression offers more favorable results with data that compresses well, such as text, bitmap, and program files. You can enable deduplication and compression when you create a vSAN all-flash cluster or when you edit an existing vSAN all-flash cluster. Deduplication and compression are enabled as a clusterwide setting, but they are applied on a disk group basis.

When you enable or disable deduplication and compression, vSAN performs a rolling reformat of every disk group on every host, which requires all data to be evacuated. Depending on the data stored on the vSAN datastore, this process might take a long time to complete.

1. Deduplication and compression occur inline when data is written back from the cache tier to the capacity tier.
2. The deduplication algorithm uses a fixed block size and is applied within each disk group.
3. The compression algorithm is applied after deduplication but before the data is written to the capacity tier.

Given the additional compute resource and allocation map overhead of compression, vSAN stores compressed data only if a unique 4K block can be reduced to 2K or less. Otherwise, the block is written uncompressed

Disk Management

Consider the following guidelines when managing disks and disk groups in a cluster with deduplication and compression enabled:

  1. You cannot remove a single disk from a disk group.
  2. You must remove the entire disk group to make modifications.
  3. A single disk failure causes the entire disk group to fail.
  4. Consider adding additional disk groups to increase the cluster storage capacity.
Design Considerations

Consider the following guidelines when you configure the deduplication and compression in a vSAN cluster:

  1. VM storage policies must have either 0 percent or 100 percent object space reservations.
  2. Deduplication and compression are available only on all-flash disk groups.
  3. The processes of deduplication and compression incur compute overhead and potentially impact performance in terms of latency and maximum IOPS.
  4. However, the extreme performance and low latency of flash devices easily outweigh the additional compute resource requirements of deduplication and compression in vSAN.
  5. Enabling deduplication and compression consumes up to 5 percent of the vSAN datastore capacity for metadata, such as hash, translation, and allocation maps.
  6. The space consumed by the deduplication and compression metadata is relative to the size of the vSAN datastore capacity.

Compression-Only Mode

You can enable compression-only mode on an all-flash vSAN cluster to provide storage space efficiency without the overhead of deduplication. Compression-only mode provides the following capabilities:

  1. Compression-only mode can reduce the amount of physical storage consumed by as much as two times.
  2. Compression-only mode reduces the failure domain from the entire disk group to only one disk.
  3. You can scale up a disk group without unmounting it from the vSAN cluster.
  4. The compression-only algorithm moves data from the cache tier to individual capacity disks,
     which also ensures better parallelism and throughput.
# How to configure space efficiency
select a vSAN cluster and go to Configure > 
    vSAN > Services > 
    Space Efficiency > Edit
    Select
        a. Compression only, or
        b. Deduplication and compression mode 
    click Apply

How to Verify Space Efficiency Savings

# To verify the storage space savings information
    select a vSAN cluster and go to Monitor > 
        vSAN > Capacity >
        CAPACITY USAGE

Using RAID 5 or RAID 6 Erasure Coding

You can use RAID 5 or RAID 6 erasure coding to protect against data loss and also increase the storage efficiency.

Erasure coding can provide the same level of data protection as RAID 1 mirroring while using less storage capacity. You can configure RAID 5 on all-flash vSAN clusters with four or more nodes and RAID 6 on clusters with six or more nodes.

Failure Tolerance   Failures        VM              vSAN Storage
    Method          to Tolerate     Disk Size       Capacity Required
-----------------------------------------------------------------------
    RAID 1              1           100 GB              200 GB
    RAID 5              1           100 GB              133 GB
    RAID 1              2           100 GB              300 GB
    RAID 6              2           100 GB              150 GB
    RAID 1              3           100 GB              400 GB
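
As a worked example of the table above: protecting a 100 GB virtual disk with RAID 5 (3 data segments + 1 parity segment) consumes roughly 133 GB of raw capacity (100 GB x 4/3), versus 200 GB for RAID 1 with the same failures to tolerate. RAID 6 (4 data segments + 2 parity segments) consumes 150 GB (100 GB x 6/4), versus 300 GB for RAID 1 with failures to tolerate = 2.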

RAID 5 or RAID 6 erasure coding does not support a failures to tolerate value of 3. 

RAID 5 or RAID 6 erasure coding is a storage policy attribute that you can apply to VM components.

To use RAID 5, set Failures to tolerate to 1 failure - RAID-5 (Erasure Coding). 
To use RAID 6, set Failures to tolerate to 2 failures - RAID-6 (Erasure Coding).

Reclaiming Space Using TRIM/UNMAP

vSAN supports SCSI UNMAP commands directly from a guest OS to reclaim storage space. The guest operating systems can use TRIM/UNMAP commands to reclaim space that is no longer used.

A TRIM/UNMAP command sent from the guest OS can reclaim the previously allocated storage as free space. This opportunistic space efficiency feature can deliver much better storage capacity utilization in vSAN environments

In addition to freeing up storage space in the vSAN environment, TRIM/UNMAP provides the following benefits:

  1. Faster repair
  2. Removal of dirty cache pages

Because reclaimed blocks do not need to be rebalanced or remirrored if a device fails, repairs are much faster. Removal of dirty cache pages from the write buffer reduces the number of blocks that are copied to the capacity tier.
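
As a sketch of how reclamation is actually triggered, assuming TRIM/UNMAP support has been enabled for the cluster and the guest's virtual disks support discard, a Linux guest can issue TRIM manually:

    # Inside a Linux guest VM (assumes the filesystems support discard/TRIM)
    fstrim -av          # issues TRIM for all mounted filesystems and reports how much space was trimmed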

Monitoring TRIM/UNMAP

To monitor TRIM/UNMAP statistics, select a host in the vSAN cluster and go to Monitor > vSAN > Performance > BACKEND.

Unmap Throughput measures UNMAP commands that are being processed by the disk groups of a host.

Recovery Unmap Throughput measures throughput of UNMAP commands being synchronized as part of an object repair following a failure or an absent object.

vSAN Data Persistence Platform (DPp)

vSAN DPp is a management framework that allows third parties to integrate their cloud native applications with vSphere management, lifecycle, and data placement operations.

Applications are changing and the consumption of applications is changing, too. We have recognized this shift here at VMware, as evidenced by our Tanzu suite of products.

Modern apps, in particular those built to run on Kubernetes, are designed to take care of availability, replication, scaling, and encryption within themselves, becoming largely independent of the infrastructure.

The vSAN Data Persistence Platform was announced at VMworld and shipped with vSphere 7.0 Patch 02, which provides initial availability of this new way to deliver partner apps on the platform.

DPp has a number of integrations that partners can take advantage of and they can be broken down into four sections: Observability, Data Placement, Maintenance Operations, and Failure Handling.

Observability

The primary concern of the vSphere administrator is ensuring that workloads and the infrastructure are healthy. With vSAN DPp, partners gain the ability to build vCenter native UI plugins that bring application-specific operations right into vCenter.

For example, you could increase the storage allocation for a particular S3 object store, repair application data should a node go down, or monitor the health status not only of the application but also of individual volumes and nodes within the application itself.

vSAN DPp offers partners the ability to integrate their plugin with the vSphere Skyline Health framework, plugging application-level health awareness right into the vSphere environment.

Data Placement

Cloud native apps provide resiliency and HA capability themselves: they do their own erasure coding, sharding, RAID, or other data-duplication or distribution techniques. Why duplicate the data at the infrastructure layer when the app does it already?

vSAN HCI Data Persistence Platform Data Placement Overview
vSAN SNA (Shared-Nothing Architecture) - Regular vSAN Cluster

vSAN SNA is, in essence, a "regular" vSAN storage policy. It can be equated to an SPBM policy with Failures to Tolerate set to zero (FTT=0), but with a new intelligent DPp-based placement engine that interacts with the application, allowing the app to choose the most suitable fault domains for data placement.

vSAN Direct

vSAN Direct allows you to offer services that are TCO-optimized to use large, cheap, and deep disks for applications like S3 object storage, using devices that are not on the vSAN HCL but are on the vSphere HCL. This direct path also allows the platform to take advantage of the near bare-metal performance of the underlying disks.

It provides an extremely efficient IO path that allows applications direct access to the underlying disks.

Each disk in a vSAN Direct enabled cluster is shown as an individual VMFS datastore in vCenter. vSAN DPp uses its new intelligent placement engine to interact with the application and place the application's replica disks on the appropriate vSAN Direct datastores, again co-locating the data to allow for deterministic topologies, just like vSAN SNA mode. Applications get total access to the disks without having to worry about the performance of VMs or other workloads on the cluster, which makes vSAN Direct ideal for super-dense storage applications.

Maintenance Operations

All vSphere lifecycle operations are hooked into vSAN DPp, meaning that when you put a host into maintenance mode, DPp tells the application that a given node is going away and that the app needs to ensure the data is migrated off or available elsewhere before that happens. Additionally, vSphere maintenance operations wait for the application to give the "all clear" signal before putting the node into maintenance mode, ensuring that the application is always in control of its stateful data.

The lifecycle of not only the infrastructure but also the application is handled: vSAN DPp, coupled with the partner's Kubernetes Operators, takes the best operational knowledge for a given application and builds it directly into the platform, making application upgrades within vSphere seamless and safe.

Failure Handling

The final consideration is failure handling and remediation. vSAN DPp posts failure events to the application as well, allowing the application to take proactive action on failing components on which it resides, just as it does with planned maintenance operations.

Additionally – depending on which partner solution is used, the vSphere admin can also kick off repair operations directly inside the vCenter UI, should an application be in a degraded, but available, state due to failed hardware.

This makes VMware a platform not only for VMs but for all workloads; vSphere is clearly evolving from an infrastructure platform into a services platform.

vSAN Operation

Configure vSAN
#**** How to configure vSAN
# Prerequisites
1. Verify that your environment meets all requirements.
2. Create a cluster and add hosts to the cluster before enabling and configuring vSAN.

# Procedure
1. Navigate to an existing host cluster.
2. Click the Configure tab.
3. Under vSAN, select Services.
4. Click Configure vSAN to open the Configure vSAN wizard.
5. Select the type of vSAN cluster to configure, and click Next.
    a. Single site cluster
    b. Single site cluster with custom fault domains.
    c. Two node vSAN cluster.
    d. Stretched cluster.
    e. vSAN HCI Mesh compute cluster.
6. Configure the vSAN services to use, and click Next.
    Configure data management features, including deduplication and compression, 
    data-at-rest encryption, and data-in-transit encryption.
7. Claim disks for the vSAN cluster, and click Next.
    Each host requires at least one flash device in the cache tier, 
    and one or more devices in the capacity tier.
8. Review the configuration, and click Finish.
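
After the wizard completes, you can confirm from any host's shell that the host has joined the vSAN cluster, using the same command shown in the esxcli section later in this guide:

    esxcli vsan cluster get         # shows the sub-cluster UUID, local node state, and member count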
Disable vSAN

You can turn off vSAN for a host cluster.

When you disable the vSAN cluster, all virtual machines and data services located on the vSAN datastore become inaccessible. If you have consumed storage on the vSAN cluster using vSAN Direct, the vSAN Direct monitoring services, such as health checks, space reporting, and performance monitoring, are also disabled. If you intend to use the virtual machines while vSAN is disabled, make sure you migrate them from the vSAN datastore to another datastore before disabling the vSAN cluster.

# Prerequisites
Verify that the hosts are in maintenance mode.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, select Services.
4. Click Turn Off vSAN.
5. On the Turn Off vSAN dialog, confirm your selection.
How to disable vSAN health check during maintenance

https://kb.vmware.com/s/article/2151813

It can be done via

1. GUI
2. RVC
3. vSAN Management API

Shutting Down the vSAN Cluster

When necessary, you can shut down the entire vSAN cluster.

If you plan to shut down the vSAN cluster, you must not disable vSAN on the cluster manually.

# Shut down the vSAN cluster - Procedure
1. Verify the vSAN health to confirm that the cluster is healthy.
2. Power off all virtual machines (VMs) running in the vSAN cluster, if vCenter Server is not running on the cluster. 
Note: 
    a. If vCenter Server is hosted in the vSAN cluster, do not power off the vCenter Server VM.
    b. For vSphere 7.0 U1 and later, enable vCLS retreat mode. For more information, 
        see the VMware knowledge base article at https://kb.vmware.com/s/article/80472.
3. Click the Configure tab and turn off HA. 
    As a result, the cluster does not register host shutdowns as failures.
4. Verify that all resynchronization tasks are complete.
    Click the Monitor tab and select vSAN > Resyncing Objects.
5. If vCenter Server is hosted in the vSAN cluster, power off the vCenter Server VM. 
    The vSphere Client becomes unavailable.

Make a note of the host that runs the vCenter Server VM, so that you know on which host 
    to restart the vCenter Server VM during the restart process.
6. Disable cluster member updates from vCenter Server by running the following command on the ESXi hosts in the cluster.
    Ensure that you run the following command on all the hosts.
        esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
7. Log in to any host in the cluster other than the witness host.
8. Run the following command only on that host. If you run the command on multiple hosts concurrently, 
    it may cause a race condition and produce unexpected results.
        python /usr/lib/vmware/vsan/bin/reboot_helper.py prepare

    The command returns and prints the following:
        Cluster preparation is done.

    Note
        a. The cluster is fully partitioned after the successful completion of the command.
        b. If you encounter an error, resolve the issue based on the error message and try
            enabling vCLS retreat mode again.
        c. If there are unhealthy or disconnected hosts in the cluster, 
            remove the hosts and retry running the command.

9. Place all the hosts into the maintenance mode with 'No Action' mode. 
    If the vCenter Server is powered off, use the following command to place the ESXi hosts into the
    maintenance mode with 'No Action' mode.
        esxcli system maintenanceMode set -e true -m noAction
    
    Perform this step on all the hosts.

    To avoid the risk of data unavailability while using "No Action" mode at the same time on
    multiple hosts, followed by a reboot of multiple hosts, see the VMware knowledge base
    article at https://kb.vmware.com/s/article/60424. 
    To perform simultaneous reboot of all hosts in the cluster using a built-in tool, 
    see the VMware knowledge base article at
        https://kb.vmware.com/s/article/70650.
10. After all hosts have successfully entered the maintenance mode, 
    perform any necessary maintenance tasks and power off the hosts.
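
Before powering off, you can confirm from each host's shell that it has entered maintenance mode; a quick check, assuming SSH access to the hosts:

    esxcli system maintenanceMode get       # returns Enabled when the host is in maintenance mode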

Restart the vSAN Cluster

# Procedure
1. Power on the ESXi hosts.
    Power on the physical box where ESXi is installed. The ESXi host starts, 
        locates the VMs, and functions normally.
    If any hosts fail to come up, you must manually recover the hosts or move the bad hosts
        out of the vSAN cluster.
2. When all the hosts are back after powering on, exit all hosts from the maintenance mode.
    If the vCenter Server is powered off, 
        use the following command on the ESXi hosts to exit the maintenance mode.
            esxcli system maintenanceMode set -e false
    Perform this step on all the hosts.
3. Log in to one of the hosts in the cluster other than the witness host.
4. Run the following command only on that host. If you run the command on multiple hosts concurrently, 
    it may cause a race condition and produce unexpected results.
        python /usr/lib/vmware/vsan/bin/reboot_helper.py recover
    The command returns and prints the following:
        Cluster reboot/power-on is completed successfully!
5. Verify that all the hosts are available in the cluster by running the following command on each host.
    esxcli vsan cluster get
6. Enable cluster member updates from vCenter Server by running the following command
    on the ESXi hosts in the cluster. Ensure that you run the following command on all the hosts.
        esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates
7. Restart the vCenter Server VM if it is powered off. 
    Wait for the vCenter Server VM to be powered up and running. 
    To disable the vCLS retreat mode, see the VMware knowledge base article at https://kb.vmware.com/s/article/80472.
8. Verify again that all the hosts are participating in the vSAN cluster by running the following command on each host.
    esxcli vsan cluster get
9. Restart the remaining VMs through vCenter Server.
10. Check the vSAN health service and resolve any outstanding issues.
11. (Optional) If the vSAN cluster has vSphere Availability enabled, you must manually restart
    vSphere Availability to avoid the following error: 
        Cannot find vSphere HA master agent.

    To manually restart vSphere Availability, select the vSAN cluster and navigate to:
    a. Configure > Services > vSphere Availability > EDIT > Disable vSphere HA
    b. Configure > Services > vSphere Availability > EDIT > Enable vSphere HA
12. If there are unhealthy or disconnected hosts in the cluster, recover or remove the hosts from
        the vSAN cluster. Retry the above commands only after the vSAN health shows all available
        hosts in the green state.
    If the vSAN environment is a three-node cluster, the command 
        reboot_helper.py recover cannot work when one host has failed.
        
        As an administrator, do the following:
        a. Temporarily remove the failed host's information from the unicast agent list.
        b. Add the host back after running the following command.
            reboot_helper.py recover

    Following are the commands to remove and add the host to a vSAN cluster:
        esxcli vsan cluster unicastagent remove -a <IP Address> -t node -u <NodeUuid>
        esxcli vsan cluster unicastagent add -t node -u <NodeUuid> -U true -a <IP Address> -p 12321
Create a Disk Group on a vSAN Host

You can manually combine specific cache devices with specific capacity devices to define disk groups on a particular host.

In this method, you manually select devices to create a disk group for a host. You add one cache device and at least one capacity device to the disk group.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Click Claim unused disks.
5. Group by host.
6. Select disks to claim.
    a. Select the flash device to use for the cache tier.
    b. Select the disks to use for the capacity tier.
7. Click Create or OK to confirm your selections.
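
For lab or scripted builds, a disk group can also be created from the host's shell. A minimal sketch, where the device names are placeholders for the actual cache and capacity devices on the host:

    # Create a disk group from an ESXi host (device names are placeholders)
    esxcli vsan storage add -s naa.cache_device_id -d naa.capacity_device_id
    esxcli vsan storage list        # verify the new disk group and the role of each device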
Claim Storage Devices for a vSAN Cluster
# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Click Claim Unused Disks.
5. Select devices to add to disk groups.
    a. For hybrid disk groups, each host that contributes storage must contribute one flash
        cache device and one or more HDD capacity devices. 
        You can add only one cache device per disk group.
        i. Select a flash device to be used as cache and click Claim for cache tier.
        ii. Select an HDD device to be used as capacity and click Claim for capacity tier.
iii. Click Create or OK.
    b. For all-flash disk groups, each host that contributes storage must contribute one flash
        cache device and one or more flash capacity devices. 
        You can add only one cache device per disk group.
            i. Select a flash device to be used as cache and click Claim for cache tier.
            ii. Select a flash device to be used for capacity and click Claim for capacity tier.
            iii. Click Create or OK.

To verify the role of each device added to the all-flash disk group, navigate to the Disk Role column at the bottom of the Disk Management page. The column shows the list of devices and their purpose in a disk group.

vSAN claims the devices that you selected and organizes them into default disk groups that support the vSAN datastore.

Remove Disk Groups or Devices from vSAN

Typically, you delete devices or disk groups from vSAN when you are upgrading a device or replacing a failed device.

Deleting a disk group permanently deletes the disk membership and the data stored on the devices.
# Prerequisites
Run data migration pre-check on the device or disk group before you remove it from the cluster. 

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Remove a disk group or selected devices.

Option                          Description
------------------------------------------------------------
Remove the Disk Group       a. Under Disk Groups, select the disk group to remove, and click …, 
                                then Remove.
                            b. Select a data evacuation mode.
Remove the Selected Device  a. Under Disk Groups, select the disk group 
                                that contains the device that you are removing.
                            b. Under Disks, select the device to remove, and click the Remove Disk(s).
                            c. Select a data evacuation mode.
5. Click Yes or Remove to confirm.
The data is evacuated from the selected devices or disk group.
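
The equivalent removal can also be performed from the host's shell. A minimal sketch, where the device name is a placeholder; the evacuation-mode option and its values can vary by release, so verify them with esxcli vsan storage remove --help first:

    # Remove a capacity device from its disk group (device name is a placeholder)
    esxcli vsan storage remove -d naa.capacity_device_id -m evacuateAllData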
Recreate a Disk Group

When you recreate a disk group in the vSAN cluster, the existing disks are removed from the disk group, and the disk group is deleted. vSAN recreates the disk group with the same disks.

When you recreate a disk group on a vSAN cluster, vSAN manages the process for you. vSAN evacuates data from all disks in the disk group, removes the disk group, and creates the disk group with the same disks.

# Procedure
1. Navigate to the vSAN cluster in the vSphere Client.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Under Disk Groups, select the disk group to recreate.
5. Click …, then click the Recreate.
    The Recreate Disk Group dialog box appears.
6. Select a data migration mode, and click Recreate.
Using Locator LEDs

You can use locator LEDs to identify the location of storage devices.

vSAN can light the locator LED on a failed device so that you can easily identify the device. This is particularly useful when you are working with multiple hot plug and host swap scenarios.

# Prerequisites
1. Verify that you have installed the supported drivers for storage I/O controllers that enable this feature. 
    For information about the drivers that are certified by VMware,
     see the VMware Compatibility Guide.
2. In some cases, you might need to use third-party utilities to configure the Locator LED
    feature on your storage I/O controllers.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Select a host to view the list of devices.
5. At the bottom of the page, select one or more storage devices from the list, 
    and enable or disable the locator LEDs on the selected devices.
Option                      Action
-----------------------------------------------------------------------------
Turn on LED         Enables locator LED on the selected storage device. 
                    You can enable locator LEDs from the Manage tab and click Storage > Storage Devices.
Turn off LED        Disables locator LED on the selected storage device. 
                    You can disable locator LEDs from the Manage tab and click Storage > Storage Devices.
Add a Capacity Device

You can add a capacity device to an existing vSAN disk group. You cannot add a shared device to a disk group.

# Prerequisites
Verify that the device is formatted and is not in use.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Select a disk group.
5. Click the Add Disks at the bottom of the page.
6. Select the capacity device that you want to add to the disk group.
7. Click OK or Add.

The device is added to the disk group.
Remove Partition From Devices

You can remove partition information from a device so vSAN can claim the device for use.

If you have added a device that contains residual data or partition information, you must remove all preexisting partition information from the device before you can claim it for vSAN use.

# Prerequisites
Verify that the device is not in use by ESXi as boot disk, VMFS datastore, or vSAN.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Disk Management.
4. Select a host to view the list of available devices.
5. From the Show drop-down menu, select Ineligible.
6. Select a device from the list and click Erase partitions.
7. Click OK to confirm.

The device is clean and does not include any partition information.
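
The same cleanup can be done from the host's shell with partedUtil, which is also used later in this guide. A minimal sketch, where the device name is a placeholder:

    # Inspect and clear partition information (device name is a placeholder)
    partedUtil getptbl /dev/disks/naa.device_id         # list the current partition table
    partedUtil mklabel /dev/disks/naa.device_id gpt     # write a fresh gpt label, removing the old partitions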

vSAN Space Efficiency

You can use space efficiency techniques to reduce the amount of space needed to store data. These techniques reduce the total storage capacity required to meet your needs.

vSAN 6.7 Update 1 and later supports SCSI unmap commands that enable you to reclaim storage space that is mapped to a deleted vSAN object.

Reclaiming Space with SCSI Unmap

vSAN 6.7 Update 1 and later supports SCSI UNMAP commands that enable you to reclaim storage space that is mapped to deleted files in the file system created by the guest on the vSAN object.

Upgrading the vSAN Cluster

Upgrading vSAN is a multistage process, in which you must perform the upgrade procedures in the order described below.

Before you attempt to upgrade, make sure you understand the complete upgrade process clearly to ensure a smooth and uninterrupted upgrade.

The vSAN cluster upgrade proceeds in the following sequence of tasks.

  1. Upgrade the vCenter Server.
  2. Upgrade the ESXi hosts.
  3. Upgrade the vSAN disk format. Upgrading the disk format is optional, but for best results, upgrade the objects to use the latest version. The on-disk format exposes your environment to the complete feature set of vSAN.
Before You Upgrade vSAN

Plan and design your upgrade to be fail-safe. Before you attempt to upgrade vSAN, verify that your environment meets the vSphere hardware and software requirements.

Upgrade Prerequisite

Review the key requirements before you upgrade your cluster to a new vSAN version.

1. Software, hardware, drivers, firmware, and storage I/O controllers
    Verify that the new version of vSAN supports the software and hardware components, drivers, firmware, 
    and storage I/O controllers that you plan on using. 
2. vSAN version
    Verify that you are upgrading to the latest version of vSAN.
3. Disk space
    Verify that you have enough space available to complete the software version upgrade. 
    The amount of disk storage needed for the vCenter Server installation depends on your vCenter Server configuration.
4. vSAN disk format
    Verify that you have enough capacity available to upgrade the disk format. 
5. vSAN hosts
    Verify that you have placed the vSAN hosts in maintenance mode and 
    selected the Ensure data accessibility or Evacuate all data option.
    
    Note:
    You can use the vSphere Lifecycle Manager for automating and testing
    the upgrade process. However, when you use vSphere Lifecycle
    Manager to upgrade vSAN, the default evacuation mode is Ensure data
    accessibility.
6. Virtual Machines
    Verify that you have backed up your virtual machines.
Upgrading the Witness Host in a Two Host or Stretched Cluster

The witness host for a two host cluster or stretched cluster is located outside of the vSAN cluster, but it is managed by the same vCenter Server. You can use the same process to upgrade the witness host as you use for a vSAN data host.

Do not upgrade the witness host until all data hosts have been upgraded and have exited maintenance mode.

vSAN Monitoring and Troubleshooting

You can monitor the vSAN cluster and all the objects related to it.

You can monitor all of the objects in a vSAN environment, including hosts that participate in a vSAN cluster and the vSAN datastore.

Monitor vSAN Capacity

You can monitor the capacity of the vSAN datastore, analyze usage, and view the capacity breakdown at the cluster level.

Monitor Physical Devices

You can monitor hosts, cache devices, and capacity devices used in the vSAN cluster.

Monitor Devices that Participate in vSAN Datastores

Verify the status of the devices that back the vSAN datastore. You can check whether the devices experience any problems.

Monitor Virtual Objects in the vSAN Cluster

You can view the status of virtual objects in the vSAN cluster.

Reserved Capacity

vSAN requires capacity for its internal operations. For a cluster to be able to tolerate a single host failure, vSAN requires free space to restore the data of the failed host. The capacity required to restore a host failure matches the total capacity of the largest host in the cluster.

vSAN Cluster Resynchronization

You can monitor the status of virtual machine objects that are being resynchronized in the vSAN cluster.

vSAN Cluster Rebalancing

When any capacity device in your cluster reaches 80 percent full, vSAN automatically rebalances the cluster, until the space available on all capacity devices is below the threshold.

vSAN Default Alarms

You can use the default vSAN alarms to monitor the cluster, hosts, and existing vSAN licenses.

VMkernel Observations for Creating Alarms

VMkernel Observations (VOBs) are system events that you can use to set up vSAN alarms for monitoring and troubleshooting performance and networking issues in the vSAN cluster. In vSAN, these events are known as observations.

Troubleshooting and operations

How to identify unassociated vSAN objects

There are many instances of what gets classed as an unassociated object; the following are typical examples, not an exhaustive list:

  1. Any VM data objects (e.g. namespace, vmdk, snapshot, vswap, vmem) of a VM that is not currently registered in vSphere inventory.
  2. vSAN iSCSI target vmdk and namespace objects.
  3. Content Library namespace objects.
  4. vSAN stats objects used for storing vSAN Performance data.
  5. vswp objects belonging to appliances, e.g., NSX controllers, vRLI appliances, and vROps appliances.
#Run RVC command (Ruby vSphere Console)
    vsan.obj_status_report -t <pathToCluster>

Identification can also be done via PowerCLI.
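
For reference, a typical RVC invocation looks like the following, assuming the default inventory path layout of localhost > datacenter > computers > cluster (the datacenter and cluster names are placeholders):

    vsan.obj_status_report -t /localhost/Datacenter-Name/computers/Cluster-Name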

Note:
1. Unassociated does not necessarily mean unused/unneeded.
2. Prior to performing any actions that could potentially cause loss of data such as deleting vSAN objects, 
    care should always be taken to positively identify the Object(s) and confirm that they are not needed and are safe to delete.

vSAN Skyline Health check list

In vSAN 7 update 2, under Monitor > vSAN > Skyline Health, it shows the list of health checks available

  1. Physical Disk
a. Operation health
b. Disk capacity
c. Congestion
d. Component limit health
e. Component metadata health
f. Memory pools (heaps)
g. Memory pools (slabs)
h. vSAN max component size
  2. Hardware Compatibility
a. vSAN HCL DB up-to-date
b. vSAN HCL DB auto update
c. Controller is VMware certified for ESXi release
d. Controller driver is VMware certified
e. Controller firmware is VMware certified
f. Controller disk group mode is VMware certified
g. vSAN firmware revision recommendation
h. SCSI controller is VMware certified
  3. Online Health
a. Advisor
b. vSAN support Insight
  4. vSAN Build Recommendation
a. vSAN release catalog up-to-date
b. vSAN build recommendation
c. vSAN build recommendation engine health
  5. Network
a. Host with connection issue
b. vSAN cluster partition
c. All hosts have a vSAN vmknic configured
d. Host disconnected from VC
e. vSAN: Basic (Unicast) connectivity check
f. vSAN: MTU check (ping with large packet size)
g. vMotion: Basic (Unicast) connectivity check
h. vMotion: MTU check (ping with large packet size)
i. Network latency check
j. Physical network adapter link speed consistency
  6. Data
a. vSAN Object Health
b. vSAN Object format health
  7. Cluster
a. Time is synchronized across hosts and VC
b. Advanced vSAN configuration in Sync
c. vSAN daemon liveness
d. vSAN disk balance
e. Resync operations throttling
f. vCenter state is authoritative
g. vSAN cluster configuration consistency
h. vSphere cluster members match vSAN cluster
i. Software version compatibility
j. Disk format version
k. vSAN extended configuration in sync
  8. Capacity Utilization
a. Disk space
b. Read cache reservations
c. Component
d. What if the most consumed host fails
  9. Performance Service
a. Stats DB object
b. Stats primary election
    i. CMMDS primary
    ii. Stats primary
c. Performance data collection
d. All hosts contributing stats
e. Stats DB object conflicts
  10. Stretched Cluster (This option only appears in stretched vSAN clusters)
a. Witness lost within vCenter cluster
b. Preferred fault domain unset
c. Unexpected number of fault domains
d. Witness host not found
e. Unicast agent configuration inconsistent
f. Witness host fault domain misconfigured
g. Invalid unicast agent
h. Invalid preferred fault domain on witness host
i. No disk claimed on witness host
j. Unicast agent not configured
k. Site latency health

Useful esxcli vSAN commands and vsantop

esxcli vsan health cluster list     # List all vSAN Skyline health check list
esxcli vsan cluster get             # list members of vSAN cluster, member state, etc
esxcli vsan debug controller list   # Show queue depth
esxcli vsan debug disk list         # Show all capacity disks
esxcli vsan network list            # Show vSAN vmkernel adapter, such as vmk1

vsantop (Only available from vSAN 6.7 U3 and later)

Deploy vSAN witness appliance

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vsan-planning.doc/GUID-05C1737A-5FBA-4AEE-BDB8-3BF5DE569E0A.html

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-05C1737A-5FBA-4AEE-BDB8-3BF5DE569E0A.html

You also must select a datastore for the vSAN witness appliance. The witness appliance must use a different datastore than the vSAN stretched cluster datastore.

# Procedure
1. Download the appliance from the VMware website.
2. Deploy the appliance to a vSAN host or cluster. 
    See Deploying OVF Templates in the vSphere Virtual Machine Administration documentation.
3. Configure the vSAN network on the witness appliance.
    The vSAN witness appliance includes two preconfigured network adapters. 
    You must change the configuration of the second adapter so that the appliance can connect to the vSAN network.
    a. Navigate to the virtual appliance that contains the witness host.
    b. Right-click the appliance and select Edit Settings.
    c. On the Virtual Hardware tab, expand the second Network adapter.
    d. From the drop-down menu, select the vSAN port group and click OK.
4. Configure the management network on the witness appliance.
    a. Power on the witness appliance and open its console.
        Because the appliance is an ESXi host, you see the Direct Console User Interface (DCUI).
    b. Press F2 and navigate to the Network Adapters page.
    c. On the Network Adapters page, verify that at least one vmnic is selected for transport.
    d. Configure the IPv4 parameters for the management network.
        i. Navigate to the IPv4 Configuration section and change the default DHCP setting to static.
        ii. Enter the following settings:
            - IP address
            - Subnet mask
            - Default gateway
    e. Configure DNS parameters.
        - Primary DNS server
        - Alternate DNS server
        - Hostname
5. Add the appliance to vCenter Server as a witness ESXi host. 
    Make sure to configure the vSAN VMkernel interface on the host.
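
If the witness host's vSAN VMkernel interface needs to be tagged for vSAN traffic from the command line, a minimal sketch (assuming vmk1 is the interface connected to the vSAN network):

    esxcli vsan network ip add -i vmk1      # tag vmk1 for vSAN traffic
    esxcli vsan network list                # verify that the interface is now used for vSAN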

Change the witness host

You can change the witness host for a vSAN stretched cluster. Change the ESXi host used as a witness host for your vSAN stretched cluster.

# Prerequisites
Verify that the witness host is not in use.

# Procedure
1. Navigate to the vSAN cluster.
2. Click the Configure tab.
3. Under vSAN, click Fault Domains.
    Option          Description
    --------------------------------------------------------
vSphere Client      a. Click the Change button. The Change Witness Host wizard opens.
                    b. Select a new host to use as a witness host, and click Next.
                    c. Claim disks on the new witness host, and click Next.
vSphere Web Client  a. Click the Change witness host button.
                    b. Select a new host to use as a witness host, and click Next.
                    c. Claim disks on the new witness host, and click Next.
        
4. On the Ready to complete page, review the configuration, and click Finish.

How to delete vSAN objects

https://knowaretech.com/2021/03/29/deleting-inaccessible-objects-from-vsan/

https://vinfrastructure.it/2019/11/purge-inaccessible-objects-in-vmware-vsan/

Use the following steps to identify and remove inaccessible or orphaned objects from vSAN.

1. SSH to vCenter or VCSA, login as root
2. Start RVC from the VCSA console
    # RVC - Ruby vSphere Console

    Command> rvc localhost
3. List all the datacenter folders
    ls      # It shows the datacenter folders, each with a number in the first column
4. Change directory into the datacenter folder
    cd <datacenter number>
5. List all the object folders in the datacenter folder
    ls
6. Change directory to "computers", normally "1"
    cd 1
7. List all the clusters and ESXi hosts
    ls
8. Change directory to the cluster
    cd <cluster name>
9. Verify the vSAN cluster contents
    ls
10. Run the vsan.check_state command
    vsan.check_state -r cluster-name
    
    # This checks the object state and tries to refresh the objects. 
    It then lists all the inaccessible objects. Note down the UUIDs of the inaccessible objects.
11. Run the following command to find where the object is hosted
    vsan.cmmds_find -u UUID cluster-name
    
    # Alternative command
        vsan.cmmds_find -u UUID     <UUID of the inaccessible object>

    # The output shows where the object is hosted and what type of object it is.

12. To delete the object, SSH to the respective owner node and delete the object using the UUID that you noted earlier.
    /usr/lib/vmware/osfs/bin/objtool delete -u UUID -f -v 10

    Note:
    Repeat the process for each of the objects.
How to clear the configuration of a disk that was previously used as a vSAN disk

A disk that was previously enabled as a vSAN disk must have its configuration cleared; otherwise, when you configure vSAN, the disk is not available for claiming.

1. SSH to the ESXi host
2. Change to disk directory, where all the disks are configured
    cd /dev/disks
3. List all the disks, note down all the disk labels and configuration
    ls
4. Change the disk label
    partedUtil mklabel /dev/disks/<disk to be cleared>  gpt         # change disk label to gpt
5. Verify the disk label has been updated
    vdq -qH         # Upper case "H"

vSAN disk group

vmware -vl      # verify ESXi version

esxcli vsan cluster get
esxcli vsan debug object health summary get
esxcli vsan storage list | grep -i dedup        # verify deduplication
esxcli vsan storage list | grep -i comp         # verify compression

df -h
vdq -iH     # verify disk groups