Nested ESXi: Silence unsupported controller health



First, check the silent status of all vSAN health checks for the cluster:

vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm02/

> vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm02/
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Silent Status of Cluster mgm02:
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Health Check                                                                                       | Health Check Id                       | Silent Status |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Capacity utilization                                                                               |                                       |               |
|   Cluster and host component utilization                                                           | nodecomponentlimit                    | Normal        |
|   Read cache reservations                                                                          | rcreservation                         | Normal        |
|   Storage space                                                                                    | diskspace                             | Normal        |
|   What if the most consumed host fails                                                             | limit1hf                              | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Cluster                                                                                            |                                       |               |
|   Advanced vSAN configuration in sync                                                              | advcfgsync                            | Normal        |
|   Disk format version                                                                              | upgradelowerhosts                     | Normal        |
|   ESA prescriptive disk claim                                                                      | ddsconfig                             | Normal        |
|   Host Maintenance Mode                                                                            | mmdecominsync                         | Normal        |
|   Maximum host number in vSAN over RDMA                                                            | rdmanodes                             | Normal        |
|   Resync operations throttling                                                                     | resynclimit                           | Normal        |
|   Software version compatibility                                                                   | upgradesoftware                       | Normal        |
|   Time is synchronized across hosts and VC                                                         | timedrift                             | Normal        |
|   VMware vCenter state is authoritative                                                            | vcauthoritative                       | Normal        |
|   VSAN ESA Conversion Health                                                                       | esaconversionhealth                   | Normal        |
|   vSAN Direct homogeneous disk claiming                                                            | vsandconfigconsistency                | Normal        |
|   vSAN Disk Balance                                                                                | diskbalance                           | Normal        |
|   vSAN Managed disk claim                                                                          | hcldiskclaimcheck                     | Normal        |
|   vSAN cluster configuration consistency                                                           | consistentconfig                      | Normal        |
|   vSAN daemon liveness                                                                             | clomdliveness                         | Normal        |
|   vSAN disk group layout                                                                           | dglayout                              | Normal        |
|   vSAN extended configuration in sync                                                              | extendedconfig                        | Normal        |
|   vSAN optimal datastore default policy configuration                                              | optimaldsdefaultpolicy                | Normal        |
|   vSphere Lifecycle Manager (vLCM) configuration                                                   | vsanesavlcmcheck                      | Normal        |
|   vSphere cluster members match vSAN cluster members                                               | clustermembership                     | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data                                                                                               |                                       |               |
|   vSAN object format health                                                                        | objectformat                          | Normal        |
|   vSAN object health                                                                               | objecthealth                          | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data-at-rest encryption                                                                            |                                       |               |
|   CPU AES-NI is enabled on hosts                                                                   | hostcpuaesni                          | Normal        |
|   VMware vCenter and all hosts are connected to Key Management Servers                             | kmsconnection                         | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.dualcloudhealth.testname                                 | dualencryption                        | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data-in-transit encryption                                                                         |                                       |               |
|   Configuration check                                                                              | ditconfig                             | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| File Service                                                                                       |                                       |               |
|   File Server Health                                                                               | fileserver                            | Normal        |
|   Infrastructure Health                                                                            | host                                  | Normal        |
|   Share Health                                                                                     | sharehealth                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Hardware compatibility                                                                             |                                       |               |
|   Controller disk group mode is VMware certified                                                   | controllerdiskmode                    | Normal        |
|   Controller driver is VMware certified                                                            | controllerdriver                      | Normal        |
|   Controller firmware is VMware certified                                                          | controllerfirmware                    | Normal        |
|   Controller is VMware certified for ESXi release                                                  | controllerreleasesupport              | Normal        |
|   Controller with pass-through and RAID disks                                                      | mixedmode                             | Normal        |
|   HPE NVMe Solid State Drives - critical firmware upgrade required                                 | vsanhpefwtest                         | Normal        |
|   Host issues retrieving hardware info                                                             | hclhostbadstate                       | Normal        |
|   Host physical memory compliance check                                                            | hostmemcheck                          | Normal        |
|   NVMe device is VMware certified                                                                  | nvmeonhcl                             | Normal        |
|   Network (RDMA NIC: RoCE v2) driver/firmware is vSAN certified                                    | rdmanicsupportdriverfirmware          | Normal        |
|   Network (RDMA NIC: RoCE v2) is certified for ESXi release                                        | rdmanicsupportesxrelease              | Normal        |
|   Network (RDMA NIC: RoCE v2) is vSAN certified                                                    | rdmaniciscertified                    | Normal        |
|   Physical NIC link speed meets requirements                                                       | pniclinkspeed                         | Normal        |
|   RAID controller configuration                                                                    | controllercacheconfig                 | Normal        |
|   SCSI controller is VMware certified                                                              | controlleronhcl                       | Normal        |
|   vSAN HCL DB Auto Update                                                                          | autohclupdate                         | Normal        |
|   vSAN HCL DB up-to-date                                                                           | hcldbuptodate                         | Normal        |
|   vSAN and VMFS datastores on a Dell H730 controller with the lsi_mr3 driver                       | mixedmodeh730                         | Normal        |
|   vSAN configuration for LSI-3108 based controller                                                 | h730                                  | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Hyperconverged cluster configuration compliance                                                    |                                       |               |
|   Host compliance check for hyperconverged cluster configuration                                   | hosthciconfig                         | Normal        |
|   VDS compliance check for hyperconverged cluster configuration                                    | dvshciconfig                          | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Network                                                                                            |                                       |               |
|   Active multicast connectivity check                                                              | multicastdeepdive                     | Normal        |
|   All hosts have a dedicated vSAN Max Client vmknic configured in server cluster                   | vsanexternalvmknic                    | Normal        |
|   All hosts have a vSAN vmknic configured                                                          | vsanvmknic                            | Normal        |
|   All hosts have matching multicast settings                                                       | multicastsettings                     | Normal        |
|   Hosts disconnected from VC                                                                       | hostdisconnected                      | Normal        |
|   Hosts with LACP issues                                                                           | lacpstatus                            | Normal        |
|   Hosts with connectivity issues                                                                   | hostconnectivity                      | Normal        |
|   Hosts with duplicate IP addresses                                                                | duplicateip                           | Normal        |
|   Hosts with pNIC TSO issues                                                                       | pnictso                               | Normal        |
|   Multicast assessment based on other checks                                                       | multicastsuspected                    | Normal        |
|   Network latency check                                                                            | hostlatencycheck                      | Normal        |
|   No hosts in remote vSAN have multiple vSAN vmknics configured                                    | multiplevsanvmknic                    | Normal        |
|   Physical network adapter link speed consistency                                                  | pnicconsistent                        | Normal        |
|   RDMA Configuration Health                                                                        | rdmaconfig                            | Normal        |
|   Remote VMware vCenter network connectivity                                                       | xvcconnectivity                       | Normal        |
|   Server Cluster Partition                                                                         | serverpartition                       | Normal        |
|   vMotion: Basic (unicast) connectivity check                                                      | vmotionpingsmall                      | Normal        |
|   vMotion: MTU check (ping with large packet size)                                                 | vmotionpinglarge                      | Normal        |
|   vSAN Max Client Network connectivity check                                                       | externalconnectivity                  | Normal        |
|   vSAN cluster partition                                                                           | clusterpartition                      | Normal        |
|   vSAN: Advanced (https) connectivity check                                                        | interhostconnectivity                 | Normal        |
|   vSAN: Basic (unicast) connectivity check                                                         | smallping                             | Normal        |
|   vSAN: MTU check (ping with large packet size)                                                    | largeping                             | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Online health                                                                                      |                                       |               |
|   A possible storage capacity limitation with vSAN OSA versions 8.0U2 and 8.0U2b                   | lsomadvconfig                         | Normal        |
|   Advisor                                                                                          | advisor                               | Normal        |
|   Audit CEIP Collected Data                                                                        | auditceip                             | Normal        |
|   CNS Critical Alert - Patch available with important fixes                                        | cnspatchalert                         | Normal        |
|   Controller with pass-through and RAID disks                                                      | mixedmode                             | Normal        |
|   Coredump partition size check                                                                    | coredumpartitionsize                  | Normal        |
|   Critical vSAN patch is available for vSAN ESA                                                    | laurelalert                           | Normal        |
|   Customer advisory for HPE Smart Array                                                            | vsanhpesmartarraytest                 | Normal        |
|   Disks usage on storage controller                                                                | diskusage                             | Normal        |
|   Dual encryption applied to VMs on vSAN                                                           | dualencryption                        | Normal        |
|   ESXi system logs stored outside vSAN datastore                                                   | scratchconfig                         | Normal        |
|   End of general support for lower vSphere version                                                 | eoscheck                              | Normal        |
|   Fix is available for a critical vSAN software defect with Guest Trim/Unmap configuration enabled | unmaptest                             | Normal        |
|   HPE NVMe Solid State Drives - critical firmware upgrade required                                 | vsanhpefwtest                         | Normal        |
|   HPE SAS Solid State Drive                                                                        | hpesasssd                             | Normal        |
|   Hardware compatibility issue for witness appliance                                               | witnesshw                             | Normal        |
|   Important patch available for vSAN issue                                                         | fsvlcmpatchalert                      | Normal        |
|   Maximum host number in vSAN over RDMA                                                            | rdmanodesalert                        | Normal        |
|   Multiple VMs share the same vSAN home namespace                                                  | vmns                                  | Normal        |
|   Patch available for critical vSAN issue for All-Flash clusters with deduplication enabled        | patchalert                            | Normal        |
|   Physical network adapter link speed consistency                                                  | pnicconsistent                        | Normal        |
|   Proper vSAN network traffic shaping policy is configured                                         | dvsportspeedlimit                     | Normal        |
|   RAID controller configuration                                                                    | controllercacheconfig                 | Normal        |
|   Thick-provisioned VMs on vSAN                                                                    | thickprovision                        | Normal        |
|   Update release available for vSAN ESA                                                            | marigoldalert                         | Normal        |
|   Update release available for vSAN ESA                                                            | lavenderalert                         | Normal        |
|   Upgrade vSphere CSI driver with caution                                                          | csidriver                             | Normal        |
|   VM storage policy is not-recommended                                                             | policyupdate                          | Normal        |
|   VMware vCenter up to date                                                                        | vcuptodate                            | Normal        |
|   vSAN Advanced Configuration Check for Urgent vSAN ESA Patch                                      | zdomadvcfgenabled                     | Normal        |
|   vSAN Critical Alert - Release available for critical vSAN issue                                  | lilypatchalert                        | Normal        |
|   vSAN Support Insight                                                                             | vsanenablesupportinsight              | Normal        |
|   vSAN and VMFS datastores on a Dell H730 controller with the lsi_mr3 driver                       | mixedmodeh730                         | Normal        |
|   vSAN configuration check for large scale cluster                                                 | largescalecluster                     | Normal        |
|   vSAN configuration for LSI-3108 based controller                                                 | h730                                  | Normal        |
|   vSAN critical alert regarding a potential data inconsistency                                     | lilacdeltacomponenttest               | Normal        |
|   vSAN management server system resource check                                                     | vsanmgmtresource                      | Normal        |
|   vSAN max component size                                                                          | smalldiskstest                        | Normal        |
|   vSAN storage policy compliance up-to-date                                                        | objspbm                               | Normal        |
|   vSAN v1 disk in use                                                                              | v1diskcheck                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Performance service                                                                                |                                       |               |
|   All hosts contributing stats                                                                     | hostsmissing                          | Normal        |
|   Network diagnostic mode                                                                          | diagmode                              | Normal        |
|   Performance data collection                                                                      | collection                            | Normal        |
|   Performance service status                                                                       | perfsvcstatus                         | Normal        |
|   Stats DB object                                                                                  | statsdb                               | Normal        |
|   Stats DB object conflicts                                                                        | renameddirs                           | Normal        |
|   Stats primary election                                                                           | masterexist                           | Normal        |
|   Verbose mode                                                                                     | verbosemode                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Physical disk                                                                                      |                                       |               |
|   Component metadata health                                                                        | componentmetadata                     | Normal        |
|   Congestion                                                                                       | physdiskcongestion                    | Normal        |
|   Disk capacity                                                                                    | physdiskcapacity                      | Normal        |
|   Disks usage on storage controller                                                                | diskusage                             | Normal        |
|   Memory pools (heaps)                                                                             | lsomheap                              | Normal        |
|   Memory pools (slabs)                                                                             | lsomslab                              | Normal        |
|   Operation health                                                                                 | physdiskoverall                       | Normal        |
|   Physical disk component utilization                                                              | physdiskcomplimithealth               | Normal        |
|   Physical disk health retrieval issues                                                            | physdiskhostissues                    | Normal        |
|   Storage Vendor Reported Drive Health                                                             | phmhealth                             | Normal        |
|   vSAN max component size                                                                          | smalldiskstest                        | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Stretched cluster                                                                                  |                                       |               |
|   Hardware compatibility issue for witness appliance                                               | witnessupgissue                       | Normal        |
|   Invalid preferred fault domain on witness host                                                   | witnesspreferredfaultdomaininvalid    | Normal        |
|   Invalid unicast agent                                                                            | hostwithinvalidunicastagent           | Normal        |
|   No disk claimed on witness host                                                                  | witnesswithnodiskmapping              | Normal        |
|   Preferred fault domain unset                                                                     | witnesspreferredfaultdomainnotexist   | Normal        |
|   Shared witness per cluster component limit scaled down                                           | sharedwitnesscomponentlimitscaleddown | Normal        |
|   Site latency health                                                                              | siteconnectivity                      | Normal        |
|   Unexpected number of data node in shared witness cluster                                         | sharedwitnessclusterdatahostnumexceed | Normal        |
|   Unexpected number of fault domains                                                               | clusterwithouttwodatafaultdomains     | Normal        |
|   Unicast agent configuration inconsistent                                                         | clusterwithmultipleunicastagents      | Normal        |
|   Unicast agent not configured                                                                     | hostunicastagentunset                 | Normal        |
|   Unsupported host version                                                                         | hostwithnostretchedclustersupport     | Normal        |
|   Witness appliance upgrade to vSphere 7.0 or higher with caution                                  | witnessupgrade                        | Normal        |
|   Witness host fault domain misconfigured                                                          | witnessfaultdomaininvalid             | Normal        |
|   Witness host not found                                                                           | clusterwithoutonewitnesshost          | Normal        |
|   Witness host within VMware vCenter cluster                                                       | witnessinsidevccluster                | Normal        |
|   Witness node is managed by vSphere Lifecycle Manager                                             | vlcmwitnessconfig                     | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| com.vmware.vsan.health.test.cloudhealth                                                            |                                       |               |
|   Patch available for critical vSAN issue for All-Flash clusters with deduplication enabled        | patchalert                            | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.checksummismatchcount.testname                           | checksummismatchcount                 | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.cloudhealthconfig.testname                               | vumconfig                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.cloudhealthrecommendation.testname                       | vumrecommendation                     | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.clusternotfound.testname                                 | clusternotfound                       | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.pausecount.testname                                      | pausecount                            | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.releasecataloguptodate.testname                          | releasecataloguptodate                | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxcrcerr.testname                                        | rxcrcerr                              | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxerr.testname                                           | rxerr                                 | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxfifoerr.testname                                       | rxfifoerr                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxmisserr.testname                                       | rxmisserr                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxoverr.testname                                         | rxoverr                               | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.txcarerr.testname                                        | txcarerr                              | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.txerr.testname                                           | txerr                                 | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| vSAN iSCSI target service                                                                          |                                       |               |
|   Home object                                                                                      | iscsihomeobjectstatustest             | Normal        |
|   LUN runtime health                                                                               | iscsilunruntimetest                   | Normal        |
|   Network configuration                                                                            | iscsiservicenetworktest               | Normal        |
|   Service runtime status                                                                           | iscsiservicerunningtest               | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
>


To silence a specific health check, pass its Health Check Id (here controlleronhcl, "SCSI controller is VMware certified") to vsan.health.silent_health_check_configure with -a:

vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01 -a controlleronhcl

> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm02 -a controlleronhcl
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Successfully update silent health check list for mgm02
> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm -a controlleronhcl
vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01  vcsa01.ntitta.local/Cloud-DC01/computers/mgm02
> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01 -a controlleronhcl
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Successfully update silent health check list for mgm-01
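
To verify, re-run the status command; the Silent Status column for controlleronhcl should no longer show Normal:

vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01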

Nested ESXi vSAN sample scripts

The below PowerShell script waits for vCenter services to come up, creates a datacenter and cluster, adds the nested ESXi hosts, marks the cache disks as SSD, and builds the vSAN cluster.

#! /usr/bin/pwsh
$user = '[email protected]'
$vcServer = 'vcsa01.glabs.local'
# Import the credential (a PSCredential) from an encrypted file
$credential = Import-Clixml -Path '/glabs/spec/vcsa_admin.xml'
$decryptedPassword = $credential.GetNetworkCredential().Password
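# The encrypted credential file imported above can be created once, interactively,
# with something like this (a sketch; same path as imported above):
#   Get-Credential | Export-Clixml -Path '/glabs/spec/vcsa_admin.xml'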



# Function to check if vCenter services are running
function Test-VCenterServicesRunning {
    $serviceInstance = Connect-VIServer -Server $vcServer -User $user -Password $decryptedPassword -ErrorAction SilentlyContinue

    if ($null -eq $serviceInstance) {
        return $false
    }

    $serviceContent = Get-View -Id $serviceInstance.ExtensionData.content.ServiceInstance

    # Use a foreach statement here: a 'return' inside a ForEach-Object script
    # block only exits that block, not the enclosing function.
    foreach ($service in $serviceContent.serviceInfo.service) {
        if ($service.running -eq $false) {
            Disconnect-VIServer -Server $vcServer -Confirm:$false
            return $false
        }
    }

    Disconnect-VIServer -Server $vcServer -Confirm:$false
    return $true
}

# Wait for vCenter services to start
Write-Host "Waiting for vCenter services to start..."

while (-not (Test-VCenterServicesRunning)) {
    Start-Sleep -Seconds 5
}

Write-Host "vCenter services are running. Connecting to vCenter..."




#connect to vc and add hosts
Connect-VIServer $vcServer -User $user -Password $decryptedPassword

#create datacenter and cluster
New-Datacenter -Location Datacenters  -Name cloud
New-Cluster -Name "management" -Location "cloud"

Add-VMHost -Name esxi01.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
Add-VMHost -Name esxi02.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
Add-VMHost -Name esxi03.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs


$cache = 'mpx.vmhba0:C0:T1:L0'
$data = 'mpx.vmhba0:C0:T2:L0'

#mark the cache disk as SSD on each host
foreach ($esxName in 'esxi01.glabs.local','esxi02.glabs.local','esxi03.glabs.local') {
    $esx = Get-VMHost -Name $esxName
    $storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
    $lun = $storSys.StorageDeviceInfo.ScsiLun | Where-Object {$_.CanonicalName -eq $cache}
    $storSys.MarkAsSsd($lun.Uuid)
}

#add vSAN service to portgroup
$VMKNetforVSAN = "iscsi_1"
Get-VMHostNetworkAdapter -VMKernel | Where-Object {$_.PortGroupName -eq $VMKNetforVSAN} | Set-VMHostNetworkAdapter -VsanTrafficEnabled $true -Confirm:$false



#Create vSAN cluster
get-cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue

#wait for previous task to finish
start-sleep 60

#add disk groups
New-VsanDiskGroup -VMHost esxi01.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi02.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi03.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data

#mount nfs 
get-vmhost | New-Datastore -Nfs -Name iso -Path /volume1/iso -NfsHost iso.glabs.local -ReadOnly

#No idea why the above does not work for vSphere 7, but running the below manually on a deployed env preps it for vSAN. Don't touch it if it ain't broken?
get-cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue


disconnect-viserver -confirm:$false

VMware PowerCLI installation fails

VMware PowerCLI installation fails with the below error:

PS C:\Users\Administrator> Install-Module -Name VMware.PowerCLI

NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet
provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\Administrator\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by
running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install
and import the NuGet provider now?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y

Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y
PackageManagement\Install-Package : The module 'VMware.VimAutomation.Sdk' cannot be installed or updated because the
authenticode signature of the file 'VMware.VimAutomation.Sdk.cat' is not valid.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:1809 char:21
+ ...          $null = PackageManagement\Install-Package @PSBoundParameters
+                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.Power....InstallPackage:InstallPackage) [Install-Package],
   Exception
    + FullyQualifiedErrorId : InvalidAuthenticodeSignature,ValidateAndGet-AuthenticodeSignature,Microsoft.PowerShell.P
   ackageManagement.Cmdlets.InstallPackage

Workaround: install PowerCLI, skipping the publisher check:

Install-Module VMware.PowerCLI -Scope AllUsers -Force -SkipPublisherCheck -AllowClobber

Cause: the certificate VMware uses to sign the modules was replaced with a new one from a new publisher, so the catalog signature no longer validates.

SaltConfig and Identity Manager integration

SaltConfig must be running version 8.5 and must be deployed via LCM.

If vRA is running on self-signed/local-CA/LCM-CA certificates, the SaltStack UI will not load and you will see symptoms similar to the below:

Specifically, a blank page when logging on to the Salt UI, with the account/info API returning a 500.

Logs:

less /var/log/raas/raas
Traceback (most recent call last):
File "requests/adapters.py", line 449, in send
File "urllib3/connectionpool.py", line 756, in urlopen
File "urllib3/util/retry.py", line 574, in increment
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='automation.ntitta.lab', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zYWx0eS5udGl0dGEubGFiL2lkZW50aXR5L2FwaS9jb3JlL2F1dGhuL2NzcA%3D%3D&redirect_uri=https%3A%2F%2Fsalty.ntitta.lab%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-HLwywt0h3Y (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "tornado/web.py", line 1680, in _execute
File "raas/utils/rest.py", line 153, in prepare
File "raas/utils/rest.py", line 481, in prepare
File "pop/contract.py", line 170, in __call__
File "/var/lib/raas/unpack/_MEIb1NPIC/raas/mods/vra/params.py", line 250, in get_login_url
verify=validate_ssl)
File "requests/api.py", line 76, in get
File "requests/api.py", line 61, in request
File "requests/sessions.py", line 542, in request
File "raven/breadcrumbs.py", line 341, in send
File "requests/sessions.py", line 655, in send
File "requests/adapters.py", line 514, in send
requests.exceptions.SSLError: HTTPSConnectionPool(host='automation.ntitta.lab', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zYWx0eS5udGl0dGEubGFiL2lkZW50aXR5L2FwaS9jb3JlL2F1dGhuL2NzcA%3D%3D&redirect_uri=https%3A%2F%2Fsalty.ntitta.lab%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-HLwywt0h3Y (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))
2021-08-23 04:29:16,906 [tornado.access                                                    ][ERROR   :2250][Webserver:59844] 500 POST /rpc (127.0.0.1) 1697.46ms

To resolve this, grab the root certificate of vRA and import it into the SaltStack appliance root store.

Grab the root certificate (CLI method):

root@salty [ ~ ]# openssl s_client -showcerts -connect automation.ntitta.lab:443
CONNECTED(00000003)
depth=1 CN = vRealize Suite Lifecycle Manager Locker CA, O = VMware, C = IN
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
 0 s:/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN
   i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
-----BEGIN CERTIFICATE-----
MIID7jCCAtagAwIBAgIGAXmkBtDxMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MjUxNDU2MjBaFw0yMzA1
MjUxNDU2MjBaMGUxHjAcBgNVBAMMFWF1dG9tYXRpb24ubnRpdHRhLmxhYjENMAsG
A1UECwwEbGFiczEMMAoGA1UECgwDR1NTMQwwCgYDVQQHDANCTFIxCzAJBgNVBAgM
AktBMQswCQYDVQQGEwJJTjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AJ+p/UsPFJp3WESJfUNlPWAUtYOUQ9cK5lZXBrEK79dtOwzJ8noUyKndO8i5wumC
tNJP8U3RjKbqu75UZH3LiwoHTOEkqhWufrn8gL7tQjtiQ0iAp2pP6ikxH2bXNAwF
Dh9/2CMjLhSN5mb7V5ehu4rP3/Niu19nT5iA1XMER3qR2tsRweV++78vrYFsKDS9
ePa+eGvMNrVaXvbYN75KnLEKbpkHGPg9P10zLbP/lPIskEGfgBMjS7JKOPxZZKX1
GczW/2sFq9OOr4bW6teWG3gt319N+ReNlUxnrxMDkKcWrml8EbeQMp4RmmtXX5Z4
JeVEATMS7O2CeoEN5E/rFFUCAwEAAaOBtTCBsjAdBgNVHQ4EFgQUz/pxN1bN/GxO
cQ/hcQCgBSdRqaUwHwYDVR0jBBgwFoAUYOI4DbX97wdcZa/pWivAMvnnDekwMAYD
VR0RBCkwJ4IXKi5hdXRvbWF0aW9uLm50aXR0YS5sYWKCDCoubnRpdHRhLmxhYjAO
BgNVHQ8BAf8EBAMCBaAwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB
MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggEBAA2KntXAyrY6DHho8FQc
R2GrHVCCWG3ugyPEq7S7tAabMIeSVhbPWsDaVLro5PlldK9FAUhinbxEwShIJfVP
+X1WOBUxwTQ7anfiagonMNotGtow/7f+fnHGO4Mfyk+ICo+jOp5DTDHGRmF8aYsP
5YGkOdpAb8SuT/pNerZie5WKx/3ZuUwsEDTqF3CYdqWQZSuDIlWRetECZAaq50hJ
c6kD/D1+cq2pmN/DI/U9RAfsvexkhdZaMbHdrlGzNb4biSvJ8HjJMH4uNLUN+Nyf
2MON41QKRRuzQn+ahq7X/K2BbxJTQUZGwbC+0CA6M79dQ1eVQui4d5GXmjutqFIo
Xwo=
-----END CERTIFICATE-----
 1 s:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
   i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIGAXmEbtiqMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MTkxMTQyMDdaFw0zMTA1
MTcxMTQyMDdaMFMxMzAxBgNVBAMMKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBN
YW5hZ2VyIExvY2tlciBDQTEPMA0GA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK6S4ESddCC7BAl4MACpAeAm
1JBaw72NgeSOruS/ljpd1MyDd/AJjpIpdie2M0cweyGDaJ4+/C549lxQe0NAFsgh
62BG87klbhzvYja6aNKvE+b1EKNMPllFoWiCKJIxZOvTS2FnXjXZFZKMw5e+hf2R
JgPEww+KsHBqcWL3YODmD6NvBRCpY2rVrxUjqh00ouo7EC6EHzZoJSMoSwcEgIGz
pclYSPuEzdbNFKVtEQGrdt94xlAk04mrqP2O6E7Fd5EwrOw/+dsFt70qS0aEj9bQ
nk7GeRXhJynXxlEpgChCDEXQ3MWvLIRwOuMBxQq/W4B/ZzvQVzFwmh3S8UkPTosC
AwEAAaNjMGEwHQYDVR0OBBYEFGDiOA21/e8HXGWv6VorwDL55w3pMB8GA1UdIwQY
MBaAFGDiOA21/e8HXGWv6VorwDL55w3pMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0P
AQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4IBAQBqAjCBd+EL6koGogxd72Dickdm
ecK60ghLTNJ2wEKvDICqss/FopeuEVhc8q/vyjJuirbVwJ1iqKuyvANm1niym85i
fjyP6XaJ0brikMPyx+TSNma/WiDoMXdDviUuYZo4tBJC2DUPJ/0KDI7ysAsMTB0R
8Q7Lc3GlJS65AFRNIxkpHI7tBPp2W8tZQlVBe7PEcWMzWRjWZAvwDGfnNvUtX4iY
bHEVWSzpoVQUk1hcylecYeMSCzBGw/efuWayIFoSf7ZXFe0TAEOJySwkzGJB9n78
4Rq0ydikMT4EFHP5G/iFI2zsx2vZGNsAHCw7XSVFydqb/ekm/9T7waqt3fW4
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN
issuer=/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2528 bytes and written 393 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: B06BE4668E5CCE713F1C1547F0917CC901F143CB13D06ED7A111784AAD10B2F6
    Session-ID-ctx:
    Master-Key: 75E8109DD84E2DD064088B44779C4E7FEDA8BE91693C5FC2A51D3F90B177F5C92B7AB638148ADF612EBEFDA30930DED4
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket:
    0000 - b9 54 91 b7 60 d4 18 d2-4b 72 55 db 78 e4 91 10   .T..`...KrU.x...
    0010 - 1f 97 a0 35 31 16 21 db-8c 49 bf 4a a1 b4 59 ff   ...51.!..I.J..Y.
    0020 - 07 22 1b cc 20 d5 52 7a-52 84 17 86 b3 2a 7a ee   .".. .RzR....*z.
    0030 - 14 c3 9b 9f 8f 24 a7 a1-76 4d a2 4f bb d7 5a 21   .....$..vM.O..Z!
    0040 - c9 a6 d0 be 3b 57 4a 4e-cd cc 9f a6 12 45 09 b5   ....;WJN.....E..
    0050 - ca c4 c9 57 f5 ac 17 04-94 cb d0 0a 77 17 ac b8   ...W........w...
    0060 - 8a b2 39 f1 78 70 37 6d-d0 bf f1 73 14 63 e8 86   ..9.xp7m...s.c..
    0070 - 17 27 80 c1 3e fe 54 cf-                          .'..>.T.

    Start Time: 1629788388
    Timeout   : 300 (sec)
    Verify return code: 19 (self signed certificate in certificate chain)

From the above example:
 0 s:/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN   <-- this is my vRA cert
   i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN   <-- this is the root cert (generated via LCM)

Create a new cert file (root.crt) with the contents of the root certificate:

cat root.crt
-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIGAXmEbtiqMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MTkxMTQyMDdaFw0zMTA1
MTcxMTQyMDdaMFMxMzAxBgNVBAMMKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBN
YW5hZ2VyIExvY2tlciBDQTEPMA0GA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK6S4ESddCC7BAl4MACpAeAm
1JBaw72NgeSOruS/ljpd1MyDd/AJjpIpdie2M0cweyGDaJ4+/C549lxQe0NAFsgh
62BG87klbhzvYja6aNKvE+b1EKNMPllFoWiCKJIxZOvTS2FnXjXZFZKMw5e+hf2R
JgPEww+KsHBqcWL3YODmD6NvBRCpY2rVrxUjqh00ouo7EC6EHzZoJSMoSwcEgIGz
pclYSPuEzdbNFKVtEQGrdt94xlAk04mrqP2O6E7Fd5EwrOw/+dsFt70qS0aEj9bQ
nk7GeRXhJynXxlEpgChCDEXQ3MWvLIRwOuMBxQq/W4B/ZzvQVzFwmh3S8UkPTosC
AwEAAaNjMGEwHQYDVR0OBBYEFGDiOA21/e8HXGWv6VorwDL55w3pMB8GA1UdIwQY
MBaAFGDiOA21/e8HXGWv6VorwDL55w3pMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0P
AQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4IBAQBqAjCBd+EL6koGogxd72Dickdm
ecK60ghLTNJ2wEKvDICqss/FopeuEVhc8q/vyjJuirbVwJ1iqKuyvANm1niym85i
fjyP6XaJ0brikMPyx+TSNma/WiDoMXdDviUuYZo4tBJC2DUPJ/0KDI7ysAsMTB0R
8Q7Lc3GlJS65AFRNIxkpHI7tBPp2W8tZQlVBe7PEcWMzWRjWZAvwDGfnNvUtX4iY
bHEVWSzpoVQUk1hcylecYeMSCzBGw/efuWayIFoSf7ZXFe0TAEOJySwkzGJB9n78
4Rq0ydikMT4EFHP5G/iFI2zsx2vZGNsAHCw7XSVFydqb/ekm/9T7waqt3fW4
-----END CERTIFICATE-----
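
To script the extraction, something like the below pulls the second certificate in the chain (the LCM root CA, per the chain order shown above) into root.crt; a sketch assuming that chain order:

openssl s_client -showcerts -connect automation.ntitta.lab:443 </dev/null 2>/dev/null \
  | awk '/-----BEGIN CERTIFICATE-----/{n++} n==2{print} /-----END CERTIFICATE-----/{if (n==2) exit}' > root.crt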

Back up the existing certificate store:

cp  /etc/pki/tls/certs/ca-bundle.crt   ~/

Append the LCM root certificate to the certificate store:

cat root.crt >> /etc/pki/tls/certs/ca-bundle.crt
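
To sanity-check that the bundle now validates the vRA chain (expect "Verify return code: 0 (ok)"):

echo | openssl s_client -connect automation.ntitta.lab:443 -CAfile /etc/pki/tls/certs/ca-bundle.crt 2>/dev/null | grep 'Verify return code'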

Add the below line to the [Service] section of raas.service (/usr/lib/systemd/system/raas.service):

Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt

Example:

root@salty [ ~ ]# cat /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
After=network.target

[Service]
Type=simple
User=raas
Group=raas
# to be able to bind port < 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
ExecStart=/usr/bin/raas
TimeoutStopSec=90

[Install]
WantedBy=multi-user.target

Reload systemd and restart the raas service:

systemctl daemon-reload
systemctl restart raas && tail -f /var/log/raas/raas

Upon restart, the above command tails the raas logs; confirm that the certificate-related errors no longer appear.

Adding a webhook to vRA Code Stream fails

Adding a webhook to vRA Code Stream fails with the error:

java.lang.IllegalArgumentException: 400 BAD_REQUEST "Unable to create webhook at Git server. Request failed with : 422"

Browser console:

{timestamp: 1622639483332, path: "/codestream/api/git-webhooks", status: 400, error: "Bad Request",…}
@type: "com.vmware.codestream.exception.CodestreamException"
error: "Bad Request"
message: "400 BAD_REQUEST \"Unable to create webhook at Git server. Request failed with : 422\""
path: "/codestream/api/git-webhooks"
requestId: "929b7098-102300"
status: 400
timestamp: 1622639483332

Symptom: GitLab is installed on the same network as vRA/Code Stream.


Cause: by default, GitLab does not allow webhooks to make requests to the local network.

Resolution: allow requests to the local network from webhooks and services.

Steps:

  • On GitLab, go to the Admin Area
  • On the left-hand side, navigate to Settings > Network
  • Enable "Allow requests to the local network from web hooks and services", and add the Code Stream FQDN (or *) to the allowlist (the same setting can also be toggled via the API, as shown below)
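
The API route goes through the GitLab application settings endpoint; a sketch, assuming an admin personal access token (the token and GitLab hostname are placeholders):

curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
  "https://<gitlab-host>/api/v4/application/settings?allow_local_requests_from_web_hooks_and_services=true"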

Now go back to Code Stream and add the webhook: success!

Usage Meter 4.3/4.4 vCenter Server partial collection failure: Events

Usage Meter reports a partial collection failure for events.

Log files: vccol_main.log | vccol_error.log
[2021-05-18 09:02:25]  | ERROR | ter collector thread |    com.vmware.um.vccollector.VCCollector | vCenter collector90 | Events stage raised exception javax.xml.ws.WebServiceException: java.net.SocketTimeoutException: Read timed out java.net.SocketTimeoutException: Read timed out=>Read timed out
[2021-05-18 09:02:26]  | ERROR | ter collector thread | com.vmware.um.collector.CollectionHelper | vCenter collector98 | Status (COLLECT_API_ERR) for vCenter server 7: Partial collection failure: Events
[2021-05-18 10:02:18]  | ERROR | ter collector thread |    com.vmware.um.vccollector.VCCollector | vCenter collector179 | Events stage raised exception javax.xml.ws.WebServiceException: java.net.SocketTimeoutException: Read timed out java.net.SocketTimeoutException: Read timed out=>Read timed out

Cause: the connection was closed before the data could be retrieved successfully. Usage Meter requests events from vCenter; this API can take a long time to respond, either due to a huge number of events or due to heavy processing load on the vCenter.

Resolution: increase the timeouts.
* Take a snapshot of the UM appliance.
* SSH into the appliance as the user usagemeter.
* Take a backup copy of common_utils.sh:

cp /opt/vmware/cloudusagemetering/scripts/common_utils.sh /opt/vmware/cloudusagemetering/scripts/common_utils.sh.bak

* Edit the config file using vi:

 vi /opt/vmware/cloudusagemetering/scripts/common_utils.sh

* Replace the values of the below fields:

=> CONNECT_TIMEOUT_MS="300000"
=> READ_TIMEOUT_MS="600000"

* Save the file and restart the appliance (a quick check is shown below).
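
To confirm the new values are in place before restarting:

grep -E 'CONNECT_TIMEOUT_MS|READ_TIMEOUT_MS' /opt/vmware/cloudusagemetering/scripts/common_utils.sh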


Note: On the vCenter side, bursts of events can also cause this scenario. KB https://kb.vmware.com/s/article/74607 is one among several where such a burst is documented (event bursts need to be triaged from the vCenter perspective).

SaltStack + vSphere: Deploying Windows VMs with the Windows minion

Ensure that you have set up the vSphere provider; refer to my previous blog: https://blog.ntitta.in/?p=597

Create a Windows profile:

/etc/salt/cloud.profiles.d/w16k.conf 

root@saltyub:/# cat /etc/salt/cloud.profiles.d/w16k.conf 
w16k:
  provider: vcsa
  clonefrom: w16k_salt 
#  devices: 
#   network: 
#    Network adaptor 1:
#     name: VM Network
#     adapter_type: vmxnet3
#     switch_type: standard
#     ip: 172.16.70.79
#     gateway: [172.16.1.1]
#     subnet_mask: 255.255.128.0
#     domain: ntitta.lab
  cluster: vSAN
  datastore: vsanDatastore
  power_on: True
  deploy: True
  customization: True
  minion:
   master: saltyu.ntitta.lab
  win_username: administrator 
  win_password: 'P@ssw0d'
  plain_text: True
  win_user_fullname: admin
  win_run_once: 'powershell.exe c:\scripts\e.winrm.ps1'
  win_installer: /salt/minion/Salt-Minion-3000.9-Py2-AMD64-Setup.exe
  winrm_verify_ssl: False

Ensure that you have the smbprotocol and pypsexec Python modules installed:

pip3 install smbprotocol
pip3 install pypsexec
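
A quick way to confirm both modules import cleanly under the Python that salt-cloud uses (a sanity check, assuming python3 is that interpreter):

python3 -c 'import smbprotocol, pypsexec; print("ok")'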

On the guest Windows Server template, ensure VMware Tools is installed, and create a PowerShell script at the path c:\scripts\e.winrm.ps1. Refer to the Salt docs for more information: https://docs.saltproject.io/en/latest/topics/cloud/windows.html

New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
New-NetFirewallRule -Name "WINRM5986" -DisplayName "WINRM5986" -Protocol TCP -LocalPort 5986

winrm quickconfig -q
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}'
winrm set winrm/config '@{MaxTimeoutms="1800000"}'
winrm set winrm/config/service/auth '@{Basic="true"}'

$SourceStoreScope = 'LocalMachine'
$SourceStorename = 'Remote Desktop'

$SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope
$SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)

$cert = $SourceStore.Certificates | Where-Object -FilterScript {
    $_.subject -like '*'
}

$DestStoreScope = 'LocalMachine'
$DestStoreName = 'My'

$DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope
$DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$DestStore.Add($cert)

$SourceStore.Close()
$DestStore.Close()

# Note: $() is needed so the thumbprint expands inside the quoted argument
winrm create winrm/config/listener?Address=*+Transport=HTTPS "@{CertificateThumbprint=`"$($cert.Thumbprint)`"}"

Restart-Service winrm

Download the Salt Windows minion installer to the below path on the salt-master:
/salt/minion/ (the exe can be downloaded from https://docs.saltproject.io/en/latest/topics/installation/windows.html)

wget https://repo.saltstack.com/windows/Salt-Minion-3003-Py3-AMD64-Setup.exe

Deploy the Windows VM via Salt:

salt-cloud -p w16k w16k-salty-minion -l debug

On the deployed VM, you can see the firewall rules and the Salt minion installed.
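
Once the minion has checked in, a quick test.ping from the master confirms connectivity (the minion ID is the VM name passed to salt-cloud above):

salt 'w16k-salty-minion' test.ping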

Troubleshooting:

[ERROR   ] Unable to execute command
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/salt/utils/cloud.py", line 1005, in wait_for_psexecsvc
    stdout, stderr, ret_code = run_psexec_command(
  File "/usr/lib/python3/dist-packages/salt/utils/cloud.py", line 956, in run_psexec_command
    client = Client(
  File "/usr/lib/python3/dist-packages/salt/utils/cloud.py", line 879, in __init__
    self._client = PsExecClient(server, username, password, port, encrypt)
NameError: name 'PsExecClient' is not defined

Cause: the pypsexec module, which provides PsExecClient, is not installed; use pip3 to install it (see below).
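
On the salt-master:

pip3 install pypsexec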

Upgrade vRA 8.2 to 8.3

Upgrade LCM

Log in to LCM > Lifecycle Operations > Settings > System Upgrade.

Create a snapshot.

Once the snapshot is done, click on "Check for upgrade".

vRLCM will reboot as a part of the upgrade. Wait patiently… (go have a coffee)

On successful upgrade, we should see:

Add vRA package repository to LCM

Download vRA 8.3 upgrade repository from https://my.vmware.com/group/vmware/downloads/details?downloadGroup=VRA-830&productId=1116&rPId=59213

Copy the ISO to the /data directory on LCM.
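
Something like the below works for the copy (a sketch; the ISO filename and LCM hostname are placeholders for your environment):

scp ./VMware-vRA-8.3-upgrade-repo.iso root@lcm.example.local:/data/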

Now, go back to LCM > Lifecycle Operations > Settings > Binary Mapping > Add Product Binaries, enter the path in Base Location, and then click on Discover.

Click on the Prelude binary (the one we uploaded) and click on Add.

Wait for the import task to complete. You can take a look at the task under Requests > click on the most recent request.

On successful import, you should see it in the binary mapping:

Upgrade vRA:

Now that the upgrade packages have been uploaded to LCM, go into LCM > Lifecycle Operations > Environments > click on "View Details" for the vRA environment.

Trigger an inventory sync:

Go grab a coffee! Typically at this point it's just waiting for vRLCM to perform the upgrade.

Click on Upgrade:

Run the pre-check:

Wait for the pre-check to complete (takes about 3-10 minutes).

Submit the upgrade task.

Now, go grab a cup of coffee; this step generally takes a while.