Delete Dead ALB from VCD

ALB records are normally stored in the tables below. Since in my case the ALB environment was unrecoverable, I had to delete these records from the VCD database before adding a new integration:

delete from gateway_lb_virtual_service;
delete from lb_seg_assignment;
delete from load_balancer_seg;
delete from gateway_load_balancer;
delete from load_balancer_cloud;
delete from load_balancer_controllers;
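
The order matters: tables that reference others via foreign keys are deleted first (see the schema below). If you want an extra safety net, here is a minimal sketch, assuming you are running this from a psql session against the vcloud database: check the row counts first, then wrap the deletes in a transaction so they can be rolled back.

-- sanity check: see what is about to be removed
select count(*) from load_balancer_controllers;
select count(*) from load_balancer_cloud;

-- run the deletes inside a transaction; commit only if the row counts look right
begin;
delete from gateway_lb_virtual_service;
delete from lb_seg_assignment;
delete from load_balancer_seg;
delete from gateway_load_balancer;
delete from load_balancer_cloud;
delete from load_balancer_controllers;
commit;   -- or: rollback;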

Here's the DB schema (from VCD 10.6.1):

vcloud=# \d load_balancer_controllers
Table "public.load_balancer_controllers"
Column | Type | Collation | Nullable | Default
--------------------+------------------------+-----------+----------+---------
id | uuid | | not null |
name | character varying(128) | | not null |
description | character varying(256) | | |
username | character varying(128) | | not null |
password | character varying(128) | | not null |
url | character varying(2000) | | not null |
controller_version | character varying(32) | | |
enabled | boolean | | not null | false
version_number | bigint | | not null | 1
Indexes:
"pk_load_bala_con_id" PRIMARY KEY, btree (id)
"uq_load_bala_con_name" UNIQUE CONSTRAINT, btree (name)
"uq_load_bala_con_url" UNIQUE CONSTRAINT, btree (url)
Referenced by:
TABLE "load_balancer_cloud" CONSTRAINT "fk_load_bala_clo2load_bala_con" FOREIGN KEY (lb_controller_id) REFERENCES load_balancer_controllers(id)

vcloud=# \d load_balancer_cloud
Table "public.load_balancer_cloud"
Column | Type | Collation | Nullable | Default
------------------+------------------------+-----------+----------+---------
id | uuid | | not null |
name | character varying(128) | | not null |
description | character varying(256) | | |
lb_controller_id | uuid | | not null |
network_pool_id | uuid | | |
type | character varying(128) | | not null |
backing_id | character varying(128) | | not null |
Indexes:
"pk_load_bala_clo_id" PRIMARY KEY, btree (id)
"uq_load_bala_clo_lb_co_id_ba_i" UNIQUE CONSTRAINT, btree (lb_controller_id, backing_id)
"uq_load_bala_clo_name" UNIQUE CONSTRAINT, btree (name)
Foreign-key constraints:
"fk_load_bala_clo2load_bala_con" FOREIGN KEY (lb_controller_id) REFERENCES load_balancer_controllers(id)
"fk_load_bala_clo2network_pool" FOREIGN KEY (network_pool_id) REFERENCES network_pool(id)
Referenced by:
TABLE "gateway_load_balancer" CONSTRAINT "fk_gate_load_bal2load_bala_clo" FOREIGN KEY (lb_cloud_id) REFERENCES load_balancer_cloud(id) ON DELETE CASCADE
TABLE "load_balancer_seg" CONSTRAINT "fk_load_bala_seg2load_bala_clo" FOREIGN KEY (lb_cloud_id) REFERENCES load_balancer_cloud(id)

vcloud=# \d gateway_load_balancer
Table "public.gateway_load_balancer"
Column | Type | Collation | Nullable | Default
-----------------------------+------------------------+-----------+----------+---------
id | uuid | | not null |
gateway_id | uuid | | not null |
is_enabled | boolean | | not null | false
ipv4_service_network_cidr | character varying(18) | | |
segment_id | character varying(128) | | not null |
vrf_context_id | character varying(128) | | not null |
lb_cloud_id | uuid | | not null |
supported_feature_set | character varying(128) | | not null |
ipv6_service_network_cidr | character varying(45) | | |
is_transparent_mode_enabled | boolean | | not null | false
Indexes:
"pk_gate_load_bal_id" PRIMARY KEY, btree (id)
"uq_gate_load_bal_gateway_id" UNIQUE CONSTRAINT, btree (gateway_id)
Check constraints:
"at_least_one_cidr" CHECK (ipv6_service_network_cidr IS NOT NULL OR ipv4_service_network_cidr IS NOT NULL)
Foreign-key constraints:
"fk_gate_load_bal2gateway" FOREIGN KEY (gateway_id) REFERENCES gateway(id)
"fk_gate_load_bal2load_bala_clo" FOREIGN KEY (lb_cloud_id) REFERENCES load_balancer_cloud(id) ON DELETE CASCADE

vcloud=# \d load_balancer_seg
Table "public.load_balancer_seg"
Column | Type | Collation | Nullable | Default
---------------------------+------------------------+-----------+----------+---------
id | uuid | | not null |
name | character varying(128) | | not null |
description | character varying(256) | | |
backing_id | character varying(128) | | not null |
lb_cloud_id | uuid | | not null |
ha_mode | character varying(128) | | not null |
reservation_type | character varying(128) | | not null |
max_virtual_services | integer | | |
reserved_virtual_services | integer | | not null |
version_number | bigint | | not null | 1
supported_feature_set | character varying(128) | | not null |
backing_name | character varying(128) | | |
Indexes:
"pk_load_bala_seg_id" PRIMARY KEY, btree (id)
"uq_load_bala_seg_lb_cl_id_ba_i" UNIQUE CONSTRAINT, btree (lb_cloud_id, backing_id)
"uq_load_bala_seg_lb_clo_id_nam" UNIQUE CONSTRAINT, btree (lb_cloud_id, name)
Foreign-key constraints:
"fk_load_bala_seg2load_bala_clo" FOREIGN KEY (lb_cloud_id) REFERENCES load_balancer_cloud(id)
Referenced by:
TABLE "gateway_lb_virtual_service" CONSTRAINT "fk_gat_lb_vir_se2load_bala_seg" FOREIGN KEY (seg_id) REFERENCES load_balancer_seg(id)
TABLE "lb_seg_assignment" CONSTRAINT "fk_lb_seg_assi2load_bala_seg" FOREIGN KEY (seg_id) REFERENCES load_balancer_seg(id)

vcloud=# \d gateway_lb_virtual_service
Table "public.gateway_lb_virtual_service"
Column | Type | Collation | Nullable | Default
--------------------------+------------------------+-----------+----------+---------
id | uuid | | not null |
name | character varying(128) | | not null |
description | character varying(256) | | |
enabled | boolean | | not null | false
vs_backing_id | character varying(128) | | |
vip_backing_id | character varying(128) | | |
ipv4_virtual_ip_address | character varying(15) | | |
seg_id | uuid | | not null |
gateway_lr_id | uuid | | not null |
version_number | bigint | | not null | 1
server_certificate_id | uuid | | |
lb_pool_id | uuid | | not null |
ipv6_virtual_ip_address | character varying(45) | | |
transparent_mode_enabled | boolean | | not null | false
http_policy_backing_id | character varying(128) | | |
Indexes:
"pk_gat_lb_vir_se_id" PRIMARY KEY, btree (id)
"uq_gat_lb_vir_se_gat_lr_id_nam" UNIQUE CONSTRAINT, btree (gateway_lr_id, name)
Check constraints:
"at_least_one_ip" CHECK (ipv6_virtual_ip_address IS NOT NULL OR ipv4_virtual_ip_address IS NOT NULL)
Foreign-key constraints:
"fk_gat_lb_vir_se2gate_lb_pool" FOREIGN KEY (lb_pool_id) REFERENCES gateway_lb_pool(id) ON DELETE CASCADE
"fk_gat_lb_vir_se2gate_logi_res" FOREIGN KEY (gateway_lr_id) REFERENCES gateway_logical_resource(id)
"fk_gat_lb_vir_se2load_bala_seg" FOREIGN KEY (seg_id) REFERENCES load_balancer_seg(id)
"fk_gat_lb_vir_se2server_certif" FOREIGN KEY (server_certificate_id) REFERENCES certificate_library_item(id) ON DELETE SET NULL

vcloud=# \d lb_seg_assignment
Table "public.lb_seg_assignment"
Column | Type | Collation | Nullable | Default
--------------------------------+-----------------------+-----------+----------+---------
id | uuid | | not null |
seg_id | uuid | | not null |
gateway_lr_id | uuid | | not null |
max_virtual_services | integer | | |
min_virtual_services | integer | | |
version_number | bigint | | not null | 1
network_service_floating_ip | character varying(45) | | |
network_service_floating_ip_v6 | character varying(45) | | |
Indexes:
"pk_lb_seg_assi_id" PRIMARY KEY, btree (id)
"ix_gateway_lr_id" btree (gateway_lr_id)
"ix_seg_id" btree (seg_id)
"uq_lb_seg_assi_seg_id_gat_id" UNIQUE CONSTRAINT, btree (seg_id, gateway_lr_id)
Foreign-key constraints:
"fk_lb_seg_assi2gate_logi_res" FOREIGN KEY (gateway_lr_id) REFERENCES gateway_logical_resource(id)
"fk_lb_seg_assi2load_bala_seg" FOREIGN KEY (seg_id) REFERENCES load_balancer_seg(id)

Proxmox: set persistent USB NIC bindings

Find the ID_USB_SERIAL_SHORT of the USB NIC:

root@pve03:~# udevadm info /sys/class/net/enx803f5dfb4b73 | grep ID_USB_SERIAL_SHORT
E: ID_USB_SERIAL_SHORT=4013000001


Grab the MAC address:

root@pve03:~# ip a | grep -A 1  enx803f5dfb4b73
3: enx803f5dfb4b73: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 80:3f:5d:fb:4b:73 brd ff:ff:ff:ff:ff:ff
    inet 172.16.35.203/24 scope global enx803f5dfb4b73
       valid_lft forever preferred_lft forever

Create a new file with the below content:

cat /etc/systemd/network/10-vusb2.link
[Match]
Property=ID_USB_SERIAL_SHORT=4013000001
[Link]
Name=enx7eth2
MACAddress=80:3f:5d:fb:4b:73
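
The new name takes effect the next time udev processes the device, so the simplest way to apply it is a reboot. A rough sketch of applying and verifying it without a reboot (assuming udev can safely re-trigger the adapter) is:

udevadm control --reload
udevadm trigger --action=add /sys/class/net/enx803f5dfb4b73
ip link show enx7eth2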

Nested ESXi: silence unsupported controller health checks



First, check the current status of all health checks.

SSH to the VCSA and use RVC to log in to vCenter.
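
For reference, a typical RVC login from the VCSA shell looks like the below (adjust the SSO user to your environment; RVC prompts for the password):

rvc administrator@vsphere.local@localhost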

vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm02/

> vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm02/
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Silent Status of Cluster mgm02:
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Health Check                                                                                       | Health Check Id                       | Silent Status |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Capacity utilization                                                                               |                                       |               |
|   Cluster and host component utilization                                                           | nodecomponentlimit                    | Normal        |
|   Read cache reservations                                                                          | rcreservation                         | Normal        |
|   Storage space                                                                                    | diskspace                             | Normal        |
|   What if the most consumed host fails                                                             | limit1hf                              | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Cluster                                                                                            |                                       |               |
|   Advanced vSAN configuration in sync                                                              | advcfgsync                            | Normal        |
|   Disk format version                                                                              | upgradelowerhosts                     | Normal        |
|   ESA prescriptive disk claim                                                                      | ddsconfig                             | Normal        |
|   Host Maintenance Mode                                                                            | mmdecominsync                         | Normal        |
|   Maximum host number in vSAN over RDMA                                                            | rdmanodes                             | Normal        |
|   Resync operations throttling                                                                     | resynclimit                           | Normal        |
|   Software version compatibility                                                                   | upgradesoftware                       | Normal        |
|   Time is synchronized across hosts and VC                                                         | timedrift                             | Normal        |
|   VMware vCenter state is authoritative                                                            | vcauthoritative                       | Normal        |
|   VSAN ESA Conversion Health                                                                       | esaconversionhealth                   | Normal        |
|   vSAN Direct homogeneous disk claiming                                                            | vsandconfigconsistency                | Normal        |
|   vSAN Disk Balance                                                                                | diskbalance                           | Normal        |
|   vSAN Managed disk claim                                                                          | hcldiskclaimcheck                     | Normal        |
|   vSAN cluster configuration consistency                                                           | consistentconfig                      | Normal        |
|   vSAN daemon liveness                                                                             | clomdliveness                         | Normal        |
|   vSAN disk group layout                                                                           | dglayout                              | Normal        |
|   vSAN extended configuration in sync                                                              | extendedconfig                        | Normal        |
|   vSAN optimal datastore default policy configuration                                              | optimaldsdefaultpolicy                | Normal        |
|   vSphere Lifecycle Manager (vLCM) configuration                                                   | vsanesavlcmcheck                      | Normal        |
|   vSphere cluster members match vSAN cluster members                                               | clustermembership                     | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data                                                                                               |                                       |               |
|   vSAN object format health                                                                        | objectformat                          | Normal        |
|   vSAN object health                                                                               | objecthealth                          | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data-at-rest encryption                                                                            |                                       |               |
|   CPU AES-NI is enabled on hosts                                                                   | hostcpuaesni                          | Normal        |
|   VMware vCenter and all hosts are connected to Key Management Servers                             | kmsconnection                         | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.dualcloudhealth.testname                                 | dualencryption                        | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Data-in-transit encryption                                                                         |                                       |               |
|   Configuration check                                                                              | ditconfig                             | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| File Service                                                                                       |                                       |               |
|   File Server Health                                                                               | fileserver                            | Normal        |
|   Infrastructure Health                                                                            | host                                  | Normal        |
|   Share Health                                                                                     | sharehealth                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Hardware compatibility                                                                             |                                       |               |
|   Controller disk group mode is VMware certified                                                   | controllerdiskmode                    | Normal        |
|   Controller driver is VMware certified                                                            | controllerdriver                      | Normal        |
|   Controller firmware is VMware certified                                                          | controllerfirmware                    | Normal        |
|   Controller is VMware certified for ESXi release                                                  | controllerreleasesupport              | Normal        |
|   Controller with pass-through and RAID disks                                                      | mixedmode                             | Normal        |
|   HPE NVMe Solid State Drives - critical firmware upgrade required                                 | vsanhpefwtest                         | Normal        |
|   Host issues retrieving hardware info                                                             | hclhostbadstate                       | Normal        |
|   Host physical memory compliance check                                                            | hostmemcheck                          | Normal        |
|   NVMe device is VMware certified                                                                  | nvmeonhcl                             | Normal        |
|   Network (RDMA NIC: RoCE v2) driver/firmware is vSAN certified                                    | rdmanicsupportdriverfirmware          | Normal        |
|   Network (RDMA NIC: RoCE v2) is certified for ESXi release                                        | rdmanicsupportesxrelease              | Normal        |
|   Network (RDMA NIC: RoCE v2) is vSAN certified                                                    | rdmaniciscertified                    | Normal        |
|   Physical NIC link speed meets requirements                                                       | pniclinkspeed                         | Normal        |
|   RAID controller configuration                                                                    | controllercacheconfig                 | Normal        |
|   SCSI controller is VMware certified                                                              | controlleronhcl                       | Normal        |
|   vSAN HCL DB Auto Update                                                                          | autohclupdate                         | Normal        |
|   vSAN HCL DB up-to-date                                                                           | hcldbuptodate                         | Normal        |
|   vSAN and VMFS datastores on a Dell H730 controller with the lsi_mr3 driver                       | mixedmodeh730                         | Normal        |
|   vSAN configuration for LSI-3108 based controller                                                 | h730                                  | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Hyperconverged cluster configuration compliance                                                    |                                       |               |
|   Host compliance check for hyperconverged cluster configuration                                   | hosthciconfig                         | Normal        |
|   VDS compliance check for hyperconverged cluster configuration                                    | dvshciconfig                          | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Network                                                                                            |                                       |               |
|   Active multicast connectivity check                                                              | multicastdeepdive                     | Normal        |
|   All hosts have a dedicated vSAN Max Client vmknic configured in server cluster                   | vsanexternalvmknic                    | Normal        |
|   All hosts have a vSAN vmknic configured                                                          | vsanvmknic                            | Normal        |
|   All hosts have matching multicast settings                                                       | multicastsettings                     | Normal        |
|   Hosts disconnected from VC                                                                       | hostdisconnected                      | Normal        |
|   Hosts with LACP issues                                                                           | lacpstatus                            | Normal        |
|   Hosts with connectivity issues                                                                   | hostconnectivity                      | Normal        |
|   Hosts with duplicate IP addresses                                                                | duplicateip                           | Normal        |
|   Hosts with pNIC TSO issues                                                                       | pnictso                               | Normal        |
|   Multicast assessment based on other checks                                                       | multicastsuspected                    | Normal        |
|   Network latency check                                                                            | hostlatencycheck                      | Normal        |
|   No hosts in remote vSAN have multiple vSAN vmknics configured                                    | multiplevsanvmknic                    | Normal        |
|   Physical network adapter link speed consistency                                                  | pnicconsistent                        | Normal        |
|   RDMA Configuration Health                                                                        | rdmaconfig                            | Normal        |
|   Remote VMware vCenter network connectivity                                                       | xvcconnectivity                       | Normal        |
|   Server Cluster Partition                                                                         | serverpartition                       | Normal        |
|   vMotion: Basic (unicast) connectivity check                                                      | vmotionpingsmall                      | Normal        |
|   vMotion: MTU check (ping with large packet size)                                                 | vmotionpinglarge                      | Normal        |
|   vSAN Max Client Network connectivity check                                                       | externalconnectivity                  | Normal        |
|   vSAN cluster partition                                                                           | clusterpartition                      | Normal        |
|   vSAN: Advanced (https) connectivity check                                                        | interhostconnectivity                 | Normal        |
|   vSAN: Basic (unicast) connectivity check                                                         | smallping                             | Normal        |
|   vSAN: MTU check (ping with large packet size)                                                    | largeping                             | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Online health                                                                                      |                                       |               |
|   A possible storage capacity limitation with vSAN OSA versions 8.0U2 and 8.0U2b                   | lsomadvconfig                         | Normal        |
|   Advisor                                                                                          | advisor                               | Normal        |
|   Audit CEIP Collected Data                                                                        | auditceip                             | Normal        |
|   CNS Critical Alert - Patch available with important fixes                                        | cnspatchalert                         | Normal        |
|   Controller with pass-through and RAID disks                                                      | mixedmode                             | Normal        |
|   Coredump partition size check                                                                    | coredumpartitionsize                  | Normal        |
|   Critical vSAN patch is available for vSAN ESA                                                    | laurelalert                           | Normal        |
|   Customer advisory for HPE Smart Array                                                            | vsanhpesmartarraytest                 | Normal        |
|   Disks usage on storage controller                                                                | diskusage                             | Normal        |
|   Dual encryption applied to VMs on vSAN                                                           | dualencryption                        | Normal        |
|   ESXi system logs stored outside vSAN datastore                                                   | scratchconfig                         | Normal        |
|   End of general support for lower vSphere version                                                 | eoscheck                              | Normal        |
|   Fix is available for a critical vSAN software defect with Guest Trim/Unmap configuration enabled | unmaptest                             | Normal        |
|   HPE NVMe Solid State Drives - critical firmware upgrade required                                 | vsanhpefwtest                         | Normal        |
|   HPE SAS Solid State Drive                                                                        | hpesasssd                             | Normal        |
|   Hardware compatibility issue for witness appliance                                               | witnesshw                             | Normal        |
|   Important patch available for vSAN issue                                                         | fsvlcmpatchalert                      | Normal        |
|   Maximum host number in vSAN over RDMA                                                            | rdmanodesalert                        | Normal        |
|   Multiple VMs share the same vSAN home namespace                                                  | vmns                                  | Normal        |
|   Patch available for critical vSAN issue for All-Flash clusters with deduplication enabled        | patchalert                            | Normal        |
|   Physical network adapter link speed consistency                                                  | pnicconsistent                        | Normal        |
|   Proper vSAN network traffic shaping policy is configured                                         | dvsportspeedlimit                     | Normal        |
|   RAID controller configuration                                                                    | controllercacheconfig                 | Normal        |
|   Thick-provisioned VMs on vSAN                                                                    | thickprovision                        | Normal        |
|   Update release available for vSAN ESA                                                            | marigoldalert                         | Normal        |
|   Update release available for vSAN ESA                                                            | lavenderalert                         | Normal        |
|   Upgrade vSphere CSI driver with caution                                                          | csidriver                             | Normal        |
|   VM storage policy is not-recommended                                                             | policyupdate                          | Normal        |
|   VMware vCenter up to date                                                                        | vcuptodate                            | Normal        |
|   vSAN Advanced Configuration Check for Urgent vSAN ESA Patch                                      | zdomadvcfgenabled                     | Normal        |
|   vSAN Critical Alert - Release available for critical vSAN issue                                  | lilypatchalert                        | Normal        |
|   vSAN Support Insight                                                                             | vsanenablesupportinsight              | Normal        |
|   vSAN and VMFS datastores on a Dell H730 controller with the lsi_mr3 driver                       | mixedmodeh730                         | Normal        |
|   vSAN configuration check for large scale cluster                                                 | largescalecluster                     | Normal        |
|   vSAN configuration for LSI-3108 based controller                                                 | h730                                  | Normal        |
|   vSAN critical alert regarding a potential data inconsistency                                     | lilacdeltacomponenttest               | Normal        |
|   vSAN management server system resource check                                                     | vsanmgmtresource                      | Normal        |
|   vSAN max component size                                                                          | smalldiskstest                        | Normal        |
|   vSAN storage policy compliance up-to-date                                                        | objspbm                               | Normal        |
|   vSAN v1 disk in use                                                                              | v1diskcheck                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Performance service                                                                                |                                       |               |
|   All hosts contributing stats                                                                     | hostsmissing                          | Normal        |
|   Network diagnostic mode                                                                          | diagmode                              | Normal        |
|   Performance data collection                                                                      | collection                            | Normal        |
|   Performance service status                                                                       | perfsvcstatus                         | Normal        |
|   Stats DB object                                                                                  | statsdb                               | Normal        |
|   Stats DB object conflicts                                                                        | renameddirs                           | Normal        |
|   Stats primary election                                                                           | masterexist                           | Normal        |
|   Verbose mode                                                                                     | verbosemode                           | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Physical disk                                                                                      |                                       |               |
|   Component metadata health                                                                        | componentmetadata                     | Normal        |
|   Congestion                                                                                       | physdiskcongestion                    | Normal        |
|   Disk capacity                                                                                    | physdiskcapacity                      | Normal        |
|   Disks usage on storage controller                                                                | diskusage                             | Normal        |
|   Memory pools (heaps)                                                                             | lsomheap                              | Normal        |
|   Memory pools (slabs)                                                                             | lsomslab                              | Normal        |
|   Operation health                                                                                 | physdiskoverall                       | Normal        |
|   Physical disk component utilization                                                              | physdiskcomplimithealth               | Normal        |
|   Physical disk health retrieval issues                                                            | physdiskhostissues                    | Normal        |
|   Storage Vendor Reported Drive Health                                                             | phmhealth                             | Normal        |
|   vSAN max component size                                                                          | smalldiskstest                        | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| Stretched cluster                                                                                  |                                       |               |
|   Hardware compatibility issue for witness appliance                                               | witnessupgissue                       | Normal        |
|   Invalid preferred fault domain on witness host                                                   | witnesspreferredfaultdomaininvalid    | Normal        |
|   Invalid unicast agent                                                                            | hostwithinvalidunicastagent           | Normal        |
|   No disk claimed on witness host                                                                  | witnesswithnodiskmapping              | Normal        |
|   Preferred fault domain unset                                                                     | witnesspreferredfaultdomainnotexist   | Normal        |
|   Shared witness per cluster component limit scaled down                                           | sharedwitnesscomponentlimitscaleddown | Normal        |
|   Site latency health                                                                              | siteconnectivity                      | Normal        |
|   Unexpected number of data node in shared witness cluster                                         | sharedwitnessclusterdatahostnumexceed | Normal        |
|   Unexpected number of fault domains                                                               | clusterwithouttwodatafaultdomains     | Normal        |
|   Unicast agent configuration inconsistent                                                         | clusterwithmultipleunicastagents      | Normal        |
|   Unicast agent not configured                                                                     | hostunicastagentunset                 | Normal        |
|   Unsupported host version                                                                         | hostwithnostretchedclustersupport     | Normal        |
|   Witness appliance upgrade to vSphere 7.0 or higher with caution                                  | witnessupgrade                        | Normal        |
|   Witness host fault domain misconfigured                                                          | witnessfaultdomaininvalid             | Normal        |
|   Witness host not found                                                                           | clusterwithoutonewitnesshost          | Normal        |
|   Witness host within VMware vCenter cluster                                                       | witnessinsidevccluster                | Normal        |
|   Witness node is managed by vSphere Lifecycle Manager                                             | vlcmwitnessconfig                     | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| com.vmware.vsan.health.test.cloudhealth                                                            |                                       |               |
|   Patch available for critical vSAN issue for All-Flash clusters with deduplication enabled        | patchalert                            | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.checksummismatchcount.testname                           | checksummismatchcount                 | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.cloudhealthconfig.testname                               | vumconfig                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.cloudhealthrecommendation.testname                       | vumrecommendation                     | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.clusternotfound.testname                                 | clusternotfound                       | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.pausecount.testname                                      | pausecount                            | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.releasecataloguptodate.testname                          | releasecataloguptodate                | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxcrcerr.testname                                        | rxcrcerr                              | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxerr.testname                                           | rxerr                                 | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxfifoerr.testname                                       | rxfifoerr                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxmisserr.testname                                       | rxmisserr                             | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.rxoverr.testname                                         | rxoverr                               | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.txcarerr.testname                                        | txcarerr                              | Normal        |
|   com.vmware.vsan.health.test.cloudhealth.txerr.testname                                           | txerr                                 | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
| vSAN iSCSI target service                                                                          |                                       |               |
|   Home object                                                                                      | iscsihomeobjectstatustest             | Normal        |
|   LUN runtime health                                                                               | iscsilunruntimetest                   | Normal        |
|   Network configuration                                                                            | iscsiservicenetworktest               | Normal        |
|   Service runtime status                                                                           | iscsiservicerunningtest               | Normal        |
+----------------------------------------------------------------------------------------------------+---------------------------------------+---------------+
>


To silence a check, pass its Health Check Id to vsan.health.silent_health_check_configure with -a. For example, to silence the "SCSI controller is VMware certified" (controlleronhcl) check:

vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01 -a controlleronhcl

> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm02 -a controlleronhcl
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Successfully update silent health check list for mgm02
> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm -a controlleronhcl
vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01  vcsa01.ntitta.local/Cloud-DC01/computers/mgm02
> vsan.health.silent_health_check_configure vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01 -a controlleronhcl
/opt/vmware/rvc/lib/rvc/lib/vsanhealth.rb:108: warning: calling URI.open via Kernel#open is deprecated, call URI.open directly or use URI#open
Successfully update silent health check list for mgm-01
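
To confirm the change, re-run vsan.health.silent_health_check_status against the cluster; the controlleronhcl check should now be listed as silenced rather than Normal:

vsan.health.silent_health_check_status vcsa01.ntitta.local/Cloud-DC01/computers/mgm-01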

Nested ESXi vSAN sample script (PowerCLI)

#! /usr/bin/pwsh
$user = '[email protected]'
# Import the vCenter credential (exported earlier with Export-Clixml) and extract the plain-text password
$encryptedPassword = Import-Clixml -Path '/glabs/spec/vcsa_admin.xml'
$decryptedPassword = $encryptedPassword.GetNetworkCredential().Password



# Function to check whether all vCenter services are running
function Test-VCenterServicesRunning {
    $serviceInstance = Connect-VIServer -Server vcsa01.glabs.local -Username $user -Password $decryptedPassword -ErrorAction SilentlyContinue

    if ($serviceInstance -eq $null) {
        return $false
    }

    $serviceContent = Get-View -Id $serviceInstance.ExtensionData.content.ServiceInstance
    $allRunning = $true

    # 'return' inside a ForEach-Object script block only exits that block, so use a foreach loop instead
    foreach ($service in $serviceContent.serviceInfo.service) {
        if ($service.running -eq $false) {
            $allRunning = $false
            break
        }
    }

    Disconnect-VIServer -Server $serviceInstance -Confirm:$false
    return $allRunning
}

# Wait for vCenter services to start
Write-Host "Waiting for vCenter services to start..."

while (-not (Test-VCenterServicesRunning)) {
    Start-Sleep -Seconds 5
}

Write-Host "vCenter services are running. Connecting to vCenter..."




#connect to vc and add hosts
Connect-viserver vcsa01.glabs.local -User $user -Password $decryptedPassword

#create datacenter and cluster
New-Datacenter -Location Datacenters  -Name cloud
New-Cluster -Name "management" -Location "cloud"

Add-VMHost -Name esxi01.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
Add-VMHost -Name esxi02.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
Add-VMHost -Name esxi03.Glabs.local -Location management -user 'root' -password 'bAdP@$$' -Force -Confirm:$false 
get-vmhost | Get-VMHostStorage -RescanAllHba -RescanVmfs


$cache = 'mpx.vmhba0:C0:T1:L0'
$data = 'mpx.vmhba0:C0:T2:L0'

#mark the cache disk as SSD on each nested host
foreach ($esxName in 'esxi01.glabs.local', 'esxi02.glabs.local', 'esxi03.glabs.local') {
    $esx = Get-VMHost -Name $esxName
    $storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
    $lun = $storSys.StorageDeviceInfo.ScsiLun | Where-Object { $_.CanonicalName -eq $cache }
    $storSys.MarkAsSsd($lun.Uuid)
}

#add vSAN service to portgroup
$VMKNetforVSAN = "iscsi_1"
Get-VMHostNetworkAdapter -VMKernel | Where {$_.PortGroupName -eq $VMKNetforVSAN }|Set-VMHostNetworkAdapter -VsanTrafficEnabled $true -Confirm:$false



#Create vSAN cluster
get-cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue

#wait for previous task to finish
start-sleep 60

#add disk groups
New-VsanDiskGroup -VMHost esxi01.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi02.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi03.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data

#mount nfs 
get-vmhost | New-Datastore -Nfs -Name iso -Path /volume1/iso -NfsHost iso.glabs.local -ReadOnly

#no idea why the above does not work for vSphere 7, but running the below manually on a deployed env preps it for vSAN; don't touch it if it ain't broken
get-cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue


disconnect-viserver -confirm:$false
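
A quick way to sanity-check the result; a sketch assuming the PowerCLI vSAN cmdlets (VMware.VimAutomation.Storage) are available in the session and the default vSAN datastore name is in use:

#verify the disk groups and the vSAN datastore
Connect-VIServer vcsa01.glabs.local -User $user -Password $decryptedPassword
Get-VsanDiskGroup
Get-Datastore -Name vsanDatastore | Select-Object Name, CapacityGB, FreeSpaceGB
Disconnect-VIServer -Confirm:$false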

vRA8: Sample blueprint to deploy a Windows AD server with Cloudinit

The blueprint below attaches the machine to an existing NSX network and uses cloudConfig (runcmd) to install AD DS and promote the VM to a domain controller.

formatVersion: 1
inputs: {}
resources:
  Cloud_NSX_Network_1:
    type: Cloud.NSX.Network
    properties:
      networkType: existing
      constraints:
        - tag: net:vlan7
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      imageRef: w22-cloudinit-instaclone/base
      cpuCount: 2
      totalMemoryMB: 3024
      networks:
        - network: ${resource.Cloud_NSX_Network_1.id}
          assignment: static
      cloudConfig: |
        #cloud-config
        users: 
          - 
            name: labadmin
            primary_group: administrators
            passwd: bAdP@$$  
            inactive: false            
          - 
            name: tseadmin
            primary_group: administrators
            passwd: bAdP@$$
            inactive: false
          -
            name: administrator
            primary_group: administrators
            passwd: bAdP@$$
            inactive: false
        set_hostname: dc01
        runcmd: 
         - powershell.exe net user Administrator /passwordreq:yes
         - powershell.exe Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
         - powershell.exe Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath "C:\Windows\NTDS" -DomainMode "WinThreshold" -DomainName "glabs.local" -DomainNetbiosName "GS" -ForestMode "WinThreshold" -InstallDns:$true -LogPath "C:\Windows\NTDS" -NoRebootOnCompletion:$false -SysvolPath "C:\Windows\SYSVOL" -Force:$true -SafeModeAdministratorPassword (ConvertTo-SecureString -AsPlainText "bAdP@$$" -Force)

IP ALLOCATE failed: Action run failed with the following error: (‘Error allocating in network or range: Failed to generate hostname. DNS suffix missing’, {})

Earlier this week, I was trying to integrate my test vRA deployment with Infoblox and all deployments failed with the error:

IP ALLOCATE failed: Action run failed with the following error: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

To find the underlying error, go to the Extensibility tab > Action Runs, change the filter from "user runs" to "all runs", and look for a failed Infoblox_AllocateIP action:

[2023-05-04 15:01:07,914] [ERROR] - Error allocating in network or range: Failed to generate hostname. DNS suffix missing
[2023-05-04 15:01:07,914] [ERROR] - Failed to allocate from range network/ZG5zLm5ldHdvcmskMTAuMTA5LjI0LjAvMjEvMA:10.109.24.0/21/default: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})
[2023-05-04 15:01:07,914] [ERROR] - No more ranges. Raising last error
('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})
Finished running action code.
Exiting python process.

Traceback (most recent call last):
  File "/polyglot/function/source.py", line 171, in allocate_in_network_or_range
    host_record = HostRecordAllocation(range_id, resource, allocation, network_view, next_available_ip, context, endpoint)
  File "/polyglot/function/source.py", line 457, in __init__
    super().__init__(range_id, resource, allocation, network_view, next_available_ip, context, endpoint)
  File "/polyglot/function/source.py", line 392, in __init__
    self.hostname = generate_hostname(self.resource, self.range_id, self.allocation, self.context, self.endpoint["id"]) if self.dns_enabled else self.resource["name"]
  File "/polyglot/function/source.py", line 307, in generate_hostname
    raise Exception("Failed to generate hostname. DNS suffix missing")
Exception: Failed to generate hostname. DNS suffix missing

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 146, in <module>
    main()
  File "main.py", line 83, in main
    result = prepare_inputs_and_invoke(inputs)
  File "main.py", line 119, in prepare_inputs_and_invoke
    res = handler(ctx, inputs)
  File "/polyglot/function/source.py", line 29, in handler
    return ipam.allocate_ip()
  File "/polyglot/function/vra_ipam_utils/ipam.py", line 91, in allocate_ip
    result = self.do_allocate_ip(auth_credentials, cert)
  File "/polyglot/function/source.py", line 51, in do_allocate_ip
    raise e
  File "/polyglot/function/source.py", line 42, in do_allocate_ip
    allocation_result.append(allocate(resource, allocation, self.context, self.inputs["endpoint"]))
  File "/polyglot/function/source.py", line 78, in allocate
    raise last_error
  File "/polyglot/function/source.py", line 70, in allocate
    return allocate_in_network(range_id, resource, allocation, context, endpoint)
  File "/polyglot/function/source.py", line 155, in allocate_in_network
    endpoint)
  File "/polyglot/function/source.py", line 210, in allocate_in_network_or_range
    raise Exception(f"Error allocating in network or range: {str(e)}", result)
Exception: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

Python process exited.

There are 2 ways to remediate this.

Workaround 1 (if you do not care about adding the domain suffix to the records created on Infoblox):
Update your blueprint and add "Infoblox.IPAM.Network.enableDns: false" under properties for every resource of type Cloud.vSphere.Machine:

resources:
  vCenterServer:
    type: Cloud.vSphere.Machine
    properties:
      Infoblox.IPAM.Network.enableDns: false
      name: Test
      imageRef: ${input.img_image_url}
      flavor: ${input.flavor}

The above deployment ignores the DNS suffix and creates the record using the custom naming template defined in the project (host name alone).

Workaround 2: If you do want the DNS records to be created with hostname + domain, then add the below to the blueprint:

resources:
  vCenterServer:
    type: Cloud.vSphere.Machine
    properties:
      Infoblox.IPAM.Network.dnsSuffix: lab.local
      name: Test
      imageRef: ${input.img_image_url}
      flavor: ${input.flavor}

With the above, the deployment appends the domain "lab.local" to the hostname, and the respective DNS records are created.

It took me a long time to figure this out; hopefully this saves you some time!

Cheers!

Troubleshooting SaltStack Config (Aria Config) Minion Deployment Failure

When troubleshooting a minion deployment failure, I recommend commenting out (hashing) the Salt part of the blueprint and running it as a day-2 task instead. This saves significant deployment time and lets you focus on the minion deployment issue alone.

So in my scenario, I finished the deployment and ran the Salt minion deployment as a day-2 task, which failed:

Navigate to the Aria Config (SaltStack Config) web UI > Activity > Jobs > Completed, look for a deploy.minion task, click the JID (the long number in the right column of the job table), and then click Raw:

This tells us that the bootstrap script being executed failed, hence "Exit code: 1".

SSH to the Salt master and navigate to /etc/salt/cloud.profiles.d; you should see a .conf file named after the vRA deployment. In my case it was the second one listed.
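
For example, listing the directory shows the generated profiles:

ls /etc/salt/cloud.profiles.d/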

At this stage, you can manually invoke salt-cloud with the debug flag so that you get real-time logging as the script connects to the remote host and bootstraps the minion.

The basic syntax is

salt-cloud -p profile_name VM_name -l debug

in my case:

salt-cloud -p ssc_Router-mcm770988a1-d535-4b24-b78b-2318f14911cd_profile test -l debug

Note: do not include the .conf extension in the profile name, and the VM_name can be anything; it does not really matter in this scenario.

Typically, you want to look at the very end of the output for the errors. In my case it was bad DNS:

[email protected]'s password: [DEBUG   ] [email protected]'s password:

[sudo] password for labadmin: [DEBUG   ] [sudo] password for labadmin:

 *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res[DEBUG   ]  *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res
tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...[DEBUG   ] Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...
Connection to 10.109.30.5 closed.
[DEBUG   ] Connection to 10.109.30.5 closed.

 *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
[DEBUG   ]  *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
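
Since every repository hostname fails with 'Temporary failure resolving', the quickest confirmation is a name-resolution check from the minion itself. A minimal sketch, assuming an Ubuntu 22.04 minion with systemd-resolved; the interface name (ens192) and the resolver IP (10.109.30.2) are placeholders for my lab, adjust to your environment:

# Show which resolvers the VM is actually using
resolvectl status | grep -A2 'DNS Servers'

# Check that the repos the bootstrap script needs actually resolve
for host in in.archive.ubuntu.com repo.saltproject.io packages.microsoft.com; do
    getent hosts "$host" >/dev/null && echo "OK   $host" || echo "FAIL $host"
done

# If resolution fails, point the interface at a working DNS server (placeholder values here),
# then re-run the salt-cloud deployment
sudo resolvectl dns ens192 10.109.30.2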

The same can be done for Windows minion deployment troubleshooting too!

VMware PowerCLI installation fails

VMware PowerCLI installation fails with the below error:

PS C:\Users\Administrator> Install-Module -Name VMware.PowerCLI

NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet
provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\Administrator\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by
running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install
and import the NuGet provider now?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y

Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y
PackageManagement\Install-Package : The module 'VMware.VimAutomation.Sdk' cannot be installed or updated because the
authenticode signature of the file 'VMware.VimAutomation.Sdk.cat' is not valid.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:1809 char:21
+ ...          $null = PackageManagement\Install-Package @PSBoundParameters
+                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.Power....InstallPackage:InstallPackage) [Install-Package], Exception
    + FullyQualifiedErrorId : InvalidAuthenticodeSignature,ValidateAndGet-AuthenticodeSignature,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackage

Workaround: install PowerCLI, skipping the publisher check:

Install-Module VMware.PowerCLI -Scope AllUsers -Force -SkipPublisherCheck -AllowClobber

Cause: the certificate used to sign the PowerCLI modules was replaced with a new one from a new publisher, so the Authenticode/publisher check against the previously installed modules fails.
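
If you want to see why the check trips before forcing the install, inspect the signer on the module files already on disk. A rough sketch; the module path assumes the default AllUsers location on Windows PowerShell 5.1, adjust as needed:

# Path below is an assumption (default AllUsers module location); point it at wherever the old modules live
Get-ChildItem 'C:\Program Files\WindowsPowerShell\Modules\VMware.VimAutomation.Sdk' -Recurse -Filter *.psd1 |
    ForEach-Object { Get-AuthenticodeSignature -FilePath $_.FullName } |
    Select-Object Path, Status, @{ Name = 'Signer'; Expression = { $_.SignerCertificate.Subject } }

# After installing with -SkipPublisherCheck, confirm the module is visible and importable
Get-Module -ListAvailable VMware.PowerCLI | Select-Object Name, Version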