VMware Tools 10.3.x installation fails on Server Core/other OSes with "failed to run CustomAction VM_PostInstall scripts"

The VMinst.log/toolsinst.log did not have much information on the failure, so start by enabling MSI debug logging:

msiexec /i "C:\MyPackage\toolsxxxx.msi" /L*V "C:\log\msi.log"
Logs: 
	MSI (s) (FC:F4) [08:04:42:754]: PROPERTY CHANGE: Adding VM_PostInstall.A05FAB36_E570_4B23_8805_3633A16E8D19 property. Its value is '"C:\ProgramData\VMware\VMware CAF\pme\install\postInstall.bat" "C:\Program Files\VMware\VMware Tools\VMware CAF\pme\" "C
	:\ProgramData\VMware\VMware CAF\pme\"'.
	Action ended 8:04:42: VM_PostInstall_SD.A05FAB36_E570_4B23_8805_3633A16E8D19. Return value 1.
	MSI (s) (FC:F4) [08:04:42:754]: Skipping action: VM_StopVMwareProcs.869A7E00_8665_0000_83A8_EF0F76CF0001 (condition is false)
	MSI (s) (FC!8C) [08:04:46:207]: Closing MSIHANDLE (60) of type 790531 for thread 4492
	CustomAction VM_PostInstall.A05FAB36_E570_4B23_8805_3633A16E8D19 returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
	MSI (s) (FC:BC) [08:04:46:207]: Closing MSIHANDLE (58) of type 790536 for thread 5876
	Action 8:04:46: Rollback. Rolling back action:
	Rollback: VM_PostInstall.A05FAB36_E570_4B23_8805_3633A16E8D19
	MSI (s) (FC:F4) [08:04:46:238]: Executing op: ActionStart(Name=VM_PostInstall.A05FAB36_E570_4B23_8805_3633A16E8D19,,)
	MSI (s) (FC:F4) [08:04:46:238]: Executing op: ProductInfo(ProductKey={F32C4E7B-2BF8-4788-8408-824C6896E1BB},ProductName=VMware Tools,PackageName={F32C4E7B-2BF8-4788-8408-824C6896E1BB}.msi,Language=1033,Version=167968773,Assignment=1,ObsoleteArg=0,ProductIcon=VmwareIcon,,PackageCode={66C1BF82-7ADE-472F-B0AE-1E6A85835452},,,InstanceType=0,LUASetting=0,RemoteURTInstalls=0,ProductDeploymentFlags=3)
	Rollback: Copying new files

To work around this, run the MSI installer with the VMware CAF component excluded:

setup64.exe /S /v"/qn ADDLOCAL=ALL REMOVE=CAF /l*v C:\temp\vmtools-install.log" 

For reference, the full VMware Tools install command line:

setup64.exe /S /v"/qn REBOOT=R ADDLOCAL=Audio,BootCamp,Hgfs,FileIntrospection,NetworkIntrospection,Perfmon,TrayIcon,Drivers,MemCtl,Mouse,MouseUsb,PVSCSI,EFIFW,SVGA,VMCI,VMXNet3,VSS,Toolbox,Plugins,Unity REMOVE=CAF,Audio,BootCamp,Hgfs,FileIntrospection,NetworkIntrospection,Perfmon,TrayIcon,Unity /l*v C:\temp\vmware1052_install.log" 

VCSA: insufficient inodes can result in service crashes with "No space left on device" errors

Version: VMware VirtualCenter 6.5.0 build-8024368

vpxd logs:
2019-02-08T22:44:53.609Z info vpxd[7FB5996C4800] [Originator@6876 sub=Main] Account name: root
2019-02-08T22:44:53.609Z info vpxd[7FB5996C4800] [Originator@6876 sub=vpxUtil] [LoadMachineInstanceUuid] Local instance UUID: 6f0cbee2-56b1-43d3-b13a-348208627e07
2019-02-08T22:44:53.614Z error vpxd[7FB5996C4800] [Originator@6876 sub=Main] Init failed. Exception: N7Vmacore15SystemExceptionE(No space left on device)
--> [context]zKq7AVECAAAAADBxegAOdnB4ZAAAeF4rbGlidm1hY29yZS5zbwAAEBcbAMppGAC8HisAtJgaAH6mGgG4w6d2cHhkAAEBxqcBCsinAeEoVQG9+1QBap9TAuAFAmxpYmMuc28uNgABdZdT[/context]
2019-02-08T22:44:53.615Z error vpxd[7FB5996C4800] [Originator@6876 sub=Default] Failed to intialize VMware VirtualCenter. Shutting down
2019-02-08T22:44:53.615Z info vpxd[7FB5996C4800] [Originator@6876 sub=SupportMgr] Wrote uptime information
2019-02-08T22:44:53.616Z error vpxd[7FB5996C4800] [Originator@6876 sub=Default] Alert:false@ bora/vpx/vpxd/util/vdb.cpp:509
--> Backtrace:
--> [backtrace begin] product: VMware VirtualCenter, version: 6.5.0, build: build-8024368, tag: vpxd, cpu: x86_64, os: linux, buildType: release
--> backtrace[00] libvmacore.so[0x002B5E90]: Vmacore::System::Stacktrace::CaptureFullWork(unsigned int)
--> backtrace[01] libvmacore.so[0x001B1804]: Vmacore::System::SystemFactoryImpl::CreateBacktrace(Vmacore::Ref<Vmacore::System::Backtrace>&)
--> backtrace[02] libvmacore.so[0x00178BDB]: Vmacore::Service::Alert(char const*, char const*, int)
--> backtrace[03] vpxd[0x00A367FF]
--> backtrace[04] vpxd[0x0054E418]
--> backtrace[05] vpxd[0x0054FC2F]
--> backtrace[06] vpxd[0x00539F6A]
--> backtrace[07] libc.so.6[0x000205E0]
--> backtrace[08] vpxd[0x00539775]
--> [backtrace end]
2019-02-08T22:44:53.616Z info vpxd[7FB5996C4800] [Originator@6876 sub=vpxdVdb] Registry Item DB 5 value is ''
2019-02-08T22:44:53.616Z info vpxd[7FB5996C4800] [Originator@6876 sub=vpxdVdb] Setting VDB delay statements queue size to 11000 transactions for 11 GB RAM dedicated to vpxd.
2019-02-08T22:44:53.616Z info vpxd[7FB5996C4800] [Originator@6876 sub=vpxdVdb] [VpxdVdb::SetDBType] Logging in to DSN: VMware VirtualCenter with username vc
2019-02-08T22:44:53.617Z error vpxd[7FB5996C4800] [Originator@6876 sub=CryptUtil] [static bool Vpx::Common::CryptUtil::UnmungePasswordToBuffer(VpxDecrypter*, const string&, char*, size_t)] invalid decrypter
2019-02-08T22:44:53.617Z error vpxd[7FB5996C4800] [Originator@6876 sub=Default] [Vdb::IsRecoverableErrorCode] Unable to recover from 00000:0
2019-02-08T22:44:53.617Z error vpxd[7FB5996C4800] [Originator@6876 sub=vpxdVdb] [VpxdVdb::SetDBType]: Database error: ODBC error: (00000) -
2019-02-08T22:44:53.617Z error vpxd[7FB5996C4800] [Originator@6876 sub=Default] Error getting configuration info from the database
2019-02-08T22:44:53.617Z warning vpxd[7FB5996C4800] [Originator@6876 sub=Main] Database not initialized. Nothing to unlock
2019-02-08T22:44:53.617Z info vpxd[7FB5996C4800] [Originator@6876 sub=Default] Forcing shutdown of VMware VirtualCenter now

Looking at the file system:

root@cmtolpvctrapp24 [ / ]# df -h
Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                   16G     0   16G   0% /dev
tmpfs                                      16G  8.0K   16G   1% /dev/shm
tmpfs                                      16G  656K   16G   1% /run
tmpfs                                      16G     0   16G   0% /sys/fs/cgroup
/dev/sda3                                  11G  8.1G  2.1G  80% /
tmpfs                                      16G  884K   16G   1% /tmp
/dev/sda1                                 120M   28M   87M  24% /boot
/dev/mapper/log_vg-log                     25G  1.9G   22G   9% /storage/log
/dev/mapper/dblog_vg-dblog                 25G  173M   24G   1% /storage/dblog
/dev/mapper/db_vg-db                       25G  396M   23G   2% /storage/db
/dev/mapper/seat_vg-seat                  197G  5.8G  181G   4% /storage/seat
/dev/mapper/netdump_vg-netdump            9.8G   23M  9.2G   1% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy       25G   45M   24G   1% /storage/autodeploy
/dev/mapper/updatemgr_vg-updatemgr         99G  435M   93G   1% /storage/updatemgr
/dev/mapper/imagebuilder_vg-imagebuilder   25G   45M   24G   1% /storage/imagebuilder
/dev/mapper/core_vg-core                   99G  188M   94G   1% /storage/core

Looking at inodes:

root@cmtolpvctrapp24 [ / ]# df -i
Filesystem                                 Inodes  IUsed    IFree IUse% Mounted on
devtmpfs                                  4115632    531  4115101    1% /dev
tmpfs                                     4117336      4  4117332    1% /dev/shm
tmpfs                                     4117336    690  4116646    1% /run
tmpfs                                     4117336     16  4117320    1% /sys/fs/cgroup
/dev/sda3                                  712704 712704        0  100% /
tmpfs                                     4117336     77  4117259    1% /tmp
/dev/sda1                                   32768    305    32463    1% /boot
/dev/mapper/log_vg-log                    1638400  14332  1624068    1% /storage/log
/dev/mapper/dblog_vg-dblog                1638400     22  1638378    1% /storage/dblog
/dev/mapper/db_vg-db                      1638400   2854  1635546    1% /storage/db
/dev/mapper/seat_vg-seat                 13107200   4430 13102770    1% /storage/seat
/dev/mapper/netdump_vg-netdump             655360     11   655349    1% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy      1638400     13  1638387    1% /storage/autodeploy
/dev/mapper/updatemgr_vg-updatemgr        6553600    273  6553327    1% /storage/updatemgr
/dev/mapper/imagebuilder_vg-imagebuilder  1638400     14  1638386    1% /storage/imagebuilder
/dev/mapper/core_vg-core                  6553600     15  6553585    1% /storage/core
Now determine what is consuming the most inodes:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
    875 /usr/bin
   1417 /opt/vmware/lib/python2.7/test
   1584 /usr/lib/vmware-vsphere-ui/server/work/deployer/s/global/39/0/h5ngc.war/resources/libs/angular-i18n
   5362 /usr/share/man/man3
607550 /var/spool/mqueue   <----------------------culprit here

To resolve this, clear out the files consuming the inodes:

 find /var/spool/mqueue/ -type f -print0 | xargs  -0 rm -f
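As a side note, the -print0 / -0 pairing is what makes the cleanup safe for odd filenames. A minimal local illustration of both the counting and the cleanup steps, using a scratch directory (not a real VCSA path):

```shell
# Scratch demo only: /tmp/inode-demo is a made-up path, not a VCSA path.
mkdir -p /tmp/inode-demo/mqueue
for i in 1 2 3 4 5; do touch "/tmp/inode-demo/mqueue/qf $i"; done

# Same counting technique as above, scoped to the demo tree
find /tmp/inode-demo -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

# Same cleanup technique: -print0 / -0 copes with spaces in filenames
find /tmp/inode-demo/mqueue/ -type f -print0 | xargs -0 rm -f
find /tmp/inode-demo/mqueue/ -type f | wc -l   # 0 files remain
```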

Check the available inodes to confirm everything is cleared up.

root@cmtolpvctrapp24 [ / ]# df -i
Filesystem                                 Inodes  IUsed    IFree IUse% Mounted on
devtmpfs                                  4115632    531  4115101    1% /dev
tmpfs                                     4117336      4  4117332    1% /dev/shm
tmpfs                                     4117336    690  4116646    1% /run
tmpfs                                     4117336     16  4117320    1% /sys/fs/cgroup
/dev/sda3                                  712704 105154   607550   15% /
tmpfs                                     4117336     77  4117259    1% /tmp
/dev/sda1                                   32768    305    32463    1% /boot
/dev/mapper/log_vg-log                    1638400  14332  1624068    1% /storage/log
/dev/mapper/dblog_vg-dblog                1638400     22  1638378    1% /storage/dblog
/dev/mapper/db_vg-db                      1638400   2854  1635546    1% /storage/db
/dev/mapper/seat_vg-seat                 13107200   4430 13102770    1% /storage/seat
/dev/mapper/netdump_vg-netdump             655360     11   655349    1% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy      1638400     13  1638387    1% /storage/autodeploy
/dev/mapper/updatemgr_vg-updatemgr        6553600    273  6553327    1% /storage/updatemgr
/dev/mapper/imagebuilder_vg-imagebuilder  1638400     14  1638386    1% /storage/imagebuilder
/dev/mapper/core_vg-core                  6553600     15  6553585    1% /storage/core

vCSA proxy configuration

root@is-dhcp34-161 [ ~ ]# cat /etc/sysconfig/proxy

# Enable a generation of the proxy settings to the profile.
# This setting allows to turn the proxy on and off while
# preserving the particular proxy setup.
#
PROXY_ENABLED="no"

# Some programs (e.g. wget) support proxies, if set in
# the environment.
# Example: HTTP_PROXY="http://proxy.provider.de:3128/"
HTTP_PROXY=""

# Example: HTTPS_PROXY="https://proxy.provider.de:3128/"
HTTPS_PROXY=""

# Example: FTP_PROXY="http://proxy.provider.de:3128/"
FTP_PROXY=""

# Example: GOPHER_PROXY="http://proxy.provider.de:3128/"
GOPHER_PROXY=""

# Example: SOCKS_PROXY="socks://proxy.example.com:8080"
SOCKS_PROXY=""

# Example: SOCKS5_SERVER="office-proxy.example.com:8881"
SOCKS5_SERVER=""

# Example: NO_PROXY="www.me.de, do.main, localhost"
NO_PROXY="localhost, 127.0.0.1"
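For what it's worth, this file is plain shell variable assignments, so you can sanity-check edits by sourcing it and inspecting the result. A quick sketch against a throwaway copy (the path and values below are made up for the demo):

```shell
# Throwaway copy of a proxy config; /tmp/proxy.demo is not a real VCSA path
cat > /tmp/proxy.demo <<'EOF'
PROXY_ENABLED="yes"
HTTP_PROXY="http://proxy.example.com:3128/"
NO_PROXY="localhost, 127.0.0.1"
EOF

# Source it the way login shells consume /etc/sysconfig/proxy,
# then export only when the master switch is on
. /tmp/proxy.demo
if [ "$PROXY_ENABLED" = "yes" ]; then
    export HTTP_PROXY NO_PROXY
fi
echo "$HTTP_PROXY"
```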

Deploy vSphere replication using OVF tools

The web client/client integration plugin is such a pain to get working, especially when you have to rebuild one of the vR appliances.

In this blog, I will show you an easier way to deploy the vR OVFs to vCenter.

To start off, you will need a copy of ovftool. You can download it from the my.vmware portal: https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=OVFTOOL420

I would recommend version 4.2.0 or above to avoid running into deployment bugs.

Installation: 

On an elevated command prompt, change to the OVF Tool directory:

cd "\Program Files\VMware\VMware OVF Tool\"

Use the below syntax to deploy the vR appliance (replace the placeholder values with those from your environment):

ovftool --acceptAllEulas -ds="DATASTORE_NAME" -n="SPECIFY VRMS NAME" --net:"Management Network"="PORT GROUP NAME" --prop:"password"="VRMS ROOT PASSWORD" --prop:"ntpserver"="NTP SERVER IP OR FQDN" --prop:"vami.ip0.vSphere_Replication_Appliance"="SPECIFY VRMS SERVER IP" --vService:installation=com.vmware.vim.vsm:extension_vservice <PATH>\vSphere_Replication_OVF10.ovf vi://administrator@vsphere.local:VMWARE123!@VCENTER IP/?ip=HOST IP

Note: VMWARE123! is to be replaced with the password for the administrator@vsphere.local account.



Example (all values below are illustrative only, not from a real environment):

ovftool --acceptAllEulas -ds="Datastore01" -n="vR-Appliance" --net:"Management Network"="VM Network" --prop:"password"="VMware123!" --prop:"ntpserver"="time.example.com" --prop:"vami.ip0.vSphere_Replication_Appliance"="192.168.10.50" --vService:installation=com.vmware.vim.vsm:extension_vservice C:\ovf\vSphere_Replication_OVF10.ovf "vi://administrator@vsphere.local:VMware123!@192.168.10.10/?ip=192.168.10.20"

vpxd crashes at bora/vpx/vpxd/vpxservices/alarm/PredefinedAlarmsManager.cpp:203

vpxd.log:
2019-02-20T02:09:33.982Z info vpxd[07142] [Originator@6876 sub=vpxLro opID=lro-5-5364f5ed] [VpxLRO] -- BEGIN lro-5 --  -- QuerySCLRO --
2019-02-20T02:09:33.982Z info vpxd[07142] [Originator@6876 sub=vpxLro opID=lro-5-5364f5ed] [VpxLRO] -- FINISH lro-5
2019-02-20T02:09:33.982Z info vpxd[07142] [Originator@6876 sub=ThreadPool] Spawning additional worker - allocated: 150, idle: 20
2019-02-20T02:09:33.982Z error vpxd[07142] [Originator@6876 sub=alarmMo opID=CreatePredefinedAlarms-94dc561] Upgrade from pre VC5.0 version not supported in VC 6X
2019-02-20T02:09:33.982Z info vpxd[07447] [Originator@6876 sub=ThreadPool] Thread enlisted
2019-02-20T02:09:33.982Z info vpxd[07447] [Originator@6876 sub=ThreadPool] Entering worker thread loop
2019-02-20T02:09:33.982Z info vpxd[07447] [Originator@6876 sub=ThreadPool] Spawning additional worker - allocated: 151, idle: 20
2019-02-20T02:09:33.985Z info vpxd[07135] [Originator@6876 sub=ThreadPool] Spawning additional worker - allocated: 152, idle: 20
2019-02-20T02:09:33.986Z panic vpxd[07142] [Originator@6876 sub=Default opID=CreatePredefinedAlarms-94dc561]
-->
--> Panic: NOT_REACHED bora/vpx/vpxd/vpxservices/alarm/PredefinedAlarmsManager.cpp:203
-->
--> Backtrace:
--> [backtrace begin] product: VMware VirtualCenter, version: 6.7.0, build: build-9433931, tag: vpxd, cpu: x86_64, os: linux, buildType: release
--> backtrace[00] libvmacore.so[0x002A9C48]: Vmacore::System::Stacktrace::CaptureFullWork(unsigned int)
--> backtrace[01] libvmacore.so[0x001B2F1C]: Vmacore::System::SystemFactory::CreateBacktrace(Vmacore::Ref<Vmacore::System::Backtrace>&)
--> backtrace[02] libvmacore.so[0x002A7E1E]
--> backtrace[03] libvmacore.so[0x002A7EFE]: Vmacore::PanicExit(char const*)
--> backtrace[04] libvmacore.so[0x001935E5]
--> backtrace[05] libvmacore.so[0x00193683]
--> backtrace[06] vpxd[0x0090C95B]
--> backtrace[07] vpxd[0x008B3A5A]
--> backtrace[08] vpxd[0x00517C29]
--> backtrace[09] vpxd[0x00517CCC]
--> backtrace[10] libvmacore.so[0x00230A3D]
--> backtrace[11] libvmacore.so[0x00230D06]
--> backtrace[12] libvmacore.so[0x002AF3E1]
--> backtrace[13] libpthread.so.0[0x000073D4]
--> backtrace[14] libc.so.6[0x000E8BBD]
--> [backtrace end]


Cause: alarms.version mismatch in the database.

Resolution:
Connect to the vCenter database:

root@vc [ / ]# psql VCDB postgres
VCDB=# select * from vpx_parameter where name = 'alarms.version';
      name      | value
----------------+-------
 alarms.version | 0
(1 row)
VCDB=# update vpx_parameter set value = '60' where name = 'alarms.version';
UPDATE 1
VCDB=# select * from vpx_parameter where name = 'alarms.version';
      name      | value
----------------+-------
 alarms.version | 60
(1 row)

Unlock/reset the vSphere Replication appliance root password

Restart the replication appliance and boot to the GRUB menu.

At the GRUB screen, select the SLES 11/12xxx entry and press 'e'.

Scroll down and look for show opts.

Append init=/bin/bash to the same line.

Press F10 to boot.

Remount the root partition as read-write:

mount -o remount,rw /

To unlock the locked account, use the below command:

/sbin/pam_tally2 -r -u root

To reset the password, use the below:

passwd root


Type exit to reboot the appliance.

Fixing a broken/corrupt locker partition on ESXi

Start by determining the device backing the locker partition.

Commands:
ls -ltrh / | grep store
vmkfstools -P /vmfs/volumes/5cdce747-375af1f6-b185-0050569674de
Output: 
[root@is-dhcp41-13:~] ls -ltrh / | grep store
lrwxrwxrwx    1 root     root           6 May 13 23:03 locker -> /store
lrwxrwxrwx    1 root     root          49 May 16 04:29 store -> /vmfs/volumes/5cdce747-375af1f6-b185-0050569674de


[root@is-dhcp41-13:~] vmkfstools -P /vmfs/volumes/5cdce747-375af1f6-b185-0050569674de
vfat-0.04 (Raw Major Version: 0) file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 299712512 (36586 file blocks * 8192), 299712512 (36586 blocks) avail, max supported file size 0
Disk Block Size: 512/0/0
UUID: 5cdce747-375af1f6-b185-0050569674de
Partitions spanned (on "disks"):
        mpx.vmhba0:C0:T0:L0:8
Is Native Snapshot Capable: NO
[root@is-dhcp41-13:~]

Make a note of the device under the line Partitions spanned (on "disks"):

Note: The :8 in the above result signifies that this is partition 8 of the disk.
Note: On a default install, the locker/tools ISO is always stored on partition 8 of the installed disk/drive.

Format the partition with the FAT filesystem using the below command. Ensure you DO NOT MISS the partition number:

vmkfstools -C vfat /dev/disks/mpx.vmhba0:C0:T0:L0:8
eg:
[root@is-dhcp41-13:~] vmkfstools -C vfat /dev/disks/mpx.vmhba0:C0:T0:L0:8
create fs deviceName:'/dev/disks/mpx.vmhba0:C0:T0:L0:8', fsShortName:'vfat', fsName:'(null)'
deviceFullPath:/dev/disks/mpx.vmhba0:C0:T0:L0:8 deviceFile:mpx.vmhba0:C0:T0:L0:8
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vfat file system on "mpx.vmhba0:C0:T0:L0:8" with blockSize 1048576 and volume label "none".
Successfully created new volume: 5cdcf45e-68f98eec-adb0-0050569674de


Note: If the format fails with "resource in use" errors, the host will need a reboot.

Re-create the symlinks for store and locker:

ln -snf /vmfs/volumes/5cdcf45e-68f98eec-adb0-0050569674de /store

ln -snf /vmfs/volumes/5cdcf45e-68f98eec-adb0-0050569674de /locker

Copy the contents of the store partition from a working host running the same ESXi build.
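The -snf flags matter here: -f replaces any existing link, and -n stops ln from dereferencing the old symlink and creating the new link inside it. A quick local illustration with scratch paths (the real targets are the /vmfs/volumes/<UUID> directories above):

```shell
# Scratch paths only, standing in for /store and the volume UUID dirs
mkdir -p /tmp/lockerdemo/volA /tmp/lockerdemo/volB
ln -snf /tmp/lockerdemo/volA /tmp/lockerdemo/store
ln -snf /tmp/lockerdemo/volB /tmp/lockerdemo/store   # repoints cleanly, no error
readlink /tmp/lockerdemo/store                       # -> /tmp/lockerdemo/volB
```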

Isolating vSphere Replication traffic

On the ESXi host:

* Create a new VMkernel port group on the source and destination ESXi host/cluster.

Sample networking on the host:

* Add a second NIC to the vR appliance and reboot the appliance.

* Log into the VAMI page of the vR appliance (default URL: https://ip:5480).

* Go to Network > Address.

* Under the eth1 info, set a static IP address.

* Now go back to VR > Configuration.

* Fill in "IP address for incoming storage traffic" with the IP address of eth1 and click "Apply network settings".

* Validate network and port connectivity (from the source ESXi host to the destination vR appliance):

* Network: vmkping -I vmkX REMOTE_vR_IP   (where vmkX is the VMkernel interface on the host used for replication)

* Port: nc -z vR_IP 31031

* Validate network and port connectivity (from the destination vR appliance to the destination ESXi host):

* curl -v telnet://Destination_ESXi_IP:902
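If nc happens to be unavailable, bash's built-in /dev/tcp redirection can stand in for the port checks. This helper is my own sketch, not a VMware tool, and the host/port arguments below are placeholders:

```shell
# Hypothetical helper: prints "open" or "closed" for HOST PORT
check_port() {
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# e.g. to check the vR replication port from a source host:
# check_port REMOTE_vR_IP 31031
check_port 127.0.0.1 4
```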

Sample Networking configuration (for replication traffic, one way replication)
