Nested ESXi vSAN sample scripts

#! /usr/bin/pwsh
$user = '[email protected]'
# Import the vCenter credential (PSCredential) from an encrypted file
$encryptedPassword = Import-Clixml -Path '/glabs/spec/vcsa_admin.xml'
$decryptedPassword = $encryptedPassword.GetNetworkCredential().Password
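# Note (assumption, not from the original post): the XML above can be created beforehand,
# as the same OS user that runs this script, with something like:
#   Get-Credential | Export-Clixml -Path '/glabs/spec/vcsa_admin.xml'
# Import-Clixml then returns a PSCredential object; on Linux/pwsh the embedded SecureString
# is serialized per-user but is not DPAPI-encrypted the way it is on Windows.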



# Function to check if vCenter services are running
function Test-VCenterServicesRunning {
    $serviceInstance = Connect-VIServer -Server vcsa01.glabs.local -User $user -Password $decryptedPassword -ErrorAction SilentlyContinue

    if ($null -eq $serviceInstance) {
        return $false
    }

    $serviceContent = Get-View -Id $serviceInstance.ExtensionData.Content.ServiceInstance

    # Use foreach rather than ForEach-Object: "return" inside ForEach-Object only exits
    # the script block, so the function would otherwise fall through and still report $true
    foreach ($service in $serviceContent.ServiceInfo.Service) {
        if ($service.Running -eq $false) {
            Disconnect-VIServer -Server $serviceInstance -Confirm:$false
            return $false
        }
    }

    Disconnect-VIServer -Server $serviceInstance -Confirm:$false
    return $true
}

# Wait for vCenter services to start
Write-Host "Waiting for vCenter services to start..."

while (-not (Test-VCenterServicesRunning)) {
    Start-Sleep -Seconds 5
}

Write-Host "vCenter services are running. Connecting to vCenter..."




#connect to vCenter and add hosts
Connect-VIServer vcsa01.glabs.local -User $user -Password $decryptedPassword

#create datacenter and cluster
New-Datacenter -Location Datacenters -Name cloud
New-Cluster -Name "management" -Location "cloud"

Add-VMHost -Name esxi01.glabs.local -Location management -User 'root' -Password 'bAdP@$$' -Force -Confirm:$false
Add-VMHost -Name esxi02.glabs.local -Location management -User 'root' -Password 'bAdP@$$' -Force -Confirm:$false
Add-VMHost -Name esxi03.glabs.local -Location management -User 'root' -Password 'bAdP@$$' -Force -Confirm:$false
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs


$cache = 'mpx.vmhba0:C0:T1:L0'
$data = 'mpx.vmhba0:C0:T2:L0'

#mask cache disk as ssd
$esx = Get-VMHost -Name esxi01.glabs.local
$storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
$uuid = $storSys.StorageDeviceInfo.ScsiLun | where {$_.CanonicalName -eq $cache} 
$storSys.MarkAsSsd($uuid.Uuid)
$esx = Get-VMHost -Name esxi02.glabs.local
$storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
$uuid = $storSys.StorageDeviceInfo.ScsiLun | where {$_.CanonicalName -eq $cache} 
$storSys.MarkAsSsd($uuid.Uuid)
$esx = Get-VMHost -Name esxi03.glabs.local
$storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
$uuid = $storSys.StorageDeviceInfo.ScsiLun | where {$_.CanonicalName -eq $cache} 
$storSys.MarkAsSsd($uuid.Uuid)
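# Alternative (a sketch, not part of the original script): the three identical per-host
# blocks above could be collapsed into a single loop, e.g.:
#   foreach ($esxName in 'esxi01.glabs.local','esxi02.glabs.local','esxi03.glabs.local') {
#       $esx     = Get-VMHost -Name $esxName
#       $storSys = Get-View -Id $esx.ExtensionData.ConfigManager.StorageSystem
#       $uuid    = $storSys.StorageDeviceInfo.ScsiLun | Where-Object {$_.CanonicalName -eq $cache}
#       $storSys.MarkAsSsd($uuid.Uuid)
#   }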

#enable vSAN traffic on the VMkernel adapters backed by the portgroup
$VMKNetforVSAN = "iscsi_1"
Get-VMHostNetworkAdapter -VMKernel | Where-Object {$_.PortGroupName -eq $VMKNetforVSAN} | Set-VMHostNetworkAdapter -VsanTrafficEnabled $true -Confirm:$false



#Create vSAN cluster
Get-Cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue

#wait for previous task to finish
Start-Sleep -Seconds 60

#add disk groups
New-VsanDiskGroup -VMHost esxi01.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi02.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data
New-VsanDiskGroup -VMHost esxi03.glabs.local -SSDCanonicalName $cache -DataDiskCanonicalName $data

#mount nfs
Get-VMHost | New-Datastore -Nfs -Name iso -Path /volume1/iso -NfsHost iso.glabs.local -ReadOnly

#not sure why the first Set-Cluster call does not fully enable vSAN on vSphere 7, but re-running it manually on a deployed environment preps it for vSAN, so it stays in (don't touch it if it ain't broken)
Get-Cluster management | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Manual -Confirm:$false -ErrorAction SilentlyContinue


Disconnect-VIServer -Confirm:$false

vRA8: Sample blueprint to deploy a Windows AD with cloud-init

formatVersion: 1
inputs: {}
resources:
  Cloud_NSX_Network_1:
    type: Cloud.NSX.Network
    properties:
      networkType: existing
      constraints:
        - tag: net:vlan7
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      imageRef: w22-cloudinit-instaclone/base
      cpuCount: 2
      totalMemoryMB: 3024
      networks:
        - network: ${resource.Cloud_NSX_Network_1.id}
          assignment: static
      cloudConfig: |
        #cloud-config
        users: 
          - 
            name: labadmin
            primary_group: administrators
            passwd: bAdP@$$  
            inactive: false            
          - 
            name: tseadmin
            primary_group: administrators
            passwd: bAdP@$$
            inactive: false
          -
            name: administrator
            primary_group: administrators
            passwd: bAdP@$$
            inactive: false
        set_hostname: dc01
        runcmd: 
         - powershell.exe net user Administrator /passwordreq:yes
         - powershell.exe Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
         - powershell.exe Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath "C:\Windows\NTDS" -DomainMode "WinThreshold" -DomainName "glabs.local" -DomainNetbiosName "GS" -ForestMode "WinThreshold" -InstallDns:$true -LogPath "C:\Windows\NTDS" -NoRebootOnCompletion:$false -SysvolPath "C:\Windows\SYSVOL" -Force:$true -SafeModeAdministratorPassword (ConvertTo-SecureString -AsPlainText "bAdP@$$" -Force)

IP ALLOCATE failed: Action run failed with the following error: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

Earlier this week, I was trying to integrate my test vRA deployment with Infoblox and all deployments failed with the error:

IP ALLOCATE failed: Action run failed with the following error: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

To find the failure, go to the Extensibility tab > Action runs, change the filter from user runs to all runs, and look for a failed action: Infoblox_AllocateIP. Its log shows:

[2023-05-04 15:01:07,914] [ERROR] - Error allocating in network or range: Failed to generate hostname. DNS suffix missing

[2023-05-04 15:01:07,914] [ERROR] - Failed to allocate from range network/ZG5zLm5ldHdvcmskMTAuMTA5LjI0LjAvMjEvMA:10.109.24.0/21/default: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

[2023-05-04 15:01:07,914] [ERROR] - No more ranges. Raising last error

('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

Finished running action code.

Exiting python process.

Traceback (most recent call last):

  File "/polyglot/function/source.py", line 171, in allocate_in_network_or_range

    host_record = HostRecordAllocation(range_id, resource, allocation, network_view, next_available_ip, context, endpoint)

  File "/polyglot/function/source.py", line 457, in __init__

    super().__init__(range_id, resource, allocation, network_view, next_available_ip, context, endpoint)

  File "/polyglot/function/source.py", line 392, in __init__

    self.hostname = generate_hostname(self.resource, self.range_id, self.allocation, self.context, self.endpoint["id"]) if self.dns_enabled else self.resource["name"]

  File "/polyglot/function/source.py", line 307, in generate_hostname

    raise Exception("Failed to generate hostname. DNS suffix missing")

Exception: Failed to generate hostname. DNS suffix missing



During handling of the above exception, another exception occurred:



Traceback (most recent call last):

  File "main.py", line 146, in <module>

    main()

  File "main.py", line 83, in main

    result = prepare_inputs_and_invoke(inputs)

  File "main.py", line 119, in prepare_inputs_and_invoke

    res = handler(ctx, inputs)

  File "/polyglot/function/source.py", line 29, in handler

    return ipam.allocate_ip()

  File "/polyglot/function/vra_ipam_utils/ipam.py", line 91, in allocate_ip

    result = self.do_allocate_ip(auth_credentials, cert)

  File "/polyglot/function/source.py", line 51, in do_allocate_ip

    raise e

  File "/polyglot/function/source.py", line 42, in do_allocate_ip

    allocation_result.append(allocate(resource, allocation, self.context, self.inputs["endpoint"]))

  File "/polyglot/function/source.py", line 78, in allocate

    raise last_error

  File "/polyglot/function/source.py", line 70, in allocate

    return allocate_in_network(range_id, resource, allocation, context, endpoint)

  File "/polyglot/function/source.py", line 155, in allocate_in_network

    endpoint)

  File "/polyglot/function/source.py", line 210, in allocate_in_network_or_range

    raise Exception(f"Error allocating in network or range: {str(e)}", result)

Exception: ('Error allocating in network or range: Failed to generate hostname. DNS suffix missing', {})

Python process exited.

There are two ways to remediate this.

Workaround 1 (if you do not care about adding the domain suffix to the records created on Infoblox):
Update your blueprint and add "Infoblox.IPAM.Network.enableDns: false" under properties for every resource of type Cloud.vSphere.Machine.

resources:
  vCenterServer:
    type: Cloud.vSphere.Machine
    properties:
      Infoblox.IPAM.Network.enableDns: false
      name: Test
      imageRef: ${input.img_image_url}
      flavor: ${input.flavor}

With the above, the deployment ignores the DNS suffix and creates a DNS record using only the custom naming template defined in the project (hostname alone).

Workaround 2: If you do want the DNS records to be created with hostname + domain, then add the below to the blueprint:

resources:
  vCenterServer:
    type: Cloud.vSphere.Machine
    properties:
      Infoblox.IPAM.Network.dnsSuffix: lab.local
      name: Test
      imageRef: ${input.img_image_url}
      flavor: ${input.flavor}

With the above, the deployment appends the domain "lab.local" to the hostname and creates the corresponding DNS records.

It took me a long time to figure this out. Hopefully, this saves you a lot of time!

Cheers!

Troubleshooting SaltConfig (Aria Config) Minion Deployment Failure

When troubleshooting a minion deployment failure, I recommend commenting out the salt part of the blueprint and running it as a day-2 task instead. This saves significant deployment time and lets you focus on the minion deployment issue alone.
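For reference, a minimal sketch of what "commenting out the salt part" can look like. It assumes the blueprint attaches the minion through a Cloud.SaltStack resource; the resource and property names below are illustrative placeholders, so adjust them to match your own blueprint:

resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      imageRef: ubuntu-cloudinit/base        # hypothetical image reference
      cpuCount: 2
      totalMemoryMB: 2048
  # commented out while troubleshooting; deploy the minion later as a day-2 task
  # Cloud_SaltStack_1:
  #   type: Cloud.SaltStack
  #   properties:
  #     hosts:
  #       - ${resource.Cloud_vSphere_Machine_1.id}
  #     masterId: <your_salt_master_id>       # placeholder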

So in my scenario, I finished the deployment and ran the salt part as a day-2 task, which failed:

Navigate to the Aria Config (Salt Config) web UI > Activity > Jobs > Completed, look for a deploy.minion task, click on the JID (the long number in the right-hand column of the job table), and then click on Raw:

This tells us that the script being executed failed, hence "Exit code: 1".

SSH to the salt master and navigate to /etc/salt/cloud.profiles.d; you should see a .conf file with the same name as the vRA deployment. In my case it was the second one in the screenshot below.
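For example (assuming root SSH access to the master; the file name below is just a placeholder):

ls -l /etc/salt/cloud.profiles.d/                 # one .conf per vRA deployment
cat /etc/salt/cloud.profiles.d/<deployment>.conf  # inspect the generated profile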

At this stage, you can manually call salt-cloud with the debug flag to get real-time logging as the script attempts to connect to the remote host and bootstrap the minion.

The basic syntax is:

salt-cloud -p profile_name VM_name -l debug

In my case:

salt-cloud -p ssc_Router-mcm770988a1-d535-4b24-b78b-2318f14911cd_profile test -l debug

Note: do not include the .conf extension in the profile name; the VM_name can be anything, as it really does not matter in this scenario.

Typically, you want to look at the very end of the output for the errors. In my case it was bad DNS.

[email protected]'s password: [DEBUG   ] [email protected]'s password:

[sudo] password for labadmin: [DEBUG   ] [sudo] password for labadmin:

 *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res[DEBUG   ]  *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res
tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...[DEBUG   ] Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...
Connection to 10.109.30.5 closed.
[DEBUG   ] Connection to 10.109.30.5 closed.

 *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
[DEBUG   ]  *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
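The "Temporary failure resolving ..." lines above are the giveaway: the freshly deployed VM cannot resolve the Ubuntu, Microsoft, or Salt repositories, so bootstrap-salt fails before it can install anything. A quick way to confirm this from the deployed VM (a sketch assuming an Ubuntu 22.04 guest using systemd-resolved):

resolvectl status                      # check which DNS servers the VM actually received
resolvectl query repo.saltproject.io   # verify the repos used by bootstrap-salt resolve
resolvectl query in.archive.ubuntu.com

Once DNS on the guest network is fixed, re-run the same salt-cloud command to retry the bootstrap.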

The same approach can be used for Windows minion deployment troubleshooting too!

VMware PowerCLI installation fails

VMware PowerCLI installation fails with the below error:

PS C:\Users\Administrator> Install-Module -Name VMware.PowerCLI

NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet
provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\Administrator\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by
running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install
and import the NuGet provider now?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y

Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y
PackageManagement\Install-Package : The module 'VMware.VimAutomation.Sdk' cannot be installed or updated because the
authenticode signature of the file 'VMware.VimAutomation.Sdk.cat' is not valid.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.0.0.1\PSModule.psm1:1809 char:21
+ ...          $null = PackageManagement\Install-Package @PSBoundParameters
+                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Microsoft.Power....InstallPackage:InstallPackage) [Install-Package],
   Exception
    + FullyQualifiedErrorId : InvalidAuthenticodeSignature,ValidateAndGet-AuthenticodeSignature,Microsoft.PowerShell.P
   ackageManagement.Cmdlets.InstallPackage

Workaround: install PowerCLI, skipping the publisher check:

Install-Module VMware.PowerCLI -Scope AllUsers -Force -SkipPublisherCheck -AllowClobber
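Once that completes, a quick sanity check (not part of the original post) to confirm the module is available:

Get-Module -Name VMware.PowerCLI -ListAvailable | Select-Object Name, Version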

Cause: the certificate VMware used to sign the modules was replaced with a new one from a new publisher.

SaltConfig multi-node scripted/automated Deployment Part-2

Topology

Prerequisites:

You must have a working salt-master, with minions installed on the Redis, Postgres, and RAAS instances. Refer to SaltConfig Multi-Node Scripted Deployment Part-1.
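A quick sanity check (not in the original walkthrough) before continuing is to confirm the master sees all of the minions:

salt-key -L          # all four minions should be listed under "Accepted Keys"
salt '*' test.ping   # every minion should return True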

Download the SaltConfig automated installer .gz from https://customerconnect.vmware.com/downloads/details?downloadGroup=VRA-SSC-862&productId=1206&rPId=80829

Extract and copy the files to the salt-master. In my case, I placed them in the /root directory.

The automated/scripted installer needs additional packages. You will need to install the below components on all the machines:

  • openssl (typically installed at this point)
  • epel-release
  • python36-cryptography
  • python36-pyOpenSSL

Install epel-release

Note: you can install most of the above using yum install <packagename> on CentOS; however, on Red Hat you will need to install the epel-release RPM manually:

sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y

Since the package needs to be installed on all nodes, I will leverage salt to run the commands on all nodes.

salt '*' cmd.run "sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y"

sample output:

[root@labmaster ~]# salt '*' cmd.run "sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y"
labpostgres:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-VBGG1c/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-VBGG1c/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-VBGG1c/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-VBGG1c/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labmaster:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-ALBF1m/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-ALBF1m/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-ALBF1m/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-ALBF1m/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labredis:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-QKzOF1/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-QKzOF1/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-QKzOF1/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-QKzOF1/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labraas:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-F4FNTG/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-F4FNTG/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-F4FNTG/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-F4FNTG/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
[root@labmaster ~]#

Note: in the above, I am targeting '*', which means all accepted minions are targeted when executing the job. In my case, I have just the 4 minions. You can replace the '*' with minion names if you have other minions that are not going to be part of the installation, e.g.:

salt 'labmaster' cmd.run "rpm -qa | grep epel-release"
salt 'labredis' cmd.run "rpm -qa | grep epel-release"
salt 'labpostgres' cmd.run "rpm -qa | grep epel-release"
salt 'labraas' cmd.run "rpm -qa | grep epel-release"

Installing the other packages:

Install python36-cryptography
salt '*' pkg.install  python36-cryptography

Output:

[root@labmaster ~]# salt '*' pkg.install  python36-cryptography
labpostgres:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labredis:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labmaster:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labraas:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:

Install python36-pyOpenSSL
salt '*' pkg.install  python36-pyOpenSSL

sample output:

[root@labmaster ~]# salt '*' pkg.install  python36-pyOpenSSL
labmaster:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labpostgres:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labraas:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labredis:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:

Install rsync

This is not a mandatory package; we will use it to copy files between the nodes, specifically the keys.

salt '*' pkg.install rsync

sample output

[root@labmaster ~]# salt '*' pkg.install rsync
labmaster:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labpostgres:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labraas:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labredis:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:

Place the installer files in the correct directories.

The automated/scripted installer was previously copied (scp) to the /root directory.
Navigate to the extracted tar and cd to the sse-install directory; it should look like the below:

Copy the pillar and state files from the SSE installer directory into the default pillar_roots directory and the default file_roots directory (these folders do not exist by default, so we create them):

sudo mkdir /srv/salt
sudo cp -r salt/sse /srv/salt/
sudo mkdir /srv/pillar
sudo cp -r pillar/sse /srv/pillar/
sudo cp -r pillar/top.sls /srv/pillar/
sudo cp -r salt/top.sls /srv/salt/

Add SSE keys to all VMs:

We will use rsync to copy the keys from the SSE installer directory to all the machines:

rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys

Install the keys:

 salt '*' cmd.run "sudo rpmkeys --import ~/keys/*.asc"

output:

Edit the pillar top.sls:

vi /srv/pillar/top.sls

Replace the list highlighted below with the minion names of all the instances that will be used for the SSE deployment.

Edited:

Note: you can get the minion names using:

salt-key -L

Now, my updated top file looks like the below:

{# Pillar Top File #}

{# Define SSE Servers #}
{% load_yaml as sse_servers %}
  - labmaster
  - labpostgres
  - labraas
  - labredis
{% endload %}

base:

  {# Assign Pillar Data to SSE Servers #}
  {% for server in sse_servers %}
  '{{ server }}':
    - sse
  {% endfor %}

Now, edit the sse_settings.yaml:

vi /srv/pillar/sse/sse_settings.yaml

I have highlighted the important fields that must be updated in the config. The other fields are optional and can be changed as per your preference.

This is what my updated sample config looks like:

# Section 1: Define servers in the SSE deployment by minion id
servers:

  # PostgreSQL Server (Single value)
  pg_server: labpostgres

  # Redis Server (Single value)
  redis_server: labredis

  # SaltStack Enterprise Servers (List one or more)
  eapi_servers:
    - labraas

  # Salt Masters (List one or more)
  salt_masters:
    - labmaster


# Section 2: Define PostgreSQL settings
pg:

  # Set the PostgreSQL endpoint and port
  # (defines how SaltStack Enterprise services will connect to PostgreSQL)
  pg_endpoint: 172.16.120.111
  pg_port: 5432

  # Set the PostgreSQL Username and Password for SSE
  pg_username: sseuser
  pg_password: secure123

  # Specify if PostgreSQL Host Based Authentication by IP and/or FQDN
  # (allows SaltStack Enterprise services to connect to PostgreSQL)
  pg_hba_by_ip: True
  pg_hba_by_fqdn: False

  pg_cert_cn: pgsql.lab.ntitta.in
  pg_cert_name: pgsql.lab.ntitta.in


# Section 3: Define Redis settings
redis:

  # Set the Redis endpoint and port
  # (defines how SaltStack Enterprise services will connect to Redis)
  redis_endpoint: 172.16.120.105
  redis_port: 6379

  # Set the Redis Username and Password for SSE
  redis_username: sseredis
  redis_password: secure1234


# Section 4: eAPI Server settings
eapi:

  # Set the credentials for the SaltStack Enterprise service
  # - The default for the username is "root"
  #   and the default for the password is "salt"
  # - You will want to change this after a successful deployment
  eapi_username: root
  eapi_password: salt

  # Set the endpoint for the SaltStack Enterprise service
  eapi_endpoint: 172.16.120.115

  # Set if SaltStack Enterprise will use SSL encrypted communicaiton (HTTPS)
  eapi_ssl_enabled: True

  # Set if SaltStack Enterprise will use SSL validation (verified certificate)
  eapi_ssl_validation: False

  # Set if SaltStack Enterprise (PostgreSQL, eAPI Servers, and Salt Masters)
  # will all be deployed on a single "standalone" host
  eapi_standalone: False

  # Set if SaltStack Enterprise will regard multiple masters as "active" or "failover"
  # - No impact to a single master configuration
  # - "active" (set below as False) means that all minions connect to each master (recommended)
  # - "failover" (set below as True) means that each minion connects to one master at a time
  eapi_failover_master: False

  # Set the encryption key for SaltStack Enterprise
  # (this should be a unique value for each installation)
  # To generate one, run: "openssl rand -hex 32"
  #
  # Note: Specify "auto" to have the installer generate a random key at installation time
  # ("auto" is only suitable for installations with a single SaltStack Enterprise server)
  eapi_key: auto

  eapi_server_cert_cn: raas.lab.ntitta.in
  eapi_server_cert_name: raas.lab.ntitta.in

# Section 5: Identifiers
ids:

  # Appends a customer-specific UUID to the namespace of the raas database
  # (this should be a unique value for each installation)
  # To generate one, run: "cat /proc/sys/kernel/random/uuid"
  customer_id: 43cab1f4-de60-4ab1-85b5-1d883c5c5d09

  # Set the Cluster ID for the master (or set of masters) that will managed
  # the SaltStack Enterprise infrastructure
  # (additional sets of masters may be easily managed with a separate installer)
  cluster_id: distributed_sandbox_env

Refresh grains and pillar data:

salt '*' saltutil.refresh_grains
salt '*' saltutil.refresh_pillar

Confirm that pillar returns the items:

salt '*' pillar.items

sample output:

labraas:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labmaster:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labredis:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labpostgres:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster

Install Postgres:

salt labpostgres state.highstate

output:

[root@labmaster sse]# sudo salt labpostgres state.highstate
labpostgres:
----------
          ID: install_postgresql-server
    Function: pkg.installed
      Result: True
     Comment: 4 targeted packages were installed/updated.
     Started: 19:57:29.956557
    Duration: 27769.35 ms
     Changes:
              ----------
              postgresql12:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-contrib:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-libs:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-server:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
----------
          ID: initialize_postgres-database
    Function: cmd.run
        Name: /usr/pgsql-12/bin/postgresql-12-setup initdb
      Result: True
     Comment: Command "/usr/pgsql-12/bin/postgresql-12-setup initdb" run
     Started: 19:57:57.729506
    Duration: 2057.166 ms
     Changes:
              ----------
              pid:
                  33869
              retcode:
                  0
              stderr:
              stdout:
                  Initializing database ... OK
----------
          ID: create_pki_postgres_path
    Function: file.directory
        Name: /etc/pki/postgres/certs
      Result: True
     Comment:
     Started: 19:57:59.792636
    Duration: 7.834 ms
     Changes:
              ----------
              /etc/pki/postgres/certs:
                  ----------
                  directory:
                      new
----------
          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: True
     Comment: Module function tls.create_self_signed_cert executed
     Started: 19:57:59.802082
    Duration: 163.484 ms
     Changes:
              ----------
              ret:
                  Created Private Key: "/etc/pki/postgres/certs/pgsq.key." Created Certificate: "/etc/pki/postgres/certs/pgsq.crt."
----------
          ID: set_certificate_permissions
    Function: file.managed
        Name: /etc/pki/postgres/certs/pgsq.crt
      Result: True
     Comment:
     Started: 19:57:59.965923
    Duration: 4.142 ms
     Changes:
              ----------
              group:
                  postgres
              mode:
                  0400
              user:
                  postgres
----------
          ID: set_key_permissions
    Function: file.managed
        Name: /etc/pki/postgres/certs/pgsq.key
      Result: True
     Comment:
     Started: 19:57:59.970470
    Duration: 3.563 ms
     Changes:
              ----------
              group:
                  postgres
              mode:
                  0400
              user:
                  postgres
----------
          ID: configure_postgres
    Function: file.managed
        Name: /var/lib/pgsql/12/data/postgresql.conf
      Result: True
     Comment: File /var/lib/pgsql/12/data/postgresql.conf updated
     Started: 19:57:59.974388
    Duration: 142.264 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -16,9 +16,9 @@
                   #
....
....

...
                   #------------------------------------------------------------------------------
----------
          ID: configure_pg_hba
    Function: file.managed
        Name: /var/lib/pgsql/12/data/pg_hba.conf
      Result: True
     Comment: File /var/lib/pgsql/12/data/pg_hba.conf updated
...
...
...
                  +
----------
          ID: start_postgres
    Function: service.running
        Name: postgresql-12
      Result: True
     Comment: Service postgresql-12 has been enabled, and is running
     Started: 19:58:00.225639
    Duration: 380.763 ms
     Changes:
              ----------
              postgresql-12:
                  True
----------
          ID: create_db_user
    Function: postgres_user.present
        Name: sseuser
      Result: True
     Comment: The user sseuser has been created
     Started: 19:58:00.620381
    Duration: 746.545 ms
     Changes:
              ----------
              sseuser:
                  Present

Summary for labpostgres
-------------
Succeeded: 10 (changed=10)
Failed:     0
-------------
Total states run:     10
Total run time:   31.360 s

If this fails for some reason, you can revert/remove Postgres using the below, then fix the underlying errors before retrying:

salt labpostgres state.apply sse.eapi_database.revert

example:

[root@labmaster sse]# salt labpostgres state.apply sse.eapi_database.revert
labpostgres:
----------
          ID: revert_all
    Function: pkg.removed
      Result: True
     Comment: All targeted packages were removed.
     Started: 16:30:26.736578
    Duration: 10127.277 ms
     Changes:
              ----------
              postgresql12:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-contrib:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-libs:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-server:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
----------
          ID: revert_all
    Function: file.absent
        Name: /var/lib/pgsql/
      Result: True
     Comment: Removed directory /var/lib/pgsql/
     Started: 16:30:36.870967
    Duration: 79.941 ms
     Changes:
              ----------
              removed:
                  /var/lib/pgsql/
----------
          ID: revert_all
    Function: file.absent
        Name: /etc/pki/postgres/
      Result: True
     Comment: Removed directory /etc/pki/postgres/
     Started: 16:30:36.951337
    Duration: 3.34 ms
     Changes:
              ----------
              removed:
                  /etc/pki/postgres/
----------
          ID: revert_all
    Function: user.absent
        Name: postgres
      Result: True
     Comment: Removed user postgres
     Started: 16:30:36.956696
    Duration: 172.372 ms
     Changes:
              ----------
              postgres:
                  removed
              postgres group:
                  removed

Summary for labpostgres
------------
Succeeded: 4 (changed=4)
Failed:    0
------------
Total states run:     4
Total run time:  10.383 s

Install Redis

salt labredis state.highstate

sample output:

[root@labmaster sse]# salt labredis state.highstate
labredis:
----------
          ID: install_redis
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: jemalloc, redis5
     Started: 20:07:12.059084
    Duration: 25450.196 ms
     Changes:
              ----------
              jemalloc:
                  ----------
                  new:
                      3.6.0-1.el7
                  old:
              redis5:
                  ----------
                  new:
                      5.0.9-1.el7.ius
                  old:
----------
          ID: configure_redis
    Function: file.managed
        Name: /etc/redis.conf
      Result: True
     Comment: File /etc/redis.conf updated
     Started: 20:07:37.516851
    Duration: 164.011 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -1,5 +1,5 @@
...

...
                  -bind 127.0.0.1
                  +bind 0.0.0.0

.....
.....
                  @@ -1361,12 +1311,8 @@
                   # active-defrag-threshold-upper 100

                   # Minimal effort for defrag in CPU percentage
                  -# active-defrag-cycle-min 5
                  +# active-defrag-cycle-min 25

                   # Maximal effort for defrag in CPU percentage
                   # active-defrag-cycle-max 75

                  -# Maximum number of set/hash/zset/list fields that will be processed from
                  -# the main dictionary scan
                  -# active-defrag-max-scan-fields 1000
                  -
              mode:
                  0664
              user:
                  root
----------
          ID: start_redis
    Function: service.running
        Name: redis
      Result: True
     Comment: Service redis has been enabled, and is running
     Started: 20:07:37.703605
    Duration: 251.205 ms
     Changes:
              ----------
              redis:
                  True

Summary for labredis
------------
Succeeded: 3 (changed=3)
Failed:    0
------------
Total states run:     3
Total run time:  25.865 s

Install RAAS

Before proceeding with the RAAS setup, ensure Postgres and Redis are accessible. In my case, I still have the Linux firewall enabled on those machines, so use the below commands to add firewall rule exceptions on the respective nodes. Again, I am leveraging salt to run the commands on the remote nodes.

salt labpostgres cmd.run "firewall-cmd --zone=public --add-port=5432/tcp --permanent && firewall-cmd --reload"
salt labredis cmd.run "firewall-cmd --zone=public --add-port=6379/tcp --permanent && firewall-cmd --reload"
salt labraas cmd.run "firewall-cmd --zone=public --add-port=443/tcp --permanent && firewall-cmd --reload"
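To confirm the ports are actually reachable before kicking off the RAAS highstate, you can run a quick check with Salt's network.connect module. This is a minimal sketch; it assumes the minion IDs above also resolve as hostnames, so use IPs if they do not:

# run on the master: ask labraas to open a TCP connection to each backend
salt labraas network.connect labpostgres 5432
salt labraas network.connect labredis 6379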

Now, proceed with the RAAS install:

salt labraas state.highstate

sample output:

[root@labmaster sse]# salt labraas state.highstate
labraas:
----------
          ID: install_xmlsec
    Function: pkg.installed
      Result: True
     Comment: 2 targeted packages were installed/updated.
              The following packages were already installed: openssl, openssl-libs, xmlsec1, xmlsec1-openssl, libxslt, libtool-ltdl
     Started: 20:36:16.715011
    Duration: 39176.806 ms
     Changes:
              ----------
              singleton-manager-i18n:
                  ----------
                  new:
                      0.6.0-5.el7.x86_64_1
                  old:
              ssc-translation-bundle:
                  ----------
                  new:
                      8.6.2-2.ph3.noarch_1
                  old:
----------
          ID: install_raas
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: raas
     Started: 20:36:55.942737
    Duration: 35689.868 ms
     Changes:
              ----------
              raas:
                  ----------
                  new:
                      8.6.2.11-1.el7
                  old:
----------
          ID: install_raas
    Function: cmd.run
        Name: systemctl daemon-reload
      Result: True
     Comment: Command "systemctl daemon-reload" run
     Started: 20:37:31.638377
    Duration: 138.354 ms
     Changes:
              ----------
              pid:
                  31230
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: create_pki_raas_path_eapi
    Function: file.directory
        Name: /etc/pki/raas/certs
      Result: True
     Comment: The directory /etc/pki/raas/certs is in the correct state
     Started: 20:37:31.785757
    Duration: 11.788 ms
     Changes:
----------
          ID: create_ssl_certificate_eapi
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: True
     Comment: Module function tls.create_self_signed_cert executed
     Started: 20:37:31.800719
    Duration: 208.431 ms
     Changes:
              ----------
              ret:
                  Created Private Key: "/etc/pki/raas/certs/raas.lab.ntitta.in.key." Created Certificate: "/etc/pki/raas/certs/raas.lab.ntitta.in.crt."
----------
          ID: set_certificate_permissions_eapi
    Function: file.managed
        Name: /etc/pki/raas/certs/raas.lab.ntitta.in.crt
      Result: True
     Comment:
     Started: 20:37:32.009536
    Duration: 5.967 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0400
              user:
                  raas
----------
          ID: set_key_permissions_eapi
    Function: file.managed
        Name: /etc/pki/raas/certs/raas.lab.ntitta.in.key
      Result: True
     Comment:
     Started: 20:37:32.015921
    Duration: 6.888 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0400
              user:
                  raas
----------
          ID: raas_owns_raas
    Function: file.directory
        Name: /etc/raas/
      Result: True
     Comment: The directory /etc/raas is in the correct state
     Started: 20:37:32.023200
    Duration: 4.485 ms
     Changes:
----------
          ID: configure_raas
    Function: file.managed
        Name: /etc/raas/raas
      Result: True
     Comment: File /etc/raas/raas updated
     Started: 20:37:32.028374
    Duration: 132.226 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -1,49 +1,47 @@
...
...
                  +
----------
          ID: save_credentials
    Function: cmd.run
        Name: /usr/bin/raas save_creds 'postgres={"username":"sseuser","password":"secure123"}' 'redis={"password":"secure1234"}'
      Result: True
     Comment: All files in creates exist
     Started: 20:37:32.163432
    Duration: 2737.346 ms
     Changes:
----------
          ID: set_secconf_permissions
    Function: file.managed
        Name: /etc/raas/raas.secconf
      Result: True
     Comment: File /etc/raas/raas.secconf exists with proper permissions. No changes made.
     Started: 20:37:34.902143
    Duration: 5.949 ms
     Changes:
----------
          ID: ensure_raas_pki_directory
    Function: file.directory
        Name: /etc/raas/pki
      Result: True
     Comment: The directory /etc/raas/pki is in the correct state
     Started: 20:37:34.908558
    Duration: 4.571 ms
     Changes:
----------
          ID: change_owner_to_raas
    Function: file.directory
        Name: /etc/raas/pki
      Result: True
     Comment: The directory /etc/raas/pki is in the correct state
     Started: 20:37:34.913566
    Duration: 5.179 ms
     Changes:
----------
          ID: /usr/sbin/ldconfig
    Function: cmd.run
      Result: True
     Comment: Command "/usr/sbin/ldconfig" run
     Started: 20:37:34.919069
    Duration: 32.018 ms
     Changes:
              ----------
              pid:
                  31331
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: start_raas
    Function: service.running
        Name: raas
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 20:37:34.952926
    Duration: 16712.726 ms
     Changes:
              ----------
              raas:
                  True
----------
          ID: restart_raas_and_confirm_connectivity
    Function: cmd.run
        Name: salt-call service.restart raas
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 20:37:51.666446
    Duration: 472.205 ms
     Changes:
----------
          ID: get_initial_objects_file
    Function: file.managed
        Name: /tmp/sample-resource-types.raas
      Result: True
     Comment: File /tmp/sample-resource-types.raas updated
     Started: 20:37:52.139370
    Duration: 180.432 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0640
              user:
                  raas
----------
          ID: import_initial_objects
    Function: cmd.run
        Name: /usr/bin/raas dump --insecure --server https://localhost --auth root:salt --mode import < /tmp/sample-resource-types.raas
      Result: True
     Comment: Command "/usr/bin/raas dump --insecure --server https://localhost --auth root:salt --mode import < /tmp/sample-resource-types.raas" run
     Started: 20:37:52.320146
    Duration: 24566.332 ms
     Changes:
              ----------
              pid:
                  31465
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: raas_service_restart
    Function: cmd.run
        Name: systemctl restart raas
      Result: True
     Comment: Command "systemctl restart raas" run
     Started: 20:38:16.887666
    Duration: 2257.183 ms
     Changes:
              ----------
              pid:
                  31514
              retcode:
                  0
              stderr:
              stdout:

Summary for labraas
-------------
Succeeded: 19 (changed=12)
Failed:     0
-------------
Total states run:     19
Total run time:  122.349 s

Install Eapi Agent:

salt labmaster state.highstate

output:

[root@labmaster sse]# salt labmaster state.highstate
Authentication error occurred.

The authentication error above is expected. Now, we log in to RAAS via a web browser:

Accept the minion master keys, and now we can see all the minions:

You now have SaltConfig / Salt Enterprise installed successfully.
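As an optional sanity check from the master, you can confirm the backend services are up; a quick sketch using the minion IDs from this lab:

salt labpostgres cmd.run 'ss -tlnp | grep 5432'   # Postgres listening on its port
salt labredis service.status redis
salt labraas service.status raas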

Troubleshooting:

If the Postgres or RAAS highstate fails with the below, then download a newer version of the SaltConfig tar files from VMware (there are issues with the init.sls state files in 8.5 and older versions).

----------
          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert threw an exception. Exception: [Errno 2] No such file or directory: '/etc/pki/postgres/certs/sdb://osenv/PG_CERT_CN.key'
     Started: 17:11:56.347565
    Duration: 297.925 ms
     Changes:



----------
          ID: create_ssl_certificate_eapi
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert threw an exception. Exception: [Errno 2] No such file or directory: '/etc/pki/raas/certs/sdb://osenv/SSE_CERT_CN.key'
     Started: 20:26:32.061862
    Duration: 42.028 ms
     Changes:
----------

You can work around the issue by hardcoding the full paths for the Postgres and RAAS certificates in the init.sls files.
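A rough sketch of that workaround is below. The state file paths are placeholders and the CN values are examples from my lab, so adjust both to your environment; the sdb:// strings are the ones shown in the errors above.

# replace the sdb lookups with literal CNs in the relevant init.sls files (paths/CNs are examples)
sed -i 's|sdb://osenv/PG_CERT_CN|labpostgres.lab.ntitta.in|g'  /path/to/sse/eapi_database/init.sls
sed -i 's|sdb://osenv/SSE_CERT_CN|raas.lab.ntitta.in|g'        /path/to/sse/eapi_service/init.sls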

          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert is not available
     Started: 16:11:55.436579
    Duration: 932.506 ms
     Changes:

Cause: prerequisites are not installed. python36-pyOpenSSL and python36-cryptography must be installed on all nodes that tls.create_self_signed_cert is targeted at.
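For example, the packages can be pushed from the master with Salt's pkg module (a sketch; target whichever minions run the tls states in your setup):

salt -L 'labpostgres,labraas' pkg.install pkgs='["python36-pyOpenSSL","python36-cryptography"]'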

SaltConfig multi-node scripted/automated Deployment Part-1

Topology: (master-minion communication)

OS: RHEL 7.9 / CentOS 7

For the above topology, you will need four machines. We will be using the scripted installer to install RAAS, Redis, and Postgres for us.

The VMs I am using:

Note: my RHEL machines are already registered with the RHEL subscription manager.

We start by updating the OS on all machines.

Update OS to latest

yum update -y

Install Salt master and Salt minion

Add the SaltStack repository:

URL: https://repo.saltproject.io/

Navigate to the above URL and select the correct repository for your OS:

Install the repository on all four machines.

sudo rpm --import https://repo.saltproject.io/py3/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub
curl -fsSL https://repo.saltproject.io/py3/redhat/7/x86_64/latest.repo | sudo tee /etc/yum.repos.d/salt.repo

Example output:

Clear expired cache (run on all 4 machines)

sudo yum clean expire-cache

Install master

We now install salt-master on the master VM:

sudo yum install salt-master

Press y to continue.

The Salt master uses ports 4505-4506; we add a firewall rule to allow this traffic (run the below only on the master):

firewall-cmd --add-port=4505-4506/tcp --permanent
firewall-cmd --reload

Enable and start services:

sudo systemctl enable salt-master && sudo systemctl start salt-master
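Optionally, confirm the master service is up and listening on its publish/request ports before moving on:

systemctl status salt-master --no-pager
ss -tlnp | grep -E '450[56]'   # expect listeners on 4505 and 4506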

Install minion

On all four machines, we install salt-minion.

yum install salt-minion -y

We will now need to edit the minion configuration file and point it to the salt master IP (this needs to be done on all nodes).

I use the below command to add the master IP to the config file:

 echo "master: 172.16.120.113" >> /etc/salt/minion

Example output:
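Alternatively (equivalent, and a little cleaner to manage), you can drop the setting into a file under /etc/salt/minion.d/, which the minion includes by default, and then verify it:

echo "master: 172.16.120.113" > /etc/salt/minion.d/master.conf
grep -R "^master:" /etc/salt/minion /etc/salt/minion.d/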

Enable and start the minion (run on all nodes):

sudo systemctl enable salt-minion && sudo systemctl start salt-minion

On a successful connection, when you run salt-key -L on the master, you should see all the minions listed:

salt-key -L


Accept minion keys:

salt-key -A

Test minions:

salt '*' test.ping

Troubleshooting minion/master:

Config files:

Master: /etc/salt/master
/etc/salt/master.d/*
Minion: /etc/salt/minion
/etc/salt/minion.d/*

Log files:

Master: /var/log/salt/master
Minion: /var/log/salt/minion
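If the log files don't show enough detail, stop the service and run the daemon in the foreground with debug logging (the same approach works for the master with salt-master -l debug):

systemctl stop salt-minion
salt-minion -l debug    # Ctrl+C to exit, then: systemctl start salt-minion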


Minion logs:

Mar 04 12:05:26 xyzzzzy salt-minion[16137]: [ERROR   ] Error while bringing up minion for multi-master. Is master at 172.16.120.113 responding?

Cause: the minion is not able to communicate with the master. Either the master ports are not open, there is no master service running on that IP, or the network is unreachable.

The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate

Cause: the minion's key has not been accepted by the master.
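A common fix for stale or rejected keys (assuming the minion ID has not changed intentionally) is to delete the old key on both sides and re-register:

# on the master: remove the stale key (minion ID below is an example)
salt-key -L
salt-key -d labpostgres
# on the affected minion: drop the cached master public key and restart
rm -f /etc/salt/pki/minion/minion_master.pub
systemctl restart salt-minion
# back on the master: accept the new key
salt-key -A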

Usage Meter 4.5 API walkthrough

Greetings!!
I am writing this based on a popular request from partners.

API specification/template

Usage Meter API guide: https://developer.vmware.com/apis/1206/vcloud-usage-meter
Usage Meter OpenAPI specification: download the JSON from the URL below:

https://vdc-download.vmware.com/vmwb-repository/dcr-public/d61291b6-4397-44be-9b34-f046ada52f46/55e49229-0015-43d7-8655-b164861f7e0d/api_spec_4.5.json

You can find the OpenAPI specification on the API guide page under Documentation:

We will be using this JSON with Postman.

Import the template into Postman

In Postman, navigate to File > Import, then either import the JSON file downloaded in the previous steps or use the URL. In my case, I have used the URL.

Then click on Import.

You should now see the API collections listed like so:

Set up variables via Environment

We will be using two variables:

  • baseUrl
  • SessionID

Navigate to Environments and create a new environment.

Enter a name for the environment, then under the variables add baseUrl with the value https://um_ip/api/v1 and leave the sessionid blank for now. Ensure that you hit the Save button here!!!

Generating a SessionID

Navigate back to Collections > usage meter collection > login > create a session.
Click on Body and enter the credentials.

Switch the environment to the usage meter 4.5 environment that we created in the previous step and hit Send. We now have a sessionid.

Go back into Environments > usage meter env, edit the sessionid field there, and paste the sessionid you generated in the previous step. Ensure that you click on Save!!

Add a vCenter endpoint

Navigate back to Collections > usage meter collection > product > Adds a new product > Headers, replace the value of sessionid with {{sessionid}}, and hit Save (you will need to add the sessionid variable for all APIs going forward with similar steps).

Switch to Body and replace the raw data with the vCenter details:

Hit Send, and the product should be added to Usage Meter:

Usage Meter: accept the certificate.
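If you prefer the command line to Postman, the same flow looks roughly like this with curl. Only the baseUrl and the sessionid header come from the steps above; the endpoint paths and body field names are assumptions based on the collection layout, so verify them against the imported OpenAPI spec before use.

# 1) create a session (endpoint and body fields are assumptions; check the spec)
curl -k -X POST "https://um_ip/api/v1/sessions" \
     -H "Content-Type: application/json" \
     -d '{"userName": "usagemeter", "password": "<password>"}'

# 2) pass the returned sessionid as a header to add a vCenter product
curl -k -X POST "https://um_ip/api/v1/products" \
     -H "Content-Type: application/json" \
     -H "sessionid: <sessionid-from-step-1>" \
     -d @vcenter.json        # JSON body with the vCenter details, per the spec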