Troubleshooting SaltConfig (Aria Config) Minion Deployment Failure

When troubleshooting a minion deployment failure, I would recommend hashing out (commenting out) the Salt part of the blueprint and running it as a Day 2 task instead. This saves significant deployment time and lets you focus on the minion deployment issue alone.
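
For illustration, this is roughly what hashing out the Salt piece of a Cloud Assembly blueprint looks like. The resource names and properties below are illustrative, not taken from my actual blueprint:

resources:
  Router:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu-2204
      flavor: small
# Commented out while troubleshooting; re-enable (or run as a Day 2 task) once the minion deploys cleanly:
#  SaltConfiguration:
#    type: Cloud.SaltStack
#    properties:
#      hosts:
#        - ${resource.Router.id}
#      masterId: saltstack_enterprise_installer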

In my scenario, I finished the deployment and ran the Salt configuration as a Day 2 task, which failed:

Navigate to the Aria Config (SaltConfig) web UI > Activity > Jobs > Completed, look for a deploy.minion task, click the JID (the long number in the column to the right of the job), and then click Raw:

So, this tells us that the script being executed failed, hence “Exit code: 1”.

SSH to the Salt master and navigate to /etc/salt/cloud.profiles.d; you should see a .conf file with the same name as the vRA deployment. In my case it was the second one in the screenshot below.
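
For context, the vRA-generated profile uses the saltify driver and looks something like the below (field values here are placeholders rather than my actual file):

ssc_<deployment-name>_profile:
  provider: <ssc-provider-name>
  ssh_host: <IP of the deployed VM>
  ssh_username: labadmin
  password: '<password>'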

At this stage, you can manually call salt-cloud with the debug flag so that you get real-time logging as the script attempts to connect to the remote host and bootstrap the minion.

The basic syntax is

salt-cloud -p profile_name VM_name -l debug

In my case:

salt-cloud -p ssc_Router-mcm770988a1-d535-4b24-b78b-2318f14911cd_profile test -l debug

Note: do not include the .conf extension in the profile name, and the VM_name can be anything; it really does not matter in this scenario.

Typically, you want to look at the very end of the output for the errors. In my case it was bad DNS:

[email protected]'s password: [DEBUG   ] [email protected]'s password:

[sudo] password for labadmin: [DEBUG   ] [sudo] password for labadmin:

 *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res[DEBUG   ]  *  INFO: Running version: 2022.08.12
 *  INFO: Executed by: /bin/sh
 *  INFO: Command line: '/tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec/deploy.sh -c /tmp/.saltcloud-3e1d4338-c7d1-4dbb-8596-de0d6bf587ec -x python3 stable 3005.1'
 *  WARN: Running the unstable version of bootstrap-salt.sh

 *  INFO: System Information:
 *  INFO:   CPU:          AuthenticAMD
 *  INFO:   CPU Arch:     x86_64
 *  INFO:   OS Name:      Linux
 *  INFO:   OS Version:   5.15.0-69-generic
 *  INFO:   Distribution: Ubuntu 22.04

 *  INFO: Installing minion
 *  INFO: Found function install_ubuntu_stable_deps
 *  INFO: Found function config_salt
 *  INFO: Found function preseed_master
 *  INFO: Found function install_ubuntu_stable
 *  INFO: Found function install_ubuntu_stable_post
 *  INFO: Found function install_ubuntu_res
tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] tart_daemons
 *  INFO: Found function daemons_running
 *  INFO: Found function install_ubuntu_check_services
 *  INFO: Running install_ubuntu_stable_deps()
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
[DEBUG   ] Ign:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
Ign:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
Ign:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...[DEBUG   ] Err:1 http://in.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:3 https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1 focal InRelease
  Temporary failure resolving 'repo.saltproject.io'
Err:2 https://packages.microsoft.com/ubuntu/22.04/prod jammy InRelease
  Temporary failure resolving 'packages.microsoft.com'
Err:4 http://in.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:5 http://in.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Err:6 http://in.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'in.archive.ubuntu.com'
Reading package lists...
Connection to 10.109.30.5 closed.
[DEBUG   ] Connection to 10.109.30.5 closed.

 *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
[DEBUG   ]  *  WARN: Non-LTS Ubuntu detected, but stable packages requested. Trying packages for previous LTS release. You may experience problems.
Reading package lists...
Building dependency tree...
Reading state information...
wget is already the newest version (1.21.2-2ubuntu1).
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
gnupg is already the newest version (2.2.27-3ubuntu2.1).
apt-transport-https is already the newest version (2.4.8).
The following packages were automatically installed and are no longer required:
  eatmydata libeatmydata1 python3-json-pointer python3-jsonpatch
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 62 not upgraded.
 * ERROR: https://repo.saltproject.io/py3/ubuntu/20.04/amd64/archive/3005.1/salt-archive-keyring.gpg failed to download to /tmp/salt-gpg-UclYVAky.pub
 * ERROR: Failed to run install_ubuntu_stable_deps()!!!
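
Here the guest could not resolve any of the repository hostnames, so the bootstrap script could never download the Salt packages. Before re-running salt-cloud, it is worth confirming DNS from inside the deployed VM; for an Ubuntu 22.04 guest with systemd-resolved (as in this log), something like:

getent hosts repo.saltproject.io || echo "DNS resolution failed"
resolvectl status

Once the DNS issue (or whatever the log points at) is fixed, re-run the salt-cloud command above or retry the Day 2 task.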

The same can be done for Windows minion deployment troubleshooting too.

SaltConfig multi-node scripted/automated Deployment Part-2

Topology

Prerequisites:

You must have a working salt-master, with minions installed on the Redis, Postgres, and RAAS instances. Refer to SaltConfig Multi-Node Scripted Deployment Part-1.

Download the SaltConfig automated installer .gz from https://customerconnect.vmware.com/downloads/details?downloadGroup=VRA-SSC-862&productId=1206&rPId=80829

Extract and copy the files to the salt-master. In my case, I have placed them in the /root directory.
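
For example (the archive name below is illustrative; use the exact file you downloaded):

scp vRA_SaltStack_Config-8.x.x_Installer.tar.gz root@labmaster.ntitta.lab:/root/
ssh root@labmaster.ntitta.lab "tar -xzvf /root/vRA_SaltStack_Config-8.x.x_Installer.tar.gz -C /root/"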

The automated/scripted installer needs additional packages. You will need to install the below components on all the machines:

  • openssl (typically installed at this point)
  • epel-release
  • python36-cryptography
  • python36-pyOpenSSL
Install epel-release

Note: you can install most of the above using yum install <package-name> on CentOS; however, on RedHat you will need to install the epel-release RPM manually:

sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y

Since the package needs to be installed on all nodes, I will leverage Salt to run the commands on all of them:

salt '*' cmd.run "sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y"

sample output:

[root@labmaster ~]# salt '*' cmd.run "sudo yum install https://repo.ius.io/ius-release-el7.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y"
labpostgres:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-VBGG1c/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-VBGG1c/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-VBGG1c/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-VBGG1c/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labmaster:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-ALBF1m/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-ALBF1m/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-ALBF1m/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-ALBF1m/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labredis:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-QKzOF1/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-QKzOF1/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-QKzOF1/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-QKzOF1/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
labraas:
    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    Examining /var/tmp/yum-root-F4FNTG/ius-release-el7.rpm: ius-release-2-1.el7.ius.noarch
    Marking /var/tmp/yum-root-F4FNTG/ius-release-el7.rpm to be installed
    Examining /var/tmp/yum-root-F4FNTG/epel-release-latest-7.noarch.rpm: epel-release-7-14.noarch
    Marking /var/tmp/yum-root-F4FNTG/epel-release-latest-7.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package epel-release.noarch 0:7-14 will be installed
    ---> Package ius-release.noarch 0:2-1.el7.ius will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package        Arch     Version          Repository                       Size
    ================================================================================
    Installing:
     epel-release   noarch   7-14             /epel-release-latest-7.noarch    25 k
     ius-release    noarch   2-1.el7.ius      /ius-release-el7                4.5 k

    Transaction Summary
    ================================================================================
    Install  2 Packages

    Total size: 30 k
    Installed size: 30 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : epel-release-7-14.noarch                                     1/2
      Installing : ius-release-2-1.el7.ius.noarch                               2/2
      Verifying  : epel-release-7-14.noarch                                     1/2
      Verifying  : ius-release-2-1.el7.ius.noarch                               2/2

    Installed:
      epel-release.noarch 0:7-14          ius-release.noarch 0:2-1.el7.ius

    Complete!
[root@labmaster ~]#

Note: in the above, I am targeting ‘*’, which means all accepted minions will be targeted when executing the job. In my case, I just have the 4 minions. You can replace the ‘*’ with minion names should you have other minions that are not going to be part of the installation, e.g.:

salt 'labmaster' cmd.run "rpm -qa | grep epel-release"
salt 'labredis' cmd.run "rpm -qa | grep epel-release"
salt 'labpostgres' cmd.run "rpm -qa | grep epel-release"
salt 'labraas' cmd.run "rpm -qa | grep epel-release"
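
Alternatively, Salt's list matcher lets you hit several specific minions with a single command:

salt -L 'labmaster,labredis,labpostgres,labraas' cmd.run "rpm -qa | grep epel-release"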

Installing the other packages:

Install python36-cryptography
salt '*' pkg.install  python36-cryptography

Output:

[root@labmaster ~]# salt '*' pkg.install  python36-cryptography
labpostgres:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labredis:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labmaster:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
labraas:
    ----------
    gpg-pubkey.(none):
        ----------
        new:
            2fa658e0-45700c69,352c64e5-52ae6884,de57bfbe-53a9be98,fd431d51-4ae0493b
        old:
            2fa658e0-45700c69,de57bfbe-53a9be98,fd431d51-4ae0493b
    python36-asn1crypto:
        ----------
        new:
            0.24.0-7.el7
        old:
    python36-cffi:
        ----------
        new:
            1.9.1-3.el7
        old:
    python36-cryptography:
        ----------
        new:
            2.3-2.el7
        old:
    python36-ply:
        ----------
        new:
            3.9-2.el7
        old:
    python36-pycparser:
        ----------
        new:
            2.14-2.el7
        old:
Install python36-pyOpenSSL
salt '*' pkg.install  python36-pyOpenSSL

sample output:

[root@labmaster ~]# salt '*' pkg.install  python36-pyOpenSSL
labmaster:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labpostgres:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labraas:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
labredis:
    ----------
    python36-pyOpenSSL:
        ----------
        new:
            17.3.0-2.el7
        old:
Install rsync

This is not a mandatory package; we will use it to copy files between the nodes, specifically the keys.

salt '*' pkg.install rsync

sample output

[root@labmaster ~]# salt '*' pkg.install rsync
labmaster:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labpostgres:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labraas:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:
labredis:
    ----------
    rsync:
        ----------
        new:
            3.1.2-10.el7
        old:

Place the installer files in the correct directories.

The automated/scripted installer was previously copied (via scp) into the /root directory.
Navigate to the extracted tar, cd into the sse-install directory, and it should look like the below:

Copy the pillar and state files from the SSE installer directory into the default pillar_roots directory and the default file_roots directory (these folders do not exist by default, so we create them):

sudo mkdir /srv/salt
sudo cp -r salt/sse /srv/salt/
sudo mkdir /srv/pillar
sudo cp -r pillar/sse /srv/pillar/
sudo cp -r pillar/top.sls /srv/pillar/
sudo cp -r salt/top.sls /srv/salt/
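
These locations correspond to the Salt master's default file_roots and pillar_roots. If your master config overrides them, copy the files to wherever those settings point instead; the defaults are equivalent to the following in /etc/salt/master:

file_roots:
  base:
    - /srv/salt

pillar_roots:
  base:
    - /srv/pillar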

Add SSE keys to all VMs:

We will use rsync to copy the keys from the SSE installer directory to all the machines:

rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys
rsync -avzh keys/ [email protected]:~/keys

Install the keys:

 salt '*' cmd.run "sudo rpmkeys --import ~/keys/*.asc"

output:

Edit the pillar top.sls:

vi /srv/pillar/top.sls

Replace the list highlighted below with the minion names of all the instances that will be used for the SSE deployment.

Edited:

Note: you can get the minion names using:

salt-key -L

Now, my updated top file looks like the below:

{# Pillar Top File #}

{# Define SSE Servers #}
{% load_yaml as sse_servers %}
  - labmaster
  - labpostgres
  - labraas
  - labredis
{% endload %}

base:

  {# Assign Pillar Data to SSE Servers #}
  {% for server in sse_servers %}
  '{{ server }}':
    - sse
  {% endfor %}
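
When the Jinja is rendered, the loop above expands to a plain top file equivalent to this (shown only to illustrate what the template produces):

base:
  'labmaster':
    - sse
  'labpostgres':
    - sse
  'labraas':
    - sse
  'labredis':
    - sse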

Now, edit the sse_settings.yaml:

vi /srv/pillar/sse/sse_settings.yaml

I have highlighted the important fields that must be updated in the config. The other fields are optional and can be changed as you see fit.

This is what my updated sample config looks like:

# Section 1: Define servers in the SSE deployment by minion id
servers:

  # PostgreSQL Server (Single value)
  pg_server: labpostgres

  # Redis Server (Single value)
  redis_server: labredis

  # SaltStack Enterprise Servers (List one or more)
  eapi_servers:
    - labraas

  # Salt Masters (List one or more)
  salt_masters:
    - labmaster


# Section 2: Define PostgreSQL settings
pg:

  # Set the PostgreSQL endpoint and port
  # (defines how SaltStack Enterprise services will connect to PostgreSQL)
  pg_endpoint: 172.16.120.111
  pg_port: 5432

  # Set the PostgreSQL Username and Password for SSE
  pg_username: sseuser
  pg_password: secure123

  # Specify if PostgreSQL Host Based Authentication by IP and/or FQDN
  # (allows SaltStack Enterprise services to connect to PostgreSQL)
  pg_hba_by_ip: True
  pg_hba_by_fqdn: False

  pg_cert_cn: pgsql.lab.ntitta.in
  pg_cert_name: pgsql.lab.ntitta.in


# Section 3: Define Redis settings
redis:

  # Set the Redis endpoint and port
  # (defines how SaltStack Enterprise services will connect to Redis)
  redis_endpoint: 172.16.120.105
  redis_port: 6379

  # Set the Redis Username and Password for SSE
  redis_username: sseredis
  redis_password: secure1234


# Section 4: eAPI Server settings
eapi:

  # Set the credentials for the SaltStack Enterprise service
  # - The default for the username is "root"
  #   and the default for the password is "salt"
  # - You will want to change this after a successful deployment
  eapi_username: root
  eapi_password: salt

  # Set the endpoint for the SaltStack Enterprise service
  eapi_endpoint: 172.16.120.115

  # Set if SaltStack Enterprise will use SSL encrypted communicaiton (HTTPS)
  eapi_ssl_enabled: True

  # Set if SaltStack Enterprise will use SSL validation (verified certificate)
  eapi_ssl_validation: False

  # Set if SaltStack Enterprise (PostgreSQL, eAPI Servers, and Salt Masters)
  # will all be deployed on a single "standalone" host
  eapi_standalone: False

  # Set if SaltStack Enterprise will regard multiple masters as "active" or "failover"
  # - No impact to a single master configuration
  # - "active" (set below as False) means that all minions connect to each master (recommended)
  # - "failover" (set below as True) means that each minion connects to one master at a time
  eapi_failover_master: False

  # Set the encryption key for SaltStack Enterprise
  # (this should be a unique value for each installation)
  # To generate one, run: "openssl rand -hex 32"
  #
  # Note: Specify "auto" to have the installer generate a random key at installation time
  # ("auto" is only suitable for installations with a single SaltStack Enterprise server)
  eapi_key: auto

  eapi_server_cert_cn: raas.lab.ntitta.in
  eapi_server_cert_name: raas.lab.ntitta.in

# Section 5: Identifiers
ids:

  # Appends a customer-specific UUID to the namespace of the raas database
  # (this should be a unique value for each installation)
  # To generate one, run: "cat /proc/sys/kernel/random/uuid"
  customer_id: 43cab1f4-de60-4ab1-85b5-1d883c5c5d09

  # Set the Cluster ID for the master (or set of masters) that will managed
  # the SaltStack Enterprise infrastructure
  # (additional sets of masters may be easily managed with a separate installer)
  cluster_id: distributed_sandbox_env

Refresh grains and pillar data:

salt '*' saltutil.refresh_grains
salt '*' saltutil.refresh_pillar

Confirm that pillar returns the items:

salt '*' pillar.items

sample output:

labraas:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labmaster:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labredis:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster
labpostgres:
    ----------
    sse_cluster_id:
        distributed_sandbox_env
    sse_customer_id:
        43cab1f4-de60-4ab1-85b5-1d883c5c5d09
    sse_eapi_endpoint:
        172.16.120.115
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        auto
    sse_eapi_num_processes:
        12
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        raas.lab.ntitta.in
    sse_eapi_server_cert_name:
        raas.lab.ntitta.in
    sse_eapi_server_fqdn_list:
        - labraas.ntitta.lab
    sse_eapi_server_ipv4_list:
        - 172.16.120.115
    sse_eapi_servers:
        - labraas
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        pgsql.lab.ntitta.in
    sse_pg_cert_name:
        pgsql.lab.ntitta.in
    sse_pg_endpoint:
        172.16.120.111
    sse_pg_fqdn:
        labpostgres.ntitta.lab
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        172.16.120.111
    sse_pg_password:
        secure123
    sse_pg_port:
        5432
    sse_pg_server:
        labpostgres
    sse_pg_username:
        sseuser
    sse_redis_endpoint:
        172.16.120.105
    sse_redis_password:
        secure1234
    sse_redis_port:
        6379
    sse_redis_server:
        labredis
    sse_redis_username:
        sseredis
    sse_salt_master_fqdn_list:
        - labmaster.ntitta.lab
    sse_salt_master_ipv4_list:
        - 172.16.120.113
    sse_salt_masters:
        - labmaster

Install Postgres:

salt labpostgres state.highstate

output:

[root@labmaster sse]# sudo salt labpostgres state.highstate
labpostgres:
----------
          ID: install_postgresql-server
    Function: pkg.installed
      Result: True
     Comment: 4 targeted packages were installed/updated.
     Started: 19:57:29.956557
    Duration: 27769.35 ms
     Changes:
              ----------
              postgresql12:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-contrib:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-libs:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
              postgresql12-server:
                  ----------
                  new:
                      12.7-1PGDG.rhel7
                  old:
----------
          ID: initialize_postgres-database
    Function: cmd.run
        Name: /usr/pgsql-12/bin/postgresql-12-setup initdb
      Result: True
     Comment: Command "/usr/pgsql-12/bin/postgresql-12-setup initdb" run
     Started: 19:57:57.729506
    Duration: 2057.166 ms
     Changes:
              ----------
              pid:
                  33869
              retcode:
                  0
              stderr:
              stdout:
                  Initializing database ... OK
----------
          ID: create_pki_postgres_path
    Function: file.directory
        Name: /etc/pki/postgres/certs
      Result: True
     Comment:
     Started: 19:57:59.792636
    Duration: 7.834 ms
     Changes:
              ----------
              /etc/pki/postgres/certs:
                  ----------
                  directory:
                      new
----------
          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: True
     Comment: Module function tls.create_self_signed_cert executed
     Started: 19:57:59.802082
    Duration: 163.484 ms
     Changes:
              ----------
              ret:
                  Created Private Key: "/etc/pki/postgres/certs/pgsq.key." Created Certificate: "/etc/pki/postgres/certs/pgsq.crt."
----------
          ID: set_certificate_permissions
    Function: file.managed
        Name: /etc/pki/postgres/certs/pgsq.crt
      Result: True
     Comment:
     Started: 19:57:59.965923
    Duration: 4.142 ms
     Changes:
              ----------
              group:
                  postgres
              mode:
                  0400
              user:
                  postgres
----------
          ID: set_key_permissions
    Function: file.managed
        Name: /etc/pki/postgres/certs/pgsq.key
      Result: True
     Comment:
     Started: 19:57:59.970470
    Duration: 3.563 ms
     Changes:
              ----------
              group:
                  postgres
              mode:
                  0400
              user:
                  postgres
----------
          ID: configure_postgres
    Function: file.managed
        Name: /var/lib/pgsql/12/data/postgresql.conf
      Result: True
     Comment: File /var/lib/pgsql/12/data/postgresql.conf updated
     Started: 19:57:59.974388
    Duration: 142.264 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -16,9 +16,9 @@
                   #
....
....

...
                   #------------------------------------------------------------------------------
----------
          ID: configure_pg_hba
    Function: file.managed
        Name: /var/lib/pgsql/12/data/pg_hba.conf
      Result: True
     Comment: File /var/lib/pgsql/12/data/pg_hba.conf updated
...
...
...
                  +
----------
          ID: start_postgres
    Function: service.running
        Name: postgresql-12
      Result: True
     Comment: Service postgresql-12 has been enabled, and is running
     Started: 19:58:00.225639
    Duration: 380.763 ms
     Changes:
              ----------
              postgresql-12:
                  True
----------
          ID: create_db_user
    Function: postgres_user.present
        Name: sseuser
      Result: True
     Comment: The user sseuser has been created
     Started: 19:58:00.620381
    Duration: 746.545 ms
     Changes:
              ----------
              sseuser:
                  Present

Summary for labpostgres
-------------
Succeeded: 10 (changed=10)
Failed:     0
-------------
Total states run:     10
Total run time:   31.360 s

If this fails for some reason, you can revert/remove Postgres using the state below, then fix the underlying errors before retrying:

salt labpostgres state.apply sse.eapi_database.revert

example:

[root@labmaster sse]# salt labpostgres state.apply sse.eapi_database.revert
labpostgres:
----------
          ID: revert_all
    Function: pkg.removed
      Result: True
     Comment: All targeted packages were removed.
     Started: 16:30:26.736578
    Duration: 10127.277 ms
     Changes:
              ----------
              postgresql12:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-contrib:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-libs:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
              postgresql12-server:
                  ----------
                  new:
                  old:
                      12.7-1PGDG.rhel7
----------
          ID: revert_all
    Function: file.absent
        Name: /var/lib/pgsql/
      Result: True
     Comment: Removed directory /var/lib/pgsql/
     Started: 16:30:36.870967
    Duration: 79.941 ms
     Changes:
              ----------
              removed:
                  /var/lib/pgsql/
----------
          ID: revert_all
    Function: file.absent
        Name: /etc/pki/postgres/
      Result: True
     Comment: Removed directory /etc/pki/postgres/
     Started: 16:30:36.951337
    Duration: 3.34 ms
     Changes:
              ----------
              removed:
                  /etc/pki/postgres/
----------
          ID: revert_all
    Function: user.absent
        Name: postgres
      Result: True
     Comment: Removed user postgres
     Started: 16:30:36.956696
    Duration: 172.372 ms
     Changes:
              ----------
              postgres:
                  removed
              postgres group:
                  removed

Summary for labpostgres
------------
Succeeded: 4 (changed=4)
Failed:    0
------------
Total states run:     4
Total run time:  10.383 s

Install Redis

salt labredis state.highstate

sample output:

[root@labmaster sse]# salt labredis state.highstate
labredis:
----------
          ID: install_redis
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: jemalloc, redis5
     Started: 20:07:12.059084
    Duration: 25450.196 ms
     Changes:
              ----------
              jemalloc:
                  ----------
                  new:
                      3.6.0-1.el7
                  old:
              redis5:
                  ----------
                  new:
                      5.0.9-1.el7.ius
                  old:
----------
          ID: configure_redis
    Function: file.managed
        Name: /etc/redis.conf
      Result: True
     Comment: File /etc/redis.conf updated
     Started: 20:07:37.516851
    Duration: 164.011 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -1,5 +1,5 @@
...

...
                  -bind 127.0.0.1
                  +bind 0.0.0.0

.....
.....
                  @@ -1361,12 +1311,8 @@
                   # active-defrag-threshold-upper 100

                   # Minimal effort for defrag in CPU percentage
                  -# active-defrag-cycle-min 5
                  +# active-defrag-cycle-min 25

                   # Maximal effort for defrag in CPU percentage
                   # active-defrag-cycle-max 75

                  -# Maximum number of set/hash/zset/list fields that will be processed from
                  -# the main dictionary scan
                  -# active-defrag-max-scan-fields 1000
                  -
              mode:
                  0664
              user:
                  root
----------
          ID: start_redis
    Function: service.running
        Name: redis
      Result: True
     Comment: Service redis has been enabled, and is running
     Started: 20:07:37.703605
    Duration: 251.205 ms
     Changes:
              ----------
              redis:
                  True

Summary for labredis
------------
Succeeded: 3 (changed=3)
Failed:    0
------------
Total states run:     3
Total run time:  25.865 s

Install RAAS

Before proceeding with the RAAS setup, ensure Postgres and Redis are accessible. In my case, I still have the Linux firewall enabled on those machines, so I use the commands below to add firewall rule exceptions on the respective nodes. Again, I am leveraging Salt to run the commands on the remote nodes:

salt labpostgres cmd.run "firewall-cmd --zone=public --add-port=5432/tcp --permanent && firewall-cmd --reload"
salt labredis cmd.run "firewall-cmd --zone=public --add-port=6379/tcp --permanent && firewall-cmd --reload"
salt labraas cmd.run "firewall-cmd --zone=public --add-port=443/tcp --permanent && firewall-cmd --reload"
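
Optionally, you can confirm that the RAAS node can actually reach Postgres and Redis on those ports before running the highstate. A quick check using bash's /dev/tcp feature (IPs taken from my sse_settings.yaml; adjust to yours):

salt labraas cmd.run "timeout 3 bash -c '</dev/tcp/172.16.120.111/5432' && echo postgres reachable || echo postgres unreachable"
salt labraas cmd.run "timeout 3 bash -c '</dev/tcp/172.16.120.105/6379' && echo redis reachable || echo redis unreachable"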

Now, proceed with the RAAS install:

salt labraas state.highstate

sample output:

[root@labmaster sse]# salt labraas state.highstate
labraas:
----------
          ID: install_xmlsec
    Function: pkg.installed
      Result: True
     Comment: 2 targeted packages were installed/updated.
              The following packages were already installed: openssl, openssl-libs, xmlsec1, xmlsec1-openssl, libxslt, libtool-ltdl
     Started: 20:36:16.715011
    Duration: 39176.806 ms
     Changes:
              ----------
              singleton-manager-i18n:
                  ----------
                  new:
                      0.6.0-5.el7.x86_64_1
                  old:
              ssc-translation-bundle:
                  ----------
                  new:
                      8.6.2-2.ph3.noarch_1
                  old:
----------
          ID: install_raas
    Function: pkg.installed
      Result: True
     Comment: The following packages were installed/updated: raas
     Started: 20:36:55.942737
    Duration: 35689.868 ms
     Changes:
              ----------
              raas:
                  ----------
                  new:
                      8.6.2.11-1.el7
                  old:
----------
          ID: install_raas
    Function: cmd.run
        Name: systemctl daemon-reload
      Result: True
     Comment: Command "systemctl daemon-reload" run
     Started: 20:37:31.638377
    Duration: 138.354 ms
     Changes:
              ----------
              pid:
                  31230
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: create_pki_raas_path_eapi
    Function: file.directory
        Name: /etc/pki/raas/certs
      Result: True
     Comment: The directory /etc/pki/raas/certs is in the correct state
     Started: 20:37:31.785757
    Duration: 11.788 ms
     Changes:
----------
          ID: create_ssl_certificate_eapi
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: True
     Comment: Module function tls.create_self_signed_cert executed
     Started: 20:37:31.800719
    Duration: 208.431 ms
     Changes:
              ----------
              ret:
                  Created Private Key: "/etc/pki/raas/certs/raas.lab.ntitta.in.key." Created Certificate: "/etc/pki/raas/certs/raas.lab.ntitta.in.crt."
----------
          ID: set_certificate_permissions_eapi
    Function: file.managed
        Name: /etc/pki/raas/certs/raas.lab.ntitta.in.crt
      Result: True
     Comment:
     Started: 20:37:32.009536
    Duration: 5.967 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0400
              user:
                  raas
----------
          ID: set_key_permissions_eapi
    Function: file.managed
        Name: /etc/pki/raas/certs/raas.lab.ntitta.in.key
      Result: True
     Comment:
     Started: 20:37:32.015921
    Duration: 6.888 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0400
              user:
                  raas
----------
          ID: raas_owns_raas
    Function: file.directory
        Name: /etc/raas/
      Result: True
     Comment: The directory /etc/raas is in the correct state
     Started: 20:37:32.023200
    Duration: 4.485 ms
     Changes:
----------
          ID: configure_raas
    Function: file.managed
        Name: /etc/raas/raas
      Result: True
     Comment: File /etc/raas/raas updated
     Started: 20:37:32.028374
    Duration: 132.226 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -1,49 +1,47 @@
...
...
                  +
----------
          ID: save_credentials
    Function: cmd.run
        Name: /usr/bin/raas save_creds 'postgres={"username":"sseuser","password":"secure123"}' 'redis={"password":"secure1234"}'
      Result: True
     Comment: All files in creates exist
     Started: 20:37:32.163432
    Duration: 2737.346 ms
     Changes:
----------
          ID: set_secconf_permissions
    Function: file.managed
        Name: /etc/raas/raas.secconf
      Result: True
     Comment: File /etc/raas/raas.secconf exists with proper permissions. No changes made.
     Started: 20:37:34.902143
    Duration: 5.949 ms
     Changes:
----------
          ID: ensure_raas_pki_directory
    Function: file.directory
        Name: /etc/raas/pki
      Result: True
     Comment: The directory /etc/raas/pki is in the correct state
     Started: 20:37:34.908558
    Duration: 4.571 ms
     Changes:
----------
          ID: change_owner_to_raas
    Function: file.directory
        Name: /etc/raas/pki
      Result: True
     Comment: The directory /etc/raas/pki is in the correct state
     Started: 20:37:34.913566
    Duration: 5.179 ms
     Changes:
----------
          ID: /usr/sbin/ldconfig
    Function: cmd.run
      Result: True
     Comment: Command "/usr/sbin/ldconfig" run
     Started: 20:37:34.919069
    Duration: 32.018 ms
     Changes:
              ----------
              pid:
                  31331
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: start_raas
    Function: service.running
        Name: raas
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 20:37:34.952926
    Duration: 16712.726 ms
     Changes:
              ----------
              raas:
                  True
----------
          ID: restart_raas_and_confirm_connectivity
    Function: cmd.run
        Name: salt-call service.restart raas
      Result: True
     Comment: check_cmd determined the state succeeded
     Started: 20:37:51.666446
    Duration: 472.205 ms
     Changes:
----------
          ID: get_initial_objects_file
    Function: file.managed
        Name: /tmp/sample-resource-types.raas
      Result: True
     Comment: File /tmp/sample-resource-types.raas updated
     Started: 20:37:52.139370
    Duration: 180.432 ms
     Changes:
              ----------
              group:
                  raas
              mode:
                  0640
              user:
                  raas
----------
          ID: import_initial_objects
    Function: cmd.run
        Name: /usr/bin/raas dump --insecure --server https://localhost --auth root:salt --mode import < /tmp/sample-resource-types.raas
      Result: True
     Comment: Command "/usr/bin/raas dump --insecure --server https://localhost --auth root:salt --mode import < /tmp/sample-resource-types.raas" run
     Started: 20:37:52.320146
    Duration: 24566.332 ms
     Changes:
              ----------
              pid:
                  31465
              retcode:
                  0
              stderr:
              stdout:
----------
          ID: raas_service_restart
    Function: cmd.run
        Name: systemctl restart raas
      Result: True
     Comment: Command "systemctl restart raas" run
     Started: 20:38:16.887666
    Duration: 2257.183 ms
     Changes:
              ----------
              pid:
                  31514
              retcode:
                  0
              stderr:
              stdout:

Summary for labraas
-------------
Succeeded: 19 (changed=12)
Failed:     0
-------------
Total states run:     19
Total run time:  122.349 s

Install the eAPI Agent:

salt labmaster state.highstate

output:

[root@labmaster sse]# salt labmaster state.highstate
Authentication error occurred.

The authentication error above is expected. Now, we log in to RAAS via a web browser:

Accept the minion master keys, and now we see all minions:

You now have SaltConfig (SaltStack Enterprise) installed successfully.

Troubleshooting:

If the Postgres or RAAS highstate fails with the below, then download a newer version of the SaltConfig tar files from VMware (there are issues with the init.sls state files in 8.5 and older versions):

----------
          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert threw an exception. Exception: [Errno 2] No such file or directory: '/etc/pki/postgres/certs/sdb://osenv/PG_CERT_CN.key'
     Started: 17:11:56.347565
    Duration: 297.925 ms
     Changes:



----------
          ID: create_ssl_certificate_eapi
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert threw an exception. Exception: [Errno 2] No such file or directory: '/etc/pki/raas/certs/sdb://osenv/SSE_CERT_CN.key'
     Started: 20:26:32.061862
    Duration: 42.028 ms
     Changes:
----------

You can work around the issue by hardcoding the full paths for the Postgres cert and RAAS cert in the init.sls files.
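
The exact layout of the init.sls differs between installer versions, but the change amounts to replacing the unresolved sdb://osenv/... reference with the literal CN value from sse_settings.yaml. A sketch for the Postgres state (file path is indicative only):

# /srv/salt/sse/eapi_database/init.sls, in the create_ssl_certificate state
#   before:
#     - CN: sdb://osenv/PG_CERT_CN
#   after:
#     - CN: pgsql.lab.ntitta.in

The equivalent change applies to the RAAS state for SSE_CERT_CN (raas.lab.ntitta.in in my settings).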

Another possible failure:

----------
          ID: create_ssl_certificate
    Function: module.run
        Name: tls.create_self_signed_cert
      Result: False
     Comment: Module function tls.create_self_signed_cert is not available
     Started: 16:11:55.436579
    Duration: 932.506 ms
     Changes:

Cause: prerequisites are not installed. python36-pyOpenSSL and python36-cryptography must be installed on all nodes that tls.create_self_signed_cert is targeted against.
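
A quick way to confirm the prerequisites are present on every node before re-running the highstate:

salt '*' pkg.version python36-pyOpenSSL python36-cryptography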

SaltConfig multi-node scripted/automated Deployment Part-1

Topology: (master-minion communication)

OS: RHEL 7.9/CentOS 7

For the above topology, you will need 4 machines. We will be using the scripted installer to install RAAS, Redis, and Postgres for us.

The VMs I am using:

Note: my RHEL machines are already registered with the RHEL subscription manager.

We start by updating the OS on all machines.

Update OS to latest

yum update -y

Install Salt Master and Salt Minion

Add the SaltStack repository:

URL: https://repo.saltproject.io/

Navigate to the above URL and select the correct repository for your OS:

Install the repository on all four machines:

sudo rpm --import https://repo.saltproject.io/py3/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub
curl -fsSL https://repo.saltproject.io/py3/redhat/7/x86_64/latest.repo | sudo tee /etc/yum.repos.d/salt.repo

Example output:

Clear the expired cache (run on all 4 machines):

sudo yum clean expire-cache

Install master

We now install salt-master on the master VM:

sudo yum install salt-master

Press y to continue.

The Salt master uses ports 4505-4506; we add a firewall rule to allow this traffic (run the below only on the master):

firewall-cmd --add-port=4505-4506/tcp --permanent
firewall-cmd --reload

Enable and start services:

sudo systemctl enable salt-master && sudo systemctl start salt-master

Install minion

On all 4 machines, we install salt-minion:

yum install salt-minion -y

We will now need to edit the minion configuration file and point it to the salt-master IP (this needs to be done on all nodes).

I use the below command to add the master IP to the config file:

 echo "master: 172.16.120.113" >> /etc/salt/minion

Example output:

Enable and start the minion: (run on all nodes)

sudo systemctl enable salt-minion && sudo systemctl start salt-minion

On a successful connection, when you run salt-key -L on the master, you should see all the minions listed:

salt-key -L


Accept minion keys:

salt-key -A

Test minions:

salt '*' test.ping

Troubleshooting minion/master:

Config files:

Master: /etc/salt/master
/etc/salt/master.d/*
minion: /etc/salt/minion
/etc/salt/minion.d/*

Log files:

Master: /var/log/salt/master
Minion: /var/log/salt/minion
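
When a minion refuses to connect, it is often quicker to stop the service and run the minion in the foreground with debug logging than to tail the log file:

systemctl stop salt-minion
salt-minion -l debug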


Minion logs:

Mar 04 12:05:26 xyzzzzy salt-minion[16137]: [ERROR   ] Error while bringing up minion for multi-master. Is master at 172.16.120.113 responding?

Cause: the minion is not able to communicate with the master. Either the master ports are not open, there is no master service running on that IP, or the network is unreachable.
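
From the affected minion, you can quickly check whether the master's publish and return ports are reachable (master IP from my lab; substitute your own):

timeout 3 bash -c '</dev/tcp/172.16.120.113/4505' && echo "4505 open" || echo "4505 closed/filtered"
timeout 3 bash -c '</dev/tcp/172.16.120.113/4506' && echo "4506 open" || echo "4506 closed/filtered"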

The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate

Cause: the minion keys have not been accepted by the master.

SaltConfig 8.6.1 + vRA 8.6.1: Deploying VMs with the saltify driver fails and times out

Error:

Resource [/resources/compute/e730e5b7-bd12-4803-809e-2d9655c0f448]:: Salt configuration CREATE with job id [5bae15fe-6903-42cb-b915-193b6d5e9d97] failed. Error:: : Minion deployment successful. JID - 20211202152858391410

Investigation:

On salt-config > Jobs, look for the deploy.minion task. This was a success in my case.

I re-ran the deployment; once the machine was provisioned and customization was successful (it had an IP when viewing the VM in vCenter), I opened the console and took a look at /etc/salt/minion.

Here, the line "master: photon-machine" is clearly wrong.

Looking at the grains in the salt-config UI, we see the same:

Resolution:
Restart the salt-master and salt-minion services.
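A minimal example, assuming both services run on the salt master appliance:

systemctl restart salt-master salt-minion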
Then take a look at the grains on the CLI:

salt saltmaster grains.get fqdn

Now, refresh the grains:

salt saltmaster saltutil.refresh_grains

Then take a look at the UI again (give it about a minute).

With that sorted, deploying new VMs via vRA with the saltify driver now works!

SaltConfig and Identity Manager integration

SaltConfig must be running version 8.5 and must be deployed via LCM.

If vRA is running on self-signed/local-CA/LCM-CA certificates, the SaltStack UI will not load and you will see symptoms similar to the below:

Specifically, a blank page when logging in to the Salt UI, with the account/info API returning 500.

Logs:

less /var/log/raas/raas
Traceback (most recent call last):
File "requests/adapters.py", line 449, in send
File "urllib3/connectionpool.py", line 756, in urlopen
File "urllib3/util/retry.py", line 574, in increment
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='automation.ntitta.lab', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zYWx0eS5udGl0dGEubGFiL2lkZW50aXR5L2FwaS9jb3JlL2F1dGhuL2NzcA%3D%3D&redirect_uri=https%3A%2F%2Fsalty.ntitta.lab%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-HLwywt0h3Y (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "tornado/web.py", line 1680, in _execute
File "raas/utils/rest.py", line 153, in prepare
File "raas/utils/rest.py", line 481, in prepare
File "pop/contract.py", line 170, in __call__
File "/var/lib/raas/unpack/_MEIb1NPIC/raas/mods/vra/params.py", line 250, in get_login_url
verify=validate_ssl)
File "requests/api.py", line 76, in get
File "requests/api.py", line 61, in request
File "requests/sessions.py", line 542, in request
File "raven/breadcrumbs.py", line 341, in send
File "requests/sessions.py", line 655, in send
File "requests/adapters.py", line 514, in send
requests.exceptions.SSLError: HTTPSConnectionPool(host='automation.ntitta.lab', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zYWx0eS5udGl0dGEubGFiL2lkZW50aXR5L2FwaS9jb3JlL2F1dGhuL2NzcA%3D%3D&redirect_uri=https%3A%2F%2Fsalty.ntitta.lab%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-HLwywt0h3Y (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))
2021-08-23 04:29:16,906 [tornado.access                                                    ][ERROR   :2250][Webserver:59844] 500 POST /rpc (127.0.0.1) 1697.46ms

To resolve this, grab the root certificate of vRA and import it into the SaltStack appliance's root certificate store:

Grab root certificate:

CLI method:

root@salty [ ~ ]# openssl s_client -showcerts -connect automation.ntitta.lab:443
CONNECTED(00000003)
depth=1 CN = vRealize Suite Lifecycle Manager Locker CA, O = VMware, C = IN
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
 0 s:/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN
   i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
-----BEGIN CERTIFICATE-----
MIID7jCCAtagAwIBAgIGAXmkBtDxMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MjUxNDU2MjBaFw0yMzA1
MjUxNDU2MjBaMGUxHjAcBgNVBAMMFWF1dG9tYXRpb24ubnRpdHRhLmxhYjENMAsG
A1UECwwEbGFiczEMMAoGA1UECgwDR1NTMQwwCgYDVQQHDANCTFIxCzAJBgNVBAgM
AktBMQswCQYDVQQGEwJJTjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AJ+p/UsPFJp3WESJfUNlPWAUtYOUQ9cK5lZXBrEK79dtOwzJ8noUyKndO8i5wumC
tNJP8U3RjKbqu75UZH3LiwoHTOEkqhWufrn8gL7tQjtiQ0iAp2pP6ikxH2bXNAwF
Dh9/2CMjLhSN5mb7V5ehu4rP3/Niu19nT5iA1XMER3qR2tsRweV++78vrYFsKDS9
ePa+eGvMNrVaXvbYN75KnLEKbpkHGPg9P10zLbP/lPIskEGfgBMjS7JKOPxZZKX1
GczW/2sFq9OOr4bW6teWG3gt319N+ReNlUxnrxMDkKcWrml8EbeQMp4RmmtXX5Z4
JeVEATMS7O2CeoEN5E/rFFUCAwEAAaOBtTCBsjAdBgNVHQ4EFgQUz/pxN1bN/GxO
cQ/hcQCgBSdRqaUwHwYDVR0jBBgwFoAUYOI4DbX97wdcZa/pWivAMvnnDekwMAYD
VR0RBCkwJ4IXKi5hdXRvbWF0aW9uLm50aXR0YS5sYWKCDCoubnRpdHRhLmxhYjAO
BgNVHQ8BAf8EBAMCBaAwIAYDVR0lAQH/BBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB
MAwGA1UdEwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggEBAA2KntXAyrY6DHho8FQc
R2GrHVCCWG3ugyPEq7S7tAabMIeSVhbPWsDaVLro5PlldK9FAUhinbxEwShIJfVP
+X1WOBUxwTQ7anfiagonMNotGtow/7f+fnHGO4Mfyk+ICo+jOp5DTDHGRmF8aYsP
5YGkOdpAb8SuT/pNerZie5WKx/3ZuUwsEDTqF3CYdqWQZSuDIlWRetECZAaq50hJ
c6kD/D1+cq2pmN/DI/U9RAfsvexkhdZaMbHdrlGzNb4biSvJ8HjJMH4uNLUN+Nyf
2MON41QKRRuzQn+ahq7X/K2BbxJTQUZGwbC+0CA6M79dQ1eVQui4d5GXmjutqFIo
Xwo=
-----END CERTIFICATE-----
 1 s:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
   i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIGAXmEbtiqMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MTkxMTQyMDdaFw0zMTA1
MTcxMTQyMDdaMFMxMzAxBgNVBAMMKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBN
YW5hZ2VyIExvY2tlciBDQTEPMA0GA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK6S4ESddCC7BAl4MACpAeAm
1JBaw72NgeSOruS/ljpd1MyDd/AJjpIpdie2M0cweyGDaJ4+/C549lxQe0NAFsgh
62BG87klbhzvYja6aNKvE+b1EKNMPllFoWiCKJIxZOvTS2FnXjXZFZKMw5e+hf2R
JgPEww+KsHBqcWL3YODmD6NvBRCpY2rVrxUjqh00ouo7EC6EHzZoJSMoSwcEgIGz
pclYSPuEzdbNFKVtEQGrdt94xlAk04mrqP2O6E7Fd5EwrOw/+dsFt70qS0aEj9bQ
nk7GeRXhJynXxlEpgChCDEXQ3MWvLIRwOuMBxQq/W4B/ZzvQVzFwmh3S8UkPTosC
AwEAAaNjMGEwHQYDVR0OBBYEFGDiOA21/e8HXGWv6VorwDL55w3pMB8GA1UdIwQY
MBaAFGDiOA21/e8HXGWv6VorwDL55w3pMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0P
AQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4IBAQBqAjCBd+EL6koGogxd72Dickdm
ecK60ghLTNJ2wEKvDICqss/FopeuEVhc8q/vyjJuirbVwJ1iqKuyvANm1niym85i
fjyP6XaJ0brikMPyx+TSNma/WiDoMXdDviUuYZo4tBJC2DUPJ/0KDI7ysAsMTB0R
8Q7Lc3GlJS65AFRNIxkpHI7tBPp2W8tZQlVBe7PEcWMzWRjWZAvwDGfnNvUtX4iY
bHEVWSzpoVQUk1hcylecYeMSCzBGw/efuWayIFoSf7ZXFe0TAEOJySwkzGJB9n78
4Rq0ydikMT4EFHP5G/iFI2zsx2vZGNsAHCw7XSVFydqb/ekm/9T7waqt3fW4
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN
issuer=/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2528 bytes and written 393 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: B06BE4668E5CCE713F1C1547F0917CC901F143CB13D06ED7A111784AAD10B2F6
    Session-ID-ctx:
    Master-Key: 75E8109DD84E2DD064088B44779C4E7FEDA8BE91693C5FC2A51D3F90B177F5C92B7AB638148ADF612EBEFDA30930DED4
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket:
    0000 - b9 54 91 b7 60 d4 18 d2-4b 72 55 db 78 e4 91 10   .T..`...KrU.x...
    0010 - 1f 97 a0 35 31 16 21 db-8c 49 bf 4a a1 b4 59 ff   ...51.!..I.J..Y.
    0020 - 07 22 1b cc 20 d5 52 7a-52 84 17 86 b3 2a 7a ee   .".. .RzR....*z.
    0030 - 14 c3 9b 9f 8f 24 a7 a1-76 4d a2 4f bb d7 5a 21   .....$..vM.O..Z!
    0040 - c9 a6 d0 be 3b 57 4a 4e-cd cc 9f a6 12 45 09 b5   ....;WJN.....E..
    0050 - ca c4 c9 57 f5 ac 17 04-94 cb d0 0a 77 17 ac b8   ...W........w...
    0060 - 8a b2 39 f1 78 70 37 6d-d0 bf f1 73 14 63 e8 86   ..9.xp7m...s.c..
    0070 - 17 27 80 c1 3e fe 54 cf-                          .'..>.T.

    Start Time: 1629788388
    Timeout   : 300 (sec)
    Verify return code: 19 (self signed certificate in certificate chain)

From the above example:
0 s:/CN=automation.ntitta.lab/OU=labs/O=GSS/L=BLR/ST=KA/C=IN <—- this is my vRA cert
  i:/CN=vRealize Suite Lifecycle Manager Locker CA/O=VMware/C=IN <—- this is the root cert (generated via LCM)
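If you prefer to dump the chain non-interactively, something like the below works; the last BEGIN/END CERTIFICATE block in the output is the LCM root CA:

echo | openssl s_client -showcerts -connect automation.ntitta.lab:443 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/'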

Create a new cert file with the contents of the root certificate.

cat root.crt
-----BEGIN CERTIFICATE-----
MIIDiTCCAnGgAwIBAgIGAXmEbtiqMA0GCSqGSIb3DQEBCwUAMFMxMzAxBgNVBAMM
KnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBNYW5hZ2VyIExvY2tlciBDQTEPMA0G
A1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjAeFw0yMTA1MTkxMTQyMDdaFw0zMTA1
MTcxMTQyMDdaMFMxMzAxBgNVBAMMKnZSZWFsaXplIFN1aXRlIExpZmVjeWNsZSBN
YW5hZ2VyIExvY2tlciBDQTEPMA0GA1UECgwGVk13YXJlMQswCQYDVQQGEwJJTjCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK6S4ESddCC7BAl4MACpAeAm
1JBaw72NgeSOruS/ljpd1MyDd/AJjpIpdie2M0cweyGDaJ4+/C549lxQe0NAFsgh
62BG87klbhzvYja6aNKvE+b1EKNMPllFoWiCKJIxZOvTS2FnXjXZFZKMw5e+hf2R
JgPEww+KsHBqcWL3YODmD6NvBRCpY2rVrxUjqh00ouo7EC6EHzZoJSMoSwcEgIGz
pclYSPuEzdbNFKVtEQGrdt94xlAk04mrqP2O6E7Fd5EwrOw/+dsFt70qS0aEj9bQ
nk7GeRXhJynXxlEpgChCDEXQ3MWvLIRwOuMBxQq/W4B/ZzvQVzFwmh3S8UkPTosC
AwEAAaNjMGEwHQYDVR0OBBYEFGDiOA21/e8HXGWv6VorwDL55w3pMB8GA1UdIwQY
MBaAFGDiOA21/e8HXGWv6VorwDL55w3pMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0P
AQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4IBAQBqAjCBd+EL6koGogxd72Dickdm
ecK60ghLTNJ2wEKvDICqss/FopeuEVhc8q/vyjJuirbVwJ1iqKuyvANm1niym85i
fjyP6XaJ0brikMPyx+TSNma/WiDoMXdDviUuYZo4tBJC2DUPJ/0KDI7ysAsMTB0R
8Q7Lc3GlJS65AFRNIxkpHI7tBPp2W8tZQlVBe7PEcWMzWRjWZAvwDGfnNvUtX4iY
bHEVWSzpoVQUk1hcylecYeMSCzBGw/efuWayIFoSf7ZXFe0TAEOJySwkzGJB9n78
4Rq0ydikMT4EFHP5G/iFI2zsx2vZGNsAHCw7XSVFydqb/ekm/9T7waqt3fW4
-----END CERTIFICATE-----

Back up the existing certificate store:

cp  /etc/pki/tls/certs/ca-bundle.crt   ~/

Append the LCM root certificate to the certificate store:

cat root.crt >> /etc/pki/tls/certs/ca-bundle.crt

Add the below line to the [Service] section of /usr/lib/systemd/system/raas.service:

Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt

Example:

root@salty [ ~ ]# cat /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
After=network.target

[Service]
Type=simple
User=raas
Group=raas
# to be able to bind port < 1024
AmbientCapabilities=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
ExecStart=/usr/bin/raas
TimeoutStopSec=90

[Install]
WantedBy=multi-user.target

Reload systemd and restart the raas service:

systemctl daemon-reload
systemctl restart raas && tail -f /var/log/raas/raas

Upon restart, the above command will tail the raas logs; ensure that you no longer see the certificate-related messages.
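As an optional sanity check, you can confirm the appliance now trusts the vRA chain (expect "Verify return code: 0 (ok)"):

echo | openssl s_client -connect automation.ntitta.lab:443 -CAfile /etc/pki/tls/certs/ca-bundle.crt 2>/dev/null | grep 'Verify return code'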

Getting started with salt stack and VMware (vCenter)

Salt (salt-open) is an open-source utility that can be used to orchestrate vCenter tasks. In this blog, I will guide you through installing Salt and setting up a basic clone operation.

Install:

Prerequisite: a Linux VM (CentOS or Ubuntu). I've used Ubuntu 20.04 for the example below.

SSH into the Ubuntu VM and add the SaltStack repository key:

wget -O - https://repo.saltstack.com/py3/debian/10/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

Add the repository entry in /etc/apt/sources.list.d/saltstack.list:

nano /etc/apt/sources.list.d/saltstack.list

Add the below line to the file:

deb http://repo.saltstack.com/py3/debian/10/amd64/latest buster main

Update the package index:

sudo apt update

Install the Salt packages:

sudo apt-get install salt-master salt-minion salt-ssh salt-syndic salt-cloud salt-api -y

Edit /etc/salt/master and set interface to the salt master's IP address:

interface: 172.16.8.9
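After editing the file, restart the salt-master service so the change takes effect:

sudo systemctl restart salt-master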

Configuring VMware provider

Prerequisite: pyVmomi must be installed:

sudo apt install python3-pip
pip3 install pyVmomi

Confirm pyVmomi can be imported (an exit code of 0 means success):

python3 -c "import pyVmomi" ; echo $?

Set up the cloud provider:

Create /etc/salt/cloud.providers.d/vmware.conf with the below contents (replace the user, password and url to match your environment):

#/etc/salt/cloud.providers.d/vmware.conf
vcsa:
 driver: vmware
 user: '[email protected]'
 password: 'P@ssw0rd'
 url: 'vcsa.ntitta.lab'
 verify_ssl: False 

homelabs:
 driver: vmware
 user: '[email protected]'
 password: '*************'
 url: 'vcsa.ntitta.in'

Now you can test the above config by querying for images with the below command:

salt-cloud --list-images vcsa
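If the provider is wired up correctly, a quick query of the nodes it can already see is another useful sanity check (output depends on the driver):

salt-cloud -Q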

Okay, now let's set up profiles. Create /etc/salt/cloud.profiles.d/vmware.conf with the below content:

#/etc/salt/cloud.profiles.d/vmware.conf
ubuntu20:
 provider: vcsa
 clonefrom: ub20
 cluster: vSAN
 ssh_username: root
 password: 'P@ssw0rd'

Update the salt-cloud bootstrap script:

 salt-cloud -u

Test a deployment:

salt-cloud -p ubuntu20 test-stal1-vm
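Once you've confirmed the clone works, the test VM can be destroyed through salt-cloud as well:

salt-cloud -d test-stal1-vm -y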

Troubleshooting: enable debug logging with the below:

salt-cloud -p w16k salty-w16-test-1 -l debug