Looking for a specific record in a Postgres database

The script below will look for a string in every column of every table in the database. To start off, log in to psql and make sure you are connected to the right database.

Create the search function by running the script below (copy and paste it as-is).

CREATE OR REPLACE FUNCTION search_whole_db(_like_pattern text)
  RETURNS TABLE(_tbl regclass, _ctid tid) AS
$func$
BEGIN
   FOR _tbl IN
      SELECT c.oid::regclass
      FROM   pg_class c
      JOIN   pg_namespace n ON n.oid = relnamespace
      WHERE  c.relkind = 'r'                           -- only tables
      AND    n.nspname !~ '^(pg_|information_schema)'  -- exclude system schemas
      ORDER BY n.nspname, c.relname
   LOOP
      RETURN QUERY EXECUTE format(
         'SELECT $1, ctid FROM %s t WHERE t::text ~~ %L'
       , _tbl, '%' || _like_pattern || '%')
      USING _tbl;
   END LOOP;
END
$func$  LANGUAGE plpgsql;

Now run the command below, replacing ‘mypattern’ with the string you are looking for:

SELECT * FROM search_whole_db('mypattern');

Here’s an example: I am looking for a VM named hosting:57669-1:HTM:Cluster1-Web5 in a vCloud Director (vCD) database. Below is the output from psql:

Type "help" for help.

vcd=# CREATE OR REPLACE FUNCTION search_whole_db(_like_pattern text)
vcd-#   RETURNS TABLE(_tbl regclass, _ctid tid) AS
vcd-# $func$
vcd$# BEGIN
vcd$#    FOR _tbl IN
vcd$#       SELECT c.oid::regclass
vcd$#       FROM   pg_class c
vcd$#       JOIN   pg_namespace n ON n.oid = relnamespace
vcd$#       WHERE  c.relkind = 'r'                           -- only tables
vcd$#       AND    n.nspname !~ '^(pg_|information_schema)'  -- exclude system schemas
vcd$#       ORDER BY n.nspname, c.relname
vcd$#    LOOP
vcd$#       RETURN QUERY EXECUTE format(
vcd$#          'SELECT $1, ctid FROM %s t WHERE t::text ~~ %L'
vcd$#        , _tbl, '%' || _like_pattern || '%')
vcd$#       USING _tbl;
vcd$#    END LOOP;
vcd$# END
vcd$# $func$  LANGUAGE plpgsql;
CREATE FUNCTION
vcd=#
vcd=# SELECT * FROM search_whole_db('hosting:57669-1:HTM:Cluster1-Web5');
  _tbl  | _ctid
--------+-------
 vm_inv | (2,8)
(1 row)
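The function returns the table and the physical row location (ctid) of each match. As a quick follow-up, the full row can be fetched using the values returned above (a small sketch; the table name and ctid are taken from the example output):

vcd=# SELECT * FROM vm_inv WHERE ctid = '(2,8)';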

Content library hack (DB)

VMware Content Library is a handy way to make VM templates and ISOs available across multiple vCenters. However, it does not handle datacenter segregation very well when the contents are stored on NFS/VMFS.

A subscribed library would need dedicated storage space and would be pointless in my setup, as I had already presented the NFS volume to all hosts.

So what seems to be the problem?

I have an NFS volume presented to all the hosts across several datacenters. Although the NFS UUID and the datastore name (ISO) are the same everywhere, the vCenter database stores a separate datastore record (with its own ID) for each datacenter, as datastores are segregated by the datacenter object:


  id  |      name      | datacenter_id
 -----+----------------+---------------
  157 | Template       |            77
  158 | SlowBro_400    |            77
  159 | ISO            |            77
  160 | SharedLUN      |            77
  161 | 10.154         |            77
  156 | template       |            77
   13 | SlowBro_legasy |             2
   14 | 10.128         |             2
   91 | ISO            |             2
   16 | Template       |             2
   92 | is-tse-d129_1  |             2
   12 | SlowBro_400    |             2

VCDB=# select * from vpx_entity where id=77;
  id |     name     | type_id | parent_id
 ----+--------------+---------+-----------
  77 | BLR          |       8 |         1
 (1 row)


VCDB=# select * from vpx_entity where id=2;
  id |    name    | type_id | parent_id
 ----+------------+---------+-----------
   2 | HYD        |       8 |         1
 (1 row)

With this configuration, when I created a content library, the database referenced only one of the sites' datastores; i.e. we only see the ISO datastore (id 159) from the BLR (id 77) datacenter.

What this means is that the content library objects can only be deployed to one of the datacenters, rather than across all datacenters on the vCenter.

VCDB=# select * from cl_storage;
                  id                  |                          storageuri                          |   type
--------------------------------------+--------------------------------------------------------------+-----------
 a0018db8-f630-4c04-b0f1-30c900ad691c | Datastore:datastore-159:481639ff-d88d-4622-8872-ec6856e6b157  | Datastore
 bf8e8dcb-5b28-4b03-863e-89308bc8c501 | Datastore:datastore-157:481639ff-d88d-4622-8872-ec6856e6b157 | Datastore


The cl_storage table above is referenced by cl_library_storage:
VCDB=# select * from cl_library_storage;
              library_id              |              storage_id
--------------------------------------+--------------------------------------
 3336e2ad-8166-4e6a-850d-a9d81c41ba01 | a0018db8-f630-4c04-b0f1-30c900ad691c
 946cea65-5cd0-41e0-83ab-17259f690ce1 | bf8e8dcb-5b28-4b03-863e-89308bc8c501

So, logically, if I add the other datastore ID for ISO to cl_storage and create a referencing record in cl_library_storage, then we should be able to use the same content library across all datacenters.

The ID must be unique and the same value must be used in both tables. I added the records below (the new cl_storage ID was created by incrementing the last character of the existing one, and the same ID is referenced from the co-relating cl_library_storage row).
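The inserts would have looked roughly like this (a sketch only: the column lists are assumed to be exactly what the earlier selects show, and any additional NOT NULL columns in these tables would need values as well):

VCDB=# INSERT INTO cl_storage (id, storageuri, type)
VCDB-#   VALUES ('bf8e8dcb-5b28-4b03-863e-89308bc8c502', 'Datastore:datastore-11:481639ff-d88d-4622-8872-ec6856e6b157', 'Datastore');
VCDB=# INSERT INTO cl_library_storage (library_id, storage_id)
VCDB-#   VALUES ('946cea65-5cd0-41e0-83ab-17259f690ce1', 'bf8e8dcb-5b28-4b03-863e-89308bc8c502');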

After the change:
VCDB=# select * from cl_storage;
                  id                  |                          storageuri                          |   type
--------------------------------------+--------------------------------------------------------------+-----------
 a0018db8-f630-4c04-b0f1-30c900ad691c | Datastore:datastore-159:481639ff-d88d-4622-8872-ec6856e6b157  | Datastore
 bf8e8dcb-5b28-4b03-863e-89308bc8c501 | Datastore:datastore-157:481639ff-d88d-4622-8872-ec6856e6b157 | Datastore
 bf8e8dcb-5b28-4b03-863e-89308bc8c502 | Datastore:datastore-11:481639ff-d88d-4622-8872-ec6856e6b157  | Datastore

VCDB=# select * from cl_library_storage;
              library_id              |              storage_id
--------------------------------------+--------------------------------------
 3336e2ad-8166-4e6a-850d-a9d81c41ba01 | a0018db8-f630-4c04-b0f1-30c900ad691c
 946cea65-5cd0-41e0-83ab-17259f690ce1 | bf8e8dcb-5b28-4b03-863e-89308bc8c501
 946cea65-5cd0-41e0-83ab-17259f690ce1 | bf8e8dcb-5b28-4b03-863e-89308bc8c502

After adding the above records, I am now able to deploy VMs from the content library across datacenters.

The content library DB schema can be found here:

/usr/lib/vmware-content-library/support/scripts/db/PostgreSQL/cls_unified/cls60.sql
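Alternatively, the live table definitions can be inspected from psql before making any inserts (a small sketch using psql's \d meta-command):

VCDB=# \d cl_storage
VCDB=# \d cl_library_storage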

PostgreSQL: could not open file “/var/lib/pgsql/data/pg_clog/0726”: No such file or directory

When selecting data from a table, Postgres might at times report that a pg_clog file could not be found; this is usually a sign of database corruption.

hsphere=# select * from TABLE;
ERROR:  could not access status of transaction 1918986094
DETAIL:  could not open file "/var/lib/pgsql/data/pg_clog/0726": No such file or directory

In such instances, we can re-create the missing file with the dd command below (note: any transaction status data that was in the missing file is lost). The command writes a 256 kB file of zeros, which matches the size of a pg_clog segment. (As a sanity check, the segment name is the transaction ID divided by 1,048,576, written in hex: 1918986094 / 1048576 ≈ 1830 = 0x726, which matches the file named in the error.)

dd if=/dev/zero of=/var/lib/pgsql/data/pg_clog/0726 bs=256k count=1

Note: Ensure the ownership and permissions on the re-created file match the other files in pg_clog. Creating a blank file means the transaction status data that was originally in it is gone, so the affected records may need to be re-created, or the relevant rows (and any dependent constraints etc.) removed from the DB.
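A minimal sketch of fixing the ownership and permissions, assuming the cluster runs as the postgres user (compare against the neighbouring pg_clog files to be sure):

chown postgres:postgres /var/lib/pgsql/data/pg_clog/0726
chmod 600 /var/lib/pgsql/data/pg_clog/0726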

Statement diagnostic data from driver is XX002:0:7:ERROR: index “pk_index_name” contains corrupted page at block xxxx;

In vpxd.log:
--> Error while executing the query
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [VdbStatement] Connection diagnostic data from driver is HY000:0:110:
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [VdbStatement] Bind parameters:
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [VdbStatement] [0]datatype: 1, size: 4, arraySize: 0
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [VdbStatement] value = 172237
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [Vdb::IsRecoverableErrorCode] Unable to recover from XX002:7
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [Vdb::IsRecoverableErrorCode] Unable to recover from HY000:110
2018-09-25T21:17:09.351Z error vpxd[7F3FC958B800] [Originator@6876 sub=Default] [VdbStatement] SQLError was thrown: "ODBC error: (XX002) - ERROR: index "pk_vpx_alarm_runtime" contains corrupted page at block 5877;
--> Error while executing the query" is returned when executing SQL statement "DELETE FROM VPX_ALARM_RUNTIME WHERE ENTITY_ID=?"
2018-09-25T21:17:09.355Z error vpxd[7F3FC958B800] [Originator@6876 sub=Daemon] Unhandled exception: Error[VdbODBCError] (-1) "ODBC error: (XX002) - ERROR: index "pk_vpx_alarm_runtime" contains corrupted page at block 5877;
--> Error while executing the query" is returned when executing SQL statement "DELETE FROM VPX_ALARM_RUNTIME WHERE ENTITY_ID=?"
2018-09-25T21:17:09.355Z info vpxd[7F3FC958B800] [Originator@6876 sub=SupportMgr] Wrote uptime information

Resolution:

Take a snapshot of the VCSA, then export the affected table:

VCDB=# copy (select * from vpx_alarm_runtime) to '/tmp/vpx_alarm_runtime_select_.csv' with CSV DELIMITER ',';
COPY 3854

Drop the affected constraint:

VCDB=# alter table vpx_alarm_runtime drop CONSTRAINT PK_VPX_ALARM_RUNTIME;

Recreate the constraint (this rebuilds the underlying index):

VCDB=# alter table vpx_alarm_runtime add constraint PK_VPX_ALARM_RUNTIME primary key (ENTITY_ID, ALARM_ID, EXPRESSION_NAME);

For reference, the definition of table VPX_ALARM_RUNTIME, in case the table itself has to be recreated:
/*==============================================================*/
/* Table: VPX_ALARM_RUNTIME */
/*==============================================================*/
create table VPX_ALARM_RUNTIME (
ENTITY_ID INTEGER not null,
ALARM_ID INTEGER not null,
ENTITY_TYPE INTEGER,
EXPRESSION_NAME VARCHAR(440) not null,
STATE_VALUE VARCHAR(255),
METRIC_VALUE INTEGER,
CREATED_TIME TIMESTAMP,
STATUS_VALUE VARCHAR(50),
EVENT_KEY INTEGER null,
constraint PK_VPX_ALARM_RUNTIME primary key (ENTITY_ID, ALARM_ID,
EXPRESSION_NAME)
);
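If the table did have to be dropped and recreated, the rows exported earlier can be loaded back from the CSV (a sketch reusing the file written above):

VCDB=# \copy vpx_alarm_runtime from '/tmp/vpx_alarm_runtime_select_.csv' with CSV DELIMITER ','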
 

Connecting to a VMware appliance Postgres/psql instance from an external computer/pgAdmin

By default, the Postgres instances on vCenter, vSphere Replication, etc. are configured not to accept connections from other computers on the network. In this post, I will show you how to re-configure them to allow connections from an external box, for tools like pgAdmin.

Note: Depending on the appliance, the Postgres configuration files/paths might differ. In this post, we will locate the configuration files first and then change them.

Start by SSHing into the appliance.

Type the below command to search for the configuration file: postgresql.conf

find / -iname postgresql.conf

Take a copy of the configuration file:

cp /storage/db/vpostgres/postgresql.conf /storage/db/vpostgres/postgresql.conf.backup

Edit the file

vi /storage/db/vpostgres/postgresql.conf

Look for the line that says “listen_addresses = …”. In some cases this will be hashed out; remove the hash, and replace localhost with *.
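For example, the change would look something like this (the exact default value may differ between appliance versions):

Before:  #listen_addresses = 'localhost'
After:   listen_addresses = '*'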

Save the configuration file (key combination: “Esc”, then “:”, then type “wq!”).

Next, search for the client authentication configuration file, pg_hba.conf:

find / -iname pg_hba.conf

Back up the configuration file:

cp /storage/db/vpostgres/pg_hba.conf /storage/db/vpostgres/pg_hba.conf.bak

Edit the file

vi /storage/db/vpostgres/pg_hba.conf

Look for the existing host entries and add a line like the one below, replacing the subnet with your own:

host all all 192.168.1.0/24 trust   <-- my workstation is on the 192.168.1.x subnet

The method is set to trust (not recommended), as I did not want to log in to the DB with a password.
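If password authentication is preferred, use md5 instead of trust (the DB password can usually be found in the credential files mentioned at the end of this post):

host all all 192.168.1.0/24 md5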

Save the configuration file (key combination: “Esc”, then “:”, then type “wq!”).

Restart the vmware-vpostgres service:

service vmware-vpostgres restart

For vCenter Server 6.5, stop and start the service with service-control:

service-control --stop vmware-vpostgres
service-control --start vmware-vpostgres

Confirm the Postgres port number and that it is listening (the vSphere Replication appliance listens on a different port, so it is best to know which port to connect to when accessing from an external box):

netstat -anop | grep postgres

From the above, we know the port is 5432
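The port can also be confirmed from inside a local psql session (a small sketch):

VCDB=# SHOW port;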

Launch pgAdmin and add a new server.
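If a psql client is available on the external machine, the connection can be tested from there as well (a sketch; replace the hostname with your appliance's address and pick the database you need):

psql -h vcsa.example.com -p 5432 -U postgres VCDB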

Also note that, in most cases, the DB credentials are stored in configuration files such as:

  • VCDB.properties
find / -iname vcdb.properties
cat /etc/vmware-vpx/vcdb.properties
  • or the .pgpass file in the home directory
ls -ltha ~/

cat ~/.pgpass