Tuesday, 14 August 2018

Simulating ON COMMIT DELETE Global Temporary Tables on PostgreSQL

Vibhor Kumar did a great post on Leveraging Local Temporary Table for Global Temporary Table in EDB Postgres 9.6, covering both the ON COMMIT PRESERVE and ON COMMIT DELETE options:

https://vibhorkumar.blog/2017/07/06/leveraging-local-temporary-table-for-global-temporary-table-in-edb-postgres-9-6/


The code example only covers the INSERT case, so here is an extension of that code which handles INSERT, UPDATE, and DELETE:

CREATE UNLOGGED TABLE gtt_test_definition (id int);
CREATE VIEW gtt_test AS SELECT * FROM gtt_test_definition;

CREATE OR REPLACE FUNCTION gtt_test_function()
RETURNS TRIGGER
LANGUAGE plpgsql
AS
$function$
BEGIN
  /* attempt to run the DML */
  IF (TG_OP = 'INSERT') THEN
    INSERT INTO gtt_test_temp VALUES(NEW.*);
    RETURN NEW;
  ELSIF (TG_OP = 'UPDATE') THEN
    UPDATE gtt_test_temp SET id = NEW.id WHERE id = OLD.id;
    RETURN NEW;
  ELSIF (TG_OP = 'DELETE') THEN
    DELETE FROM gtt_test_temp WHERE id = OLD.id;
    RETURN OLD;  /* OLD, not NEW: NEW is not assigned for DELETE, so returning it reports DELETE 0 */
  END IF;

  /* when the temp table doesn't already exist */
  EXCEPTION WHEN undefined_table THEN
    /* create local temporary table */
    CREATE TEMP TABLE gtt_test_temp
      (LIKE gtt_test_definition INCLUDING ALL)
      INHERITS (gtt_test_definition)
      ON COMMIT DELETE ROWS;

    /* re-run the DML */
    IF (TG_OP = 'INSERT') THEN
      INSERT INTO gtt_test_temp VALUES(NEW.*);
      RETURN NEW;
    ELSIF (TG_OP = 'UPDATE') THEN
      UPDATE gtt_test_temp SET id = NEW.id WHERE id = OLD.id;
      RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
      DELETE FROM gtt_test_temp WHERE id = OLD.id;
      RETURN OLD;
    END IF;

END;
$function$;

CREATE TRIGGER gtt_test_trigger
  INSTEAD OF INSERT OR UPDATE OR DELETE ON gtt_test
  FOR EACH ROW EXECUTE PROCEDURE gtt_test_function();


And here is a test case, showing it working for ON COMMIT DELETE:

postgres=# \d
No relations found.

postgres=# \i test_gtt.sql
CREATE TABLE
CREATE VIEW
CREATE FUNCTION
CREATE TRIGGER

postgres=# \d
                List of relations
 Schema |        Name         | Type  |  Owner   
--------+---------------------+-------+----------
 public | gtt_test            | view  | postgres
 public | gtt_test_definition | table | postgres
(2 rows)

postgres=# select * from gtt_test;
 id 
----
(0 rows)

postgres=# insert into gtt_test values(1);
NOTICE:  merging column "id" with inherited definition
INSERT 0 1
postgres=# select * from gtt_test;
 id 
----
(0 rows)

postgres=# \d
                 List of relations
  Schema   |        Name         | Type  |  Owner   
-----------+---------------------+-------+----------
 pg_temp_2 | gtt_test_temp       | table | postgres
 public    | gtt_test            | view  | postgres
 public    | gtt_test_definition | table | postgres
(3 rows)

postgres=# begin;
BEGIN
postgres=# insert into gtt_test values(1);
INSERT 0 1
postgres=# select * from gtt_test;
 id 
----
  1
(1 row)

postgres=# insert into gtt_test values(2);
INSERT 0 1
postgres=# select * from gtt_test;
 id 
----
  1
  2
(2 rows)

postgres=# commit;
COMMIT
postgres=# select * from gtt_test;
 id 
----
(0 rows)

postgres=# begin;
BEGIN
postgres=# insert into gtt_test values(1);
INSERT 0 1
postgres=# insert into gtt_test values(2);
INSERT 0 1
postgres=# select * from gtt_test;
 id 
----
  1
  2
(2 rows)

postgres=# update gtt_test set id=100 where id=1;
UPDATE 1
postgres=# select * from gtt_test;
 id  
-----
   2
 100
(2 rows)

postgres=# commit;
COMMIT
postgres=# select * from gtt_test;
 id 
----
(0 rows)

postgres=# begin;
BEGIN
postgres=# insert into gtt_test values(1);
INSERT 0 1
postgres=# insert into gtt_test values(2);
INSERT 0 1
postgres=# insert into gtt_test values(3);
INSERT 0 1
postgres=# select * from gtt_test;
 id 
----
  1
  2
  3
(3 rows)

postgres=# delete from gtt_test where id=2;
DELETE 1
postgres=# select * from gtt_test;
 id 
----
  1
  3
(2 rows)

postgres=# commit;
COMMIT
postgres=# select * from gtt_test;
 id 
----
(0 rows)
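
Should you want to remove the simulation later, dropping the view also drops its trigger, and dropping the definition table with CASCADE removes any per-session temp tables still inheriting from it (a sketch, assuming no other sessions are still using them):

DROP VIEW gtt_test;
DROP TABLE gtt_test_definition CASCADE;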

Wednesday, 28 March 2018

Don't use "kill" on PostgreSQL... or you're gonna have a bad time!

Have you ever been tempted to "kill -9" a non-behaving PostgreSQL client session that won't die via "pg_cancel_backend()" or "pg_terminate_backend()"?  At first you may believe (incorrectly) that it will affect only the single session you've killed, but in reality it's an incredibly bad idea... the result will be ALL of your client sessions being reset and the database crashing.

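As a reminder, the supported server signalling functions should always be tried first, using the pid from pg_stat_activity:

SELECT pg_cancel_backend(15658);     -- politely cancel the backend's current query
SELECT pg_terminate_backend(15658);  -- terminate the backend session
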
Here's an example.  Let's say there is one session you need to kill, so you use kill -9:

postgres=# select * from pg_stat_activity;
 datid | datname  |  pid  | usesysid | usename  | application_name | client_addr | client_hostname | client_port |         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type | wait_event | state  | backend_xid | backend_xmin |              query
-------+----------+-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+---------------------------------
 13323 | postgres | 15658 |       10 | postgres | psql             |             |                 |          -1 | 2018-03-28 16:46:27.924226+00 | 2018-03-28 16:46:34.663799+00 | 2018-03-28 16:46:34.663799+00 | 2018-03-28 16:46:34.663808+00 |                 |            | active |             |         1757 | select * from pg_stat_activity;
(1 row)


postgres=# \! kill -9 15658


The postmaster detects that a process has exited abnormally:

16:46:44 UTC [15642]: LOG:  server process (PID 15658) was terminated by signal 9: Killed
16:46:44 UTC [15642]: DETAIL:  Failed process was running: select * from pg_stat_activity;


And even though it wasn't in a transaction and was only running a simple select, the postmaster terminates "any other active server processes":


16:46:44 [15642]: LOG:  terminating any other active server processes


The end result is that the entire PostgreSQL cluster/server resets itself, and performs crash recovery to clean up.  The time taken to do this can be significant, depending on how busy your database is and how long it's been since the last checkpoint.

16:46:44 [15648]: DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
16:46:44 [15648]: HINT:  In a moment you should be able to reconnect to the database and repeat your command.
16:46:44 [15642]: LOG:  archiver process (PID 15649) exited with exit code 1
16:46:44 [15642]: LOG:  all server processes terminated; reinitializing
16:46:44 [15661]: LOG:  database system was interrupted; last known up at 2018-03-28 16:46:16 UTC
16:46:44 [15661]: LOG:  database system was not properly shut down; automatic recovery in progress
16:46:44 [15661]: LOG:  redo starts at 0/155C230
16:46:44 [15661]: LOG:  invalid record length at 0/155C2A0: wanted 24, got 0
16:46:44 [15661]: LOG:  redo done at 0/155C268
16:46:44 [15661]: LOG:  checkpoint starting: end-of-recovery immediate
16:46:45 [15661]: LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.030 s, sync=0.000 s, total=0.035 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB
16:46:45 [15661]: LOG:  MultiXact member wraparound protections are now enabled
16:46:45 [15665]: LOG:  autovacuum launcher started

16:46:45 [15642]: LOG:  database system is ready to accept connections


So in short, if you can't kill a session using the standard PostgreSQL Server Signalling Functions, then the only sensible option is to issue a restart during a maintenance window:

psql -c "checkpoint;"
pg_ctl restart -m fast

... or if that still won't work, then the last resort is:

pg_ctl restart -m immediate


Finally, it may seem like a more complicated process to follow when your database has gone awry, but if you don't use the PostgreSQL supported methods then your users will end up in a heap... just like they used to back in the Ingres days.

Stay safe, and keep your fingers off the trigger!

Greg.

Tuesday, 20 March 2018

Running multiple PostgreSQL clusters in RedHat/CentOS?

It's always something that comes up, even in today's virtualised and containerised environments... "How can I run multiple PostgreSQL clusters on a single server?"

If you're used to doing RPM installs on RedHat/CentOS, then you'll be aware that the default location for all database files is:

/var/lib/pgsql/<version>/data

And the related service name is:

postgresql-<version>

That works well when you're only running a single cluster of each major version on a server, but things become complicated when you need more than one.  There are several considerations:
  • What network port should it use?
  • Where should $PGDATA be?
  • What should the services be called?

What network port should it use?

This is the easiest to solve, as the default is "5432".  By convention subsequent clusters running on a host simply increment this number by one, so "5433", then "5434", etc.  In practice, you can choose any number you prefer, and some sites allocate a unique port to each cluster across the entire enterprise, so that there are never any clashes.

Modify $PGDATA/postgresql.conf, uncommenting this line and setting the new value, then restart the cluster:


#port = 5432   # (change requires restart)
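
For a second cluster on the same host this might become, for example:

port = 5433   # (change requires restart)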



Where should $PGDATA be?

This becomes more complex, especially if you run SELinux in "enforcing" mode.  The simplest approach is to extend the existing structure in a similar way to Ubuntu, and include the cluster name after the version:

/var/lib/pgsql/<version>/<cluster_name>/data

If you want several mountpoints because you prefer to split your database, then they can all be mounted at the same level as "data", and this still fits within the default SELinux configuration for PostgreSQL:

/var/lib/pgsql/<version>/<cluster_name>/data
                                       /pg_wal
                                       /pg_log
                                       /<tblspc_name>
                                       /pg_stat_tmp
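
To create a cluster in the new location, run initdb against it directly (a sketch, assuming the standard PGDG binary path and a v9.6 cluster named "test"):

mkdir -p /var/lib/pgsql/9.6/test
chown postgres:postgres /var/lib/pgsql/9.6/test
sudo -u postgres /usr/pgsql-9.6/bin/initdb -D /var/lib/pgsql/9.6/test/data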



What will the services be called?

Assuming you're using CentOS v7 and systemd, there are two options here, both of which have merits.  You can either use a separate service file for each new cluster, or create a single service file template to cover all clusters.

Service File Template

The file should be owned by root:root, mode 644, and be called (replacing "<version>" with the actual PostgreSQL version):

/etc/systemd/system/postgresql-<version>@.service

The contents should be:

.include /lib/systemd/system/postgresql-<version>.service

[Unit]
Description=PostgreSQL <version> database server - cluster_name:%i

[Service]
Environment=PGDATA=/var/lib/pgsql/<version>/%i/data


For PostgreSQL v9.6, this would look like:

[root@localhost]# cat /etc/systemd/system/postgresql-9.6@.service
.include /lib/systemd/system/postgresql-9.6.service

[Unit]
Description=PostgreSQL 9.6 database server - cluster_name:%i

[Service]
Environment=PGDATA=/var/lib/pgsql/9.6/%i/data


Service File per Cluster

If you need more control on a per-cluster basis, then a separate file for each cluster must be created:

/etc/systemd/system/postgresql-<version>-<cluster_name>.service

The contents should be:

.include /lib/systemd/system/postgresql-<version>.service

[Unit]
Description=PostgreSQL <version> database server - cluster_name:<cluster_name>

[Service]
User=<postgres_service_user>
Group=<postgres_service_group>
Environment=PGDATA=/var/lib/pgsql/<version>/<cluster_name>/data


Here is an example for PostgreSQL v9.6, a cluster name of "test", running with the Linux user:group "postgres_test:postgres_test":

[root@localhost]# cat /etc/systemd/system/postgresql-9.6-test.service
.include /lib/systemd/system/postgresql-9.6.service

[Unit]
Description=PostgreSQL 9.6 database server - cluster_name:test

[Service]
User=postgres_test
Group=postgres_test
Environment=PGDATA=/var/lib/pgsql/9.6/test/data

What should the services be called?

Once you've chosen to use either a template or a service file per cluster, the service names are:

Service File Template

postgresql-<version>@<cluster_name>

To start a PostgreSQL v9.6 cluster called "main", you would use:

sudo systemctl start postgresql-9.6@main

Service File per Cluster

postgresql-<version>-<cluster_name>

To start a PostgreSQL v9.6 cluster called "main", you would use:

sudo systemctl start postgresql-9.6-main

Both are very similar, the only difference being the use of an at sign "@" or a dash "-" between the version and the cluster name.

IMPORTANT: The services won't get picked up until you reload the systemd configuration by running:

sudo systemctl daemon-reload
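
A typical first-time sequence for a new "test" cluster using the template unit would therefore be:

sudo systemctl daemon-reload
sudo systemctl enable postgresql-9.6@test
sudo systemctl start postgresql-9.6@test
sudo systemctl status postgresql-9.6@test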

Conclusion

This structure allows for the flexibility of running multiple clusters on a server, without going as far as Ubuntu and splitting the configuration files out of $PGDATA... which in my opinion is very "un-Postgres-ish".

Even if you don't need it now, adopting such a structure could someday make your life easier, because you're already prepared to run multiple clusters on the server.


Video

I presented my ideas to the London PostgreSQL Users Group, and their input was affirming and interesting at the same time.  If you want to hear what they had to say, then click below:


  https://skillsmatter.com/skillscasts/11736-multiple-postgre-clusters-on-redhat-centos#video


Wednesday, 14 February 2018

Do you have a HUGE table that never seems to vacuum frequently enough?


In situations where you have very large tables, relying on autovacuum_vacuum_scale_factor can mean many thousands or hundreds of thousands of rows are updated before a vacuum is triggered.
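
For background, autovacuum vacuums a table once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples.  You can estimate each table's trigger point from the catalogs (a sketch using the global settings only; per-table storage options, as set below, would override them):

SELECT s.relname,
       s.n_dead_tup,
       current_setting('autovacuum_vacuum_threshold')::int
         + current_setting('autovacuum_vacuum_scale_factor')::float8 * c.reltuples
         AS vacuum_trigger_point
  FROM pg_stat_user_tables s
  JOIN pg_class c ON c.oid = s.relid
  ORDER BY s.n_dead_tup DESC;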

One way to ensure you have regular vacuuming is to set autovacuum_vacuum_scale_factor = 0 for those tables, and increase autovacuum_vacuum_threshold to an appropriate level.  Be warned that you can trap yourself into a constant vacuum spiral if you don't make the threshold high enough, so understand your update volume before you make changes and monitor afterwards to ensure you're not constantly vacuuming.

I tested this on PostgreSQL v9.6.7 on CentOS v7.4, running standard parameters except for setting log_autovacuum_min_duration = 0 so that we can see the autovacuum activity:

postgres=# SELECT version();
                                                 version                                                  
----------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# SHOW autovacuum_vacuum_threshold;
 autovacuum_vacuum_threshold 
-----------------------------
 50
(1 row)

postgres=# SHOW autovacuum_vacuum_scale_factor;
 autovacuum_vacuum_scale_factor 
--------------------------------
 0.2
(1 row)

postgres=# show log_autovacuum_min_duration;
 log_autovacuum_min_duration 
-----------------------------
 0
(1 row)


Create a table, and set the autovacuum parameters explicitly:

postgres=# CREATE TABLE vacuum_threshold_test(id int);
CREATE TABLE

postgres=# ALTER TABLE vacuum_threshold_test SET (autovacuum_vacuum_scale_factor = 0);
ALTER TABLE

postgres=# ALTER TABLE vacuum_threshold_test SET (autovacuum_vacuum_threshold = 1000);
ALTER TABLE

postgres=# \d+ vacuum_threshold_test
                Table "public.vacuum_threshold_test"
 Column |  Type   | Modifiers | Storage | Stats target | Description 
--------+---------+-----------+---------+--------------+-------------
 id     | integer |           | plain   |              | 
Options: autovacuum_vacuum_scale_factor=0, autovacuum_vacuum_threshold=1000
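
The same storage options can also be read back from the catalog directly:

SELECT reloptions FROM pg_class WHERE relname = 'vacuum_threshold_test';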


Create some dummy data, in excess of the autovacuum_vacuum_threshold:

postgres=# INSERT INTO vacuum_threshold_test(id) (SELECT generate_series(1,2000));
INSERT 0 2000


Update exactly the same number of rows as the autovacuum_vacuum_threshold, and for good measure force a checkpoint:

postgres=# UPDATE vacuum_threshold_test SET id = id + 1 WHERE id BETWEEN 1 AND 1000;
UPDATE 1000
postgres=# CHECKPOINT;
CHECKPOINT


That isn't enough to trigger an autovacuum, as the dead tuple count must be GREATER THAN autovacuum_vacuum_threshold, so update one more row:

postgres=# UPDATE vacuum_threshold_test SET id = id + 1 WHERE id = 2000;
UPDATE 1


That then triggers an autovacuum, but strangely it only removed 1 tuple... not 1001 as you would expect:

< 2018-02-14 11:59:05.571 GMT > LOG:  automatic vacuum of table "postgres.public.vacuum_threshold_test": index scans: 0
pages: 0 removed, 14 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 1 removed, 2000 remain, 0 are dead but not yet removable
buffer usage: 51 hits, 2 misses, 12 dirtied
avg read rate: 0.613 MB/s, avg write rate: 3.677 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.02 sec


Alternatively, if you exceed autovacuum_vacuum_threshold in one transaction, it removes all 1001 of them:

postgres=# UPDATE vacuum_threshold_test SET id = id + 1 WHERE id BETWEEN 2 AND 1002;
UPDATE 1001

< 2018-02-14 12:07:05.648 GMT > LOG:  automatic vacuum of table "postgres.public.vacuum_threshold_test": index scans: 0
pages: 0 removed, 18 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 1001 removed, 2000 remain, 0 are dead but not yet removable
buffer usage: 60 hits, 0 misses, 0 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec


If you know what you want, then you can tune individual tables to always vacuum at the right point to prevent bloat, rather than relying on the cluster-wide parameters in postgresql.conf.

Interestingly, if you set both autovacuum_vacuum_scale_factor = 0 and autovacuum_vacuum_threshold = 0, then any single update/delete to the table will result in autovacuum being triggered.

This has been logged as a bug on the PostgreSQL mailing list:

https://www.postgresql.org/message-id/151861949011.1450.18007996621988910831%40wrigleys.postgresql.org

Thursday, 1 February 2018

Monitoring bgwriter efficiency during checkpoints with pg_buffercache

Have you ever wanted to know what's happening inside your PostgreSQL buffer cache?  Is your bgwriter keeping up, or are you suffering with a lot of dirty buffers during a checkpoint?

Uncloak some of the mystery by using the pg_buffercache extension.

CREATE EXTENSION pg_buffercache;


You can then use the following query, and plot the results into your favourite graphing solution:

SELECT buffer_status, sum(count) AS count
  FROM (SELECT CASE isdirty
                 WHEN true THEN 'dirty'
                 WHEN false THEN 'clean'
                 ELSE 'empty'
               END AS buffer_status,
               count(*) AS count
          FROM pg_buffercache
          GROUP BY buffer_status
        UNION ALL
          SELECT * FROM (VALUES ('dirty', 0), ('clean', 0), ('empty', 0)) AS tab2 (buffer_status,count)) tab1
  GROUP BY buffer_status;
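
Having run the query once in psql, you can re-execute it at a fixed interval with \watch, e.g. every 10 seconds:

\watch 10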


The output will look like this:

postgres=# SELECT buffer_status, sum(count) AS count
  FROM (SELECT CASE isdirty
                 WHEN true THEN 'dirty'
                 WHEN false THEN 'clean'
                 ELSE 'empty'
               END AS buffer_status,
               count(*) AS count
          FROM pg_buffercache
          GROUP BY buffer_status
        UNION ALL
          SELECT * FROM (VALUES ('dirty', 0), ('clean', 0), ('empty', 0)) AS tab2 (buffer_status,count)) tab1
  GROUP BY buffer_status;
 buffer_status | count
---------------+--------
 clean         |    544
 dirty         |     18
 empty         | 523726
(3 rows)


And whenever there is a checkpoint, the dirty count should go to zero:

postgres=# checkpoint;
CHECKPOINT

postgres=# SELECT buffer_status, sum(count) AS count
  FROM (SELECT CASE isdirty
                 WHEN true THEN 'dirty'
                 WHEN false THEN 'clean'
                 ELSE 'empty'
               END AS buffer_status,
               count(*) AS count
          FROM pg_buffercache
          GROUP BY buffer_status
        UNION ALL
          SELECT * FROM (VALUES ('dirty', 0), ('clean', 0), ('empty', 0)) AS tab2 (buffer_status,count)) tab1
  GROUP BY buffer_status;
 buffer_status | count
---------------+--------
 clean         |    562
 dirty         |      0
 empty         | 523726
(3 rows)


ACTION: If there is a large clearance of dirty blocks during the checkpoint, then you should tune bgwriter to be more aggressive.
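
For example, a more aggressive starting point might be the following (a sketch; the right values depend on your workload, and these bgwriter parameters only require a configuration reload, not a restart):

ALTER SYSTEM SET bgwriter_delay = '100ms';
ALTER SYSTEM SET bgwriter_lru_maxpages = 1000;
ALTER SYSTEM SET bgwriter_lru_multiplier = 3.0;
SELECT pg_reload_conf();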


Should you want the output as a single row per sample, then you can use crosstab:

CREATE EXTENSION tablefunc;

SELECT * FROM crosstab(
  $$
    SELECT now() sample_timestamp, buffer_status, sum(buffer_count)::int AS buffer_count
          FROM (SELECT CASE isdirty
                         WHEN false THEN 'clean'
                         WHEN true THEN 'dirty'
                         ELSE 'empty'
                       END AS buffer_status,
                       count(*) AS buffer_count
                  FROM pg_buffercache
                  GROUP BY buffer_status
                UNION ALL
                  SELECT * FROM (VALUES ('clean', 0), ('dirty', 0), ('empty', 0)) AS dummy_table(buffer_status, buffer_count)
               ) outer_table
          GROUP BY buffer_status
          ORDER BY buffer_status
  $$)
AS ct(sample_timestamp timestamptz, clean int, dirty int, empty int);

The output will then look like:

       sample_timestamp        | clean | dirty | empty
-------------------------------+-------+-------+--------
 2018-02-02 11:38:00.915978+00 |  3108 |     0 | 521180
(1 row)


I created a test database, and ran pgbench against it whilst sampling every 10 seconds:

pgbench -i -s 100 pgbench

then

pgbench -c 32 -j 4 -P 10 -T 1200 pgbench


The database was configured with these parameters:

checkpoint_timeout = 5min
checkpoint_completion_target = 0.9

bgwriter_delay = 100ms
bgwriter_lru_maxpages = 1000
bgwriter_lru_multiplier = 3.0


The resulting graph plotted the clean, dirty, and empty buffer counts for each 10-second sample.

Some notable points:

1) shared_buffers is big enough to fit the entire data set, because we never run out of empty buffers (grey).
2) Work starts at #14 and the dirty buffers (orange) start to climb.
3) A checkpoint starts at #27, and dirty blocks are written to disk.  This results in clean buffers (blue) balancing out with the new dirty buffers being written.
4) This held in equilibrium until a forced manual checkpoint is done at #98.  This clears all dirty buffers to disk immediately, instead of letting bgwriter do it in a controlled fashion.
5) Things quickly get back to equilibrium.
6) The test completes at #128, and the buffers stay in their current clean/dirty/empty state.
7) Once the next timed checkpoint is triggered at #140, the dirty blocks are written to disk at the rate determined by the bgwriter parameters.
8) All dirty blocks are written to disk by #167.

Wednesday, 10 January 2018

Faster shutdowns with checkpoint

Have you ever restarted your PostgreSQL cluster and wondered why it's taking so long?  The cause of this is generally one of these:

  • You didn't include the "-m fast" flag (v9.4 and earlier), and sessions are still connected
  • A large update query is rolling back
  • There are a large number of dirty blocks, which are not yet written to disk


You can cover off the first two by:

  • Using "pg_ctl -m fast"
  • Checking in pg_stat_activity for long-running queries (see the query below)
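
For the second point, a quick way to spot long-running work, ordered by transaction age (a simple sketch against pg_stat_activity):

SELECT pid, state, now() - xact_start AS xact_age, query
  FROM pg_stat_activity
 WHERE state <> 'idle'
 ORDER BY xact_start;
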
But to minimise your downtime when restarting a PostgreSQL cluster, consider forcing a "checkpoint" beforehand.

The normal process of a shutdown is:
  1. Stop any new connections
  2. Disconnect any existing connections, and roll back transactions (if "-m fast" is used)
  3. Write all dirty blocks to disk
  4. Shutdown the database cleanly, and restart

The 3rd step can sometimes take a long time if you've got a lot of dirty buffers.  Maybe you've got a very long checkpoint_timeout set, or possibly your server is just very busy.
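
If you have the pg_buffercache extension installed (see the earlier post on monitoring the buffer cache), you can gauge how much work the 3rd step will involve before you begin:

SELECT count(*) AS dirty_buffers FROM pg_buffercache WHERE isdirty;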

If you run a manual "CHECKPOINT" immediately prior to running the shutdown, then you can sometimes drastically shorten the time a restart takes and keep your users happy:

postgres@localhost[default:5432] ~/$ psql
psql (10.1)
Type "help" for help.

postgres=# CHECKPOINT;
CHECKPOINT
postgres=# \q

postgres@localhost[default:5432] ~/$ pg_ctl restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....LOG:  listening on IPv4 address "0.0.0.0", port 5432
LOG:  listening on IPv6 address "::", port 5432
LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "log".
 done
server started


You can combine this all into a single step by running:

postgres@localhost[default:5432] ~/$ psql -c CHECKPOINT && pg_ctl restart -m fast
CHECKPOINT
waiting for server to shut down.... done
server stopped
waiting for server to start....2018-01-11 09:53:21.419 GMT [8919] LOG:  listening on IPv4 address "0.0.0.0", port 5432
LOG:  listening on IPv6 address "::", port 5432
LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "log".
 done

server started
