Wednesday, 23 October 2019

Think AWS support for RDS gives 100% coverage? Think again!

Just a friendly FYI that AWS do not have access to any of your PostgreSQL RDS database logs, such as postgresql.log.  This is important if you happen to hit a PostgreSQL bug such as this one, which was fixed in v11.3:

  • Fix race conditions in management of dynamic shared memory (Thomas Munro)

    These could lead to "dsa_area could not attach to segment" or "cannot unpin a segment that is not pinned" errors.
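
A quick sanity check, before assuming you've hit this bug, is to confirm which minor version you are actually running (anything from 11.3 onwards already contains the fix):

postgres=> SHOW server_version;

or, with a bit more detail:

postgres=> SELECT version();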

Once this problem occurs, your RDS database will appear to be running, and can possibly still service some queries... but most queries will hang, and eventually your application servers will time out, causing application downtime.

Unfortunately, from the AWS support perspective the RDS database is up, it's using some CPU and network, so there's no reason to suspect it isn't healthy.  It's only by looking in your postgresql.log and seeing this message that you will know what the problem is:

2018-01-01 12:34:56 UTC::@:[10234]:FATAL:  cannot unpin a segment that is not pinned
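
From inside the database, the only other hint may be sessions piling up rather than completing. A quick look at pg_stat_activity (a rough check; the wait_event columns exist in 9.6 and later) shows whether queries are stuck waiting rather than making progress:

postgres=> SELECT pid,
                  now() - query_start AS runtime,
                  state,
                  wait_event_type,
                  wait_event,
                  left(query, 60) AS query
             FROM pg_stat_activity
            WHERE state <> 'idle'
            ORDER BY query_start;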



So if you run an application team, please ensure you have a DBA somewhere on staff, because AWS doesn't have your back when you hit problems such as this.  You could always just turn it off and on again, but you will never find the root cause and will be cursed to keep rebooting whenever things break at unpredictable times.

postgresql.log file grows infinitely, and never rotates... did your disk fill up previously?

An interesting (and currently undocumented) behaviour of PostgreSQL is that when an error occurs creating the next log file (as configured by log_filename), all future log rotations are disabled by this piece of code in syslogger.c:


/*
 * ENFILE/EMFILE are not too surprising on a busy system; just
 * keep using the old file till we manage to get a new one.
 * Otherwise, assume something's wrong with Log_directory and stop
 * trying to create files.
 */
if (errno != ENFILE && errno != EMFILE)
{
    ereport(LOG,
            (errmsg("disabling automatic rotation (use SIGHUP to re-enable)")));
    rotation_disabled = true;
}


In most default configurations this will never be noticed, because log_directory is on the same filesystem as data_directory.  In that case the entire instance will crash when the disk fills, and once space is cleared it will be restarted afresh with log rotation enabled.
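
You can check where your logs are going, and what rotation you think you have in place, from any psql session (these are all standard settings, shown here purely as a quick sanity check):

postgres=# SHOW data_directory;
postgres=# SHOW log_directory;
postgres=# SHOW log_filename;
postgres=# SHOW log_rotation_age;
postgres=# SHOW log_rotation_size;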




If, however, you had the foresight to put your log_directory onto a different filesystem, then it can fill up without taking the entire cluster down.  Once you clear space, logging will resume, but rotation stays disabled, so both log_rotation_age and log_rotation_size are ignored:


if (Log_RotationAge > 0 && !rotation_disabled)
{
    /* Do a logfile rotation if it's time */
    now = (pg_time_t) time(NULL);
    if (now >= next_rotation_time)
        rotation_requested = time_based_rotation = true;
}

if (!rotation_requested && Log_RotationSize > 0 && !rotation_disabled)
{
    /* Do a rotation if file is too big */
    if (ftell(syslogFile) >= Log_RotationSize * 1024L)
    {
        rotation_requested = true;
        size_rotation_for |= LOG_DESTINATION_STDERR;
    }
    if (csvlogFile != NULL &&
        ftell(csvlogFile) >= Log_RotationSize * 1024L)
    {
        rotation_requested = true;
        size_rotation_for |= LOG_DESTINATION_CSVLOG;
    }
}


The result is that one single log file keeps being written to, growing and growing until it once again fills the disk.  If you're relying on log_truncate_on_rotation, or on cron jobs to clean up old logs, neither will help.
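
One way to spot that this has happened (a rough check, assuming PostgreSQL 10 or later, where pg_ls_logdir() is available to superusers and members of pg_monitor) is to list the log directory and look for a single file that keeps growing while no new ones appear:

postgres=# SELECT name, pg_size_pretty(size) AS size, modification
             FROM pg_ls_logdir()
            ORDER BY modification DESC;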

To correct this, once some space has been cleared you just need to send a SIGHUP to the postmaster process.  This is safely done by calling pg_reload_conf(), but you could also use kill -HUP or pg_ctl reload if you prefer.


/*
 * If we had a rotation-disabling failure, re-enable rotation
 * attempts after SIGHUP, and force one immediately.
 */
if (rotation_disabled)
{
    rotation_disabled = false;
    rotation_requested = true;
}
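
For example, once the disk has free space again, a single call from a superuser session is enough:

postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)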


After that your logging will go back to normal, and will rotate as you have configured in your postgresql.conf.

Tuesday, 21 May 2019

You should NEVER use pg_resetwal / pg_resetxlog

I realise it's a blanket statement to say NEVER use pg_resetwal / pg_resetxlog, but in almost 100% of cases it will be true.  One commonly cited reason is:

"My disk filled up and my database crashed, so I ran pg_resetwal to clear some space."





If you have seen this:

postgres=# INSERT INTO full_disk SELECT generate_series(1,1000000);
PANIC:  could not write to file "pg_wal/xlogtemp.11818": No space left on device
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.


or this:

2019-05-21 13:30:20.080 BST [11818] PANIC:  could not write to file "pg_wal/xlogtemp.11818": No space left on device
2019-05-21 13:30:20.080 BST [11818] STATEMENT:  insert into full_disk select generate_series(1,1000000);
2019-05-21 13:30:20.100 BST [11708] LOG:  server process (PID 11818) was terminated by signal 6: Aborted
2019-05-21 13:30:20.100 BST [11708] DETAIL:  Failed process was running: insert into full_disk select generate_series(1,1000000);
2019-05-21 13:30:20.100 BST [11708] LOG:  terminating any other active server processes
2019-05-21 13:30:20.101 BST [11715] WARNING:  terminating connection because of crash of another server process
2019-05-21 13:30:20.101 BST [11715] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-21 13:30:20.101 BST [11715] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-21 13:30:20.103 BST [11708] LOG:  all server processes terminated; reinitializing
2019-05-21 13:30:20.117 BST [11849] LOG:  database system was interrupted; last known up at 2019-05-21 13:27:43 BST
2019-05-21 13:30:20.119 BST [11850] FATAL:  the database system is in recovery mode
2019-05-21 13:30:20.177 BST [11849] LOG:  database system was not properly shut down; automatic recovery in progress
2019-05-21 13:30:20.179 BST [11849] LOG:  redo starts at 0/16B0250
2019-05-21 13:30:20.468 BST [11849] LOG:  redo done at 0/1FFFF88
2019-05-21 13:30:20.468 BST [11849] LOG:  last completed transaction was at log time 2019-05-21 13:29:41.186055+01
2019-05-21 13:30:20.500 BST [11849] PANIC:  could not write to file "pg_wal/xlogtemp.11849": No space left on device
2019-05-21 13:30:20.522 BST [11708] LOG:  startup process (PID 11849) was terminated by signal 6: Aborted
2019-05-21 13:30:20.522 BST [11708] LOG:  aborting startup due to startup process failure

2019-05-21 13:30:20.527 BST [11708] LOG:  database system is shut down



DO NOT RUN pg_resetwal!


Doing so will result in a corrupted database and lost data.  To fix a full-disk situation you should try these, in order of preference:

1) Remove non-PostgreSQL files on the filesystem
2) Add storage to the filesystem
3) Move/remove files in the "pg_log" or "log" directory, after confirming they are all PostgreSQL logs, and not part of the database itself.

... then restart the database, and it will perform crash recovery to ensure you once again have a consistent database.  Upon startup you will see crash recovery happen, and the database will then come up:

2019-05-21 13:35:02.361 BST [11977] LOG:  database system shutdown was interrupted; last known up at 2019-05-21 13:30:20 BST
2019-05-21 13:35:02.379 BST [11977] LOG:  database system was not properly shut down; automatic recovery in progress
2019-05-21 13:35:02.380 BST [11977] LOG:  redo starts at 0/16B0250
2019-05-21 13:35:02.678 BST [11977] LOG:  redo done at 0/1FFFF88
2019-05-21 13:35:02.678 BST [11977] LOG:  last completed transaction was at log time 2019-05-21 13:29:41.186055+01

2019-05-21 13:35:02.734 BST [11973] LOG:  database system is ready to accept connections
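
Once it's back up, it's also worth checking how much space WAL itself is consuming, so you can tell whether you simply need more disk or whether something (an inactive replication slot or a failing archive_command, for example) is stopping WAL from being recycled. A rough check, assuming PostgreSQL 10 or later where pg_ls_waldir() is available:

postgres=# SELECT count(*) AS wal_files,
                  pg_size_pretty(sum(size)) AS wal_size
             FROM pg_ls_waldir();

postgres=# SELECT slot_name, active, restart_lsn
             FROM pg_replication_slots;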


The same goes for manually deleting files in the pg_wal (formerly pg_xlog) directory, mistaking them for logs, which happened so frequently that it was the motivation for renaming everything from "xlog" to "wal".

The only reason you should ever consider running pg_resetwal is if you have an already corrupted database, and you have no backups available.  This will allow you to start the database, and pg_dump what's left... with the understanding that it may not be consistent.

Even if you have an already corrupt database, and you've decided that pg_resetwal is the one and only solution... ALWAYS take a file-level backup (tar, etc.) of the database and all related files (tablespaces, WAL) before doing anything.  It's a one-way command that can actually make things worse.

New PostgreSQL RDS versions - v10.14, v11.9, and v12.4

It appears that new patch versions of PostgreSQL RDS are being rolled out in AWS... so we should expect an announcement soon: REGION: us-east...