
Commit a7ab6ce

Fix stalled lag columns in pg_stat_replication when replay LSN stops advancing.
Previously, when the replay LSN reported in feedback messages from a standby stopped advancing, for example due to a recovery conflict, the write_lag and flush_lag columns in pg_stat_replication would initially update but then stop progressing. This prevented users from correctly monitoring replication lag.

The problem occurred because, once any reported LSN stopped updating, the lag tracker's cyclic buffer became full (the write head reached the slowest read head). In that state, the lag tracker could no longer compute round-trip lag values correctly.

This commit fixes the issue by treating the slowest read entry (the one causing the buffer to fill) as a separate overflow entry, freeing its slot so that the write head and the other read heads can continue advancing through the buffer. As a result, write_lag and flush_lag keep updating even while the reported replay LSN remains stalled.

Backpatch to all supported versions.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwGdGQ=1-X-71Caee-LREBUXSzyohkoQJd4yZZCMt24C0g@mail.gmail.com
Backpatch-through: 13
1 parent 58ba7e5 commit a7ab6ce

1 file changed

src/backend/replication/walsender.c

Lines changed: 33 additions & 17 deletions
@@ -214,6 +214,7 @@ typedef struct
 	int			write_head;
 	int			read_heads[NUM_SYNC_REP_WAIT_MODE];
 	WalTimeSample last_read[NUM_SYNC_REP_WAIT_MODE];
+	WalTimeSample overflowed[NUM_SYNC_REP_WAIT_MODE];
 } LagTracker;
 
 static LagTracker *lag_tracker;
@@ -3572,7 +3573,6 @@ WalSndKeepaliveIfNecessary(void)
 static void
 LagTrackerWrite(XLogRecPtr lsn, TimestampTz local_flush_time)
 {
-	bool		buffer_full;
 	int			new_write_head;
 	int			i;
 
@@ -3594,25 +3594,19 @@ LagTrackerWrite(XLogRecPtr lsn, TimestampTz local_flush_time)
 	 * of space.
 	 */
 	new_write_head = (lag_tracker->write_head + 1) % LAG_TRACKER_BUFFER_SIZE;
-	buffer_full = false;
 	for (i = 0; i < NUM_SYNC_REP_WAIT_MODE; ++i)
 	{
+		/*
+		 * If the buffer is full, move the slowest reader to a separate
+		 * overflow entry and free its space in the buffer so the write head
+		 * can advance.
+		 */
 		if (new_write_head == lag_tracker->read_heads[i])
-			buffer_full = true;
-	}
-
-	/*
-	 * If the buffer is full, for now we just rewind by one slot and overwrite
-	 * the last sample, as a simple (if somewhat uneven) way to lower the
-	 * sampling rate.  There may be better adaptive compaction algorithms.
-	 */
-	if (buffer_full)
-	{
-		new_write_head = lag_tracker->write_head;
-		if (lag_tracker->write_head > 0)
-			lag_tracker->write_head--;
-		else
-			lag_tracker->write_head = LAG_TRACKER_BUFFER_SIZE - 1;
+		{
+			lag_tracker->overflowed[i] =
+				lag_tracker->buffer[lag_tracker->read_heads[i]];
+			lag_tracker->read_heads[i] = -1;
+		}
 	}
 
 	/* Store a sample at the current write head position. */
@@ -3639,6 +3633,28 @@ LagTrackerRead(int head, XLogRecPtr lsn, TimestampTz now)
 {
 	TimestampTz time = 0;
 
+	/*
+	 * If 'lsn' has not passed the WAL position stored in the overflow entry,
+	 * return the elapsed time (in microseconds) since the saved local flush
+	 * time.  If the flush time is in the future (due to clock drift), return
+	 * -1 to treat as no valid sample.
+	 *
+	 * Otherwise, switch back to using the buffer to control the read head and
+	 * compute the elapsed time.  The read head is then reset to point to the
+	 * oldest entry in the buffer.
+	 */
+	if (lag_tracker->read_heads[head] == -1)
+	{
+		if (lag_tracker->overflowed[head].lsn > lsn)
+			return (now >= lag_tracker->overflowed[head].time) ?
+				now - lag_tracker->overflowed[head].time : -1;
+
+		time = lag_tracker->overflowed[head].time;
+		lag_tracker->last_read[head] = lag_tracker->overflowed[head];
+		lag_tracker->read_heads[head] =
+			(lag_tracker->write_head + 1) % LAG_TRACKER_BUFFER_SIZE;
+	}
+
 	/* Read all unread samples up to this LSN or end of buffer. */
 	while (lag_tracker->read_heads[head] != lag_tracker->write_head &&
 		   lag_tracker->buffer[lag_tracker->read_heads[head]].lsn <= lsn)
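For readers who want to experiment with the idea outside of walsender.c, the sketch below is a minimal, self-contained C program modelling the same technique: a cyclic buffer of (LSN, timestamp) samples shared by several read heads. When the write head would collide with a read head, that reader's pending sample is parked in a per-reader overflow slot and the head is set to -1 so writing can continue; reads are answered from the overflow slot until the reported LSN passes it, after which the reader rejoins the buffer at its oldest readable entry. All names here (SampleBuffer, sb_write, sb_read, BUF_SIZE, and so on) are invented for illustration; this is not the PostgreSQL code, and the buffer size and reader count are deliberately tiny so the overflow path triggers quickly.

/*
 * Minimal model of the overflow handling added by this commit (names are
 * invented; this is not the walsender.c code).  A cyclic buffer of
 * (lsn, time) samples is shared by several readers.  When the write head
 * catches a read head, that reader's oldest sample is parked in an overflow
 * slot and the head is set to -1, so writes never stall; reads are served
 * from the overflow slot until the reported LSN passes it.
 */
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 8      /* deliberately tiny so the buffer fills quickly */
#define NUM_READERS 2   /* e.g. one tracker for "write", one for "flush" */

typedef struct
{
    uint64_t lsn;       /* WAL position the sample refers to */
    int64_t  time;      /* local time the sample was taken, in microseconds */
} Sample;

typedef struct
{
    Sample buffer[BUF_SIZE];
    int    write_head;
    int    read_heads[NUM_READERS];  /* -1 means "use the overflow slot" */
    Sample overflowed[NUM_READERS];
} SampleBuffer;

/* Record a new sample; spill any reader the write head would run into. */
static void
sb_write(SampleBuffer *sb, uint64_t lsn, int64_t now)
{
    int new_write_head = (sb->write_head + 1) % BUF_SIZE;

    for (int i = 0; i < NUM_READERS; i++)
    {
        if (new_write_head == sb->read_heads[i])
        {
            /* Park the reader's oldest pending sample and free its slot. */
            sb->overflowed[i] = sb->buffer[sb->read_heads[i]];
            sb->read_heads[i] = -1;
        }
    }

    sb->buffer[sb->write_head].lsn = lsn;
    sb->buffer[sb->write_head].time = now;
    sb->write_head = new_write_head;
}

/* Elapsed usec for reader 'head' once 'lsn' is reported, or -1 if unknown. */
static int64_t
sb_read(SampleBuffer *sb, int head, uint64_t lsn, int64_t now)
{
    int64_t time = -1;

    if (sb->read_heads[head] == -1)
    {
        /* Still behind the parked sample: answer from the overflow slot. */
        if (sb->overflowed[head].lsn > lsn)
            return (now >= sb->overflowed[head].time) ?
                now - sb->overflowed[head].time : -1;

        /* Caught up: rejoin the buffer at its oldest readable entry. */
        time = sb->overflowed[head].time;
        sb->read_heads[head] = (sb->write_head + 1) % BUF_SIZE;
    }

    /* Consume every buffered sample already covered by 'lsn'. */
    while (sb->read_heads[head] != sb->write_head &&
           sb->buffer[sb->read_heads[head]].lsn <= lsn)
    {
        time = sb->buffer[sb->read_heads[head]].time;
        sb->read_heads[head] = (sb->read_heads[head] + 1) % BUF_SIZE;
    }

    return (time >= 0 && now >= time) ? now - time : -1;
}

int
main(void)
{
    SampleBuffer sb = {.write_head = 0, .read_heads = {0, 0}};

    /* Sample 20 times while no reader consumes anything: the buffer fills,
     * both readers get spilled to their overflow slots, writes never stall. */
    for (uint64_t lsn = 1; lsn <= 20; lsn++)
        sb_write(&sb, lsn, (int64_t) (lsn * 1000));

    /* A reader stalled at lsn 3 still reports a growing lag value. */
    printf("stalled reader lag:   %lld usec\n",
           (long long) sb_read(&sb, 0, 3, 25000));
    printf("caught-up reader lag: %lld usec\n",
           (long long) sb_read(&sb, 1, 20, 25000));
    return 0;
}

Like the patch, the sketch reconnects a reader at write_head + 1 rather than write_head, presumably because read_head == write_head is the buffer's "empty" condition; pointing the reader at write_head itself would make it skip the samples still held in the buffer.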
