
Gracefully handle missing jobstat in background worker #8969

Merged
svenklemm merged 1 commit into main from sven/bgw_missing_jobstat
Nov 24, 2025

Conversation

@svenklemm
Member

@svenklemm svenklemm commented Nov 23, 2025

A missing job stat entry would lead to an assertion failure in debug
builds and a segfault in production builds.

Fixes #8037

@github-actions

@antekresic, @dbeck: please review this pull request.

Powered by pull-review

@svenklemm svenklemm added this to the v2.24.0 milestone Nov 23, 2025
@svenklemm svenklemm added the Background Worker The background worker subsystem, including the scheduler label Nov 23, 2025
This would lead to an assertion in debug builds and a segfault in
production builds.
@svenklemm svenklemm force-pushed the sven/bgw_missing_jobstat branch from cc2d7ed to a585e12 Compare November 23, 2025 05:55
@codecov

codecov Bot commented Nov 23, 2025

Codecov Report

❌ Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 82.61%. Comparing base (fb5a627) to head (a585e12).
⚠️ Report is 4 commits behind head on main.

Files with missing lines Patch % Lines
src/bgw/scheduler.c 0.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8969      +/-   ##
==========================================
+ Coverage   82.52%   82.61%   +0.09%     
==========================================
  Files         249      249              
  Lines       48499    48484      -15     
  Branches    12386    12383       -3     
==========================================
+ Hits        40023    40055      +32     
- Misses       3515     3527      +12     
+ Partials     4961     4902      -59     

☔ View full report in Codecov by Sentry.

@svenklemm svenklemm force-pushed the sven/bgw_missing_jobstat branch 4 times, most recently from 698867e to a585e12 Compare November 23, 2025 07:06
@svenklemm svenklemm merged commit bea804f into main Nov 24, 2025
78 of 84 checks passed
@svenklemm svenklemm deleted the sven/bgw_missing_jobstat branch November 24, 2025 09:39
@philkra philkra mentioned this pull request Nov 25, 2025
philkra added a commit that referenced this pull request Dec 3, 2025
## 2.24.0 (2025-12-03)

This release contains performance improvements and bug fixes since the
2.23.1 release. We recommend that you upgrade at the next available
opportunity.

**Highlighted features in TimescaleDB v2.24.0**
* **Direct Compress just got smarter and faster**: it now works
seamlessly with hypertables that have continuous aggregates.
Invalidation ranges are computed directly in memory from the
ingested batches and written efficiently at transaction commit. This
drastically reduces the IO footprint by removing the write
amplification of the invalidation logs.
* **Continuous aggregates now speak UUIDv7**: hypertables partitioned by
UUIDv7 are fully supported through an enhanced `time_bucket` that
accepts UUIDv7 values and returns precise, timezone-aware timestamps —
unlocking powerful time-series analytics on modern UUID-driven table
schemas.
* **Lightning-fast recompression**: the new `recompress := true` option
on the `compress_chunk` API enables pure in-memory recompression,
delivering a **4–5× speed boost** over the previous disk-based process.
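The recompression highlight above can be sketched with a minimal SQL example; the chunk name is hypothetical (real chunk names can be listed with `show_chunks`), and the exact invocation may differ in your setup:

```sql
-- Hypothetical chunk name; substitute one returned by show_chunks(<hypertable>).
-- recompress := true requests the new in-memory recompression path (2.24+).
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk', recompress := true);
```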

**ARM support for bloom filters**
The [sparse bloom filter
indexes](https://www.tigerdata.com/blog/blocked-bloom-filters-speeding-up-point-lookups-in-tiger-postgres-native-columnstore)
will stop working after upgrading to 2.24. If you are affected by this
problem, the warning "bloom filter sparse indexes require action to
re-enable" will appear in the Postgres log during the upgrade.

In versions before 2.24, the hashing scheme of the bloom filter sparse
indexes used to depend on the build options of the TimescaleDB
executables. These options are set by the package publishers and might
differ between different package sources or even versions. After
upgrading to a version with different options, the queries that use the
bloom filter lookups could erroneously stop returning the rows that
should in fact match the query conditions. The 2.24 release fixes this
by using distinct column names for each hashing scheme.

The bloom filter sparse indexes will be disabled on the compressed
chunks created before upgrading to 2.24. To re-enable them, you have to
decompress and then compress the affected chunks.
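A minimal sketch of that re-enable step, assuming a hypothetical chunk name; in practice you would iterate over all affected chunks:

```sql
-- Decompress and then compress an affected pre-2.24 chunk so its
-- bloom filter sparse indexes are rebuilt under the new hashing scheme.
SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');
```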

If you were running the official APT package on AMD64 architecture, the
hashing scheme did not change, and it is safe to use the existing bloom
filter sparse indexes. To enable this, set the GUC
`timescaledb.read_legacy_bloom1_v1 = on` in the server configuration.
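A sketch of that configuration change, assuming superuser access; setting the parameter in `postgresql.conf` and reloading works equally well:

```sql
-- Only safe when the pre-2.24 hashing scheme is unchanged
-- (official AMD64 APT packages, per the note above).
ALTER SYSTEM SET timescaledb.read_legacy_bloom1_v1 = on;
SELECT pg_reload_conf();
```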

Chunks compressed after upgrading to 2.24 will use the new index
format, and their bloom filter sparse indexes will continue working as
usual without any intervention.

For more details, refer to the pull request
[#8761](#8761).

**Deprecations**
* The next release of TimescaleDB will remove the deprecated partial
continuous aggregates format. The new format was introduced in
[`2.7.0`](https://github.com/timescale/timescaledb/releases/tag/2.7.0)
and provides significant improvements in terms of performance and
storage efficiency. Please use
[`cagg_migrate(<CONTINUOUS_AGGREGATE_NAME>)`](https://www.tigerdata.com/docs/use-timescale/latest/continuous-aggregates/migrate)
to migrate to the new format. Tiger Cloud users are migrated
automatically.
* In future releases the deprecated view
`timescaledb_information.compression_settings` will be removed. Please
use
[`timescaledb_information.hypertable_columnstore_settings`](https://www.tigerdata.com/docs/api/latest/hypercore/hypertable_columnstore_settings)
as a replacement.
* The experimental view
[`timescaledb_experimental.policies`](https://www.tigerdata.com/docs/api/latest/informational-views/policies)
and the adjacent experimental functions
[`add_policies`](https://www.tigerdata.com/docs/api/latest/continuous-aggregates/add_policies),
[`alter_policies`](https://www.tigerdata.com/docs/api/latest/continuous-aggregates/alter_policies),
[`show_policies`](https://www.tigerdata.com/docs/api/latest/continuous-aggregates/show_policies),
[`remove_policies`](https://www.tigerdata.com/docs/api/latest/continuous-aggregates/remove_policies),
and
[`remove_all_policies`](https://www.tigerdata.com/docs/api/latest/continuous-aggregates/remove_all_policies)
to manage continuous aggregates will be removed in an upcoming release.
For replacements, please use the [Jobs
API](https://www.tigerdata.com/docs/api/latest/jobs-automation).
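For the partial-format deprecation above, the migration can be sketched as follows; the continuous aggregate name is hypothetical:

```sql
-- Migrate a continuous aggregate off the deprecated partial format
-- to the finalized format introduced in 2.7.0.
CALL cagg_migrate('conditions_summary');
```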

**Backward-Incompatible Changes**
* [#8761](#8761) Fix
matching rows in queries using the bloom filter sparse indexes
potentially not returned after extension upgrade. The version of the
bloom filter sparse indexes is changed. The existing indexes will stop
working and will require action to re-enable. See the section above for
details.

**Features**
* [#8465](#8465) Speed up
the filters like `x = any(array[...])` using bloom filter sparse
indexes.
* [#8569](#8569) In-memory
recompression
* [#8754](#8754) Add
concurrent mode for merging chunks
* [#8786](#8786) Display
chunks view range as timestamps for UUIDv7
* [#8819](#8819) Refactor
chunk compression logic
* [#8840](#8840) Allow
`ALTER COLUMN TYPE` when compression is enabled but no compressed chunks
exist
* [#8908](#8908) Add time
bucketing support for UUIDv7
* [#8909](#8909) Support
direct compress on hypertables with continuous aggregates
* [#8939](#8939) Support
continuous aggregates on UUIDv7-partitioned hypertables
* [#8959](#8959) Cap
continuous aggregate invalidation interval range at chunk boundary
* [#8975](#8975) Exclude
date/time columns from default segmentby
* [#8993](#8993) Add GUC
for in-memory recompression

**Bugfixes**
* [#8839](#8839) Improve
`_timescaledb_functions.cagg_watermark` error handling
* [#8853](#8853) Change log
level of continuous aggregate refresh messages to `DEBUG1`
* [#8933](#8933) Fix
potential crash or seemingly random errors when querying compressed
chunks created on releases before 2.15 that use the minmax sparse
indexes
* [#8942](#8942) Fix
lateral join handling for compressed chunks
* [#8958](#8958) Fix
`if_not_exists` behaviour when adding refresh policy
* [#8969](#8969) Gracefully
handle missing job stat in background worker
* [#8988](#8988) Don't
ignore additional filters on same column when building scankeys

**GUCs**
* `direct_compress_copy_tuple_sort_limit`: Number of tuples that can be
sorted at once in a `COPY` operation.
* `direct_compress_insert_tuple_sort_limit`: Number of tuples that can
be sorted at once in an `INSERT` operation.
* `read_legacy_bloom1_v1`: Enable reading the legacy `bloom1` version 1
sparse indexes for `SELECT` queries.
* `enable_in_memory_recompression`: Enable in-memory recompression
functionality.
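As a hedged illustration, the new GUCs might be exercised like this; the values are illustrative only, and whether each GUC accepts a session-level `SET` is an assumption:

```sql
-- Illustrative values; consult the documentation for defaults and limits.
SET timescaledb.enable_in_memory_recompression = on;
SET timescaledb.direct_compress_copy_tuple_sort_limit = 100000;
```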

**Thanks**
* @bezpechno for implementing `ALTER COLUMN TYPE` for hypertables with
columnstore when no compressed chunks exist

---------

Signed-off-by: Philip Krauss <35487337+philkra@users.noreply.github.com>
Co-authored-by: timescale-automation <123763385+github-actions[bot]@users.noreply.github.com>
Co-authored-by: philkra <philip@philipkrauss.at>
Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com>
Co-authored-by: Anastasiia Tovpeko <114177030+atovpeko@users.noreply.github.com>
@timescale-automation timescale-automation added the released-2.24.0 Released in 2.24.0 label Dec 3, 2025

Labels

Background Worker (the background worker subsystem, including the scheduler), backported-2.23.x, released-2.24.0 (released in 2.24.0)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Segfault in background worker

4 participants