Releases: timescale/timescaledb
2.27.0 (2026-05-12)
This release contains performance improvements and bug fixes since the 2.26.4 release. We recommend that you upgrade at the next available opportunity.
Download valid Windows binaries from:
Highlighted features in TimescaleDB v2.27.0
- The Hypercore engine now supports a vectorized implementation of filters by evaluating them inline through the standard Postgres function path. This expands the set of queries (including continuous aggregate refreshes) that can take the faster path through the columnstore, yielding speedups ranging from 30% up to 2x in benchmarks.
- `UPDATE` and `DELETE` statements with equality predicates can now use bloom filters to skip decompressing batches whose compressed rows can't match. When multiple bloom filters apply, they are evaluated in decreasing order of column count (most selective first), and EXPLAIN now reports filtering activity via the new "Compressed batches filtered" and "Batches filtered after decompression" counters. Query performance increases in some cases by up to 160 times.
- `UPSERT` queries can now leverage bloom filters (including composite ones) to skip decompressing batches when the arbiter values are guaranteed not to be present, with the most-selective filter chosen automatically when multiple apply. EXPLAIN output adds new statistics (batches checked by bloom, batches pruned by bloom, batches without bloom, and bloom false positives) for visibility into pruning effectiveness.
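The new counters above appear in ordinary `EXPLAIN (ANALYZE)` output for the affected plan nodes. As a hedged sketch (the table, column, and value below are hypothetical, invented only for illustration):

```sql
-- Hypothetical example: 'metrics' and 'device_id' are invented names.
-- The bloom filter counters appear in the per-chunk plan nodes.
EXPLAIN (ANALYZE, COSTS OFF)
DELETE FROM metrics WHERE device_id = 'dev-42';
-- Look for lines such as:
--   Compressed batches filtered: ...
--   Batches filtered after decompression: ...
```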
Upcoming PostgreSQL 15 EOL announcement
As a reminder, the upcoming TimescaleDB release in June 2026 will officially be the last version with support for PostgreSQL 15. This deprecation was initially announced in the v2.23.0 changelog on October 29, 2025, to provide users ample time to prepare. To ensure uninterrupted access to new features, bugfixes and performance enhancements, all instances must be upgraded to PostgreSQL 16 or greater.
Backward-Incompatible Changes
- #9579 The bloom filter sparse indexes on compressed `int2` columns could lead to `SELECT` queries not returning the rows that actually match the `WHERE` condition. The upgrade is blocked for the affected databases, and the incorrect indexes have to be dropped manually before the upgrade.
- This release introduces a new naming convention for composite bloom filter metadata. While this change will not disrupt query processing, v2.27 cannot automatically utilize composite bloom filters generated in v2.26. To convert your existing v2.26 composite bloom filters, the legacy metadata columns must be renamed. This is a lightweight, catalog-only operation requiring zero data recompression, which can be done with this migration script.
Features
- #8868 Use `PG_MODULE_MAGIC_EXT` for PG18
- #8967 Rewriting queries with continuous aggregates exactly matching query aggregation
- #9192 Push down scalar array operations into the columnar metadata scan by transforming them into an `OR`/`AND` clause.
- #9355 Defer `segmentby` default for direct compress
- #9374 Use bloom filters to eliminate decompression of unrelated compressed batches during `UPSERT`s.
- #9396 Analyze and get `segmentby` during direct compress
- #9398 Fix chunk exclusion for `IN`/`ANY` on open (time) dimensions
- #9399 Use bloom filters to reduce decompression during `UPDATE`/`DELETE` commands.
- #9403 Set default `segmentby` during direct compress flush
- #9437 Allow running compression as part of refresh policy for compressed continuous aggregates
- #9443 Enable vectorized aggregation in some cases when the `WHERE` clause contains filters not handled through the "Vectorized Filters" facility. This includes e.g. filters on `time_bucket()`.
- #9458 Remove `_timescaledb_functions.repair_relation_acls`
- #9475 Calculate hashes for bloom filter predicates at planning time.
- #9504 Allow `ALTER TABLE RESET` on materialization hypertables
- #9521 Add support for reporting index creation progress
- #9559 Notice on compression settings change
- #9569 For nullable `orderby` columns do segmentwise decompress-compress instead of segmentwise recompress.
- #9583 Drop existing sparse indexes when dropping columns
- #9648 Support `ENABLE`/`DISABLE TRIGGER` on hypertables
- #9702 Allow Batch Sorted Merge for unordered chunks with no `segmentby` or when all `segmentby` columns are pinned to a `Const`
Bugfixes
- #9363 Change compression job status when chunks could be compressed
- #9413 Fix incorrect decompress markers on full batch delete
- #9414 Fix `NULL` compression handling in `estimate_uncompressed_size`
- #9417 Fix segfault in `bloom1_contains`
- #9479 Disallow sub-day offset for `time_bucket` on `Date`
- #9482 Forbid Batch Sort Merge on nullable `orderby` columns
- #9490 Disallow negative interval as `chunk_interval`
- #9500 Fix off-by-one error when building object name
- #9519 Remove self-referential `FOREIGN KEY` constraints from catalog
- #9561 Simplify job history retention by replacing binary search and temp table
- #9590 Fix policy skipping uncompressed chunks
- #9596 Remove unused `process_hypertable_invalidations` policy code
- #9604 Remove dead `post_parse_analyze_hook` capture in loader
- #9610 Fix use-after-free crash in `cache_destroy` during transaction abort
- #9632 Preserve chunk settings during recompress
- #9640 Fix `NULL` `datumCopy` crash in `segmentby` analysis
- #9680 Fix segfault in direct compress insert on hypertable with dropped column
- #9692 Fix internal "invalid perminfoindex 0 in RTE" error on `MERGE NOT MATCHED INSERT` into a hypertable
- #9705 Avoid double `TOAST` delete when `DELETE`-after-compression is enabled
- #9705 Only freeze compressed rows when truncating uncompressed chunk
- #9706 Use `bigint` in `estimate_uncompressed_size` calculations
- #9709 Reject mismatched element type in `bool`/`uuid` decompression
- #9710 Return `bigint` from `compressed_data_column_size`
- #9711 Fix registration row leak when continuous aggregate refresh fails
- #9697 Improve `pathkey` handling for compressed sub-paths during sort transformation
- #9743 Fix the composite bloom metadata column naming scheme
- #9767 Skip dropped chunks when trying to remove `ts_cagg_invalidation_trigger`
- #9747 Reject inheriting from a hypertable
- #9744 Use a fixed call string for the telemetry job in `ts_stat_statements` recording
- [#9736](#9736...
2.26.4 (2026-04-28)
This release contains bug fixes since the 2.26.3 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #9360 Sanitize `DT_NOBEGIN` next_start to recover jobs stuck after primary failover
- #9515 Fix `now()` constification for continuous aggregate queries
- #9550 Fix out of memory when propagating `ALTER TABLE` to many chunks
- #9605 Fix `InstrStartNode` called twice in a row
- #9607 Fix use-after-free of `PlaceHolderVar.phrels` in cached ChunkAppend plans
- #9612 Fix `PlaceHolderVar` error in runtime chunk exclusion
- #9614 Remove stale hypertable entries during upgrade
- #9615 Fix segfault with transition tables after column drop
- #9616 Use `DROP CASCADE` for trigger removal
- #9623 Error when querying compressed chunks under Apache license
- #9625 Make `timescaledb_post_restore()` reliably restart background workers in a single call
- #9639 Fix lost orderby sparse index
- #9646 Replace `ERRCODE_INTERNAL_ERROR` on user-reachable error paths
- #9652 Add error on missing custom job function in `ts_bgw_job_get_funci`
- #9655 Fix data corruption when merging chunks with different compression settings
- #9654 Fix `sort_transform` crash with hypertable on nullable side of outer join
- #9656 Fix concurrent merge of compressed chunks dropping the new heap
- #9641 Fix `COPY` path with transition tables after column drop
- #9660 Fix incremental continuous aggregate refresh so that `extend_last_bucket` only applies to the boundary batch
- #9674 Fix segmentby crash in cagg invalidation tracking
Thanks
- @GetsuDer and @WeiJie-JL for reporting an error with timescaledb and extensions using Explain
- @igor2x for reporting a problem when trying to query compressed data with the Apache license
- @ivaaaan for reporting an issue with constraint pushdown in continuous aggregate queries
- @patstrom for reporting a segfault with transition table triggers after dropping a column
- @patstrom for reporting an out-of-memory error when dropping constraints
- @pcayen for reporting an issue with GROUP BY ROLLUP on views over hypertables
2.26.3 (2026-04-14)
This release contains bug fixes since the 2.26.2 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #9511 Fix `alter_job` failing for retention policy with `drop_created_before` argument
- #9557 Clean up orphaned `compression_chunk_size` entries during the extension upgrade
- #9551 Fix resource leaks on error paths during a continuous aggregate refresh
- #9563 Fix gapfill out-of-order bucket creation during DST shift
- #9571 Fix concurrent refreshes of continuous aggregates
Thanks
- @sebastian-ederer for reporting an issue with alter_job and drop_created_before
- @petergledhillinclusive for reporting the DST shift issue with time_bucket_gapfill
- @GTan615 for reporting the data duplication issues observed during overlapping cagg refreshes
2.26.2 (2026-04-07)
This release contains bug fixes since the 2.26.1 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #9460 Fix WAL record tracking in `EXPLAIN` for direct compress
- #9485 Fix use-after-free of invalidation in `tsl_compressor_free`
- #9486 Fix use-after-free in job owner validation
- #9487 Fix use-after-free in `reorder_chunk`
- #9392 Fix wrong result when performing chunk exclusion by a mutable expression
- #9510 Fix chunk skipping with dropped columns
- #9522 Fix `GROUP BY ROLLUP` on compressed continuous aggregates
Thanks
- @pcayen for reporting an issue with GROUP BY ROLLUP
- @PiotrCiechomski for reporting the wrong result with chunk exclusion by a mutable expression
2.26.1 (2026-03-30)
This release contains bug fixes since the 2.26.0 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #9455 Fix memory leak in ColumnarScan
Exceptionally, this release's Windows binaries are available here:
2.26.0 (2026-03-24)
This release contains performance improvements and bug fixes since the 2.25.2 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.26.0
- The vectorized aggregation engine now evaluates PostgreSQL functions directly on columnar arguments and stores the results in a columnar format to preserve the high-speed execution pipeline. For analytical queries that leverage functions like `time_bucket()` in grouping or aggregation expressions, the function is evaluated natively without falling back to standard row-based processing. This enhancement ensures that the remainder of the query can seamlessly continue using the highly efficient columnar pipeline, yielding performance improvements of up to 3.5x.
- The query execution engine now supports composite bloom filters for `SELECT` and `UPSERT` operations, pushing down multi-column predicates directly to compressed table scans. This optimization bypasses costly batch decompression by automatically selecting the most restrictive bloom filter to quickly verify whether target values are present, making queries over two times faster when a composite bloom filter is used. Additionally, query profiling now includes detailed `EXPLAIN` statistics to monitor batch pruning and false-positive rates.
- The custom node `ColumnarIndexScan` adjusts the query plan to fetch values from the sparse minmax indexes, improving query performance on the columnstore by up to 70x. For analytical queries that use functions like `COUNT`, `MIN`, `MAX`, `FIRST` (limited), and `LAST` (limited), the sparse index is read instead of decompressing the batch.
Features
- #9104 Support `min(text)`, `max(text)` for C collation in columnar aggregation pipeline
- #9117 Support functions like `time_bucket` in the columnar aggregation and grouping pipeline.
- #9142 Remove column `dropped` from `_timescaledb_catalog.chunk`
- #9238 Support non-partial aggregates with vectorized aggregation
- #9253 Support `VectorAgg` in subqueries and CTEs
- #9266 Add support for `HAVING` to vectorized aggregation
- #9267 Enable `ColumnarIndexScan` custom scan
- #9312 Remove advisory locks from bgw jobs and add graceful cancellation
- #8983 Add GUC for default chunk time interval
- #9334 Fix out-of-range timestamp error in WHERE clauses
- #9368 Enable runtime chunk exclusion on inner side of nested loop join
- #9372 Push down composite bloom filter checks to `SELECT` execution
- #9374 Use bloom filters to eliminate decompression of unrelated compressed batches during `UPSERT` statements
- #9382 Fix chunk creation failure after replica identity invalidation
- #9398 Fix chunk exclusion for `IN`/`ANY` on open (time) dimensions
Bugfixes
- #9401 Fix forced refresh not consuming invalidations
- #7629 Forbid non-constant timezone parameter in `time_bucket_gapfill`
- #9344 Wrong result or crash on cross-type comparison of partitioning column
- #9356 Potential crash when using a hypertable with partial compression or space partitioning in a nested loop join
- #9376 Allow `CREATE EXTENSION` after drop in the same session
- #9378 Fix foreign key constraint failure when inserting into a hypertable referencing a foreign key
- #9381 Data loss with direct compress with client-ordered data in an `INSERT SELECT` from a compressed hypertable
- #9413 Fix incorrect decompress markers on full batch delete
- #9414 Fix `NULL` compression handling in `estimate_uncompressed_size`
- #9417 Fix segfault in `bloom1_contains`
GUCs
- `default_chunk_time_interval`: Default chunk time interval for new hypertables. This is an expert configuration; please do not alter it unless recommended by Tiger Data.
- `enable_composite_bloom_indexes`: Enable creation of composite bloom indexes on compressed chunks. Default: `true`
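As a minimal sketch, the composite bloom index GUC can be toggled per session, assuming the standard `timescaledb.` prefix used by extension GUCs:

```sql
-- Sketch: disable composite bloom index creation for the current session.
-- Assumes the usual timescaledb. GUC prefix; the default is true.
SET timescaledb.enable_composite_bloom_indexes = false;
SHOW timescaledb.enable_composite_bloom_indexes;
```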
Thanks
- @bronzinni for reporting an issue with foreign keys on hypertables
- @janpio for reporting an issue with CREATE EXTENSION after dropping and recreating schema
- @leppaott for reporting a deadlock when deleting jobs
2.25.2 (2026-03-03)
This release contains performance improvements and bug fixes since the 2.25.1 release and a fix for a security vulnerability (#9331). You can check the security advisory for more information on the vulnerability and the platforms that are affected. We recommend that you upgrade as soon as possible.
Bugfixes
- #9276 Fix NULL and DEFAULT handling in uniqueness check on compressed chunks
- #9277 Fix SSL-related build errors
- #9279 Fix EXPLAIN VERBOSE corrupting targetlist of cached ModifyHypertable plans
- #9281 Fix real-time continuous aggregates on UUID hypertables
- #9283 Fix plan-time error when using enum in orderby compression setting
- #9290 Propagate ALTER OWNER TO to policy jobs
- #9292 Fix continuous aggregate column rename
- #9293 Fix time_bucket_gapfill inside LATERAL subqueries
- #9294 Fix DELETE and UPDATE with WHERE EXISTS on hypertables
- #9303 Fix segfault in continuous aggregate creation on Postgres 18
- #9308 Fix continuous aggregate offset/origin not applied in watermark and refresh window calculations
- #9314 Fix generated columns always NULL in compressed chunks
- #9321 Fix segfault when using OLD/NEW refs in RETURNING clause on Postgres 18
- #9324 Potential violation of a foreign key constraint referencing a hypertable caused by concurrent DELETE of the key record
- #9327 Fix handling of generated columns with NOT NULL domain type
- #9331 Ensure search_path is set before anything else in SQL scripts
- #9339 Fix segmentwise recompression clearing unordered flag
Thanks
- @CaptainCuddleCube for reporting an issue with time_bucket_gapfill and LATERAL subqueries
- @JacobBrejnbjerg for reporting an issue with generated columns in compressed chunks
- @Kusumoto for reporting an issue with continuous aggregates on hypertables with UUID columns
- @arfathyahiya for reporting an issue with renaming columns in continuous aggregates
- @desertmark for reporting an issue with DELETE/UPDATE and subqueries
- @flaviofernandes004 for reporting an issue with RETURNING clause and references to OLD/NEW
- @tureba for fixing SSL-related build errors
2.25.1 (2026-02-17)
This release contains performance improvements and bug fixes since the 2.25.0 release. We recommend that you upgrade at the next available opportunity.
Bugfixes
- #9215 Add missing handling for em_parent to sort_transform
- #9223 Clean up orphaned entries in continuous aggregate invalidation logs
- #9226 Fix invalidation and batching issues for variable bucket continuous aggregates.
- #9256 Error "record type has no extended hash function" on some queries using a sparse bloom filter index on a column of composite type.
- #9257 Handle type coercion for metadata column equivalence members
Thanks
- @emapple for reporting a crash in a query with nested joins and subqueries
2.25.0 (2026-01-29)
This release contains performance improvements and bug fixes since the 2.24.0 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.25.0
This release features multiple improvements for continuous aggregates on the columnstore:
- Faster refreshes: You can now utilize direct compress during materialized view refreshes, resulting in higher throughput and reduced I/O usage.
- Efficiency: The enablement of delete optimizations significantly lowers system resource requirements.
- Smaller transactions: The adjusted default of 10 for `buckets_per_batch` reduces transaction sizes, requiring less WAL holding time.
- Faster queries: Smarter defaults for `segmentby` and `orderby` yield improved query performance and a better compression ratio on the columnstore.
Sunsetting announcements
- This release removes the WAL-based invalidation of continuous aggregates. This feature was introduced in 2.22.0 as tech preview to use logical decoding for building the invalidation logs. The feature was designed for high ingest workloads, reducing the write amplification. With the upcoming stream of improvements to continuous aggregates, this feature was deprioritized and removed.
- The old continuous aggregate format, deprecated in version 2.10.0, has been fully removed from TimescaleDB in this release. Users still on the old format should read the migration documentation to migrate to the new format. Users of Tiger Cloud have already been automatically migrated.
Features
- #8777 Enable direct compress on continuous aggregate refresh using new GUC `timescaledb.enable_direct_compress_on_cagg_refresh`
- #9031 Change default `buckets_per_batch` on continuous aggregate refresh policy to `10`
- #9032 Add in-memory recompression for unordered chunks
- #9017 Move `bgw_job` table into schema `_timescaledb_catalog`
- #9033 Add `rebuild_columnstore` procedure
- #9038 Change default configuration for compressed continuous aggregates
- #9042 Enable batch sorted merge on unordered compressed chunks
- #9046 Allow non-timescaledb namespace `SET` option for continuous aggregates
- #9059 Allow configuring `work_mem` for background worker jobs
- #9074 Add function to estimate uncompressed size of compressed chunk
- #9085 Don't register timescaledb-tune specific GUCs
- #9088 Add `ColumnarIndexScan` custom node
- #9090 Support direct batch delete on hypertables with continuous aggregates
- #9094 Enable the columnar pipeline for grouping without aggregation to speed up queries of the form `select column from table group by column`.
- #9103 Support `FIRST` and `LAST` in `ColumnarIndexScan`
- #9108 Support multiple aggregates in `ColumnarIndexScan`
- #9111 Allow recompression with orderby/index changes
- #9113 Use `enable_columnarscan` to control columnarscan
- #9127 Remove primary dimension constraints from fully covered chunks
- #8710 Add SQL function to fetch continuous aggregate grouping columns
- #9133 Allow pushing down sort into columnar unordered chunks when it is possible
- #8229 Removed `time_bucket_ng` function
- #8859 Remove support for partial continuous aggregate format
- #9022 Remove WAL based invalidation
- #9016 Remove `_timescaledb_debug` schema
- #9030 Add new chunks to hypertable publication
Bug fixes
- #8706 Fix planning performance regression on Postgres 16 and later on some join queries.
- #8986 Add pathkey replacement for `ColumnarScanPath`
- #8989 Ensure no XID is assigned during chunk query
- #8990 Fix `EquivalenceClass` index update for `RelOptInfo`
- #9007 Add validation for compression index key limits
- #9024 Recompress some chunks on `VACUUM FULL`
- #9045 Fix missing UUID check in compression policy
- #9056 Fix split chunk `relfrozenxid`
- #9058 Fix missing chunk column stats bug
- #9061 Fix update race with background worker jobs
- #9069 Fix applying multikey sort for columnstore when one numeric key is pinned to a Const of different type
- #9102 Support retention policies on UUIDv7-partitioned hypertables
- #9120 Fix for pre Postgres 17, where a `DELETE` from a partially compressed chunk may miss records if `BitmapHeapScan` is being used
- #9121 Allow any immutable constant expressions as default values for compressed columns
- #9121 Fix a potential "unexpected column type 'bool'" error for compressed bool columns with missing value
- #9144 Fix handling implicit constraints in `ALTER TABLE`
- #9155 Fix column generation during compressed chunk insert
- #9129 Fix `time_bucket` with timezone during DST
- #9177 Add alias for `bgw_job`
- #9176 Handle `NULL` values in continuous aggregate invalidation more gracefully
- #9175 Do not remove dimension constraints for OSM chunks
GUCs
- `enable_columnarindexscan`: Enable returning results directly from compression metadata without decompression. This feature is experimental and in development towards a GA release. Not for production environments. Default: `false`
- `enable_direct_compress_on_cagg_refresh`: Enable experimental support for direct compression...
2.24.0 (2025-12-03)
This release contains performance improvements and bug fixes since the 2.23.1 release. We recommend that you upgrade at the next available opportunity.
Highlighted features in TimescaleDB v2.24.0
- Direct Compress just got smarter and faster: it now works seamlessly with hypertables generating continuous aggregates. Invalidation ranges are computed directly in-memory based on the ingested batches and written efficiently at transaction commit. This change reduces the IO footprint drastically by removing the write amplification of the invalidation logs.
- Continuous aggregates now speak UUIDv7: hypertables partitioned by UUIDv7 are fully supported through an enhanced `time_bucket` that accepts UUIDv7 values and returns precise, timezone-aware timestamps, unlocking powerful time-series analytics on modern UUID-driven table schemas.
- Lightning-fast recompression: the new `recompress := true` option on the `convert_to_columnstore` API enables pure in-memory recompression, delivering a 4–5× speed boost over the previous disk-based process.
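A hedged sketch of the new option is below; the chunk name is hypothetical, and `=>` is the plain-SQL spelling of the named-argument syntax quoted above:

```sql
-- Sketch: recompress an already-converted chunk purely in memory.
-- '_timescaledb_internal._hyper_1_1_chunk' is a hypothetical chunk name;
-- list real chunks with SELECT show_chunks('<your_hypertable>').
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk',
                            recompress => true);
```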
ARM support for bloom filters
The sparse bloom filter indexes will stop working after upgrade to 2.24. If you are affected by this problem, the warning "bloom filter sparse indexes require action to re-enable" will appear in the Postgres log during upgrade.
In versions before 2.24, the hashing scheme of the bloom filter sparse indexes used to depend on the build options of the TimescaleDB executables. These options are set by the package publishers and might differ between different package sources or even versions. After upgrading to a version with different options, the queries that use the bloom filter lookups could erroneously stop returning the rows that should in fact match the query conditions. The 2.24 release fixes this by using distinct column names for each hashing scheme.
The bloom filter sparse indexes will be disabled on the compressed chunks created before upgrading to 2.24. To re-enable them, you have to decompress and then compress the affected chunks.
If you were running the official APT package on AMD64 architecture, the hashing scheme did not change, and it is safe to use the existing bloom filter sparse indexes. To enable this, set the GUC `timescaledb.read_legacy_bloom1_v1 = on` in the server configuration.
The chunks compressed after upgrade to 2.24 will use the new index format, and the bloom filter sparse indexes will continue working as usual for these chunks without any intervention.
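As a sketch of the two remedies described above (the chunk name is hypothetical; `decompress_chunk`/`compress_chunk` are the standard chunk APIs, used here under the assumption they apply to your setup):

```sql
-- Remedy 1: rebuild the sparse indexes of a pre-2.24 chunk by cycling it
-- through decompression and compression. Chunk name is hypothetical.
SELECT decompress_chunk('_timescaledb_internal._hyper_1_1_chunk');
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');

-- Remedy 2 (official AMD64 APT builds only): keep reading the legacy
-- bloom filter index format instead of rebuilding.
ALTER SYSTEM SET timescaledb.read_legacy_bloom1_v1 = on;
SELECT pg_reload_conf();
```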
For more details, refer to the pull request #8761.
Deprecations
- The next release of TimescaleDB will remove the deprecated partial continuous aggregates format. The new format was introduced in `2.7.0` and provides significant improvements in terms of performance and storage efficiency. Please use `cagg_migrate(<CONTINUOUS_AGGREGATE_NAME>)` to migrate to the new format. Tiger Cloud users are migrated automatically.
- In future releases the deprecated view `timescaledb_information.compression_settings` will be removed. Please use `timescaledb_information.hypertable_columnstore_settings` as a replacement.
- The experimental view `timescaledb_experimental.policies` and the adjacent experimental functions `add_policies`, `alter_policies`, `show_policies`, `remove_policies`, and `remove_all_policies` to manage continuous aggregates will be removed in an upcoming release. For replacements, please use the Jobs API.
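The migration call mentioned above can be sketched as follows; the continuous aggregate name is hypothetical:

```sql
-- Sketch: migrate a continuous aggregate off the deprecated partial format.
-- 'daily_metrics' is a hypothetical continuous aggregate name.
CALL cagg_migrate('daily_metrics');
```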
Backward-Incompatible Changes
- #8761 Fix matching rows in queries using the bloom filter sparse indexes potentially not returned after extension upgrade. The version of the bloom filter sparse indexes is changed. The existing indexes will stop working and will require action to re-enable. See the section above for details.
Features
- #8465 Speed up filters like `x = any(array[...])` using bloom filter sparse indexes.
- #8569 In-memory recompression
- #8754 Add concurrent mode for merging chunks
- #8786 Display chunks view range as timestamps for UUIDv7
- #8819 Refactor chunk compression logic
- #8840 Allow `ALTER COLUMN TYPE` when compression is enabled but no compressed chunks exist
- #8908 Add time bucketing support for UUIDv7
- #8909 Support direct compress on hypertables with continuous aggregates
- #8939 Support continuous aggregates on UUIDv7-partitioned hypertables
- #8959 Cap continuous aggregate invalidation interval range at chunk boundary
- #8975 Exclude date/time columns from default segmentby
- #8993 Add GUC for in-memory recompression
Bugfixes
- #8839 Improve `_timescaledb_functions.cagg_watermark` error handling
- #8853 Change log level of continuous aggregate refresh messages to `DEBUG1`
- #8933 Potential crash or seemingly random errors when querying the compressed chunks created on releases before 2.15 and using the minmax sparse indexes.
- #8942 Fix lateral join handling for compressed chunks
- #8958 Fix `if_not_exists` behaviour when adding refresh policy
- #8969 Gracefully handle missing job stat in background worker
- #8988 Don't ignore additional filters on same column when building scankeys
GUCs
- `direct_compress_copy_tuple_sort_limit`: Number of tuples that can be sorted at once in a `COPY` operation.
- `direct_compress_insert_tuple_sort_limit`: Number of tuples that can be sorted at once in an `INSERT` operation.
- `read_legacy_bloom1_v1`: Enable reading the legacy `bloom1` version 1 sparse indexes for `SELECT` queries.
- `enable_in_memory_recompression`: Enable in-memory recompression functionality.
Thanks
- @bezpechno for implementing `ALTER COLUMN TYPE` for hypertables with columnstore when no compressed chunks exist