When you run CREATE TABLE ... WITH ..., the default partitioning column is automatically the first column with a timestamp data type. TimescaleDB also creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98%, and organized for efficient, large-scale queries.
You can customize this policy later using alter_job(). However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one.
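For example, a minimal sketch of rescheduling the policy with alter_job(); the hypertable name metrics is illustrative, and the job ID comes from the informational view:

```sql
-- Look up the columnstore policy job for the hypertable.
SELECT job_id
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression'
  AND hypertable_name = 'metrics';

-- Run the policy every 12 hours instead of the default schedule.
SELECT alter_job(1000, schedule_interval => INTERVAL '12 hours');  -- 1000 is the job_id found above
```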
You can also manually convert chunks in a hypertable to the columnstore.
Hypertable-to-hypertable foreign keys are not allowed; all other combinations are permitted.
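For example, a sketch of a permitted combination, with illustrative names: a hypertable referencing a plain relational table.

```sql
-- A plain lookup table.
CREATE TABLE devices (
    device_id TEXT PRIMARY KEY,
    location  TEXT
);

-- A hypertable may reference a plain table; only
-- hypertable-to-hypertable references are rejected.
CREATE TABLE readings (
    ts        TIMESTAMPTZ NOT NULL,
    device_id TEXT REFERENCES devices (device_id),
    value     DOUBLE PRECISION
) WITH (
    tsdb.hypertable
);
```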
The settings are applied on a per-chunk basis. You can change the settings by calling ALTER TABLE without first converting the entire hypertable back to the rowstore. The new settings apply only to the chunks that have not yet been converted to the columnstore; the existing chunks in the columnstore do not change. Similarly, if you remove an existing columnstore policy and then add a new one, the new policy applies only to the unconverted chunks. This means that chunks with different settings can co-exist in the same hypertable.

TimescaleDB calculates default settings for each chunk when it is created. These settings apply to each chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string.
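A minimal sketch of changing the settings in place; the table and column names are illustrative, and the new values affect only chunks not yet converted to the columnstore:

```sql
-- Applies to future chunk conversions; already-converted chunks keep their settings.
ALTER TABLE metrics SET (
    timescaledb.orderby   = 'ts DESC',
    timescaledb.segmentby = 'device_id'
);
```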
CREATE TABLE extends the standard Postgres CREATE TABLE. This page explains the features and arguments specific to TimescaleDB.
For TimescaleDB v2.23.0 and higher, the table is automatically partitioned on the first column in the table with a timestamp data type. If multiple columns are suitable candidates as a partitioning column, TimescaleDB throws an error and asks for an explicit definition. For earlier versions, set tsdb.partition_column to a time column.

If you are self-hosting TimescaleDB v2.20.0 to v2.22.1, to convert your data to the columnstore after a specific time interval, you have to call add_columnstore_policy() after you call CREATE TABLE.

If you are self-hosting TimescaleDB v2.19.3 and below, create a relational table, then convert it using create_hypertable(). You then enable the columnstore with a call to ALTER TABLE.

Samples
- Create a hypertable partitioned on the time dimension and enable the columnstore:
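A minimal sketch, with illustrative table and column names:

```sql
CREATE TABLE sensor_data (
    ts          TIMESTAMPTZ NOT NULL,
    device_id   TEXT,
    temperature DOUBLE PRECISION
) WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'ts'
);
```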
When you create a hypertable using CREATE TABLE ... WITH, TimescaleDB automatically creates a columnstore policy that uses the chunk_interval as the compression interval, with a default schedule interval of 1 day. The default partitioning column is automatically selected as the first column with a timestamp or timestamptz data type.
- Create a hypertable partitioned on the time dimension, with fewer chunks based on time interval:
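A sketch using a chunk interval longer than the 7-day default; the names and the 1-month value are illustrative:

```sql
CREATE TABLE sensor_data_monthly (
    ts    TIMESTAMPTZ NOT NULL,
    value DOUBLE PRECISION
) WITH (
    tsdb.hypertable,
    tsdb.chunk_interval = '1 month'
);
```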
- Create a hypertable partitioned using UUIDv7:
- Postgres v17 and lower
- Postgres v18
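A sketch for Postgres v18, which ships a native uuidv7() function; on Postgres v17 and lower the UUIDv7 values must come from the client or an extension instead. All names are illustrative:

```sql
CREATE TABLE events (
    id      UUID PRIMARY KEY DEFAULT uuidv7(),  -- built-in on Postgres v18
    payload JSONB
) WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'id'
);
```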
- Enable data compression during ingestion:
When you set `timescaledb.enable_direct_compress_copy`, your data gets compressed in memory during ingestion with `COPY` statements. By writing the compressed batches immediately to the columnstore, the IO footprint is significantly lower. Also, the columnstore policy you set is less important, as `INSERT` already produces compressed chunks. To enable in-memory data compression during ingestion, set `timescaledb.enable_direct_compress_copy`; see the sketch after the steps below.

Please note that this feature is a tech preview and not production-ready. Using this feature could lead to regressed query performance and/or storage ratio if the ingested batches are not correctly ordered or are of too high cardinality.

Important facts:
- High-cardinality use cases do not produce good batches and lead to degraded query performance.
- The columnstore is optimized to store 1000 records per batch, which is the optimal format for ingestion per segmentby segment.
- WAL records are written for the compressed batches rather than the individual tuples.
- Currently only `COPY` is supported; `INSERT` will eventually follow.
- Best results are achieved for batch ingestion with 1000 records or more; the upper boundary is 10,000 records.
- Continuous aggregates are not supported at the moment.
- Create a hypertable:
- Copy data into the hypertable: You achieve the highest insert rate using binary format. CSV and text formats are also supported. A sketch of both steps follows.
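A minimal sketch of the full flow; only the `timescaledb.enable_direct_compress_copy` setting comes from this page, while the table and file names are illustrative assumptions:

```sql
-- Tech preview: compress data in memory during COPY ingestion.
SET timescaledb.enable_direct_compress_copy = true;

-- An illustrative hypertable.
CREATE TABLE metrics (
    ts    TIMESTAMPTZ NOT NULL,
    value DOUBLE PRECISION
) WITH (
    tsdb.hypertable
);

-- Binary format gives the highest insert rate; CSV and text also work.
COPY metrics FROM '/path/to/metrics.bin' WITH (FORMAT binary);
```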
- Create a relational table:
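A sketch with illustrative names; set tsdb.hypertable to false to get a plain relational table:

```sql
CREATE TABLE users (
    id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name TEXT
) WITH (
    tsdb.hypertable = false
);
```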
Arguments
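The general shape of the statement is sketched below; this is a non-authoritative outline, with angle brackets marking placeholders and the WITH options detailed in the table that follows:

```sql
CREATE TABLE <table_name> (
    <column_definitions>
) WITH (
    tsdb.hypertable [= true | false],
    tsdb.partition_column = '<column_name>',
    tsdb.chunk_interval = '<interval>',
    tsdb.create_default_indexes [= true | false],
    tsdb.associated_schema = '<schema_name>',
    tsdb.associated_table_prefix = '<prefix>',
    tsdb.orderby = '<column_name> [ASC | DESC]',
    tsdb.segmentby = '<column_name>[, ...]',
    tsdb.sparse_index = 'bloom(<column_name>), minmax(<column_name>)'
);
```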
The arguments are:

| Name | Type | Default | Required | Description |
|---|---|---|---|---|
| tsdb.hypertable | BOOLEAN | true | ✖ | Create a new hypertable for time-series data rather than a standard relational table. |
| tsdb.partition_column | TEXT | The first column in the table with a timestamp data type | ✖ | Set the time column to automatically partition your time-series data by. |
| tsdb.chunk_interval | TEXT | 7 days | ✖ | Change this to better suit your needs. For example, if you set chunk_interval to 1 day, each chunk stores data from the same day. Data from different days is stored in different chunks. |
| tsdb.create_default_indexes | BOOLEAN | true | ✖ | Set to false to not automatically create indexes. By default, TimescaleDB creates an index on the partitioning column, in descending order. |
| tsdb.associated_schema | REGCLASS | _timescaledb_internal | ✖ | Set the schema name for internal tables. |
| tsdb.associated_table_prefix | TEXT | _hyper | ✖ | Set the prefix for the names of internal chunks. |
| tsdb.orderby | TEXT | Descending order on the time column in table_name. | ✖ | The order in which items are used in the columnstore. Specified in the same way as an ORDER BY clause in a SELECT query. Setting tsdb.orderby automatically creates an implicit min/max sparse index on the orderby column. |
| tsdb.segmentby | TEXT | TimescaleDB looks at pg_stats and determines an appropriate column based on the data cardinality and distribution. If pg_stats is not available, TimescaleDB looks for an appropriate column from the existing indexes. | ✖ | Set the list of columns used to segment data in the columnstore for the table. An identifier representing the source of the data, such as device_id or tags_id, is usually a good candidate. |
| tsdb.sparse_index | TEXT | TimescaleDB evaluates the columns you already have indexed, checks which data types are a good fit for sparse indexing, then creates a sparse index as an optimization. | ✖ | Configure the sparse indexes for compressed chunks. Requires setting tsdb.orderby. Supported index types include: bloom(<column_name>), a probabilistic index effective for = filters, which cannot be applied to tsdb.orderby columns; and minmax(<column_name>), which stores min/max values for each compressed batch. Setting tsdb.orderby automatically creates an implicit min/max sparse index on the orderby column. |
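Putting several of these arguments together, a sketch with illustrative table and column names:

```sql
CREATE TABLE conditions (
    ts        TIMESTAMPTZ NOT NULL,
    device_id TEXT,
    status    TEXT,
    value     DOUBLE PRECISION
) WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'ts',
    tsdb.chunk_interval   = '1 day',
    tsdb.orderby          = 'ts DESC',
    tsdb.segmentby        = 'device_id',
    tsdb.sparse_index     = 'bloom(status), minmax(value)'
);
```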