Since TimescaleDB 2.20.0

Create a hypertable partitioned on a single dimension with the columnstore enabled, or create a standard relational table.

A hypertable is a specialized table that automatically partitions your data by time. All actions that work on a table work on hypertables. For example, ALTER TABLE and SELECT. By default, a hypertable is partitioned on the time dimension. To add secondary dimensions to a hypertable, call add_dimension. To convert an existing relational table into a hypertable, call create_hypertable.

When you create a hypertable using CREATE TABLE ... WITH ..., the default partitioning column is automatically the first column with a timestamp data type. TimescaleDB also creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of the chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98%, and organized for efficient, large-scale queries. You can customize this policy later using alter_job(). However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one. You can also manually convert chunks in a hypertable to the columnstore. Foreign keys from the columnstore are not allowed; all other combinations are permitted.

The columnstore settings are applied on a per-chunk basis. You can change the settings by calling ALTER TABLE without first converting the entire hypertable back to the rowstore. The new settings apply only to the chunks that have not yet been converted to the columnstore; the existing chunks in the columnstore do not change. Similarly, if you remove an existing columnstore policy and then add a new one, the new policy applies only to the unconverted chunks. This means that chunks with different settings can co-exist in the same hypertable. TimescaleDB calculates default settings for each hypertable when it is created. These settings apply to each chunk, and not the entire hypertable. To explicitly disable the defaults, set a setting to an empty string.

CREATE TABLE extends the standard Postgres CREATE TABLE. This page explains the features and arguments specific to TimescaleDB.
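For example, to change the compress-after interval you remove the default policy and add a new one, and you can convert individual chunks by hand. A minimal sketch using the columnstore API, assuming a hypertable named conditions and an illustrative chunk name:

CALL remove_columnstore_policy('conditions');
CALL add_columnstore_policy('conditions', after => INTERVAL '7 days');

-- find chunks that are old enough, then convert one manually
SELECT show_chunks('conditions', older_than => INTERVAL '7 days');
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');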
For TimescaleDB v2.23.0 and higher, the table is automatically partitioned on the first column in the table with a timestamp data type. If multiple columns are suitable candidates as a partitioning column, TimescaleDB throws an error and asks for an explicit definition. For earlier versions, set partition_column to a time column.

If you are self-hosting TimescaleDB v2.20.0 to v2.22.1, to convert your data to the columnstore after a specific time interval, you have to call add_columnstore_policy() after you call CREATE TABLE.

If you are self-hosting TimescaleDB v2.19.3 and below, create a relational table, then convert it using create_hypertable(). You then enable the columnstore with a call to ALTER TABLE.
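A minimal sketch of the earlier-version path (the table and column names are illustrative; the ALTER TABLE options follow the newer columnstore syntax, older versions use timescaledb.compress instead):

CREATE TABLE conditions (
   "time" TIMESTAMPTZ NOT NULL,
   device TEXT,
   value DOUBLE PRECISION
);
SELECT create_hypertable('conditions', by_range('time'));
ALTER TABLE conditions SET (
   timescaledb.enable_columnstore = true,
   timescaledb.segmentby = 'device'
);
-- for v2.20.0 to v2.22.1, add the policy explicitly after CREATE TABLE
CALL add_columnstore_policy('conditions', after => INTERVAL '1 day');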

Samples

  • Create a hypertable partitioned on the time dimension and enable the columnstore:
    CREATE TABLE crypto_ticks (
       "time" TIMESTAMPTZ,
       symbol TEXT,
       price DOUBLE PRECISION,
       day_volume NUMERIC
    ) WITH (
      tsdb.hypertable,
      tsdb.segmentby='symbol',
      tsdb.orderby='time DESC'
    );
    
    When you create a hypertable using CREATE TABLE ... WITH, TimescaleDB automatically creates a columnstore policy that uses the chunk interval as the compression interval, with a default schedule interval of 1 day. The default partitioning column is automatically selected as the first column with a timestamptz or timestamp data type.
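    The policy is registered as a background job. To inspect it, you can query the jobs view; a minimal sketch:
    SELECT job_id, proc_name, schedule_interval, config
    FROM timescaledb_information.jobs
    WHERE hypertable_name = 'crypto_ticks';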
  • Create a hypertable partitioned on the time dimension, controlling the number of chunks with a custom chunk interval:
    CREATE TABLE IF NOT EXISTS hypertable_control_chunk_interval(
     time int4 NOT NULL,
     device text,
     value float
    ) WITH (
     tsdb.hypertable,
     tsdb.chunk_interval=3453
    );
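    If the interval you chose turns out to be wrong for your workload, you can change it later with set_chunk_time_interval(); new chunks use the new setting, existing chunks keep theirs. A minimal sketch (the integer interval is illustrative):
    -- for an integer time column, the interval is an integer
    SELECT set_chunk_time_interval('hypertable_control_chunk_interval', 10000);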
    
  • Create a hypertable partitioned using UUIDv7:
     Postgres 17 and lower, using generate_uuidv7():
     -- UUIDv7 compression is enabled by default
     CREATE TABLE events (
        id  uuid PRIMARY KEY DEFAULT generate_uuidv7(),
        payload jsonb
     ) WITH (tsdb.hypertable, tsdb.partition_column = 'id');
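     Postgres 18 and higher, a sketch assuming the native uuidv7() function:
      -- Postgres 18 ships a built-in uuidv7() generator
      CREATE TABLE events (
         id  uuid PRIMARY KEY DEFAULT uuidv7(),
         payload jsonb
      ) WITH (tsdb.hypertable, tsdb.partition_column = 'id');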
    
  • Enable data compression during ingestion: When you set timescaledb.enable_direct_compress_copy, your data is compressed in memory during ingestion with COPY statements. Because the compressed batches are written immediately to the columnstore, the IO footprint is significantly lower. The columnstore policy you set also matters less, as COPY already produces compressed chunks.
    Please note that this feature is a tech preview and not production-ready. Using this feature could lead to regressed query performance and/or a worse storage ratio if the ingested batches are not correctly ordered or have too high cardinality.
    To enable in-memory data compression during ingestion:
    SET timescaledb.enable_direct_compress_copy=on;
    
    Important facts
     • High-cardinality use cases do not produce good batches and lead to degraded query performance.
     • The columnstore is optimized to store 1000 records per batch, which is the optimal format for ingestion per segmentby value.
     • WAL records are written for the compressed batches rather than the individual tuples.
     • Currently only COPY is supported; INSERT will follow.
     • Best results are achieved for batch ingestion with 1000 records or more; the upper boundary is 10,000 records.
     • Continuous aggregates are not supported at the moment.
     1. Create a hypertable:
    CREATE TABLE t(time timestamptz, device text, value float) WITH (tsdb.hypertable);
    
     2. Copy data into the hypertable: You achieve the highest insert rate using binary format. CSV and text formats are also supported.
    COPY t FROM '/tmp/t.binary' WITH (format binary);
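     CSV and text ingestion look the same; to check that the ingested data landed compressed, you can inspect the chunks view. A minimal sketch (the file path is illustrative):
     COPY t FROM '/tmp/t.csv' WITH (format csv);
     -- confirm the chunks were written compressed
     SELECT chunk_name, is_compressed
     FROM timescaledb_information.chunks
     WHERE hypertable_name = 't';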
    
  • Create a relational table:
    CREATE TABLE IF NOT EXISTS relational_table(
     device text,
     value float
    );
    

Arguments

The syntax is:
CREATE TABLE <table_name> (
   -- Standard Postgres syntax for CREATE TABLE
)
WITH (
   tsdb.hypertable = true | false,
   tsdb.partition_column = '<column_name>',
   tsdb.chunk_interval = '<interval>',
   tsdb.create_default_indexes = true | false,
   tsdb.associated_schema = '<schema_name>',
   tsdb.associated_table_prefix = '<prefix>',
   tsdb.orderby = '<column_name> [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
   tsdb.segmentby = '<column_name> [, ...]',
   tsdb.sparse_index = '<index>(<column_name>) [, ...]'
)
  • tsdb.hypertable: BOOLEAN. Default: true. Create a new hypertable for time-series data rather than a standard relational table.
  • tsdb.partition_column: TEXT. Default: the first column in the table with a timestamp data type. Set the time column to automatically partition your time-series data by.
  • tsdb.chunk_interval: TEXT. Default: 7 days. Change this to better suit your needs. For example, if you set chunk_interval to 1 day, each chunk stores data from the same day. Data from different days is stored in different chunks.
  • tsdb.create_default_indexes: BOOLEAN. Default: true. Set to false to not automatically create indexes. The default indexes are:
    • On all hypertables, a descending index on partition_column
    • On hypertables with space partitions, an index on the space parameter and partition_column
  • tsdb.associated_schema: REGCLASS. Default: _timescaledb_internal. Set the schema name for internal tables.
  • tsdb.associated_table_prefix: TEXT. Default: _hyper. Set the prefix for the names of internal chunks.
  • tsdb.orderby: TEXT. Default: descending order on the time column in table_name. The order in which items are used in the columnstore. Specify in the same way as an ORDER BY clause in a SELECT query. Setting tsdb.orderby automatically creates an implicit min/max sparse index on the orderby column.
  • tsdb.segmentby: TEXT. Default: TimescaleDB looks at pg_stats and determines an appropriate column based on the data cardinality and distribution; if pg_stats is not available, TimescaleDB looks for an appropriate column in the existing indexes. Set the list of columns used to segment data in the columnstore for this table. An identifier representing the source of the data, such as device_id or tags_id, is usually a good candidate.
  • tsdb.sparse_index: TEXT. Default: TimescaleDB evaluates the columns you already have indexed, checks which data types are a good fit for sparse indexing, then creates a sparse index as an optimization. Configure the sparse indexes for compressed chunks. Requires tsdb.orderby to be set. Supported index types are:
    • bloom(<column_name>): a probabilistic index, effective for = filters. Cannot be applied to tsdb.orderby columns.
    • minmax(<column_name>): stores min/max values for each compressed chunk. Setting tsdb.orderby automatically creates an implicit min/max sparse index on the orderby column.
    Define multiple indexes using a comma-separated list. You can set only one index per column. Set to an empty string to avoid using sparse indexes and explicitly disable the default behavior.
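For illustration, a sketch that sets these options together; the table, columns, and index choices are assumptions, not recommendations:

CREATE TABLE readings (
   "time" TIMESTAMPTZ NOT NULL,
   device_id TEXT,
   status TEXT,
   value DOUBLE PRECISION
) WITH (
   tsdb.hypertable,
   tsdb.partition_column = 'time',
   tsdb.chunk_interval = '1 day',
   tsdb.segmentby = 'device_id',
   tsdb.orderby = 'time DESC',
   -- bloom suits = filters on status; minmax suits range filters on value
   tsdb.sparse_index = 'bloom(status), minmax(value)'
);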

Returns

CREATE TABLE returns a simple message indicating success or failure.