Replace a standard relational table with a hypertable that is partitioned on a single
dimension. To create a new hypertable, best practice is to call CREATE TABLE.
A hypertable is a table that automatically partitions your data by time. A dimension defines the way your
data is partitioned. All actions, such as ALTER TABLE and SELECT, work on the resulting hypertable.
If the table to convert already contains data, set migrate_data to TRUE.
However, this may take a long time and there are limitations when the table contains foreign
key constraints.
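For example, to convert a table that already holds rows, a minimal sketch (this assumes conditions already contains data):

```sql
-- Convert a non-empty table, moving its existing rows into chunks.
-- This can lock the table for a significant time on large data sets.
SELECT create_hypertable('conditions', by_range('time'), migrate_data => TRUE);
```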
You cannot run create_hypertable() on a table that is already partitioned using
declarative partitioning or inheritance. The time column must be defined
as NOT NULL. If this is not already specified on table creation, create_hypertable automatically adds
this constraint on the table when it is executed.
This page describes the generalized API introduced in v2.13.
The old interface for create_hypertable is also available.
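For reference, the old interface names the partitioning column directly instead of using a dimension builder:

```sql
-- Old-style call: the time column is passed as a plain name.
SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');
```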
Samples
Before you call create_hypertable, you create a standard relational table. For example:
CREATE TABLE conditions (
time TIMESTAMPTZ NOT NULL,
location text NOT NULL,
temperature DOUBLE PRECISION NULL
);
The following examples show you how to create a hypertable from an existing table or a function:
Time partition a hypertable by time range
The following examples show different ways to create a hypertable:
-
Convert conditions to a hypertable with range partitioning on the
time column:
SELECT create_hypertable('conditions', by_range('time'));
-
Convert conditions to a hypertable with a chunk time interval of 24 hours:
Either:
SELECT create_hypertable('conditions', by_range('time', 86400000000));
or:
SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day'));
-
Convert conditions to a hypertable with range partitioning on the
time column, without raising an exception if conditions is already a hypertable:
SELECT create_hypertable('conditions', by_range('time'), if_not_exists => TRUE);
If you call SELECT * FROM create_hypertable(...), the return value is formatted as a table with column headings.
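The returned columns match the Returns table at the end of this page; the hypertable_id value shown here is illustrative:

```sql
SELECT * FROM create_hypertable('conditions', by_range('time'));
--  hypertable_id | created
-- ---------------+---------
--              1 | t
```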
Time partition a hypertable using composite columns and immutable functions
The following example shows how to time partition the measurements relational table on a composite
column type using a range partitioning function.
-
Create the report type, then an immutable function that converts the column value into a supported type:
CREATE TYPE report AS (reported timestamp with time zone, contents jsonb);
CREATE FUNCTION report_reported(report)
RETURNS timestamptz
LANGUAGE SQL
IMMUTABLE AS
'SELECT $1.reported';
-
Create the hypertable using the immutable function:
SELECT create_hypertable('measurements', by_range('report', partition_func => 'report_reported'));
The following example shows how to time partition the events table on a jsonb (event) column
type, which has a top-level started key containing an ISO 8601 formatted timestamp:
CREATE FUNCTION event_started(jsonb)
RETURNS timestamptz
LANGUAGE SQL
IMMUTABLE AS
$func$SELECT ($1->>'started')::timestamptz$func$;
SELECT create_hypertable('events', by_range('event', partition_func => 'event_started'));
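The events table itself is not defined on this page; for the call above to work, it needs a jsonb column named event. A hypothetical definition and insert:

```sql
-- Hypothetical table definition matching the partitioning call above.
CREATE TABLE events (
    event jsonb NOT NULL
);

-- Rows are routed to chunks using the timestamp that event_started()
-- extracts from the top-level "started" key.
INSERT INTO events (event)
VALUES ('{"started": "2024-05-01T12:00:00Z", "type": "login"}');
```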
Time partition a hypertable using UUIDv7:
-
Create a table with a UUIDv7 column:
On Postgres 17 and lower, use the generate_uuidv7() helper; on Postgres 18 and higher, you can use the native uuidv7() function as the column default instead:
CREATE TABLE events (
id uuid PRIMARY KEY DEFAULT generate_uuidv7(),
payload jsonb
);
-
Partition the table based on the timestamps embedded within the UUID values:
SELECT create_hypertable(
'events',
by_range('id', INTERVAL '1 month')
);
Subsequent data insertion and queries automatically leverage the UUIDv7-based partitioning.
Arguments
| Name | Type | Default | Required | Description |
create_default_indexes | BOOLEAN | TRUE | ✖ | Create default indexes on time/partitioning columns. |
dimension | DIMENSION_INFO | - | ✔ | To create a _timescaledb_internal.dimension_info instance to partition a hypertable, call by_range or by_hash. |
if_not_exists | BOOLEAN | FALSE | ✖ | Set to TRUE to print a warning if relation is already a hypertable. By default, an exception is raised. |
migrate_data | BOOLEAN | FALSE | ✖ | Set to TRUE to migrate any existing data in relation into chunks in the new hypertable. Depending on the amount of data to be migrated, setting migrate_data can lock the table for a significant amount of time. If there are foreign key constraints to other tables in the data to be migrated, create_hypertable() can run into deadlock. A hypertable can only contain foreign keys to another hypertable. UNIQUE and PRIMARY KEY constraints must include the partitioning key. Deadlock may happen when concurrent transactions simultaneously try to insert data into tables that are referenced in the foreign key constraints, and into the converting table itself. To avoid deadlock, manually obtain a SHARE ROW EXCLUSIVE lock on the referenced tables before you call create_hypertable in the same transaction. If you leave migrate_data set to the default, non-empty tables generate an error when you call create_hypertable. |
relation | REGCLASS | - | ✔ | Identifier of the table to convert to a hypertable. |
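To avoid the deadlock described for migrate_data, lock the referenced tables first, in the same transaction. A sketch, assuming conditions has a foreign key to a hypothetical devices table:

```sql
BEGIN;
-- Take the lock on the referenced table before converting.
LOCK TABLE devices IN SHARE ROW EXCLUSIVE MODE;
SELECT create_hypertable('conditions', by_range('time'), migrate_data => TRUE);
COMMIT;
```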
Dimension info
To create a _timescaledb_internal.dimension_info instance, you call the dimension builders by_range or by_hash. To add a dimension to an existing hypertable, pass the builder to add_dimension.
Samples
Hypertables must always have a primary range dimension, followed by an arbitrary number of additional
dimensions that can be either range or hash. Typically this is just one hash. For example:
SELECT add_dimension('conditions', by_range('time'));
SELECT add_dimension('conditions', by_hash('location', 2));
For incompatible data types such as jsonb, you can pass a function to the partition_func argument
of the dimension builder to extract a compatible data type. See the examples below.
Custom partitioning
By default, TimescaleDB calls PostgreSQL's internal hash function for the given type.
Use a custom partitioning function for value types that do not have a native hash function.
You can specify a custom partitioning function for both range and hash partitioning. A partitioning function should
take an anyelement argument as the only parameter and return a positive integer hash value. This hash value is
not a partition identifier, but rather the inserted value’s position in the dimension’s key space, which is then
divided across the partitions.
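As a sketch, a custom partitioning function can wrap a Postgres hashing primitive; my_partition_hash is a hypothetical name, and the column and partition count are illustrative:

```sql
-- Hash any value by its text representation; abs() keeps the result positive.
CREATE FUNCTION my_partition_hash(val anyelement)
RETURNS integer
LANGUAGE SQL IMMUTABLE AS
$$ SELECT abs(hashtext(val::text)) $$;

SELECT add_dimension('conditions',
    by_hash('location', 4, partition_func => 'my_partition_hash'));
```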
by_range()
Create a by-range dimension builder. You can partition by range on its own.
Samples
-
Partition on time using
CREATE TABLE
The simplest usage is to partition on a time column:
CREATE TABLE conditions (
time TIMESTAMPTZ NOT NULL,
location TEXT NOT NULL,
device TEXT NOT NULL,
temperature DOUBLE PRECISION NULL,
humidity DOUBLE PRECISION NULL
) WITH (
tsdb.hypertable
);
When you create a hypertable using CREATE TABLE ... WITH ..., the default partitioning column is automatically the first column with a timestamp data type. Also, TimescaleDB creates a columnstore policy that automatically converts your data to the columnstore after an interval equal to the value of the chunk_interval, defined through compress_after in the policy. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space. In the conversion, chunks are compressed by up to 98%, and organized for efficient, large-scale queries.
You can customize this policy later using alter_job(). However, to change after or created_before, the compression settings, or the hypertable the policy is acting on, you must remove the columnstore policy and add a new one.
You can also manually convert chunks in a hypertable to the columnstore.
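For example, to change how often the columnstore policy runs, look up its job and adjust the schedule; the job_id of 1000 below is illustrative:

```sql
-- Find the job that runs the columnstore policy for the hypertable.
SELECT job_id, proc_name, hypertable_name
FROM timescaledb_information.jobs;

-- Run the policy every 12 hours instead of the default schedule.
SELECT alter_job(1000, schedule_interval => INTERVAL '12 hours');
```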
This is the default partition; you do not need to add it explicitly.
-
Extract time from a non-time column using
create_hypertable
If you have a table with a non-time column containing the time, such as
a JSON column, add a partition function to extract the time:
CREATE TABLE my_table (
metric_id serial NOT NULL,
data jsonb
);
CREATE FUNCTION get_time(jsonb) RETURNS timestamptz AS $$
SELECT ($1->>'time')::timestamptz
$$ LANGUAGE sql IMMUTABLE;
SELECT create_hypertable('my_table', by_range('data', '1 day', 'get_time'));
Arguments
| Name | Type | Default | Required | Description |
column_name | NAME | - | ✔ | Name of column to partition on. |
partition_func | REGPROC | - | ✖ | The function to use for calculating the partition of a value. |
partition_interval | ANYELEMENT | - | ✖ | Interval to partition column on. |
If the column to be partitioned is a:
-
TIMESTAMP, TIMESTAMPTZ, or DATE: specify partition_interval either as an INTERVAL type
or an integer value in microseconds.
-
Another integer type: specify
partition_interval as an integer that reflects the column’s
underlying semantics. For example, if this column is in UNIX time, specify partition_interval in milliseconds.
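For example, for a column storing UNIX time in milliseconds (the table and values are illustrative):

```sql
CREATE TABLE sensor_data (
    time BIGINT NOT NULL,   -- UNIX time in milliseconds
    value DOUBLE PRECISION
);

-- 86400000 ms = 1 day, matching the column's millisecond semantics.
SELECT create_hypertable('sensor_data', by_range('time', 86400000));
```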
The partition type and default value depend on the column type:
| Column Type | Partition Type | Default value |
TIMESTAMP WITHOUT TIME ZONE | INTERVAL/INTEGER | 1 week |
TIMESTAMP WITH TIME ZONE | INTERVAL/INTEGER | 1 week |
DATE | INTERVAL/INTEGER | 1 week |
SMALLINT | SMALLINT | 10000 |
INT | INT | 100000 |
BIGINT | BIGINT | 1000000 |
by_hash()
The main purpose of hash partitioning is to enable parallelization across multiple disks within the same time interval.
Every distinct item in hash partitioning is hashed to one of N buckets. By default, TimescaleDB uses flexible range
intervals to manage chunk sizes.
Parallelizing disk I/O
You use parallel I/O in the following scenarios:
- Two or more concurrent queries should be able to read from different disks in parallel.
- A single query should be able to use query parallelization to read from multiple disks in parallel.
You have the following options:
-
RAID: use a RAID setup across multiple physical disks, and expose a single logical disk to the hypertable.
That is, use a single tablespace.
Best practice is to use RAID when possible, as you do not need to manually manage tablespaces
in the database.
-
Multiple tablespaces: for each physical disk, add a separate tablespace to the database. TimescaleDB allows you to
add multiple tablespaces to a single hypertable. Under the hood, a hypertable's
chunks are spread across the tablespaces associated with that hypertable.
When using multiple tablespaces, a best practice is to also add a second hash-partitioned dimension to your
hypertable and to have at least one hash partition per disk. While a single time dimension would also work, it would mean that
the first chunk is written to one tablespace, the second to another, and so on, and thus would parallelize only if a
query's time range exceeds a single chunk.
When adding a hash-partitioned dimension, set the number of partitions to a multiple of the number of disks. For example,
the number of partitions P = N * Pd, where N is the number of disks and Pd is the number of partitions per
disk. This enables you to add more disks later and move partitions to the new disk from other disks.
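For example, with N = 4 disks and Pd = 2 partitions per disk, create P = 8 hash partitions; using device as the hash column is an illustrative choice:

```sql
-- 4 disks * 2 partitions per disk = 8 hash partitions.
SELECT add_dimension('conditions', by_hash('device', 8));
```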
TimescaleDB does not benefit from a very large number of hash
partitions, such as the number of unique items you expect in the partition
field. A very large number of hash partitions leads both to poorer
per-partition load balancing (the mapping of items to partitions using
hashing), and to much increased planning latency for some types of
queries.
Samples
CREATE TABLE conditions (
"time" TIMESTAMPTZ NOT NULL,
location TEXT NOT NULL,
device TEXT NOT NULL,
temperature DOUBLE PRECISION NULL,
humidity DOUBLE PRECISION NULL
) WITH (
tsdb.hypertable,
tsdb.chunk_interval='1 day'
);
SELECT add_dimension('conditions', by_hash('location', 2));
Arguments
| Name | Type | Default | Required | Description |
column_name | NAME | - | ✔ | Name of column to partition on. |
partition_func | REGPROC | - | ✖ | The function to use to calculate the partition of a value. |
number_partitions | INTEGER | - | ✔ | Number of hash partitions to use for partitioning_column. Must be greater than 0. |
Returns
by_range and by_hash return an opaque _timescaledb_internal.dimension_info instance, holding the
dimension information used by this function.
Returns
| Column | Type | Description |
hypertable_id | INTEGER | The ID of the hypertable you created. |
created | BOOLEAN | TRUE when the hypertable is created. FALSE when if_not_exists is true and no hypertable was created. |