Two-step aggregation
This group of functions uses the two-step aggregation pattern. Rather than calculating the final result in one step, you first create an intermediate aggregate by using the aggregate function. Then, use any of the accessors on the intermediate aggregate to calculate a final result. You can also roll up multiple intermediate aggregates with the rollup functions, as shown in the sketch after this list. The two-step aggregation pattern has several advantages:
- More efficient, because multiple accessors can reuse the same aggregate
- Easier to reason about performance, because aggregation is separate from final computation
- Easier to understand when calculations can be rolled up into larger intervals, especially in window functions and continuous aggregates
- Allows retrospective analysis even after the underlying data is dropped, because the intermediate aggregate stores extra information that is not available in the final result
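As a minimal sketch of the pattern, assuming an example table foo with a timestamp column ts and a value column val (illustrative names only): daily TimeWeightSummary aggregates are built once, then rolled up into weekly results with rollup() before an accessor is applied.

```sql
-- Sketch only: foo, ts, and val are assumed example names.
WITH daily AS (
    SELECT
        time_bucket('1 day'::interval, ts) AS day,
        time_weight('Linear', ts, val) AS tw      -- step 1: intermediate aggregate
    FROM foo
    GROUP BY time_bucket('1 day'::interval, ts)
)
SELECT
    time_bucket('7 days'::interval, day) AS week,
    average(rollup(tw)) AS weekly_avg             -- step 2: roll up, then apply an accessor
FROM daily
GROUP BY time_bucket('7 days'::interval, day);
```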
Samples
Aggregate data into a TimeWeightSummary and calculate the average
Given a table foo with data in a column val, aggregate the data into a daily TimeWeightSummary. Then use it to calculate the average for column val:
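A minimal sketch, assuming foo also has a timestamp column named ts (the column name is illustrative):

```sql
WITH daily AS (
    SELECT
        time_bucket('1 day'::interval, ts) AS day,
        time_weight('Linear', ts, val) AS tw   -- intermediate TimeWeightSummary per day
    FROM foo
    GROUP BY time_bucket('1 day'::interval, ts)
)
SELECT
    day,
    average(tw) AS time_weighted_average       -- accessor applied to the intermediate aggregate
FROM daily;
```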
Advanced usage
Parallelism and ordering
Time-weighted average calculations are not strictly parallelizable, as defined by PostgreSQL. These calculations require inputs to be strictly ordered, but in general, PostgreSQL parallelizes by assigning rows randomly to workers. However, the algorithm can be parallelized if it is guaranteed that all rows within some time range go to the same worker. This is the case for both continuous aggregates and distributed hypertables. (Note that the partitioning keys of the distributed hypertable must be within the GROUP BY clause, but this is usually the case.)
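For illustration, a sketch of a query that keeps the space-partitioning key in the GROUP BY, assuming a distributed hypertable foo partitioned by a device_id column, with a timestamp column ts and a value column val (all assumed names):

```sql
SELECT
    device_id,                                       -- partitioning key stays in the GROUP BY
    time_bucket('1 day'::interval, ts) AS day,
    average(time_weight('Linear', ts, val)) AS daily_avg
FROM foo
GROUP BY device_id, time_bucket('1 day'::interval, ts);
```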
Combining aggregates across measurement series
If you try to combine overlapping TimeWeightSummaries, an error is thrown. For example, you might create a TimeWeightSummary for device_1 and a separate TimeWeightSummary for device_2, both covering the same period of time. You can’t combine these because the interpolation techniques only make sense when restricted to a single measurement series.
If you want to calculate a single summary statistic across all devices, use a simple average, like this:
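A sketch of that approach, assuming the measurement series in foo are distinguished by a device_id column and that ts and val are the timestamp and value columns (illustrative names):

```sql
WITH per_device AS (
    SELECT
        device_id,
        average(time_weight('LOCF', ts, val)) AS device_avg  -- one time-weighted average per series
    FROM foo
    GROUP BY device_id
)
SELECT avg(device_avg) AS overall_avg   -- plain average across the per-device results
FROM per_device;
```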
Parallelism in multi-node
The time-weighted average functions are not strictly parallelizable in the PostgreSQL sense. PostgreSQL requires that parallelizable functions accept potentially overlapping input. As explained above, the time-weighted functions do not. However, they do support partial aggregation and partition-wise aggregation in multi-node setups.
Reducing memory usage
Because the time-weighted aggregates require ordered sets, they build up a buffer of input data, sort it, and then perform the aggregation steps. When memory is too small to build up a buffer of points, you might see out-of-memory failures or other issues. In these cases, try using a multi-level aggregate. For example:
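A sketch of a multi-level aggregate, again assuming columns device_id, ts, and val in foo: the inner level aggregates within small time buckets, so each sort buffer stays small, and rollup() combines the per-bucket summaries before the accessor runs.

```sql
WITH buckets AS (
    SELECT
        device_id,
        time_bucket('1 day'::interval, ts) AS day,
        time_weight('LOCF', ts, val) AS tw        -- small, per-day sort buffers
    FROM foo
    GROUP BY device_id, time_bucket('1 day'::interval, ts)
)
SELECT
    device_id,
    average(rollup(tw)) AS time_weighted_average  -- combine summaries, then apply the accessor
FROM buckets
GROUP BY device_id;
```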
Functions in this group
Aggregate
time_weight(): aggregate data into an intermediate time-weighted aggregate form for further calculation
Accessors
average(): calculate the time-weighted average of values in a TimeWeightSummary
first_time(): get the timestamp of the first point in the TimeWeightSummary
first_val(): get the value of the first point in the TimeWeightSummary
integral(): calculate the integral from a TimeWeightSummary
interpolated_average(): calculate the time-weighted average, interpolating at boundaries
interpolated_integral(): calculate the integral, interpolating at boundaries
last_time(): get the timestamp of the last point in the TimeWeightSummary
last_val(): get the value of the last point in the TimeWeightSummary
Rollup
rollup(): combine multiple TimeWeightSummaries