Accelerating PostgreSQL Ad Hoc Query and Dictionary with RUM Index

By Digoal

This article discusses how to accelerate PostgreSQL ad hoc queries (random field combinations) over dictionary-encoded data using the RUM index.


Data Size of System:

About 2 billion rows and 64 fields. The original data consists mostly of strings, and most fields have a limited number of unique values.


  1. Queries: aggregate queries over random combinations of fields.
  2. Query concurrency: about 1,000 concurrent queries, with a response time under 100 ms for each query.
  3. Write and update latency under 1 second; peak write/update speed up to 200,000 rows/s; batch writes supported.
  4. Adding fields must be convenient.
  5. Real-time computing (without modeling) required. In other words, adding statistical dimensions must be convenient, with no need to wait for a modeling step to complete.

PostgreSQL Features for This Scenario

PostgreSQL can be used to meet all these requirements. PostgreSQL has the following features for ad hoc non-modeling queries:

1. Index Interface:

Bloom interface. Supports multi-field combination indexes and queries on any combination of those fields. Filtering is lossy (candidate rows must be rechecked), and the index narrows the target data to a subset of blocks.

GIN interface. Inverted index, widely used in multi-value types (such as full-text search, arrays, JSON, and K-V), and multi-field combination indexes. Supports multi-value type or random field combination searches and bitmap index scans, aggregates target data to certain BLOCKs, and speeds up queries.

RUM interface. The new version of RUM not only supports the tsvector type, but also supports the array type. The advantage of RUM is that it does not need bitmap scans, so there is no recheck process, and there is less CPU consumption during queries than with the GIN index interface.
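As a quick sketch of how each interface is used (the table and column names here are illustrative, not from the original benchmark):

```sql
create extension if not exists bloom;
create extension if not exists rum;   -- from https://github.com/postgrespro/rum

create table t (c1 int4, c2 int4, c3 int4, arr int4[]);

-- bloom: one lossy multi-column index serving any field combination
create index idx_t_bloom on t using bloom (c1, c2, c3);

-- GIN: inverted index over a multi-value type (here an integer array)
create index idx_t_gin on t using gin (arr);

-- RUM: array support without bitmap scan or recheck
create index idx_t_rum on t using rum (arr rum_anyarray_ops);
```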

2. Index Scanning Methods

An index scan locates the required data directly.

A bitmap index scan returns the blocks that contain the target data, after which the database performs a CPU recheck. This method supports combined scans over multiple fields.

3. Other Features:

  • Parallel computing (parallel scans, filtering, sorting, JOINs, aggregation, index creation, and so on). For example, a parallel sort can return the top-k rows of 10 billion rows in about 40 seconds.
  • Asynchronous calls and aggregation; DBLINK asynchronous calls also enable parallel computing.
  • Horizontal sharding.
  • Sequences, which can be used to build dictionaries.
  • UDFs, which support complex in-database function programming and complex logic.
  • RULEs, which can convert data into dictionary values automatically when a write or update occurs.
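As a rough illustration of the last three items, a sequence can hand out dictionary ids and a RULE can mirror writes automatically (all names here are hypothetical):

```sql
-- Sequence: a global source of unique dictionary ids
create sequence dict_id_seq;
select nextval('dict_id_seq');   -- 1, then 2, 3, ...

-- RULE: fire an extra action on every insert, e.g. to feed a dictionary builder
create table t_raw (val text);
create table t_dict_feed (val text, ts timestamptz);
create rule r_feed as on insert to t_raw
  do also insert into t_dict_feed values (new.val, now());
```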

PostgreSQL Scenario Optimization Measures

  1. Dictionary encoding (most fields have a limited number of unique values, roughly 1–50 million). About 30 fields require a data dictionary (an ETL process can maintain the dictionary in real time). Converting data to dictionary codes saves storage space and improves processing efficiency. If performance is already acceptable, a data dictionary is not required.
  2. Automatic dictionary encoding on write (can be implemented with a RULE).
  3. Automatic translation back upon query.
  4. bloom, RUM, GIN, arrays, tsvector, and multi-field BITMAP SCAN.
  5. Sharding, with DBLINK asynchronous parallel calls.

Introduction to the Horizontal Sharding Method

  1. Use plproxy for horizontal sharding.
  2. Use postgres_fdw + pg_pathman for horizontal sharding.
  3. Use other PostgreSQL-based NewSQL or MPP open-source products.





Solution 1 — Global Dictionary + Array Type + RUM Index

Global Dictionary

Global dictionary means that the value ranges of all fields constitute one large value range, and each "field name + field value" pair is unique within that range.

After making a dictionary, you can choose either INT4 or INT8 as the element type of the dictionary.
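A minimal sketch of such a dictionary table (names are assumptions, not from the article):

```sql
-- "field name + field value" is unique; ids are unique across all fields
create table global_dict (
  id  int4 generated by default as identity primary key,  -- int8 beyond ~2.1 billion values
  fld text not null,    -- field name
  val text not null,    -- field value
  unique (fld, val)
);

-- encode: insert if absent, then look up the code
insert into global_dict (fld, val) values ('city', 'Hangzhou')
  on conflict (fld, val) do nothing;
select id from global_dict where fld = 'city' and val = 'Hangzhou';
```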


Since a global dictionary is used, a single array column can replace all of the original columns.

Using arrays has many advantages. For example, adding a field becomes trivial: there is no need to change the table structure; you simply fill the new field's dictionary value into the array.
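As a sketch (table names are illustrative), the original wide table had one column per field, while the replacement keeps a single array of dictionary values:

```sql
-- Before: one column per field (64 columns, abbreviated here)
create table t_wide (id serial8, c1 int4, c2 int4, c3 int4 /* ... */);

-- After: a single array of global dictionary values;
-- dict[1] encodes c1, dict[2] encodes c2, and so on
create table t_arr (id serial8, dict int4[]);
```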

The original AND queries are replaced with array CONTAINS queries, and the original OR queries are replaced with array OVERLAP (intersect) queries.
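In PostgreSQL's array operator terms (assuming a table t_arr with an int4[] column named dict):

```sql
-- AND across fields: the row's array must CONTAIN all listed codes
select count(*) from t_arr where dict @> array[10, 10000010];

-- OR across fields: the row's array must OVERLAP the listed codes
select count(*) from t_arr where dict && array[10, 10000010];
```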

RUM indexes

RUM indexes now support arrays, including both CONTAINS and OVERLAP queries.

Solution Demo

The demo does not include converting text into dictionary values. The hardware is an Alibaba Cloud ECS instance: 56 cores, 224 GB of memory, and local SSD storage.

1. Create an extension
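The extension in question is presumably RUM:

```sql
create extension if not exists rum;   -- https://github.com/postgrespro/rum
```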

2. Create a function that generates a random value

Create a function that generates a random value (namely the dictionary value). When you enter a range, the function returns a random value in this range.
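A minimal sketch of such a function (the name gen_rand is an assumption):

```sql
-- Returns a uniformly random dictionary value in [lo, hi]
create or replace function gen_rand(lo int, hi int) returns int as $$
  select lo + (random() * (hi - lo))::int;
$$ language sql strict;
```

For example, `select gen_rand(0, 999999);` returns a random value from the first field's range.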

3. Create a function that generates a random array with a length of 50

The rule: 16 fields draw from dictionary value ranges of 1 million elements each, 16 fields from ranges of 10 million elements each, and 18 fields from ranges of 50 million elements each.

A total of 50 fields therefore consume 16 × 1M + 16 × 10M + 18 × 50M = 1.076 billion dictionary values, which fits within INT4 (about 2.1 billion positive values), so we can use INT4 as the dictionary element type.
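A sketch of the generator, with non-overlapping per-field ranges (the function name is an assumption; the random expression mirrors the helper from the previous step):

```sql
create or replace function gen_ran_array() returns int4[] as $$
declare
  res int4[] := '{}';
  v_offset int4 := 0;
begin
  -- 16 fields, each drawing from a 1,000,000-value dictionary range
  for i in 1..16 loop
    res := res || (v_offset + (random() * 999999)::int4);
    v_offset := v_offset + 1000000;
  end loop;
  -- 16 fields, 10,000,000 values each
  for i in 1..16 loop
    res := res || (v_offset + (random() * 9999999)::int4);
    v_offset := v_offset + 10000000;
  end loop;
  -- 18 fields, 50,000,000 values each
  for i in 1..18 loop
    res := res || (v_offset + (random() * 49999999)::int4);
    v_offset := v_offset + 50000000;
  end loop;
  return res;   -- 50 elements; ranges never overlap across fields
end;
$$ language plpgsql strict;
```

Because the per-field ranges never overlap, each element of the returned 50-element array unambiguously identifies both its field and its value.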

4. Data example

5. Create a table

6. Create array RUM indexes
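Steps 5 and 6 might look like this (names assumed):

```sql
create table tbl (
  id   serial8 primary key,
  dict int4[]          -- 50 dictionary codes, one per field
);

-- RUM with the anyarray operator class: serves @> and && directly,
-- with no bitmap scan and no recheck
create index idx_tbl_dict on tbl using rum (dict rum_anyarray_ops);
```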

7. Write 200 million pieces of test data into a single table of a single instance
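The load was presumably generated along these lines, with batch inserts driven from many concurrent connections (for example via pgbench); gen_ran_array is the generator from step 3:

```sql
-- One batch: 1,000 rows per statement; run this statement from many
-- concurrent connections to reach the reported throughput
insert into tbl (dict)
select gen_ran_array() from generate_series(1, 1000);
```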

8. The write speed of a single instance is 33,000 rows/s.

The write speed is about 33,000 rows/s per instance, so 10 nodes would reach about 330,000 rows/s, at about 80% CPU usage.

9. Space proportion for 200 million pieces of data

Table: 49 GB
Indexes: 184 GB

10. Create a function that returns random values within N valid ranges for query testing

Result example
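A sketch of such a function, mirroring the per-field ranges from step 3 (the name is an assumption; field choice may repeat, and a production version would deduplicate):

```sql
-- Returns one valid random dictionary value for each of n randomly chosen fields
create or replace function gen_test_arr(n int) returns int4[] as $$
declare
  res int4[] := '{}';
  fld int;
begin
  for i in 1..n loop
    fld := 1 + (random() * 49)::int;          -- pick a field 1..50
    if fld <= 16 then                          -- ranges of 1M values
      res := res || ((fld - 1) * 1000000 + (random() * 999999)::int4);
    elsif fld <= 32 then                       -- ranges of 10M values
      res := res || (16000000 + (fld - 17) * 10000000 + (random() * 9999999)::int4);
    else                                       -- ranges of 50M values
      res := res || (176000000 + (fld - 33) * 50000000 + (random() * 49999999)::int4);
    end if;
  end loop;
  return res;
end;
$$ language plpgsql strict;
```

For example, `select gen_test_arr(4);` returns four codes, each drawn from a valid field range.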

11. Ad hoc query stress test

Disable bitmap scan

1 field query

2-field AND query

3-field AND query

4-field AND query

2-field OR query

3-field OR query

4-field OR query

More-field AND query

More-field OR query
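The stress-test statements presumably follow this pattern (tbl and gen_test_arr as in the earlier steps):

```sql
set enable_bitmapscan = off;  -- force RUM index scans rather than bitmap scans

-- single-field query
select count(*) from tbl where dict @> gen_test_arr(1);

-- n-field AND query: every code must be present (here n = 4)
select count(*) from tbl where dict @> gen_test_arr(4);

-- n-field OR query: at least one code must be present
select count(*) from tbl where dict && gen_test_arr(4);
```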

Stress Test Results

4-dimension AND query, input random conditions, stress test result: average RT 1.3 milliseconds, TPS 43,000+

4-dimension OR query, input random conditions, stress test result: average RT 2.9 milliseconds, TPS 18,000+

PostgreSQL 11 HASH Partition Table Combined with RUM

PostgreSQL 11 supports HASH partitioned tables. Performance might be even better with smart parallel aggregation.

1. Create RUM indexes

2. Create a partition table
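Steps 1 and 2 might look like this in PG 11 (partition count and names are assumptions; the schema shape is the same as in the PG 10 demo, recreated in a fresh database):

```sql
create table tbl (id serial8, dict int4[]) partition by hash (id);

create table tbl0 partition of tbl for values with (modulus 4, remainder 0);
create table tbl1 partition of tbl for values with (modulus 4, remainder 1);
create table tbl2 partition of tbl for values with (modulus 4, remainder 2);
create table tbl3 partition of tbl for values with (modulus 4, remainder 3);

-- In PG 11, an index created on the parent cascades to every partition
create index idx_tbl_dict on tbl using rum (dict rum_anyarray_ops);
```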

3. Create random functions

4. Create a dictionary-generation function

5. Write test data

Using PostgreSQL 11 HASH partitions, the write speed is about 55,000 rows/s.

6. Ad hoc query performance is consistent with PostgreSQL 10

Aggregate Computing

At present, aggregation on partition tables requires scanning the partitions, appending the results, and then aggregating. There is still room for optimization, and the community is already working on it.

  1. Smart parallel aggregation of partitions
  2. Smart parallel JOIN of partitions
    Let workers for each partition work in parallel, which is similar to the processing of the MPP architecture.
  3. DBLINK asynchronous call parallel aggregation


This article uses “global dictionary + array + RUM index” to achieve efficient write and query performance.

This approach is particularly suitable for scenarios where all query conditions are equality conditions. If there are non-equality conditions, we recommend cascading them into equality conditions. Alternatively, extract the fields used in non-equality conditions, index them with a B-tree, and combine multiple indexes with a bitmap index scan, which also accelerates queries; in that case, the corresponding recheck is needed.

PG 10 single-instance, single-table write speed: about 33,000 rows/s. There is significant room for write performance improvement; the current bottleneck is mainly the WAL writer.

For queries running concurrently with writes on a single instance, queries on any dimensions respond within 20 milliseconds.

4-dimension AND query, average RT 1.3 milliseconds, TPS 43,000+, far exceeding the 1000-concurrency business requirement.

4-dimension OR query, average RT 2.9 milliseconds, TPS 18,000+, far exceeding the 1000-concurrency business requirement.

With “global dictionary service + sub-database”, you can fulfill the requirements of a greater number of ad hoc real-time queries.