Overview
Relyt ONE delivers exceptional analytical performance powered by our caching service and the DuckDB execution engine. To enable users to quickly evaluate the system’s OLAP capabilities, we provide a pre-configured TPC-H standard benchmark dataset that supports out-of-the-box, one-click performance testing.
What is TPC-H?
TPC-H is a decision support benchmark standard established by the Transaction Processing Performance Council (TPC). It simulates a real-world data warehouse environment with 8 business tables and 22 complex analytical queries, covering common OLAP scenarios including aggregations, joins, and subqueries. TPC-H is widely used to evaluate and compare the analytical performance of database systems.
Why Use a Pre-configured Dataset?
- Quick start: Ready immediately after project creation
- Standardized testing: Industry-recognized benchmark with meaningful, comparable metrics
- Repeatability: Unified data layout and configuration across environments
Dataset Architecture
The pre-configured TPC-H 1GB dataset is created as external tables, with the actual data stored on S3 object storage. Through Relyt ONE’s auto caching service, the system automatically caches hot data, delivering high-throughput, low-latency data access.
Usage Guide
1. Connect to Database
The pre-configured dataset lives in the `tpch` schema of the `postgres` database.
1. Connect via psql
Replace the host address in the connection string with your actual project address. You can obtain the complete connection information from the console.
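As a sketch, a psql connection might look like the following; the host, port, and user shown here are placeholders to be replaced with the values from your console:

```shell
# Placeholder connection string — substitute the host, user, and password
# shown in your Relyt ONE console before running.
psql "host=<your-project-host> port=5432 dbname=postgres user=<your-user> sslmode=require"
```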
2. Explore the Dataset
1. List schema objects
2. Preview sample rows
3. Check table sizes
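The three steps above might look like this in psql; the `lineitem` and `orders` table names come from the standard TPC-H schema, and using `count(*)` as a size check (rather than a storage-size function, which may not apply to external tables) is an assumption:

```sql
-- 1. List schema objects: tables and the q01–q22 query views
\dt tpch.*
\dv tpch.*

-- 2. Preview sample rows from a TPC-H table
SELECT * FROM tpch.lineitem LIMIT 5;

-- 3. Check table sizes via row counts
SELECT count(*) FROM tpch.orders;
```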
3. Execute Individual Queries
All 22 standard TPC-H queries ship as views (`q01` through `q22`).
1. Run a specific query
2. Inspect execution plans
Use `EXPLAIN ANALYZE` to view execution plans and timing. The views encapsulate the official TPC-H SQL so you can focus on performance validation rather than query maintenance.
4. One-Click Performance Testing
1. Run the full suite
- Cold run loads data from S3 into cache.
- Hot run reuses cached data to demonstrate steady-state performance.
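The full suite is driven by the `run_benchmark()` function mentioned below; the exact call form is an assumption (it may be a set-returning function, as sketched here, or a procedure), so consult your project’s documentation if this shape differs:

```sql
-- run_benchmark() executes all 22 queries twice — a cold run that loads
-- data from S3, then a hot run served from cache — and reports per-query
-- execution time, row count, and status.
SELECT * FROM run_benchmark();
```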
2. Interpret the output
Hot-run metrics provide the most representative view of production behavior:
Result columns explained:
Query # | Execution Time (ms) | Row Count | Status |
---|---|---|---|
1 | 173.45 | 4 | SUCCESS |
2 | 39.07 | 100 | SUCCESS |
3 | 87.42 | 10 | SUCCESS |
4 | 77.47 | 5 | SUCCESS |
5 | 91.02 | 5 | SUCCESS |
6 | 43.94 | 1 | SUCCESS |
7 | 94.25 | 4 | SUCCESS |
8 | 108.99 | 2 | SUCCESS |
9 | 203.4 | 175 | SUCCESS |
10 | 153.52 | 20 | SUCCESS |
11 | 27.23 | 1048 | SUCCESS |
12 | 68.5 | 2 | SUCCESS |
13 | 222.47 | 42 | SUCCESS |
14 | 61.18 | 1 | SUCCESS |
15 | 87.09 | 1 | SUCCESS |
16 | 64.68 | 18314 | SUCCESS |
17 | 67.44 | 1 | SUCCESS |
18 | 179.22 | 57 | SUCCESS |
19 | 185.97 | 1 | SUCCESS |
20 | 64.35 | 186 | SUCCESS |
21 | 255.63 | 100 | SUCCESS |
22 | 53.99 | 7 | SUCCESS |
- `query_num`: Query number (1-22)
- `execution_time_ms`: Hot Run execution time in milliseconds
- `row_count`: Number of rows returned
- `status`: Execution status
Performance Reference
Below are TPC-H 1GB Hot Run results for Relyt ONE under different configurations. Each value represents the per-query execution time in milliseconds:
query_num | 1 vCPU (ms) | 2 vCPU (ms) | 4 vCPU (ms) |
---|---|---|---|
1 | 570.57 | 323.57 | 182.67 |
2 | 82.79 | 59.52 | 50.25 |
3 | 350.26 | 171.21 | 108.80 |
4 | 260.28 | 174.01 | 93.80 |
5 | 309.06 | 176.48 | 107.33 |
6 | 148.61 | 86.23 | 50.42 |
7 | 280.33 | 202.57 | 117.94 |
8 | 334.19 | 233.00 | 127.51 |
9 | 596.58 | 379.15 | 234.15 |
10 | 322.58 | 235.33 | 175.58 |
11 | 60.28 | 48.04 | 35.27 |
12 | 216.86 | 124.52 | 80.13 |
13 | 761.56 | 450.64 | 229.10 |
14 | 173.94 | 119.24 | 77.98 |
15 | 273.00 | 167.99 | 99.32 |
16 | 115.87 | 90.06 | 75.97 |
17 | 211.57 | 120.52 | 79.28 |
18 | 527.96 | 345.42 | 195.62 |
19 | 581.96 | 360.51 | 216.49 |
20 | 173.12 | 107.07 | 80.55 |
21 | 853.32 | 511.36 | 327.68 |
22 | 65.70 | 60.71 | 62.79 |
The `run_benchmark()` function automatically performs Cold/Hot dual testing; the reference data in the table above are all Hot Run results. Actual performance may vary due to network conditions, concurrent load, and other factors.
Custom Testing
If you need to test larger datasets (e.g., 10GB, 100GB, 1TB), follow this process:
- Generate data: Use the official TPC-H `dbgen` tool to generate a dataset at the desired scale
- Upload to S3: Upload the generated data files to your object storage
- Create external tables: Map the S3 data files through external tables in Relyt ONE
- Execute queries: Reference the query views in the pre-configured `tpch` schema, or test with custom queries
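As an illustration of the external-table step, a mapping for the TPC-H `lineitem` table might look like the sketch below. The column definitions follow the standard TPC-H schema, but the `CREATE EXTERNAL TABLE` syntax, the `tpch_100g` schema name, and the S3 path are all assumptions — check the Relyt ONE external-table documentation for the exact form:

```sql
-- Hypothetical external table over dbgen output stored on S3
-- (syntax, schema name, and location are placeholders).
CREATE EXTERNAL TABLE tpch_100g.lineitem (
    l_orderkey      BIGINT,
    l_partkey       BIGINT,
    l_suppkey       BIGINT,
    l_linenumber    INTEGER,
    l_quantity      DECIMAL(15,2),
    l_extendedprice DECIMAL(15,2),
    l_discount      DECIMAL(15,2),
    l_tax           DECIMAL(15,2),
    l_returnflag    CHAR(1),
    l_linestatus    CHAR(1),
    l_shipdate      DATE,
    l_commitdate    DATE,
    l_receiptdate   DATE,
    l_shipinstruct  CHAR(25),
    l_shipmode      CHAR(10),
    l_comment       VARCHAR(44)
)
LOCATION 's3://your-bucket/tpch-100g/lineitem/';
```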
FAQ
Q: Why execute both a Cold Run and a Hot Run?
A: The Cold Run reflects performance when data is loaded from S3 for the first time, while the Hot Run reflects performance when the data is already cached. The Hot Run is closer to actual production performance, since hot data is cached automatically.
Q: Can TPC-H testing represent my actual business performance?
A: TPC-H provides a standardized performance reference, but actual business performance also depends on data models, query patterns, indexing strategies, and other factors. It’s recommended to conduct additional testing with real business data.