
Overview

Relyt ONE delivers exceptional analytical performance powered by our caching service and the DuckDB execution engine. To enable users to quickly evaluate the system’s OLAP capabilities, we provide a pre-configured TPC-H standard benchmark dataset that supports out-of-the-box, one-click performance testing.

What is TPC-H?

TPC-H is a decision support benchmark standard established by the Transaction Processing Performance Council (TPC). It simulates a real-world data warehouse environment with 8 business tables and 22 complex analytical queries, covering common OLAP scenarios including aggregations, joins, and subqueries. TPC-H is widely used to evaluate and compare the analytical performance of database systems.
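For a flavor of the workload, the simplest of the 22 queries, Q6 (the forecasting revenue change query), is a single-table scan with a filtered aggregation. It is shown here with its default substitution parameters from the TPC-H specification:

```sql
-- TPC-H Q6: revenue that would have resulted from eliminating certain discounts
SELECT sum(l_extendedprice * l_discount) AS revenue
FROM   lineitem
WHERE  l_shipdate >= date '1994-01-01'
  AND  l_shipdate <  date '1994-01-01' + interval '1' year
  AND  l_discount BETWEEN 0.06 - 0.01 AND 0.06 + 0.01
  AND  l_quantity < 24;
```

Most of the other queries are considerably heavier, joining several of the 8 tables with grouping, ordering, and subqueries.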

Why Use a Pre-configured Dataset?

  • Quick start: Ready immediately after project creation
  • Standardized testing: Industry-recognized benchmark with meaningful, comparable metrics
  • Repeatability: Unified data layout and configuration across environments

Dataset Architecture

The pre-configured TPC-H 1GB dataset is created as external tables, with the actual data stored on S3 object storage. Relyt ONE's caching service automatically caches hot data, delivering high-throughput, low-latency access.

Usage Guide

1. Connect to Database

The pre-configured dataset lives in the tpch schema of the postgres database.

Connect via psql

PGSSLMODE=require psql -h relyt-g94eh8mxxxxx.us-ca.cs.data.cloud -U relytdb_owner -d postgres
Replace the host address in the connection string with your actual project address. You can obtain the complete connection information from the console.

2. Explore the Dataset

Step 1: List schema objects

-- Tables
\dt tpch.*

-- Views (pre-defined 22 queries)
\dv tpch.*
Step 2: Preview sample rows

SELECT * FROM tpch.nation;

SELECT * FROM tpch.lineitem LIMIT 10;
Step 3: Check table sizes

SELECT 'customer' AS table_name, count(*) FROM tpch.customer
UNION ALL
SELECT 'orders', count(*) FROM tpch.orders
UNION ALL
SELECT 'lineitem', count(*) FROM tpch.lineitem;
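
At scale factor 1 the row counts are fixed by the TPC-H specification, so counting all 8 tables doubles as a sanity check that the dataset is complete. The query below extends the pattern above; the remaining table names are assumed to follow the standard TPC-H schema:

```sql
-- Expected counts at SF1 per the TPC-H spec: region 5, nation 25,
-- supplier 10,000, customer 150,000, part 200,000, partsupp 800,000,
-- orders 1,500,000, lineitem 6,001,215
SELECT 'region' AS table_name, count(*) FROM tpch.region
UNION ALL SELECT 'nation',   count(*) FROM tpch.nation
UNION ALL SELECT 'supplier', count(*) FROM tpch.supplier
UNION ALL SELECT 'customer', count(*) FROM tpch.customer
UNION ALL SELECT 'part',     count(*) FROM tpch.part
UNION ALL SELECT 'partsupp', count(*) FROM tpch.partsupp
UNION ALL SELECT 'orders',   count(*) FROM tpch.orders
UNION ALL SELECT 'lineitem', count(*) FROM tpch.lineitem;
```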

3. Execute Individual Queries

All 22 standard TPC-H queries ship as views (q01 through q22).
Step 1: Run a specific query

SELECT * FROM tpch.q03;

SELECT * FROM tpch.q06;
Step 2: Inspect execution plans

Use EXPLAIN ANALYZE to view execution plans and timing:
EXPLAIN ANALYZE SELECT * FROM tpch.q01;
Views encapsulate the official TPC-H SQL so you can focus on performance validation rather than query maintenance.
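
If Relyt ONE exposes the standard PostgreSQL catalogs (the psql workflow above suggests it does), you can also read the SQL behind any query view directly:

```sql
-- Pretty-printed view definition via the standard PostgreSQL catalog function
SELECT pg_get_viewdef('tpch.q01'::regclass, true);

-- Or, in psql, show the definition together with column metadata:
\d+ tpch.q01
```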

4. One-Click Performance Testing

Step 1: Run the full suite

SELECT * FROM tpch.run_benchmark();
The function executes each query twice:
  • Cold run loads data from S3 into cache.
  • Hot run reuses cached data to demonstrate steady-state performance.
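
You can reproduce the cold/hot comparison manually for a single query using psql's client-side timing, which works against any server psql can reach:

```sql
\timing on
SELECT count(*) FROM tpch.q01;  -- first run: cold, loads data from S3 into cache
SELECT count(*) FROM tpch.q01;  -- second run: hot, served from cache
```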
Step 2: Interpret the output

Hot-run metrics provide the most representative view of production behavior:
Query #   Execution Time (ms)   Row Count   Status
1         173.45                4           SUCCESS
2         39.07                 100         SUCCESS
3         87.42                 10          SUCCESS
4         77.47                 5           SUCCESS
5         91.02                 5           SUCCESS
6         43.94                 1           SUCCESS
7         94.25                 4           SUCCESS
8         108.99                2           SUCCESS
9         203.40                175         SUCCESS
10        153.52                20          SUCCESS
11        27.23                 1048        SUCCESS
12        68.50                 2           SUCCESS
13        222.47                42          SUCCESS
14        61.18                 1           SUCCESS
15        87.09                 1           SUCCESS
16        64.68                 18314       SUCCESS
17        67.44                 1           SUCCESS
18        179.22                57          SUCCESS
19        185.97                1           SUCCESS
20        64.35                 186         SUCCESS
21        255.63                100         SUCCESS
22        53.99                 7           SUCCESS
Result columns explained:
  • query_num: Query number (1-22)
  • execution_time_ms: Hot Run execution time in milliseconds
  • row_count: Number of rows returned
  • status: Execution status
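
To summarize a run, you can aggregate the per-query times client-side. The snippet below is a small sketch using the hot-run numbers from the table above:

```python
# Hot-run execution times (ms) for q01-q22, copied from the run_benchmark() table above
hot_ms = [173.45, 39.07, 87.42, 77.47, 91.02, 43.94, 94.25, 108.99,
          203.40, 153.52, 27.23, 68.50, 222.47, 61.18, 87.09, 64.68,
          67.44, 179.22, 185.97, 64.35, 255.63, 53.99]

total = sum(hot_ms)                                             # sequential suite wall time
slowest = max(range(len(hot_ms)), key=lambda i: hot_ms[i]) + 1  # 1-based query number

print(f"total: {total:.2f} ms, mean: {total / len(hot_ms):.2f} ms, slowest: q{slowest}")
```

For this sample run the whole 22-query suite completes in about 2.4 seconds of hot-run time, with q21 the slowest single query.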

Performance Reference

Below are TPC-H 1GB Hot Run results for Relyt ONE under different configurations. Each value represents the per-query execution time in milliseconds:
query_num   1 vCPU (ms)   2 vCPU (ms)   4 vCPU (ms)
1           570.57        323.57        182.67
2           82.79         59.52         50.25
3           350.26        171.21        108.80
4           260.28        174.01        93.80
5           309.06        176.48        107.33
6           148.61        86.23         50.42
7           280.33        202.57        117.94
8           334.19        233.00        127.51
9           596.58        379.15        234.15
10          322.58        235.33        175.58
11          60.28         48.04         35.27
12          216.86        124.52        80.13
13          761.56        450.64        229.10
14          173.94        119.24        77.98
15          273.00        167.99        99.32
16          115.87        90.06         75.97
17          211.57        120.52        79.28
18          527.96        345.42        195.62
19          581.96        360.51        216.49
20          173.12        107.07        80.55
21          853.32        511.36        327.68
22          65.70         60.71         62.79
The run_benchmark() function always performs both Cold and Hot runs; the reference figures above are Hot Run results. Actual performance may vary with network conditions, concurrent load, and other factors.
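
One way to read this table is as scaling efficiency: the heavy scan and join queries speed up close to linearly from 1 to 4 vCPUs, while very small queries are dominated by fixed overheads. The sketch below computes the 1 -> 4 vCPU speedup for a few representative queries taken from the table:

```python
# Per-query Hot Run times (ms) from the reference table: query -> (1 vCPU, 2 vCPU, 4 vCPU)
times = {
    1:  (570.57, 323.57, 182.67),   # large aggregation over lineitem
    6:  (148.61, 86.23, 50.42),     # single-table scan with filter
    13: (761.56, 450.64, 229.10),   # customer/orders join
    21: (853.32, 511.36, 327.68),   # heaviest multi-join query
    22: (65.70, 60.71, 62.79),      # small query: fixed overheads dominate
}

speedup = {q: round(c1 / c4, 2) for q, (c1, c2, c4) in times.items()}
for q, s in speedup.items():
    print(f"q{q:02d}: 1 -> 4 vCPU speedup = {s}x")
```

Queries 1 and 13 scale at better than 3x on 4 vCPUs, while q22 barely moves, which is the expected pattern for a sub-100 ms query.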

Custom Testing

If you need to test larger datasets (e.g., 10GB, 100GB, 1TB), follow this process:
  1. Generate Data: Use the official TPC-H dbgen tool to generate datasets at the desired scale
  2. Upload to S3: Upload the generated data files to your object storage
  3. Create External Tables: Map S3 data files through external tables in Relyt ONE
  4. Execute Queries: Reference query views in the pre-configured tpch schema or test with custom queries
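
The data-generation and upload steps can be sketched as follows. This assumes dbgen has already been built from the official TPC-H tool kit, and the bucket name and prefix are placeholders for your own object storage:

```shell
# 1. Generate a 10 GB dataset (-s sets the scale factor); produces eight .tbl files
./dbgen -s 10

# 2. Upload the generated files to your S3 bucket (bucket/prefix are placeholders)
aws s3 cp . s3://my-tpch-bucket/tpch-sf10/ --recursive --exclude "*" --include "*.tbl"
```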

FAQ

Q: Why execute Cold Run and Hot Run?
A: Cold Run reflects performance when querying data loaded from S3 for the first time, while Hot Run reflects performance when data is already cached. Hot Run is closer to actual production performance as hot data is automatically cached.
Q: Can TPC-H testing represent my actual business performance?
A: TPC-H provides a standardized performance reference, but actual business performance also depends on data models, query patterns, indexing strategies, and other factors. We recommend additional testing with real business data.