Relyt ONE ships with PGMQ v1.7 enabled for every Postgres cluster, so you can run durable message queues directly in Postgres. Use simple SQL to send, receive, delete, and archive jobs without leaving the database.

Key Capabilities

  • Visibility timeouts keep a message invisible to other consumers until the timeout expires, at which point unacknowledged work reappears
  • SQL-first workflow—no background workers or extra services to manage
  • Archive or delete messages to match compliance or replay needs
  • Pure Postgres deployment with the extension already managed by Relyt ONE

Quick Start

Step 1: Enable the extension

CREATE EXTENSION IF NOT EXISTS pgmq;

Step 2: Create a queue

SELECT pgmq.create('my_queue');
Each queue is a table under the pgmq schema (q_<queue_name>). Archives live under a_<queue_name>.
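To confirm a queue was created, recent PGMQ releases expose a pgmq.list_queues() function that returns every queue in the database along with its creation time (verify availability against your installed version):

```sql
-- List all queues managed by pgmq in this database
SELECT queue_name, created_at
FROM pgmq.list_queues();
```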

Step 3: Publish messages

SELECT pgmq.send(
  queue_name => 'my_queue',
  msg        => '{"task": "resize", "asset_id": 42}',
  delay      => 5  -- optional seconds to defer visibility
);
Use pgmq.send_batch with an array of JSONB payloads (jsonb[]) to enqueue many messages in a single call.
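A batch enqueue might look like the sketch below; each element of the jsonb[] array becomes its own message, and the call returns the assigned message IDs:

```sql
-- Enqueue three messages in one call; one row of output per message ID
SELECT * FROM pgmq.send_batch(
  queue_name => 'my_queue',
  msgs       => ARRAY[
    '{"task": "resize", "asset_id": 1}',
    '{"task": "resize", "asset_id": 2}',
    '{"task": "resize", "asset_id": 3}'
  ]::jsonb[]
);
```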

Step 4: Consume messages

SELECT * FROM pgmq.read(
  queue_name => 'my_queue',
  vt         => 30,  -- seconds to keep the message invisible to others
  qty        => 10
);
pgmq.pop('my_queue') is a shortcut that reads and deletes a single message immediately.
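The trade-off between the two consume styles is delivery semantics: pgmq.read plus an explicit delete gives at-least-once delivery (a crashed consumer's message reappears after the vt), while pgmq.pop is at-most-once (if the consumer crashes after popping, the message is gone). A minimal pop looks like:

```sql
-- Read and delete a single message in one statement (at-most-once delivery);
-- returns zero rows if the queue has no visible messages
SELECT msg_id, message
FROM pgmq.pop('my_queue');
```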

Step 5: Complete work

  • Delete forever: SELECT pgmq.delete('my_queue', msg_id);
  • Archive for replay: SELECT pgmq.archive(queue_name => 'my_queue', msg_id => 7);
  • Inspect archives via SELECT * FROM pgmq.a_my_queue;
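For high-throughput consumers, pgmq.delete also accepts an array of message IDs, so a batch of processed work can be acknowledged in one round trip (the bigint[] overload is an assumption; confirm it in your PGMQ version):

```sql
-- Delete several processed messages at once;
-- the result set lists the IDs that were actually deleted
SELECT * FROM pgmq.delete('my_queue', ARRAY[1, 2, 3]::bigint[]);
```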

Step 6: Clean up

SELECT pgmq.drop_queue('my_queue');

Visibility Timeout Tips

  • Set vt longer than the expected processing time; unfinished work reappears when the timeout expires.
  • Use shorter vt for idempotent consumers needing faster retries.
  • Always delete or archive after successful processing to avoid duplicates.
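When a job needs more time than its original vt allows, PGMQ provides a set_vt function to extend the visibility window of an in-flight message rather than letting it reappear mid-processing (check your installed version for the exact signature):

```sql
-- Push message 7's visibility out another 120 seconds from now
SELECT * FROM pgmq.set_vt(
  queue_name => 'my_queue',
  msg_id     => 7,
  vt         => 120
);
```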

Best Practices

  1. Keep messages small—store big payloads in tables and enqueue references.
  2. Monitor lag by counting rows in pgmq.q_* tables or using read_ct.
  3. Standardize headers for trace IDs or tenant metadata via the optional headers parameter.
  4. Back up archive tables if you rely on replay semantics.
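For the monitoring practice above, PGMQ ships a metrics function that snapshots queue depth and message age, which is easier to wire into alerting than counting rows by hand (column names assume a recent PGMQ release):

```sql
-- Snapshot depth and age for one queue
SELECT queue_name, queue_length, oldest_msg_age_sec, total_messages
FROM pgmq.metrics('my_queue');

-- Or snapshot every queue at once
SELECT * FROM pgmq.metrics_all();
```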

Next Steps