Queue claim-check

Most queues have message size limits. SQS caps individual messages at 256 KB. Sensitive payloads also aren't a great fit for queue bodies that linger in dead-letter queues and logs.

The "claim check" enterprise pattern handles this: put a pointer in the queue, store the blob somewhere else. kubbi is a managed claim-check service with agent-flavored ergonomics: the queue message carries only a short claim URL, so the 256 KB cap stops mattering.


How it works

  1. The producer creates a kubbi with the payload and receives a claim URL.
  2. The queue message carries the claim URL (plus routing keys, correlation IDs, anything else queue-relevant).
  3. The consumer dequeues, follows the claim URL, retrieves the payload server-side.
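The three steps can be sketched end to end. kubbi's actual API isn't shown here, so the `ClaimStore` class below is an in-memory stand-in (the claim-URL shape, TTL, and single-read semantics are assumptions), and a `deque` stands in for the queue:

```python
import secrets
import time
from collections import deque

class ClaimStore:
    """In-memory stand-in for a claim-check service like kubbi."""

    def __init__(self):
        self._blobs = {}

    def create(self, payload: bytes, ttl_s: float = 3600, single_read: bool = True) -> str:
        """Step 1: store the payload, hand back a claim URL."""
        token = secrets.token_urlsafe(16)
        self._blobs[token] = (payload, time.monotonic() + ttl_s, single_read)
        return f"https://kubbi.example/claims/{token}"  # hypothetical URL shape

    def claim(self, url: str) -> bytes:
        """Step 3: redeem a claim URL for the payload."""
        token = url.rsplit("/", 1)[-1]
        payload, expires, single_read = self._blobs[token]
        if time.monotonic() > expires:
            del self._blobs[token]
            raise KeyError("claim expired")
        if single_read:
            del self._blobs[token]  # burn the claim after the first read
        return payload

store = ClaimStore()
queue = deque()  # stands in for SQS or any other queue

# Producer: the big payload goes to the store, only the URL goes on the queue.
big_payload = b"x" * 1_000_000  # well over a 256 KB queue cap
url = store.create(big_payload, ttl_s=7200, single_read=True)
queue.append({"claim_url": url, "correlation_id": "job-42"})  # step 2

# Consumer: dequeue the small message, follow the claim URL.
msg = queue.popleft()
data = store.claim(msg["claim_url"])
```

The queue message stays tiny regardless of payload size, and a single-read claim is gone after the consumer redeems it.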

Examples

On-premises ETL → cloud analytics agent

An on-premises ETL pipeline (behind a corporate firewall) with outbound HTTPS access creates a kubbi for a transformed financial dataset (CSV payload sized to the plan's single-content cap, 2-hour TTL, single-read) and writes the claim URL into a shared job-tracking database. The cloud analytics agent polls for the URL, claims the kubbi, and processes the dataset. Larger datasets ship as multi-file packages instead.
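The job-tracking handoff might look like the sketch below. The table shape, job IDs, and helper names are illustrative (the source doesn't specify them); SQLite stands in for the shared database:

```python
import sqlite3

# Shared job-tracking database; an in-memory SQLite DB stands in here.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE jobs (
        job_id    TEXT PRIMARY KEY,
        claim_url TEXT NOT NULL,
        status    TEXT NOT NULL DEFAULT 'pending'
    )
""")

def publish_claim(job_id: str, claim_url: str) -> None:
    """ETL side: record the claim URL once the kubbi exists."""
    db.execute(
        "INSERT INTO jobs (job_id, claim_url) VALUES (?, ?)",
        (job_id, claim_url),
    )
    db.commit()

def poll_for_claim(job_id: str):
    """Agent side: one poll iteration; returns the URL when ready, else None."""
    row = db.execute(
        "SELECT claim_url FROM jobs WHERE job_id = ? AND status = 'pending'",
        (job_id,),
    ).fetchone()
    if row is None:
        return None
    db.execute("UPDATE jobs SET status = 'claimed' WHERE job_id = ?", (job_id,))
    db.commit()
    return row[0]

publish_claim("etl-2024-07-01", "https://kubbi.example/claims/abc123")
url = poll_for_claim("etl-2024-07-01")
```

Only the claim URL crosses the firewall boundary via the shared table; the dataset itself travels over outbound HTTPS to kubbi and back down to the agent.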

Cron job → async reporting agent

A cron job at 00:00 drops weekly aggregated usage statistics into a kubbi (8-hour TTL, single-read) and records the claim URL in a jobs table. The AI reporting agent runs at 06:00, claims the kubbi, and produces the report. No shared in-memory state or schema coordination needed.
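For a scheduled handoff like this, the TTL has to outlive the gap between the producer and consumer runs, plus some slack for a delayed agent. A quick sizing check (the slack value is an assumption, not from the source):

```python
from datetime import timedelta

cron_at  = timedelta(hours=0)   # weekly stats job fires at 00:00
agent_at = timedelta(hours=6)   # reporting agent fires at 06:00
slack    = timedelta(hours=2)   # headroom for a delayed agent run

# The claim must still be alive when the agent finally reads it.
min_ttl = (agent_at - cron_at) + slack
```

With a 6-hour gap and 2 hours of slack, anything under an 8-hour TTL risks the claim expiring before the agent runs.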


Related