

Streams are append-only (a log), so records are always added to the end, or tail, of the stream. When you append records, S2 responds with an acknowledgement only once your data is fully durable. The acknowledgement includes the range of sequence numbers assigned, the timestamps of the first and last record, and the current tail.
{
  "start": { "seq_num": 42, "timestamp": 1713812735000 },
  "end":   { "seq_num": 44, "timestamp": 1713812735012 },
  "tail":  { "seq_num": 44, "timestamp": 1713812735012 }
}
  • start — position of the first record appended.
  • end — one past the last record appended (so end.seq_num - start.seq_num is the number of records).
  • tail — current tail of the stream, which can exceed end if there have been concurrent appends.
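The fields above can be read off an acknowledgement directly. A minimal sketch, using a plain dict that mirrors the JSON shape above (the field names come from the example; the dict itself is a stand-in, not a real SDK type):

```python
# Interpreting an append acknowledgement (hypothetical dict mirroring the
# JSON example above).
ack = {
    "start": {"seq_num": 42, "timestamp": 1713812735000},
    "end":   {"seq_num": 44, "timestamp": 1713812735012},
    "tail":  {"seq_num": 44, "timestamp": 1713812735012},
}

# end is exclusive, so the count of records made durable is end - start.
records_appended = ack["end"]["seq_num"] - ack["start"]["seq_num"]

# If tail has moved past end, other writers appended concurrently.
concurrent_appends = ack["tail"]["seq_num"] > ack["end"]["seq_num"]

print(records_appended)    # 2
print(concurrent_appends)  # False
```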

Batching

The append API accepts batches of records. A single batch can contain up to 1000 records or 1 MiB of data, and is appended atomically: either all records in the batch become durable, or none. For payloads larger than the 1 MiB record size limit, the typical approach is to store the data externally (e.g. in object storage) and append a pointer to it as a record. You can also serialize large messages across multiple records, using record headers as metadata — see this blog post for patterns and examples with the TypeScript SDK.
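The batch limits above can be enforced client-side before calling the append API. A minimal sketch, assuming records are plain byte strings (the helper name and record type are illustrative, not part of any SDK):

```python
# Split records into batches that respect the documented limits of
# 1000 records and 1 MiB per batch.
MAX_RECORDS = 1000
MAX_BYTES = 1 << 20  # 1 MiB

def make_batches(records):
    """Yield lists of records, each within the batch limits."""
    batch, batch_bytes = [], 0
    for record in records:
        if batch and (len(batch) >= MAX_RECORDS
                      or batch_bytes + len(record) > MAX_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(record)
        batch_bytes += len(record)
    if batch:
        yield batch

# 2500 small records hit the record-count limit before the byte limit.
batches = list(make_batches([b"x" * 512] * 2500))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Each yielded batch would then be appended atomically as a single request.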

Latency

Acknowledgment latency depends on the stream’s configured storage class:
  • Standard streams ack writes within 400 milliseconds.
  • Express streams ack writes within 40 milliseconds.
Both storage classes make data durable in multiple availability zones before acknowledging the append.
Latency expectations assume an authenticated client in the same region as the basin.

Throughput

To optimize throughput,
  1. Send fewer, larger batches. This is especially important if you have a very high rate of tiny appends, as each client is limited to 200 batches per second.
  2. Keep multiple writes in-flight. S2’s append session support lets a client pipeline multiple batches on a persistent connection while preserving submission order – if any batch fails, subsequent batches won’t become durable. Concurrent single-batch appends can also increase throughput, but they are independent requests, so the order in which batches become durable is not guaranteed.
If records arrive one at a time, use the SDK Producer API. It preserves the append session's ordering model while handling batching and backpressure for you.
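The pipelining idea in step 2 can be sketched with asyncio: keep several batches in flight, but always await the oldest one first so acknowledgements are consumed in submission order. `append_batch` here is a stand-in that simulates a durable append; it is not a real S2 call:

```python
import asyncio

async def append_batch(batch):
    """Stand-in for an append call: simulated latency, fake ack."""
    await asyncio.sleep(0.01)
    return len(batch)  # pretend the ack is the number of records

async def pipelined_append(batches, max_in_flight=5):
    acks, in_flight = [], []
    for batch in batches:
        in_flight.append(asyncio.create_task(append_batch(batch)))
        if len(in_flight) >= max_in_flight:
            # Await the oldest batch first, preserving submission order.
            acks.append(await in_flight.pop(0))
    for task in in_flight:
        acks.append(await task)
    return acks

acks = asyncio.run(pipelined_append([[1, 2], [3], [4, 5, 6]]))
print(acks)  # [2, 1, 3]
```

A real append session additionally aborts later batches if an earlier one fails, which this sketch does not attempt to model.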

See also

Appending SDK

Use append sessions, the Producer API, batching, and backpressure.

Append API

Review request parameters, batch limits, and append conditions.

Concurrency control

Coordinate writers with match sequence numbers and fencing tokens.