

SDKs expose three ways to append records:
  • append for a single atomic batch.
  • appendSession for ordered, pipelined batches on a persistent connection.
  • Producer for submitting individual records with per-record tickets.
Batches are the atomic unit of appending. A batch can contain up to 1000 records or 1 MiB of data. Each append — whether a single call or part of a session — writes exactly one batch. For anything beyond simple one-off writes, use an append session. Sessions enable pipelining (multiple batches in flight), maintain strict ordering, and gracefully handle throttling. The Producer API builds on sessions to provide per-record semantics.
For high throughput, send batches approaching the 1000-record or 1 MiB limit. Each client is limited to 200 append batches per second, so small batches hit the batch-rate limit sooner. Single-batch callers over the limit may receive a 429 response with a Retry-After header; append sessions and producers use backpressure to match S2's capacity.
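One way to stay near those limits is to pre-chunk a record source so every batch respects both caps before it is submitted. The limits below come from this page; chunkIntoBatches is a hypothetical helper, not part of the SDK:

```typescript
// Per-batch limits stated on this page; chunkIntoBatches is a hypothetical
// helper, not part of the SDK.
const MAX_BATCH_RECORDS = 1000;
const MAX_BATCH_BYTES = 1024 * 1024; // 1 MiB

function chunkIntoBatches(bodies: string[]): string[][] {
  const batches: string[][] = [];
  let current: string[] = [];
  let currentBytes = 0;
  const encoder = new TextEncoder();
  for (const body of bodies) {
    const size = encoder.encode(body).length;
    // Flush the current batch if adding this record would exceed either limit.
    if (
      current.length > 0 &&
      (current.length >= MAX_BATCH_RECORDS ||
        currentBytes + size > MAX_BATCH_BYTES)
    ) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(body);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each resulting chunk can then be wrapped in a single AppendInput and submitted as one batch.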
Appends also support match sequence numbers and fencing tokens. See concurrency control for the write coordination model.
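As a sketch of how those options might be attached to an append, the shape below uses assumed names (matchSeqNum, fencingToken) — see the concurrency control page for the SDK's exact surface:

```typescript
// Hypothetical shape: the option names are assumptions, not the SDK's exact API.
interface AppendOptions {
  matchSeqNum?: number; // reject the append unless the next seqNum equals this
  fencingToken?: string; // reject the append unless this token is currently set
}

// Conditional append: only succeeds if nothing was written since seqNum 42,
// and only while "writer-a" holds the fence.
const options: AppendOptions = { matchSeqNum: 42, fencingToken: "writer-a" };
```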

Single-batch append

A single batch of records can be appended by calling append on a stream client:
const stream = basin.stream(streamName);
const ack = await stream.append(
	AppendInput.create([
		AppendRecord.string({ body: "first event" }),
		AppendRecord.string({ body: "second event" }),
	]),
);

// ack tells us where the records landed
console.log(`Wrote records ${ack.start.seqNum} through ${ack.end.seqNum - 1}`);
This works well for simple cases, but each append is a separate HTTP request. For higher throughput and guaranteed ordering across batches, use an append session or the Producer API.

Append session

An append session maintains a bidirectional stream with S2: you send batches of records, and S2 sends back acknowledgements.
Under the hood, sessions use S2S, a minimal binary framing layer over HTTP/2. In environments without HTTP/2 (notably browsers), the TypeScript SDK falls back to HTTP/1.1. You get the same session APIs, but append batches cannot be pipelined, so there will only be one in-flight batch at a time.
Sessions enable pipelining: multiple batches can be in flight simultaneously, dramatically improving throughput compared to waiting for each append to complete before sending the next, while still maintaining the ordering of records across batches. Contrast this with multiple concurrent single-batch appends: those would also allow high throughput, but each concurrent append would be independent, and the order in which concurrent batches become durable would not be guaranteed.

A session provides a stateful handle on a stream that is being appended to. Batches are appended to the session by calling submit(), an async method that resolves to a BatchSubmitTicket once the batch is accepted by the session. submit() can exhibit backpressure, to prevent overwhelming the session.

A ticket tracks an append while it is pending. The batch is only durable once it has been acknowledged by S2; this can be awaited via the ticket's ack() method, which resolves only when the written batch is fully durable on object storage. The resulting AppendAck contains the batch's resulting position in the stream.
const session = await stream.appendSession();

// Submit a batch - this enqueues it and returns a ticket
const ticket = await session.submit(
	AppendInput.create([
		AppendRecord.string({ body: "event-1" }),
		AppendRecord.string({ body: "event-2" }),
	]),
);

// The ticket resolves when the batch is durable
const ack = await ticket.ack();
console.log(`Durable at seqNum ${ack.start.seqNum}`);

await session.close();
You do not need to await each ticket immediately. For some use cases, “fire-and-forget” may be fine; for others, you need to confirm that a specific append has finished, or that all writes up to a certain point have been persisted. If an acknowledgement is not received for a batch within the configured requestTimeout, the SDK will mark it failed and either retry, if so configured, or surface an error.
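The "await everything up to a checkpoint" pattern — submit several batches without awaiting each acknowledgement, then await all pending tickets at once — can be sketched without the SDK. Here tickets are stand-ins, with BatchSubmitTicket's ack() modeled as a plain promise:

```typescript
// Stand-in types: no SDK calls here, this only illustrates the checkpoint
// pattern of pipelining submits and awaiting acknowledgements in bulk.
type Ack = { startSeqNum: number };
type Ticket = { ack: () => Promise<Ack> };

function fakeSubmit(startSeqNum: number): Ticket {
  return { ack: async () => ({ startSeqNum }) };
}

async function pipelined(): Promise<number[]> {
  // Submit several batches without awaiting each acknowledgement...
  const tickets: Ticket[] = [];
  for (let i = 0; i < 3; i++) tickets.push(fakeSubmit(i * 10));
  // ...then await all pending tickets at a checkpoint.
  const acks = await Promise.all(tickets.map((t) => t.ack()));
  return acks.map((a) => a.startSeqNum);
}
```

With a real session, the same shape applies: collect the tickets returned by session.submit() and Promise.all their ack() calls when you need a durability checkpoint.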

Backpressure

The session tracks how much data is “in flight” (submitted but not yet acknowledged). When you hit the limits, submit() blocks until capacity frees up. This is intentional: it prevents unbounded memory growth and naturally throttles your application to match what S2 can handle.
Option              Default  Description
maxInflightBytes    5 MiB    Maximum unacknowledged bytes before submit() blocks
maxInflightBatches  None     Maximum unacknowledged batches (optional additional limit)
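As a sketch, these limits might be set when opening the session. The option names come from the table above; passing them to appendSession() in this shape is an assumption:

```typescript
// Option names come from the backpressure table; the exact call shape for
// passing them to appendSession() is an assumption.
const sessionOptions = {
  maxInflightBytes: 5 * 1024 * 1024, // default: submit() blocks beyond 5 MiB unacked
  maxInflightBatches: 100, // optional extra cap on unacknowledged batches
};
// const session = await stream.appendSession(sessionOptions);
```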

Producer

The Producer API provides a record-oriented interface over append sessions. You submit individual records and get back a ticket for each one. The producer groups submitted records into batches according to configurable thresholds. This is particularly useful when:
  • You’re receiving records one at a time (from a message queue, HTTP requests, etc.)
  • You want confirmation that specific records are durable
  • You want append-session ordering, pipelining, and backpressure without managing batch boundaries yourself
const producer = new Producer(
	new BatchTransform({ lingerDurationMillis: 5 }),
	await stream.appendSession(),
);

// Submit individual records
const ticket = await producer.submit(
	AppendRecord.string({ body: "my event" }),
);

// Get the exact sequence number for this record
const ack = await ticket.ack();
console.log(`Record durable at seqNum ${ack.seqNum()}`);

await producer.close();
The producer maintains the same ordering guarantee as the underlying append session: records are durable in exactly the order you submitted them, and each per-record ticket resolves with the correct sequence number.

Batching Configuration

Option           Default  Description
linger           5 ms     How long to wait for more records before flushing a partial batch
maxBatchRecords  1000     Flush when the batch reaches this many records
maxBatchBytes    1 MiB    Flush when the batch reaches this size
The producer flushes whenever any threshold is hit. For latency-sensitive applications, a shorter linger time means records are written sooner. For throughput-sensitive applications, a longer linger time means more efficient batching.
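The two ends of that tradeoff might look like the configs below. lingerDurationMillis appears in this page's Producer example; whether BatchTransform accepts the other two thresholds under exactly these names is an assumption based on the table above:

```typescript
// lingerDurationMillis is used in this page's Producer example; the names
// maxBatchRecords / maxBatchBytes are assumptions taken from the table above.
const latencySensitive = {
  lingerDurationMillis: 1, // flush partial batches almost immediately
  maxBatchRecords: 1000,
  maxBatchBytes: 1024 * 1024,
};
const throughputSensitive = {
  lingerDurationMillis: 50, // wait longer so batches fill closer to the limits
  maxBatchRecords: 1000,
  maxBatchBytes: 1024 * 1024,
};
// new BatchTransform(latencySensitive) writes records sooner;
// new BatchTransform(throughputSensitive) batches more efficiently.
```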