SDKs expose three ways to append records:

- `append` for a single atomic batch.
- `appendSession` for ordered, pipelined batches on a persistent connection.
- `Producer` for submitting individual records with per-record tickets.
Batches are the atomic unit of appending. A batch can contain up to 1000 records or 1 MiB of data. Each append — whether a single call or part of a session — writes exactly one batch.
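As a back-of-the-envelope illustration, a pre-flight check against those two limits could look like this sketch. The `fitsInOneBatch` helper is hypothetical, not an SDK function, and it counts only record body bytes; the service's metered size also accounts for headers, so treat the byte check as approximate.

```typescript
// Hypothetical pre-flight check against the batch limits.
// The SDK and service enforce these for real; this only
// illustrates the two ceilings a single batch must respect.
const MAX_BATCH_RECORDS = 1000;
const MAX_BATCH_BYTES = 1024 * 1024; // 1 MiB

function fitsInOneBatch(records: Uint8Array[]): boolean {
  // Body bytes only; metered size also includes header overhead
  const totalBytes = records.reduce((sum, r) => sum + r.length, 0);
  return records.length <= MAX_BATCH_RECORDS && totalBytes <= MAX_BATCH_BYTES;
}
```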
For anything beyond simple one-off writes, use an append session. Sessions enable pipelining (multiple batches in flight), maintain strict ordering, and gracefully handle throttling. The Producer API builds on sessions to provide per-record semantics.
For high throughput, send batches approaching the 1000-record or 1 MiB limit. Each client is limited to 200 append batches per second, so small batches hit the batch-rate limit sooner. Single-batch callers over the limit may receive a 429 with Retry-After info; append sessions and producers use backpressure to match S2’s capacity.
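For single-batch callers, honoring the Retry-After hint might look like the following sketch. The `appendWithRetry` helper and the error shape are illustrative assumptions, not part of the SDK.

```typescript
// Hypothetical retry wrapper for single-batch appends that honors a
// Retry-After hint when the batch-rate limit (HTTP 429) is hit.
type RateLimitError = { status: 429; retryAfterMillis: number };

function isRateLimit(err: unknown): err is RateLimitError {
  return typeof err === "object" && err !== null && (err as any).status === 429;
}

async function appendWithRetry<T>(
  attempt: () => Promise<T>,
  maxRetries = 3,
): Promise<T> {
  for (let tries = 0; ; tries++) {
    try {
      return await attempt();
    } catch (err) {
      if (!isRateLimit(err) || tries >= maxRetries) {
        throw err;
      }
      // Wait as long as the server asked before retrying
      await new Promise((r) => setTimeout(r, err.retryAfterMillis));
    }
  }
}
```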
Appends also support match sequence numbers and fencing tokens. See concurrency control for the write coordination model.
## Single-batch append

A single batch of records can be appended by calling `append` on a stream client:
**TypeScript**

```typescript
const stream = basin.stream(streamName);
const ack = await stream.append(
  AppendInput.create([
    AppendRecord.string({ body: "first event" }),
    AppendRecord.string({ body: "second event" }),
  ]),
);
// ack tells us where the records landed
console.log(`Wrote records ${ack.start.seqNum} through ${ack.end.seqNum - 1}`);
```

**Python**

```python
stream = basin.stream(stream_name)
ack = await stream.append(
    AppendInput(
        records=[
            Record(body=b"first event"),
            Record(body=b"second event"),
        ]
    )
)
# ack tells us where the records landed
print(f"Wrote records {ack.start.seq_num} through {ack.end.seq_num - 1}")
```

**Go**

```go
ack, _ := stream.Append(ctx, &s2.AppendInput{
	Records: []s2.AppendRecord{
		{Body: []byte("first event")},
		{Body: []byte("second event")},
	},
})
// ack tells us where the records landed
fmt.Printf("Wrote records %d through %d\n", ack.Start.SeqNum, ack.End.SeqNum-1)
```

**Rust**

```rust
let stream = basin.stream(stream_name.clone());
let ack = stream
    .append(AppendInput::new(AppendRecordBatch::try_from_iter([
        AppendRecord::new("first event")?,
        AppendRecord::new("second event")?,
    ])?))
    .await?;
// ack tells us where the records landed
println!(
    "Wrote records {} through {}",
    ack.start.seq_num,
    ack.end.seq_num - 1
);
```
This works well for simple cases, but each append is a separate HTTP request.
For higher throughput with guaranteed ordering across batches, use an append session or the Producer API.
## Append session
An append session maintains a bidirectional stream with S2: you send batches of records, and S2 sends back acknowledgements. This enables pipelining: multiple batches can be in flight simultaneously, dramatically improving throughput compared to waiting for each append to complete before sending the next, while still preserving the order of records across batches.

Under the hood, sessions use S2S, a minimal binary framing layer over HTTP/2. In environments without HTTP/2 (notably browsers), the TypeScript SDK falls back to HTTP/1.1. You get the same session APIs, but append batches cannot be pipelined, so there will only be one in-flight batch at a time.

Contrast this with multiple concurrent single-batch appends: that would also allow high throughput, but each concurrent append would be independent, and the order in which concurrent batches become durable would not be guaranteed.
A session provides a stateful handle on a stream that is being appended to. Batches are appended to the session by calling `submit()`.

`submit()` is async: it resolves to a `BatchSubmitTicket` once the batch is accepted by the session, and it can exhibit backpressure to avoid overwhelming the session.

A ticket tracks an append while it is pending. The batch is only durable once it has been acknowledged by S2. This can be awaited via the ticket's `ack()` method, which resolves only when the written batch is fully durable on object storage. The resulting `AppendAck` describes the batch's position in the stream.
**TypeScript**

```typescript
const session = await stream.appendSession();
// Submit a batch - this enqueues it and returns a ticket
const ticket = await session.submit(
  AppendInput.create([
    AppendRecord.string({ body: "event-1" }),
    AppendRecord.string({ body: "event-2" }),
  ]),
);
// The ticket resolves when the batch is durable
const ack = await ticket.ack();
console.log(`Durable at seqNum ${ack.start.seqNum}`);
await session.close();
```

**Python**

```python
async with stream.append_session() as session:
    # Submit a batch - this enqueues it and returns a ticket
    ticket = await session.submit(
        AppendInput(
            records=[
                Record(body=b"event-1"),
                Record(body=b"event-2"),
            ]
        )
    )
    # The ticket resolves when the batch is durable
    ack = await ticket
    print(f"Durable at seq_num {ack.start.seq_num}")
```

**Go**

```go
session, _ := stream.AppendSession(ctx, nil)
defer session.Close()
// Submit a batch - this enqueues it and returns a ticket
fut, _ := session.Submit(&s2.AppendInput{
	Records: []s2.AppendRecord{
		{Body: []byte("event-1")},
		{Body: []byte("event-2")},
	},
})
// Wait for enqueue (this is where backpressure happens)
ticket, _ := fut.Wait(ctx)
// Wait for durability
ack, _ := ticket.Ack(ctx)
fmt.Printf("Durable at seqNum %d\n", ack.Start.SeqNum)
```

**Rust**

```rust
let session = stream.append_session(AppendSessionConfig::new());
// Submit a batch - this enqueues it and returns a ticket
let records = AppendRecordBatch::try_from_iter([
    AppendRecord::new("event-1")?,
    AppendRecord::new("event-2")?,
])?;
let ticket = session.submit(AppendInput::new(records)).await?;
// Wait for durability
let ack = ticket.await?;
println!("Durable at seqNum {}", ack.start.seq_num);
session.close().await?;
```
You do not need to await each ticket immediately. For some use cases, “fire-and-forget” may be fine; for others, you need to confirm that a specific append has finished, or that all writes up to a certain point have been persisted.
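One way to defer acknowledgements is to collect tickets as you submit and await them together as a durability barrier. The sketch below uses a hypothetical stand-in for the session (the real SDK types differ per language):

```typescript
// Minimal stand-in shapes for the session API described above
type Ticket = { ack: () => Promise<{ seqNum: number }> };
type Session = { submit: (batch: string[]) => Promise<Ticket> };

// Submit several batches without awaiting each ack, then confirm
// that everything submitted so far is durable.
async function appendAll(session: Session, batches: string[][]): Promise<number[]> {
  // Pipelining: keep submitting while earlier batches are in flight
  const tickets: Ticket[] = [];
  for (const batch of batches) {
    tickets.push(await session.submit(batch)); // may block under backpressure
  }
  // Barrier: resolves only once every submitted batch is durable
  const acks = await Promise.all(tickets.map((t) => t.ack()));
  return acks.map((a) => a.seqNum);
}
```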
If an acknowledgement is not received for a batch within the configured `requestTimeout`, the SDK marks the batch as failed and either retries it (if so configured) or surfaces an error.
### Backpressure
The session tracks how much data is “in flight” (submitted but not yet acknowledged). When you hit the limits, submit() blocks until capacity frees up.
This is intentional: it prevents unbounded memory growth and naturally throttles your application to match what S2 can handle.
| Option | Default | Description |
|---|---|---|
| `maxInflightBytes` | 5 MiB | Maximum unacknowledged bytes before `submit()` blocks |
| `maxInflightBatches` | None | Maximum unacknowledged batches (optional additional limit) |
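A minimal sketch of this kind of in-flight accounting, assuming a byte-based limit like the one above: bytes are reserved on submit and released on ack, and a submit that would exceed the limit waits. This is not the SDK's implementation, just the shape of the mechanism.

```typescript
// Sketch of byte-based backpressure: reserve on submit, release on ack.
class InflightLimiter {
  private inflightBytes = 0;
  private waiters: (() => void)[] = [];

  constructor(private maxInflightBytes: number) {}

  // Resolves only once `bytes` fits under the in-flight limit
  async reserve(bytes: number): Promise<void> {
    while (this.inflightBytes + bytes > this.maxInflightBytes) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.inflightBytes += bytes;
  }

  // Called when a batch is acknowledged as durable
  release(bytes: number): void {
    this.inflightBytes -= bytes;
    const waiters = this.waiters;
    this.waiters = [];
    waiters.forEach((w) => w()); // blocked submitters re-check the limit
  }
}
```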
## Producer
The Producer API provides a record-oriented interface over append sessions. You submit individual records and get back a ticket for each one. The producer groups submitted records into batches according to configurable thresholds.
This is particularly useful when:
- You’re receiving records one at a time (from a message queue, HTTP requests, etc.)
- You want confirmation that specific records are durable
- You want append-session ordering, pipelining, and backpressure without managing batch boundaries yourself
**TypeScript**

```typescript
const producer = new Producer(
  new BatchTransform({ lingerDurationMillis: 5 }),
  await stream.appendSession(),
);
// Submit individual records
const ticket = await producer.submit(
  AppendRecord.string({ body: "my event" }),
);
// Get the exact sequence number for this record
const ack = await ticket.ack();
console.log(`Record durable at seqNum ${ack.seqNum()}`);
await producer.close();
```

**Python**

```python
async with stream.producer(
    batching=Batching(linger=timedelta(milliseconds=5)),
) as producer:
    # Submit individual records
    ticket = await producer.submit(Record(body=b"my event"))
    # Get the exact sequence number for this record
    ack = await ticket
    print(f"Record durable at seq_num {ack.seq_num}")
```

**Go**

```go
session, _ := stream.AppendSession(ctx, nil)
batcher := s2.NewBatcher(ctx, &s2.BatchingOptions{
	Linger: 5 * time.Millisecond,
})
producer := s2.NewProducer(ctx, batcher, session)
// Submit individual records
fut, _ := producer.Submit(s2.AppendRecord{Body: []byte("my event")})
ticket, _ := fut.Wait(ctx)
ack, _ := ticket.Ack(ctx)
fmt.Printf("Record durable at seqNum %d\n", ack.SeqNum())
producer.Close()
```

**Rust**

```rust
let producer = stream.producer(
    ProducerConfig::new()
        .with_batching(BatchingConfig::new().with_linger(Duration::from_millis(5))),
);
// Submit individual records
let ticket = producer.submit(AppendRecord::new("my event")?).await?;
// Get the exact sequence number
let ack = ticket.await?;
println!("Record durable at seqNum {}", ack.seq_num);
producer.close().await?;
```
The producer maintains the same ordering guarantee as the underlying append session: records are durable in exactly the order you submitted them, and each per-record ticket resolves with the correct sequence number.
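Since a batch is acknowledged with a contiguous range of positions, the per-record sequence numbers follow by offset: each record's number is the batch's start plus its index within the batch. A hypothetical helper (not an SDK function) makes the arithmetic explicit:

```typescript
// Sketch: a batch ack reports where the batch begins, and records
// occupy consecutive sequence numbers from there, so a producer can
// resolve each per-record ticket as start + offset.
function recordSeqNums(batchStartSeqNum: number, recordCount: number): number[] {
  return Array.from({ length: recordCount }, (_, i) => batchStartSeqNum + i);
}
```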
### Batching Configuration

| Option | Default | Description |
|---|---|---|
| `linger` | 5 ms | How long to wait for more records before flushing a partial batch |
| `maxBatchRecords` | 1000 | Flush when the batch reaches this many records |
| `maxBatchBytes` | 1 MiB | Flush when the batch reaches this size |
The producer flushes whenever any threshold is hit. For latency-sensitive applications, a shorter linger time means records are written sooner. For throughput-sensitive applications, a longer linger time means more efficient batching.
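The flush-on-any-threshold behavior can be sketched as follows. `RecordBatcher` is a hypothetical helper, not the SDK's Producer, and it only covers the size thresholds; a real producer also flushes when the linger timer fires, which is omitted here.

```typescript
type Thresholds = { maxRecords: number; maxBytes: number };

// Accumulates records and flushes as soon as either size threshold is hit.
class RecordBatcher {
  private records: Uint8Array[] = [];
  private bytes = 0;

  constructor(
    private thresholds: Thresholds,
    private flush: (batch: Uint8Array[]) => void,
  ) {}

  add(record: Uint8Array): void {
    this.records.push(record);
    this.bytes += record.length;
    if (
      this.records.length >= this.thresholds.maxRecords ||
      this.bytes >= this.thresholds.maxBytes
    ) {
      this.flushNow();
    }
  }

  // Flush whatever has accumulated, e.g. on linger expiry or close
  flushNow(): void {
    if (this.records.length === 0) return;
    this.flush(this.records);
    this.records = [];
    this.bytes = 0;
  }
}
```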