S2 integrates with the AI SDK through the @s2-dev/resumable-stream/aisdk entrypoint.
The package stores AI SDK UIMessageChunk events in S2 and replays them with
the same SSE format that the AI SDK transport expects.
Install
Requires ai >= 5.0; older AI SDK releases do not export UIMessageChunk.
Create a basin with Create Stream on Append enabled. If your app reads
history streams before writing to them, also enable Create Stream on Read.
Server Setup
Create the S2 chat helper once and reuse it in your routes.
lib/s2.ts
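The original code block for this file is not reproduced here. As a minimal sketch, assuming the accessToken and basin option names (check the package's exported types for the real constructor surface):

```typescript
// lib/s2.ts -- hedged sketch; the auth and basin option names are assumptions.
import { createResumableChat } from "@s2-dev/resumable-stream/aisdk";

// One shared helper, reused by both the POST route and the reconnect route.
export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!, // assumed option name
  basin: process.env.S2_BASIN!,              // assumed option name
  mode: "single-use",                        // documented default
});
```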
Start A Turn
The POST route starts the model call, writes chunks to S2, and streams the same chunks back to the AI SDK client.
app/api/chat/route.ts
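A sketch of that route, assuming a chat helper exported from lib/s2.ts and that makeResumable takes a chat id plus a UIMessageChunk stream (the exact signature is an assumption):

```typescript
// app/api/chat/route.ts -- hedged sketch; the makeResumable call shape is an assumption.
import { convertToModelMessages, streamText } from "ai";
import { openai } from "@ai-sdk/openai"; // any AI SDK provider works here
import { chat } from "@/lib/s2";

export async function POST(req: Request) {
  const { id, messages } = await req.json();

  // Start the model call; the result exposes a UIMessageChunk stream.
  const result = streamText({
    model: openai("gpt-4o-mini"),
    messages: convertToModelMessages(messages),
  });

  // Persist chunks to S2 and stream the same SSE body back to the client.
  return chat.makeResumable(id, result.toUIMessageStream());
}
```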
Reconnect Route
Setting resume: true in useChat calls the reconnect route on mount. With the
default AI SDK transport, that route is ${api}/${chatId}/stream.
app/api/chat/[id]/stream/route.ts
If the chat has no active stream to replay, the route returns 204.
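A sketch of the reconnect route, assuming a replay helper on the chat object (the method name is an assumption, not confirmed API):

```typescript
// app/api/chat/[id]/stream/route.ts -- hedged sketch; "replay" is an assumed method name.
import { chat } from "@/lib/s2";

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  // Replays stored chunks as SSE; 204 when there is nothing to resume.
  return chat.replay(id);
}
```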
Client
Use the AI SDK useChat React hook to create a resumable chat.
app/page.tsx
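A sketch of the client page using the AI SDK 5 useChat hook; the chat id is hardcoded here for brevity:

```tsx
// app/page.tsx -- sketch; resume: true triggers a GET to ${api}/${chatId}/stream on mount.
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";

export default function Page() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat({ id: "my-chat", resume: true });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.parts.map((p) => (p.type === "text" ? p.text : "")).join("")}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```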
Completed History
@s2-dev/resumable-stream/aisdk stores the streaming chunks for resumability.
It does not decide where your completed transcript lives.
For a chat app, keep a transcript stream or database table in app code:
- Load completed messages before rendering the page.
- Pass them to useChat as initial state if your UI needs refresh-after-done history.
- Use chat.makeResumable(...) for the active assistant turn.
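The refresh-after-done flow can be sketched as follows. loadTranscript is a hypothetical helper backed by your own storage, and the messages init option follows AI SDK 5 naming:

```tsx
// Hedged sketch: loadTranscript is a hypothetical app-level helper, not part of this package.
import type { UIMessage } from "ai";
import { useChat } from "@ai-sdk/react";

declare function loadTranscript(chatId: string): UIMessage[];

function ChatView({ chatId }: { chatId: string }) {
  // Completed turns come from your transcript store; the active turn resumes from S2.
  const { messages } = useChat({
    id: chatId,
    messages: loadTranscript(chatId), // initial state from completed history
    resume: true,
  });
  return <>{messages.length} messages</>;
}
```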
Options
createResumableChat accepts the shared resumable-stream options:
| option | default | description |
|---|---|---|
| mode | "single-use" | "single-use" uses one stream per generation. "shared" reuses one active-generation stream. "session" appends generations to one durable stream. |
| endpoints | S2 defaults | Optional endpoint overrides, commonly used with S2 Lite. |
| batchSize | 10 | Maximum number of chunks per append batch. |
| lingerDuration | 50 | Maximum batching delay in milliseconds. |
| leaseDurationMs | 5000 | Takeover window in shared mode for stale active generations. |
| onError | generic message | Maps upstream errors to the stored AI SDK error chunk. |
makeResumable accepts:
| option | default | description |
|---|---|---|
| delivery | "response" | "response" streams chunks on the POST response. "replay" returns 202 and expects the client to read from replay. |
| waitUntil | none | Keeps persistence alive after the HTTP response returns. Use after, Cloudflare waitUntil, or your server's background-task hook. |
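Putting several of these options together (the constructor and onError shapes are assumptions; Next.js after is shown as the waitUntil hook):

```typescript
// Hedged sketch combining options; auth/basin/onError shapes are assumptions.
import { after } from "next/server";
import { createResumableChat } from "@s2-dev/resumable-stream/aisdk";

export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!, // assumed option name
  basin: process.env.S2_BASIN!,              // assumed option name
  mode: "shared",       // reuse one active-generation stream
  batchSize: 25,        // larger append batches
  lingerDuration: 100,  // wait up to 100 ms to fill a batch
  onError: (err: unknown) => `Upstream failure: ${String(err)}`, // stored error chunk text
});

// Later, in a route handler:
// return chat.makeResumable(id, stream, { delivery: "response", waitUntil: after });
```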
Example
A complete Bun server and browser client are available at examples/ai-sdk-resumable-chat.
The TypeScript SDK repo also includes direct S2 examples for:

