S2 integrates with TanStack AI through the `@s2-dev/resumable-stream/tanstack-ai` entrypoint. The server helper stores TanStack `StreamChunk` events in S2. The client helper builds a TanStack `SubscribeConnectionAdapter` so `useChat` can send messages normally while reading assistant chunks from an S2 replay stream.
## Install

```sh
npm install @s2-dev/resumable-stream @tanstack/ai @tanstack/ai-react @tanstack/ai-client
```

If you use TanStack's OpenAI adapter:

```sh
npm install @tanstack/ai-openai
```

Create a basin with **Create Stream on Append** and **Create Stream on Read** enabled, then set:

```sh
export S2_ACCESS_TOKEN="..."
export S2_BASIN="my-basin"
export OPENAI_API_KEY="..."
```
## Server Setup

`lib/s2.ts` (imported below as `@/lib/s2`):

```ts
import { createResumableChat } from '@s2-dev/resumable-stream/tanstack-ai';

export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!,
  basin: process.env.S2_BASIN!,
  mode: 'session',
});
```
## Start A Turn

`app/api/chat/route.ts` (the endpoint the client's `sendUrl` targets):

```ts
import {
  chat as tanstackChat,
  convertMessagesToModelMessages,
  type UIMessage,
} from '@tanstack/ai';
import { openaiText } from '@tanstack/ai-openai';

import { chat } from '@/lib/s2';

export async function POST(req: Request) {
  const { id, messages } = (await req.json()) as {
    id: string;
    messages: UIMessage[];
  };

  const source = tanstackChat({
    adapter: openaiText(process.env.OPENAI_MODEL ?? 'gpt-4o-mini'),
    messages: convertMessagesToModelMessages(messages),
  });

  return chat.makeResumable(`chat-${id}`, source, {
    delivery: 'replay',
    waitUntil: (p) => p.catch(console.error),
  });
}
```
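On serverless platforms the response can be returned before the appends to S2 finish, which is what the `waitUntil` hook is for. A hedged sketch for Cloudflare Workers (assuming `@cloudflare/workers-types` for `ExecutionContext`; `buildTurn` is a hypothetical helper standing in for the request parsing shown above; on a long-lived Node server the simple `.catch(console.error)` form is enough):

```ts
import { chat } from '@/lib/s2';

// `buildTurn` is a hypothetical helper that parses the request body and builds
// the TanStack source, exactly as in the POST handler above.
declare function buildTurn(req: Request): Promise<{ id: string; source: unknown }>;

export default {
  async fetch(req: Request, _env: unknown, ctx: ExecutionContext) {
    const { id, source } = await buildTurn(req);
    return chat.makeResumable(`chat-${id}`, source, {
      delivery: 'replay',
      // Hand the append promise to the Workers runtime so it survives
      // the response being returned.
      waitUntil: (p) => ctx.waitUntil(p.catch(console.error)),
    });
  },
};
```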
## Replay Route

`app/api/chat/replay/route.ts`:

```ts
import { chat } from '@/lib/s2';

function parseFromSeqNum(value: string | null): number | undefined {
  if (value === null) return undefined;
  const parsed = Number.parseInt(value, 10);
  return Number.isSafeInteger(parsed) && parsed >= 0 ? parsed : undefined;
}

export async function GET(req: Request) {
  const url = new URL(req.url);
  const id = url.searchParams.get('id');
  if (!id) return new Response('Missing id query parameter', { status: 400 });

  return chat.replay(`chat-${id}`, {
    fromSeqNum: parseFromSeqNum(url.searchParams.get('from')),
    live: url.searchParams.get('live') === '1',
  });
}
```
Use `live: true` for session-mode chat UIs so the replay connection stays open at the tail and receives future turns.
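The `from` parameter is validated defensively before it reaches `chat.replay`. A quick sketch of how `parseFromSeqNum` (copied from the route above) treats edge cases:

```typescript
// Copied from the replay route above: only non-negative safe integers pass.
function parseFromSeqNum(value: string | null): number | undefined {
  if (value === null) return undefined;
  const parsed = Number.parseInt(value, 10);
  return Number.isSafeInteger(parsed) && parsed >= 0 ? parsed : undefined;
}

console.log(parseFromSeqNum('12'));  // 12
console.log(parseFromSeqNum(null));  // undefined — no `from` query param
console.log(parseFromSeqNum('-3'));  // undefined — negative sequence numbers rejected
console.log(parseFromSeqNum('abc')); // undefined — parseInt yields NaN
```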
## Client

```tsx
'use client';

import { createConnection } from '@s2-dev/resumable-stream/tanstack-ai/client';
import { useChat } from '@tanstack/ai-react';
import { useMemo } from 'react';

function Chat({ chatId }: { chatId: string }) {
  const connection = useMemo(
    () =>
      createConnection({
        sendUrl: '/api/chat',
        subscribeUrl: (cursor) => {
          const params = new URLSearchParams({ id: chatId, live: '1' });
          if (cursor !== undefined) params.set('from', String(cursor));
          return `/api/chat/replay?${params}`;
        },
        body: { id: chatId },
      }),
    [chatId],
  );

  const chat = useChat({
    id: chatId,
    connection,
    live: true,
  });

  // Render chat.messages and call chat.sendMessage(...)
  return null;
}
```
Every replayed chunk includes an SSE `id: <cursor>` field. `createConnection` remembers the most recent cursor inside that connection instance and sends it back as `from` on the next subscribe request, so a dropped subscription resumes after the chunks already delivered instead of rendering them twice. For a full page refresh, pass the saved history cursor as shown below.
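The cursor-tracking half of that behavior can be sketched as a small parser. This is illustrative only — `createConnection` does this internally, and everything beyond the `id:` field described above (the function name, the exact frame layout) is an assumption:

```typescript
// Illustrative sketch: scan raw SSE text and remember the last `id:` field,
// which carries the replay cursor for each chunk.
function lastCursor(sseText: string): number | undefined {
  let cursor: number | undefined;
  for (const line of sseText.split('\n')) {
    if (line.startsWith('id:')) {
      const parsed = Number.parseInt(line.slice(3).trim(), 10);
      if (Number.isSafeInteger(parsed)) cursor = parsed;
    }
  }
  return cursor;
}

const frames = 'id: 4\ndata: {"type":"..."}\n\nid: 5\ndata: {"type":"..."}\n\n';
const cursor = lastCursor(frames); // 5
// On the next subscribe, this value is sent back as `from`, e.g.
// /api/chat/replay?id=abc&live=1&from=5
```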
## Completed History

TanStack `useChat` expects completed messages as app state, so keep transcript loading in your app:

- Fetch history before mounting `useChat`.
- Pass it as `initialMessages`.
- Start replay from the history cursor so old chunks are not rendered again.
```ts
const history = await fetch(`/api/chat/history?id=${chatId}`).then((r) =>
  r.json(),
);

const connection = createConnection({
  sendUrl: '/api/chat',
  subscribeUrl: (cursor) => {
    const from = cursor ?? history.cursor;
    const params = new URLSearchParams({ id: chatId, live: '1' });
    if (from !== undefined) params.set('from', String(from));
    return `/api/chat/replay?${params}`;
  },
  body: { id: chatId },
});

const chat = useChat({
  id: chatId,
  connection,
  initialMessages: history.messages,
  live: true,
});
```
The runnable example stores periodic message snapshots in the same session stream and exposes a `/history` route. That history behavior is intentionally example code, not package API.
## Options

`createResumableChat` accepts:

| option | default | description |
|---|---|---|
| `mode` | `"single-use"` | `"single-use"` uses one stream per generation. `"shared"` reuses one active-generation stream. `"session"` appends generations to one durable stream. |
| `endpoints` | S2 defaults | Optional endpoint overrides, commonly used with S2 Lite. |
| `batchSize` | `10` | Maximum number of chunks per append batch. |
| `lingerDuration` | `50` | Maximum batching delay in milliseconds. |
| `leaseDurationMs` | `5000` | `shared`-mode takeover window for stale active generations. |
| `onError` | generic message | Maps upstream errors to a TanStack `RUN_ERROR` chunk. |
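As an illustrative combination (the option names come from the table above; the values are arbitrary sketches, not recommendations), a `shared`-mode helper with tighter batching might look like:

```typescript
import { createResumableChat } from '@s2-dev/resumable-stream/tanstack-ai';

export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!,
  basin: process.env.S2_BASIN!,
  mode: 'shared',          // reuse one active-generation stream
  batchSize: 25,           // up to 25 chunks per append batch
  lingerDuration: 20,      // flush a partial batch after at most 20 ms
  leaseDurationMs: 10_000, // allow takeover of a stale generation after 10 s
});
```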
`createConnection` accepts:

| option | description |
|---|---|
| `sendUrl` | POST endpoint that starts generation. |
| `subscribeUrl` | GET endpoint that returns replay SSE. String URLs get `?from=<cursor>` appended after chunks have been seen; function URLs receive the cursor. |
| `body` | Extra fields merged into the POST body, usually `{ id: chatId }`. |
| `headers` | Static or lazy headers sent on every request. |
| `credentials` | Fetch credentials mode. Defaults to `same-origin`. |
| `fetch` | Custom `fetch` implementation for tests or framework integrations. |
| `reconnectBackoffMs` | Millisecond backoff schedule. Defaults to `[]`, so TanStack owns the subscription lifecycle. |
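For instance (an illustrative sketch: `getToken` is a hypothetical app-side helper, and the lazy-header and backoff values are arbitrary), combining lazy headers with an explicit backoff schedule could look like:

```typescript
import { createConnection } from '@s2-dev/resumable-stream/tanstack-ai/client';

// Hypothetical helper — your app's own token source.
declare function getToken(): string;

const connection = createConnection({
  sendUrl: '/api/chat',
  subscribeUrl: (cursor) => {
    const params = new URLSearchParams({ id: 'demo', live: '1' });
    if (cursor !== undefined) params.set('from', String(cursor));
    return `/api/chat/replay?${params}`;
  },
  body: { id: 'demo' },
  // Lazy headers: re-evaluated per request, so a refreshed token is picked up.
  headers: () => ({ Authorization: `Bearer ${getToken()}` }),
  credentials: 'include',
  // Non-empty schedule: the adapter retries itself instead of deferring to TanStack.
  reconnectBackoffMs: [250, 500, 1000],
});
```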
## Example

A complete TanStack Start chat app is available at `examples/tanstack-ai-chat`.