
Chat & Streaming

Each lesson has an AI assistant powered by the configured LLM and grounded in the lesson's published materials (RAG). Students create individual chats within a lesson and exchange messages with the assistant. AI responses are delivered as Server-Sent Events (SSE) to support token-by-token streaming.


Chat Entity

{
  "id": "cht_123",
  "lesson_id": "les_456",
  "user_id": "usr_789",
  "title": "Questions about bubble sort",
  "last_updated_at": "2026-03-14T09:30:00Z"
}

Message Entity

{
  "id": "msg_001",
  "chat_id": "cht_123",
  "role": "ASSISTANT",
  "content": "Binary search works by repeatedly halving the search space...",
  "status": "COMPLETED",
  "is_grounded": true,
  "warning_banner": null,
  "sources": [
    {
      "material_id": "mat_789",
      "material_title": "Lecture Slides Week 1",
      "excerpt": "Binary search requires a sorted array..."
    }
  ],
  "feedback": "LIKE"
}

role values: USER, ASSISTANT

status values:

Status      Description
STREAMING   Assistant response is currently being streamed.
STOPPED     Generation was stopped by the user before completion.
COMPLETED   Full response was delivered successfully.
FAILED      Generation failed due to a model or pipeline error.
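The status values above imply a simple lifecycle: a message streams, then either completes, fails, or is stopped, and a stopped message can resume via /continue. A minimal sketch of that lifecycle as a transition table (the transition set is inferred from this page, not part of the API contract):

```python
# Hypothetical status lifecycle inferred from the table above.
# STOPPED -> STREAMING models the /continue endpoint resuming generation.
ALLOWED_TRANSITIONS = {
    "STREAMING": {"STOPPED", "COMPLETED", "FAILED"},
    "STOPPED": {"STREAMING"},
    "COMPLETED": set(),  # terminal
    "FAILED": set(),     # terminal
}

def can_transition(current: str, nxt: str) -> bool:
    """Return True if a status change is allowed under the sketch above."""
    return nxt in ALLOWED_TRANSITIONS.get(current, set())
```
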

feedback values: LIKE, DISLIKE, null

warning_banner — a string displayed above the message when the assistant detects it may be going outside the lesson scope, or when content filters are triggered. null when no warning applies.


SSE Streaming Format

The /messages/stream endpoint returns a text/event-stream response. Each event is a JSON object on a data: line.

Token event (during generation):

data: {"token": "Binary", "done": false}

Completion event (final frame):

data: {"done": true, "message_id": "msg_123"}

The client should accumulate all token values until done: true, then use message_id to fetch the full message record (including sources and grounding metadata) via GET /api/v1/chats/{chatId}.
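The accumulation step can be sketched as a small parser over raw SSE lines. This is a minimal sketch using only the two event shapes shown above; a production client would also handle keep-alive comments, multi-line data fields, and reconnection:

```python
import json

def accumulate_sse(lines):
    """Accumulate token events from an SSE body until the completion frame.

    `lines` is an iterable of raw SSE lines such as 'data: {...}'.
    Returns (full_text, message_id) once the done frame arrives.
    """
    tokens = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank lines and non-data SSE fields
        event = json.loads(line[len("data:"):].strip())
        if event.get("done"):
            # Final frame carries message_id instead of a token.
            return "".join(tokens), event["message_id"]
        tokens.append(event["token"])
    raise ValueError("stream ended without a completion event")
```

After the completion frame, the returned message_id is what the client passes to the follow-up GET to load sources and grounding metadata.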


Endpoints

GET /api/v1/lessons/{lessonId}/chats

List the current user's chats for a lesson, ordered by last_updated_at descending.

Authentication: Required — approved class member or global ADMIN.

Response: Paginated list of chat objects (without message history).


POST /api/v1/lessons/{lessonId}/chats

Create a new empty chat for the current user within a lesson.

Authentication: Required — approved class member or global ADMIN.

Request:

{
"title": "Questions about bubble sort"
}

title is optional; if omitted, the server generates a title from the first message.

Response: Created chat object with 201 Created.
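The server-side title fallback is not specified beyond "generated from the first message"; one plausible sketch (the truncation length, ellipsis style, and default title are assumptions):

```python
def derive_title(first_message: str, max_len: int = 60) -> str:
    """Hypothetical fallback: derive a chat title from the first user message."""
    title = " ".join(first_message.split())  # collapse runs of whitespace
    if len(title) > max_len:
        title = title[: max_len - 3].rstrip() + "..."
    return title or "New chat"
```
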


GET /api/v1/chats/{chatId}

Retrieve a chat along with its full message history.

Authentication: Required — chat owner, class TEACHER, or global ADMIN.

Response: Chat object with an additional messages array containing all message objects in chronological order.


POST /api/v1/chats/{chatId}/messages

Add a user message to an existing chat. This does not trigger AI generation — call /stream separately to stream the assistant response.

Authentication: Required — chat owner.

Request:

{
"content": "What is binary search?"
}

Response: Created message object with role: "USER" and 201 Created.


POST /api/v1/chats/{chatId}/messages/stream

Stream an AI response for the most recent user message in the chat.

Authentication: Required — chat owner.

Request body: Empty.

Response: text/event-stream — token events followed by a final completion event (see SSE Streaming Format above).

Returns 409 Conflict if another generation is already in progress for this chat.
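The conflict rule means the server admits at most one active generation per chat. A minimal in-memory sketch of such a guard (the function names and locking strategy are assumptions; a real deployment would likely use a shared store rather than process memory):

```python
import threading

# Chats with a generation currently in flight.
_active: set = set()
_lock = threading.Lock()

def try_begin_generation(chat_id: str) -> bool:
    """Return False (i.e. respond 409 Conflict) if this chat is already generating."""
    with _lock:
        if chat_id in _active:
            return False
        _active.add(chat_id)
        return True

def end_generation(chat_id: str) -> None:
    """Release the chat when streaming completes, fails, or is stopped."""
    with _lock:
        _active.discard(chat_id)
```
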


POST /api/v1/chats/{chatId}/messages/{msgId}/regenerate

Discard the current assistant message and generate a fresh response. The previous message is soft-deleted and a new SSE stream is opened.

Authentication: Required — chat owner.

Request body: Empty.

Response: text/event-stream identical to /messages/stream.


POST /api/v1/chats/{chatId}/messages/{msgId}/stop

Abort an in-progress streaming generation. The partial response is saved with status: "STOPPED".

Authentication: Required — chat owner.

Request body: Empty.

Response: Updated message object with status: "STOPPED".


POST /api/v1/chats/{chatId}/messages/{msgId}/continue

Resume generation of a stopped message, streaming additional tokens from where generation halted.

Authentication: Required — chat owner.

Request body: Empty.

Response: text/event-stream continuation of the stopped message.


POST /api/v1/chats/{chatId}/messages/{msgId}/feedback

Submit a like or dislike reaction to an assistant message.

Authentication: Required — chat owner.

Request:

{
"vote": "LIKE"
}

vote must be "LIKE" or "DISLIKE". Send the same value again to toggle it off (sets feedback back to null).

Response: Updated message object.
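The toggle semantics described above reduce to a small pure function, sketched here (the function name is illustrative, not part of the API):

```python
def apply_feedback(current, vote):
    """Apply the documented toggle: repeating the same vote clears feedback to None."""
    if vote not in ("LIKE", "DISLIKE"):
        raise ValueError('vote must be "LIKE" or "DISLIKE"')
    return None if current == vote else vote
```
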


DELETE /api/v1/chats/{chatId}

Soft-delete a chat and all its messages. The chat no longer appears in the lesson's chat list.

Authentication: Required — chat owner or global ADMIN.

Response: 204 No Content.