ART Framework API Docs

    Interface StreamEvent

    Represents a single event emitted from an asynchronous LLM stream (ReasoningEngine.call).

    Allows for real-time delivery of tokens, metadata, errors, and lifecycle signals. Adapters are responsible for translating provider-specific stream chunks into these standard events.


    interface StreamEvent {
        data: any;
        phase?: "planning" | "execution" | "synthesis";
        sessionId?: string;
        stepDescription?: string;
        stepId?: string;
        threadId: string;
        timestamp?: number;
        tokenType?:
            | "PLANNING_LLM_THINKING"
            | "PLANNING_LLM_RESPONSE"
            | "EXECUTION_LLM_THINKING"
            | "EXECUTION_LLM_RESPONSE"
            | "SYNTHESIS_LLM_THINKING"
            | "SYNTHESIS_LLM_RESPONSE"
            | "LLM_THINKING"
            | "LLM_RESPONSE";
        traceId: string;
        type: "TOKEN" | "METADATA" | "ERROR" | "END";
    }
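A minimal sketch of a consumer for events of this shape. The `handleEvent` helper is hypothetical (not part of the ART API) and mirrors the interface locally for illustration: it accumulates TOKEN chunks, surfaces ERROR payloads, and returns the full text on END.

```typescript
// Local mirror of the StreamEvent interface, reduced for illustration.
interface StreamEvent {
    data: any;
    phase?: "planning" | "execution" | "synthesis";
    threadId: string;
    traceId: string;
    type: "TOKEN" | "METADATA" | "ERROR" | "END";
}

// Hypothetical consumer: collects TOKEN text into `buffer` and returns the
// joined text when the stream signals END; METADATA is ignored here.
function handleEvent(event: StreamEvent, buffer: string[]): string | null {
    switch (event.type) {
        case "TOKEN":
            buffer.push(event.data as string); // data is the text chunk
            return null;
        case "METADATA":
            return null; // e.g. record token counts elsewhere
        case "ERROR":
            throw event.data; // data carries the Error object
        case "END":
            return buffer.join(""); // data is null; stream finished
    }
}
```

Because `type` is a string-literal union, the `switch` narrows each branch, so `data` can be cast with confidence per event type.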

    Properties

    data: any

    The actual content of the event.

    • For TOKEN: string (the text chunk).
    • For METADATA: LLMMetadata object.
    • For ERROR: Error object or error details.
    • For END: null.
    phase?: "planning" | "execution" | "synthesis"

    Phase identification for the agent execution lifecycle.

    Since 0.4.11.

    sessionId?: string

    Optional identifier linking the event to a specific UI tab/window.

    stepDescription?: string

    Description of the current step during the execution phase.

    Since 0.4.11. Only populated during the execution phase.

    stepId?: string

    Step ID during the execution phase; links tokens to the specific TodoItem being executed.

    Since 0.4.11. Only populated during the execution phase.

    threadId: string

    The identifier of the conversation thread this event belongs to.

    timestamp?: number

    Token emission timestamp (Unix ms).

    Since 0.4.11.

    tokenType?:
        | "PLANNING_LLM_THINKING"
        | "PLANNING_LLM_RESPONSE"
        | "EXECUTION_LLM_THINKING"
        | "EXECUTION_LLM_RESPONSE"
        | "SYNTHESIS_LLM_THINKING"
        | "SYNTHESIS_LLM_RESPONSE"
        | "LLM_THINKING"
        | "LLM_RESPONSE"

    Classification for TOKEN events, combining phase context and thinking detection.

    Since 0.4.11. Breaking change: new phase-based naming scheme.

    Phase-specific token types:

    • PLANNING_LLM_THINKING: Thinking token during planning phase.
    • PLANNING_LLM_RESPONSE: Response token during planning phase.
    • EXECUTION_LLM_THINKING: Thinking token during execution phase (per-step).
    • EXECUTION_LLM_RESPONSE: Response token during execution phase.
    • SYNTHESIS_LLM_THINKING: Thinking token during synthesis phase.
    • SYNTHESIS_LLM_RESPONSE: Response token during synthesis phase.
    • LLM_THINKING: Generic fallback when callContext not provided.
    • LLM_RESPONSE: Generic fallback when callContext not provided.

    Not all adapters can reliably distinguish 'LLM_THINKING' from 'LLM_RESPONSE'. Adapters should prefer the phase-based token types, derived from CallOptions.callContext, whenever that context is available; the generic types are a fallback.
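The naming scheme above can be derived mechanically from the phase and a thinking flag. This is a hypothetical helper (`classifyToken` is not an ART export) sketching how an adapter might compute the `tokenType`, falling back to the generic names when no phase is known:

```typescript
type Phase = "planning" | "execution" | "synthesis";

// Hypothetical helper: combines phase context and thinking detection into
// the phase-based tokenType names, e.g. "PLANNING_LLM_THINKING".
// Without a phase it falls back to the generic "LLM_THINKING"/"LLM_RESPONSE".
function classifyToken(isThinking: boolean, phase?: Phase): string {
    const suffix = isThinking ? "LLM_THINKING" : "LLM_RESPONSE";
    return phase ? `${phase.toUpperCase()}_${suffix}` : suffix;
}
```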

    traceId: string

    The identifier tracing the specific agent execution cycle this event is part of.

    type: "TOKEN" | "METADATA" | "ERROR" | "END"

    The type of the stream event.

    • TOKEN: A chunk of text generated by the LLM.
    • METADATA: Information about the LLM call (e.g., token counts, stop reason), typically sent once at the end.
    • ERROR: An error occurred during the LLM call or stream processing. data will contain the Error object.
    • END: Signals the successful completion of the stream. data is typically null.
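Since adapters are responsible for translating provider-specific chunks into these standard events, the mapping can be sketched as follows. The `ProviderChunk` shape and `toStreamEvents` generator are illustrative assumptions, not part of the ART API; a real adapter would map its own provider's chunk format:

```typescript
// Reduced local mirror of the StreamEvent interface.
interface StreamEvent {
    data: any;
    threadId: string;
    traceId: string;
    type: "TOKEN" | "METADATA" | "ERROR" | "END";
}

// Hypothetical provider chunk; real providers each have their own format.
interface ProviderChunk {
    text?: string;
    usage?: { tokens: number };
    done?: boolean;
}

// Illustrative adapter: translates provider chunks into standard StreamEvents,
// emitting METADATA before the final END as described above.
function* toStreamEvents(
    chunks: ProviderChunk[],
    threadId: string,
    traceId: string,
): Generator<StreamEvent> {
    for (const chunk of chunks) {
        if (chunk.text) yield { data: chunk.text, threadId, traceId, type: "TOKEN" };
        if (chunk.usage) yield { data: chunk.usage, threadId, traceId, type: "METADATA" };
        if (chunk.done) yield { data: null, threadId, traceId, type: "END" };
    }
}
```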