data
The actual content of the event. The payload depends on the event type (see the consumer sketch after the property list):
- TOKEN: string (the text chunk).
- METADATA: LLMMetadata object.
- ERROR: Error object or error details.
- END: null.

phase (optional)
Phase identification for the agent execution lifecycle.
sessionId (optional)
Optional identifier linking the event to a specific UI tab/window.
step (optional)
Step description during the execution phase.
stepId (optional)
Step ID during the execution phase. Links tokens to the specific TodoItem being executed.
The identifier of the conversation thread this event belongs to.
timestamp (optional)
Token emission timestamp (Unix ms).
tokenType (optional)
Classification for TOKEN events, combining phase context and thinking detection (see the suffix-check sketch below).
Since 0.4.11 - Breaking change: new phase-based naming scheme.
Phase-specific token types:
- PLANNING_LLM_THINKING: Thinking token during the planning phase.
- PLANNING_LLM_RESPONSE: Response token during the planning phase.
- EXECUTION_LLM_THINKING: Thinking token during the execution phase (per-step).
- EXECUTION_LLM_RESPONSE: Response token during the execution phase.
- SYNTHESIS_LLM_THINKING: Thinking token during the synthesis phase.
- SYNTHESIS_LLM_RESPONSE: Response token during the synthesis phase.
- LLM_THINKING: Generic fallback when callContext is not provided.
- LLM_RESPONSE: Generic fallback when callContext is not provided.

The identifier tracing the specific agent execution cycle this event is part of.
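As a quick illustration of the tokenType naming scheme above: every phase-specific value ends in either _THINKING or _RESPONSE, so a consumer can separate reasoning output from user-facing text with a simple suffix check. The TokenType alias and isThinkingToken helper below are hypothetical names used only for this sketch, and it assumes the values are plain string literals; neither is a confirmed part of the library's API.

```ts
// Hypothetical union of the documented tokenType values (illustration only).
type TokenType =
  | "PLANNING_LLM_THINKING" | "PLANNING_LLM_RESPONSE"
  | "EXECUTION_LLM_THINKING" | "EXECUTION_LLM_RESPONSE"
  | "SYNTHESIS_LLM_THINKING" | "SYNTHESIS_LLM_RESPONSE"
  | "LLM_THINKING" | "LLM_RESPONSE";

// All values end in _THINKING or _RESPONSE, so a suffix check is enough
// to decide whether a token belongs to the model's reasoning stream.
function isThinkingToken(tokenType: TokenType): boolean {
  return tokenType.endsWith("_THINKING");
}
```

A UI could, for example, route thinking tokens into a collapsible "reasoning" panel while streaming response tokens straight into the chat transcript.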
type
The type of the stream event:
- TOKEN: A chunk of text generated by the LLM.
- METADATA: Information about the LLM call (e.g., token counts, stop reason), typically sent once at the end.
- ERROR: An error occurred during the LLM call or stream processing; data will contain the Error object.
- END: Signals the successful completion of the stream; data is typically null.
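To make the type/data pairing concrete, here is a minimal consumer sketch. It assumes the stream is an AsyncIterable of StreamEvent objects and that type carries the string values listed above; the local StreamEvent approximation and the consumeStream function are placeholders for illustration, not the library's actual declarations.

```ts
// Rough approximation of the documented shape, for illustration only;
// the real StreamEvent interface is defined by the library.
type StreamEvent = {
  type: "TOKEN" | "METADATA" | "ERROR" | "END"; // assumed string literals
  data: unknown;                                // payload varies by type
};

async function consumeStream(stream: AsyncIterable<StreamEvent>): Promise<string> {
  let text = "";
  for await (const event of stream) {
    switch (event.type) {
      case "TOKEN":
        text += event.data as string;          // data: the text chunk
        break;
      case "METADATA":
        console.log("metadata:", event.data);  // data: LLMMetadata object
        break;
      case "ERROR":
        throw event.data;                      // data: Error object or details
      case "END":
        return text;                           // data: typically null
    }
  }
  return text;
}
```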
StreamEvent
Represents a single event emitted from an asynchronous LLM stream (ReasoningEngine.call).

Remarks
Allows for real-time delivery of tokens, metadata, errors, and lifecycle signals. Adapters are responsible for translating provider-specific stream chunks into these standard events.
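As a rough sketch of that adapter responsibility, the snippet below maps a made-up provider chunk shape onto the standard event types. ProviderChunk and toStreamEvent are illustrative names only, and the field checks are assumptions about a typical provider stream, not the library's actual adapter contract.

```ts
// Illustrative provider chunk; real providers differ in field names and shape.
interface ProviderChunk {
  delta?: string;   // incremental text, if any
  usage?: object;   // usage/stop information, if present
  done?: boolean;   // provider's end-of-stream flag
}

// Maps one provider chunk onto the standard event vocabulary.
function toStreamEvent(chunk: ProviderChunk): { type: string; data: unknown } {
  if (chunk.delta !== undefined) {
    return { type: "TOKEN", data: chunk.delta };
  }
  if (chunk.usage !== undefined) {
    return { type: "METADATA", data: chunk.usage };
  }
  if (chunk.done) {
    return { type: "END", data: null };
  }
  // How unrecognized chunks are handled is an adapter design choice;
  // surfacing them as ERROR events is just one option.
  return { type: "ERROR", data: new Error("Unrecognized provider chunk") };
}
```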