ART Framework API Docs

    Interface CallOptions

    Options for configuring an LLM call, including streaming and context information.


    interface CallOptions {
        callContext?: string;
        providerConfig: RuntimeProviderConfig;
        sessionId?: string;
        stepContext?: { stepDescription: string; stepId: string };
        stream?: boolean;
        threadId: string;
        traceId?: string;
        userId?: string;
        [key: string]: any;
    }
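As a sketch of how these fields fit together, the following constructs a minimal call. The exact shape of `RuntimeProviderConfig` is not shown on this page, so a placeholder with illustrative fields is assumed here:

```typescript
// Placeholder for RuntimeProviderConfig; the real ART Framework
// interface may carry different or additional fields.
interface RuntimeProviderConfig {
  providerName: string;
  modelId: string;
  adapterOptions?: Record<string, any>;
}

interface CallOptions {
  callContext?: string;
  providerConfig: RuntimeProviderConfig;
  sessionId?: string;
  stepContext?: { stepDescription: string; stepId: string };
  stream?: boolean;
  threadId: string;
  traceId?: string;
  userId?: string;
  [key: string]: any;
}

// Only threadId and providerConfig are required; everything else
// is optional or provider-specific.
const options: CallOptions = {
  threadId: "thread-123",
  providerConfig: {
    providerName: "openai",
    modelId: "gpt-4o",
  },
  stream: true,
};
```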

    Indexable

    • [key: string]: any

      Additional key-value pairs representing provider-specific parameters (e.g., temperature, max_tokens, top_p). These often override defaults set in ThreadConfig.
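For example, sampling parameters can be attached directly to the options object thanks to the index signature. The parameter names below are illustrative; the set of keys a provider accepts varies:

```typescript
// The [key: string]: any index signature lets callers pass arbitrary
// provider parameters alongside the typed fields.
const callOptions: Record<string, any> = {
  threadId: "thread-123",
  providerConfig: { providerName: "openai", modelId: "gpt-4o" },
  temperature: 0.2, // overrides any ThreadConfig default
  max_tokens: 512,
  top_p: 0.9,
};

// An adapter might separate the known fields from the pass-through
// provider parameters via rest destructuring:
const { threadId, providerConfig, ...providerParams } = callOptions;
```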


    Properties

    callContext?: string

    Provides context for the LLM call, identifying which phase of agent execution is making the request. This determines the tokenType prefix in StreamEvents.

    0.4.11 - Breaking change: Replaced 'AGENT_THOUGHT' and 'FINAL_SYNTHESIS' with phase-specific values.

    providerConfig: RuntimeProviderConfig

Specifies the target provider and its configuration for this call.

    sessionId?: string

Optional session ID, used to associate this call with a broader session.

    stepContext?: { stepDescription: string; stepId: string }

    Step context for execution phase, passed to StreamEvent for step identification.

    0.4.11 - Only used during execution phase.
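During the execution phase, a caller might attach a `stepContext` so that downstream StreamEvents can identify the originating step. The `callContext` value and IDs below are illustrative, not values defined by the framework:

```typescript
// Illustrative execution-phase call; "EXECUTION" stands in for
// whichever phase-specific callContext value the framework defines.
const executionCall = {
  threadId: "thread-123",
  providerConfig: { providerName: "openai", modelId: "gpt-4o" },
  callContext: "EXECUTION",
  stepContext: {
    stepId: "step-2",
    stepDescription: "Summarize retrieved documents",
  },
};
```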

    stream?: boolean

    Request a streaming response from the LLM provider. Adapters MUST check this flag.
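An adapter honoring this flag might branch as sketched below. The function name and return types are hypothetical placeholders, not the framework's actual adapter API; only the flag check itself reflects the documented contract:

```typescript
// Hypothetical adapter sketch: the stream-flag check is the point;
// the surrounding types are placeholders, not ART Framework APIs.
type LlmResult = { text: string } | AsyncIterable<string>;

async function callModel(
  prompt: string,
  options: { stream?: boolean; [key: string]: any },
): Promise<LlmResult> {
  if (options.stream) {
    // Streaming requested: return tokens incrementally.
    async function* tokens(): AsyncIterable<string> {
      for (const t of prompt.split(" ")) yield t;
    }
    return tokens();
  }
  // Non-streaming: return the full completion at once.
  return { text: prompt };
}
```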

    threadId: string

    The mandatory thread ID, used by the ReasoningEngine to fetch thread-specific configuration (e.g., model, params) via StateManager.

    traceId?: string

Optional trace ID, used to correlate this call with related operations for tracing.

    userId?: string

Optional user ID identifying the end user on whose behalf the call is made.