Creates a new ReasoningEngine instance.
The IProviderManager instance responsible for managing provider adapters. This manager must be pre-configured with available providers and their settings.
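A minimal construction sketch is shown below. The framework exports stand behind the `declare` statements, whose exact shapes are assumptions for illustration:

```typescript
// Stand-ins for the framework's real exports (shapes assumed for illustration).
interface IProviderManager {
  // manages provider adapters, pooling, and settings (pre-configured elsewhere)
}
declare class ReasoningEngine {
  constructor(providerManager: IProviderManager);
  // assumed signature: returns a wrapped stream of events
  call(
    prompt: unknown,
    options: unknown,
  ): Promise<AsyncIterable<{ type: string; data: string }>>;
}

// The manager must already know about its providers before it is handed over;
// the engine itself only keeps a reference and defers provider choice to each call.
declare const providerManager: IProviderManager;

const engine = new ReasoningEngine(providerManager);
```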
Executes an LLM call using a dynamically selected provider adapter.
The prompt to send to the LLM. This is the FormattedPrompt type, which represents an array of standardized messages (ArtStandardMessage[]). The provider adapter is responsible for translating this to the specific API format required by the underlying LLM provider.
Configuration options for this specific LLM call:
- threadId (string, required): Identifies the thread and loads its configuration
- providerConfig (RuntimeProviderConfig, required): Specifies which provider and model to use
- traceId (string, optional): For distributed tracing and debugging
- userId (string, optional): For user-specific configuration or logging
- sessionId (string, optional): For multi-tab/session UI scenarios
- stream (boolean, optional): Whether to request streaming responses (default: false)
- callContext (string, optional): Context for the call (e.g., 'AGENT_THOUGHT', 'FINAL_SYNTHESIS')
- Additional provider-specific parameters (e.g., temperature, max_tokens)
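For illustration, a CallOptions literal using the fields above; the field names inside RuntimeProviderConfig (`providerName`, `modelId`) and the model string are assumptions:

```typescript
const options = {
  threadId: 'thread-42',        // required: identifies the thread
  providerConfig: {             // required: RuntimeProviderConfig
    providerName: 'openai',     // assumed field name
    modelId: 'gpt-4o-mini',     // assumed field name; any model id works
  },
  traceId: 'trace-7f3a',        // optional: distributed tracing
  stream: true,                 // optional: request streaming (default false)
  callContext: 'AGENT_THOUGHT', // optional: 'AGENT_THOUGHT' or 'FINAL_SYNTHESIS'
  temperature: 0.2,             // provider-specific pass-through parameter
};
```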
A promise resolving to an AsyncIterable of stream events (tokens, metadata, and lifecycle events as they arrive from the provider). The returned iterable is wrapped to ensure proper adapter cleanup when iteration completes or is interrupted.
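Continuing the sketches above, consuming the result could look like the following; the event shape (`type`, `data`) and message shape (`role`, `content`) are assumptions for illustration:

```typescript
// FormattedPrompt is an array of standardized messages (ArtStandardMessage[]).
const prompt = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Explain adapter pooling in one sentence.' },
];

const stream = await engine.call(prompt, options);
for await (const event of stream) {
  if (event.type === 'TOKEN') process.stdout.write(event.data);
}
// Breaking out of the loop early (or an error inside it) still triggers the
// wrapper's cleanup, so the adapter is returned to the pool either way.
```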
This method orchestrates the entire LLM call lifecycle:
Provider Configuration Validation: Ensures that providerConfig is present in CallOptions. This is required for the multi-provider architecture.
Adapter Acquisition: Requests a ManagedAdapterAccessor from the IProviderManager. The manager handles adapter instantiation, pooling, and reuse based on the providerConfig.
Call Delegation: Delegates the actual LLM call to the obtained adapter's call method. The adapter is responsible for translating the FormattedPrompt into the provider-specific API format and executing the request.
Resource Cleanup: Wraps the adapter's stream in a generator that automatically calls accessor.release() when the stream is consumed, errors occur, or iteration is aborted. This ensures adapters are always returned to the pool, preventing resource leaks (a sketch of this wrapper follows this list).
Error Handling: Catches and logs any errors during adapter acquisition or call execution, ensuring the adapter is released before re-throwing the error to the caller.
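The cleanup and error-handling steps boil down to a try/finally around the adapter's stream. A minimal sketch of such a wrapper, assuming an accessor exposing `adapter` and `release()` as named above:

```typescript
async function* wrapWithCleanup<T>(
  accessor: { adapter: { call(): AsyncIterable<T> }; release(): void },
): AsyncGenerator<T> {
  try {
    // Forward every event from the underlying adapter stream.
    for await (const event of accessor.adapter.call()) {
      yield event;
    }
  } finally {
    // Runs on normal completion, on error, and when the consumer abandons the
    // loop early (for-await calls generator.return(), which lands here too).
    accessor.release();
  }
}
```

This is why early termination by the caller is safe: every exit path routes through the finally block before control leaves the generator.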
Default implementation of the IReasoningEngine interface.
This class serves as the central point for all LLM interactions within the ART framework. It abstracts away the specifics of dealing with different LLM providers (OpenAI, Anthropic, Gemini, etc.) by delegating to provider-specific ProviderAdapter instances obtained from the IProviderManager.
Key responsibilities:
Dynamic Provider Selection: Obtains the appropriate ProviderAdapter instance based on runtime configuration (RuntimeProviderConfig) specified in CallOptions. This allows different threads or calls to use different LLM providers or models, as illustrated after this list.
Resource Management: Ensures that adapter instances are properly released back to the IProviderManager after use, enabling connection pooling, reuse, and proper cleanup. This is critical for maintaining performance and preventing resource leaks.
Streaming Support: Returns an AsyncIterable that yields tokens, metadata, and lifecycle events as they arrive from the LLM provider. The implementation wraps the adapter's stream to ensure proper resource cleanup even if iteration is aborted or errors occur.
Error Handling: Transforms provider-specific errors into a consistent interface and ensures adapters are released even when errors occur during call setup or stream processing.
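Putting the first two responsibilities together: the same engine instance can serve calls routed to different providers purely via providerConfig. The field names inside providerConfig are assumed as before:

```typescript
const thought = await engine.call(prompt, {
  threadId: 't-1',
  providerConfig: { providerName: 'openai', modelId: 'gpt-4o-mini' },
  callContext: 'AGENT_THOUGHT',
});

const synthesis = await engine.call(prompt, {
  threadId: 't-1',
  providerConfig: { providerName: 'anthropic', modelId: 'claude-sonnet' },
  callContext: 'FINAL_SYNTHESIS',
});
```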
ReasoningEngine (implements IReasoningEngine)