provider (readonly): The unique identifier name for this provider (e.g., 'openai', 'anthropic').
Executes a call to the configured Large Language Model (LLM).
This method is typically implemented by a specific ProviderAdapter.
When streaming is requested via options.stream, it returns an AsyncIterable
that yields StreamEvent objects as they are generated by the LLM provider.
When streaming is not requested, it should still return an AsyncIterable
that yields a minimal sequence of events (e.g., a single TOKEN event with the full response,
a METADATA event if available, and an END event).
Parameters:
prompt: The prompt to send to the LLM, potentially formatted specifically for the provider.
options: Options controlling the LLM call, including the mandatory threadId, tracing IDs, model parameters (such as temperature), streaming preference, and call context.
Returns: A promise resolving to an AsyncIterable of StreamEvent objects.
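The non-streaming fallback described above can be sketched as a small async generator. The event shapes here (TOKEN, METADATA, END) and the helper name are assumptions for illustration; the real StreamEvent type is defined elsewhere in the codebase.

```typescript
// Hypothetical event shapes inferred from the description above; the real
// StreamEvent definition lives elsewhere in the codebase.
type StreamEvent =
  | { type: "TOKEN"; data: string }
  | { type: "METADATA"; data: Record<string, unknown> }
  | { type: "END" };

// Sketch of the non-streaming case: the full response is wrapped in the
// minimal TOKEN -> METADATA -> END sequence.
async function* nonStreamingCall(
  fullResponse: string,
  usage?: Record<string, unknown>
): AsyncGenerator<StreamEvent> {
  yield { type: "TOKEN", data: fullResponse }; // entire response as one TOKEN event
  if (usage !== undefined) {
    yield { type: "METADATA", data: usage };   // provider metadata, if available
  }
  yield { type: "END" };                       // always terminate the sequence
}
```

Consumers can then iterate the result with a single `for await...of` loop, regardless of whether the provider actually streamed.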
shutdown (optional): Method for graceful shutdown.
Base interface for LLM Provider Adapters, extending the core ReasoningEngine. Implementations handle provider-specific concerns such as API calls and authentication.
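Taken together, the members documented above suggest an interface along the following lines. This is a sketch, not the actual declaration: only provider, shutdown, threadId, and options.stream are named in the documentation, so CallOptions, the call signature, and the shape of ReasoningEngine are assumptions.

```typescript
// Hypothetical supporting types; the real StreamEvent, CallOptions, and
// ReasoningEngine definitions live elsewhere in the codebase.
type StreamEvent = { type: "TOKEN" | "METADATA" | "END"; data?: unknown };

interface CallOptions {
  threadId: string;                  // mandatory, per the docs above
  traceId?: string;                  // tracing ID (name assumed)
  temperature?: number;              // example model parameter
  stream?: boolean;                  // streaming preference
  context?: Record<string, unknown>; // call context
}

interface ReasoningEngine {}         // core interface, defined elsewhere

interface ProviderAdapter extends ReasoningEngine {
  readonly provider: string;         // e.g., 'openai', 'anthropic'
  call(prompt: string, options: CallOptions): Promise<AsyncIterable<StreamEvent>>;
  shutdown?(): Promise<void>;        // optional graceful shutdown (return type assumed)
}

// Minimal stub showing how an implementation might satisfy the contract.
class EchoAdapter implements ProviderAdapter {
  readonly provider = "echo";

  async call(prompt: string, _options: CallOptions): Promise<AsyncIterable<StreamEvent>> {
    async function* events(): AsyncGenerator<StreamEvent> {
      yield { type: "TOKEN", data: prompt };
      yield { type: "END" };
    }
    return events();
  }
}
```

A real adapter would replace the generator body with the provider's SDK or HTTP calls and map provider responses onto StreamEvent objects.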