Creates an instance of GeminiAdapter.

Parameters:
- Configuration options for the adapter.

Properties:
- readonly provider: The unique identifier name for this provider (e.g., 'openai', 'anthropic').
Makes a call to the configured Gemini model.
Translates the ArtStandardPrompt into the Gemini API format, sends the request
using the @google/genai SDK, and yields StreamEvent objects representing
the response (tokens, metadata, errors, end signal).
Handles both streaming and non-streaming requests based on options.stream.
Thinking tokens (Gemini):
For Gemini 2.5 models (gemini-2.5-*), you can enable thought output via config.thinkingConfig:
- options.gemini.thinking.includeThoughts: boolean — when true, requests thought (reasoning) output.
- options.gemini.thinking.thinkingBudget?: number — optional token budget for thinking.
The adapter sets StreamEvent.tokenType accordingly:
- Agent thought calls (callContext === 'AGENT_THOUGHT'): AGENT_THOUGHT_LLM_THINKING or AGENT_THOUGHT_LLM_RESPONSE.
- Final synthesis calls (callContext === 'FINAL_SYNTHESIS'): FINAL_SYNTHESIS_LLM_THINKING or FINAL_SYNTHESIS_LLM_RESPONSE.
LLMMetadata.thinkingTokens will be populated if the provider reports separate thinking token usage. When thought output is not requested or not available, tokens use the ...LLM_RESPONSE variants.

Parameters:
- prompt: The standardized prompt messages.
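The tokenType selection rules above can be sketched as a small helper. This is an illustrative reconstruction, not the adapter's actual internals; the pickTokenType name and the isThought flag are hypothetical, while the variant strings mirror the documented StreamEvent.tokenType values.

```typescript
// Hypothetical sketch of the documented tokenType selection rules.
type CallContext = 'AGENT_THOUGHT' | 'FINAL_SYNTHESIS';

type TokenType =
  | 'AGENT_THOUGHT_LLM_THINKING'
  | 'AGENT_THOUGHT_LLM_RESPONSE'
  | 'FINAL_SYNTHESIS_LLM_THINKING'
  | 'FINAL_SYNTHESIS_LLM_RESPONSE';

// isThought: whether the SDK marked this chunk as thought (reasoning) output.
function pickTokenType(callContext: CallContext, isThought: boolean): TokenType {
  const suffix = isThought ? 'LLM_THINKING' : 'LLM_RESPONSE';
  return `${callContext}_${suffix}` as TokenType;
}
```

With thinking disabled or unsupported, isThought is always false, which yields the ...LLM_RESPONSE default described above.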
- options: Options for the LLM call, including streaming preference, model override, and execution context.
Returns: An async iterable that yields StreamEvent objects.
Yielded events:
- TOKEN: Contains a chunk of the response text. tokenType indicates whether it is part of agent thought or final synthesis. When Gemini thinking is enabled and available, tokenType may be one of the ...LLM_THINKING or ...LLM_RESPONSE variants to separate thought tokens from response tokens.
- METADATA: Contains information like stop reason, token counts, and timing, yielded once at the end.
- ERROR: Contains any error encountered during translation, SDK call, or response processing.
- END: Signals the completion of the stream.

Example:

// Enable Gemini thinking (if supported by the selected model)
const stream = await geminiAdapter.call(prompt, {
threadId,
stream: true,
callContext: 'FINAL_SYNTHESIS',
providerConfig, // your RuntimeProviderConfig
gemini: {
thinking: { includeThoughts: true, thinkingBudget: 8096 }
}
});
for await (const evt of stream) {
if (evt.type === 'TOKEN') {
// evt.tokenType may be FINAL_SYNTHESIS_LLM_THINKING or FINAL_SYNTHESIS_LLM_RESPONSE
}
}
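When consuming such a stream, thought and response text can be accumulated separately by inspecting tokenType. A minimal sketch follows; the TokenEvent shape here is simplified for illustration and the splitTokens helper is hypothetical, not part of the adapter's API.

```typescript
// Simplified TOKEN event shape for illustration; the real StreamEvent carries more fields.
interface TokenEvent {
  type: 'TOKEN';
  tokenType: string;
  value: string;
}

// Accumulate thinking vs. response text from a sequence of TOKEN events.
function splitTokens(events: TokenEvent[]): { thinking: string; response: string } {
  let thinking = '';
  let response = '';
  for (const evt of events) {
    if (evt.tokenType.endsWith('LLM_THINKING')) {
      thinking += evt.value;
    } else {
      response += evt.value;
    }
  }
  return { thinking, response };
}
```

This lets a UI render reasoning output (e.g., in a collapsible panel) separately from the final answer.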
GeminiAdapter: Adapter for Google's Gemini models.