
Conversation

Virtuoso633
This PR activates multi-provider LLM support within the ADK framework by registering the existing Claude and LiteLlm model handlers. This allows developers to seamlessly use models from Anthropic, OpenAI, Groq, and over 100 other providers supported by litellm without any additional configuration.

Motivation

The ADK is architected for model flexibility, but the default configuration only registers the Gemini handler. This leaves the powerful, pre-existing integrations for Anthropic (Claude) and the litellm library dormant.

By enabling them, we make the ADK significantly more versatile out-of-the-box, allowing users to easily switch between or benchmark different foundational models from major providers. This change unlocks significant existing functionality with minimal code changes.

Implementation Details

  1. In src/google/adk/models/__init__.py:

    • Imported the Claude and LiteLlm classes.
    • Registered both classes with the LLMRegistry using loops that iterate over their supported_models() methods.
  2. In src/google/adk/models/lite_llm.py:

    • Added a supported_models() class method to provide registration patterns for common litellm providers (openai/.*, groq/.*, etc.), making the class discoverable by the registry.
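The registration flow described above can be sketched with minimal stand-ins for the real ADK classes. This is an illustration only: `LLMRegistry`, `LiteLlm`, and their methods are simplified assumptions, not the actual ADK implementation.

```python
import re

# Hypothetical, simplified stand-ins for the real ADK classes.
class LLMRegistry:
    _registry: dict = {}

    @classmethod
    def register(cls, llm_cls) -> None:
        # The real register() iterates over supported_models() itself,
        # mapping each regex pattern to the handler class.
        for pattern in llm_cls.supported_models():
            cls._registry[pattern] = llm_cls

    @classmethod
    def resolve(cls, model: str):
        # Match the model string against registered patterns.
        for pattern, llm_cls in cls._registry.items():
            if re.fullmatch(pattern, model):
                return llm_cls
        raise ValueError(f"No registered handler matches {model!r}")

class LiteLlm:
    @classmethod
    def supported_models(cls) -> list:
        # Registration patterns for common litellm providers,
        # as added in this PR.
        return [r"openai/.*", r"groq/.*"]

LLMRegistry.register(LiteLlm)
print(LLMRegistry.resolve("openai/gpt-4o").__name__)  # LiteLlm
```

With patterns like `openai/.*` registered, any model string carrying that provider prefix resolves to the LiteLlm handler without further configuration.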

How to Use

After these changes, users can instantiate models from different providers by simply providing the correct model string.

Anthropic Claude on Vertex AI
Requires GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables to be set.

from google.adk.models import LLMRegistry

claude_model = LLMRegistry.new_llm("claude-3-5-sonnet-v2@20241022")

OpenAI (via LiteLLM)
Requires the OPENAI_API_KEY environment variable to be set.

from google.adk.models import LLMRegistry

openai_model = LLMRegistry.new_llm("openai/gpt-4o")

Groq (via LiteLLM)
Requires the GROQ_API_KEY environment variable to be set.

from google.adk.models import LLMRegistry

groq_model = LLMRegistry.new_llm("groq/llama3-70b-8192")

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @Virtuoso633, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the ADK framework's flexibility by enabling multi-provider Large Language Model (LLM) support. It integrates pre-existing handlers for Anthropic's Claude and the LiteLLM library, allowing developers to easily access a wide range of foundational models from various providers, thereby expanding the framework's out-of-the-box capabilities and simplifying model switching and benchmarking.

Highlights

  • Multi-provider LLM support: The ADK framework now supports multiple LLM providers out-of-the-box by activating existing Claude and LiteLlm model handlers.
  • Expanded model access: Users can now seamlessly integrate models from Anthropic, OpenAI, Groq, and many other providers supported by litellm without additional configuration.
  • LLMRegistry updates: Claude and LiteLlm classes are now registered with the LLMRegistry, making their supported models discoverable and instantiable.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot
Collaborator

adk-bot commented Sep 1, 2025

Response from ADK Triaging Agent

Hello @Virtuoso633, thank you for creating this PR!

To help us review this contribution, could you please create and associate a GitHub issue with this PR?

Additionally, could you please add a testing plan section to your PR description to detail how you've tested these changes?

This information will help reviewers to review your PR more efficiently. You can find more details in our contribution guidelines. Thanks!

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly enables multi-provider support for Claude and LiteLLM by registering their respective handlers. The implementation is straightforward. I have two suggestions for improvement. First, in src/google/adk/models/__init__.py, the model registration logic is inefficient and can be significantly simplified. Second, src/google/adk/models/lite_llm.py contains a block of commented-out code that should be removed to keep the codebase clean. Please see my detailed comments for suggestions.

Comment on lines 34 to +41
for regex in Gemini.supported_models():
  LLMRegistry.register(Gemini)

for regex in Claude.supported_models():
  LLMRegistry.register(Claude)

for regex in LiteLlm.supported_models():
  LLMRegistry.register(LiteLlm)

high

The current registration logic is inefficient. The LLMRegistry.register method already iterates over supported_models(), so wrapping it in another loop causes redundant registrations and potentially confusing log messages. This can be simplified to a single loop over the model classes, making the code cleaner and more maintainable.

Suggested change
- for regex in Gemini.supported_models():
-   LLMRegistry.register(Gemini)
- for regex in Claude.supported_models():
-   LLMRegistry.register(Claude)
- for regex in LiteLlm.supported_models():
-   LLMRegistry.register(LiteLlm)
+ for model_class in (Gemini, Claude, LiteLlm):
+   LLMRegistry.register(model_class)
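The redundancy the reviewer describes can be demonstrated with a minimal mock (`FakeRegistry` and `FakeClaude` are hypothetical stand-ins for illustration, not ADK code): when `register()` already iterates over `supported_models()` internally, wrapping the call in an outer loop over the same list records every pattern once per pattern.

```python
# Illustrative sketch of why the outer loop is redundant.
registrations = []

class FakeRegistry:
    @staticmethod
    def register(llm_cls):
        # register() loops over the patterns itself.
        for pattern in llm_cls.supported_models():
            registrations.append((pattern, llm_cls.__name__))

class FakeClaude:
    @staticmethod
    def supported_models():
        return [r"claude-3-.*", r"claude-.*-sonnet.*"]

# Old style: the outer loop multiplies the work done inside register(),
# recording each of the 2 patterns twice.
for _ in FakeClaude.supported_models():
    FakeRegistry.register(FakeClaude)
print(len(registrations))  # 4

# Suggested style: a single call is sufficient.
registrations.clear()
FakeRegistry.register(FakeClaude)
print(len(registrations))  # 2
```

With N patterns, the old style performs N×N registrations where N suffice, which is the source of the duplicate log messages the review mentions.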

Comment on lines +921 to +935
# @classmethod
# @override
# def supported_models(cls) -> list[str]:
# """Provides the list of supported models.

# LiteLlm supports all models supported by litellm. We do not keep track of
# these models here. So we return an empty list.

# Returns:
# A list of supported models.
# """

# return []


medium

This block of commented-out code should be removed to improve code clarity and maintainability.
