feat(models): Enable multi-provider support for Claude and LiteLLM #2810
base: main
Conversation
Summary of Changes
Hello @Virtuoso633, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the ADK framework's flexibility by enabling multi-provider Large Language Model (LLM) support. It integrates pre-existing handlers for Anthropic's Claude and the LiteLLM library, allowing developers to easily access a wide range of foundational models from various providers, thereby expanding the framework's out-of-the-box capabilities and simplifying model switching and benchmarking.
Highlights
- Multi-provider LLM support: The ADK framework now supports multiple LLM providers out-of-the-box by activating existing Claude and LiteLlm model handlers.
- Expanded model access: Users can now seamlessly integrate models from Anthropic, OpenAI, Groq, and many other providers supported by litellm without additional configuration.
- LLMRegistry updates: Claude and LiteLlm classes are now registered with the LLMRegistry, making their supported models discoverable and instantiable (see the sketch after this list).
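For illustration, a minimal sketch of what registration unlocks, assuming the registry exposes the `new_llm` lookup used elsewhere in the ADK; the model strings are illustrative, not confirmed identifiers:

```python
from google.adk.models import LLMRegistry

# Once Claude and LiteLlm are registered, model strings matching their
# supported_models() patterns resolve to the right handler, not just Gemini's.
claude_llm = LLMRegistry.new_llm("claude-3-5-sonnet-v2@20241022")  # illustrative ID
openai_llm = LLMRegistry.new_llm("openai/gpt-4o")  # matched by a LiteLlm pattern
```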
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Response from ADK Triaging Agent
Hello @Virtuoso633, thank you for creating this PR! To help us review this contribution, could you please create and associate a GitHub issue with this PR? Additionally, could you please add a […]? This information will help reviewers review your PR more efficiently. You can find more details in our contribution guidelines. Thanks!
Code Review
This pull request correctly enables multi-provider support for Claude and LiteLLM by registering their respective handlers. The implementation is straightforward. I have two suggestions for improvement. First, in `src/google/adk/models/__init__.py`, the model registration logic is inefficient and can be significantly simplified. Second, `src/google/adk/models/lite_llm.py` contains a block of commented-out code that should be removed to keep the codebase clean. Please see my detailed comments for suggestions.
```python
for regex in Gemini.supported_models():
  LLMRegistry.register(Gemini)

for regex in Claude.supported_models():
  LLMRegistry.register(Claude)

for regex in LiteLlm.supported_models():
  LLMRegistry.register(LiteLlm)
```
The current registration logic is inefficient. The `LLMRegistry.register` method already iterates over `supported_models()`, so wrapping it in another loop causes redundant registrations and potentially confusing log messages. This can be simplified to a single loop over the model classes, making the code cleaner and more maintainable.
Suggested change (replacing the three loops above):

```python
for model_class in (Gemini, Claude, LiteLlm):
  LLMRegistry.register(model_class)
```
```python
# @classmethod
# @override
# def supported_models(cls) -> list[str]:
#   """Provides the list of supported models.
#
#   LiteLlm supports all models supported by litellm. We do not keep track of
#   these models here. So we return an empty list.
#
#   Returns:
#     A list of supported models.
#   """
#
#   return []
```
This PR activates multi-provider LLM support within the ADK framework by registering the existing `Claude` and `LiteLlm` model handlers. This allows developers to seamlessly use models from Anthropic, OpenAI, Groq, and over 100 other providers supported by `litellm` without any additional configuration.

Motivation
The ADK is architected for model flexibility, but the default configuration only registers the `Gemini` handler. This leaves the powerful, pre-existing integrations for Anthropic (Claude) and the `litellm` library dormant.

By enabling them, we make the ADK significantly more versatile out-of-the-box, allowing users to easily switch between or benchmark different foundational models from major providers. This change unlocks significant existing functionality with minimal code changes.
Implementation Details

In `src/google/adk/models/__init__.py`:
- Imported the `Claude` and `LiteLlm` classes.
- Registered them with the `LLMRegistry` using loops that iterate over their `supported_models()` methods.

In `src/google/adk/models/lite_llm.py`:
- Added a `supported_models()` class method to provide registration patterns for common `litellm` providers (`openai/.*`, `groq/.*`, etc.), making the class discoverable by the registry.

How to Use
After these changes, users can instantiate models from different providers by simply providing the correct model string.
Anthropic Claude on Vertex AI

Requires the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` environment variables to be set.
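A minimal sketch of this usage; the model ID is illustrative, so check Vertex AI's model garden for the Claude identifiers it currently accepts:

```python
from google.adk.agents import Agent

# Claude is resolved through the newly registered Claude handler.
# Assumes GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION are set.
claude_agent = Agent(
    name="claude_agent",
    model="claude-3-5-sonnet-v2@20241022",  # illustrative Vertex AI model ID
    instruction="You are a helpful assistant.",
)
```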
environment variables to be set.OpenAI (via LiteLLM)
Requires the
OPENAI_API_KEY
environment variable to be set.Grok (via LiteLLM and the Groq API)
Requires the
GROQ_API_KEY
environment variable to be set.
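And a sketch for Groq, again via a `groq/` LiteLLM prefix; the model name is illustrative:

```python
from google.adk.agents import Agent

# The "groq/" prefix routes the call through LiteLlm to the Groq API.
# Assumes GROQ_API_KEY is set in the environment.
groq_agent = Agent(
    name="groq_agent",
    model="groq/llama-3.3-70b-versatile",  # illustrative model name
    instruction="You are a helpful assistant.",
)
```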