
Spring AI Multi-LLM Integration


Reference Documentation

For further reference, please consider the following sections:


Guides

The following guides illustrate how to use some features concretely:


Additional Links

These additional references should also help you:


GraalVM Native Support

This project has been configured to let you generate either a lightweight container or a native executable. It is also possible to run your tests in a native image.


Lightweight Container with Cloud Native Buildpacks

If you're already familiar with Spring Boot's container image support, this is the easiest way to get started. Docker must be installed and configured on your machine before creating the image.

Make sure to set your environment variables:

export OPENAI_API_KEY="your-openai-api-key"
export GEMINI_API_KEY="your-gemini-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export MISTRAL_API_KEY="your-mistral-api-key"
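These variables are typically wired into the Spring AI starters through application properties. A minimal sketch, assuming the standard Spring AI property names (verify them against the Spring AI version in use):

```properties
# application.properties -- property names are assumptions; check your Spring AI version.
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
spring.ai.mistralai.api-key=${MISTRAL_API_KEY}
# Gemini access depends on the client in use (for example, Vertex AI authenticates
# with Google Cloud credentials rather than a plain API key), so GEMINI_API_KEY
# may be consumed differently.
```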

To create the image, run the following goal:

$ ./mvnw spring-boot:build-image -Pnative

Then, you can run the app like any other container:

$ docker run --rm -p 8080:8080 multillm-integration:0.0.1-SNAPSHOT

Executable with Native Build Tools

Use this option if you want to explore further capabilities, such as running your tests in a native image. The GraalVM native-image compiler should be installed and configured on your machine.

NOTE: GraalVM 22.3+ is required.

To create the executable, run the following goal:

$ ./mvnw native:compile -Pnative

Then, you can run the app as follows:

$ target/multillm-integration

You can also run your existing test suite in a native image. This is an efficient way to validate the compatibility of your application.

To run your existing tests in a native image, run the following goal:

$ ./mvnw test -PnativeTest

Maven Parent overrides

Due to Maven's design, elements are inherited from the parent POM into the project POM. While most of this inheritance is fine, it also brings in unwanted elements such as <licenses> and <developers> from the parent. To prevent this, the project POM contains empty overrides for these elements. If you manually switch to a different parent and actually want the inheritance, you need to remove those overrides.
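The overrides look roughly like this in the project POM (a sketch of the convention used by Spring Boot-generated projects):

```xml
<!-- Empty overrides so these elements are not inherited from the parent POM. -->
<licenses>
    <license/>
</licenses>
<developers>
    <developer/>
</developers>
```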


References


Testing OpenAI

http GET localhost:8100/chat \
  message=="What model are you? Please provide your name, version, and key capabilities." \
  llm==openai

Expected Sample Response

{
    "llm": "openai",
    "originalMessage": "What model are you? Please provide your name, version, and key capabilities.",
    "response": "I am ChatGPT, based on the GPT-4 architecture developed by OpenAI. My version includes improvements in understanding and generating human-like text, enabling me to assist with a wide range of tasks such as answering questions, providing explanations, composing creative writing, and more. I can understand context, handle complex prompts, and generate coherent and relevant responses across various topics.",
    "timestamp": 1753268746222
}
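A client can unpack this payload in a few lines. The sketch below parses a trimmed copy of the sample response above, assuming the timestamp field is epoch milliseconds:

```python
import json
from datetime import datetime, timezone

# Trimmed /chat response body from the sample run above.
sample = """{
    "llm": "openai",
    "originalMessage": "What model are you? Please provide your name, version, and key capabilities.",
    "response": "I am ChatGPT, based on the GPT-4 architecture developed by OpenAI.",
    "timestamp": 1753268746222
}"""

payload = json.loads(sample)
# The timestamp is epoch milliseconds; convert to an aware UTC datetime.
when = datetime.fromtimestamp(payload["timestamp"] / 1000, tz=timezone.utc)
print(payload["llm"], when.isoformat())
```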

Testing Ollama

http GET localhost:8100/chat \
  message=="What model are you? Please tell me your name, version, and what you're good at." \
  llm==ollama

Expected Sample Response

{
    "llm": "ollama",
    "originalMessage": "What model are you? Please tell me your name, version, and what you're good at.",
    "response": " I am a model of the Chat Model developed by Mistral AI. My primary function is to assist with various tasks by providing information, answering questions, and engaging in conversation. I strive to provide precise, helpful, and courteous responses.\n\nWhile I don't have a personal name, you can think of me as your digital assistant designed to make your interactions more enjoyable and productive. My capabilities include but are not limited to: answering questions, providing explanations, discussing a wide range of topics, assisting with scheduling and organization, offering recommendations, and much more.\n\nIn terms of my version, I am part of the latest generation of models, continually learning and improving from the data it encounters during interactions like this one.",
    "timestamp": 1753268772790
}

Testing Gemini

http GET localhost:8100/chat \
  message=="Please identify yourself. What model are you, what version, and what are your strengths?" \
  llm==gemini

Expected Sample Response

{
    "llm": "gemini",
    "originalMessage": "Please identify yourself. What model are you, what version, and what are your strengths?",
    "response": "I am a large language model, trained by Google.\n\n**Model & Version:**\nUnlike traditional software with specific version numbers, large language models like me are continuously updated and refined. There isn't a single, publicly accessible \"version number\" in the way you might think of software like....",
    "timestamp": 1753268800297
}
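In the httpie calls above, the == syntax sends message and llm as query parameters. The equivalent request URL can be built like this (a sketch, assuming the service is reachable at the host and port shown above):

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:8100/chat"

def chat_url(message: str, llm: str) -> str:
    # llm selects the provider: openai, ollama, or gemini, as in the examples above.
    return f"{BASE_URL}?{urlencode({'message': message, 'llm': llm})}"

print(chat_url("Please identify yourself.", "gemini"))
```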

About

Spring AI Multi-LLM Integration - Gemini, OpenAI, Ollama
