This project collects GPU benchmarks from various cloud providers and compares them against fixed per-token costs. Use the tool to select GPUs for LLM workloads efficiently and to run AI models cost-effectively. It covers LLM provider price comparison, conversion of GPU benchmarks into a price per token, and a GPU benchmark table.
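As a rough illustration of the benchmark-to-price conversion described above, the sketch below derives a price per token from a GPU's hourly rental cost and its measured generation throughput. The function name and the example numbers are illustrative assumptions, not values taken from the project.

```python
# Illustrative sketch: convert a GPU's hourly rental price and measured
# throughput into a price per generated token. All numbers are made up.

def price_per_token(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Return the cost in USD of generating a single token."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour

# Example: a hypothetical GPU rental at $2.00/hour sustaining 1,500 tokens/s
cost = price_per_token(hourly_cost_usd=2.00, tokens_per_second=1500)
print(f"${cost:.8f} per token")  # ~= $0.00000037 per token
```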
This is an open-source project that lets you compare two LLMs head to head on a given prompt. This section covers the project's backend, which integrates LLM APIs so their responses can be consumed by the front-end.
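The project's actual backend code is not shown here; as a minimal sketch of the general pattern, the snippet below sends one prompt to two placeholder provider clients concurrently and returns both completions so a front-end could render them side by side. The function and model names are assumptions for illustration only.

```python
# Minimal sketch (not the project's actual code): send one prompt to two
# LLM providers and return both completions for side-by-side display.
import asyncio

async def query_provider(name: str, prompt: str) -> dict:
    # Placeholder for a real provider call (e.g. an OpenAI or Anthropic SDK).
    await asyncio.sleep(0)  # stands in for network I/O
    return {"provider": name, "completion": f"[{name} response to: {prompt}]"}

async def compare(prompt: str) -> list[dict]:
    # Query both providers concurrently so the front-end gets a single payload.
    return list(await asyncio.gather(
        query_provider("model_a", prompt),
        query_provider("model_b", prompt),
    ))

if __name__ == "__main__":
    print(asyncio.run(compare("Explain transformers in one sentence.")))
```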
A full-stack web application for comparing and analyzing the performance of large language models (LLMs). Features include side-by-side prompt evaluation, performance metrics visualization, and an analytics dashboard. Built with React, Tailwind CSS, Node.js, and MongoDB.
MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks with optional file/image attachments. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek, Mistral AI, xAI), custom tasks in YAML, and HTML/CSV reports.
This project analyzes the first 10 rows of the Premier League 2022–23 dataset without grouping. Descriptive statistics and targeted visualizations were created, and insights were compared with responses from a large language model (LLM).
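A hedged sketch of the kind of first-pass analysis described above is shown below, assuming the dataset is a CSV file; the filename and column names are placeholders, not the dataset's real schema.

```python
# Sketch of the ungrouped first-pass analysis: take the first 10 rows,
# compute descriptive statistics, and draw one targeted visualization.
# The filename and column names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("premier_league_2022_23.csv").head(10)

print(df.describe())  # descriptive statistics for the numeric columns

df.plot(kind="bar", x="home_team", y="home_goals")  # example visualization
plt.tight_layout()
plt.show()
```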
This project extends the dataset analysis by grouping results by stadium. Pivot-style summaries and visualizations were created for goals, possession, and chances, alongside 10 new LLM prompts and Python scripts for deeper stadium-level insights.
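As a sketch of the stadium-level grouping under the same assumptions as above (CSV input with hypothetical column names), a pivot-style summary might look like the following.

```python
# Sketch of the stadium-level grouping: a pivot-style summary of goals,
# possession, and chances per stadium. Column names are assumptions.
import pandas as pd

df = pd.read_csv("premier_league_2022_23.csv")

by_stadium = df.pivot_table(
    index="stadium",
    values=["goals", "possession", "chances"],
    aggfunc="mean",
)
print(by_stadium.sort_values("goals", ascending=False))
```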
LLM-Compare-FastAPI is an open-source tool for comparing AI language models such as DeepSeek, OpenAI GPT, Google Gemini, and more, using FastAPI and Streamlit.
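The snippet below is not LLM-Compare-FastAPI's actual API; it is a minimal FastAPI sketch of the comparison pattern the description implies, with a made-up route, request model, and stubbed provider responses.

```python
# Minimal FastAPI sketch (not the project's real endpoints): accept a prompt
# and return stubbed responses from several providers for comparison.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CompareRequest(BaseModel):
    prompt: str
    models: list[str] = ["deepseek", "gpt", "gemini"]

@app.post("/compare")
def compare(req: CompareRequest) -> dict:
    # Real provider SDK calls would go here; stubbed for illustration.
    return {model: f"[{model} response to: {req.prompt}]" for model in req.models}

# Run with: uvicorn main:app --reload
```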