A high-performance Redis load testing application built with Python and redis-py. It simulates high-volume operations to test client resilience during database upgrades, generating 50K+ operations per second with comprehensive observability.
- High throughput through multi-threaded architecture
- Standalone & Cluster Redis support with TLS/SSL
- Intuitive workload profiles with descriptive names
- OpenTelemetry metrics for observability
- Environment variable configuration with python-dotenv
- Connection resilience with auto-reconnect and retry logic
- CLI interface with extensive configuration options
- Python 3.8+ with pip
- Redis server running (localhost:6379 by default)
- Redis Metrics Stack (optional, for observability - separate repository)
For metrics collection and visualization, you'll need the separate metrics stack:
# Clone and start the metrics stack (optional - separate repository)
git clone <redis-metrics-stack-repo-url>
cd redis-metrics-stack
make start
Quick setup and testing:
# Install dependencies
make install-deps
# Test Redis connection
make test-connection
# Run basic test (60 seconds)
make test
# Run custom tests
python main.py run --workload-profile basic_rw --duration 30
python main.py run --workload-profile high_throughput --duration 60
python main.py run --workload-profile basic_rw # unlimited test run
# Custom value sizes
python main.py run --workload-profile basic_rw --value-size 500 --duration 30
python main.py run --operations SET,GET --value-size-min 100 --value-size-max 2000 --duration 60
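The value-size flags above select a payload length per operation: a fixed `--value-size` wins over the min/max range. A minimal sketch of that selection logic (function names here are illustrative, not the app's internals):

```python
import random
import string

def pick_value_size(fixed=None, min_size=100, max_size=1000):
    """Return the byte length for the next generated value.

    A fixed --value-size overrides the min/max range, mirroring the
    CLI behavior described above.
    """
    if fixed is not None:
        return fixed
    return random.randint(min_size, max_size)

def make_value(size):
    """Generate a random ASCII payload of exactly `size` characters."""
    return "".join(random.choices(string.ascii_letters, k=size))

value = make_value(pick_value_size(min_size=100, max_size=2000))
```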
With metrics stack (optional):
- Grafana: http://localhost:3000 (admin/admin) - Redis Test Dashboard
- Prometheus: http://localhost:9090 - Raw metrics
For testing containerized setup:
# Build the application
make build
# Run with Docker (standalone)
docker run redis-py-test-app:latest \
python main.py run --workload-profile basic_rw --duration 60
# Run with Docker (with metrics stack)
docker run --network redis-test-network \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317 \
redis-py-test-app:latest \
python main.py run --workload-profile basic_rw --duration 60
# Install Python dependencies
make install-deps
# Copy environment configuration (optional)
cp .env.example .env
# Edit .env with your Redis configuration
# Test Redis connection
make test-connection
make help # Show all available commands
make install-deps # Install Python dependencies
make test-connection # Test Redis connection
make test # Run basic test (60 seconds)
make build # Build Docker image
make clean # Clean up Python cache and virtual environment
- Grafana: http://localhost:3000 (admin/admin) - Redis Test Dashboard
- Prometheus: http://localhost:9090 - Raw metrics and queries
- Redis: localhost:6379 - Database connection
# Run tests repeatedly (edit code between runs)
python main.py run --workload-profile basic_rw --duration 30
python main.py run --workload-profile high_throughput --duration 60
python main.py run --workload-profile list_operations --duration 45
# View real-time metrics in Grafana: http://localhost:3000
# Run unlimited tests (until Ctrl+C)
python main.py run --workload-profile basic_rw
python main.py run --workload-profile high_throughput
# Monitor in real-time via Grafana: http://localhost:3000
# Stop test with Ctrl+C when ready
# Test Redis connection
make test-connection
# 60-second basic test
make test
# Build Docker image
make build
Basic Read/Write Operations:
# 30-second test
python main.py run --workload-profile basic_rw --duration 30
# Unlimited duration (until Ctrl+C)
python main.py run --workload-profile basic_rw
High Throughput Testing:
# 60-second high-load test
python main.py run --workload-profile high_throughput --duration 60 --threads-per-client 4
# Unlimited high-load test (for sustained performance monitoring)
python main.py run --workload-profile high_throughput --threads-per-client 4
List Operations:
# 45-second list operations test
python main.py run --workload-profile list_operations --duration 45
# Unlimited list operations (for memory usage monitoring)
python main.py run --workload-profile list_operations
Async Mixed Workload:
# 2-minute mixed workload test
python main.py run --workload-profile async_mixed --duration 120
# Unlimited mixed workload (for long-term stability testing)
python main.py run --workload-profile async_mixed
Custom Configuration:
# Multiple clients with custom threading
# Note: app-name will be concatenated with workload profile (python-dev-basic_rw)
python main.py run \
--workload-profile basic_rw \
--duration 300 \
--clients 2 \
--threads-per-client 3 \
--max-connections 10 \
--app-name python-dev \
--version test-v1
Environment Variable Configuration:
# Set via environment variables
export TEST_DURATION=180
export TEST_CLIENT_INSTANCES=3
export TEST_THREADS_PER_CLIENT=2
export APP_NAME=python-custom
python main.py run --workload-profile high_throughput
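Environment variables override built-in defaults at startup; in the app, python-dotenv loads `.env` into the environment before values are read. A sketch of the fallback pattern (the variable names are from the example above; `env_int` itself is an illustrative helper, not the app's code):

```python
import os

def env_int(name, default):
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.getenv(name)
    return int(raw) if raw else default

os.environ["TEST_DURATION"] = "180"
os.environ.pop("TEST_CLIENT_INSTANCES", None)   # ensure the fallback path is exercised

duration = env_int("TEST_DURATION", 300)        # set above, so 180
clients = env_int("TEST_CLIENT_INSTANCES", 4)   # unset, so the default 4
```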
make help # Show all available commands
make dev-start # Start complete development environment (first time)
make dev-start-metrics-stack # Start metrics stack only (quick iteration)
make dev-test # Run quick test (60 seconds)
make dev-stop # Stop metrics stack
make dev-logs # View metrics stack logs
make status # Check service status
make install-deps # Install Python dependencies
make build # Build Docker images
make clean # Clean up everything
Real-time Grafana Dashboard:
- Operations per second (live updates)
- Latency percentiles (p95, median) and average latency
- Error rates and success percentages
- Individual operation breakdowns (GET, SET, BATCH)
- Filterable by app name, version, and operation type
Unlimited Test Monitoring:
# Start long-running test in background
python main.py run --workload-profile high_throughput &
# Monitor in real-time
echo "Monitor at: http://localhost:3000"
echo "Stop test with: kill %1"
# Or run in foreground and stop with Ctrl+C
python main.py run --workload-profile basic_rw
# Press Ctrl+C when ready to stop
Stop Development Environment:
make dev-stop
| Profile | Description | Operations | Use Case |
|---|---|---|---|
| basic_rw | Basic read/write operations | GET, SET, DEL | General testing, development |
| high_throughput | Pipeline-based operations | Batched SET, GET, INCR | Performance testing, load testing |
| list_operations | Redis list operations | LPUSH, LRANGE, LPOP | List-specific workloads |
| async_mixed | Mixed async operations | GET, SET, INCR, DEL | Async pattern testing |
Redis Connection:
--host HOST # Redis host (default: localhost)
--port PORT # Redis port (default: 6379)
--password PASSWORD # Redis password
--db DB # Redis database number (default: 0)
--cluster # Use Redis Cluster mode
--cluster-nodes NODES # Comma-separated cluster nodes (host:port)
--ssl # Use SSL/TLS connection
--ssl-cert-reqs LEVEL # SSL certificate requirements (none/optional/required)
--ssl-ca-certs PATH # Path to CA certificates file
--ssl-certfile PATH # Path to client certificate file
--ssl-keyfile PATH # Path to client private key file
--socket-timeout SECONDS # Socket timeout (default: 5.0)
--socket-connect-timeout SECONDS # Socket connect timeout (default: 5.0)
--max-connections N # Maximum connections per client (default: 50)
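In cluster mode, the comma-separated `--cluster-nodes` string is split into (host, port) pairs before the cluster client is built. A hypothetical parser sketch (the app's actual parsing may differ):

```python
def parse_cluster_nodes(nodes_arg):
    """Split a --cluster-nodes value like "node1:7000,node2:7000"
    into (host, port) tuples. Illustrative helper, not the app's code."""
    pairs = []
    for entry in nodes_arg.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

nodes = parse_cluster_nodes("node1:7000,node2:7000,node3:7000")
```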
Test Configuration:
--duration SECONDS # Test duration in seconds (unlimited if not specified)
--target-ops-per-second N # Target operations per second
--clients N # Number of Redis client instances (default: 4)
--threads-per-client N # Number of worker threads per client (default: 1)
Workload Configuration:
--workload-profile PROFILE # Pre-defined workload profile
--operations OPS # Comma-separated Redis operations (default: SET,GET)
--operation-weights JSON # JSON string of operation weights
--key-prefix PREFIX # Prefix for generated keys (default: test_key)
--key-range N # Range of key IDs to use (default: 10000)
--read-write-ratio RATIO # Ratio of read operations 0.0-1.0 (default: 0.7)
--value-size BYTES # Fixed value size in bytes (overrides min/max)
--value-size-min BYTES # Minimum value size in bytes (default: 100)
--value-size-max BYTES # Maximum value size in bytes (default: 1000)
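The workload options above combine into a simple per-operation loop: pick a command according to the operation weights (or read/write ratio), then build a key from the prefix and key range. A sketch under those assumptions (helper names are illustrative):

```python
import random

def pick_operation(weights):
    """Choose the next Redis command using --operation-weights style weights."""
    ops = list(weights)
    return random.choices(ops, weights=[weights[o] for o in ops], k=1)[0]

def make_key(prefix="test_key", key_range=10000):
    """Build a key the way --key-prefix and --key-range describe."""
    return f"{prefix}:{random.randrange(key_range)}"

op = pick_operation({"GET": 0.7, "SET": 0.3})   # ~70% reads, like --read-write-ratio 0.7
key = make_key()
```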
Pipeline & Advanced:
--use-pipeline # Use Redis pipelining
--pipeline-size N # Operations per pipeline (default: 10)
--async-mode # Use asynchronous operations
--transaction-size N # Operations per transaction (default: 5)
--pubsub-channels CHANNELS # Comma-separated pub/sub channels
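With `--use-pipeline`, queued operations are grouped into chunks of `--pipeline-size` and each chunk is sent as one round trip (via `client.pipeline()` in redis-py). This sketch shows only the batching arithmetic, not the network calls:

```python
def batch(operations, pipeline_size=10):
    """Group queued operations into pipeline-sized chunks, as --pipeline-size
    suggests. Illustrative helper; the app's batching code may differ."""
    return [operations[i:i + pipeline_size]
            for i in range(0, len(operations), pipeline_size)]

chunks = batch([("SET", f"k{i}", "v") for i in range(25)], pipeline_size=10)
```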
Application Identification:
--app-name NAME # Application name for filtering (default: python)
--instance-id ID # Unique instance identifier (auto-generated)
--run-id ID # Unique run identifier (auto-generated)
--version VERSION # Version identifier (defaults to redis-py version)
Logging & Output:
--log-level LEVEL # Logging level (DEBUG/INFO/WARNING/ERROR)
--log-file PATH # Log file path
--output-file PATH # Output file for JSON test summary
--quiet # Suppress periodic stats output
OpenTelemetry & Metrics:
--otel-endpoint URL # OpenTelemetry OTLP endpoint
--otel-service-name NAME # OpenTelemetry service name (default: redis-load-test)
--otel-export-interval MS # Export interval in milliseconds (default: 5000)
Configuration Files:
--config-file PATH # Load configuration from YAML/JSON file
--save-config PATH # Save current configuration to file
The final app name in metrics is automatically concatenated as: {app-name}-{workload-profile}
Examples:
--app-name python --workload-profile basic_rw
  → python-basic_rw
--app-name python-dev --workload-profile high_throughput
  → python-dev-high_throughput
--app-name go-test --workload-profile list_operations
  → go-test-list_operations
This allows easy filtering and identification of different workloads in Grafana dashboards.
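The concatenation rule above is a single join (the function name here is illustrative):

```python
def metrics_app_name(app_name, workload_profile):
    """Reproduce the {app-name}-{workload-profile} concatenation used in metrics."""
    return f"{app_name}-{workload_profile}"
```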
# Redis Connection
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=0
# Test Configuration
TEST_DURATION=300
TEST_CLIENT_INSTANCES=2
TEST_CONNECTIONS_PER_CLIENT=10
TEST_THREADS_PER_CLIENT=3
# Application Identity
APP_NAME=python-dev
INSTANCE_ID= # Auto-generated random ID if not specified
VERSION=v1.0.0
# OpenTelemetry
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# Output
OUTPUT_FILE=test_results.json # If specified, final summary goes to file; otherwise stdout
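The OUTPUT_FILE behavior described above amounts to a simple branch: dump JSON to the file if a path is set, otherwise to stdout. A sketch (not the app's actual code):

```python
import json
import os
import sys
import tempfile

def emit_summary(summary, output_file=None):
    """Write the final summary to output_file if given, otherwise to stdout."""
    if output_file:
        with open(output_file, "w") as fh:
            json.dump(summary, fh, indent=2)
    else:
        json.dump(summary, sys.stdout, indent=2)

# Demo: write a tiny summary to a temp file and confirm it round-trips.
path = os.path.join(tempfile.gettempdir(), "redis_test_summary_demo.json")
emit_summary({"success_rate": "99.84%"}, output_file=path)
```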
Low Resource Testing:
python main.py run \
--workload-profile basic_rw \
--duration 60 \
--client-instances 1 \
--connections-per-client 2 \
--threads-per-client 1
High Load Testing:
python main.py run \
--workload-profile high_throughput \
--duration 300 \
--client-instances 3 \
--connections-per-client 15 \
--threads-per-client 5
Memory Usage Testing:
python main.py run \
--workload-profile list_operations \
--duration 600 \
--client-instances 2 \
--connections-per-client 8 \
--threads-per-client 3
The application outputs a standardized final test summary at the end of each test run. This summary includes key metrics and connection statistics.
Output to stdout (default):
python main.py run \
--workload-profile basic_rw \
--duration 60
Output to file:
python main.py run \
--workload-profile high_throughput \
--duration 300 \
--output-file results.json
Example Final Summary Output:
{
"app_name": "python-basic_rw",
"instance_id": "python-basic_rw-a1b2c3d4",
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"version": "redis-py-5.0.1",
"test_duration": "60.0s",
"workload_name": "basic_rw",
"total_commands_count": 1234567,
"successful_commands_count": 1234565,
"failed_commands_count": 2,
"success_rate": "99.84%",
"overall_throughput": 20576,
"connection_attempts": 5,
"connection_failures": 0,
"connection_drops": 0,
"connection_success_rate": "100.00%",
"reconnection_count": 0,
"avg_reconnection_duration_ms": 0,
"run_start": 1703097600.123,
"run_end": 1703097660.456,
"min_latency_ms": 0.12,
"max_latency_ms": 45.67,
"median_latency_ms": 1.23,
"p95_latency_ms": 3.45,
"p99_latency_ms": 8.90,
"avg_latency_ms": 1.56
}
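The derived fields in this summary follow directly from the raw counters and latency samples. A sketch of the math (field names mirror the JSON above; the p95 here uses a simple nearest-rank approximation, which may differ from the app's method):

```python
import statistics

def derive_stats(total, failed, duration_s, latencies_ms):
    """Recompute the summary's derived fields from raw counters and samples."""
    successful = total - failed
    return {
        "success_rate": f"{successful / total * 100:.2f}%",
        "overall_throughput": round(total / duration_s),
        "median_latency_ms": statistics.median(latencies_ms),
        "p95_latency_ms": sorted(latencies_ms)[int(0.95 * (len(latencies_ms) - 1))],
    }

stats = derive_stats(1000, 2, 10.0, [1.0, 1.2, 1.5, 2.0, 9.0])
```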
Install dependencies:
make install-deps
# OR manually:
# pip install -r requirements.txt
Copy environment configuration (optional):
cp .env.example .env
# Edit .env with your Redis configuration
Test Redis connection:
make test-connection
# OR manually:
# python main.py test-connection
Run a basic load test:
python main.py run --client-instances 1 --connections-per-client 1 --threads-per-client 1 --workload-profile basic_rw
Use a workload profile:
python main.py run --workload-profile high_throughput --client-instances 2 --connections-per-client 5 --threads-per-client 4
Test Redis Cluster:
python main.py run --cluster --cluster-nodes "node1:7000,node2:7000,node3:7000"
Available workload profiles with intuitive names:
- basic_rw: Basic read/write operations (SET, GET, DEL)
- high_throughput: Optimized for maximum throughput with pipelining
- list_operations: List-based operations (LPUSH, LRANGE, LPOP)
- pubsub_heavy: Publish/Subscribe operations
- transaction_heavy: Transaction-based operations (MULTI/EXEC)
- async_mixed: Mixed async operations with pipelining
List all profiles:
python main.py list-profiles
Describe a specific profile:
python main.py describe-profile high_throughput
Key environment variables (see .env.example for the complete list):
# Redis Connection
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=mypassword
# Test Configuration
TEST_CLIENT_INSTANCES=4
TEST_CONNECTIONS_PER_CLIENT=10
TEST_THREADS_PER_CLIENT=8
TEST_TARGET_OPS_PER_SECOND=50000
python main.py run --help
Key options:
- --client-instances: Number of client instances
- --connections-per-client: Connections per client instance
- --threads-per-client: Threads per client instance
- --target-ops-per-second: Target throughput
- --duration: Test duration in seconds
- --workload-profile: Pre-defined workload profile
For 50K+ ops/sec, configure multiple client instances:
python main.py run \
--client-instances 4 \
--connections-per-client 5 \
--threads-per-client 4 \
--workload-profile high_throughput
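Total concurrency is the product of client instances and threads per client, and a global throughput target splits roughly evenly across workers. The arithmetic for the example above (helper names are illustrative):

```python
def total_workers(client_instances, threads_per_client):
    """Total concurrent workers: each client instance runs its own thread pool."""
    return client_instances * threads_per_client

def per_worker_target(target_ops_per_second, workers):
    """Rough per-worker rate needed to hit a global target, assuming an even split."""
    return target_ops_per_second / workers

workers = total_workers(4, 4)              # the example above: 16 workers
rate = per_worker_target(50_000, workers)  # ~3,125 ops/sec per worker
```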
Performance Examples:
- Single thread: 3K-33K ops/sec (depending on workload)
- Multiple threads: 47K+ ops/sec with pipeline workloads
- Basic workloads: ~3,500 ops/sec (SET/GET/DEL)
- Pipeline workloads: 15K-33K ops/sec (batched operations)
The application provides comprehensive metrics and monitoring capabilities:
- Final test summary output to stdout or a JSON file with --output-file
- Real-time statistics during test execution
- OpenTelemetry metrics sent to external metrics stack (optional)
- Comprehensive logging with configurable levels
# Print summary to stdout (default)
python main.py run --workload-profile basic_rw --duration 60
# Save summary to JSON file
python main.py run --workload-profile basic_rw --duration 60 --output-file results.json
Example output:
{
"app_name": "python-basic_rw",
"instance_id": "python-basic_rw-a1b2c3d4",
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"version": "redis-py-5.0.1",
"test_duration": "60.0s",
"workload_name": "basic_rw",
"total_commands_count": 1234567,
"successful_commands_count": 1234565,
"failed_commands_count": 2,
"success_rate": "99.84%",
"overall_throughput": 20576,
"connection_attempts": 5,
"connection_failures": 0,
"connection_drops": 0,
"connection_success_rate": "100.00%",
"reconnection_count": 0,
"avg_reconnection_duration_ms": 0,
"run_start": 1703097600.123,
"run_end": 1703097660.456,
"min_latency_ms": 0.12,
"max_latency_ms": 45.67,
"median_latency_ms": 1.23,
"p95_latency_ms": 3.45,
"p99_latency_ms": 8.90,
"avg_latency_ms": 1.56
}
This application sends metrics to an external Redis Metrics Stack using OpenTelemetry.
You need the separate redis-metrics-stack repository running:
# Clone and start the metrics stack
git clone <redis-metrics-stack-repo>
cd redis-metrics-stack
make start
The application automatically sends these standardized metrics:
- redis_operations_total: Counter of Redis operations
- redis_operation_duration: Histogram of operation latency
- redis_connections_total: Counter of connection attempts
- redis_reconnection_duration_ms: Histogram of reconnection time
Configure the OpenTelemetry endpoint in .env:
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
TODO: Complete production deployment documentation
The application is designed for cloud deployment with:
- Kubernetes templates (k8s/ directory)
- Docker image ready for deployment
- OpenTelemetry integration for observability
TODO Items:
- Complete Kubernetes manifests
- Helm chart implementation
- Production environment configuration
- Scaling and resource management
The application includes a comprehensive test suite for all workload implementations.
Quick test run:
# Run all tests with default settings
python test_workloads.py
# Or use the test runner for better output
python run_tests.py
Advanced test options:
# Verbose output
python run_tests.py --verbose
# Quiet output
python run_tests.py --quiet
# Run specific test pattern
python run_tests.py --pattern "test_basic*"
The test suite covers:
- BaseWorkload functionality - Key/value generation, operation selection, threading safety
- BasicWorkload operations - SET, GET, DEL, INCR operations with error handling
- ListWorkload operations - LPUSH, RPUSH, LRANGE, LPOP, RPOP operations
- PipelineWorkload - Batch operations with individual metrics tracking
- TransactionWorkload - MULTI/EXEC transactions with proper cleanup
- PubSubWorkload - Publish/Subscribe with threading and message handling
- WorkloadFactory - Workload creation logic and type detection
- Integration tests - Real configuration profiles and concurrent execution
- Edge cases - Error conditions, empty configurations, invalid inputs
- test_workloads.py - Main test suite (50+ test cases)
- run_tests.py - Enhanced test runner with formatting and reporting
- main.py - Entry point and CLI interface
- config.py - Configuration management and workload profiles
- redis_client.py - Redis connection management with resilience
- workloads.py - Workload implementations
- test_runner.py - Main test execution engine
- metrics.py - Metrics collection and final test summaries
- logger.py - Logging configuration
- requirements.txt - Python dependencies
- .env.example - Environment variables template
- Makefile - Development and build commands
- Dockerfile - Container image definition
redis-metrics-stack - Observability infrastructure (separate repository):
- OpenTelemetry Collector for metrics collection
- Prometheus for metrics storage
- Grafana dashboards for visualization
- Redis database for testing
- Complete monitoring stack for Redis applications
Note: This Redis test application is standalone and doesn't require the metrics stack to function. The metrics stack provides additional observability features for advanced monitoring and visualization.