A distributed video processing system deployed across multiple cloud providers (AWS and GCP) with intelligent load balancing.
video-processing-pipeline/
├── frontend/ # Next.js frontend application
│ ├── src/ # Source code
│ │ ├── app/ # Next.js app directory
│ │ ├── components/ # React components
│ │ └── services/ # API services
│ ├── public/ # Static files
│ └── package.json # Frontend dependencies
│
├── backend/ # Python backend application
│ ├── src/ # Source code
│ │ └── video_processor/ # Video processing service
│ │ ├── main.py # Main application
│ │ ├── worker.py # Worker process
│ │ └── Dockerfile # Container configuration
│ ├── config/ # Configuration files
│ ├── k8s/ # Kubernetes manifests
│ ├── terraform/ # Infrastructure as Code
│ ├── postman/ # API testing collections
│ ├── scripts/ # Utility scripts
│ └── docker-compose.yml # Local development setup
│
└── README.md # Project documentation
This system implements a scalable video processing service that runs across AWS EKS and Google GKE clusters, with NGINX handling load balancing between clouds.
- Video Processor Service: Python-based service for video transcoding
- NGINX Load Balancer: Distributes traffic between cloud providers
- Kubernetes Deployments: Running on both AWS EKS and GCP GKE
- Infrastructure as Code: Using Terraform for multi-cloud provisioning
The infrastructure is managed using Terraform with separate modules for AWS EKS and GCP GKE:
terraform/
├── main.tf              # Main configuration for both clouds
└── modules/
    ├── aws-eks/         # AWS EKS cluster configuration
    └── gcp-gke/         # GCP GKE cluster configuration
To deploy the infrastructure:
terraform init
terraform plan
terraform apply
The project uses two main containers:
- Video Processor:
  - Python-based (FastAPI) video processing service
  - FFmpeg integration for transcoding
  - Resource-optimized container with multi-stage build
- NGINX Load Balancer:
  - Handles traffic distribution
  - Implements health checks
  - Provides failover capability
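The cross-cloud failover described above could be expressed roughly like this in the NGINX configuration. This is a minimal sketch: the upstream hostnames and ports are placeholders, not values from the repository, and the real addresses would come from each cluster's load-balancer service.

```nginx
upstream video_processors {
    # Passive health checks: a backend is marked down after 3 failed
    # attempts and retried after 30s, giving basic failover between clouds.
    server aws-eks.example.com:8080 max_fails=3 fail_timeout=30s;
    server gcp-gke.example.com:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://video_processors;
        # Retry the other upstream on connection errors or 5xx responses
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Open-source NGINX supports only passive health checks as shown; active health probes would require NGINX Plus or an external checker.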
Key Kubernetes resources:
- Video Processor Deployment:
  - 3 replicas for high availability
  - Resource limits and requests defined
  - Horizontal scaling capabilities
- NGINX Load Balancer:
  - ConfigMap for NGINX configuration
  - 2 replicas for redundancy
  - Automatic upstream server detection
Use Docker Compose for local development:
docker-compose up --build
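For orientation, a local docker-compose.yml for this stack might look something like the sketch below. The service names, build path, and ports are assumptions for illustration, not copied from the repository's actual file.

```yaml
# Illustrative sketch only; see backend/docker-compose.yml for the real setup.
services:
  video-processor:
    build: ./src/video_processor
    ports:
      - "8080:8080"
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```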
- Docker and Docker Compose installed
- Postman for API testing
- FFmpeg installed locally (optional)
- Start the services:
chmod +x scripts/local-setup.sh
./scripts/local-setup.sh
- Import the Postman collection:
  - Open Postman
  - Import postman/video-processor-api.json
- The collection includes three endpoints:
  - POST /process: Submit a video processing job
  - GET /jobs/{job_id}: Get job status
  - GET /jobs: List all jobs
- Submit a video processing job:
POST http://localhost:8080/process
{
"input_url": "https://example.com/sample.mp4",
"resolutions": ["1080p", "720p", "480p"],
"job_id": "test-job-1"
}
- Check job status:
GET http://localhost:8080/jobs/test-job-1
- List all jobs:
GET http://localhost:8080/jobs
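If you prefer scripting the API over Postman, the requests above can be driven from Python's standard library. This is a hypothetical client sketch: the endpoint paths and JSON fields match the examples above, but the helper names (build_job, submit_job, and so on) are illustrative, not part of the service.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"


def build_job(input_url, resolutions, job_id):
    """Assemble the JSON body expected by POST /process."""
    return {"input_url": input_url, "resolutions": resolutions, "job_id": job_id}


def submit_job(job):
    """POST a job to /process and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/process",
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def get_status(job_id):
    """GET /jobs/{job_id} and return the decoded JSON response."""
    with urllib.request.urlopen(f"{BASE_URL}/jobs/{job_id}") as resp:
        return json.load(resp)


def list_jobs():
    """GET /jobs and return the decoded JSON response."""
    with urllib.request.urlopen(f"{BASE_URL}/jobs") as resp:
        return json.load(resp)


# Example usage (requires the stack from `docker-compose up` to be running):
# job = build_job("https://example.com/sample.mp4",
#                 ["1080p", "720p", "480p"], "test-job-1")
# submit_job(job)
# get_status("test-job-1")
# list_jobs()
```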
For testing, you can use these openly licensed sample videos:
- http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4
- http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4
- Deploy infrastructure:
cd terraform
terraform apply
- Configure kubectl contexts:
aws eks update-kubeconfig --name video-processing-aws
gcloud container clusters get-credentials video-processing-gcp
- Deploy Kubernetes resources:
kubectl apply -f k8s/
- Create AWS IAM user with appropriate permissions
- Configure AWS credentials:
aws configure
# Or manually create credentials file:
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOF
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
EOF
- Create a Service Account in GCP Console
- Download the JSON key file
- Store it securely:
mkdir -p credentials
mv path/to/downloaded-key.json credentials/gcp-service-account.json
- Copy the template:
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
- Edit terraform.tfvars with your credentials
- Create Kubernetes secrets for both clusters:
kubectl create secret generic cloud-credentials \
--from-file=aws-credentials=credentials/aws-credentials \
--from-file=gcp-credentials=credentials/gcp-service-account.json
- Never commit credentials to version control
- Rotate credentials regularly
- Use environment-specific credentials
- Enable audit logging for all credential usage
- Use HashiCorp Vault for production environments
- Kubernetes metrics available through metrics-server
- Horizontal Pod Autoscaling based on CPU/Memory
- Cloud-native monitoring tools integration
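The CPU/memory-based autoscaling mentioned above could be declared with a manifest along these lines. The resource names and thresholds here are illustrative assumptions, not values taken from the repository's k8s/ directory.

```yaml
# Illustrative HPA sketch: scales the video-processor Deployment
# between 3 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: video-processor-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: video-processor
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that this requires metrics-server (listed above) to be installed in each cluster so the HPA controller can read pod resource usage.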
- High Availability: Multi-cloud deployment prevents single cloud failure
- Geographic Distribution: Lower latency for global users
- Cost Optimization: Ability to leverage spot instances and preemptible VMs
- Scalability: Independent scaling in each cloud
- Load Distribution: Intelligent traffic routing based on load and health
- Network isolation using VPC/VNet
- RBAC enabled on Kubernetes clusters
- TLS encryption for inter-service communication
- Container security best practices implemented
Frontend:
- Next.js
- Shadcn UI
- Framer Motion
- Lucide Icons
- Tailwind CSS
- TypeScript
Backend:
- Python
- FastAPI
- Redis
- Docker
- Kubernetes
- Terraform
- AWS EKS
- GCP GKE
- NGINX
Features:
- Upload video
- Process video
- Download video
- List jobs
- Get job status
- Load balancing between clouds
- High availability
- Scalability
- Cost optimization
- Intelligent traffic routing
This project demonstrates a modern, cloud-native application with:
- Microservices architecture
- Containerization
- Infrastructure as Code
- Multi-cloud deployment
- Real-time processing
- Modern UI/UX design