# 07 TROUBLESHOOTING

This guide covers common issues encountered during deployment, database management, and system maintenance, based on real-world troubleshooting scenarios and their solutions.
- OAuth 2.1 Authentication Issues
- Version Synchronization Issues
- Database and Storage Problems
- Deployment and Service Management
- Database Cleanup and Maintenance
- Configuration Conflicts
- Remote Server Issues
- Security Issues
## OAuth 2.1 Authentication Issues

### Claude Code Cannot Connect over HTTP Transport

**Symptoms:**
- Claude Code shows "Failed to connect" for the HTTP transport MCP server
- Server logs show OAuth discovery requests returning 404 Not Found
- OAuth endpoints not responding

**Root Cause:** OAuth 2.1 not enabled or misconfigured.

**Solution:**

1. **Enable OAuth 2.1:**

   ```bash
   # Set OAuth environment variable
   export MCP_OAUTH_ENABLED=true

   # Restart server
   uv run memory server --http
   ```

2. **Verify OAuth Discovery:**

   ```bash
   # Test OAuth discovery endpoint
   curl http://localhost:8000/.well-known/oauth-authorization-server/mcp

   # Expected response should include:
   # - issuer
   # - authorization_endpoint
   # - token_endpoint
   # - registration_endpoint
   ```

3. **Check Server Logs:**

   ```bash
   # Monitor OAuth activity
   tail -f logs/mcp-memory-service.log | grep -i oauth
   ```
### OAuth Endpoints Return 404

**Symptoms:**
- `/.well-known/oauth-authorization-server/mcp` returns 404
- `/oauth/register` returns 404
- OAuth functionality completely unavailable

**Root Cause:** `MCP_OAUTH_ENABLED` not set to `true`, or the server needs a restart.

**Solution:**

1. **Check OAuth Status:**

   ```bash
   echo $MCP_OAUTH_ENABLED
   # Should output 'true'
   ```

2. **Enable and Restart:**

   ```bash
   export MCP_OAUTH_ENABLED=true

   # Kill existing server
   pkill -f "memory server"

   # Start fresh
   uv run memory server --http
   ```

3. **Verify All OAuth Endpoints:**

   ```bash
   # Discovery endpoint
   curl -v http://localhost:8000/.well-known/oauth-authorization-server/mcp

   # Registration endpoint
   curl -X POST http://localhost:8000/oauth/register \
     -H "Content-Type: application/json" \
     -d '{"client_name": "Test Client"}'
   ```
### Invalid Token Errors

**Symptoms:**
- API requests fail with "Invalid token" errors
- JWT validation errors in server logs
- Token appears valid but is rejected

**Root Cause:** JWT secret key mismatch or token expiration.

**Solution:**

1. **Check JWT Secret Consistency:**

   ```bash
   # Verify secret key is set
   echo $MCP_OAUTH_SECRET_KEY

   # If empty, set a consistent key
   export MCP_OAUTH_SECRET_KEY="your-secure-256-bit-secret-key"
   ```

2. **Debug Token Validation:**

   ```bash
   # Enable debug logging
   export LOG_LEVEL=DEBUG
   uv run memory server --http --debug

   # Check JWT validation logs
   tail -f logs/mcp-memory-service.log | grep -E "(jwt|token|auth)"
   ```

3. **Re-register Client:**

   ```bash
   # Remove existing Claude Code server
   claude mcp remove memory-service

   # Re-add with fresh OAuth registration
   claude mcp add --transport http memory-service http://localhost:8000/mcp
   ```
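If a token is being rejected, checking its expiry locally is often the quickest test. The sketch below (a bash/python3 one-liner; the token value is a placeholder) decodes the JWT payload without verifying the signature and prints the `exp` and `scope` claims:

```bash
# Decode the JWT payload (the second dot-separated segment) without verifying the signature.
# The token value is a placeholder; python3 must be available on PATH.
TOKEN="your-jwt-token"
echo "$TOKEN" | cut -d. -f2 | python3 -c "
import base64, datetime, json, sys
seg = sys.stdin.read().strip()
seg += '=' * (-len(seg) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(seg))
exp = claims.get('exp')
if exp:
    print('expires:', datetime.datetime.fromtimestamp(exp, datetime.timezone.utc).isoformat())
print('scope:', claims.get('scope'))
"
```

An expired `exp` means the client simply needs a new token; a current `exp` with persistent rejections points back at the secret key mismatch described above.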
### OAuth Fails on Remote Servers (HTTPS Required)

**Symptoms:**
- OAuth works on localhost but fails on a remote server
- "HTTPS required" errors in production
- OAuth discovery succeeds but authorization fails

**Root Cause:** OAuth 2.1 requires HTTPS for non-localhost URLs.

**Solution:**

1. **Enable HTTPS:**

   ```bash
   # Production HTTPS setup
   export MCP_HTTPS_ENABLED=true
   export MCP_SSL_CERT_FILE="/path/to/cert.pem"
   export MCP_SSL_KEY_FILE="/path/to/key.pem"
   export MCP_OAUTH_ISSUER="https://your-domain.com"

   uv run memory server --http
   ```

2. **Verify HTTPS Configuration:**

   ```bash
   # Test HTTPS OAuth discovery
   curl https://your-domain.com/.well-known/oauth-authorization-server/mcp

   # Check SSL certificate
   openssl s_client -connect your-domain.com:443
   ```
### Client Registration Errors

**Symptoms:**
- Client registration returns 400/500 errors
- "Invalid client metadata" errors
- Registration endpoint responds but returns errors

**Root Cause:** Invalid client registration request or server configuration.

**Solution:**

1. **Test Manual Registration:**

   ```bash
   # Valid registration request
   curl -X POST http://localhost:8000/oauth/register \
     -H "Content-Type: application/json" \
     -d '{
       "client_name": "Test Client",
       "redirect_uris": ["http://localhost:3000/callback"],
       "grant_types": ["authorization_code"],
       "response_types": ["code"]
     }'
   ```

2. **Check Server OAuth Configuration:**

   ```bash
   # Verify OAuth settings
   echo "OAuth Enabled: $MCP_OAUTH_ENABLED"
   echo "OAuth Issuer: $MCP_OAUTH_ISSUER"
   echo "OAuth Secret: ${MCP_OAUTH_SECRET_KEY:0:10}..."
   ```

3. **Review Registration Logs:**

   ```bash
   # Monitor registration attempts
   tail -f logs/mcp-memory-service.log | grep -i "registration\|client"
   ```
### Insufficient Scope (403 Forbidden)

**Symptoms:**
- Token received but API calls return 403 Forbidden
- "Insufficient scope" errors
- OAuth flow completes but access is denied

**Root Cause:** Token lacks the scopes required for the API operations.

**Solution:**

1. **Check Token Scopes:**

   ```bash
   # Decode JWT token to check scopes (requires jwt-cli)
   jwt decode your-jwt-token

   # Look for the "scope" claim in the payload
   ```

2. **Request Proper Scopes:**

   ```bash
   # Registration with full scopes
   curl -X POST http://localhost:8000/oauth/register \
     -H "Content-Type: application/json" \
     -d '{
       "client_name": "Claude Code",
       "scope": "read write admin"
     }'
   ```

3. **Verify Scope Requirements:**

   ```bash
   # Check which endpoints require which scopes:
   # in the server logs, look for scope validation errors
   ```
## Version Synchronization Issues

### API Docs Show a Stale, Hardcoded Version

**Symptoms:**
- API docs dashboard (`/api/docs`) shows a hardcoded version such as `1.0.0`
- Main dashboard shows the correct current version
- Version numbers are inconsistent across the application

**Root Cause:** Hardcoded version strings in multiple files instead of dynamic imports.

**Files Affected:**
- `src/mcp_memory_service/web/app.py` - FastAPI app version
- `src/mcp_memory_service/web/__init__.py` - Web module version
- `src/mcp_memory_service/config.py` - Server version
- `pyproject.toml` - Main project version
- `src/mcp_memory_service/__init__.py` - Package version
**Solution:**

1. **Update FastAPI App Version:**

   ```python
   # In src/mcp_memory_service/web/app.py
   from .. import __version__

   app = FastAPI(
       title="MCP Memory Service",
       description="HTTP REST API and SSE interface for semantic memory storage",
       version=__version__,  # Changed from hardcoded "1.0.0"
       # ... other config
   )
   ```

2. **Consolidate Web Module Version:**

   ```python
   # In src/mcp_memory_service/web/__init__.py
   # Replace:
   __version__ = "0.2.0"

   # With: import from the main package
   from .. import __version__
   ```

3. **Update Server Version:**

   ```python
   # In src/mcp_memory_service/config.py
   # Replace:
   SERVER_VERSION = "0.2.2"

   # With: dynamic import
   from . import __version__ as SERVER_VERSION
   ```

4. **Version Bump and Tag:**

   ```bash
   # Update both files with the new version:
   # pyproject.toml:                      version = "x.y.z"
   # src/mcp_memory_service/__init__.py:  __version__ = "x.y.z"

   git add pyproject.toml src/mcp_memory_service/__init__.py CHANGELOG.md
   git commit -m "fix: vx.y.z - synchronize version numbers across all components"
   git tag -a vx.y.z -m "Release vx.y.z: Version Synchronization Fix"
   git push origin main && git push origin vx.y.z
   ```
**Verification:**
- Check that the `/api/health` endpoint shows the correct version
- Check that `/openapi.json` shows the correct version
- Verify `/api/docs` displays the current version
- Confirm the main dashboard shows the same version
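The checks above can be combined into a single consistency check. The sketch below reuses the health endpoint and package import already shown in this guide; the host, port, and `-k` flag are placeholders for your deployment:

```bash
# Compare the installed package version with the version reported by the running API.
PKG=$(python -c "from src.mcp_memory_service import __version__; print(__version__)")
API=$(curl -sk https://server:8443/api/health | python3 -c "import sys, json; print(json.load(sys.stdin).get('version'))")
if [ "$PKG" = "$API" ]; then
  echo "Versions match: $PKG"
else
  echo "MISMATCH: package=$PKG api=$API"
fi
```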
## Database and Storage Problems

### Corruption Reported by MCP Tools but Database Is Healthy

**Symptoms:**
- MCP memory tools report database corruption
- Claude Desktop works fine with the same database
- Direct SQLite access shows a healthy database

**Root Cause:** Configuration mismatch, with different tools accessing different database instances.

**Diagnosis Steps:**

1. **Check Database Paths:**

   ```bash
   # Claude Desktop config
   cat ~/.config/claude/claude_desktop_config.json | grep -A5 -B5 "MCP_MEMORY_SQLITE_PATH"

   # Default MCP tools path
   python -c "from src.mcp_memory_service.config import SQLITE_VEC_PATH; print('Default:', SQLITE_VEC_PATH)"
   ```

2. **Test Direct Database Access:**

   ```bash
   sqlite3 "/path/to/sqlite_vec.db" "SELECT COUNT(*) FROM memories;"
   sqlite3 "/path/to/sqlite_vec.db" "PRAGMA integrity_check;"
   ```

3. **Compare Memory Counts:**

   ```bash
   # Claude Desktop shows the memory count in its UI
   # Direct SQLite:
   sqlite3 "/path/to/db" "SELECT COUNT(*) FROM memories;"
   # MCP tools: check the health endpoint or tool output
   ```

**Solution:** Ensure all tools use the same database path:

```bash
export MCP_MEMORY_SQLITE_PATH="/Users/username/Library/Application Support/mcp-memory/sqlite_vec.db"
# Or set in the environment for consistency
```
### Memory Count Mismatch Between Tools

**Symptoms:**
- Claude Desktop: 919 memories
- MCP tools: 344 memories
- Remote dashboard: 925 memories

**Root Cause:** Multiple database instances, or sync issues between local and remote.

**Investigation:**

1. **Check File Timestamps:**

   ```bash
   ls -la "/path/to/sqlite_vec.db"
   ls -la "/path/to/sqlite_vec.db-wal"
   ls -la "/path/to/sqlite_vec.db-shm"
   ```

2. **Check Database Sizes:**

   ```bash
   du -sh "/path/to/"*.db
   ```

3. **Identify the Active Database:**

   ```bash
   lsof | grep sqlite_vec.db
   # See which processes have the file open
   ```

**Solution:**

- Ensure the WAL is properly checkpointed:

  ```bash
  sqlite3 db "PRAGMA wal_checkpoint(FULL);"
  ```

- Restart Claude Desktop to refresh the connection
- Verify consistent paths across all configurations
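When several candidate database files exist, comparing their contents directly usually identifies the active one. A minimal sketch (the paths are placeholders; `created_at` is stored as a Unix epoch, as used elsewhere in this guide):

```bash
# Compare memory counts and newest timestamps across candidate database files.
for db in "/Users/username/Library/Application Support/mcp-memory/sqlite_vec.db" \
          "/path/to/other/sqlite_vec.db"; do
  echo "== $db"
  sqlite3 "$db" "SELECT COUNT(*), datetime(MAX(created_at), 'unixepoch') FROM memories;"
done
```

The file with the highest count and the most recent timestamp is almost always the one the active tools are writing to.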
## Deployment and Service Management

### Port Conflicts: "Address Already in Use"

**Symptoms:**
- Service fails to start: "address already in use"
- Multiple MCP memory service processes running
- systemd service restart loop

**Diagnosis:**

1. **Check Port Usage:**

   ```bash
   ss -tulpn | grep :8443
   netstat -tulpn | grep :8443   # if netstat is available
   ```

2. **Find Running Processes:**

   ```bash
   ps aux | grep -E '(run_server|memory)' | grep -v grep
   ```

3. **Check the systemd Service:**

   ```bash
   systemctl status mcp-memory.service
   journalctl -u mcp-memory.service --since "10 minutes ago"
   ```

**Solution:**

1. **Clean Up Process Conflicts:**

   ```bash
   # Stop systemd service
   sudo systemctl stop mcp-memory.service

   # Kill conflicting processes
   pkill -f "run_server.py"
   pkill -f "memory"

   # Wait for cleanup
   sleep 2

   # Start only the systemd service
   sudo systemctl start mcp-memory.service
   ```

2. **Update the systemd Service Config:**

   ```bash
   sudo systemctl cat mcp-memory.service           # Check current config
   sudo systemctl edit mcp-memory.service --full   # Update if needed
   sudo systemctl daemon-reload
   ```
### Stale API Key in the systemd Service

**Symptoms:**
- systemd service has an old API key
- Authentication failures after security updates

**Solution:**

1. **Update the systemd Service:**

   ```bash
   sudo systemctl edit mcp-memory.service --full
   # Update Environment=MCP_API_KEY=new-secure-key
   sudo systemctl daemon-reload
   sudo systemctl restart mcp-memory.service
   ```

2. **Verify Key Consistency:**

   ```bash
   # Check service config
   sudo systemctl cat mcp-memory.service | grep MCP_API_KEY

   # Test API access
   curl -k -H "Authorization: Bearer new-key" https://server:8443/api/health
   ```
## Database Cleanup and Maintenance

**Symptoms:**
- Large disk usage from old database files
- Confusion about which database is active
- Old backup files taking up space

**Safe Cleanup Process:**

1. **Identify the Active Database:**

   ```bash
   # Check Claude Desktop config for the active path
   grep -r "MCP_MEMORY_SQLITE_PATH" ~/.config/claude/

   # Verify the active database
   sqlite3 "/active/path/sqlite_vec.db" "SELECT COUNT(*) FROM memories;"
   ```

2. **Create a Fresh Backup:**

   ```bash
   cp "/active/path/sqlite_vec.db" "/backup/location/sqlite_vec_backup_$(date +%Y%m%d_%H%M%S).db"
   ```

3. **Safe Files to Remove:**

   ```bash
   # Old ChromaDB files (if migrated to SQLite-vec)
   rm -rf "/path/to/chroma_db/"
   rm "/path/to/chroma_export.json"

   # Old backup directories (check dates first!)
   rm -rf "/path/to/old_backups_from_months_ago/"

   # Staging/temporary databases
   rm "/path/to/sqlite_vec_staging.db"

   # Cache directories (will regenerate)
   rm -rf "/path/to/st_cache/"

   # Empty directories
   rmdir "/path/to/empty_dirs/"
   ```

4. **Keep These Files:**

   ```bash
   # Active database and WAL files
   sqlite_vec.db       # Main database
   sqlite_vec.db-shm   # Shared memory file
   sqlite_vec.db-wal   # Write-ahead log

   # Recent backup
   sqlite_vec_backup_YYYYMMDD_HHMMSS.db
   ```

5. **Verification:**

   ```bash
   # Test database integrity
   sqlite3 "/path/to/sqlite_vec.db" "PRAGMA integrity_check;"

   # Verify the memory count is unchanged
   sqlite3 "/path/to/sqlite_vec.db" "SELECT COUNT(*) FROM memories;"

   # Test that Claude Desktop still works
   # (check the memory count in its UI)
   ```

**Disk Space Savings:** A typical cleanup can free 100-200MB+ from:
- Old ChromaDB files (10-50MB)
- Sentence transformer caches (50-100MB)
- Old backup directories (varies)
- Export/staging files (varies)
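Before deleting anything, it helps to see where the space actually goes. A quick survey (the base path is a placeholder for your data directory):

```bash
# List the largest files and directories under the memory data directory.
du -ah "/Users/username/Library/Application Support/mcp-memory/" | sort -rh | head -20
```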
## Remote Server Issues

### SSH Connection Failures

**Symptoms:**
- `ssh user@hostname.local` fails
- "Permission denied (publickey,password)"

**Root Cause:** mDNS resolution or SSH key configuration issues.

**Solution:** Use the IP address instead of the hostname:

```bash
# Instead of: ssh user@hostname.local
# Use:        ssh user@192.168.x.x
ssh hkr@10.0.1.30
```
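To confirm whether mDNS resolution (rather than SSH keys) is the problem, compare name resolution against a direct IP connection; the hostname and IP below are placeholders:

```bash
# Does the .local name resolve at all (mDNS)?
ping -c 1 hostname.local

# On Linux hosts with Avahi installed, resolve the name explicitly.
avahi-resolve-host-name hostname.local

# If the IP works but the .local name does not, the issue is name resolution,
# not SSH authentication; -v shows which keys and auth methods are attempted.
ssh -v user@192.168.x.x
```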
### systemd Service Name Not Found

**Symptoms:**
- `systemctl status mcp-memory-service` reports the unit is not found
- The service exists but under a different name

**Diagnosis:**

```bash
systemctl list-units --type=service | grep -i mcp
systemctl list-units --type=service | grep -i memory
```

**Common Service Names:**
- `mcp-memory.service`
- `mcp-memory-service.service`
- `mcp-http-dashboard.service`
## Configuration Conflicts

### Dashboard Shows Fewer Memories Than Expected (Backend Mismatch)

**Symptoms:**
- Dashboard shows fewer memories than expected
- Today's memories not visible in the dashboard
- Memory count discrepancy between services
- Different memory counts in Claude Code vs. the Dashboard

**Root Cause:** Backend mismatch between services: the dashboard queries one backend (e.g., SQLite-vec) while Claude Code stores to another (e.g., Cloudflare).

**Diagnosis Steps:**

1. **Check Backend Configuration:**

   ```bash
   # Check dashboard backend
   curl -s "http://localhost:8889/api/health/detailed" | python3 -c "import sys, json; d=json.load(sys.stdin); print('Backend:', d['storage'].get('backend'))"

   # Check Claude Code MCP config
   cat ~/.claude.json | grep -A10 "memory.*mcp"
   ```

2. **Verify Memory Counts:**

   ```bash
   # SQLite-vec count
   sqlite3 "$MCP_MEMORY_SQLITE_PATH" "SELECT COUNT(*) FROM memories;"

   # Cloudflare count (via API)
   curl -s "http://localhost:8889/api/health/detailed" | python3 -c "import sys, json; d=json.load(sys.stdin); print('Total:', d['storage'].get('total_memories'))"
   ```

3. **Check Last Update Times:**

   ```bash
   # Last memory in SQLite-vec
   sqlite3 "$MCP_MEMORY_SQLITE_PATH" "SELECT datetime(MAX(created_at), 'unixepoch') FROM memories;"

   # Today's memory count
   sqlite3 "$MCP_MEMORY_SQLITE_PATH" "SELECT COUNT(*) FROM memories WHERE date(created_at, 'unixepoch') = date('now');"
   ```

**Solution:** Use the Hybrid backend for automatic synchronization:

```bash
# Stop current dashboard
pkill -f "uvicorn.*8889"

# Start with the Hybrid backend
MCP_MEMORY_STORAGE_BACKEND=hybrid \
CLOUDFLARE_API_TOKEN=your-token \
CLOUDFLARE_ACCOUNT_ID=your-account-id \
CLOUDFLARE_D1_DATABASE_ID=your-db-id \
CLOUDFLARE_VECTORIZE_INDEX=mcp-memory-index \
MCP_MEMORY_SQLITE_PATH="/path/to/sqlite_vec.db" \
MCP_HTTP_ENABLED=true \
MCP_OAUTH_ENABLED=false \
uv run python -m uvicorn src.mcp_memory_service.web.app:app --host 127.0.0.1 --port 8889

# Monitor sync progress
curl -s "http://localhost:8889/api/health/sync-status"
```

**Prevention:**
- Always use the Hybrid backend for production
- Verify `MCP_MEMORY_STORAGE_BACKEND` matches across all services
- Check that `/api/health/detailed` shows the expected backend before assuming a bug
### Environment Variables Not Loaded from .env

**Symptoms:**
- Service uses SQLite-vec despite a Cloudflare configuration in `.env`
- Backend configuration in the `.env` file is ignored
- CLI defaults override environment settings

**Root Cause:** CLI default parameters override environment variables (fixed in v6.16.0+).

**Solution:**

1. **Upgrade to v6.16.0 or later:**

   ```bash
   git pull origin main
   git checkout v6.16.0   # or later
   uv sync
   ```

2. **Verify the `.env` File Location:**

   ```bash
   # Must be in the project root
   ls -la .env
   cat .env | grep MCP_MEMORY_STORAGE_BACKEND
   ```

3. **Use Explicit Environment Variables:**

   ```bash
   # Set explicitly when starting the service
   export MCP_MEMORY_STORAGE_BACKEND=cloudflare
   export CLOUDFLARE_API_TOKEN=your-token
   # ... other vars

   uv run memory server
   ```

4. **Validate the Configuration:**

   ```bash
   # Check which backend is actually loaded
   uv run python -c "from mcp_memory_service.config import STORAGE_BACKEND; print(f'Backend: {STORAGE_BACKEND}')"
   ```
### Cloudflare Authentication Failures

**Symptoms:**
- "Invalid API Token" errors
- "Error 9109: Cannot use access token from location"
- API requests return 401 Unauthorized
- Vectorize API fails but D1 works

**Root Causes:**
- Invalid or expired API token
- IP address restrictions (VPN blocking access)
- Insufficient permissions for Vectorize/D1/Workers AI

**Diagnosis:**

```bash
# Test Cloudflare connectivity
python scripts/validation/diagnose_backend_config.py

# Manual token verification
curl -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  https://api.cloudflare.com/client/v4/user/tokens/verify

# Test D1 access
curl -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/d1/database/$CLOUDFLARE_D1_DATABASE_ID/query" \
  -X POST -d '{"sql":"SELECT 1"}'
```

**Solution:**

1. **Regenerate the API Token:**
   - Go to Cloudflare Dashboard → Account → API Tokens
   - Delete the old token
   - Create a new token with these permissions:
     - Account:D1:Edit
     - Account:Vectorize:Edit
     - Account:Workers AI:Edit

2. **Configure IP Restrictions:**
   - Add your current IP to the token allowlist
   - Or disable IP restrictions for development
   - If using a VPN, check the VPN's public IP with `curl ifconfig.me`

3. **Update Credentials:**

   ```bash
   # Update the .env file
   echo "CLOUDFLARE_API_TOKEN=new-token-here" >> .env

   # Update the Claude Code config:
   # edit the memory server env section in ~/.claude.json
   ```
### Conflicting Global and Project MCP Configurations

**Symptoms:**
- Multiple memory servers listed in the `/mcp` command
- Unclear which configuration is active
- Conflicting backends between global and project configs
- Service connects to the wrong backend

**Root Cause:** Conflicting configurations in the global (`~/.claude.json`) and project (`.mcp.json`) files.

**Diagnosis:**

```bash
# Check for duplicate configurations
grep -r "memory.*mcp" ~/.claude.json ~/.mcp.json 2>/dev/null

# List MCP servers
claude mcp list

# Verify the active backend
curl -s "http://localhost:8889/api/health/detailed" | jq .storage.backend
```

**Solution - Single Source of Truth:**

1. **Use the Global Config Only:**

   ```bash
   # Remove the project-level memory server config:
   # edit .mcp.json and remove the memory server entry.
   # Keep only the global config in ~/.claude.json
   ```

2. **Configuration Precedence:**
   - Global: `~/.claude.json` → authoritative for MCP servers
   - Project: `.env` → credentials only (no server config)
   - Shell: environment variables → runtime overrides

3. **Validate a Single Instance:**

   ```bash
   # Should show only ONE memory server
   claude mcp list | grep memory

   # Restart Claude Code to apply
   ```
### Dashboard Unreachable or Returns 401

**Symptoms:**
- Dashboard returns "401 Unauthorized"
- "Connection refused" errors
- OAuth blocking access to API endpoints
- Wrong port in the configuration

**Root Causes:**
- OAuth enabled, blocking local access
- Dashboard running on a different port than expected
- `MCP_HTTP_ENABLED` not set
- Firewall blocking connections

**Diagnosis:**

```bash
# Check if the dashboard is running
lsof -i :8889
lsof -i :8443

# Test the health endpoint
curl -v http://localhost:8889/api/health

# Check OAuth status
curl http://localhost:8889/.well-known/oauth-authorization-server/mcp
```

**Solution:**

1. **Disable OAuth for Local Development:**

   ```bash
   # Add to the environment or .env
   export MCP_OAUTH_ENABLED=false
   export MCP_HTTP_ENABLED=true

   # Restart the dashboard
   pkill -f "uvicorn.*8889"
   uv run python -m uvicorn src.mcp_memory_service.web.app:app --host 127.0.0.1 --port 8889
   ```

2. **Fix the Port Configuration:**

   ```bash
   # Check which port is actually in use
   ps aux | grep uvicorn

   # Update the hooks config if needed:
   # ~/.claude/hooks/config.json
   # change the endpoint from 8443 to the actual port
   ```

3. **Enable the HTTP Server:**

   ```bash
   # Ensure the HTTP server is enabled
   export MCP_HTTP_ENABLED=true

   # Start the server with HTTP
   uv run memory server --http
   ```
### Slow Session-Start Hooks (Cloudflare Initialization)

**Symptoms:**
- Session-start hooks time out (>3 seconds)
- Claude Code is slow to start
- MCP connection delays
- Hook initialization failures

**Root Cause:** The Cloudflare backend takes 5-10+ seconds to initialize due to network verification (D1 schema check, Vectorize index verification, R2 bucket checks).

**Solution - Use the Hybrid Backend:**

Update `~/.claude/hooks/config.json`:

```json
{
  "memoryService": {
    "mcp": {
      "serverCommand": ["uv", "run", "memory", "server", "-s", "hybrid"],
      "serverWorkingDir": "/path/to/mcp-memory-service"
    }
  }
}
```

**Benefits:**
- ~100ms initialization (vs 5-10s for Cloudflare)
- Background sync to Cloudflare
- Works offline with graceful degradation
- Automatic failover if Cloudflare is unavailable

**Verification:**

```bash
# Test hook initialization time
time node ~/.claude/hooks/core/session-start.js

# Should complete in < 2 seconds
# Check for the hybrid backend in the output
```
### Duplicate or Stale Memory Server Entries

**Symptoms:**
- Multiple "memory" entries in the `/mcp` command
- Old servers pointing to dead remote hosts
- Configuration confusion across projects
- Service connects to the wrong backend

**Root Cause:** Legacy configurations not cleaned up after migration or server changes.

**Solution:**

1. **Identify All Memory Servers:**

   ```bash
   # Global config
   grep -A20 '"memory"' ~/.claude.json

   # Project configs
   find ~ -name ".mcp.json" -exec grep -l "memory" {} \;
   ```

2. **Remove Duplicate Configurations:**

   ```bash
   # Remove old server entries
   claude mcp remove old-memory-server

   # Keep only one authoritative config in ~/.claude.json
   ```

3. **Clean Project Overrides:**

   ```bash
   # Edit each .mcp.json found:
   # remove "memory" or "memory-service" entries
   # and let the global config be authoritative
   ```

4. **Verify a Single Configuration:**

   ```bash
   # Should show exactly ONE memory server
   claude mcp list

   # Restart Claude Code
   ```
### Configuration Best Practices

1. **Backend Selection:**
   - ✅ Production: Hybrid backend (fast + cloud sync)
   - ✅ Development: SQLite-vec (local only)
   - ✅ Team: Cloudflare (shared cloud storage)
   - ❌ Never: mix backends without a sync strategy

2. **Configuration Management:**
   - ✅ Single global MCP config in `~/.claude.json`
   - ✅ Project `.env` for credentials only
   - ✅ Explicit environment variables for clarity
   - ❌ Duplicate memory server configs in projects

3. **Validation Checklist:**

   ```bash
   # Before reporting bugs, verify:
   # 1. Check the backend:
   curl -s http://localhost:8889/api/health/detailed | jq .storage.backend
   # 2. Verify env vars:
   env | grep MCP_MEMORY
   # 3. Count memories in each backend
   # 4. Test the configuration:
   python scripts/validation/diagnose_backend_config.py
   # 5. Check service logs for initialization errors
   ```

4. **Debugging Methodology:**
   - Don't assume bugs first - check the configuration
   - Verify the data location - where is memory stored vs. where is the service looking?
   - Check service configs separately - Claude Code ≠ Dashboard ≠ CLI
   - Test assumptions - count memories in each backend independently
   - Follow the data - trace a memory from storage to retrieval
## Remote Deployment Issues

### New Version Not Picked Up After Deployment

**Symptoms:**
- Git checkout of the new tag succeeded
- Health endpoint still shows the old version
- systemd restart doesn't pick up the changes

**Solution:**

1. **Verify Git State:**

   ```bash
   ssh user@server
   cd /path/to/repo
   git log --oneline -3
   git describe --tags
   ```

2. **Force a Clean Restart:**

   ```bash
   # Stop the service completely
   sudo systemctl stop mcp-memory.service

   # Kill any remaining processes
   pkill -f "run_server"

   # Clear the Python cache
   find . -name "*.pyc" -delete
   find . -name "__pycache__" -type d -exec rm -rf {} +

   # Start fresh
   sudo systemctl start mcp-memory.service
   ```

3. **Verify the Version Update:**

   ```bash
   curl -k https://server:8443/api/health
   curl -k https://server:8443/openapi.json | grep version
   ```
### Service Does Not Auto-Start After Reboot

**Symptoms:**
- Manually started processes won't auto-start
- Uncertainty about whether the service auto-starts

**Prevention:**

1. **Ensure the systemd Service Is Enabled:**

   ```bash
   sudo systemctl enable mcp-memory.service
   sudo systemctl is-enabled mcp-memory.service
   # Should return "enabled"
   ```

2. **Test the Service Configuration:**

   ```bash
   sudo systemctl cat mcp-memory.service
   # Verify paths and environment
   ```

3. **Simulation Test:**

   ```bash
   # Simulate a restart scenario
   sudo systemctl stop mcp-memory.service
   pkill -f "run_server"   # Kill manual processes
   sudo systemctl start mcp-memory.service

   # Verify it starts properly
   ```
## Prevention Best Practices

**Version Management:**
- **Single Source of Truth:** maintain the version only in `pyproject.toml` and `__init__.py`
- **Dynamic Imports:** use `from . import __version__` everywhere else
- **Semantic Versioning:** follow semver for version bumps
- **Tag Releases:** always tag with `git tag -a vX.Y.Z`

**Database Maintenance:**
- **Regular Backups:** automated backups before major changes
- **Path Consistency:** the same database path across all configurations
- **Health Monitoring:** regular integrity checks
- **Cleanup Schedule:** monthly cleanup of old files

**Deployment:**
- **Single Service Instance:** avoid multiple processes on the same port
- **Environment Consistency:** the same environment variables across tools
- **Gradual Updates:** test locally before remote deployment
- **Rollback Plan:** keep previous version tags for quick rollback

**Monitoring:**
- **Health Endpoints:** regular checks of `/api/health`
- **Log Monitoring:** monitor systemd service logs
- **Disk Space:** monitor database and cache growth
- **Performance:** track memory counts and query times
## Quick Reference Commands

**Health Check:**

```bash
# Database integrity
sqlite3 "/path/to/db" "PRAGMA integrity_check;"

# Memory count
sqlite3 "/path/to/db" "SELECT COUNT(*) FROM memories;"

# Service status
systemctl status mcp-memory.service

# API health
curl -k https://server:8443/api/health
```
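To run these health checks unattended, a cron entry along these lines can log a daily snapshot; the URL, database path, and log file are placeholders:

```bash
# Example crontab entry (crontab -e): daily health snapshot at 06:00.
0 6 * * * curl -sk https://server:8443/api/health >> /var/log/mcp-memory-health.log 2>&1 && sqlite3 "/path/to/db" "PRAGMA quick_check;" >> /var/log/mcp-memory-health.log 2>&1
```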
**Emergency Recovery:**

```bash
# Stop all services
sudo systemctl stop mcp-memory.service
pkill -f "run_server"

# Restore from backup
cp backup.db sqlite_vec.db

# Restart clean
sudo systemctl start mcp-memory.service
```
**Version Check:**

```bash
# Git version
git describe --tags

# API version
curl -k https://server:8443/api/health | jq .version

# Package version
python -c "from src.mcp_memory_service import __version__; print(__version__)"
```
## Security Issues

### Exposed API Tokens in the Git Repository

**Symptoms:**
- API tokens accidentally committed to the git repository
- Tokens need to be revoked and rotated for security
- Tokens cannot be found in the Cloudflare dashboard (OAuth account)

**Root Cause:** Personal configuration files (`.claude/settings.local.json`) containing API tokens were tracked in git.

**Solution:**

1. **Find Account-Level API Tokens:**
   - For OAuth/Gmail accounts, tokens live at the account level, not the user profile: `https://dash.cloudflare.com/{account-id}/api-tokens`
   - NOT at: `https://dash.cloudflare.com/profile/api-tokens`

2. **Revoke Exposed Tokens:**
   - Locate the exposed token in the account's API tokens section
   - Click "Delete" to revoke it immediately
   - Verify the token is no longer active

3. **Create a New Token:**

   ```
   Token name: MCP-Memory-Service-Secure
   Permissions:
     - Account:D1:Edit
     - Account:Vectorize:Edit
   Account Resources: Include - Your Account
   IP Address Filtering: Your IP (recommended)
   ```

4. **Clean Git History** (a `git filter-repo` alternative is sketched after this list):

   ```bash
   # Remove sensitive files from git tracking
   git rm --cached .claude/settings.local.json*

   # Remove from the entire git history
   git filter-branch --force --index-filter \
     'git rm --cached --ignore-unmatch .claude/settings.local.json*' \
     --prune-empty --tag-name-filter cat -- --all

   # Force push the cleaned history
   git push --force origin main
   ```

5. **Prevent Future Exposure:**

   ```bash
   # Add to .gitignore
   echo ".claude/settings.local.json*" >> .gitignore
   echo "scripts/.claude/settings.local.json*" >> .gitignore
   git add .gitignore && git commit -m "protect sensitive configs"
   ```
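If `git-filter-repo` is installed, it is generally a faster and safer alternative to `filter-branch` for the history rewrite in step 4. A sketch using the same paths; history rewriting still ends with a force push, and `filter-repo` may drop the `origin` remote, which then needs to be re-added:

```bash
# Alternative history cleanup with git-filter-repo (installed separately).
git filter-repo --invert-paths \
  --path-glob '.claude/settings.local.json*' \
  --path-glob 'scripts/.claude/settings.local.json*'

# Re-add origin if it was removed, then force push the rewritten history.
git push --force origin main
```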
**Security Notes:**
- IP address restrictions on tokens significantly reduce risk
- Account-level tokens are the correct choice for service applications
- Always complete the git history cleanup, even if the risk seems minimal
This troubleshooting guide should help the community avoid and resolve common issues encountered during MCP Memory Service deployment and maintenance.