The Fraud Investigator Assistant demonstrates how customers can transform their fraud investigation process using Amazon Bedrock, MCP (Model Context Protocol) servers, and Strands expert agents. Fraud investigations can be arduous and time-consuming: each step can involve compliance and security policies that require specialized skills in different domains. Building an AI expert network accelerates this work and lets customers run investigations faster. This approach shortens investigation timelines by combining a knowledge base of policy information with custom MCP servers for private data and open-source MCP servers for gathering public information.
Assistant capabilities:
- Orchestrator Agent - Decides which expert(s) should be used when processing a request and which policy needs to be enforced.
- Transactional Expert - Analyzes merchant transactions & authorization data.
- Merchant Indicators Expert - Retrieves and analyzes statistical aggregated data on merchants.
- Online Search Expert - Performs merchant verification and reviews online presence.
- Policies Knowledge Base - Supports scaling investigations and accurately executing company policies for particular use cases.
- Intelligent Insights - Provided through conversational AI.
- Comprehensive Toolset - For integrating external data sources via MCP protocol.
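The orchestration pattern above can be sketched as a simple router: the orchestrator inspects a request and dispatches it to one or more domain experts. This is a minimal, hypothetical illustration in plain Python — the expert names mirror the list above, but the keyword rules and function name are assumptions, not the project's actual Bedrock/Strands implementation:

```python
# Hypothetical routing rules; the real orchestrator is an LLM-driven agent.
def route(query: str) -> list[str]:
    """Return the expert agents that should handle a query."""
    rules = {
        "transactional_expert": ("transaction", "authorization", "settlement"),
        "merchant_indicators_expert": ("stats", "aggregate", "dispute"),
        "online_search_expert": ("website", "online", "verify"),
    }
    q = query.lower()
    experts = [name for name, keywords in rules.items()
               if any(k in q for k in keywords)]
    # Fall back to the full expert network when no rule matches.
    return experts or list(rules)

print(route("Show the last authorization transactions for merchant 42"))
# -> ['transactional_expert']
```

In the actual solution this decision is made by the orchestrator agent itself, with the policy knowledge base deciding which company policy applies.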
- Architecture & Data Flow
- Screenshots
- Tool Versions
- Prerequisites
- Installation
- User Interface
- Database Schema
- Testing
- Destroy
- Clean Up Cache
- License
- Package Dependencies
- Amazon Bedrock Agent with multiple action groups
- Strands Agents for domain expert agents: merchant portfolio agent, merchant aggregated data agent, merchant transaction agent, online assistant agent
- MCP Servers Integration for internal data access and external web search
- PostgreSQL Database with merchant and transaction data
- Streamlit UI for interactive testing
- Terraform Infrastructure for AWS deployment
- Comprehensive Testing with functional unit tests
app/                 # Application-level logic and data
build-script/        # Build scripts
data/                # Mocked example data
iac/                 # All Terraform and application-layer code
  bootstrap/         # Bootstrap infrastructure for deployment
  roots/             # Main Terraform and application code
    app/
  templates/         # Terraform components and modules
images/              # Architecture and other images for the README
test/                # Test scripts that check functionality
ui/                  # Streamlit UI for interacting with the agent
init.sh              # Initializes environment variables
LICENSE              # License covering this project's artifacts
Makefile             # Targets to deploy, destroy, and interact with the IaC
README.md            # This document: repository details and instructions
set-env-vars.sh      # Exports environment variables set during init.sh
Fraud expert ingests policies into the knowledge base
Fraud AI assistant processes a user query
- Frontend Layer
  - Streamlit UI for agent interaction
- Agent Layer
  - Amazon Bedrock Agent with multiple action groups:
    - Merchant Action Group
    - Transaction Action Group
    - Online/Internet Action Group
- MCP Server Layer
  - Merchant MCP: Database queries for merchant data
  - Transaction MCP: Transaction analysis and filtering
  - Fetch Search MCP: Fetches content from a URL; can be used for website verification
  - Brave MCP: Alternative search capabilities
- Data Layer
  - REST API endpoints: Data access through an API
  - PostgreSQL (Aurora): Primary database with merchant/transaction data
  - OpenSearch: Knowledge base for policies and procedures
  - S3: Document storage
- Infrastructure
  - VPC: Secure network isolation
  - Lambda Functions: Serverless compute
  - API Gateway: REST API management
  - Secrets Manager: Secure credential storage
- Fetch merchant metadata information
- Get the last authorization transactions for a merchant
- Get merchant stats data (defaults to daily; month/year also supported)
- Fetch a merchant's total dispute volume for the last year
- Fetch real-time online website content
- Perform an online search according to knowledge base policy
To build and deploy this system, the following tools are required:
- Python: Version 3.12 or higher
- AWS CLI: Version 2.0 or higher, configured with appropriate credentials
- Terraform: Version >5.0
- Make: GNU Make 4.0 or higher
- Bash: Version 4.0 or higher
- Streamlit: For UI testing
- Strands Agents: Version >=0.1.0
- Podman: Version 5.5.2 or higher
The project requires the following packages:
- pandas==2.2
- GCC >= 8.4 (system dependency)
Use the secret access key of a user or export the temporary credentials before continuing.
For all Bedrock models you plan to use, enable model access (accepting the EULA where required) in the region of deployment:
- Claude 3.5 Sonnet
- Claude 3 Haiku
- Titan Text models
The system uses PostgreSQL with the following schema:
- merchant_details: Merchant master data
- authorizations: Transaction authorization records
- settlements: Settlement transaction data
- merchant_stats: Aggregated merchant statistics
Make the scripts that create the Lambda and layer packages executable:
chmod +x ./build-script/build-lambdas.sh
chmod +x ./build-script/build-layers.sh
Then build them by running, respectively:
make build-lambdas
make build-layers
Verify that a zip archive is created under /app/layers/***/layer.zip
Verify that a zip archive is created under /app/lambdas/packages/***.zip
After the application environment is configured and you have built the layer and Lambda zips, deploy the application with those configurations by executing the Makefile targets in the order listed by the deploy-all target.
make deploy-all
Terraform has some known issues with Amazon Bedrock. Here are common issues and how to resolve them.
| Issue | Solution |
|---|---|
| Could not perform Create operation, since the XXXXX (id: xxxxx) with the same name XXXXX already exists. | Manually delete the action group. |
| Agent is in preparing state and cannot be prepared. | Two resources updated and tried to prepare at the same time; this can be ignored, or wait a minute and try again. |
An S3 bucket contains a DDL file and a DML file that create the database tables and populate them with data, respectively. To run these files, the solution provides a Lambda function named xxx-xxx-deploy-db. Search for this function in the AWS Lambda console and run a test event with any content to trigger it.
Navigate to the Amazon Bedrock console. Click Knowledge Bases in the left sidebar, click the created knowledge base, select the data source, and click Sync. This ingests the data into the knowledge base.
To avoid running each build and deploy command individually, you can use the single command below to do everything.
make everything
Review detailed schema at: /data/schema/ddl.sql
- merchant_details: Core merchant information
  - Merchant numbers, business details, contact info
  - Address, phone, email information
  - Account status and limits
- authorizations: Transaction authorization data
  - Account numbers, amounts, currencies
  - Transaction types, payment methods
  - Approval status and decline reasons
- settlements: Settlement transaction records
  - Processed amounts, transaction IDs
  - Card information and countries
  - Transaction modes and statuses
- merchant_stats: Aggregated statistics
  - Sales volumes and counts
  - Refund and dispute metrics
  - Entry method distributions
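To experiment locally with the kinds of queries the experts run, the four tables can be mocked in memory. The sketch below uses SQLite instead of the project's PostgreSQL, and the column names are simplified assumptions — the real DDL lives in /data/schema/ddl.sql:

```python
import sqlite3

# Illustrative, simplified versions of the four tables (real schema: /data/schema/ddl.sql).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE merchant_details (merchant_number INTEGER PRIMARY KEY,
                               business_name TEXT, email TEXT, status TEXT);
CREATE TABLE authorizations (id INTEGER PRIMARY KEY, merchant_number INTEGER,
                             amount REAL, currency TEXT, approved INTEGER);
CREATE TABLE settlements (transaction_id TEXT PRIMARY KEY, merchant_number INTEGER,
                          processed_amount REAL, card_country TEXT);
CREATE TABLE merchant_stats (merchant_number INTEGER, stat_date TEXT,
                             sales_volume REAL, dispute_count INTEGER);
""")
conn.execute("INSERT INTO merchant_details VALUES (42, 'Acme Ltd', 'ops@acme.test', 'active')")
conn.executemany("INSERT INTO authorizations VALUES (?, 42, ?, 'USD', ?)",
                 [(1, 100.0, 1), (2, 250.0, 0)])

# Example expert-style query: approval rate per merchant.
row = conn.execute("""
    SELECT m.business_name, AVG(a.approved) AS approval_rate
    FROM merchant_details m JOIN authorizations a USING (merchant_number)
    GROUP BY m.merchant_number
""").fetchone()
print(row)  # ('Acme Ltd', 0.5)
```

This is only a scratchpad for query logic; the deployed system serves the same data through the Lambda-backed REST API and the MCP servers.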
Make the MCP client test script executable:
chmod +x ./test/fut/mcp-client-tests.sh
Then run the MCP client and agent tests:
make test-mcp-client
make test-agent
The system includes comprehensive test cases for:
- Transaction detail queries
- Merchant searches by various criteria
- Data filtering and aggregation
- Web search functionality
Before testing the knowledge-base policy scenario, make sure to upload a policy to the S3 bucket (see the example in /data/knowledge-base/). After uploading the policies, you must sync the knowledge base data source again so the agent sees the change.
The system includes multiple MCP servers:
- merchant_mcp: Handles merchant data queries
- transaction_mcp: Processes transaction analysis
- websearch_mcp: Performs web searches
- brave_mcp: Alternative search provider
- fetch_mcp: HTTP request capabilities
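Each of these servers exposes domain tools that the agent can call. As a stdlib-only sketch, here is the kind of tool a merchant_mcp server might provide — in the real project this would be a FastMCP-registered tool backed by the Aurora database; the function name, signature, and data below are illustrative assumptions:

```python
# Hypothetical merchant-stats tool, backed here by an in-memory dict instead
# of the Aurora database. In the real server this function would be
# registered with the MCP framework so the agent can invoke it.
STATS = {
    (42, "day"): {"sales_volume": 1200.0, "dispute_count": 1},
    (42, "month"): {"sales_volume": 35200.0, "dispute_count": 9},
}

def get_merchant_stats(merchant_number: int, period: str = "day") -> dict:
    """Return aggregated stats for a merchant; defaults to daily figures."""
    if period not in ("day", "month", "year"):
        raise ValueError(f"unsupported period: {period}")
    return STATS.get((merchant_number, period), {})

print(get_merchant_stats(42))           # {'sales_volume': 1200.0, 'dispute_count': 1}
print(get_merchant_stats(42, "month"))  # {'sales_volume': 35200.0, 'dispute_count': 9}
```

Keeping each tool a plain function like this makes it easy to unit test independently of the MCP transport, which is what the scripts under /test exercise.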
To work with the Streamlit UI, you need a .env file with the agent ID and alias ID. To get the IDs from the Terraform state and automatically create a .env file, run:
make prep-ui-env
If you switch cloud environments, you need to run this again; otherwise the UI will try to contact the agent from the previously used environment.
First, make a copy of .env.TEMPLATE and rename it to .env. Then add the agent ID and alias ID; an example is below.
# The ID of the agent.
BEDROCK_AGENT_ID=XXXXXXXXX
# The ID of the agent alias. The default `TSTALIASID` will be used if it is not set.
BEDROCK_AGENT_ALIAS_ID=XXXXXXXXXX
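If you need these values outside of Streamlit, a minimal parser of this file is enough. This is only a sketch (the project's UI may use python-dotenv or similar, and the helper names here are mine); it mirrors the README's default of falling back to TSTALIASID when the alias is unset:

```python
def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

def agent_ids(env: dict) -> tuple[str, str]:
    """Return (agent_id, alias_id), defaulting the alias to TSTALIASID."""
    return env["BEDROCK_AGENT_ID"], env.get("BEDROCK_AGENT_ALIAS_ID", "TSTALIASID")
```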
Now you are ready to run the UI. Run the command below; the UI will be accessible at http://localhost:8080/.
make run-ui
To destroy the resources in your environment, use the destroy-all command.
make destroy-all
To clean up the Terraform cache on your local machine, run the following make command.
make clean-tf-cache
- Include ELB access logging for application load balancers in front of the MCP servers on ECS. Documentation
- Be mindful of logging sensitive data. The current solution has thorough logging for testing and debugging purposes and because all data is non-sensitive. As you incorporate real data and promote the solution to different environments, be sure to change the logging. Documentation
- Add certificates to ALB and make sure to use HTTPS traffic. Documentation
- Encrypt CloudWatch log data. Documentation
- Restrict network access of the OpenSearch collection to the Bedrock service and to the VPC endpoint where your CI/CD pipeline runs. Documentation
- Update VPC to use proxy for outbound traffic instead of NAT gateway.
- Further restrict IAM permissions. For example, the Bedrock Agent role currently allows all models and inference profiles to support testing and experimentation, but you will want to limit it to the models you use in your region.
- The current agentic solution does not implement guardrails. Documentation
This library is licensed under the MIT-0 License.
This solution uses several third-party packages/libraries, all of which are open source under the MIT or Apache licenses:
- Fetch MCP - https://pypi.org/project/mcp-server-fetch/
- Brave Search - https://www.npmjs.com/package/@modelcontextprotocol/server-brave-search
- MCP Proxy - https://github.com/sparfenyuk/mcp-proxy
- Fast MCP - https://github.com/jlowin/fastmcp