AWS DMS Migration Guide
Migrate from Supabase to AWS Aurora PostgreSQL using AWS Console (GUI)
Overview & What You'll Learn
What This Guide Covers
This guide will teach you how to migrate data from Supabase to Aurora using only the AWS Console (web browser interface). You'll learn:
- How to navigate AWS Console
- How to set up AWS DMS using GUI
- How to configure all components visually
- How to monitor migration through dashboards
- How to verify data using Query Editor
- No terminal commands required!
What is AWS DMS?
AWS Database Migration Service (DMS) is a managed cloud service that automates database migrations through a web interface. You can:
- Set up everything in your browser
- Monitor progress visually
- Do it all without coding or command-line experience
Migration Strategy
We'll use Full Load + CDC (Change Data Capture):
1. Copy all existing data
2. Capture ongoing changes
3. Move the app to Aurora
4. Stop replication
Benefits:
- Minimal downtime (< 5 minutes during switch)
- Safe to test before committing
- Easy rollback if needed
AWS Console Preparation
Access AWS Console
- Open your web browser
- Go to: https://console.aws.amazon.com
- Sign in with your AWS account credentials
- Enter your email
- Enter your password
- Complete MFA if enabled
Select Your Region
- Look at top-right corner of AWS Console
- Click on the region dropdown (shows current region like "N. Virginia")
- Select the region where your Aurora database is located
- Example: "US East (N. Virginia)" or "us-east-1"
- Remember this region - you'll use it for all DMS resources
Verify Permissions
You need access to these AWS services:
- DMS (Database Migration Service)
- RDS (for Aurora)
- VPC (networking)
- IAM (permissions)
- CloudWatch (monitoring)
- Use the search bar at top of console
- Type: "DMS" and press Enter
- If you can open DMS console → You have access ✅
- If you see "Access Denied" → Contact your AWS admin
Pre-Migration Setup (GUI)
Phase 1: Gather Your Information
You'll need this information. Write it down or save in a secure note:
Supabase Database Details
Open a text file and fill in:
SUPABASE_HOST: db.yourproject.supabase.co
SUPABASE_PORT: 5432
SUPABASE_DATABASE: postgres
SUPABASE_USER: postgres
SUPABASE_PASSWORD: [your-password]
Tables to migrate:
- user_dagad_entries
- user_dagad_folders
- user_dagad_files
- user_dagad_embeddings
- user_dagad_usage_log
- user_dagad_addon_imports
- Log in to app.supabase.com
- Select your project
- Click "Settings" (gear icon in left sidebar)
- Click "Database"
- Scroll down to "Connection string"
- Copy the host, port, database, user from the connection string
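Before moving on, it's worth confirming the details you copied actually work. Connect from a desktop client such as pgAdmin or TablePlus using the values you just recorded, then run this quick sanity check (nothing here is specific to this migration):

```sql
-- Confirm you're connected to the right database as the right user
SELECT current_database(), current_user, version();
```

If this runs and returns the expected database and user, the host, port, and credentials you saved are good.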
Aurora Database Details
Method 1: Find via AWS Console
- In AWS Console search bar, type "RDS"
- Click "RDS" to open RDS console
- In left sidebar, click "Databases"
- Find your Aurora cluster (shows "aurora-postgresql")
- Click on the cluster name
- Scroll to "Connectivity & security" section
- Copy these values:
- Endpoint: your-cluster.cluster-xxxxx.us-east-1.rds.amazonaws.com
- Port: 5432
- VPC ID: vpc-xxxxx (you'll need this!)
- Security group: sg-xxxxx (you'll need this!)
AURORA_ENDPOINT: [your-cluster-endpoint]
AURORA_PORT: 5432
AURORA_DATABASE: helium_production
AURORA_USER: admin
AURORA_PASSWORD: [your-password]
AURORA_VPC_ID: vpc-xxxxx
AURORA_SECURITY_GROUP: sg-xxxxx
Phase 2: Check Database Size
We need to know how much data you're migrating.
Option 1: Using Supabase Dashboard
- Go to app.supabase.com
- Select your project
- Click "Database" in left sidebar
- Look for database size indicator (usually shown in dashboard)
Option 2: Using pgAdmin (Visual Tool)
- Open pgAdmin
- Right-click "Servers" → "Register" → "Server"
- Fill in Supabase details
- Click "Save"
- Expand server → Database → Schemas → Tables
- Right-click on a table → "Properties" to see size
Option 3: Using TablePlus (Visual Tool)
- Open TablePlus
- Click "Create a new connection"
- Select PostgreSQL
- Fill in connection details
- Click "Connect"
- View database statistics in UI
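If your client has a SQL console, the sizes can also be read with a query instead of clicking through table properties (a sketch; the LIKE pattern assumes the user_dagad_ tables listed earlier):

```sql
-- Total database size, human-readable
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;

-- Size of each table being migrated (includes indexes and TOAST data)
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname LIKE 'user_dagad_%'
ORDER BY pg_total_relation_size(relid) DESC;
```

Add roughly 20% to the total when sizing the replication instance storage, as suggested later in this guide.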
Phase 3: Create IAM Roles
AWS DMS needs permissions to work. Let's create the necessary roles.
Create DMS VPC Role
- In AWS Console search bar, type "IAM"
- Click "IAM" to open IAM console
- In left sidebar, click "Roles"
- Click orange "Create role" button
Step 1: Select trusted entity
- Select: "AWS service"
- Use case: Scroll down and select "DMS"
- Click: "Next"
Step 2: Add permissions
- Search for: "AmazonDMSVPCManagementRole"
- Check the box next to it
- Search for: "AmazonDMSCloudWatchLogsRole"
- Check the box next to it
- Click: "Next"
Step 3: Name, review, and create
- Role name: dms-vpc-role
- Description: "Role for DMS to manage VPC resources"
- Scroll down, click "Create role"
Step-by-Step: AWS DMS Setup
Create Replication Instance
The replication instance is the "middleman" that moves your data.
1.1: Navigate to DMS Console
- In AWS Console search bar, type "DMS"
- Click "Database Migration Service"
- You'll see the DMS dashboard
1.2: Create Replication Instance
- In left sidebar, click "Replication instances"
- Click orange "Create replication instance" button
You'll see a form. Fill it in as follows:
Configuration Settings
- Name: supabase-to-aurora-replication
- Description: Replication instance for migrating Supabase AIM data to Aurora
- Instance class:
  - For < 10GB data: dms.t3.micro
  - For 10-100GB data: dms.t3.medium ⭐ Recommended
  - For 100-500GB data: dms.c5.large
- Engine version: Leave default (latest version)
- High Availability: Select "Single-AZ" (cheaper for testing)
- Allocated storage (GB): 100 (or your database size + 20% buffer)
- Virtual Private Cloud (VPC): Select the same VPC as your Aurora database
- Replication subnet group: Select existing or create new
- Publicly accessible:
- Select "Yes" (if Supabase is external to AWS)
- Select "No" (if using VPN/VPC peering)
- VPC security group(s): Select your Aurora security group
- KMS key: Select "(default) aws/dms"
1.3: Create the Instance
- Scroll to bottom
- Click orange "Create replication instance" button
- Wait - You'll see a banner: "Creating replication instance..."
- Status: Will show "Creating" with a spinner
- Wait time: 5-10 minutes
- When done: Status changes to "Available" with green checkmark ✅
1.4: Note the IP Address
- Click on your replication instance name
- Look for "Public IP address" or "Private IP address"
- Copy the IP address
- Save it - you'll need it for Supabase firewall (if applicable)
Configure Security Groups
We need to allow the replication instance to connect to Aurora.
2.1: Navigate to EC2 Security Groups
- In AWS Console search bar, type "EC2"
- Click "EC2" to open EC2 console
- In left sidebar, scroll down to "Network & Security"
- Click "Security Groups"
2.2: Find Aurora's Security Group
- In the search box, enter your Aurora security group ID
- Click on the security group name to select it
2.3: Add Inbound Rule for DMS
- Look at bottom tabs
- Click "Inbound rules" tab
- Click "Edit inbound rules" button (top right)
- Click "Add rule" button
Configure the new rule:
- Type: Select "PostgreSQL" (auto-fills port 5432)
- Protocol: TCP (auto-selected)
- Port range: 5432 (auto-filled)
- Source: Enter the security group of your DMS replication instance
- Description:
DMS Replication Instance Access
- Click "Save rules" button (orange, bottom right)
- You'll see: "Successfully modified security group rules"
Create Source Endpoint (Supabase)
This tells DMS how to connect to Supabase.
3.1: Navigate to Endpoints
- In DMS Console, look at left sidebar
- Click "Endpoints"
- Click orange "Create endpoint" button
3.2: Configure Endpoint
Endpoint type:
- Select: "Source endpoint"
Endpoint configuration:
- Endpoint identifier: supabase-source-endpoint
- Source engine: Select "PostgreSQL"
- Access to endpoint database: Select "Provide access information manually"
Endpoint settings:
- Server name: db.yourproject.supabase.co
- Port: 5432
- Database name: postgres
- SSL mode: Select "require" ⭐ Important for Supabase!
- User name: postgres
- Password: [your-supabase-password]
3.3: Add Endpoint Settings (For CDC)
Scroll down to find "Endpoint settings":
- Click "Add new setting" button
- Enter this JSON:
{
"PluginName": "pglogical",
"HeartbeatEnable": true,
"HeartbeatFrequency": 1
}
- Enables Change Data Capture (CDC)
- Keeps connection alive
- Monitors replication health
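CDC only works if the source database allows logical decoding. Before relying on these settings, check the source with a couple of queries in the Supabase SQL Editor (a sketch; whether pglogical can be installed depends on your Supabase plan — if it isn't available, DMS also supports the built-in test_decoding plugin as the PluginName value):

```sql
-- wal_level must be 'logical' for CDC to work
SHOW wal_level;

-- Is the pglogical extension available on this instance?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'pglogical';
```

If wal_level is not 'logical', or the plugin you configured isn't available, the task's CDC phase will fail even though full load succeeds.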
3.4: Create Endpoint
- Scroll to bottom
- Click "Run test" button (you'll be asked to choose a replication instance to run the test from)
- Wait 30-60 seconds
- Look for: "Connection tested successfully" ✅
- Click orange "Create endpoint" button
Create Target Endpoint (Aurora)
This tells DMS how to connect to Aurora.
4.1: Create Endpoint
- Still in Endpoints section, click "Create endpoint" button again
4.2: Configure Endpoint
Endpoint type:
- Select: "Target endpoint"
Endpoint configuration:
- Endpoint identifier: aurora-target-endpoint
- Target engine: Select "PostgreSQL"
- Access to endpoint database: Select "Provide access information manually"
Endpoint settings:
- Server name: [your-aurora-endpoint]
- Port: 5432
- Database name: helium_production
- SSL mode: Select "none" (if Aurora is in same VPC)
- User name: admin
- Password: [your-aurora-password]
4.3: Add Performance Settings (Optional but Recommended)
Scroll down to "Endpoint settings":
{
"BatchApplyEnabled": true,
"ParallelApplyThreads": 4,
"ParallelApplyBufferSize": 100
}
- Speeds up data loading
- Applies changes in parallel
- Improves performance
4.4: Create Endpoint
- Click "Run test" to verify connection
- Should show: "Connection tested successfully" ✅
- Click "Create endpoint" button
- Verify: Status shows "Active"
You should now have:
- ✅ Source endpoint (Supabase)
- ✅ Target endpoint (Aurora)
- ✅ Replication instance
Next: Create the migration task!
Create Database Migration Task
This is the actual job that migrates your data.
5.1: Navigate to Tasks
- In DMS Console left sidebar, click "Database migration tasks"
- Click orange "Create task" button
5.2: Configure Task Settings
Task configuration:
- Task identifier: supabase-to-aurora-aim-migration
- Replication instance: Select supabase-to-aurora-replication
- Source database endpoint: Select supabase-source-endpoint
- Target database endpoint: Select aurora-target-endpoint
Task settings:
- Migration type: Select "Migrate existing data and replicate ongoing changes" ⭐ Recommended!
- Start task on create: Leave checked (task starts automatically)
- Target table preparation mode: Select "Do nothing"
- Stop task after full load completes: Select "Don't stop"
- Include LOB columns in replication: Select "Full LOB mode"
- Enable validation: Check this box ✅ Very important!
- Enable CloudWatch logs: Check this box ✅ For monitoring!
5.3: Configure Table Mappings
This tells DMS which tables to migrate.
Using Wizard Method:
- Click "Add new selection rule"
- Schema: Select or enter public
- Table name: Select "Enter a table name pattern"
- Enter: user_dagad_% (% is a wildcard)
- Action: Select "Include"
This includes all tables starting with "user_dagad_"
Using JSON Editor Method:
- Click "JSON editor" tab
- Replace the content with:
{
"rules": [
{
"rule-type": "selection",
"rule-id": "1",
"rule-name": "include-aim-tables",
"object-locator": {
"schema-name": "public",
"table-name": "user_dagad_%"
},
"rule-action": "include"
}
]
}
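To preview which tables the user_dagad_% pattern will actually pick up, you can run the equivalent LIKE match against Supabase first:

```sql
-- Tables the selection rule will include
-- (note: in LIKE, _ also matches any single character; close enough here)
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name LIKE 'user_dagad_%'
ORDER BY table_name;
```

The result should list exactly the six tables from the migration list; anything extra means the pattern is broader than you intended.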
5.4: Review and Create Task
- Scroll to bottom
- Review all settings
- Click orange "Create task" button
- You'll see: "Creating database migration task..."
Migration Execution
What Happens Now
Your task is running! Here's what's happening:
What happens, in order:
- Preparation (30 seconds - 2 minutes): DMS prepares connections
- Full load (1-12 hours, depending on data size): Copying all existing data from Supabase to Aurora
- CDC (ongoing): Capturing and applying ongoing changes
Monitoring Migration (GUI)
View Task Status
Main Task Dashboard
- In DMS Console, click "Database migration tasks"
- Find your task: supabase-to-aurora-aim-migration
- Look at the Status column:
- "Starting": Task is initializing
- "Running": Full load in progress
- "Load complete": Full load done, CDC ongoing ✅
- "Failed": Something went wrong (see logs)
Detailed Task View
- Click on your task name
- You'll see detailed information:
- Status: Current state
- % complete: Progress percentage
- Tables loaded: Number of tables completed
- Rows loaded: Total rows copied
Monitor Table Statistics
This shows per-table progress.
View Table Statistics
- In task details, look for tabs at top
- Click "Table statistics" tab
- You'll see a table with these columns:
- Table name: Which table
- Full load: Rows copied during initial load
- Inserts: New rows added (CDC)
- Updates: Rows modified (CDC)
- Deletes: Rows removed (CDC)
- Validation: Data validation status
What to look for:
- ✅ "Full load" numbers increasing steadily
- ✅ "Validation" status: "Validated" or "Pending"
- ❌ Any errors in the status column
View CloudWatch Logs
Logs show detailed information about what's happening.
Access Logs from DMS Console
- In task details, click "Monitoring" tab
- Scroll down to "Logs"
- Click "View CloudWatch Logs"
Access Logs from CloudWatch
- In AWS Console search bar, type "CloudWatch"
- Click "CloudWatch"
- In left sidebar, expand "Logs"
- Click "Log groups"
- Find and click: /aws/dms/tasks/[your-task-id]
- Click on a log stream (usually the most recent)
Good log messages (normal):
- "Table loaded successfully"
- "CDC load has started"
- "Change processing has started"
- "Task is running"
Warning messages (may be okay):
- "Retrying after connection timeout" (temporary network issue)
- "Large transaction in progress" (just info)
Error messages (need attention):
- "Failed to connect to source endpoint"
- "Table does not exist"
- "Permission denied"
- "Validation failed"
Monitor CloudWatch Metrics
Metrics show performance graphs.
View Metrics Dashboard
- In DMS task details, click "Monitoring" tab
- You'll see graphs for:
- CPU utilization: Should stay under 80%
- Free memory: Should not drop too low
- Network receive throughput: Data coming from Supabase
- Network transmit throughput: Data going to Aurora
Key Metrics to Watch:
CDCLatency:
- < 5 seconds: Excellent ✅
- 5-30 seconds: Good 👍
- 30-60 seconds: Monitor closely ⚠️
- > 60 seconds: May have issues ❌
Network throughput:
- Shows how fast data is copying
- Should be steady (not dropping to 0)
ValidationFailedRecords:
- Must be 0 ✅
- > 0: Data integrity issue ❌
Data Verification (GUI)
Once full load completes, verify your data before cutover.
Method 1: Using AWS Query Editor
AWS provides a built-in SQL editor for RDS databases.
Access Query Editor
- In AWS Console, go to RDS
- In left sidebar, look for "Query Editor"
- Click "Query Editor"
Connect to Aurora
- Select "Aurora" tab
- Choose your Aurora cluster from dropdown
- Database name: helium_production
- Database username: admin
- Password: [your-aurora-password]
- Click "Connect to database"
Run Verification Queries
Query 1: Compare row counts
-- In Aurora
SELECT 'user_dagad_entries' as table_name, COUNT(*) as row_count
FROM user_dagad_entries
UNION ALL
SELECT 'user_dagad_folders', COUNT(*) FROM user_dagad_folders
UNION ALL
SELECT 'user_dagad_files', COUNT(*) FROM user_dagad_files
UNION ALL
SELECT 'user_dagad_embeddings', COUNT(*) FROM user_dagad_embeddings
UNION ALL
SELECT 'user_dagad_usage_log', COUNT(*) FROM user_dagad_usage_log
UNION ALL
SELECT 'user_dagad_addon_imports', COUNT(*) FROM user_dagad_addon_imports;
Compare with Supabase: Row counts should match! ✅
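Rather than hand-maintaining that UNION list, the comparison query can be generated from the catalog — useful if tables are added later. A sketch: run this, then execute the SQL it prints.

```sql
-- Generates one COUNT(*) line per matching table, joined with UNION ALL
SELECT string_agg(
         format('SELECT %L AS table_name, COUNT(*) AS row_count FROM %I',
                table_name, table_name),
         E'\nUNION ALL\n') AS generated_sql
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name LIKE 'user_dagad_%';
```

Running the generated statement in both databases gives directly comparable per-table counts.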
Query 2: Spot-check sample data
-- Get recent entries from Aurora
SELECT entry_id, user_id, title, created_at
FROM user_dagad_entries
ORDER BY created_at DESC
LIMIT 10;
Run in both Aurora and Supabase - results should be identical!
Query 3: Verify foreign key relationships
-- Check for orphaned entries
SELECT COUNT(*) as orphaned_entries
FROM user_dagad_entries e
LEFT JOIN user_dagad_folders f ON e.folder_id = f.folder_id
WHERE e.folder_id IS NOT NULL AND f.folder_id IS NULL;
Result should be: 0 (no orphaned records)
Query 4: Verify embeddings
-- Check if embeddings exist
SELECT
COUNT(*) as total_embeddings,
COUNT(CASE WHEN embedding IS NOT NULL THEN 1 END) as embeddings_with_data
FROM user_dagad_embeddings;
Check: Both counts should match (all embeddings have data)
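A quick way to check that timestamps survived intact is to compare the range of created_at in both databases (same column used in Query 2):

```sql
-- Query 5: Compare timestamp ranges (run in both Aurora and Supabase)
SELECT MIN(created_at) AS earliest_entry,
       MAX(created_at) AS latest_entry,
       COUNT(*)        AS total_rows
FROM user_dagad_entries;
```

All three values should match exactly between source and target.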
Verification Checklist
- All tables exist in Aurora
- Row counts match between Supabase and Aurora
- Sample data looks correct (compare 10-20 rows)
- No orphaned records (foreign keys intact)
- Embeddings have data
- Created_at timestamps preserved
- No NULL values where they shouldn't be
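One caveat worth checking: DMS copies rows, but it does not carry over sequence state, so serial/identity columns on Aurora can hand out IDs that collide with migrated rows after cutover. A hedged sketch for re-seeding (entry_id on user_dagad_entries is an illustrative guess — substitute your real key columns):

```sql
-- List sequences that may need re-seeding
SELECT schemaname, sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'public';

-- Re-seed one sequence from the current max of its column
-- (pg_get_serial_sequence returns NULL if the column has no backing sequence)
SELECT setval(
  pg_get_serial_sequence('user_dagad_entries', 'entry_id'),
  COALESCE((SELECT MAX(entry_id) FROM user_dagad_entries), 1)
);
```

Repeat the setval call for each table with a sequence-backed primary key before switching the application over.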
Cutover Process (GUI)
Once data is verified and CDC is stable, switch your application to Aurora.
Pre-Cutover Checklist
Before switching, confirm:
- Full load completed (Status: "Load complete")
- CDC running smoothly for 24+ hours
- Row counts match (verified above)
- Sample data verified
- Application tested against Aurora (staging)
- Team notified
- Rollback plan ready
Monitor CDC Status Before Cutover
Check that CDC is caught up:
Check CDCLatency Metric
- Go to CloudWatch → Metrics → DMS
- Select your task → CDCLatency
- View graph
- Verify: < 5 seconds ✅
Or:
- In DMS task details, click "Monitoring" tab
- Look at "CDC latency" graph
- Verify: Line is near zero
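You can also confirm from the Supabase side that DMS is keeping up, by inspecting the replication slot DMS created (a sketch; these are standard pg_replication_slots fields):

```sql
-- On the source: how far behind is the DMS replication slot?
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS replication_lag
FROM pg_replication_slots;
```

A lag that grows without bound mirrors a rising CDCLatency metric; both should be near zero before cutover.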
Troubleshooting (GUI)
Issue 1: Endpoint Test Failed
Symptom: When testing endpoint connection, you see "Failed"
Fix: Connection Timeout
Cause: Firewall blocking connection
- Go to EC2 → Security Groups
- Find Aurora security group
- Check inbound rules
- Verify port 5432 is open for DMS instance
- If not, add the rule
Fix: Permission Denied
Cause: Wrong credentials
- Double-check username and password
- Verify in Query Editor
- Update endpoint with correct password
- Re-test connection
Fix: SSL Error
For Supabase:
- Edit source endpoint
- Change SSL mode to "require"
- Save changes
- Re-test connection
Issue 2: Task Failed or Stopped
Symptom: Task status shows "Failed" or "Stopped unexpectedly"
Check Logs
- Click on task name
- Click "Monitoring" tab
- Click "View CloudWatch Logs"
- Look for error messages (usually in red)
Common Errors and Fixes
Error: "Table does not exist"
- Verify tables exist in Aurora
- If missing, create tables in Aurora first
- Restart task
Issue 3: Migration Too Slow
Symptom: Full load is taking hours for a small database
Fix: Upgrade Instance
- Stop task
- Go to Replication instances
- Click on your instance
- Click "Modify"
- Change instance class to larger size
- Click "Modify"
- Wait for modification to complete
- Restart task
Issue 4: CDC Lag Increasing
Symptom: CDCLatency metric keeps growing
Possible Causes
- Too many writes to Supabase
- Replication instance too small
- Aurora can't write fast enough
- Network issues
Fix: Upgrade Resources
- Upgrade replication instance
- Scale up Aurora instance if CPU/memory high
- Add more parallel threads in task settings
Issue 5: Validation Failures
Symptom: ValidationFailedRecords > 0
Find Failed Records
Run this in the Aurora Query Editor (DMS creates this table on the target when validation is enabled):
SELECT * FROM awsdms_validation_failures_v1
LIMIT 10;
Common Causes
- Data type mismatch: Float precision differences
- Encoding issues: Special characters
- NULL handling: NULL vs empty string
Post-Migration Cleanup (GUI)
After migration is successful and stable.
Phase 1: Monitor for 48 Hours
Keep everything running for 2 days:
Daily Checklist (Day 1 & 2):
- Check application error logs
- Monitor Aurora CPU/memory (RDS console)
- Verify no user complaints
- Test key functionality
- Spot-check data integrity
Phase 2: Cleanup After 1 Week
Once confident Aurora is stable:
Stop DMS Task
- Go to DMS Console → Database migration tasks
- Find your task
- Select the checkbox next to it
- Click "Actions" dropdown
- Select "Stop"
- Confirm
Stop Replication Instance
- Go to DMS Console → Replication instances
- Find your instance
- Select the checkbox
- Click "Actions" dropdown
- Select "Stop"
- Confirm
Phase 3: Delete Resources After 2 Weeks
Only if everything is stable:
Delete DMS Task
- Go to Database migration tasks
- Find your STOPPED task
- Select checkbox
- Click "Actions" → "Delete"
- Type "delete" to confirm
- Click "Delete"
Delete Replication Instance
- Go to Replication instances
- Find your STOPPED instance
- Select checkbox
- Click "Actions" → "Delete"
- Confirm deletion
Delete Endpoints (Optional)
- Go to Endpoints
- Select source endpoint
- Click "Actions" → "Delete"
- Confirm
- Repeat for target endpoint
Phase 4: Supabase Data (Optional)
Option A: Keep Supabase Data (Recommended)
Why:
- Serves as permanent backup
- Minimal cost (just storage)
- Can restore if disaster occurs
- No harm in keeping it
Do nothing - just keep paying for Supabase storage
Option B: Delete Supabase Data
Only if absolutely certain Aurora is stable. Export a backup first, store it safely, then delete the Supabase tables.
Summary & Best Practices
What You Learned
Congratulations! You now know how to:
- Set up AWS DMS using only the GUI
- Create and configure replication instance
- Set up source and target endpoints
- Create and monitor migration tasks
- Verify data using Query Editor
- Perform safe cutover with rollback option
- Monitor and troubleshoot using CloudWatch
- Clean up resources to save costs
Best Practices Recap
- Always test endpoints before creating tasks
- Enable validation on migration tasks
- Monitor CloudWatch logs during migration
- Keep CDC running 24-48 hours before cutover
- Verify data thoroughly before switching
- Have rollback plan ready
- Stop resources when not needed (save $$$)
- Keep Supabase data as backup
Key Metrics to Remember
- CDCLatency: Should be < 10 seconds (ideally < 5)
- ValidationFailedRecords: Should be 0
- CPU Utilization: Should be < 80%
- Task Status: Should be "Running" or "Load complete"
Cost Optimization
- Stop replication instance when not migrating: Saves ~$140/month
- Delete resources after migration complete: No ongoing costs
- Use t3.micro for small databases: Cheaper
- Keep task and instance stopped (not deleted) for first week: Free rollback insurance