# Database and PostgreSQL Issues
This guide covers PostgreSQL and database-related problems in Changemaker Lite V2.
## Overview
### Database Architecture
Changemaker Lite V2 uses:
- **PostgreSQL 16** - Primary database
- **Prisma ORM** - Main API (Express)
- **Drizzle ORM** - Media API (Fastify)
- **Same database** - Shared by both APIs
- **Separate schemas** - Tables owned by different ORMs
### Database Connection Info
```bash
# From API container
DATABASE_URL="postgresql://changemaker:password@v2-postgres:5432/changemaker_v2"
# From host
DATABASE_URL="postgresql://changemaker:password@localhost:5433/changemaker_v2"
# Connection details:
# User: changemaker
# Password: set in V2_POSTGRES_PASSWORD env var
# Host: v2-postgres (container) or localhost (host)
# Port: 5432 (inside Docker), 5433 (host)
# Database: changemaker_v2
```
### Essential Commands
```bash
# Connect to database
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2
# Run single query
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "SELECT NOW();"
# Run SQL file
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < script.sql
# Database logs
docker compose logs v2-postgres
# Prisma Studio (GUI)
docker compose exec api npx prisma studio
```
---
## Connection Errors
### Connection Refused
**Severity:** 🔴 Critical
#### Symptoms
API logs:
```
Error: connect ECONNREFUSED 127.0.0.1:5433
Error: Can't reach database server at `v2-postgres:5432`
```
Or direct connection:
```bash
psql: error: connection to server at "localhost" (127.0.0.1), port 5433 failed:
Connection refused
```
#### Common Causes
1. **Database not running** - Container stopped
2. **Wrong connection string** - Incorrect host/port
3. **Port not exposed** - Missing port mapping
4. **Network issue** - Container can't reach database
#### Solutions
**Solution 1: Check database status**
```bash
# Is database running?
docker compose ps v2-postgres
# Should show:
# NAME STATUS
# changemaker-lite-v2-postgres-1 Up 5 minutes
# If not running:
docker compose up -d v2-postgres
```
**Solution 2: Wait for database to be ready**
```bash
# Check logs for "ready to accept connections"
docker compose logs v2-postgres | grep "ready"
# Should show:
# database system is ready to accept connections
# If not ready, wait 10-20 seconds and check again
```
**Solution 3: Verify connection string**
```bash
# Check .env
cat .env | grep DATABASE_URL
# From API container should use container name:
DATABASE_URL="postgresql://changemaker:password@v2-postgres:5432/changemaker_v2"
# From host should use localhost:
DATABASE_URL="postgresql://changemaker:password@localhost:5433/changemaker_v2"
# Common mistakes:
# ❌ Using localhost from container
# ❌ Using v2-postgres from host
# ❌ Wrong port (5432 vs 5433)
# ❌ Wrong password
```
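These checks can be automated at startup. Below is a hedged sketch using Node's built-in `URL` parser; the expected host, port, and database names mirror this project's setup, and the function name is illustrative, not part of the codebase:

```typescript
// validate-db-url.ts - sanity-check DATABASE_URL before connecting.
// Expected values below are assumptions based on this project's compose setup.
export function validateDatabaseUrl(raw: string, insideDocker: boolean): string[] {
  const problems: string[] = [];
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return [`not a parseable URL: ${raw}`];
  }
  if (url.protocol !== 'postgresql:' && url.protocol !== 'postgres:') {
    problems.push(`unexpected protocol ${url.protocol}`);
  }
  // Mistake: using localhost from inside a container
  if (insideDocker && url.hostname === 'localhost') {
    problems.push('localhost will not resolve to the database from inside a container');
  }
  // Mistake: using the container name from the host
  if (!insideDocker && url.hostname === 'v2-postgres') {
    problems.push('the v2-postgres container name only resolves on the Docker network');
  }
  if (url.pathname !== '/changemaker_v2') {
    problems.push(`database name is ${url.pathname}, expected /changemaker_v2`);
  }
  return problems;
}
```

Called once at boot, a non-empty return value can fail fast with a clear message instead of a cryptic `ECONNREFUSED`.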
**Solution 4: Test connection manually**
```bash
# From API container
docker compose exec api sh -c 'psql $DATABASE_URL -c "SELECT NOW();"'
# From host
psql "postgresql://changemaker:password@localhost:5433/changemaker_v2" -c "SELECT NOW();"
# If fails, connection string is wrong
```
**Solution 5: Check port mapping**
In `docker-compose.yml`:
```yaml
v2-postgres:
ports:
- "5433:5432" # host:container
```
Verify:
```bash
docker compose ps v2-postgres
# Should show:
# PORTS: 0.0.0.0:5433->5432/tcp
```
#### Prevention
- **Health checks** - Wait for database health before starting API
- **Connection retry** - Retry connection on startup
- **Correct env vars** - Validate DATABASE_URL format
- **Monitoring** - Alert on connection failures
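The "connection retry" point above can be sketched as a small exponential-backoff loop. This is a generic wrapper, not project code; in practice `connect` would be something like `() => prisma.$connect()`, and the attempt/delay defaults are illustrative:

```typescript
// connect-with-retry.ts - retry a connect function with exponential backoff.
export async function connectWithRetry(
  connect: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await connect();  // e.g. () => prisma.$connect()
      return attempt;   // success: report which attempt worked
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      // 500ms, 1s, 2s, ... gives the database time to finish starting up
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error('unreachable');
}
```

This covers the common case where the API container starts a few seconds before PostgreSQL reports "ready to accept connections".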
---
### Too Many Clients
**Severity:** 🟠 High
#### Symptoms
```
FATAL: sorry, too many clients already
```
Or:
```
Error: remaining connection slots are reserved for non-replication superuser connections
```
#### Common Causes
1. **Connection leak** - Connections not closed
2. **Pool too large** - Connection pool size too high
3. **Multiple Prisma instances** - Each creates own pool
4. **Long-running transactions** - Holding connections
#### Solutions
**Solution 1: Check active connections**
```sql
-- View all connections
SELECT count(*) FROM pg_stat_activity;
-- View connections by state
SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
GROUP BY state;
-- View connection details
SELECT pid, usename, application_name, state, query_start, query
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
ORDER BY query_start;
```
**Solution 2: Kill idle connections**
```sql
-- Find idle connections
SELECT pid, usename, state, state_change
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
AND state = 'idle'
AND state_change < NOW() - INTERVAL '5 minutes';
-- Kill specific connection
SELECT pg_terminate_backend(12345); -- Replace with actual PID
-- Kill all idle connections (careful!)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
AND state = 'idle'
AND state_change < NOW() - INTERVAL '5 minutes';
```
**Solution 3: Adjust connection pool**
In DATABASE_URL:
```bash
# Limit connection pool size
DATABASE_URL="postgresql://changemaker:password@v2-postgres:5432/changemaker_v2?connection_limit=10"
```
Or in Prisma code:
```typescript
// api/src/config/database.ts
import { PrismaClient } from '@prisma/client';
export const prisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.DATABASE_URL
    }
  }
  // Connection pool defaults (when connection_limit is not set in the URL):
  //   connection_limit: (number of physical CPUs × 2) + 1
  //   pool_timeout: 10 (seconds)
});
```
**Solution 4: Increase max connections**
In `docker-compose.yml`:
```yaml
v2-postgres:
  command: postgres -c max_connections=200
  # Default is 100
```
Restart:
```bash
docker compose up -d v2-postgres
```
Verify:
```sql
SHOW max_connections;
```
**Solution 5: Restart API to release connections**
```bash
# Restart API releases all connections
docker compose restart api
docker compose restart media-api
# Check connection count dropped
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 \
-c "SELECT count(*) FROM pg_stat_activity WHERE datname = 'changemaker_v2';"
```
#### Prevention
- **Proper cleanup** - Always close Prisma clients in tests
- **Appropriate pool size** - Balance performance vs connections
- **Monitor connections** - Alert when approaching max
- **Idle timeout** - Automatically close idle connections
!!! warning "Connection Math"
    Total connections = (number of API instances) × (connection pool size) + (other clients)

    Example:

    - 2 API instances × 10 pool size = 20 connections
    - 1 media API × 5 pool size = 5 connections
    - Prisma Studio = 1 connection
    - Total = 26 connections

    Set max_connections to 2-3× expected usage.
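The connection math above is simple enough to encode as a capacity check (a sketch; the type and function names are illustrative, and the instance counts come from the example, not from measurement):

```typescript
// connection-budget.ts - estimate total client connections against max_connections.
interface Pool {
  instances: number; // number of running copies of this service
  poolSize: number;  // connection_limit per instance
}

export function totalConnections(pools: Pool[], otherClients = 0): number {
  // Sum instances × pool size for every service, plus ad-hoc clients
  // such as Prisma Studio or psql sessions.
  return pools.reduce((sum, p) => sum + p.instances * p.poolSize, otherClients);
}
```

With the example numbers, `totalConnections([{ instances: 2, poolSize: 10 }, { instances: 1, poolSize: 5 }], 1)` yields 26, well under a `max_connections` of 100.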
---
### Authentication Failed
**Severity:** 🔴 Critical
#### Symptoms
```
FATAL: password authentication failed for user "changemaker"
```
Or:
```
FATAL: role "changemaker" does not exist
```
#### Common Causes
1. **Wrong password** - PASSWORD in DATABASE_URL doesn't match
2. **Wrong username** - User doesn't exist
3. **Password changed** - Database password changed but not .env
4. **Case sensitivity** - PostgreSQL usernames are case-sensitive
#### Solutions
**Solution 1: Verify credentials**
```bash
# Check .env
cat .env | grep V2_POSTGRES_PASSWORD
# Check DATABASE_URL
cat .env | grep DATABASE_URL
# Password in DATABASE_URL must match V2_POSTGRES_PASSWORD
```
**Solution 2: Test connection directly**
```bash
# Test with password
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2
# If prompted for password, enter V2_POSTGRES_PASSWORD
# If fails, credentials are wrong
```
**Solution 3: Check user exists**
```bash
# Connect as postgres superuser
docker compose exec v2-postgres psql -U postgres -c "\du"
# Should show changemaker user:
# Role name | Attributes
# changemaker |
# If missing, create user:
docker compose exec v2-postgres psql -U postgres -c \
"CREATE USER changemaker WITH PASSWORD 'your-password';"
docker compose exec v2-postgres psql -U postgres -c \
"GRANT ALL PRIVILEGES ON DATABASE changemaker_v2 TO changemaker;"
```
**Solution 4: Reset password**
```bash
# As postgres superuser
docker compose exec v2-postgres psql -U postgres -c \
"ALTER USER changemaker WITH PASSWORD 'new-password';"
# Update .env
V2_POSTGRES_PASSWORD=new-password
DATABASE_URL="postgresql://changemaker:new-password@v2-postgres:5432/changemaker_v2"
# Restart API
docker compose restart api
```
**Solution 5: Recreate database**
If completely broken:
```bash
# Backup first!
docker compose exec v2-postgres pg_dump -U postgres changemaker_v2 > backup.sql
# Stop database
docker compose down v2-postgres
# Remove volume (⚠️ DELETES DATA!)
docker volume rm changemaker-lite_postgres-data
# Start fresh
docker compose up -d v2-postgres
# Wait for ready
docker compose logs -f v2-postgres | grep "ready"
# Run migrations
docker compose exec api npx prisma migrate deploy
docker compose exec api npx prisma db seed
```
#### Prevention
- **Secure passwords** - Strong passwords in .env
- **Consistent credentials** - Same password in all places
- **Version control .env.example** - Template with placeholders
- **Documentation** - Document credential structure
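The "consistent credentials" point can be checked mechanically. This sketch compares the standalone password in a .env file with the one embedded in DATABASE_URL; the variable names match this project, but the parsing is a rough heuristic (it will misread passwords containing `@`), not a full URL parser:

```shell
# check_db_creds FILE - succeed if V2_POSTGRES_PASSWORD matches the
# password embedded in DATABASE_URL, fail otherwise.
check_db_creds() {
  env_file="$1"

  # Strip the variable name and any surrounding double quotes.
  pw=$(grep '^V2_POSTGRES_PASSWORD=' "$env_file" | cut -d= -f2- | tr -d '"')
  url=$(grep '^DATABASE_URL=' "$env_file" | cut -d= -f2- | tr -d '"')

  # Extract the password between the ':' after the user and the '@'.
  url_pw=$(printf '%s\n' "$url" | sed -E 's|^[a-z]+://[^:]+:([^@]+)@.*$|\1|')

  if [ -n "$pw" ] && [ "$pw" = "$url_pw" ]; then
    echo "OK: passwords match"
  else
    echo "MISMATCH: V2_POSTGRES_PASSWORD='$pw' but DATABASE_URL has '$url_pw'" >&2
    return 1
  fi
}

# Usage: check_db_creds .env
```

Running this in CI or a pre-start hook catches the "password changed in one place but not the other" failure before the API ever logs an authentication error.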
---
### Database Does Not Exist
**Severity:** 🟠 High
#### Symptoms
```
FATAL: database "changemaker_v2" does not exist
```
#### Common Causes
1. **First run** - Database not created yet
2. **Wrong database name** - Typo in DATABASE_URL
3. **Database deleted** - Volume was removed
4. **Wrong postgres instance** - Connected to different database
#### Solutions
**Solution 1: Check database exists**
```bash
# List databases
docker compose exec v2-postgres psql -U postgres -l
# Should show:
# Name | Owner
# changemaker_v2 | changemaker
# If missing, database wasn't created
```
**Solution 2: Create database**
```bash
# Create database
docker compose exec v2-postgres psql -U postgres -c \
"CREATE DATABASE changemaker_v2 OWNER changemaker;"
# Verify
docker compose exec v2-postgres psql -U postgres -l | grep changemaker_v2
```
**Solution 3: Run migrations**
```bash
# Prisma migrations create tables
docker compose exec api npx prisma migrate deploy
# Drizzle push creates media tables
docker compose exec api npx drizzle-kit push
# Seed initial data
docker compose exec api npx prisma db seed
```
**Solution 4: Check DATABASE_URL**
```bash
# Verify database name in URL
cat .env | grep DATABASE_URL
# Should end with /changemaker_v2
# Not:
# /changemaker (missing _v2)
# /postgres (wrong database)
```
**Solution 5: Full reset**
```bash
# ⚠️ Deletes all data!
docker compose down -v
docker compose up -d v2-postgres
# Wait for ready
sleep 10
# Create and migrate
docker compose exec api npx prisma migrate deploy
docker compose exec api npx drizzle-kit push
docker compose exec api npx prisma db seed
```
#### Prevention
- **Initialization scripts** - Auto-create database on first run
- **Health checks** - Verify database exists before app starts
- **Migrations** - Run migrations in deployment script
- **Documentation** - Clear setup instructions
---
## Migration Errors
### Migration Conflict
**Severity:** 🟠 High
#### Symptoms
```
Error: Migration failed to apply cleanly to the shadow database.
Error: P3006 Migration `20260101000000_init` failed to apply cleanly to a temporary database.
```
Or:
```
Error: The migration `20260201000000_add_field` cannot be applied to the database:
- Added the required column `fieldName` to the `User` table without a default value.
```
#### Common Causes
1. **Schema drift** - Database schema doesn't match Prisma schema
2. **Non-nullable column** - Adding required field to table with data
3. **Conflicting migration** - Different migration with same name
4. **Shadow database issue** - Can't create shadow database
#### Solutions
**Solution 1: Check migration status**
```bash
# View migration history
docker compose exec api npx prisma migrate status
# Shows:
# - Applied migrations
# - Pending migrations
# - Failed migrations
```
**Solution 2: Add default value for new field**
If adding non-nullable column to table with existing data:
```prisma
// In prisma/schema.prisma
model User {
  id    String @id @default(uuid())
  email String @unique
  name  String @default("") // Add default for existing rows
}
```
Or use two-step migration:
```sql
-- Migration 1: Add nullable field
ALTER TABLE "User" ADD COLUMN "name" TEXT;
-- Migration 2: Make non-nullable (after backfilling)
UPDATE "User" SET "name" = 'Unknown' WHERE "name" IS NULL;
ALTER TABLE "User" ALTER COLUMN "name" SET NOT NULL;
```
**Solution 3: Reset database (dev only)**
```bash
# ⚠️ DELETES ALL DATA!
docker compose exec api npx prisma migrate reset
# This:
# 1. Drops database
# 2. Creates database
# 3. Applies all migrations
# 4. Runs seed
```
**Solution 4: Manually fix schema drift**
```bash
# Compare database schema to Prisma schema
docker compose exec api npx prisma db pull
# This creates a new schema.prisma from database
# Compare with your current schema.prisma
# Manually fix differences
```
**Solution 5: Mark migration as applied (if already applied manually)**
```bash
# If you manually ran migration SQL, mark as applied:
docker compose exec api npx prisma migrate resolve --applied "20260201000000_migration_name"
```
#### Prevention
- **Development workflow** - Use `prisma migrate dev` in dev
- **Production workflow** - Use `prisma migrate deploy` in prod
- **Never edit migrations** - Don't modify files in migrations/
- **Test migrations** - Test on copy of prod data first
---
### Schema Drift
**Severity:** 🟡 Medium
#### Symptoms
```
Warning: Your database schema is not in sync with your Prisma schema.
```
Or:
```
Error: P2021 The table `public.NewTable` does not exist in the current database.
```
#### Common Causes
1. **Manual schema changes** - Changed database without migration
2. **Missing migrations** - Migrations not run on this database
3. **Different environment** - Prod vs dev schema mismatch
4. **Failed migration** - Migration partially applied
#### Solutions
**Solution 1: Detect drift**
```bash
# Check for drift
docker compose exec api npx prisma migrate diff \
--from-schema-datamodel prisma/schema.prisma \
--to-schema-datasource prisma/schema.prisma \
--script
# If output is empty, no drift
# If shows SQL, that's the drift
```
**Solution 2: Create migration from drift**
```bash
# Generate migration to fix drift
docker compose exec api npx prisma migrate dev --name fix_drift
# Reviews changes and creates migration
```
**Solution 3: Pull schema from database**
```bash
# Update Prisma schema from database
docker compose exec api npx prisma db pull
# This overwrites schema.prisma with actual database schema
# Review changes before committing
```
**Solution 4: Deploy missing migrations**
```bash
# Apply all pending migrations
docker compose exec api npx prisma migrate deploy
# Check status
docker compose exec api npx prisma migrate status
```
**Solution 5: Reset and re-migrate (dev only)**
```bash
# ⚠️ DELETES ALL DATA!
docker compose exec api npx prisma migrate reset
# Applies all migrations fresh
```
#### Prevention
- **Never manual schema changes** - Always use migrations
- **Consistent workflow** - Same process in all environments
- **CI/CD validation** - Check for drift in CI pipeline
- **Documentation** - Document migration process
---
### Failed Migration Rollback
**Severity:** 🔴 Critical
#### Symptoms
```
Error: Migration failed. Cannot rollback without losing data.
```
Or:
```
Error: Database is in an inconsistent state after a failed migration
```
#### Common Causes
1. **Data migration failed** - Migration includes data changes that failed
2. **Constraint violation** - Migration violates database constraints
3. **No rollback** - Prisma doesn't support automatic rollback
4. **Partial application** - Migration partially applied before error
#### Solutions
**Solution 1: Mark migration as rolled back**
```bash
# Mark as failed (doesn't undo changes)
docker compose exec api npx prisma migrate resolve --rolled-back "20260201000000_migration_name"
```
**Solution 2: Manually revert changes**
```bash
# Find what the migration did
cat api/prisma/migrations/20260201000000_migration_name/migration.sql

# If the migration did:
#   ALTER TABLE "User" ADD COLUMN "newField" TEXT;
# the reverse is:
#   ALTER TABLE "User" DROP COLUMN "newField";

# Apply the reverse
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 \
  -c 'ALTER TABLE "User" DROP COLUMN "newField";'
```
**Solution 3: Restore from backup**
```bash
# If you have backup before migration
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup-before-migration.sql
# Then mark migration as rolled back
docker compose exec api npx prisma migrate resolve --rolled-back "20260201000000_migration_name"
```
**Solution 4: Fix forward**
Instead of rolling back, fix the issue and continue:
```bash
# Fix the issue (e.g., add missing default value)
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 \
-c 'ALTER TABLE "User" ALTER COLUMN "newField" SET DEFAULT '\''value'\'';'
# Retry migration
docker compose exec api npx prisma migrate deploy
```
**Solution 5: Baseline from current state**
If database is in unknown state:
```bash
# Create new migration from current state
docker compose exec api npx prisma migrate dev --name baseline --create-only
# Review generated migration
# If it looks correct, apply:
docker compose exec api npx prisma migrate deploy
```
#### Prevention
- **Test migrations** - Test on copy of prod data first
- **Backup before migrate** - Always backup before production migration
- **Reversible migrations** - Design migrations to be reversible
- **Small migrations** - Small, focused migrations easier to fix
!!! danger "Prisma Doesn't Auto-Rollback"
    Prisma Migrate does NOT automatically roll back failed migrations. You must fix issues manually.
---
## Query Performance
### Slow Queries
**Severity:** 🟡 Medium to 🟠 High
#### Symptoms
API requests taking seconds to respond:
```
GET /api/users - 5000ms
```
Database logs show slow queries:
```
LOG: duration: 4521.234 ms statement: SELECT * FROM "User" WHERE ...
```
#### Common Causes
1. **Missing indexes** - Querying without index
2. **Full table scan** - WHERE clause doesn't use index
3. **N+1 queries** - Multiple queries instead of JOIN
4. **Large result set** - Fetching too many rows
5. **Complex query** - Too many JOINs or subqueries
#### Solutions
**Solution 1: Enable slow query logging**
In `docker-compose.yml`:
```yaml
v2-postgres:
  command: postgres -c log_min_duration_statement=1000
  # Logs queries taking > 1 second
```
Restart:
```bash
docker compose up -d v2-postgres
# View slow query log
docker compose logs v2-postgres | grep "duration:"
```
**Solution 2: Analyze query**
```sql
-- Use EXPLAIN to see query plan
EXPLAIN ANALYZE
SELECT * FROM "User"
WHERE email LIKE '%@example.com%';
-- Output shows:
-- Seq Scan on "User" (cost=0.00..20.00 rows=1000 width=100) (actual time=0.123..5.234 rows=50 loops=1)
-- Filter: (email ~~ '%@example.com%'::text)
-- Rows Removed by Filter: 950
-- Planning Time: 0.456 ms
-- Execution Time: 5.678 ms
-- "Seq Scan" = full table scan (slow)
-- "Index Scan" = using index (fast)
```
**Solution 3: Add indexes**
```prisma
// In prisma/schema.prisma
model User {
  id    String @id @default(uuid())
  email String @unique // Creates index automatically
  name  String

  @@index([name]) // Add index for name searches
}
```
Create migration:
```bash
docker compose exec api npx prisma migrate dev --name add_user_name_index
```
Verify index used:
```sql
EXPLAIN SELECT * FROM "User" WHERE name = 'John';
-- Should show: Index Scan using User_name_idx
```
**Solution 4: Fix N+1 queries**
```typescript
// Bad - N+1 queries
const campaigns = await prisma.campaign.findMany();
for (const campaign of campaigns) {
  const emails = await prisma.campaignEmail.findMany({
    where: { campaignId: campaign.id }
  });
}
// 1 query for campaigns + N queries for emails = N+1

// Good - single call with include
const campaignsWithEmails = await prisma.campaign.findMany({
  include: {
    emails: true
  }
});
// 2 queries total: one for campaigns, one for all their emails
```
**Solution 5: Limit result size**
```typescript
// Bad - fetch all users
const allUsers = await prisma.user.findMany();

// Good - paginate
const users = await prisma.user.findMany({
  take: 50,        // Limit to 50 rows
  skip: page * 50, // Offset for pagination
});
```
#### Prevention
- **Index frequently queried fields** - email, createdAt, etc.
- **Use includes** - Avoid N+1 queries
- **Paginate results** - Never fetch all rows
- **Monitor query performance** - Alert on slow queries
---
### Missing Indexes
**Severity:** 🟡 Medium
#### Symptoms
Slow queries on filtered/sorted columns:
```sql
SELECT * FROM "Location" WHERE "postalCode" = 'M5H 2N2';
-- Slow without index on postalCode
```
#### Common Causes
1. **No index on filter column** - WHERE clause column not indexed
2. **No index on sort column** - ORDER BY column not indexed
3. **No index on foreign key** - JOIN column not indexed
4. **Composite index needed** - Multiple columns in WHERE
#### Solutions
**Solution 1: Identify missing indexes**
```sql
-- Find tables without indexes
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename;
-- Find columns used in WHERE but not indexed
-- (requires pg_stat_statements extension)
```
**Solution 2: Add single-column index**
```prisma
model Location {
  id         String @id @default(uuid())
  address    String
  postalCode String

  @@index([postalCode]) // Add index
}
```
**Solution 3: Add composite index**
For queries filtering on multiple columns:
```prisma
model Location {
  id         String @id @default(uuid())
  province   String
  city       String
  postalCode String

  @@index([province, city]) // Composite index
  // Speeds up: WHERE province = 'ON' AND city = 'Toronto'
  // Also speeds up: WHERE province = 'ON'
  // Does NOT speed up: WHERE city = 'Toronto' (must start with first column)
}
```
**Solution 4: Add index on foreign key**
```prisma
model CampaignEmail {
  id         String   @id @default(uuid())
  campaignId String
  campaign   Campaign @relation(fields: [campaignId], references: [id])

  @@index([campaignId]) // Index foreign key for JOINs
}
```
**Solution 5: Create migration**
```bash
# Generate migration for index
docker compose exec api npx prisma migrate dev --name add_indexes
# Apply to production
docker compose exec api npx prisma migrate deploy
```
#### Prevention
- **Index foreign keys** - Always index foreign keys
- **Index filter columns** - Index columns used in WHERE
- **Index sort columns** - Index columns used in ORDER BY
- **Monitor query patterns** - Add indexes based on actual usage
!!! tip "Index Guidelines"
    - Unique constraints auto-create indexes
    - Foreign keys should be indexed
    - Columns in WHERE/ORDER BY/GROUP BY are candidates
    - Don't over-index (slows down writes)
---
### N+1 Queries
**Severity:** 🟠 High
#### Symptoms
API slow when fetching related data:
```
GET /api/campaigns - 2000ms
```
Database logs show many similar queries:
```
SELECT * FROM "CampaignEmail" WHERE "campaignId" = 'uuid1'
SELECT * FROM "CampaignEmail" WHERE "campaignId" = 'uuid2'
SELECT * FROM "CampaignEmail" WHERE "campaignId" = 'uuid3'
...
```
#### Common Causes
1. **No eager loading** - Fetching relations in loop
2. **Separate queries** - Not using include/select
3. **Nested loops** - Multiple levels of relations
#### Solutions
**Solution 1: Detect N+1 queries**
Enable query logging:
```typescript
// In api/src/config/database.ts
export const prisma = new PrismaClient({
  log: ['query'], // Log all queries
});
```
Look for repeated patterns:
```
Query: SELECT * FROM "Campaign"
Query: SELECT * FROM "CampaignEmail" WHERE "campaignId" = '...'
Query: SELECT * FROM "CampaignEmail" WHERE "campaignId" = '...'
Query: SELECT * FROM "CampaignEmail" WHERE "campaignId" = '...'
```
**Solution 2: Use include**
```typescript
// Bad - N+1
const campaigns = await prisma.campaign.findMany();
for (const campaign of campaigns) {
  campaign.emails = await prisma.campaignEmail.findMany({
    where: { campaignId: campaign.id }
  });
}
// 1 + N queries

// Good - one call with the relation included
const campaignsWithEmails = await prisma.campaign.findMany({
  include: {
    emails: true
  }
});
// 2 queries (1 for campaigns, 1 for all their emails)
```
**Solution 3: Nested includes**
```typescript
// Multi-level relations
const campaigns = await prisma.campaign.findMany({
  include: {
    emails: {
      include: {
        user: true // Include user who sent email
      }
    },
    createdBy: true
  }
});
```
**Solution 4: Select only needed fields**
```typescript
// Fetch only needed data
const campaigns = await prisma.campaign.findMany({
  select: {
    id: true,
    name: true,
    emails: {
      select: {
        id: true,
        sentAt: true
      }
    }
  }
});
```
**Solution 5: Use findUnique with include for single record**
```typescript
// Bad - two round trips
const campaign = await prisma.campaign.findUnique({
  where: { id }
});
const emails = await prisma.campaignEmail.findMany({
  where: { campaignId: id }
});

// Good - one call with the relation included
const campaignWithEmails = await prisma.campaign.findUnique({
  where: { id },
  include: { emails: true }
});
```
#### Prevention
- **Always use include** - Load relations in single query
- **Enable query logging** - Monitor for N+1 patterns
- **Code review** - Check for loops with queries
- **Testing** - Load test with realistic data
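The "enable query logging" point can be taken one step further and partially automated. This sketch normalizes logged SQL by stripping parameter values, then flags any query shape that repeats suspiciously often; the threshold and normalization rules are illustrative, not a library API:

```typescript
// n-plus-one-detector.ts - flag repeated query shapes in a captured query log.
export function findRepeatedShapes(
  queries: string[],
  threshold = 10,
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const q of queries) {
    // Normalize: collapse quoted strings and bare numbers so that
    // "WHERE id = 'uuid1'" and "WHERE id = 'uuid2'" count as one shape.
    const shape = q
      .replace(/'[^']*'/g, '?')
      .replace(/\b\d+\b/g, '?')
      .replace(/\s+/g, ' ')
      .trim();
    counts.set(shape, (counts.get(shape) ?? 0) + 1);
  }
  // Keep only shapes that repeat at or above the threshold.
  return new Map([...counts].filter(([, n]) => n >= threshold));
}
```

Feeding it the queries captured by Prisma's `log: ['query']` option would surface the `SELECT * FROM "CampaignEmail" WHERE "campaignId" = ...` pattern from the symptoms above as a single hot shape with a high count.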
---
### Connection Pool Exhaustion
**Severity:** 🟠 High
#### Symptoms
```
Error: Timed out fetching a new connection from the connection pool.
```
Or:
```
Error: Can't create connection pool - all connections are in use
```
API becomes unresponsive.
#### Common Causes
1. **Pool too small** - Not enough connections for load
2. **Connections not released** - Long-running transactions
3. **Too many workers** - BullMQ workers using all connections
4. **Connection leak** - Connections never closed
#### Solutions
**Solution 1: Check pool size**
```bash
# View DATABASE_URL
cat .env | grep DATABASE_URL
# Prisma's default connection_limit is (number of physical CPUs × 2) + 1
# Check whether you've set it explicitly:
# postgresql://user:pass@host:5432/db?connection_limit=10
```
**Solution 2: Increase pool size**
```bash
# In .env
DATABASE_URL="postgresql://changemaker:password@v2-postgres:5432/changemaker_v2?connection_limit=20"
# Restart API
docker compose restart api
```
**Solution 3: Check active connections**
```sql
-- View connection pool usage
SELECT count(*), state
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
GROUP BY state;
-- Should show:
-- count | state
-- 5 | active
-- 2 | idle
-- 3 | idle in transaction
```
**Solution 4: Find long-running transactions**
```sql
-- Find transactions running > 1 minute
SELECT pid, usename, state, NOW() - xact_start AS duration, query
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
AND state = 'idle in transaction'
AND NOW() - xact_start > INTERVAL '1 minute';
-- Kill if stuck
SELECT pg_terminate_backend(pid);
```
**Solution 5: Configure pool timeout**
```bash
# Increase timeout from 10s to 30s
DATABASE_URL="postgresql://...?connection_limit=20&pool_timeout=30"
```
#### Prevention
- **Appropriate pool size** - Size based on load
- **Release connections** - Always close transactions
- **Monitor pool usage** - Alert when near limit
- **Connection timeout** - Kill stuck connections
!!! tip "Pool Sizing"
    Recommended pool size = (CPU cores × 2) + effective_spindle_count

    For most applications: 10-20 connections per API instance
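The sizing rule of thumb above translates directly to code. A minimal sketch, assuming `effective_spindle_count` is 1 (typical for SSD-backed databases); the function name is illustrative:

```typescript
import * as os from 'node:os';

// pool-size.ts - rule of thumb: (CPU cores × 2) + effective spindle count.
export function recommendedPoolSize(cores: number, spindles = 1): number {
  return cores * 2 + spindles;
}

// Example: size the pool from the host's visible CPU count.
const poolSize = recommendedPoolSize(os.cpus().length);
```

The result could then be appended to DATABASE_URL as `?connection_limit=${poolSize}`, keeping the pool proportional to the hardware rather than a guessed constant.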
---
## Data Issues
### Duplicate Records
**Severity:** 🟡 Medium
#### Symptoms
```
Error: Unique constraint failed on the fields: (`email`)
```
Or finding multiple records:
```sql
SELECT email, count(*)
FROM "User"
GROUP BY email
HAVING count(*) > 1;
-- Returns duplicates
```
#### Common Causes
1. **Race condition** - Two creates at exact same time
2. **Import error** - CSV import created duplicates
3. **Migration bug** - Migration didn't handle duplicates
4. **No unique constraint** - Database allows duplicates
#### Solutions
**Solution 1: Find duplicates**
```sql
-- Find duplicate emails
SELECT email, array_agg(id) AS ids, count(*)
FROM "User"
GROUP BY email
HAVING count(*) > 1;
-- Example output:
-- email | ids | count
-- john@example.com | {uuid1, uuid2} | 2
```
**Solution 2: Delete duplicates (keep oldest)**
```sql
-- Delete newer duplicates, keep oldest
DELETE FROM "User" u1
WHERE EXISTS (
  SELECT 1 FROM "User" u2
  WHERE u2.email = u1.email
    AND u2."createdAt" < u1."createdAt"
);

-- Or keep newest:
DELETE FROM "User" u1
WHERE EXISTS (
  SELECT 1 FROM "User" u2
  WHERE u2.email = u1.email
    AND u2."createdAt" > u1."createdAt"
);
```
**Solution 3: Merge duplicates**
```sql
-- If duplicates have different data, merge:
-- 1. Update foreign keys to point to kept record
UPDATE "Campaign" SET "createdByUserId" = 'uuid-to-keep'
WHERE "createdByUserId" = 'uuid-to-delete';
-- 2. Delete duplicate
DELETE FROM "User" WHERE id = 'uuid-to-delete';
```
**Solution 4: Add unique constraint**
```prisma
model User {
  id    String @id @default(uuid())
  email String @unique // Ensures uniqueness
}
```
Create migration:
```bash
# This will fail if duplicates exist
# Delete duplicates first (Solution 2)
docker compose exec api npx prisma migrate dev --name add_unique_email
```
**Solution 5: Prevent in application code**
```typescript
// Use upsert instead of create
const user = await prisma.user.upsert({
  where: { email },
  update: {}, // Don't change if exists
  create: { email, name, password }
});
```
#### Prevention
- **Unique constraints** - Database enforces uniqueness
- **Use upsert** - Update or create atomically
- **Validation** - Check existence before creating
- **Transaction isolation** - Prevent race conditions
---
### Constraint Violations
**Severity:** 🟡 Medium
#### Symptoms
```
Error: Foreign key constraint failed on the field: `campaignId`
```
Or:
```
Error: Null value in column "name" violates not-null constraint
```
Or:
```
Error: Check constraint "positive_age" is violated
```
#### Common Causes
1. **Foreign key missing** - Referenced record doesn't exist
2. **Null in required field** - NULL when NOT NULL constraint
3. **Check constraint** - Value violates CHECK constraint
4. **Data type mismatch** - Wrong type for column
#### Solutions
**Solution 1: Verify foreign key exists**
```sql
-- Check if campaign exists
SELECT id FROM "Campaign" WHERE id = 'campaign-uuid';
-- If not found, create parent first
```
**Solution 2: Provide required fields**
```typescript
// Bad - missing required field
await prisma.user.create({
  data: {
    email: 'user@example.com'
    // Missing: name (required)
  }
});

// Good - all required fields
await prisma.user.create({
  data: {
    email: 'user@example.com',
    name: 'User Name',
    password: 'hashed-password'
  }
});
```
**Solution 3: Handle check constraints**
```sql
-- If schema has:
ALTER TABLE "User" ADD CONSTRAINT age_check CHECK (age >= 0);
-- Ensure value meets constraint:
INSERT INTO "User" (email, age) VALUES ('user@example.com', 25);
-- Not: VALUES ('user@example.com', -5);
```
**Solution 4: Fix data type**
```typescript
// Bad - passing string for number
await prisma.location.create({
  data: {
    latitude: "43.65" as any // Wrong type
  }
});

// Good - use number
await prisma.location.create({
  data: {
    latitude: 43.65 // Correct type
  }
});
```
**Solution 5: Use transactions for dependent creates**
```typescript
// Create parent and child atomically
await prisma.$transaction(async (tx) => {
  const campaign = await tx.campaign.create({
    data: { name: 'My Campaign' }
  });
  await tx.campaignEmail.create({
    data: {
      campaignId: campaign.id,
      subject: 'Email Subject'
    }
  });
});
```
#### Prevention
- **TypeScript types** - Catch type errors at compile time
- **Zod validation** - Validate before database operations
- **Foreign key checks** - Verify parent exists
- **Transactions** - Atomic multi-step operations
---
### Data Corruption
**Severity:** 🔴 Critical
#### Symptoms
- Invalid JSON in JSON columns
- Truncated text
- Wrong character encoding
- Inconsistent relationships
```sql
SELECT * FROM "Campaign" WHERE "settings"::text LIKE '%\\u0000%';
-- Null bytes in JSON
```
#### Common Causes
1. **Bad import** - CSV/JSON import with bad data
2. **Encoding issues** - Wrong character encoding
3. **Failed migration** - Migration partially applied
4. **Application bug** - Code writing bad data
#### Solutions
**Solution 1: Detect corruption**
```sql
-- Find JSON values that are not objects or arrays
-- (a jsonb column cannot store malformed JSON, so this catches
-- scalars written where an object or array was expected)
SELECT id, settings
FROM "Campaign"
WHERE settings IS NOT NULL
AND settings::text !~ '^[\[\{].*[\]\}]$';
-- Find null bytes
SELECT id, name
FROM "Location"
WHERE name LIKE '%' || chr(0) || '%';
-- Find wrong encoding
SELECT id, address
FROM "Location"
WHERE address ~ '[^\x00-\x7F]' AND address !~ '[À-ÿ]';
```
**Solution 2: Fix invalid JSON**
```sql
-- Replace non-object JSON values with an empty object
UPDATE "Campaign"
SET settings = '{}'::jsonb
WHERE settings IS NOT NULL
AND settings::text !~ '^[\[\{].*[\]\}]$';
```
**Solution 3: Fix encoding**
```sql
-- Repair double-encoded UTF-8 (UTF-8 bytes that were misread as Latin-1)
UPDATE "Location"
SET address = convert_from(convert_to(address, 'LATIN1'), 'UTF8')
WHERE address ~ '[^\x00-\x7F]';
```
**Solution 4: Restore from backup**
```bash
# If corruption is widespread, restore from backup
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup-before-corruption.sql
```
**Solution 5: Prevent future corruption**
```typescript
// Validate data before saving
import { z } from 'zod';
const settingsSchema = z.object({
key: z.string(),
value: z.any()
});
// Before save
const validated = settingsSchema.parse(settings);
await prisma.campaign.update({
where: { id },
data: { settings: validated as any }
});
```
#### Prevention
- **Input validation** - Validate all inputs with Zod
- **UTF-8 encoding** - Use UTF-8 everywhere
- **Regular backups** - Daily backups
- **Data integrity checks** - Regular validation scripts
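The integrity checks in the prevention list can start from small pure helpers like the ones below (function names are illustrative); a scheduled script would run them against values fetched from the database:

```typescript
// Helpers a periodic integrity-check script might apply to column
// values before flagging rows. Names are illustrative; wiring them to
// real queries is left to the surrounding script.

// Strict check: the value must actually parse, not merely look like JSON.
function isValidJson(text: string): boolean {
  try {
    JSON.parse(text);
    return true;
  } catch {
    return false;
  }
}

// Null bytes are rejected by PostgreSQL text/jsonb values and usually
// signal a bad import or an encoding bug upstream.
function containsNullByte(text: string): boolean {
  return text.includes("\u0000");
}

console.log(isValidJson('{"key": "value"}'));   // true
console.log(isValidJson("{broken"));            // false
console.log(containsNullByte("bad\u0000data")); // true
```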
---
## Prisma Studio Issues
### Won't Connect
**Severity:** 🟢 Low
#### Symptoms
```bash
docker compose exec api npx prisma studio
```
Opens browser but shows:
```
Error connecting to database
```
#### Solutions
**Solution 1: Check DATABASE_URL**
```bash
# Verify DATABASE_URL in container
docker compose exec api sh -c 'echo $DATABASE_URL'
# Should be valid connection string
```
**Solution 2: Test connection**
```bash
# Test database connection
docker compose exec api npx prisma db pull
# If fails, connection string is wrong
```
**Solution 3: Use correct port**
Prisma Studio runs on port 5555 by default. If that port is already in use:
```bash
# Use different port
docker compose exec api npx prisma studio --port 5556
```
**Solution 4: Check database is running**
```bash
docker compose ps v2-postgres
# Must be "Up"
```
---
### Slow Loading
**Severity:** 🟢 Low
#### Symptoms
Prisma Studio takes minutes to load tables with many rows.
#### Solutions
**Solution 1: Limit rows**
Prisma Studio can be slow on tables with many rows. For large tables, query with SQL instead:
```bash
# Instead of Prisma Studio for large tables
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2
```
**Solution 2: Add pagination**
```sql
-- In psql, paginate manually
SELECT * FROM "Location" LIMIT 50 OFFSET 0;
SELECT * FROM "Location" LIMIT 50 OFFSET 50;
```
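The OFFSET arithmetic behind those queries is easy to get off by one. A small helper (hypothetical, for illustration) makes the rule explicit: pages start at 1, and OFFSET = (page - 1) × page size.

```typescript
// Build a paginated query string. Hypothetical helper for illustration;
// in application code, prefer Prisma's `take`/`skip` or a parameterized
// query over string interpolation.
function pageToSql(table: string, page: number, pageSize = 50): string {
  if (page < 1) throw new Error("page numbers start at 1");
  const offset = (page - 1) * pageSize;
  return `SELECT * FROM "${table}" LIMIT ${pageSize} OFFSET ${offset}`;
}

console.log(pageToSql("Location", 1)); // SELECT * FROM "Location" LIMIT 50 OFFSET 0
console.log(pageToSql("Location", 2)); // SELECT * FROM "Location" LIMIT 50 OFFSET 50
```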
---
## Drizzle Kit Issues
### Push Failures
**Severity:** 🟠 High
#### Symptoms
```bash
docker compose exec api npx drizzle-kit push
```
Fails with:
```
Error: Failed to push schema changes
```
#### Solutions
**Solution 1: Check Drizzle config**
```typescript
// In api/drizzle.config.ts
import { defineConfig } from 'drizzle-kit';
export default defineConfig({
schema: './src/modules/media/db/schema.ts',
out: './drizzle',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL!
}
});
```
**Solution 2: Verify schema file**
```bash
# Check schema file exists
docker compose exec api ls -la src/modules/media/db/schema.ts
# Check for syntax errors
docker compose exec api npx tsc --noEmit src/modules/media/db/schema.ts
```
**Solution 3: Check for conflicts with Prisma tables**
Drizzle and Prisma share the same database. Ensure table names don't conflict:
```typescript
// Drizzle tables
export const videos = pgTable('media_videos', { ... });
export const reactions = pgTable('media_reactions', { ... });
// Prisma uses: User, Campaign, etc. (no conflict)
```
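A quick script can verify the two table sets stay disjoint. The comparison below is deliberately case-insensitive, which is conservative: quoted identifiers like `"User"` and an unquoted `user` would otherwise be easy to confuse. The table lists are illustrative, not exhaustive:

```typescript
// Conservative (case-insensitive) check that Drizzle table names do not
// collide with Prisma's. Lists are illustrative, not exhaustive.
const prismaTables = ["User", "Campaign", "Location", "CampaignEmail"];
const drizzleTables = ["media_videos", "media_reactions"];

function findConflicts(existing: string[], candidates: string[]): string[] {
  const taken = new Set(existing.map((t) => t.toLowerCase()));
  return candidates.filter((t) => taken.has(t.toLowerCase()));
}

console.log(findConflicts(prismaTables, drizzleTables)); // [] (no collisions)
```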
**Solution 4: Manually apply schema**
```bash
# Generate SQL
docker compose exec api npx drizzle-kit generate
# (older drizzle-kit versions use: npx drizzle-kit generate:pg)
# Review SQL in drizzle/ directory
# Apply manually if needed
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 < drizzle/0000_schema.sql
```
---
## Backup/Restore Issues
### pg_dump Errors
**Severity:** 🟠 High
#### Symptoms
```bash
docker compose exec v2-postgres pg_dump -U changemaker changemaker_v2 > backup.sql
```
Fails with:
```
pg_dump: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
```
#### Solutions
**Solution 1: Use correct connection**
```bash
# From inside container
docker compose exec v2-postgres pg_dump -U changemaker changemaker_v2 > backup.sql
# Or specify host explicitly
docker compose exec v2-postgres pg_dump -U changemaker -h v2-postgres changemaker_v2 > backup.sql
```
**Solution 2: Backup to file inside container**
```bash
# Dump to file inside container
docker compose exec v2-postgres pg_dump -U changemaker changemaker_v2 -f /tmp/backup.sql
# Copy to host
docker cp changemaker-lite-v2-postgres-1:/tmp/backup.sql ./backup.sql
```
**Solution 3: Use backup script**
```bash
# Use provided backup script
./scripts/backup.sh
```
---
### Restore Failures
**Severity:** 🔴 Critical
#### Symptoms
```bash
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup.sql
```
Fails with errors:
```
ERROR: relation "User" already exists
ERROR: duplicate key value violates unique constraint
```
#### Solutions
**Solution 1: Drop database first**
```bash
# ⚠️ DELETES ALL DATA! Connect to the maintenance "postgres" database,
# since a database cannot be dropped while clients are connected to it.
# WITH (FORCE) terminates remaining connections (PostgreSQL 13+).
docker compose exec v2-postgres psql -U changemaker -d postgres -c "DROP DATABASE changemaker_v2 WITH (FORCE);"
docker compose exec v2-postgres psql -U changemaker -d postgres -c "CREATE DATABASE changemaker_v2 OWNER changemaker;"
# Then restore
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup.sql
```
**Solution 2: Use --clean flag**
```bash
# Create backup with clean option
docker compose exec v2-postgres pg_dump -U changemaker --clean changemaker_v2 > backup.sql
# Restore (drops existing objects first)
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup.sql
```
**Solution 3: Ignore errors for existing objects**
```bash
# Restore and ignore "already exists" errors
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < backup.sql 2>&1 | grep -v "already exists"
```
---
## Useful Commands
### Query Database
```bash
# Connect to database
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2
# Run single query
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "SELECT NOW();"
# Run SQL file
docker compose exec -T v2-postgres psql -U changemaker -d changemaker_v2 < script.sql
# Export query results to CSV
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 \
-c "COPY (SELECT * FROM \"User\") TO STDOUT WITH CSV HEADER" > users.csv
```
### Database Inspection
```bash
# List databases
docker compose exec v2-postgres psql -U changemaker -l
# List tables
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "\dt"
# Describe table
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "\d \"User\""
# List indexes
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "\di"
# View table sizes
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
"
```
### Performance Analysis
```bash
# Current activity
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "
SELECT pid, usename, application_name, state, query_start, query
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
ORDER BY query_start;
"
# Table statistics
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "
SELECT schemaname, tablename, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
"
# Index usage
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
"
# Unused indexes (note: indexes backing UNIQUE constraints can show zero scans but must be kept)
docker compose exec v2-postgres psql -U changemaker -d changemaker_v2 -c "
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0 AND indexname NOT LIKE '%pkey'
ORDER BY pg_relation_size(indexname::regclass) DESC;
"
```
---
## Related Documentation
### Database Documentation
- [Database Issues](database-issues.md) - This guide
- [Installation Guide](../user/installation.md) - Initial database setup
- [Architecture Overview](../technical/architecture.md) - Database architecture
### Other Troubleshooting
- [Common Errors](common-errors.md) - General errors
- [Docker Issues](docker-issues.md) - Container problems
- [Performance Optimization](performance-optimization.md) - Database tuning
### PostgreSQL Resources
- [PostgreSQL Documentation](https://www.postgresql.org/docs/)
- [Prisma Documentation](https://www.prisma.io/docs/)
- [Drizzle Documentation](https://orm.drizzle.team/)
---
**Last Updated:** February 2026
**Version:** V2.0
**Status:** Complete