# Performance Optimization
This guide covers performance tuning and optimization strategies for Changemaker Lite V2.
## Overview
### Performance Areas
1. **Database** - Query optimization, indexing, connection pooling
2. **API** - Caching, rate limiting, pagination
3. **Frontend** - Code splitting, lazy loading, bundling
4. **Docker** - Resource limits, multi-stage builds
5. **Nginx** - Compression, caching, keep-alive
6. **Email Queue** - Worker count, batch processing
7. **Monitoring** - Prometheus metrics, Grafana dashboards
### Performance Metrics
**Target performance:**
- API response time: < 200ms (p95)
- Database query time: < 50ms (p95)
- Frontend load time: < 2s (initial)
- Email sending: 100+ emails/minute
- Concurrent users: 500+
---
## Database Optimization
### Index Optimization
**Find missing indexes:**
```sql
-- List existing indexes per table
SELECT schemaname, tablename, indexname
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename;
-- Find tables with heavy sequential scans (likely missing indexes)
SELECT *
FROM pg_stat_user_tables
WHERE schemaname = 'public'
AND seq_scan > 1000
AND seq_tup_read / seq_scan > 10000
ORDER BY seq_scan DESC;
```
**Add indexes to frequently queried columns:**
```prisma
model Location {
  id         String   @id @default(uuid())
  address    String
  city       String
  province   String
  postalCode String
  createdAt  DateTime @default(now())

  // Add indexes for WHERE clauses
  @@index([postalCode]) // WHERE postalCode = '...'
  @@index([city])       // WHERE city = '...'
  @@index([province])   // WHERE province = '...'
  @@index([createdAt])  // ORDER BY createdAt

  // Composite index for multi-column queries
  @@index([province, city]) // WHERE province = '...' AND city = '...'
}
```
**Create migration:**
```bash
docker compose exec api npx prisma migrate dev --name add_location_indexes
```
**Verify index usage:**
```sql
EXPLAIN ANALYZE
SELECT * FROM "Location"
WHERE "postalCode" = 'M5H 2N2';
-- Should show:
-- Index Scan using Location_postalCode_idx
-- NOT: Seq Scan on "Location"
```
---
### Query Optimization
**Use select instead of fetching all fields:**
```typescript
// Bad - fetches all fields
const users = await prisma.user.findMany();
// Returns: id, email, password, name, role, createdAt, updatedAt, ...
// Good - only needed fields
const users = await prisma.user.findMany({
  select: {
    id: true,
    email: true,
    name: true,
    role: true
  }
});
```
**Use include instead of separate queries:**
```typescript
// Bad - N+1 queries
const campaigns = await prisma.campaign.findMany();
for (const campaign of campaigns) {
  const emails = await prisma.campaignEmail.findMany({
    where: { campaignId: campaign.id }
  });
  campaign.emails = emails;
}

// Good - single query with join
const campaigns = await prisma.campaign.findMany({
  include: {
    emails: true
  }
});
```
**Paginate large result sets:**
```typescript
// Bad - fetch all
const locations = await prisma.location.findMany();
// Returns 10,000+ rows
// Good - paginate
const locations = await prisma.location.findMany({
  take: 50,        // Limit
  skip: page * 50, // Offset
  orderBy: { createdAt: 'desc' }
});
```
**Use aggregations efficiently:**
```typescript
// Bad - count all then filter
const allUsers = await prisma.user.findMany();
const activeCount = allUsers.filter(u => u.role !== 'TEMP').length;
// Good - count in database
const activeCount = await prisma.user.count({
  where: {
    role: { not: 'TEMP' }
  }
});
```
---
### Connection Pooling
**Configure pool size:**
```bash
# In .env
DATABASE_URL="postgresql://changemaker:password@v2-postgres:5432/changemaker_v2?connection_limit=20&pool_timeout=30"
# connection_limit: Max pool size (Prisma default: num_cpus × 2 + 1)
# pool_timeout: Max wait time in seconds (default: 10)
```
**Recommended pool sizes:**
- Development: 5-10 connections
- Production (1 API instance): 10-20 connections
- Production (3 API instances): 5-10 per instance
**Formula:**
```
Total connections = (API instances × pool size) + overhead
Overhead = Prisma Studio (1) + other clients (5)
Example:
3 instances × 10 pool + 6 overhead = 36 connections
Set PostgreSQL max_connections = 50 (1.4× usage)
```
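The sizing arithmetic above can be captured in a small helper; this is an illustrative sketch (the function name is not existing code), with the 1.4× headroom factor and 6-connection overhead mirroring the worked example:

```typescript
// Illustrative helper for the sizing formula above.
export const maxConnections = (
  instances: number,
  poolSize: number,
  overhead = 6
): number => Math.round((instances * poolSize + overhead) * 1.4);

// maxConnections(3, 10) -> round((30 + 6) * 1.4) = round(50.4) = 50
```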
**Monitor pool usage:**
```sql
-- View active connections
SELECT count(*), state
FROM pg_stat_activity
WHERE datname = 'changemaker_v2'
GROUP BY state;
-- Alert if nearing limit
SELECT count(*) FROM pg_stat_activity WHERE datname = 'changemaker_v2';
-- If > 80% of max_connections, increase limit or reduce pool size
```
---
### Read Replicas
For read-heavy workloads, add read replicas:
```yaml
# docker-compose.yml
# On the PRIMARY, enable WAL shipping:
v2-postgres:
  command: postgres -c wal_level=replica -c max_wal_senders=3

# Replica service (must also be initialized from the primary with
# pg_basebackup and streaming replication; this stub shows only the shape):
v2-postgres-read:
  image: postgres:16-alpine
  environment:
    POSTGRES_DB: changemaker_v2
    POSTGRES_USER: changemaker
    POSTGRES_PASSWORD: ${V2_POSTGRES_PASSWORD}
```
Configure replication in Prisma:
```typescript
// Use a read replica for read-only queries
const readPrisma = new PrismaClient({
  datasources: {
    db: { url: process.env.READ_DATABASE_URL }
  }
});

// Read from replica
const users = await readPrisma.user.findMany();

// Write to primary
const user = await prisma.user.create({ data: { ... } });
```
---
## API Optimization
### Caching Strategies
**Redis caching:**
```typescript
// Cache expensive operations
import { redis } from './config/redis';
export const getCampaigns = async () => {
  // Check cache
  const cacheKey = 'campaigns:all';
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Query database
  const campaigns = await prisma.campaign.findMany({
    include: { emails: true }
  });

  // Cache for 5 minutes
  await redis.setex(cacheKey, 300, JSON.stringify(campaigns));
  return campaigns;
};
```
**Invalidate cache on updates:**
```typescript
export const updateCampaign = async (id: string, data: any) => {
  // Update database
  const campaign = await prisma.campaign.update({
    where: { id },
    data
  });

  // Invalidate cache
  await redis.del('campaigns:all');
  await redis.del(`campaign:${id}`);
  return campaign;
};
```
**Cache patterns:**
- **Cache-aside:** Check cache, fetch from DB if miss
- **Write-through:** Update DB and cache simultaneously
- **Write-behind:** Update cache, async update DB
- **TTL:** Set expiration time (5min-1hour typical)
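The cache-aside pattern above can be factored into a reusable helper. This is a sketch: `CacheLike` models the subset of the Redis client API this guide already uses (`get`/`setex`), and the helper name is illustrative:

```typescript
// Minimal cache-aside helper: check cache, fall through to a loader on
// miss, and store the result with a TTL.
type CacheLike = {
  get(key: string): Promise<string | null>;
  setex(key: string, ttl: number, value: string): Promise<unknown>;
};

export const cached = async <T>(
  cache: CacheLike,
  key: string,
  ttlSeconds: number,
  loader: () => Promise<T>
): Promise<T> => {
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache hit
  const value = await loader();                  // miss: load from DB
  await cache.setex(key, ttlSeconds, JSON.stringify(value));
  return value;
};

// Usage (matches the campaigns example above):
// const campaigns = await cached(redis, 'campaigns:all', 300, () =>
//   prisma.campaign.findMany({ include: { emails: true } })
// );
```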
---
### Rate Limiting
**Configure rate limits:**
```typescript
// In api/src/middleware/rate-limit.ts
import rateLimit from 'express-rate-limit';
// General API
export const apiRateLimit = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 100,            // 100 requests per minute
  standardHeaders: true,
  legacyHeaders: false,
});

// Auth endpoints (stricter)
export const authRateLimit = rateLimit({
  windowMs: 60 * 1000,
  max: 10, // 10 requests per minute
  message: 'Too many login attempts. Please try again later.'
});

// Public endpoints (more lenient)
export const publicRateLimit = rateLimit({
  windowMs: 60 * 1000,
  max: 200 // 200 requests per minute
});
```
**Apply to routes:**
```typescript
// In server.ts
app.use('/api/auth', authRateLimit);
app.use('/api', apiRateLimit);
app.use('/public', publicRateLimit);
```
---
### Pagination
**Implement cursor-based pagination:**
```typescript
// api/src/modules/users/users.controller.ts
export const getUsers = async (req: Request, res: Response) => {
  const { cursor, limit = 50 } = req.query;

  const users = await prisma.user.findMany({
    take: Number(limit) + 1, // Fetch one extra to check if more
    skip: cursor ? 1 : 0,    // Skip the cursor row itself
    cursor: cursor ? { id: cursor as string } : undefined,
    orderBy: { createdAt: 'desc' }
  });

  const hasMore = users.length > Number(limit);
  if (hasMore) users.pop(); // Remove extra

  res.json({
    data: users,
    cursor: hasMore ? users[users.length - 1].id : null,
    hasMore
  });
};
```
**Frontend pagination:**
```typescript
// admin/src/pages/UsersPage.tsx
const [users, setUsers] = useState([]);
const [cursor, setCursor] = useState<string | null>(null);
const [hasMore, setHasMore] = useState(true);

const loadMore = async () => {
  const response = await api.get('/api/users', {
    params: { cursor, limit: 50 }
  });
  // Functional update avoids stale state if loadMore fires twice
  setUsers(prev => [...prev, ...response.data.data]);
  setCursor(response.data.cursor);
  setHasMore(response.data.hasMore);
};
```
---
### Response Compression
Enable gzip compression:
```typescript
// In server.ts
import compression from 'compression';
app.use(compression({
  level: 6,       // Compression level (0-9); 6 balances speed and ratio
  threshold: 1024 // Only compress responses > 1KB
}));
```
---
## Frontend Optimization
### Code Splitting
**Route-based splitting:**
```typescript
// admin/src/App.tsx
import { lazy, Suspense } from 'react';
// Lazy load pages
const UsersPage = lazy(() => import('./pages/UsersPage'));
const CampaignsPage = lazy(() => import('./pages/CampaignsPage'));
const LocationsPage = lazy(() => import('./pages/LocationsPage'));
function App() {
  return (
    <Suspense fallback={<Spin />}>
      <Routes>
        <Route path="/app/users" element={<UsersPage />} />
        <Route path="/app/campaigns" element={<CampaignsPage />} />
        <Route path="/app/locations" element={<LocationsPage />} />
      </Routes>
    </Suspense>
  );
}
```
**Component splitting:**
```typescript
// Lazy load heavy components
const MapView = lazy(() => import('./components/MapView'));
function Page() {
  return (
    <Suspense fallback={<Spin />}>
      <MapView />
    </Suspense>
  );
}
```
---
### Lazy Loading
**Images:**
```typescript
<img
  src={imageUrl}
  loading="lazy" // Native lazy loading
  alt="Description"
/>
```
**Large libraries:**
```typescript
// Don't import large libs at top level
import dayjs from 'dayjs'; // ❌ Always loads

// Import only when needed
const formatDate = async (date: Date) => {
  const dayjs = (await import('dayjs')).default; // ✅ Loads on demand
  return dayjs(date).format('YYYY-MM-DD');
};
```
---
### Bundle Optimization
**Analyze bundle size:**
```bash
cd admin
npm run build
npx vite-bundle-visualizer
```
**Tree shaking:**
```typescript
// antd v5 ships ES modules, so named imports tree-shake in Vite builds:
import { Button } from 'antd'; // ✅ Fine with modern antd + Vite
// On older toolchains without tree shaking, import the module path directly:
import Button from 'antd/es/button';
```
**Configure Vite:**
```typescript
// admin/vite.config.ts
export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ['react', 'react-dom', 'react-router-dom'],
          antd: ['antd'],
          maps: ['leaflet', 'react-leaflet']
        }
      }
    },
    chunkSizeWarningLimit: 1000
  }
});
```
---
### Memoization
**React.memo for expensive components:**
```typescript
import { memo } from 'react';
const LocationMarker = memo(({ location }) => {
  return (
    <CircleMarker
      center={[location.latitude, location.longitude]}
      radius={8}
    />
  );
}, (prev, next) => {
  // Skip re-render while the id is unchanged (assumes location objects
  // are immutable; replace the object to move a marker)
  return prev.location.id === next.location.id;
});
```
**useMemo for expensive calculations:**
```typescript
import { useMemo } from 'react';
function MapView({ locations }) {
  // Only recalculate when locations change
  const bounds = useMemo(() => {
    if (!locations.length) return null;
    const coords = locations.map(l => [l.latitude, l.longitude]);
    return L.latLngBounds(coords);
  }, [locations]);

  return <MapContainer bounds={bounds} />;
}
```
**useCallback for stable functions:**
```typescript
import { useCallback } from 'react';
function UsersTable({ data }) {
  // Stable reference so Table isn't re-rendered on every parent render
  const handleRowClick = useCallback((row) => {
    console.log('Clicked:', row.id);
  }, []);

  return <Table data={data} onRowClick={handleRowClick} />;
}
```
---
## Docker Optimization
### Resource Limits
```yaml
# docker-compose.yml
api:
  deploy:
    resources:
      limits:
        cpus: '2.0' # Max 2 CPU cores
        memory: 4G  # Max 4GB RAM
      reservations:
        cpus: '0.5' # Reserve 0.5 cores
        memory: 1G  # Reserve 1GB
```
**Monitor resource usage:**
```bash
docker stats
# Shows:
# CONTAINER CPU % MEM USAGE / LIMIT MEM %
# api 15% 1.2GB / 4GB 30%
```
---
### Multi-Stage Builds
**Optimize Dockerfile:**
```dockerfile
# Build stage (needs devDependencies to compile)
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage (smaller: production deps only)
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
```
**Benefits:**
- Smaller final image (no build tools)
- Faster deployment
- Better security (fewer packages)
---
### Volume Performance
**Use cached volumes for dependencies:**
```yaml
api:
  volumes:
    - ./api:/app
    - /app/node_modules   # Don't bind-mount node_modules
    - api-build:/app/dist # Named volume for build output
```
**For macOS/Windows:**
```yaml
api:
  volumes:
    - ./api:/app:cached # Cached mode for better performance
```
---
## Nginx Optimization
### Gzip Compression
```nginx
# nginx/nginx.conf
http {
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        image/svg+xml;
}
```
---
### Caching
**Static assets:**
```nginx
# nginx/conf.d/default.conf
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
**API responses:**
```nginx
# Requires a matching proxy_cache_path ... keys_zone=api_cache:10m
# directive in the http block
location /api/ {
    proxy_cache api_cache;
    proxy_cache_valid 200 5m;               # Cache 200 responses for 5 minutes
    proxy_cache_bypass $http_cache_control; # Honor Cache-Control header
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://api:4000;
}
```
---
### Keep-Alive
```nginx
# nginx/nginx.conf
http {
    keepalive_timeout 65;
    keepalive_requests 100;

    upstream api {
        server api:4000;
        keepalive 32; # Keep up to 32 idle connections to the backend
    }

    # Upstream keep-alive also requires, in the proxying location:
    #   proxy_http_version 1.1;
    #   proxy_set_header Connection "";
}
```
---
## Email Queue Optimization
### Worker Concurrency
**Increase parallel processing:**
```typescript
// api/src/services/email-queue.service.ts
const worker = new Worker('email-queue', emailProcessor, {
  connection: redis,
  concurrency: 5, // Process 5 emails simultaneously
  limiter: {
    max: 50,      // Max 50 jobs per second
    duration: 1000
  }
});
```
**Recommended concurrency:**
- Development: 1-2
- Production (low volume): 3-5
- Production (high volume): 10-20
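One way to apply these recommendations without code changes is to read the value from the environment. This is a sketch: `EMAIL_WORKER_CONCURRENCY` and the default of 3 are assumptions, not existing configuration:

```typescript
// Hypothetical helper: pick worker concurrency from the environment and
// clamp it to the 1-20 range recommended above.
export const workerConcurrency = (
  env: Record<string, string | undefined> = process.env
): number => {
  const raw = Number(env.EMAIL_WORKER_CONCURRENCY ?? 3);
  if (!Number.isFinite(raw)) return 3; // bad input: fall back to default
  return Math.min(20, Math.max(1, Math.trunc(raw)));
};

// Then: new Worker('email-queue', emailProcessor, {
//   connection: redis, concurrency: workerConcurrency() });
```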
---
### Batch Processing
**Process emails in batches:**
```typescript
export const sendBulkEmails = async (emails: Email[]) => {
  const batchSize = 100;

  for (let i = 0; i < emails.length; i += batchSize) {
    const batch = emails.slice(i, i + batchSize);

    // Add batch to queue
    await emailQueue.addBulk(
      batch.map(email => ({
        name: 'send-email',
        data: email
      }))
    );
  }
};
```
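The performance checklist at the end of this guide also calls for failed jobs to retry with backoff; BullMQ supports this through per-job options. The numbers below are illustrative defaults, not existing configuration:

```typescript
// Default retry policy for email jobs (attempts/backoff/removeOnComplete
// are standard BullMQ job options; the values are illustrative).
export const emailJobOptions = {
  attempts: 3,                                            // initial try + 2 retries
  backoff: { type: 'exponential' as const, delay: 5000 }, // 5s, then 10s
  removeOnComplete: 1000,                                 // cap completed-job history
};

// Applied when creating the queue:
// const emailQueue = new Queue('email-queue', {
//   connection: redis,
//   defaultJobOptions: emailJobOptions,
// });
```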
---
### Rate Limiting
**Respect SMTP provider limits:**
```typescript
const worker = new Worker('email-queue', emailProcessor, {
  limiter: {
    // Gmail: 500 emails/day (free), 2000/day (workspace)
    max: 100,             // 100 emails per hour
    duration: 3600 * 1000 // 1 hour
  }
});
```
---
## Monitoring Performance
### Prometheus Metrics
**Track response times:**
```typescript
import { Histogram } from 'prom-client';
const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.01, 0.05, 0.1, 0.5, 1, 5]
});

// Middleware to track duration
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000;
    httpRequestDuration
      .labels(req.method, req.route?.path || req.path, res.statusCode.toString())
      .observe(duration);
  });
  next();
});
```
**Track query counts:**
```typescript
import { Counter } from 'prom-client';

const dbQueries = new Counter({
  name: 'cm_database_queries_total',
  help: 'Total database queries',
  labelNames: ['model', 'operation']
});

// In Prisma middleware
prisma.$use(async (params, next) => {
  dbQueries.labels(params.model ?? 'raw', params.action).inc();
  return next(params);
});
```
---
### Grafana Dashboards
**Create performance dashboard:**
```promql
# API response time (p95)
histogram_quantile(0.95,
rate(http_request_duration_seconds_bucket[5m])
)
# Database query rate
rate(cm_database_queries_total[5m])
# Cache hit rate
rate(cm_cache_hits_total[5m]) /
(rate(cm_cache_hits_total[5m]) + rate(cm_cache_misses_total[5m]))
```
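The cache hit rate query relies on `cm_cache_hits_total` and `cm_cache_misses_total`, which the API has to emit. A minimal prom-client sketch; the wiring into the Redis lookup is an assumption, not existing code:

```typescript
import { Counter } from 'prom-client';

// Counters backing the cm_cache_* series queried in Grafana
const cacheHits = new Counter({
  name: 'cm_cache_hits_total',
  help: 'Total cache hits'
});
const cacheMisses = new Counter({
  name: 'cm_cache_misses_total',
  help: 'Total cache misses'
});

// Increment alongside the Redis lookup, e.g.:
// const cached = await redis.get(cacheKey);
// cached ? cacheHits.inc() : cacheMisses.inc();
```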
---
### Slow Query Log
**Enable in PostgreSQL:**
```yaml
# docker-compose.yml
v2-postgres:
  command: postgres -c log_min_duration_statement=100
  # Logs queries taking > 100ms
```
**View slow queries:**
```bash
docker compose logs v2-postgres | grep "duration:"
# Output:
# LOG: duration: 523.456 ms statement: SELECT * FROM "Location" WHERE ...
```
---
## Load Testing
### k6 Load Testing
**Install k6:**
```bash
# macOS
brew install k6
# Linux (Debian/Ubuntu; apt-key is deprecated on newer releases,
# see the k6 docs for the keyring-based install)
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
```
**Create test script:**
```javascript
// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';
export const options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp up to 100 users
    { duration: '5m', target: 100 }, // Stay at 100 users
    { duration: '2m', target: 0 },   // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests < 500ms
  },
};

export default function () {
  // Test login (JSON body; adjust if the API expects form data)
  const loginRes = http.post(
    'http://localhost:4000/api/auth/login',
    JSON.stringify({ email: 'admin@example.com', password: 'Admin123!' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(loginRes, { 'login succeeded': (r) => r.status === 200 });
  const token = loginRes.json('accessToken');

  // Test API endpoints
  const headers = { Authorization: `Bearer ${token}` };

  const campaignsRes = http.get('http://localhost:4000/api/campaigns', { headers });
  check(campaignsRes, { 'campaigns loaded': (r) => r.status === 200 });

  const locationsRes = http.get('http://localhost:4000/api/map/locations', { headers });
  check(locationsRes, { 'locations loaded': (r) => r.status === 200 });

  sleep(1);
}
```
**Run test:**
```bash
k6 run load-test.js
```
**Interpret results:**
```
✓ login succeeded
✓ campaigns loaded
✓ locations loaded
checks.........................: 100.00%
data_received..................: 8.2 MB
data_sent......................: 1.1 MB
http_req_duration..............: avg=145ms min=12ms med=89ms max=2.1s p(95)=423ms
http_reqs......................: 12450
vus............................: 100
vus_max........................: 100
```
---
### Apache Bench
**Quick load test:**
```bash
# 1000 requests, 10 concurrent
ab -n 1000 -c 10 http://localhost:4000/api/health
# With authentication
ab -n 1000 -c 10 -H "Authorization: Bearer TOKEN" http://localhost:4000/api/campaigns
```
---
## Performance Checklist
### Database
- [ ] Indexes on frequently queried columns
- [ ] Composite indexes for multi-column queries
- [ ] Connection pool sized appropriately
- [ ] Slow query log enabled
- [ ] VACUUM run regularly (auto by default)
- [ ] Read replicas for read-heavy loads
### API
- [ ] Redis caching for expensive operations
- [ ] Rate limiting on all endpoints
- [ ] Pagination on list endpoints
- [ ] Response compression enabled
- [ ] N+1 queries eliminated
- [ ] Select only needed fields
### Frontend
- [ ] Route-based code splitting
- [ ] Lazy loading for heavy components
- [ ] Images optimized and lazy-loaded
- [ ] Bundle size < 500KB (gzipped)
- [ ] React.memo for expensive components
- [ ] useCallback/useMemo for stable references
### Docker
- [ ] Multi-stage builds
- [ ] Resource limits set
- [ ] Health checks configured
- [ ] Volumes optimized
- [ ] Images use Alpine base
### Nginx
- [ ] Gzip compression enabled
- [ ] Static asset caching (1 year)
- [ ] Keep-alive connections
- [ ] Worker processes = CPU cores
- [ ] Access logs rotated
### Email Queue
- [ ] Worker concurrency optimized
- [ ] Rate limiting respects SMTP limits
- [ ] Batch processing for bulk sends
- [ ] Failed jobs retry with backoff
- [ ] Queue size monitored
### Monitoring
- [ ] Prometheus metrics collected
- [ ] Grafana dashboards created
- [ ] Alerts configured
- [ ] Slow queries logged
- [ ] Resource usage tracked
---
## Related Documentation
### Performance Documentation
- [Performance Optimization](performance-optimization.md) - This guide
- [Monitoring Issues](monitoring-issues.md) - Observability troubleshooting
- [Database Issues](database-issues.md) - Database troubleshooting
### Other Guides
- [Architecture Overview](../technical/architecture.md) - System design
- [Deployment Guide](../deployment/production.md) - Production setup
- [Monitoring Guide](../deployment/monitoring.md) - Monitoring setup
### External Resources
- [PostgreSQL Performance Tips](https://wiki.postgresql.org/wiki/Performance_Optimization)
- [Prisma Performance Guide](https://www.prisma.io/docs/guides/performance-and-optimization)
- [React Performance](https://react.dev/learn/render-and-commit)
- [Vite Performance](https://vitejs.dev/guide/performance.html)
---
**Last Updated:** February 2026
**Version:** V2.0
**Status:** Complete