Merge changemaker-control-panel into v2 monorepo

Absorbs the separate control-panel git repo as a subdirectory.
Instances and backups directories excluded via .gitignore.

Bunker Admin
bunker-admin 2026-02-21 11:51:45 -07:00
parent 7352815e57
commit 2fa50b001c
80 changed files with 16513 additions and 1 deletion

.gitignore

@@ -49,3 +49,7 @@ docker-compose.override.yml
/logs/
/backups/
.upgrade.lock
# Control Panel runtime data (managed deployments + backups)
/changemaker-control-panel/instances/
/changemaker-control-panel/backups/

@@ -1 +0,0 @@
Subproject commit d4cd2d2cd5d2dd33d49c7d6feaed975741e0925a


@@ -0,0 +1,59 @@
# ============================================================
# Changemaker Control Panel — Environment Variables
# ============================================================
# Server
NODE_ENV=development
PORT=5000
ADMIN_PORT=5100
# Database
DATABASE_URL=postgresql://ccp:ccp_secret@localhost:5480/ccp
CCP_POSTGRES_PASSWORD=ccp_secret
# Redis
REDIS_URL=redis://:ccp_redis_secret@localhost:6399
REDIS_PASSWORD=ccp_redis_secret
# JWT Secrets (generate with: openssl rand -hex 32)
JWT_ACCESS_SECRET=change-me-to-a-random-64-char-hex-string-for-access-tokens!!
JWT_REFRESH_SECRET=change-me-to-a-random-64-char-hex-string-for-refresh-tokens!
# Encryption key for secrets at rest (openssl rand -hex 32)
ENCRYPTION_KEY=change-me-to-a-random-64-char-hex-string-for-encryption-keys!
# Initial Admin Account
INITIAL_ADMIN_EMAIL=admin@example.com
INITIAL_ADMIN_PASSWORD=ChangeMe2025!!
# CORS
CORS_ORIGINS=http://localhost:5100
# Instance Management
# Absolute paths — auto-resolved by setup.sh (run ./setup.sh after cloning)
INSTANCES_BASE_PATH=/path/to/changemaker-control-panel/instances
CML_SOURCE_PATH=/path/to/changemaker.lite
CML_GIT_REPO=https://github.com/your-org/changemaker.lite.git
CML_GIT_BRANCH=v2
# Port Allocation Ranges
PORT_RANGE_API_START=14000
PORT_RANGE_API_END=14999
PORT_RANGE_ADMIN_START=13000
PORT_RANGE_ADMIN_END=13999
PORT_RANGE_POSTGRES_START=15400
PORT_RANGE_POSTGRES_END=15499
PORT_RANGE_NGINX_START=10000
PORT_RANGE_NGINX_END=10999
# Pangolin (optional — for tunnel management)
PANGOLIN_API_URL=
PANGOLIN_API_KEY=
PANGOLIN_ORG_ID=
# Health Checks (0 to disable automated checks)
HEALTH_CHECK_INTERVAL_MS=300000
# Backups (auto-resolved by setup.sh)
BACKUP_STORAGE_PATH=/path/to/changemaker-control-panel/backups
BACKUP_RETENTION_DAYS=30

changemaker-control-panel/.gitignore

@@ -0,0 +1,13 @@
node_modules/
dist/
.env
*.log
.DS_Store
.vscode/
*.tsbuildinfo
# Instance data (managed deployments)
/instances/
# Backup storage
/backups/


@@ -0,0 +1,605 @@
# Changemaker Control Panel (CCP) — Development Plan
A multi-tenant management system for provisioning, monitoring, and operating multiple Changemaker Lite instances from a single dashboard.
---
## Vision
CCP replaces manual `git clone` / `docker compose up` / `.env` editing with a web UI that can:
- One-click provision a new Changemaker Lite instance (database, containers, config, tunnel)
- Monitor instance health across the fleet
- Start/stop/restart instances and individual services
- Back up and restore instance data
- Maintain a full audit trail of operator actions
- Manage Pangolin tunnels for production exposure
---
## Architecture
```
┌─────────────────────────┐
│ CCP Admin GUI (5100) │ React + Vite + Ant Design
│ Dark theme, SPA │ Zustand auth store
└────────────┬─────────────┘
│ /api/* proxy
┌────────────▼─────────────┐
│ CCP API (5000) │ Express + TypeScript
│ JWT auth, RBAC │ Prisma ORM → PostgreSQL
│ Docker socket access │ Winston logger
└────────────┬─────────────┘
┌───────────┬───────────┼───────────┬──────────┐
▼ ▼ ▼ ▼ ▼
ccp-postgres ccp-redis Docker Socket /opt/ccp/ /var/backups/
(port 5480) (port 6399) (.sock) instances ccp-instances
```
### Stack
| Layer | Technology | Notes |
|-------|-----------|-------|
| API | Express 4, TypeScript 5, Node 20 | `express-async-errors` for async route handling |
| ORM | Prisma 6 + PostgreSQL 16 | 10 models, mapped table names |
| Auth | JWT (jsonwebtoken) + bcryptjs | 15min access / 7d refresh, atomic rotation |
| Encryption | AES-256-GCM (Node crypto) | Secrets at rest in `encrypted_secrets` column |
| Frontend | React 19, Vite 6, Ant Design 5 | Dark theme, Zustand state, axios interceptors |
| Docker | Docker CLI + socket API | compose up/down/ps/exec/logs via shell + HTTP socket |
| Templates | Handlebars | .env, docker-compose.yml, nginx configs rendered per-instance |
| Logging | Winston | JSON in production, colorized in development |
| Config | Zod schema validation | Fails fast on startup with clear error messages |
### Key Design Decisions
1. **Docker CLI over Docker SDK** — Shell out to `docker compose` commands rather than using dockerode. Simpler, matches what operators would run manually, and `docker compose` handles all the orchestration logic.
2. **Shared PostgreSQL** — All CCP data in one database; each CML instance gets its own isolated PostgreSQL container with unique ports and passwords.
3. **Port Range Allocation** — Four non-overlapping port ranges prevent conflicts:
- API: 14000-14999
- Admin: 13000-13999
- PostgreSQL: 15400-15499
- Nginx: 10000-10999
4. **Async Provisioning** — Instance creation returns immediately; provisioning runs fire-and-forget with progress tracked via `status`/`statusMessage` fields. Frontend polls for updates.
5. **Template-Based Config** — Handlebars templates render per-instance docker-compose.yml, .env, nginx configs, and Pangolin resources. This avoids complex string manipulation and keeps configs readable.
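The port-range strategy in decision 3 can be sketched as a scan for the first unallocated port. This is an illustrative sketch, not the actual `port-allocator.ts` API; the real service also persists a `PortAllocation` record.

```typescript
// Hypothetical first-free-port allocation within a range. `taken` stands
// in for the set of ports already recorded in the PortAllocation table.
function allocatePort(taken: Set<number>, start: number, end: number): number {
  for (let port = start; port <= end; port++) {
    if (!taken.has(port)) return port; // first gap in the range wins
  }
  throw new Error(`No free ports in range ${start}-${end}`);
}

// Example: allocate an API port from the 14000-14999 range.
const registry = new Set([14000, 14001, 14003]);
const apiPort = allocatePort(registry, 14000, 14999); // 14002
```

Because each service type has its own non-overlapping range, a port number alone identifies which service it belongs to.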
---
## Database Schema (Prisma)
```
CcpUser ──< CcpRefreshToken
├──< AuditLog
Instance ──< PortAllocation
├──< HealthCheck
├──< Backup
└──< AuditLog
CcpSetting (key-value)
```
### Models
| Model | Purpose | Key Fields |
|-------|---------|------------|
| **CcpUser** | Control panel operators | email, password (bcrypt), role (SUPER_ADMIN/OPERATOR/VIEWER) |
| **CcpRefreshToken** | JWT refresh token storage | token (SHA-256 hash), expiresAt |
| **Instance** | Managed CML instance | slug, domain, basePath, composeProject, status, portConfig (JSON), encryptedSecrets, feature flags |
| **PortAllocation** | Port registry | port (unique), service name, instanceId |
| **HealthCheck** | Periodic health snapshots | status (HEALTHY/DEGRADED/UNHEALTHY/UNKNOWN), serviceStatus (JSON), totalServices, healthyServices, responseTimeMs |
| **Backup** | Backup records | status (PENDING→IN_PROGRESS→COMPLETED/FAILED), archivePath, sizeBytes, manifest (JSON) |
| **AuditLog** | Action trail | action (18 types), userId, instanceId, details (JSON), ipAddress |
| **CcpSetting** | Global key-value config | key (PK), value (JSON) |
### Audit Actions
```
INSTANCE_CREATE, INSTANCE_UPDATE, INSTANCE_DELETE,
INSTANCE_START, INSTANCE_STOP, INSTANCE_RESTART, INSTANCE_UPGRADE,
BACKUP_CREATE, BACKUP_DELETE,
PANGOLIN_SETUP, PANGOLIN_SYNC,
USER_LOGIN, USER_CREATE, USER_UPDATE, USER_DELETE,
SETTINGS_UPDATE
```
---
## API Endpoints
### Authentication
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| POST | `/api/auth/login` | Public | Login, returns JWT pair |
| POST | `/api/auth/refresh` | Public | Rotate refresh token |
| POST | `/api/auth/logout` | Public | Revoke refresh token |
| GET | `/api/auth/me` | Authenticated | Current user profile |
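The refresh flow above leans on the `CcpRefreshToken` model storing only a SHA-256 hash of the token, so a database leak does not yield usable tokens. A minimal sketch of that idea (names are illustrative, not the actual `auth.service.ts` API):

```typescript
import { createHash, randomBytes } from "node:crypto";

// Issue a refresh token: the raw value goes to the client; only its
// SHA-256 hash is persisted.
function issueRefreshToken(): { raw: string; stored: string } {
  const raw = randomBytes(32).toString("hex");
  const stored = createHash("sha256").update(raw).digest("hex");
  return { raw, stored };
}

// On POST /api/auth/refresh, hash the presented token and look up the
// hash; a matching row is then revoked and replaced (atomic rotation).
function hashForLookup(presented: string): string {
  return createHash("sha256").update(presented).digest("hex");
}
```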
### Instances — CRUD
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/instances` | Authenticated | List all instances |
| GET | `/api/instances/:id` | Authenticated | Instance detail (includes health + backups) |
| POST | `/api/instances` | SUPER_ADMIN, OPERATOR | Create instance (triggers async provisioning) |
| PUT | `/api/instances/:id` | SUPER_ADMIN, OPERATOR | Update instance config |
| DELETE | `/api/instances/:id` | SUPER_ADMIN | Delete instance (stops containers, removes files) |
| GET | `/api/instances/:id/secrets` | SUPER_ADMIN | Decrypt and return instance secrets |
### Instances — Lifecycle
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| POST | `/api/instances/:id/provision` | SUPER_ADMIN, OPERATOR | Retry provisioning |
| POST | `/api/instances/:id/start` | SUPER_ADMIN, OPERATOR | Start all containers |
| POST | `/api/instances/:id/stop` | SUPER_ADMIN, OPERATOR | Stop all containers |
| POST | `/api/instances/:id/restart` | SUPER_ADMIN, OPERATOR | Restart (optionally `?service=api`) |
### Instances — Services & Logs
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/instances/:id/services` | Authenticated | Container status via `docker compose ps` |
| GET | `/api/instances/:id/logs` | Authenticated | Logs (`?service=api&tail=200&since=1h`) |
### Instances — Health
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| POST | `/api/instances/:id/health-check` | SUPER_ADMIN, OPERATOR | Trigger manual health check |
| GET | `/api/instances/:id/health-history` | Authenticated | Paginated health check history |
### Instances — Backups
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| POST | `/api/instances/:id/backup` | SUPER_ADMIN, OPERATOR | Create backup (async) |
| GET | `/api/instances/:id/backups` | Authenticated | List instance backups |
### Backups (cross-instance)
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/backups` | Authenticated | List all backups (`?instanceId=...&page=1&limit=50`) |
| DELETE | `/api/backups/:id` | SUPER_ADMIN | Delete backup (file + record) |
| GET | `/api/backups/:id/download` | SUPER_ADMIN | Stream backup archive |
### Health (CCP-level)
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/health` | Public | CCP API health check |
| GET | `/api/health/overview` | Authenticated | All instances with latest health |
### Audit
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/audit` | Authenticated | Filtered, paginated audit log (`?action=...&instanceId=...&userId=...&from=...&to=...`) |
### Settings
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| GET | `/api/settings` | Authenticated | All settings as key-value map |
| PUT | `/api/settings/:key` | SUPER_ADMIN | Upsert a setting value |
---
## Frontend Pages
| Route | Page | Description |
|-------|------|-------------|
| `/login` | LoginPage | Email/password form |
| `/app` | DashboardPage | Stats (total, running, healthy, degraded, stopped, errors) + instance cards |
| `/app/instances` | InstanceListPage | Table view with search/filter |
| `/app/instances/new` | CreateWizardPage | 5-step wizard (info, features, email, tunnel, review) |
| `/app/instances/:id` | InstanceDetailPage | Tabs: Overview, Services, Logs, Backups, Tunnel |
| `/app/backups` | BackupsPage | Cross-instance backup list with stats |
| `/app/audit` | AuditLogPage | Filterable audit log + detail drawer |
| `/app/settings` | SettingsPage | Port ranges, Pangolin config, defaults |
### Sidebar Navigation
1. Dashboard (home)
2. Instances (list)
3. Backups (cross-instance)
4. Audit Log (activity trail)
5. Settings (CCP config)
---
## Provisioning Flow
When `POST /api/instances` is called, the system:
```
1. Validate uniqueness (slug + domain)
2. Allocate 4 ports from ranges
3. Generate 14 secrets (passwords, JWT keys, encryption keys)
4. Create Instance record (status: PROVISIONING)
5. [async] Create directory: /opt/ccp/instances/{slug}/changemaker.lite/
6. [async] rsync CML source (excluding node_modules, .git, .env, .claude)
7. [async] Decrypt secrets → build Handlebars context
8. [async] Render 7 templates: docker-compose.yml, .env, nginx configs, Pangolin, Prometheus
9. [async] Copy static files (nginx.conf)
10. [async] docker compose pull (non-fatal if images cached)
11. [async] docker compose build
12. [async] Start infrastructure: v2-postgres + redis-changemaker
13. [async] Wait for healthy (Docker healthcheck polling)
14. [async] Start API → run prisma migrate deploy → prisma db seed
15. [async] docker compose up (all services)
16. [async] Wait for HTTP health (localhost:{api_port}/api/health)
17. [async] Set status: RUNNING
```
Frontend polls `GET /api/instances/:id` every 3 seconds during provisioning to show progress.
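That polling loop can be sketched as follows. The fetcher is injected so the status source here is an assumption, not the real axios client in `lib/api.ts`:

```typescript
// Sketch of the provisioning poll: keep asking until the instance leaves
// PROVISIONING. `fetchStatus` stands in for GET /api/instances/:id.
type InstanceStatus = "PROVISIONING" | "RUNNING" | "ERROR";

async function pollUntilSettled(
  fetchStatus: () => Promise<InstanceStatus>,
  intervalMs = 3000,
  maxAttempts = 200,
): Promise<InstanceStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus();
    if (status !== "PROVISIONING") return status; // RUNNING or ERROR ends the poll
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Provisioning did not settle in time");
}
```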
---
## Health Check System
### How It Works
1. **Scheduler** starts on API boot (default: every 5 minutes, configurable via `HEALTH_CHECK_INTERVAL_MS`)
2. For each RUNNING instance, runs `docker compose ps --format json`
3. Parses container states and health check results
4. Determines overall status:
- **HEALTHY** — all containers running, all health checks passing
- **DEGRADED** — some containers running but not all, or some health checks failing
- **UNHEALTHY** — majority of containers down or failing health checks
- **UNKNOWN** — no containers found or compose project doesn't exist
5. Stores `HealthCheck` record with per-service status JSON, response time
6. Updates `instance.lastHealthCheck` timestamp
### Manual Trigger
`POST /api/instances/:id/health-check` runs an immediate check (SUPER_ADMIN/OPERATOR only).
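The status rule in step 4 reduces to a pure function over per-container state. Field names here are illustrative, not the exact `health.service.ts` types:

```typescript
// Derive the overall instance status from container states, mirroring
// the HEALTHY/DEGRADED/UNHEALTHY/UNKNOWN rules described above.
type ServiceState = { running: boolean; healthy: boolean };
type OverallStatus = "HEALTHY" | "DEGRADED" | "UNHEALTHY" | "UNKNOWN";

function overallStatus(services: ServiceState[]): OverallStatus {
  if (services.length === 0) return "UNKNOWN"; // no containers found
  const up = services.filter((s) => s.running && s.healthy).length;
  if (up === services.length) return "HEALTHY";
  if (up * 2 >= services.length) return "DEGRADED"; // half or more still good
  return "UNHEALTHY"; // majority down or failing checks
}
```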
---
## Backup System
### What Gets Backed Up
1. **PostgreSQL dump** — `pg_dump` run inside the instance's v2-postgres container
2. **Uploads archive** — tar.gz of the uploads directory (if it exists)
3. **Manifest** — JSON file with file names, sizes, SHA-256 hashes
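The manifest lets a restore verify integrity before touching anything. A sketch of building it (the real `backup.service.ts` hashes files on disk; in-memory buffers are used here to keep the example self-contained):

```typescript
import { createHash } from "node:crypto";

// Record each backup artifact with its size and SHA-256 digest.
function buildManifest(files: Record<string, Buffer>) {
  return Object.entries(files).map(([name, data]) => ({
    name,
    sizeBytes: data.length,
    sha256: createHash("sha256").update(data).digest("hex"),
  }));
}

// Example inventory for a typical backup.
const manifest = buildManifest({
  "v2-postgres.sql.gz": Buffer.from("-- dump --"),
  "uploads.tar.gz": Buffer.from(""),
});
```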
### Backup Flow
```
1. Validate instance is RUNNING
2. Create Backup record (status: PENDING)
3. [async] Set status: IN_PROGRESS
4. [async] mkdir /var/backups/ccp-instances/{slug}/{timestamp}/
5. [async] docker compose exec v2-postgres pg_dump → v2-postgres.sql → gzip
6. [async] tar -czf uploads.tar.gz (if uploads/ exists)
7. [async] Write manifest.json with file inventory + SHA-256 hashes
8. [async] tar -czf final archive → /var/backups/ccp-instances/{slug}/backup-{slug}-{timestamp}.tar.gz
9. [async] Cleanup temp directory
10. [async] Update Backup record (COMPLETED, archivePath, sizeBytes, manifest)
11. [async] Write audit log
```
### Retention
`cleanupOldBackups(retentionDays)` deletes archives + records older than the configured retention period (default: 30 days).
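The retention rule amounts to a cutoff comparison. This sketch mirrors `cleanupOldBackups(retentionDays)` in spirit only; the actual service also removes archives from disk and deletes the DB records:

```typescript
// Select backups older than the retention window. `now` is injectable
// so the cutoff is testable.
function expiredBackups<T extends { createdAt: Date }>(
  backups: T[],
  retentionDays: number,
  now: Date = new Date(),
): T[] {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return backups.filter((b) => b.createdAt.getTime() < cutoff);
}
```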
---
## Phased Implementation
### Phase 1: Foundation (COMPLETE)
**Goal:** Skeleton that boots — database, auth, project structure.
**Delivered:**
- Prisma schema with all 10 models (User, Instance, HealthCheck, Backup, AuditLog, etc.)
- JWT authentication (access + refresh tokens with atomic rotation)
- Role-based access control (SUPER_ADMIN, OPERATOR, VIEWER)
- Zod-validated environment configuration
- AES-256-GCM encryption for instance secrets
- React admin shell with dark theme, sidebar navigation, protected routes
- Zustand auth store with token persistence + refresh interceptor
- Login page, Dashboard placeholder, Settings page
- Docker Compose orchestration (PostgreSQL, Redis, API, Admin)
- Placeholder pages wired for Audit Log, Backups
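The AES-256-GCM encryption delivered in this phase can be sketched with Node's built-in crypto. The `iv | authTag | ciphertext` hex layout is an assumption for illustration; the real `utils/encryption.ts` format may differ:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a secret for at-rest storage: random 96-bit nonce, GCM auth tag
// bundled with the ciphertext so decryption can verify integrity.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("hex");
}

function decrypt(payload: string, key: Buffer): string {
  const buf = Buffer.from(payload, "hex");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28); // GCM tag is 16 bytes
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The key is the 32-byte value behind `ENCRYPTION_KEY`; tampering with the stored payload makes `decipher.final()` throw.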
### Phase 2: Docker Lifecycle (COMPLETE)
**Goal:** Create and manage running CML instances.
**Delivered:**
- Instance CRUD with slug/domain uniqueness validation
- Port allocation across 4 ranges (API, Admin, PostgreSQL, Nginx)
- Secret generation (14 secrets: postgres, redis, JWT, encryption, admin passwords)
- Handlebars template engine rendering 7 config files per instance
- 13-step async provisioning (copy source → render config → pull → build → migrate → seed → start)
- Lifecycle operations: start, stop, restart (whole stack or individual service)
- Container status via `docker compose ps --format json`
- Log viewing via `docker compose logs` with service/tail/since filters
- 5-step Create Instance wizard (basic info → features → email → tunnel → review)
- Instance detail page with tabs (Overview, Services, Logs, Backups, Tunnel)
- Service health grid with per-container restart/log-view actions
- Provisioning progress indicator (polls every 3s)
- Audit logging on all lifecycle operations (7 action types)
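Secret generation for a new instance is random-byte generation in a couple of shapes. A hedged sketch (the real `secret-generator.ts` produces 14 named secrets and may use different lengths or encodings):

```typescript
import { randomBytes } from "node:crypto";

// Hex secrets suit JWT/encryption keys; base64url suits passwords that
// must survive shell and .env quoting without special characters.
const hexSecret = (bytes = 32) => randomBytes(bytes).toString("hex");
const password = (bytes = 18) => randomBytes(bytes).toString("base64url");

// Example per-instance bundle (illustrative names only).
const secrets = {
  postgresPassword: password(),
  redisPassword: password(),
  jwtAccessSecret: hexSecret(),
  jwtRefreshSecret: hexSecret(),
  encryptionKey: hexSecret(),
};
```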
### Phase 3: Observability + Backups (COMPLETE)
**Goal:** Visibility into instance health, operator activity trail, and data protection.
**Delivered:**
#### Part A — Audit Log
- Audit service with filtered queries (action, instance, user, date range) + pagination
- `GET /api/audit` endpoint with query parameter filtering
- IP address capture on all audit log entries (existing + new)
- `USER_LOGIN` audit event on successful authentication
- `SETTINGS_UPDATE` audit event on settings changes
- Full AuditLogPage: filterable table, action-colored tags, detail drawer with JSON inspector, server-side pagination, 30s auto-refresh toggle
#### Part B — Health Checks
- Health service with `checkInstanceHealth()` analyzing Docker container states
- Overall status determination (HEALTHY/DEGRADED/UNHEALTHY/UNKNOWN) from per-container state
- Scheduled health checker (default 5-minute interval, configurable, 0 to disable)
- `POST /api/instances/:id/health-check` for manual checks
- `GET /api/instances/:id/health-history` for paginated history
- Health card in Instance Detail overview with "Check Now" button + history table
- Dashboard stat cards: Healthy and Degraded instance counts from `/api/health/overview`
#### Part C — Backups
- Backup service: pg_dump via Docker exec, uploads tar.gz, SHA-256 manifest, final archive
- Async backup creation with PENDING → IN_PROGRESS → COMPLETED/FAILED progression
- `POST /api/instances/:id/backup` — create backup
- `GET /api/instances/:id/backups` — instance-scoped backup list
- `GET /api/backups` — cross-instance backup list with pagination
- `DELETE /api/backups/:id` — delete backup (file + DB record)
- `GET /api/backups/:id/download` — stream backup archive for download
- BackupsPage: cross-instance table, instance filter, stats (count/size/last), "Backup All Running"
- Enhanced Instance Detail backups tab: create/download/delete actions, status polling
- Backup storage volume mount in docker-compose.yml
- Old backup cleanup utility (configurable retention days)
### Phase 4: Pangolin Integration (PLANNED)
**Goal:** Automated tunnel setup for exposing instances to the internet.
**Scope:**
- Pangolin API client (site creation, resource management, Newt credentials)
- Automated tunnel setup endpoint (`POST /api/instances/:id/setup-tunnel`)
- Per-instance Newt container management (start/stop with stored credentials)
- Resource sync for all instance subdomains (app, api, media, docs, etc.)
- Tunnel status monitoring in Instance Detail tunnel tab
- Bulk tunnel setup for fleet-wide deployment
### Phase 5: Upgrades + Git Integration (PLANNED)
**Goal:** Rolling upgrades and version management.
**Scope:**
- Git pull + branch checkout for instance source code
- Database migration execution (prisma migrate deploy)
- Docker image rebuild + rolling restart
- Upgrade progress tracking (similar to provisioning)
- Rollback capability (pre-upgrade backup + restore)
- Version display on instance cards and detail pages
- `INSTANCE_UPGRADE` audit event
### Phase 6: User Management + RBAC (PLANNED)
**Goal:** Multi-operator support with granular permissions.
**Scope:**
- User CRUD pages (create, edit, delete operators)
- Role assignment and management
- Per-instance access control (operator can only manage assigned instances)
- Invitation flow (email invite with temporary password)
- Password change and reset functionality
- Session management (view/revoke active sessions)
### Phase 7: Monitoring Dashboard (PLANNED)
**Goal:** Fleet-wide observability with trends and alerting.
**Scope:**
- Health trend charts (uptime over time)
- Resource utilization (CPU, memory, disk via Docker stats)
- Alert rules (instance down > N minutes, disk usage > threshold)
- Notification channels (email, webhook)
- Backup compliance monitoring (last backup age alerts)
- Fleet summary dashboard with at-a-glance status
### Phase 8: Instance Configuration UI (PLANNED)
**Goal:** Edit instance configuration without SSH.
**Scope:**
- Feature flag toggles (media, listmonk, gancio, monitoring)
- SMTP configuration management
- .env variable editor (safe subset)
- Docker Compose service scaling
- Configuration diff preview before applying
- Auto-restart on config change
---
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `NODE_ENV` | `development` | Environment mode |
| `PORT` | `5000` | API server port |
| `DATABASE_URL` | — | PostgreSQL connection string |
| `REDIS_URL` | `redis://localhost:6399` | Redis connection string |
| `JWT_ACCESS_SECRET` | — | JWT signing key (min 32 chars) |
| `JWT_REFRESH_SECRET` | — | Refresh token signing key (min 32 chars) |
| `JWT_ACCESS_EXPIRES_IN` | `15m` | Access token lifetime |
| `JWT_REFRESH_EXPIRES_IN` | `7d` | Refresh token lifetime |
| `ENCRYPTION_KEY` | — | AES-256 key for secrets at rest (64 hex chars, i.e. 32 bytes) |
| `INITIAL_ADMIN_EMAIL` | `admin@example.com` | Bootstrap admin email |
| `INITIAL_ADMIN_PASSWORD` | `ChangeMe2025!!` | Bootstrap admin password (min 12 chars) |
| `CORS_ORIGINS` | `http://localhost:5100` | Allowed origins (comma-separated) |
| `INSTANCES_BASE_PATH` | `/opt/ccp/instances` | Where instance directories live |
| `CML_SOURCE_PATH` | `/home/bunker-admin/changemaker.lite` | CML source to clone from |
| `CML_GIT_REPO` | — | Git repo URL (if cloning remotely) |
| `CML_GIT_BRANCH` | `v2` | Default branch for new instances |
| `PORT_RANGE_API_START` | `14000` | API port range start |
| `PORT_RANGE_API_END` | `14999` | API port range end |
| `PORT_RANGE_ADMIN_START` | `13000` | Admin port range start |
| `PORT_RANGE_ADMIN_END` | `13999` | Admin port range end |
| `PORT_RANGE_POSTGRES_START` | `15400` | PostgreSQL port range start |
| `PORT_RANGE_POSTGRES_END` | `15499` | PostgreSQL port range end |
| `PORT_RANGE_NGINX_START` | `10000` | Nginx port range start |
| `PORT_RANGE_NGINX_END` | `10999` | Nginx port range end |
| `PANGOLIN_API_URL` | — | Pangolin API base URL |
| `PANGOLIN_API_KEY` | — | Pangolin API key |
| `PANGOLIN_ORG_ID` | — | Pangolin organization ID |
| `HEALTH_CHECK_INTERVAL_MS` | `300000` | Health check interval (0 to disable) |
| `BACKUP_STORAGE_PATH` | `/var/backups/ccp-instances` | Backup archive storage directory |
| `BACKUP_RETENTION_DAYS` | `30` | Auto-cleanup threshold |

---
## Docker Services
```yaml
services:
ccp-postgres: # PostgreSQL 16 Alpine — CCP database (port 5480)
ccp-redis: # Redis 7 Alpine — rate limiting, caching (port 6399)
ccp-api: # Node 20 + Docker CLI — API server (port 5000)
ccp-admin: # Nginx + React SPA — admin GUI (port 5100)
```
### Volume Mounts (ccp-api)
| Host | Container | Mode | Purpose |
|------|-----------|------|---------|
| `./api` | `/app` | rw | Source code (dev) |
| `./templates` | `/app/templates` | ro | Handlebars templates |
| `/var/run/docker.sock` | `/var/run/docker.sock` | — | Docker CLI access |
| `$INSTANCES_BASE_PATH` | Same | rw | Instance directories |
| `$CML_SOURCE_PATH` | Same | ro | CML source for provisioning |
| `$BACKUP_STORAGE_PATH` | Same | rw | Backup archives |
---
## File Inventory
### API (`api/`)
```
src/
├── server.ts # Express app + route mounting + health scheduler start
├── config/
│ ├── env.ts # Zod env validation (30+ vars)
│ └── redis.ts # Redis client config
├── middleware/
│ ├── auth.ts # authenticate + requireRole middleware
│ ├── error-handler.ts # AppError class + global error handler
│ └── validate.ts # Zod request body validation
├── modules/
│ ├── auth/
│ │ ├── auth.routes.ts # POST login/refresh/logout, GET /me
│ │ ├── auth.schemas.ts # Zod schemas for auth payloads
│ │ └── auth.service.ts # JWT generation, bcrypt verify, token rotation
│ ├── instances/
│ │ ├── instances.routes.ts # CRUD + lifecycle + services + logs + health + backups
│ │ ├── instances.schemas.ts # Zod create/update validation
│ │ ├── instances.service.ts # Business logic + audit logging with IP capture
│ │ └── provisioner.ts # 13-step async provisioning orchestration
│ ├── audit/
│ │ ├── audit.routes.ts # GET /api/audit with filters + pagination
│ │ └── audit.service.ts # Prisma query builder for audit logs
│ ├── backups/
│ │ └── backup.routes.ts # GET list, DELETE, GET download
│ ├── health/
│ │ └── health.routes.ts # Public health + authenticated overview
│ └── settings/
│ └── settings.routes.ts # GET all, PUT :key (+ audit logging)
├── services/
│ ├── docker.service.ts # Docker CLI wrapper (ps, exec, logs, up, down, etc.)
│ ├── health.service.ts # Health check logic + 5-minute scheduler
│ ├── backup.service.ts # pg_dump + tar + manifest + cleanup
│ ├── port-allocator.ts # Port range management
│ ├── secret-generator.ts # CML-compatible credential generation
│ └── template-engine.ts # Handlebars rendering for 7 config files
└── utils/
├── encryption.ts # AES-256-GCM encrypt/decrypt
└── logger.ts # Winston config
```
### Admin (`admin/`)
```
src/
├── App.tsx # Route definitions + dark theme config
├── main.tsx # React entry
├── components/
│ ├── AppLayout.tsx # Sidebar nav + header + user dropdown
│ ├── InstanceCard.tsx # Card with status, health bar, features
│ ├── ServiceHealthGrid.tsx # Container state table with actions
│ ├── LogViewer.tsx # Log display with service filter
│ └── ProtectedRoute.tsx # Auth guard wrapper
├── pages/
│ ├── LoginPage.tsx # Auth form
│ ├── DashboardPage.tsx # Stats + health overview + instance cards
│ ├── InstanceListPage.tsx # Instance table
│ ├── CreateWizardPage.tsx # 5-step provisioning wizard
│ ├── InstanceDetailPage.tsx # Tabbed detail (overview/services/logs/backups/tunnel)
│ ├── BackupsPage.tsx # Cross-instance backup manager
│ ├── AuditLogPage.tsx # Filterable audit trail
│ └── SettingsPage.tsx # CCP configuration
├── stores/
│ └── auth.store.ts # Zustand auth + localStorage persistence
├── lib/
│ └── api.ts # Axios + auth interceptors + token refresh
└── types/
└── api.ts # TypeScript interfaces for all API models
```
### Templates (`templates/`)
```
docker-compose.yml.hbs # Full CML docker-compose with ports/secrets/features
env.hbs # Instance .env file
nginx/
├── nginx.conf # Global nginx config (static copy)
└── conf.d/
├── default.conf.hbs # Subdomain routing
├── api.conf.hbs # API reverse proxy
└── services.conf.hbs # Service proxies
configs/
├── pangolin/resources.yml.hbs # Tunnel resource definitions
└── prometheus/prometheus.yml.hbs # Monitoring scrape targets
```
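Rendering these templates amounts to interpolating a per-instance context of ports, secrets, and feature flags. A minimal stdlib stand-in for the Handlebars step — simple `{{key}}` substitution only, no helpers, partials, or `#if` blocks, which the real `template-engine.ts` gets from Handlebars proper:

```typescript
// Replace {{key}} placeholders from a context map; unknown keys render
// as empty strings in this simplified sketch.
function render(template: string, context: Record<string, string | number>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match, key) =>
    key in context ? String(context[key]) : "",
  );
}

// Example: an .env fragment rendered with the context from provisioning
// step 7 (variable names here are illustrative).
const envTemplate = "PORT={{apiPort}}\nPOSTGRES_PASSWORD={{pgPassword}}\n";
const rendered = render(envTemplate, { apiPort: 14002, pgPassword: "s3cret" });
```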
---
## Quick Start
```bash
# 1. Clone and enter the CCP directory
cd changemaker.lite/changemaker-control-panel
# 2. Copy environment file
cp .env.example .env
# Edit .env: set strong passwords for JWT_ACCESS_SECRET, JWT_REFRESH_SECRET, ENCRYPTION_KEY
# 3. Start CCP
docker compose up -d
# 4. Run migrations + seed
docker compose exec ccp-api npx prisma migrate deploy
docker compose exec ccp-api npx prisma db seed
# 5. Access admin GUI
open http://localhost:5100
# Login with INITIAL_ADMIN_EMAIL / INITIAL_ADMIN_PASSWORD from .env
```
---
## Development
```bash
# API development (hot reload)
cd api && npm install && npx tsx src/server.ts
# Admin development (Vite dev server)
cd admin && npm install && npm run dev
# Type checking
cd api && npx tsc --noEmit
cd admin && npx tsc --noEmit
# Database operations
cd api && npx prisma migrate dev # Create/apply migrations
cd api && npx prisma studio # Browse database GUI
cd api && npx prisma db seed # Re-seed admin user
```


@@ -0,0 +1,25 @@
FROM node:20-alpine AS build
WORKDIR /app
COPY admin/package.json admin/package-lock.json ./
RUN npm ci
COPY admin/ .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
# SPA fallback
RUN echo 'server { \
listen 5100; \
root /usr/share/nginx/html; \
index index.html; \
location / { try_files $uri $uri/ /index.html; } \
location /api/ { proxy_pass http://ccp-api:5000; } \
}' > /etc/nginx/conf.d/default.conf
EXPOSE 5100


@@ -0,0 +1,20 @@
FROM node:20-alpine
# Install Docker CLI (needed to manage instance containers) + rsync (for provisioning)
RUN apk add --no-cache docker-cli docker-cli-compose rsync
WORKDIR /app
# Install dependencies
COPY api/package.json api/package-lock.json ./
RUN npm ci
# Copy source
COPY api/ .
# Generate Prisma client
RUN npx prisma generate
EXPOSE 5000
CMD ["npx", "tsx", "src/server.ts"]


@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Changemaker Control Panel</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

File diff suppressed because it is too large


@@ -0,0 +1,30 @@
{
"name": "ccp-admin",
"version": "1.0.0",
"description": "Changemaker Control Panel — Admin GUI",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc -b && vite build",
"preview": "vite preview",
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@ant-design/icons": "^5.6.0",
"@ant-design/v5-patch-for-react-19": "^1.0.3",
"antd": "^5.23.0",
"axios": "^1.7.9",
"dayjs": "^1.11.19",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"react-router-dom": "^7.1.1",
"zustand": "^5.0.3"
},
"devDependencies": {
"@types/react": "^19.0.7",
"@types/react-dom": "^19.0.3",
"@vitejs/plugin-react": "^4.3.4",
"typescript": "^5.7.3",
"vite": "^6.0.7"
}
}


@@ -0,0 +1,72 @@
import { useEffect } from 'react';
import { BrowserRouter, Routes, Route, Navigate } from 'react-router-dom';
import { App as AntApp, ConfigProvider, theme, Spin } from 'antd';
import { useAuthStore } from '@/stores/auth.store';
import ProtectedRoute from '@/components/ProtectedRoute';
import AppLayout from '@/components/AppLayout';
import LoginPage from '@/pages/LoginPage';
import DashboardPage from '@/pages/DashboardPage';
import InstanceListPage from '@/pages/InstanceListPage';
import CreateWizardPage from '@/pages/CreateWizardPage';
import InstanceDetailPage from '@/pages/InstanceDetailPage';
import RegisterInstancePage from '@/pages/RegisterInstancePage';
import BackupsPage from '@/pages/BackupsPage';
import AuditLogPage from '@/pages/AuditLogPage';
import SettingsPage from '@/pages/SettingsPage';
export default function App() {
const { hydrate, isLoading } = useAuthStore();
useEffect(() => {
hydrate();
}, []);
if (isLoading) {
return (
<div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '100vh' }}>
<Spin size="large" />
</div>
);
}
return (
<ConfigProvider
theme={{
algorithm: theme.darkAlgorithm,
token: {
colorPrimary: '#1677ff',
colorBgBase: '#141414',
borderRadius: 6,
},
}}
>
<AntApp>
<BrowserRouter>
<Routes>
<Route path="/login" element={<LoginPage />} />
<Route
path="/app"
element={
<ProtectedRoute>
<AppLayout />
</ProtectedRoute>
}
>
<Route index element={<DashboardPage />} />
<Route path="instances" element={<InstanceListPage />} />
<Route path="instances/new" element={<CreateWizardPage />} />
<Route path="instances/register" element={<RegisterInstancePage />} />
<Route path="instances/:id" element={<InstanceDetailPage />} />
<Route path="backups" element={<BackupsPage />} />
<Route path="audit" element={<AuditLogPage />} />
<Route path="settings" element={<SettingsPage />} />
</Route>
<Route path="*" element={<Navigate to="/app" replace />} />
</Routes>
</BrowserRouter>
</AntApp>
</ConfigProvider>
);
}


@@ -0,0 +1,181 @@
import { useState } from 'react';
import { Layout, Menu, Button, Typography, Avatar, Dropdown, Grid, Drawer } from 'antd';
import {
DashboardOutlined,
CloudServerOutlined,
SaveOutlined,
AuditOutlined,
SettingOutlined,
LogoutOutlined,
UserOutlined,
MenuFoldOutlined,
MenuUnfoldOutlined,
MenuOutlined,
} from '@ant-design/icons';
import { Outlet, useNavigate, useLocation } from 'react-router-dom';
import { useAuthStore } from '@/stores/auth.store';
const { Header, Sider, Content } = Layout;
export default function AppLayout() {
const [collapsed, setCollapsed] = useState(false);
const [drawerOpen, setDrawerOpen] = useState(false);
const navigate = useNavigate();
const location = useLocation();
const { user, logout } = useAuthStore();
const screens = Grid.useBreakpoint();
const isMobile = !screens.md;
const menuItems = [
{
key: '/app',
icon: <DashboardOutlined />,
label: 'Dashboard',
},
{
key: '/app/instances',
icon: <CloudServerOutlined />,
label: 'Instances',
},
{
key: '/app/backups',
icon: <SaveOutlined />,
label: 'Backups',
},
{
key: '/app/audit',
icon: <AuditOutlined />,
label: 'Audit Log',
},
{
key: '/app/settings',
icon: <SettingOutlined />,
label: 'Settings',
},
];
// Use startsWith matching with longest-match preference so sub-routes
// like /app/instances/123 highlight the "Instances" menu item, not Dashboard.
const selectedKey = menuItems
.filter((item) => location.pathname === item.key || location.pathname.startsWith(item.key + '/'))
.sort((a, b) => b.key.length - a.key.length)[0]?.key || '/app';
const handleMenuClick = ({ key }: { key: string }) => {
navigate(key);
if (isMobile) setDrawerOpen(false);
};
const userMenu = {
items: [
{
key: 'logout',
icon: <LogoutOutlined />,
label: 'Logout',
onClick: async () => {
await logout();
navigate('/login');
},
},
],
};
const siderContent = (
<>
<div
style={{
height: 64,
display: 'flex',
alignItems: 'center',
justifyContent: isMobile ? 'flex-start' : collapsed ? 'center' : 'flex-start',
padding: isMobile ? '0 16px' : collapsed ? 0 : '0 16px',
}}
>
<Typography.Title
level={4}
style={{ color: '#fff', margin: 0, whiteSpace: 'nowrap' }}
>
{!isMobile && collapsed ? 'CCP' : 'Control Panel'}
</Typography.Title>
</div>
<Menu
theme="dark"
mode="inline"
selectedKeys={[selectedKey]}
items={menuItems}
onClick={handleMenuClick}
/>
</>
);
return (
<Layout style={{ minHeight: '100vh' }}>
{isMobile ? (
<Drawer
placement="left"
open={drawerOpen}
onClose={() => setDrawerOpen(false)}
width={240}
styles={{ body: { padding: 0, background: '#001529' } }}
closable={false}
>
{siderContent}
</Drawer>
) : (
<Sider
collapsible
collapsed={collapsed}
onCollapse={setCollapsed}
trigger={null}
width={240}
style={{
overflow: 'auto',
height: '100vh',
position: 'fixed',
left: 0,
top: 0,
bottom: 0,
zIndex: 10,
}}
>
{siderContent}
</Sider>
)}
<Layout style={{ marginLeft: isMobile ? 0 : collapsed ? 80 : 240, transition: 'margin-left 0.2s' }}>
<Header
style={{
padding: '0 24px',
background: 'transparent',
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
}}
>
{isMobile ? (
<Button
type="text"
icon={<MenuOutlined />}
onClick={() => setDrawerOpen(true)}
style={{ fontSize: 16 }}
/>
) : (
<Button
type="text"
icon={collapsed ? <MenuUnfoldOutlined /> : <MenuFoldOutlined />}
onClick={() => setCollapsed(!collapsed)}
style={{ fontSize: 16 }}
/>
)}
<Dropdown menu={userMenu} placement="bottomRight">
<div style={{ cursor: 'pointer', display: 'flex', alignItems: 'center', gap: 8 }}>
<Avatar icon={<UserOutlined />} size="small" />
<span>{user?.name || user?.email}</span>
</div>
</Dropdown>
</Header>
<Content style={{ margin: isMobile ? 12 : 24, minHeight: 280 }}>
<Outlet />
</Content>
</Layout>
</Layout>
);
}


@@ -0,0 +1,373 @@
import { useState, useEffect, useRef } from 'react';
import {
Drawer, Table, Tag, Button, Space, Typography, Spin, Alert, Input,
Descriptions, Switch, message, Collapse, Badge, Result,
} from 'antd';
import {
SearchOutlined,
CheckCircleOutlined,
CloudServerOutlined,
HomeOutlined,
} from '@ant-design/icons';
import { api } from '@/lib/api';
import type { DiscoveredInstance, DiscoveryResult, ImportResult } from '@/types/api';
interface Props {
open: boolean;
onClose: () => void;
onImported: () => void;
}
export default function DiscoverInstancesDrawer({ open, onClose, onImported }: Props) {
const [loading, setLoading] = useState(false);
const [importing, setImporting] = useState(false);
const [result, setResult] = useState<DiscoveryResult | null>(null);
const [importResult, setImportResult] = useState<ImportResult | null>(null);
const [selectedKeys, setSelectedKeys] = useState<string[]>([]);
const [editedInstances, setEditedInstances] = useState<Record<string, Partial<DiscoveredInstance>>>({});
const [error, setError] = useState<string | null>(null);
const hasScanned = useRef(false);
const runDiscovery = async () => {
setLoading(true);
setError(null);
setResult(null);
setImportResult(null);
setSelectedKeys([]);
setEditedInstances({});
try {
const { data } = await api.post('/instances/discover');
setResult(data.data);
// Auto-select all new instances
const newKeys = (data.data as DiscoveryResult).instances
.filter((inst) => !inst.isAlreadyRegistered)
.map((inst) => inst.basePath);
setSelectedKeys(newKeys);
} catch (err) {
const msg = (err as { response?: { data?: { error?: { message?: string } } } })
?.response?.data?.error?.message || 'Discovery failed';
setError(msg);
} finally {
setLoading(false);
}
};
useEffect(() => {
if (open && !hasScanned.current) {
runDiscovery();
hasScanned.current = true;
}
}, [open]);
const getEditedValue = (basePath: string, field: keyof DiscoveredInstance, original: unknown) => {
return editedInstances[basePath]?.[field] ?? original;
};
const setEditedField = (basePath: string, field: string, value: unknown) => {
setEditedInstances((prev) => ({
...prev,
[basePath]: { ...prev[basePath], [field]: value },
}));
};
const handleImport = async () => {
if (!result) return;
setImporting(true);
setImportResult(null);
const toImport = result.instances
.filter((inst) => selectedKeys.includes(inst.basePath) && !inst.isAlreadyRegistered)
.map((inst) => {
const edits = editedInstances[inst.basePath] || {};
return {
name: (edits.name as string) || inst.name,
slug: (edits.slug as string) || inst.slug,
domain: inst.domain,
basePath: inst.basePath,
composeProject: inst.composeProject,
portConfig: inst.portConfig,
adminEmail: inst.adminEmail,
enableMedia: edits.enableMedia ?? inst.enableMedia,
enableChat: edits.enableChat ?? inst.enableChat,
enableGancio: edits.enableGancio ?? inst.enableGancio,
enableListmonk: edits.enableListmonk ?? inst.enableListmonk,
enableMonitoring: edits.enableMonitoring ?? inst.enableMonitoring,
enableDevTools: edits.enableDevTools ?? inst.enableDevTools,
enablePayments: edits.enablePayments ?? inst.enablePayments,
};
});
if (toImport.length === 0) {
message.warning('No instances selected for import');
setImporting(false);
return;
}
try {
const { data } = await api.post('/instances/import', { instances: toImport });
const ir = data.data as ImportResult;
setImportResult(ir);
if (ir.summary.succeeded > 0) {
message.success(`Imported ${ir.summary.succeeded} instance(s)`);
hasScanned.current = false; // Reset so next open re-scans
onImported();
}
if (ir.summary.failed > 0) {
message.warning(`${ir.summary.failed} instance(s) failed to import`);
}
} catch (err) {
const msg = (err as { response?: { data?: { error?: { message?: string } } } })
?.response?.data?.error?.message || 'Import failed';
message.error(msg);
} finally {
setImporting(false);
}
};
const columns = [
{
title: 'Name',
key: 'name',
render: (_: unknown, record: DiscoveredInstance) => (
<Space direction="vertical" size={0}>
<Space size="small">
{record.isParentInstance && <Tag icon={<HomeOutlined />} color="blue">Parent</Tag>}
{record.isAlreadyRegistered ? (
<Typography.Text type="secondary">{record.name}</Typography.Text>
) : (
<Input
size="small"
value={getEditedValue(record.basePath, 'name', record.name) as string}
onChange={(e) => setEditedField(record.basePath, 'name', e.target.value)}
style={{ width: 180 }}
/>
)}
</Space>
<Typography.Text type="secondary" style={{ fontSize: 11 }}>
{record.domain}
</Typography.Text>
</Space>
),
},
{
title: 'Source',
key: 'source',
width: 100,
render: (_: unknown, record: DiscoveredInstance) => (
<Tag
icon={record.source === 'parent' ? <HomeOutlined /> : <CloudServerOutlined />}
color={record.source === 'parent' ? 'blue' : 'default'}
>
{record.source === 'parent' ? 'Parent' : 'Docker'}
</Tag>
),
},
{
title: 'Status',
key: 'status',
width: 120,
render: (_: unknown, record: DiscoveredInstance) => (
<Space direction="vertical" size={0}>
<Tag color={record.isRunning ? 'green' : 'default'}>
{record.isRunning ? 'Running' : 'Stopped'}
</Tag>
<Typography.Text type="secondary" style={{ fontSize: 11 }}>
{record.runningContainers}/{record.totalContainers} containers
</Typography.Text>
</Space>
),
},
{
title: 'Tracked',
key: 'tracked',
width: 90,
render: (_: unknown, record: DiscoveredInstance) =>
record.isAlreadyRegistered ? (
<Tag color="orange">Already tracked</Tag>
) : (
<Tag color="green">New</Tag>
),
},
];
const expandedRowRender = (record: DiscoveredInstance) => {
const isEditable = !record.isAlreadyRegistered;
return (
<div style={{ padding: '8px 0' }}>
<Descriptions size="small" column={2} bordered>
<Descriptions.Item label="Slug">
{isEditable ? (
<Input
size="small"
value={getEditedValue(record.basePath, 'slug', record.slug) as string}
onChange={(e) => setEditedField(record.basePath, 'slug', e.target.value)}
/>
) : (
<Typography.Text code>{record.slug}</Typography.Text>
)}
</Descriptions.Item>
<Descriptions.Item label="Admin Email">{record.adminEmail}</Descriptions.Item>
<Descriptions.Item label="Base Path">
<Typography.Text code style={{ fontSize: 11 }}>{record.basePath}</Typography.Text>
</Descriptions.Item>
<Descriptions.Item label="Compose Project">
<Typography.Text code>{record.composeProject}</Typography.Text>
</Descriptions.Item>
<Descriptions.Item label="Ports" span={2}>
API:{record.portConfig.api} &nbsp; Admin:{record.portConfig.admin} &nbsp;
Postgres:{record.portConfig.postgres} &nbsp; Nginx:{record.portConfig.nginx}
</Descriptions.Item>
</Descriptions>
{isEditable && (
<Collapse
size="small"
style={{ marginTop: 8 }}
items={[{
key: 'features',
label: 'Feature Flags',
children: (
<Space wrap>
{(['enableMedia', 'enableChat', 'enableGancio', 'enableListmonk', 'enableMonitoring', 'enableDevTools', 'enablePayments'] as const).map((flag) => (
<Space key={flag} size="small">
<Switch
size="small"
checked={getEditedValue(record.basePath, flag, record[flag]) as boolean}
onChange={(checked) => setEditedField(record.basePath, flag, checked)}
/>
<Typography.Text style={{ fontSize: 12 }}>
{flag.replace('enable', '')}
</Typography.Text>
</Space>
))}
</Space>
),
}]}
/>
)}
</div>
);
};
const selectableInstances = result?.instances.filter((inst) => !inst.isAlreadyRegistered) || [];
const selectedCount = selectedKeys.filter((key) =>
selectableInstances.some((inst) => inst.basePath === key)
).length;
return (
<Drawer
title="Discover CML Instances"
placement="right"
width={720}
open={open}
onClose={onClose}
footer={
importResult ? (
<Button type="primary" onClick={onClose}>Done</Button>
) : (
<Space>
<Button onClick={runDiscovery} icon={<SearchOutlined />} loading={loading}>
Re-scan
</Button>
<Button
type="primary"
onClick={handleImport}
loading={importing}
disabled={selectedCount === 0 || loading}
>
Import Selected ({selectedCount})
</Button>
</Space>
)
}
>
{loading && (
<div style={{ textAlign: 'center', padding: '60px 0' }}>
<Spin size="large" />
<Typography.Paragraph type="secondary" style={{ marginTop: 16 }}>
Scanning Docker projects and parsing .env files...
</Typography.Paragraph>
</div>
)}
{error && <Alert type="error" message={error} showIcon style={{ marginBottom: 16 }} />}
{importResult && (
<div style={{ marginBottom: 16 }}>
<Result
status={importResult.summary.failed === 0 ? 'success' : 'warning'}
title={`${importResult.summary.succeeded} imported, ${importResult.summary.failed} failed`}
/>
{importResult.results.filter((r) => !r.success).map((r) => (
<Alert
key={r.slug}
type="error"
message={`${r.slug}: ${r.error}`}
style={{ marginBottom: 4 }}
showIcon
/>
))}
{importResult.results.filter((r) => r.success).map((r) => (
<Alert
key={r.slug}
type="success"
message={`${r.slug} imported successfully`}
icon={<CheckCircleOutlined />}
style={{ marginBottom: 4 }}
showIcon
/>
))}
</div>
)}
{!loading && result && !importResult && (
<>
<Alert
type="info"
showIcon
message={
<Space>
<span>
Found <strong>{result.summary.total}</strong> instance(s)
</span>
<Badge count={result.summary.newInstances} style={{ backgroundColor: '#52c41a' }} overflowCount={99}>
<Tag>New</Tag>
</Badge>
<Badge count={result.summary.alreadyRegistered} style={{ backgroundColor: '#faad14' }} overflowCount={99}>
<Tag>Already tracked</Tag>
</Badge>
{result.summary.parentFound && <Tag icon={<HomeOutlined />} color="blue">Parent found</Tag>}
</Space>
}
style={{ marginBottom: 16 }}
/>
{result.instances.length === 0 ? (
<Result
icon={<CloudServerOutlined style={{ color: '#999' }} />}
title="No CML instances found"
subTitle="No running Docker Compose projects with the CML fingerprint were detected."
/>
) : (
<Table
dataSource={result.instances}
columns={columns}
rowKey="basePath"
size="small"
pagination={false}
expandable={{ expandedRowRender }}
rowSelection={{
selectedRowKeys: selectedKeys,
onChange: (keys) => setSelectedKeys(keys as string[]),
getCheckboxProps: (record) => ({
disabled: record.isAlreadyRegistered,
}),
}}
rowClassName={(record) => record.isAlreadyRegistered ? 'ant-table-row-disabled' : ''}
/>
)}
</>
)}
</Drawer>
);
}


@@ -0,0 +1,127 @@
import { Card, Tag, Typography, Space, Button, Progress, Tooltip } from 'antd';
import {
PlayCircleOutlined,
PauseCircleOutlined,
LinkOutlined,
SettingOutlined,
} from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import type { Instance } from '@/types/api';
const statusColors: Record<string, string> = {
RUNNING: 'green',
STOPPED: 'default',
PROVISIONING: 'processing',
ERROR: 'red',
DESTROYING: 'orange',
};
interface InstanceCardProps {
instance: Instance;
onStart?: (id: string) => void;
onStop?: (id: string) => void;
}
export default function InstanceCard({ instance, onStart, onStop }: InstanceCardProps) {
const navigate = useNavigate();
const healthCheck = instance.healthChecks?.[0];
// Guard against division by zero (NaN) when no services have been reported yet
const healthPercent = healthCheck && healthCheck.totalServices > 0
  ? Math.round((healthCheck.healthyServices / healthCheck.totalServices) * 100)
  : 0;
return (
<Card
hoverable
onClick={() => navigate(`/app/instances/${instance.id}`)}
style={{ width: '100%' }}
actions={[
<Tooltip title="Open Site" key="open">
<Button
type="text"
icon={<LinkOutlined />}
onClick={(e) => {
e.stopPropagation();
window.open(`https://app.${instance.domain}`, '_blank');
}}
/>
</Tooltip>,
instance.status === 'RUNNING' ? (
<Tooltip title="Stop" key="stop">
<Button
type="text"
icon={<PauseCircleOutlined />}
onClick={(e) => {
e.stopPropagation();
onStop?.(instance.id);
}}
/>
</Tooltip>
) : (instance.status === 'STOPPED' || instance.status === 'ERROR') ? (
<Tooltip title="Start" key="start">
<Button
type="text"
icon={<PlayCircleOutlined />}
onClick={(e) => {
e.stopPropagation();
onStart?.(instance.id);
}}
/>
</Tooltip>
) : (
<Tooltip title={instance.status === 'PROVISIONING' ? 'Provisioning...' : instance.status} key="disabled">
<Button
type="text"
icon={<PlayCircleOutlined />}
disabled
/>
</Tooltip>
),
<Tooltip title="Settings" key="settings">
<Button
type="text"
icon={<SettingOutlined />}
onClick={(e) => {
e.stopPropagation();
navigate(`/app/instances/${instance.id}`);
}}
/>
</Tooltip>,
]}
>
<Card.Meta
title={
<Space>
<span>{instance.name}</span>
<Tag color={statusColors[instance.status]}>{instance.status}</Tag>
</Space>
}
description={
<Space direction="vertical" style={{ width: '100%' }}>
<Typography.Text type="secondary">{instance.domain}</Typography.Text>
{healthCheck && (
<div>
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
Services: {healthCheck.healthyServices}/{healthCheck.totalServices}
</Typography.Text>
<Progress
percent={healthPercent}
size="small"
status={healthPercent === 100 ? 'success' : healthPercent > 50 ? 'active' : 'exception'}
showInfo={false}
/>
</div>
)}
<Space size="small" wrap>
{instance.enableMedia && <Tag>Media</Tag>}
{instance.enableListmonk && <Tag>Newsletter</Tag>}
{instance.enableGancio && <Tag>Events</Tag>}
{instance.enableChat && <Tag>Chat</Tag>}
{instance.enableMonitoring && <Tag>Monitoring</Tag>}
</Space>
</Space>
}
/>
</Card>
);
}


@@ -0,0 +1,186 @@
import { useEffect, useRef, useState, useCallback } from 'react';
import { Select, Space, Button, message, Spin, Empty } from 'antd';
import { ReloadOutlined, CopyOutlined, VerticalAlignBottomOutlined, SyncOutlined } from '@ant-design/icons';
import axios from 'axios';
import { api } from '@/lib/api';
interface LogViewerProps {
instanceId: string;
services: string[];
initialService?: string;
}
const TAIL_OPTIONS = [
{ label: '100 lines', value: 100 },
{ label: '200 lines', value: 200 },
{ label: '500 lines', value: 500 },
{ label: '1000 lines', value: 1000 },
];
const SINCE_OPTIONS = [
{ label: '1 hour', value: '1h' },
{ label: '6 hours', value: '6h' },
{ label: '24 hours', value: '24h' },
{ label: 'All time', value: '' },
];
const AUTO_REFRESH_INTERVAL = 5000;
export default function LogViewer({ instanceId, services, initialService }: LogViewerProps) {
const [service, setService] = useState<string | undefined>(initialService);
const [tail, setTail] = useState(200);
const [since, setSince] = useState('1h');
const [logs, setLogs] = useState('');
const [loading, setLoading] = useState(false);
const [autoScroll, setAutoScroll] = useState(true);
const [autoRefresh, setAutoRefresh] = useState(false);
const logRef = useRef<HTMLPreElement>(null);
const abortRef = useRef<AbortController | null>(null);
const loadingRef = useRef(false);
useEffect(() => {
if (initialService) setService(initialService);
}, [initialService]);
const fetchLogs = useCallback(async () => {
// Abort any in-flight request
if (abortRef.current) {
abortRef.current.abort();
}
const controller = new AbortController();
abortRef.current = controller;
setLoading(true);
loadingRef.current = true;
try {
const params = new URLSearchParams();
if (service) params.set('service', service);
params.set('tail', tail.toString());
if (since) params.set('since', since);
const { data } = await api.get(`/instances/${instanceId}/logs?${params}`, {
signal: controller.signal,
});
setLogs(data.data || '');
} catch (err) {
// Ignore aborted requests — they're expected during rapid dropdown changes
if (axios.isCancel(err)) return;
setLogs('Failed to load logs');
} finally {
setLoading(false);
loadingRef.current = false;
}
}, [instanceId, service, tail, since]);
// Fetch when parameters change
useEffect(() => {
fetchLogs();
// Cleanup: abort on unmount or before next effect run
return () => {
if (abortRef.current) {
abortRef.current.abort();
}
};
}, [fetchLogs]);
// Auto-refresh interval
useEffect(() => {
if (!autoRefresh) return;
const id = setInterval(() => {
// Skip if a manual fetch is already in progress
if (!loadingRef.current) {
fetchLogs();
}
}, AUTO_REFRESH_INTERVAL);
return () => clearInterval(id);
}, [autoRefresh, fetchLogs]);
useEffect(() => {
if (autoScroll && logRef.current) {
logRef.current.scrollTop = logRef.current.scrollHeight;
}
}, [logs, autoScroll]);
const handleCopy = async () => {
try {
await navigator.clipboard.writeText(logs);
message.success('Logs copied to clipboard');
} catch {
message.error('Failed to copy');
}
};
return (
<div>
<Space wrap style={{ marginBottom: 12 }}>
<Select
style={{ width: 180 }}
placeholder="All services"
allowClear
value={service}
onChange={setService}
options={services.map((s) => ({ label: s, value: s }))}
/>
<Select
style={{ width: 120 }}
value={tail}
onChange={setTail}
options={TAIL_OPTIONS}
/>
<Select
style={{ width: 120 }}
value={since}
onChange={setSince}
options={SINCE_OPTIONS}
/>
<Button icon={<ReloadOutlined />} onClick={fetchLogs} loading={loading}>
Refresh
</Button>
<Button icon={<CopyOutlined />} onClick={handleCopy} disabled={!logs}>
Copy
</Button>
<Button
icon={<VerticalAlignBottomOutlined />}
type={autoScroll ? 'primary' : 'default'}
onClick={() => setAutoScroll(!autoScroll)}
>
Auto-scroll
</Button>
<Button
icon={<SyncOutlined spin={autoRefresh} />}
type={autoRefresh ? 'primary' : 'default'}
onClick={() => setAutoRefresh(!autoRefresh)}
>
Auto-refresh
</Button>
</Space>
{loading && !logs ? (
<div style={{ textAlign: 'center', padding: 40 }}>
<Spin />
</div>
) : !logs ? (
<Empty description="No logs available" />
) : (
<pre
ref={logRef}
style={{
background: '#1e1e1e',
color: '#d4d4d4',
padding: 16,
borderRadius: 6,
fontSize: 12,
lineHeight: 1.5,
maxHeight: 600,
overflow: 'auto',
fontFamily: "'Fira Code', 'Cascadia Code', 'Consolas', monospace",
whiteSpace: 'pre-wrap',
wordBreak: 'break-all',
}}
>
{logs}
</pre>
)}
</div>
);
}


@@ -0,0 +1,22 @@
import { Spin } from 'antd';
import { Navigate } from 'react-router-dom';
import { useAuthStore } from '@/stores/auth.store';
export default function ProtectedRoute({ children }: { children: React.ReactNode }) {
const { isAuthenticated, isLoading } = useAuthStore();
// Wait for hydration to complete before deciding on redirect
if (isLoading) {
return (
<div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', height: '100vh' }}>
<Spin size="large" />
</div>
);
}
if (!isAuthenticated) {
return <Navigate to="/login" replace />;
}
return <>{children}</>;
}


@@ -0,0 +1,138 @@
import { Table, Tag, Button, Space, Tooltip, Empty } from 'antd';
import { ReloadOutlined, FileTextOutlined } from '@ant-design/icons';
export interface ServiceInfo {
name: string;
service: string;
status: string;
state: string;
health: string;
ports: string;
createdAt: string;
exitCode: number;
}
interface ServiceHealthGridProps {
services: ServiceInfo[];
loading?: boolean;
onRestart?: (service: string) => void;
onViewLogs?: (service: string) => void;
}
const stateColors: Record<string, string> = {
running: 'green',
exited: 'red',
restarting: 'orange',
paused: 'gold',
dead: 'red',
created: 'blue',
};
const healthColors: Record<string, string> = {
healthy: 'green',
unhealthy: 'red',
starting: 'processing',
};
export default function ServiceHealthGrid({
services,
loading,
onRestart,
onViewLogs,
}: ServiceHealthGridProps) {
if (!loading && services.length === 0) {
return <Empty description="No containers found" />;
}
const columns = [
{
title: 'Service',
dataIndex: 'service',
key: 'service',
render: (name: string) => <strong>{name}</strong>,
},
{
title: 'Container',
dataIndex: 'name',
key: 'name',
render: (name: string) => (
<span style={{ fontSize: 12, fontFamily: 'monospace' }}>{name}</span>
),
},
{
title: 'State',
dataIndex: 'state',
key: 'state',
render: (state: string) => (
<Tag color={stateColors[state] || 'default'}>{state.toUpperCase()}</Tag>
),
},
{
title: 'Health',
dataIndex: 'health',
key: 'health',
render: (health: string) => {
if (!health || health === 'none') {
return <Tag>N/A</Tag>;
}
return <Tag color={healthColors[health] || 'default'}>{health}</Tag>;
},
},
{
title: 'Status',
dataIndex: 'status',
key: 'status',
render: (status: string) => (
<span style={{ fontSize: 12 }}>{status}</span>
),
},
{
title: 'Ports',
dataIndex: 'ports',
key: 'ports',
render: (ports: string) => (
<span style={{ fontSize: 11, fontFamily: 'monospace' }}>{ports || '-'}</span>
),
},
{
title: 'Actions',
key: 'actions',
width: 100,
render: (_: unknown, record: ServiceInfo) => (
<Space size="small">
{onRestart && (
<Tooltip title="Restart">
<Button
type="text"
size="small"
icon={<ReloadOutlined />}
onClick={() => onRestart(record.service)}
/>
</Tooltip>
)}
{onViewLogs && (
<Tooltip title="View Logs">
<Button
type="text"
size="small"
icon={<FileTextOutlined />}
onClick={() => onViewLogs(record.service)}
/>
</Tooltip>
)}
</Space>
),
},
];
return (
<Table
dataSource={services}
columns={columns}
rowKey="name"
loading={loading}
pagination={false}
size="small"
/>
);
}


@@ -0,0 +1,92 @@
import axios from 'axios';
interface AuthResponse {
user: { id: string; email: string; name: string; role: string };
accessToken?: string;
refreshToken?: string;
}
export const api = axios.create({
baseURL: '/api',
});
// Token accessor — set by auth store on init
let getTokens: () => { accessToken: string | null; refreshToken: string | null } = () => ({
accessToken: null,
refreshToken: null,
});
let onTokenRefresh: (accessToken: string, refreshToken: string) => void = () => {};
let onAuthFailure: () => void = () => {};
export function registerAuthCallbacks(callbacks: {
getTokens: typeof getTokens;
onTokenRefresh: typeof onTokenRefresh;
onAuthFailure: typeof onAuthFailure;
}) {
getTokens = callbacks.getTokens;
onTokenRefresh = callbacks.onTokenRefresh;
onAuthFailure = callbacks.onAuthFailure;
}
// Request interceptor: attach access token
api.interceptors.request.use((config) => {
const { accessToken } = getTokens();
if (accessToken) {
config.headers.Authorization = `Bearer ${accessToken}`;
}
return config;
});
// Response interceptor: handle 401 with token refresh
let refreshPromise: Promise<AuthResponse> | null = null;
api.interceptors.response.use(
(response) => response,
async (error) => {
const originalRequest = error.config;
const errorCode = error.response?.data?.error?.code;
// Skip retry logic for the refresh endpoint itself to avoid circular await deadlock
const isRefreshRequest = originalRequest.url?.includes('/auth/refresh');
if (
error.response?.status === 401 &&
(errorCode === 'INVALID_TOKEN' || errorCode === 'AUTH_REQUIRED') &&
!originalRequest._retry &&
!isRefreshRequest
) {
originalRequest._retry = true;
const { refreshToken } = getTokens();
if (!refreshToken) {
onAuthFailure();
return Promise.reject(error);
}
try {
if (!refreshPromise) {
refreshPromise = api
.post<AuthResponse>('/auth/refresh', { refreshToken })
.then((res) => res.data)
.finally(() => {
refreshPromise = null;
});
}
const data = await refreshPromise;
if (!data.accessToken || !data.refreshToken) {
onAuthFailure();
return Promise.reject(new Error('Missing tokens in refresh response'));
}
onTokenRefresh(data.accessToken, data.refreshToken);
originalRequest.headers.Authorization = `Bearer ${data.accessToken}`;
return api(originalRequest);
} catch {
onAuthFailure();
return Promise.reject(error);
}
}
return Promise.reject(error);
}
);


@@ -0,0 +1,9 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<App />
</React.StrictMode>
);


@@ -0,0 +1,271 @@
import { useEffect, useState, useCallback, useRef } from 'react';
import {
Typography,
Table,
Tag,
Space,
Select,
DatePicker,
Button,
Drawer,
Switch,
message,
} from 'antd';
import { ReloadOutlined } from '@ant-design/icons';
import dayjs from 'dayjs';
import { api } from '@/lib/api';
import type { AuditLogEntry, Instance } from '@/types/api';
const { RangePicker } = DatePicker;
const AUDIT_ACTIONS = [
'INSTANCE_CREATE',
'INSTANCE_UPDATE',
'INSTANCE_DELETE',
'INSTANCE_START',
'INSTANCE_STOP',
'INSTANCE_RESTART',
'INSTANCE_UPGRADE',
'BACKUP_CREATE',
'BACKUP_DELETE',
'PANGOLIN_SETUP',
'PANGOLIN_SYNC',
'USER_LOGIN',
'USER_CREATE',
'USER_UPDATE',
'USER_DELETE',
'SETTINGS_UPDATE',
];
const actionColors: Record<string, string> = {
INSTANCE_CREATE: 'green',
INSTANCE_START: 'green',
INSTANCE_UPDATE: 'blue',
INSTANCE_RESTART: 'blue',
INSTANCE_STOP: 'orange',
INSTANCE_DELETE: 'red',
INSTANCE_UPGRADE: 'cyan',
BACKUP_CREATE: 'purple',
BACKUP_DELETE: 'red',
PANGOLIN_SETUP: 'geekblue',
PANGOLIN_SYNC: 'geekblue',
USER_LOGIN: 'default',
USER_CREATE: 'green',
USER_UPDATE: 'blue',
USER_DELETE: 'red',
SETTINGS_UPDATE: 'volcano',
};
export default function AuditLogPage() {
const [logs, setLogs] = useState<AuditLogEntry[]>([]);
const [total, setTotal] = useState(0);
const [loading, setLoading] = useState(true);
const [page, setPage] = useState(1);
const [pageSize, setPageSize] = useState(50);
const [actionFilter, setActionFilter] = useState<string | undefined>();
const [instanceFilter, setInstanceFilter] = useState<string | undefined>();
const [dateRange, setDateRange] = useState<[dayjs.Dayjs, dayjs.Dayjs] | null>(null);
const [instances, setInstances] = useState<Instance[]>([]);
const [selectedLog, setSelectedLog] = useState<AuditLogEntry | null>(null);
const [autoRefresh, setAutoRefresh] = useState(false);
const intervalRef = useRef<ReturnType<typeof setInterval> | null>(null);
const fetchLogs = useCallback(async () => {
try {
const params: Record<string, string> = {
page: String(page),
limit: String(pageSize),
};
if (actionFilter) params.action = actionFilter;
if (instanceFilter) params.instanceId = instanceFilter;
if (dateRange?.[0]) params.from = dateRange[0].startOf('day').toISOString();
if (dateRange?.[1]) params.to = dateRange[1].endOf('day').toISOString();
const { data } = await api.get('/audit', { params });
setLogs(data.data);
setTotal(data.total);
} catch {
message.error('Failed to load audit logs');
} finally {
setLoading(false);
}
}, [page, pageSize, actionFilter, instanceFilter, dateRange]);
const fetchInstances = useCallback(async () => {
try {
const { data } = await api.get('/instances');
setInstances(data.data);
} catch {
  // Non-critical: the instance filter dropdown will simply be empty
}
}, []);
useEffect(() => {
fetchInstances();
}, [fetchInstances]);
useEffect(() => {
setLoading(true);
fetchLogs();
}, [fetchLogs]);
// Auto-refresh
useEffect(() => {
if (autoRefresh) {
intervalRef.current = setInterval(fetchLogs, 30_000);
}
return () => {
if (intervalRef.current) clearInterval(intervalRef.current);
};
}, [autoRefresh, fetchLogs]);
const columns = [
{
title: 'Timestamp',
dataIndex: 'createdAt',
width: 180,
render: (d: string) => dayjs(d).format('YYYY-MM-DD HH:mm:ss'),
},
{
title: 'Action',
dataIndex: 'action',
width: 160,
render: (action: string) => (
<Tag color={actionColors[action] || 'default'}>{action}</Tag>
),
},
{
title: 'Instance',
dataIndex: 'instance',
width: 160,
render: (inst: AuditLogEntry['instance']) => inst?.name || '-',
},
{
title: 'User',
dataIndex: 'user',
width: 200,
render: (user: AuditLogEntry['user']) => user?.email || '-',
},
{
title: 'IP Address',
dataIndex: 'ipAddress',
width: 140,
render: (ip: string | undefined) => ip || '-',
},
];
return (
<div>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 16, flexWrap: 'wrap', gap: 8 }}>
<Typography.Title level={3} style={{ margin: 0 }}>
Audit Log
</Typography.Title>
<Space>
<span>Auto-refresh</span>
<Switch size="small" checked={autoRefresh} onChange={setAutoRefresh} />
<Button icon={<ReloadOutlined />} onClick={fetchLogs}>
Refresh
</Button>
</Space>
</div>
<Space wrap style={{ marginBottom: 16 }}>
<Select
placeholder="Filter by action"
allowClear
style={{ width: 200 }}
value={actionFilter}
onChange={(v) => { setActionFilter(v); setPage(1); }}
options={AUDIT_ACTIONS.map((a) => ({ label: a, value: a }))}
/>
<Select
placeholder="Filter by instance"
allowClear
style={{ width: 200 }}
value={instanceFilter}
onChange={(v) => { setInstanceFilter(v); setPage(1); }}
options={instances.map((i) => ({ label: i.name, value: i.id }))}
/>
<RangePicker
value={dateRange}
onChange={(vals) => {
setDateRange(vals as [dayjs.Dayjs, dayjs.Dayjs] | null);
setPage(1);
}}
/>
</Space>
<Table
dataSource={logs}
rowKey="id"
columns={columns}
loading={loading}
onRow={(record) => ({
onClick: () => setSelectedLog(record),
style: { cursor: 'pointer' },
})}
pagination={{
current: page,
pageSize,
total,
showSizeChanger: true,
pageSizeOptions: ['25', '50', '100'],
onChange: (p, ps) => { setPage(p); setPageSize(ps); },
showTotal: (t) => `${t} entries`,
}}
size="small"
/>
<Drawer
title="Audit Log Details"
open={!!selectedLog}
onClose={() => setSelectedLog(null)}
width={520}
>
{selectedLog && (
<Space direction="vertical" style={{ width: '100%' }} size="middle">
<div>
<Typography.Text type="secondary">Action</Typography.Text>
<div>
<Tag color={actionColors[selectedLog.action] || 'default'}>
{selectedLog.action}
</Tag>
</div>
</div>
<div>
<Typography.Text type="secondary">Timestamp</Typography.Text>
<div>{dayjs(selectedLog.createdAt).format('YYYY-MM-DD HH:mm:ss')}</div>
</div>
<div>
<Typography.Text type="secondary">User</Typography.Text>
<div>{selectedLog.user?.email || selectedLog.userId || '-'}</div>
</div>
<div>
<Typography.Text type="secondary">Instance</Typography.Text>
<div>{selectedLog.instance?.name || selectedLog.instanceId || '-'}</div>
</div>
<div>
<Typography.Text type="secondary">IP Address</Typography.Text>
<div>{selectedLog.ipAddress || '-'}</div>
</div>
<div>
<Typography.Text type="secondary">Details</Typography.Text>
<pre
style={{
background: '#1a1a2e',
padding: 12,
borderRadius: 6,
overflow: 'auto',
maxHeight: 400,
fontSize: 12,
}}
>
{JSON.stringify(selectedLog.details, null, 2) || 'null'}
</pre>
</div>
</Space>
)}
</Drawer>
</div>
);
}
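The Audit Log page above sends pagination keys on every request and adds filter keys only when they are set (the `allowClear` selects reset them to `undefined`). A standalone sketch of that param-building pattern; `buildAuditParams` and `AuditFilters` are hypothetical names for illustration, not exports of the CCP codebase:

```typescript
// Hypothetical sketch of the query-param pattern used by the list pages:
// pagination is always present, optional filters are added only when set.
interface AuditFilters {
  page: number;
  pageSize: number;
  action?: string;
  instanceId?: string;
}

function buildAuditParams(f: AuditFilters): Record<string, string> {
  const params: Record<string, string> = {
    page: String(f.page),
    limit: String(f.pageSize),
  };
  if (f.action) params.action = f.action; // omitted when cleared via allowClear
  if (f.instanceId) params.instanceId = f.instanceId;
  return params;
}
```

Keeping cleared filters out of the object entirely (rather than sending empty strings) lets the API treat absence as "no filter".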

View File

@ -0,0 +1,259 @@
import { useEffect, useState, useCallback } from 'react';
import {
Typography,
Table,
Tag,
Space,
Select,
Button,
Popconfirm,
Statistic,
Card,
Row,
Col,
message,
} from 'antd';
import {
ReloadOutlined,
CloudDownloadOutlined,
DeleteOutlined,
CloudUploadOutlined,
} from '@ant-design/icons';
import dayjs from 'dayjs';
import { api } from '@/lib/api';
import type { Instance } from '@/types/api';
interface BackupRow {
id: string;
instanceId: string;
status: 'PENDING' | 'IN_PROGRESS' | 'COMPLETED' | 'FAILED';
archivePath?: string;
sizeBytes?: string | number | null;
manifest?: Record<string, unknown>;
startedAt: string;
completedAt?: string;
errorMessage?: string;
s3Uploaded: boolean;
instance?: { id: string; name: string; slug: string } | null;
}
function formatSize(bytes: string | number | null | undefined): string {
if (bytes == null) return '-';
const n = typeof bytes === 'string' ? parseInt(bytes, 10) : bytes;
if (n < 1024) return `${n} B`;
if (n < 1024 * 1024) return `${(n / 1024).toFixed(1)} KB`;
if (n < 1024 * 1024 * 1024) return `${(n / 1024 / 1024).toFixed(1)} MB`;
return `${(n / 1024 / 1024 / 1024).toFixed(2)} GB`;
}
export default function BackupsPage() {
const [backups, setBackups] = useState<BackupRow[]>([]);
const [total, setTotal] = useState(0);
const [loading, setLoading] = useState(true);
const [page, setPage] = useState(1);
const [pageSize, setPageSize] = useState(50);
const [instanceFilter, setInstanceFilter] = useState<string | undefined>();
const [instances, setInstances] = useState<Instance[]>([]);
const [backingUpAll, setBackingUpAll] = useState(false);
const fetchBackups = useCallback(async () => {
try {
const params: Record<string, string> = { page: String(page), limit: String(pageSize) };
if (instanceFilter) params.instanceId = instanceFilter;
const { data } = await api.get('/backups', { params });
setBackups(data.data);
setTotal(data.total);
} catch {
message.error('Failed to load backups');
} finally {
setLoading(false);
}
}, [page, pageSize, instanceFilter]);
const fetchInstances = useCallback(async () => {
try {
const { data } = await api.get('/instances');
setInstances(data.data);
} catch {
// Silently fail
}
}, []);
useEffect(() => {
fetchInstances();
}, [fetchInstances]);
useEffect(() => {
setLoading(true);
fetchBackups();
}, [fetchBackups]);
const handleDelete = async (id: string) => {
try {
await api.delete(`/backups/${id}`);
message.success('Backup deleted');
fetchBackups();
} catch {
message.error('Failed to delete backup');
}
};
const handleDownload = (id: string) => {
window.open(`/api/backups/${id}/download`, '_blank');
};
const handleBackupAll = async () => {
const running = instances.filter((i) => i.status === 'RUNNING');
if (running.length === 0) {
      message.warning('No running instances to back up');
return;
}
setBackingUpAll(true);
try {
await Promise.all(running.map((inst) => api.post(`/instances/${inst.id}/backup`)));
message.success(`Backup started for ${running.length} instance(s)`);
fetchBackups();
} catch {
message.error('Some backups failed to start');
} finally {
setBackingUpAll(false);
}
};
// Summary stats
const totalSize = backups
.filter((b) => b.status === 'COMPLETED' && b.sizeBytes)
.reduce((sum, b) => sum + (typeof b.sizeBytes === 'string' ? parseInt(b.sizeBytes, 10) : (b.sizeBytes || 0)), 0);
const lastBackup = backups.find((b) => b.status === 'COMPLETED');
const statusColors: Record<string, string> = {
COMPLETED: 'green',
FAILED: 'red',
IN_PROGRESS: 'processing',
PENDING: 'default',
};
return (
<div>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 16, flexWrap: 'wrap', gap: 8 }}>
<Typography.Title level={3} style={{ margin: 0 }}>
Backups
</Typography.Title>
<Space>
<Button
icon={<CloudUploadOutlined />}
onClick={handleBackupAll}
loading={backingUpAll}
>
Backup All Running
</Button>
<Button icon={<ReloadOutlined />} onClick={fetchBackups}>
Refresh
</Button>
</Space>
</div>
<Row gutter={[16, 16]} style={{ marginBottom: 16 }}>
<Col xs={12} sm={8}>
<Card size="small">
<Statistic title="Total Backups" value={total} />
</Card>
</Col>
<Col xs={12} sm={8}>
<Card size="small">
<Statistic title="Total Size" value={formatSize(totalSize)} />
</Card>
</Col>
<Col xs={24} sm={8}>
<Card size="small">
<Statistic
title="Last Backup"
value={lastBackup ? dayjs(lastBackup.completedAt).format('MMM D, HH:mm') : 'Never'}
/>
</Card>
</Col>
</Row>
<Space wrap style={{ marginBottom: 16 }}>
<Select
placeholder="Filter by instance"
allowClear
style={{ width: 220 }}
value={instanceFilter}
onChange={(v) => { setInstanceFilter(v); setPage(1); }}
options={instances.map((i) => ({ label: i.name, value: i.id }))}
/>
</Space>
<Table
dataSource={backups}
rowKey="id"
loading={loading}
size="small"
pagination={{
current: page,
pageSize,
total,
showSizeChanger: true,
pageSizeOptions: ['25', '50', '100'],
onChange: (p, ps) => { setPage(p); setPageSize(ps); },
showTotal: (t) => `${t} backups`,
}}
columns={[
{
title: 'Instance',
dataIndex: 'instance',
width: 160,
render: (inst: BackupRow['instance']) => inst?.name || '-',
},
{
title: 'Status',
dataIndex: 'status',
width: 110,
render: (s: string) => <Tag color={statusColors[s]}>{s}</Tag>,
},
{
title: 'Started',
dataIndex: 'startedAt',
width: 150,
render: (d: string) => dayjs(d).format('YYYY-MM-DD HH:mm'),
},
{
title: 'Completed',
dataIndex: 'completedAt',
width: 150,
render: (d: string | null) => (d ? dayjs(d).format('YYYY-MM-DD HH:mm') : '-'),
},
{
title: 'Size',
dataIndex: 'sizeBytes',
width: 100,
render: (b: string | number | null) => formatSize(b),
},
{
title: 'Actions',
width: 100,
render: (_: unknown, record: BackupRow) => (
<Space size="small">
{record.status === 'COMPLETED' && (
<Button
icon={<CloudDownloadOutlined />}
size="small"
type="text"
onClick={() => handleDownload(record.id)}
/>
)}
<Popconfirm
title="Delete this backup?"
onConfirm={() => handleDelete(record.id)}
>
<Button icon={<DeleteOutlined />} size="small" type="text" danger />
</Popconfirm>
</Space>
),
},
]}
/>
</div>
);
}
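The `formatSize` helper above picks a unit by successive 1024 thresholds, with one decimal for KB/MB and two for GB. Copied here as a self-contained sketch so its rounding behavior can be checked in isolation:

```typescript
// Self-contained copy of the Backups page's formatSize helper:
// 1024-based units, '-' for missing values, string inputs parsed as base-10.
function formatSize(bytes: string | number | null | undefined): string {
  if (bytes == null) return '-';
  const n = typeof bytes === 'string' ? parseInt(bytes, 10) : bytes;
  if (n < 1024) return `${n} B`;
  if (n < 1024 * 1024) return `${(n / 1024).toFixed(1)} KB`;
  if (n < 1024 * 1024 * 1024) return `${(n / 1024 / 1024).toFixed(1)} MB`;
  return `${(n / 1024 / 1024 / 1024).toFixed(2)} GB`;
}
```

Accepting `string | number` matters because `sizeBytes` may arrive as a string when the API serializes BigInt columns.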

View File

@ -0,0 +1,489 @@
import { useState, useEffect, useCallback } from 'react';
import {
Typography,
Steps,
Form,
Input,
Switch,
Button,
Card,
Space,
Descriptions,
Tag,
Alert,
InputNumber,
Result,
Progress,
} from 'antd';
import { SyncOutlined } from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import { api } from '@/lib/api';
import type { Instance } from '@/types/api';
interface WizardData {
name: string;
slug: string;
domain: string;
adminEmail: string;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
smtpHost: string;
smtpPort: number;
smtpUser: string;
smtpFrom: string;
emailTestMode: boolean;
enablePangolin: boolean;
notes: string;
}
const defaultData: WizardData = {
name: '',
slug: '',
domain: '',
adminEmail: '',
enableMedia: false,
enableChat: false,
enableGancio: false,
enableListmonk: false,
enableMonitoring: false,
enableDevTools: false,
enablePayments: false,
smtpHost: '',
smtpPort: 587,
smtpUser: '',
smtpFrom: '',
emailTestMode: true,
enablePangolin: false,
notes: '',
};
/** Parse provisioning step from statusMessage like "Step 5/13: Rendering configuration files" */
function parseProvisioningStep(msg: string | null | undefined): { current: number; total: number; description: string } | null {
if (!msg) return null;
const match = msg.match(/Step (\d+)\/(\d+): (.+)/);
if (match) {
return { current: parseInt(match[1], 10), total: parseInt(match[2], 10), description: match[3] };
}
return null;
}
export default function CreateWizardPage() {
const navigate = useNavigate();
const [current, setCurrent] = useState(0);
const [data, setData] = useState<WizardData>(defaultData);
const [creating, setCreating] = useState(false);
const [createdId, setCreatedId] = useState<string | null>(null);
const [error, setError] = useState<string | null>(null);
const [form] = Form.useForm();
// Provisioning tracking
const [provisioningInstance, setProvisioningInstance] = useState<Instance | null>(null);
const update = (fields: Partial<WizardData>) => setData((prev) => ({ ...prev, ...fields }));
const autoSlug = (name: string) => {
return name
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-|-$/g, '');
};
const handleCreate = async () => {
setCreating(true);
setError(null);
try {
const { data: result } = await api.post('/instances', data);
setCreatedId(result.data.id);
setProvisioningInstance(result.data);
} catch (err: unknown) {
const resp = (err as { response?: { data?: { error?: { message?: string } } } })?.response
?.data?.error;
setError(resp?.message || 'Failed to create instance');
} finally {
setCreating(false);
}
};
// Poll for provisioning progress
const pollStatus = useCallback(async () => {
if (!createdId) return;
try {
const { data: result } = await api.get(`/instances/${createdId}`);
setProvisioningInstance(result.data);
} catch {
// Silently fail — will retry on next poll
}
}, [createdId]);
useEffect(() => {
if (!createdId) return;
if (provisioningInstance?.status === 'RUNNING' || provisioningInstance?.status === 'STOPPED') return;
// Don't poll if error (user needs to decide what to do)
if (provisioningInstance?.status === 'ERROR') return;
const interval = setInterval(pollStatus, 3_000);
return () => clearInterval(interval);
}, [createdId, provisioningInstance?.status, pollStatus]);
const handleRetryProvision = async () => {
if (!createdId) return;
try {
await api.post(`/instances/${createdId}/provision`);
setProvisioningInstance((prev) =>
prev ? { ...prev, status: 'PROVISIONING', statusMessage: 'Retrying provisioning...' } : prev
);
} catch {
setError('Failed to retry provisioning');
}
};
const steps = [
{
title: 'Basic Info',
content: (
<Form layout="vertical" form={form}>
<Form.Item label="Campaign Name" required>
<Input
value={data.name}
onChange={(e) => {
update({ name: e.target.value, slug: autoSlug(e.target.value) });
}}
placeholder="Better Edmonton"
/>
</Form.Item>
<Form.Item label="Slug" required help="URL-safe identifier (auto-generated)">
<Input
value={data.slug}
onChange={(e) => update({ slug: e.target.value })}
placeholder="better-edmonton"
/>
</Form.Item>
<Form.Item label="Domain" required>
<Input
value={data.domain}
onChange={(e) => update({ domain: e.target.value })}
placeholder="betteredmonton.org"
/>
</Form.Item>
<Form.Item label="Admin Email" required>
<Input
value={data.adminEmail}
onChange={(e) => update({ adminEmail: e.target.value })}
placeholder="admin@betteredmonton.org"
type="email"
/>
</Form.Item>
<Form.Item label="Notes">
<Input.TextArea
value={data.notes}
onChange={(e) => update({ notes: e.target.value })}
rows={3}
placeholder="Optional notes about this instance..."
/>
</Form.Item>
</Form>
),
},
{
title: 'Features',
content: (
<Space direction="vertical" size="large" style={{ width: '100%' }}>
<Card size="small" title="Media Manager">
<Space>
<Switch checked={data.enableMedia} onChange={(v) => update({ enableMedia: v })} />
<span>Video library, uploads, gallery, analytics</span>
</Space>
</Card>
<Card size="small" title="Newsletter (Listmonk)">
<Space>
<Switch checked={data.enableListmonk} onChange={(v) => update({ enableListmonk: v })} />
<span>Email newsletters, subscriber management</span>
</Space>
</Card>
<Card size="small" title="Events (Gancio)">
<Space>
<Switch checked={data.enableGancio} onChange={(v) => update({ enableGancio: v })} />
<span>Community events calendar, shift sync</span>
</Space>
</Card>
<Card size="small" title="Chat (Rocket.Chat)">
<Space>
<Switch checked={data.enableChat} onChange={(v) => update({ enableChat: v })} />
<span>Team communication, channels</span>
</Space>
</Card>
<Card size="small" title="Monitoring">
<Space>
<Switch checked={data.enableMonitoring} onChange={(v) => update({ enableMonitoring: v })} />
<span>Prometheus, Grafana, alerts</span>
</Space>
</Card>
<Card size="small" title="Dev Tools">
<Space>
<Switch checked={data.enableDevTools} onChange={(v) => update({ enableDevTools: v })} />
<span>Code Server, Gitea, n8n, Homepage, Excalidraw</span>
</Space>
</Card>
<Card size="small" title="Payments">
<Space>
<Switch checked={data.enablePayments} onChange={(v) => update({ enablePayments: v })} />
<span>Vaultwarden (secrets vault, future)</span>
</Space>
</Card>
</Space>
),
},
{
title: 'Email',
content: (
<Form layout="vertical">
<Form.Item label="Email Mode">
<Space>
<Switch
checked={data.emailTestMode}
onChange={(v) => update({ emailTestMode: v })}
checkedChildren="Test Mode (MailHog)"
unCheckedChildren="Real SMTP"
/>
</Space>
</Form.Item>
{!data.emailTestMode && (
<>
<Form.Item label="SMTP Host">
<Input
value={data.smtpHost}
onChange={(e) => update({ smtpHost: e.target.value })}
placeholder="smtp.sendgrid.net"
/>
</Form.Item>
<Form.Item label="SMTP Port">
<InputNumber
value={data.smtpPort}
onChange={(v) => update({ smtpPort: v || 587 })}
style={{ width: 120 }}
/>
</Form.Item>
<Form.Item label="SMTP User">
<Input
value={data.smtpUser}
onChange={(e) => update({ smtpUser: e.target.value })}
placeholder="apikey"
/>
</Form.Item>
<Form.Item label="From Address">
<Input
value={data.smtpFrom}
onChange={(e) => update({ smtpFrom: e.target.value })}
placeholder="noreply@betteredmonton.org"
/>
</Form.Item>
</>
)}
</Form>
),
},
{
title: 'Tunnel',
content: (
<Space direction="vertical" size="large" style={{ width: '100%' }}>
<Alert
message="Pangolin Tunnel"
description="Enable to automatically create a Pangolin tunnel for external access. Requires Pangolin API credentials in CCP settings."
type="info"
showIcon
/>
<Card size="small" title="Enable Pangolin Tunnel">
<Space>
<Switch checked={data.enablePangolin} onChange={(v) => update({ enablePangolin: v })} />
<span>Auto-create site and resources on Pangolin</span>
</Space>
</Card>
{data.enablePangolin && (
<Alert
message="Subdomains will be created"
description={
<ul style={{ margin: 0, paddingLeft: 16 }}>
                <li>app.{data.domain || '...'} – Admin + Public</li>
                <li>api.{data.domain || '...'} – API</li>
                <li>docs.{data.domain || '...'} – Documentation</li>
                {data.enableMedia && <li>media.{data.domain || '...'} – Media API</li>}
                {data.enableListmonk && <li>listmonk.{data.domain || '...'} – Newsletters</li>}
                {data.enableGancio && <li>events.{data.domain || '...'} – Events</li>}
</ul>
}
type="info"
/>
)}
</Space>
),
},
{
title: 'Review',
content: (
<Space direction="vertical" size="large" style={{ width: '100%' }}>
{error && <Alert message={error} type="error" showIcon closable onClose={() => setError(null)} />}
<Descriptions bordered column={1} size="small">
<Descriptions.Item label="Name">{data.name}</Descriptions.Item>
<Descriptions.Item label="Slug">{data.slug}</Descriptions.Item>
<Descriptions.Item label="Domain">{data.domain}</Descriptions.Item>
<Descriptions.Item label="Admin Email">{data.adminEmail}</Descriptions.Item>
<Descriptions.Item label="Email Mode">
{data.emailTestMode ? 'Test (MailHog)' : `SMTP: ${data.smtpHost}:${data.smtpPort}`}
</Descriptions.Item>
<Descriptions.Item label="Features">
<Space wrap>
{data.enableMedia && <Tag color="blue">Media</Tag>}
{data.enableListmonk && <Tag color="green">Newsletter</Tag>}
{data.enableGancio && <Tag color="purple">Events</Tag>}
{data.enableChat && <Tag color="orange">Chat</Tag>}
{data.enableMonitoring && <Tag color="cyan">Monitoring</Tag>}
{data.enableDevTools && <Tag color="geekblue">Dev Tools</Tag>}
{data.enablePayments && <Tag color="gold">Payments</Tag>}
{!data.enableMedia && !data.enableListmonk && !data.enableGancio && !data.enableChat && !data.enableMonitoring && !data.enableDevTools && !data.enablePayments && (
<Typography.Text type="secondary">Core only</Typography.Text>
)}
</Space>
</Descriptions.Item>
<Descriptions.Item label="Pangolin Tunnel">
{data.enablePangolin ? 'Enabled' : 'Disabled'}
</Descriptions.Item>
</Descriptions>
<Alert
message="Secrets will be auto-generated"
description="Database passwords, JWT secrets, and encryption keys will be generated automatically and stored encrypted in the CCP database."
type="info"
showIcon
/>
</Space>
),
},
];
// ─── Provisioning Progress View ──────────────────────────────────
if (createdId && provisioningInstance) {
const inst = provisioningInstance;
const stepInfo = parseProvisioningStep(inst.statusMessage);
const isDone = inst.status === 'RUNNING';
const isError = inst.status === 'ERROR';
if (isDone) {
return (
<Result
status="success"
title="Instance Ready!"
subTitle={`${data.name} (${data.domain}) is running and ready to use.`}
extra={[
<Button key="view" type="primary" onClick={() => navigate(`/app/instances/${createdId}`)}>
View Instance
</Button>,
<Button
key="open"
onClick={() => window.open(`https://app.${data.domain}`, '_blank')}
>
Open Site
</Button>,
<Button key="list" onClick={() => navigate('/app/instances')}>
Back to List
</Button>,
]}
/>
);
}
if (isError) {
return (
<Result
status="error"
title="Provisioning Failed"
subTitle={inst.statusMessage || 'An error occurred during provisioning.'}
extra={[
<Button key="retry" type="primary" onClick={handleRetryProvision}>
Retry Provisioning
</Button>,
<Button key="view" onClick={() => navigate(`/app/instances/${createdId}`)}>
View Instance Details
</Button>,
<Button key="list" onClick={() => navigate('/app/instances')}>
Back to List
</Button>,
]}
/>
);
}
// Provisioning in progress
const percent = stepInfo ? Math.round((stepInfo.current / stepInfo.total) * 100) : 0;
return (
<div style={{ maxWidth: 600, margin: '0 auto', textAlign: 'center', paddingTop: 40 }}>
<SyncOutlined spin style={{ fontSize: 48, color: '#1890ff', marginBottom: 24 }} />
<Typography.Title level={3}>Provisioning {data.name}...</Typography.Title>
<Typography.Text type="secondary" style={{ display: 'block', marginBottom: 24 }}>
This may take several minutes. Please do not close this page.
</Typography.Text>
<Progress
percent={percent}
status="active"
          style={{ maxWidth: 400, margin: '0 auto 16px' }}
/>
{stepInfo && (
<Card size="small" style={{ textAlign: 'left', maxWidth: 400, margin: '0 auto' }}>
<Typography.Text strong>
Step {stepInfo.current} of {stepInfo.total}
</Typography.Text>
<br />
<Typography.Text type="secondary">{stepInfo.description}</Typography.Text>
</Card>
)}
{!stepInfo && inst.statusMessage && (
<Typography.Text type="secondary">{inst.statusMessage}</Typography.Text>
)}
<div style={{ marginTop: 24 }}>
<Button onClick={() => navigate(`/app/instances/${createdId}`)}>
View Instance Details
</Button>
</div>
</div>
);
}
const canProceed = () => {
if (current === 0) {
return data.name && data.slug && data.domain && data.adminEmail;
}
return true;
};
return (
<div style={{ maxWidth: 800, margin: '0 auto' }}>
<Typography.Title level={3}>Create New Instance</Typography.Title>
<Steps current={current} items={steps.map((s) => ({ title: s.title }))} style={{ marginBottom: 32 }} />
<Card>{steps[current].content}</Card>
<div style={{ marginTop: 24, display: 'flex', justifyContent: 'space-between' }}>
<Button disabled={current === 0} onClick={() => setCurrent(current - 1)}>
Previous
</Button>
{current < steps.length - 1 ? (
<Button type="primary" disabled={!canProceed()} onClick={() => setCurrent(current + 1)}>
Next
</Button>
) : (
<Button type="primary" loading={creating} disabled={!canProceed()} onClick={handleCreate}>
Create Instance
</Button>
)}
</div>
</div>
);
}
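The wizard above leans on two pure helpers that are easy to verify in isolation: `parseProvisioningStep` pulls progress out of a `"Step N/M: description"` status message, and the slug generator collapses non-alphanumeric runs into hyphens. Self-contained copies for illustration (the component's inline slug arrow function is named `autoSlug` here to match its local binding):

```typescript
// Copies of the wizard's two pure helpers, extracted for illustration.
function parseProvisioningStep(
  msg: string | null | undefined
): { current: number; total: number; description: string } | null {
  if (!msg) return null;
  const match = msg.match(/Step (\d+)\/(\d+): (.+)/);
  if (match) {
    return { current: parseInt(match[1], 10), total: parseInt(match[2], 10), description: match[3] };
  }
  return null; // non-step messages (e.g. "Queued") render as plain text
}

function autoSlug(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, '');      // trim a leading/trailing hyphen
}
```

Returning `null` for non-matching messages is what lets the progress view fall back to showing the raw `statusMessage`.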

View File

@ -0,0 +1,227 @@
import { useEffect, useState, useCallback, useRef } from 'react';
import { Typography, Row, Col, Button, Statistic, Card, Empty, Spin, message } from 'antd';
import {
PlusOutlined,
CloudServerOutlined,
CheckCircleOutlined,
WarningOutlined,
HeartOutlined,
ExclamationCircleOutlined,
SyncOutlined,
} from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import { api } from '@/lib/api';
import InstanceCard from '@/components/InstanceCard';
import type { Instance } from '@/types/api';
interface HealthOverviewItem {
id: string;
name: string;
slug: string;
domain: string;
status: string;
lastHealthCheck?: string;
health?: { status: string; healthyServices: number; totalServices: number } | null;
}
const AUTO_REFRESH_INTERVAL = 30_000;
export default function DashboardPage() {
const navigate = useNavigate();
const [instances, setInstances] = useState<Instance[]>([]);
const [healthOverview, setHealthOverview] = useState<HealthOverviewItem[]>([]);
const [loading, setLoading] = useState(true);
const [lastUpdated, setLastUpdated] = useState<Date | null>(null);
const [secondsAgo, setSecondsAgo] = useState(0);
const lastUpdatedRef = useRef<Date | null>(null);
const fetchInstances = useCallback(async () => {
try {
const { data } = await api.get('/instances');
setInstances(data.data);
} catch {
message.error('Failed to load instances');
} finally {
setLoading(false);
}
}, []);
const fetchHealthOverview = useCallback(async () => {
try {
const { data } = await api.get('/health/overview');
setHealthOverview(data.data);
} catch {
message.error('Failed to load health overview');
}
}, []);
const refreshAll = useCallback(async () => {
await Promise.all([fetchInstances(), fetchHealthOverview()]);
const now = new Date();
setLastUpdated(now);
lastUpdatedRef.current = now;
setSecondsAgo(0);
}, [fetchInstances, fetchHealthOverview]);
// Initial fetch
useEffect(() => {
refreshAll();
}, [refreshAll]);
// Auto-refresh every 30s
useEffect(() => {
const id = setInterval(refreshAll, AUTO_REFRESH_INTERVAL);
return () => clearInterval(id);
}, [refreshAll]);
// Tick the "seconds ago" display every second
useEffect(() => {
const id = setInterval(() => {
if (lastUpdatedRef.current) {
setSecondsAgo(Math.floor((Date.now() - lastUpdatedRef.current.getTime()) / 1000));
}
}, 1000);
return () => clearInterval(id);
}, []);
const handleStart = async (id: string) => {
try {
await api.post(`/instances/${id}/start`);
message.success('Instance started');
fetchInstances();
} catch {
message.error('Failed to start instance');
}
};
const handleStop = async (id: string) => {
try {
await api.post(`/instances/${id}/stop`);
message.success('Instance stopped');
fetchInstances();
} catch {
message.error('Failed to stop instance');
}
};
const running = instances.filter((i) => i.status === 'RUNNING').length;
const stopped = instances.filter((i) => i.status === 'STOPPED').length;
const errors = instances.filter((i) => i.status === 'ERROR').length;
// Health stats from overview
const healthyInstances = healthOverview.filter((h) => h.health?.status === 'HEALTHY').length;
const degradedInstances = healthOverview.filter((h) => h.health?.status === 'DEGRADED').length;
if (loading) {
return (
<div style={{ display: 'flex', justifyContent: 'center', padding: 100 }}>
<Spin size="large" />
</div>
);
}
return (
<div>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 24 }}>
<div style={{ display: 'flex', alignItems: 'center', gap: 16 }}>
<Typography.Title level={3} style={{ margin: 0 }}>
Dashboard
</Typography.Title>
{lastUpdated && (
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
<SyncOutlined style={{ marginRight: 4 }} />
Updated {secondsAgo < 5 ? 'just now' : `${secondsAgo}s ago`}
</Typography.Text>
)}
</div>
<Button
type="primary"
icon={<PlusOutlined />}
onClick={() => navigate('/app/instances/new')}
>
Create Instance
</Button>
</div>
<Row gutter={[16, 16]} style={{ marginBottom: 24 }}>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic
title="Total Instances"
value={instances.length}
prefix={<CloudServerOutlined />}
/>
</Card>
</Col>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic
title="Running"
value={running}
valueStyle={{ color: '#52c41a' }}
prefix={<CheckCircleOutlined />}
/>
</Card>
</Col>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic
title="Healthy"
value={healthyInstances}
valueStyle={{ color: '#52c41a' }}
prefix={<HeartOutlined />}
/>
</Card>
</Col>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic
title="Degraded"
value={degradedInstances}
valueStyle={degradedInstances > 0 ? { color: '#faad14' } : undefined}
prefix={degradedInstances > 0 ? <ExclamationCircleOutlined /> : undefined}
/>
</Card>
</Col>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic title="Stopped" value={stopped} />
</Card>
</Col>
<Col xs={12} sm={6} lg={4}>
<Card>
<Statistic
title="Errors"
value={errors}
valueStyle={errors > 0 ? { color: '#ff4d4f' } : undefined}
prefix={errors > 0 ? <WarningOutlined /> : undefined}
/>
</Card>
</Col>
</Row>
{instances.length === 0 ? (
<Empty
description="No instances yet"
style={{ padding: 60 }}
>
<Button type="primary" onClick={() => navigate('/app/instances/new')}>
Create Your First Instance
</Button>
</Empty>
) : (
<Row gutter={[16, 16]}>
{instances.map((instance) => (
<Col xs={24} sm={12} lg={8} xl={6} key={instance.id}>
<InstanceCard
instance={instance}
onStart={handleStart}
onStop={handleStop}
/>
</Col>
))}
</Row>
)}
</div>
);
}
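The dashboard above derives its statistic cards by filtering the instance list once per status. The same counts can come from a single pass; a sketch under a hypothetical `tallyStatuses` name (not a CCP export):

```typescript
// Count instances per status in one pass, mirroring the dashboard's
// running/stopped/errors derivations (which each filter separately).
function tallyStatuses(statuses: string[]): Record<string, number> {
  return statuses.reduce<Record<string, number>>((acc, s) => {
    acc[s] = (acc[s] ?? 0) + 1;
    return acc;
  }, {});
}
```

At dashboard scale (dozens of instances) the separate filters are fine; the single pass mainly reads better if more statuses are added.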

File diff suppressed because it is too large

View File

@ -0,0 +1,283 @@
import { useEffect, useState, useMemo } from 'react';
import { Typography, Table, Tag, Button, Space, message, Popconfirm, Input, Select } from 'antd';
import {
PlusOutlined,
PlayCircleOutlined,
PauseCircleOutlined,
DeleteOutlined,
EyeOutlined,
ImportOutlined,
SearchOutlined,
} from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import { api } from '@/lib/api';
import type { Instance } from '@/types/api';
import DiscoverInstancesDrawer from '@/components/DiscoverInstancesDrawer';
const statusColors: Record<string, string> = {
RUNNING: 'green',
STOPPED: 'default',
PROVISIONING: 'processing',
ERROR: 'red',
DESTROYING: 'orange',
};
export default function InstanceListPage() {
const navigate = useNavigate();
const [instances, setInstances] = useState<Instance[]>([]);
const [loading, setLoading] = useState(true);
const [actionLoading, setActionLoading] = useState<string | null>(null);
const [discoverOpen, setDiscoverOpen] = useState(false);
const [search, setSearch] = useState('');
const [statusFilter, setStatusFilter] = useState<string>('ALL');
const filteredInstances = useMemo(() => {
let result = instances;
if (statusFilter !== 'ALL') {
result = result.filter((i) => i.status === statusFilter);
}
if (search.trim()) {
const q = search.toLowerCase().trim();
result = result.filter(
(i) =>
i.name.toLowerCase().includes(q) ||
i.domain.toLowerCase().includes(q) ||
i.slug.toLowerCase().includes(q)
);
}
return result;
}, [instances, search, statusFilter]);
const fetchInstances = async () => {
try {
const { data } = await api.get('/instances');
setInstances(data.data);
} catch {
message.error('Failed to load instances');
} finally {
setLoading(false);
}
};
useEffect(() => {
fetchInstances();
}, []);
const handleDelete = async (id: string) => {
try {
const { data } = await api.delete(`/instances/${id}`);
message.success(data?.message || 'Instance deleted');
fetchInstances();
} catch {
message.error('Failed to delete instance');
}
};
const handleStart = async (id: string) => {
setActionLoading(id);
try {
await api.post(`/instances/${id}/start`);
message.success('Instance started');
fetchInstances();
} catch (err: unknown) {
const resp = (err as { response?: { data?: { error?: { message?: string } } } })?.response
?.data?.error;
message.error(resp?.message || 'Failed to start instance');
} finally {
setActionLoading(null);
}
};
const handleStop = async (id: string) => {
setActionLoading(id);
try {
await api.post(`/instances/${id}/stop`);
message.success('Instance stopped');
fetchInstances();
} catch (err: unknown) {
const resp = (err as { response?: { data?: { error?: { message?: string } } } })?.response
?.data?.error;
message.error(resp?.message || 'Failed to stop instance');
} finally {
setActionLoading(null);
}
};
const columns = [
{
title: 'Name',
dataIndex: 'name',
key: 'name',
render: (name: string, record: Instance) => (
<Space size="small">
<a onClick={() => navigate(`/app/instances/${record.id}`)}>{name}</a>
{record.isRegistered && <Tag color="purple">External</Tag>}
</Space>
),
},
{
title: 'Domain',
dataIndex: 'domain',
key: 'domain',
render: (domain: string) => (
<Typography.Text copyable={{ text: `https://app.${domain}` }}>
{domain}
</Typography.Text>
),
},
{
title: 'Status',
dataIndex: 'status',
key: 'status',
render: (status: string) => <Tag color={statusColors[status]}>{status}</Tag>,
},
{
title: 'Features',
key: 'features',
render: (_: unknown, record: Instance) => (
<Space size="small" wrap>
{record.enableMedia && <Tag>Media</Tag>}
{record.enableListmonk && <Tag>Newsletter</Tag>}
{record.enableGancio && <Tag>Events</Tag>}
{record.enableChat && <Tag>Chat</Tag>}
</Space>
),
},
{
title: 'Ports',
key: 'ports',
render: (_: unknown, record: Instance) => {
const ports = record.portConfig as Record<string, number>;
return (
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
API:{ports.api} Admin:{ports.admin}
</Typography.Text>
);
},
},
{
title: 'Created',
dataIndex: 'createdAt',
key: 'createdAt',
render: (date: string) => new Date(date).toLocaleDateString(),
},
{
title: 'Actions',
key: 'actions',
render: (_: unknown, record: Instance) => {
const isLoading = actionLoading === record.id;
const canStart = record.status === 'STOPPED' || record.status === 'ERROR';
const canStop = record.status === 'RUNNING';
return (
<Space>
<Button
type="text"
icon={<EyeOutlined />}
onClick={() => navigate(`/app/instances/${record.id}`)}
/>
{canStop ? (
<Popconfirm
title="Stop this instance?"
onConfirm={() => handleStop(record.id)}
>
<Button
type="text"
icon={<PauseCircleOutlined />}
loading={isLoading}
/>
</Popconfirm>
) : (
<Button
type="text"
icon={<PlayCircleOutlined />}
disabled={!canStart}
loading={isLoading}
onClick={() => handleStart(record.id)}
/>
)}
<Popconfirm
title={record.isRegistered ? 'Unregister this instance?' : 'Delete this instance?'}
description={
record.isRegistered
? 'This removes the instance from CCP monitoring. No containers or data will be affected.'
: 'This will stop all containers and delete all data.'
}
onConfirm={() => handleDelete(record.id)}
okText={record.isRegistered ? 'Unregister' : 'Delete'}
okType="danger"
>
<Button type="text" danger icon={<DeleteOutlined />} />
</Popconfirm>
</Space>
);
},
},
];
return (
<div>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: 24 }}>
<Typography.Title level={3} style={{ margin: 0 }}>
Instances
</Typography.Title>
<Space>
<Button
icon={<SearchOutlined />}
onClick={() => setDiscoverOpen(true)}
>
Discover Instances
</Button>
<Button
icon={<ImportOutlined />}
onClick={() => navigate('/app/instances/register')}
>
Register Existing
</Button>
<Button
type="primary"
icon={<PlusOutlined />}
onClick={() => navigate('/app/instances/new')}
>
Create Instance
</Button>
</Space>
</div>
<Space style={{ marginBottom: 16 }} wrap>
<Input.Search
placeholder="Search name, domain, or slug..."
allowClear
style={{ width: 280 }}
value={search}
onChange={(e) => setSearch(e.target.value)}
/>
<Select
value={statusFilter}
onChange={setStatusFilter}
style={{ width: 160 }}
options={[
{ label: 'All Statuses', value: 'ALL' },
{ label: 'Running', value: 'RUNNING' },
{ label: 'Stopped', value: 'STOPPED' },
{ label: 'Error', value: 'ERROR' },
{ label: 'Provisioning', value: 'PROVISIONING' },
]}
/>
</Space>
<Table
dataSource={filteredInstances}
columns={columns}
rowKey="id"
loading={loading}
pagination={{ pageSize: 20 }}
/>
<DiscoverInstancesDrawer
open={discoverOpen}
onClose={() => setDiscoverOpen(false)}
onImported={fetchInstances}
/>
</div>
);
}
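The `filteredInstances` array feeding the table is derived elsewhere in this file; a plausible sketch of that derivation from the `search` and `statusFilter` controls above (names taken from the JSX, exact logic assumed) would be:

```typescript
// Hypothetical row shape matching the fields the search placeholder mentions.
interface InstanceRow {
  name: string;
  domain: string;
  slug: string;
  status: 'PROVISIONING' | 'RUNNING' | 'STOPPED' | 'ERROR' | 'DESTROYING';
}

// Case-insensitive substring match on name/domain/slug, plus an exact
// status filter where 'ALL' disables it — mirroring the Select options.
function filterInstances(
  instances: InstanceRow[],
  search: string,
  statusFilter: string,
): InstanceRow[] {
  const q = search.trim().toLowerCase();
  return instances.filter((i) => {
    const matchesSearch =
      !q ||
      i.name.toLowerCase().includes(q) ||
      i.domain.toLowerCase().includes(q) ||
      i.slug.toLowerCase().includes(q);
    const matchesStatus = statusFilter === 'ALL' || i.status === statusFilter;
    return matchesSearch && matchesStatus;
  });
}
```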

View File

@ -0,0 +1,76 @@
import { useState } from 'react';
import { Card, Form, Input, Button, Typography, Alert, Space } from 'antd';
import { LockOutlined, MailOutlined } from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import { useAuthStore } from '@/stores/auth.store';
export default function LoginPage() {
const navigate = useNavigate();
const { login, error, isLoading } = useAuthStore();
const [form] = Form.useForm();
const [localError, setLocalError] = useState<string | null>(null);
const handleSubmit = async (values: { email: string; password: string }) => {
setLocalError(null);
try {
await login(values.email, values.password);
navigate('/app');
} catch {
// Error is set in store
}
};
const displayError = localError || error;
return (
<div
style={{
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
minHeight: '100vh',
background: '#141414',
}}
>
<Card style={{ width: 400, maxWidth: '90vw' }}>
<Space direction="vertical" size="large" style={{ width: '100%' }}>
<div style={{ textAlign: 'center' }}>
<Typography.Title level={3} style={{ margin: 0 }}>
Changemaker Control Panel
</Typography.Title>
<Typography.Text type="secondary">Sign in to manage instances</Typography.Text>
</div>
{displayError && (
<Alert message={displayError} type="error" showIcon closable onClose={() => setLocalError(null)} />
)}
<Form form={form} onFinish={handleSubmit} layout="vertical" size="large">
<Form.Item
name="email"
rules={[
{ required: true, message: 'Email is required' },
{ type: 'email', message: 'Enter a valid email' },
]}
>
<Input prefix={<MailOutlined />} placeholder="Email" autoFocus />
</Form.Item>
<Form.Item
name="password"
rules={[{ required: true, message: 'Password is required' }]}
>
<Input.Password prefix={<LockOutlined />} placeholder="Password" />
</Form.Item>
<Form.Item>
<Button type="primary" htmlType="submit" loading={isLoading} block>
Sign In
</Button>
</Form.Item>
</Form>
</Space>
</Card>
</div>
);
}

View File

@ -0,0 +1,298 @@
import { useState } from 'react';
import {
Typography,
Form,
Input,
InputNumber,
Switch,
Button,
Card,
Space,
Alert,
Result,
message,
} from 'antd';
import { ArrowLeftOutlined } from '@ant-design/icons';
import { useNavigate } from 'react-router-dom';
import { api } from '@/lib/api';
interface RegisterData {
name: string;
slug: string;
domain: string;
basePath: string;
composeProject: string;
portConfig: { api: number; admin: number; postgres: number; nginx: number };
adminEmail: string;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
notes: string;
}
const defaultData: RegisterData = {
name: '',
slug: '',
domain: '',
basePath: '/home/bunker-admin/changemaker.lite',
composeProject: 'changemakerlite',
portConfig: { api: 4002, admin: 3002, postgres: 5433, nginx: 80 },
adminEmail: '',
enableMedia: false,
enableChat: false,
enableGancio: false,
enableListmonk: false,
enableMonitoring: false,
enableDevTools: false,
enablePayments: false,
notes: '',
};
export default function RegisterInstancePage() {
const navigate = useNavigate();
const [data, setData] = useState<RegisterData>(defaultData);
const [submitting, setSubmitting] = useState(false);
const [error, setError] = useState<string | null>(null);
const [createdId, setCreatedId] = useState<string | null>(null);
const update = (fields: Partial<RegisterData>) =>
setData((prev) => ({ ...prev, ...fields }));
const updatePort = (key: keyof RegisterData['portConfig'], value: number | null) =>
setData((prev) => ({
...prev,
portConfig: { ...prev.portConfig, [key]: value || 0 },
}));
const autoSlug = (name: string) =>
name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '');
const handleSubmit = async () => {
setSubmitting(true);
setError(null);
try {
const payload = {
...data,
adminEmail: data.adminEmail || 'admin@localhost',
};
const { data: result } = await api.post('/instances/register', payload);
setCreatedId(result.data.id);
message.success('Instance registered successfully');
} catch (err: unknown) {
const resp = (err as { response?: { data?: { error?: { message?: string } } } })?.response
?.data?.error;
setError(resp?.message || 'Failed to register instance');
} finally {
setSubmitting(false);
}
};
const canSubmit = data.name && data.slug && data.domain && data.basePath && data.composeProject;
if (createdId) {
return (
<Result
status="success"
title="Instance Registered"
subTitle={`${data.name} (${data.domain}) has been registered for monitoring.`}
extra={[
<Button key="view" type="primary" onClick={() => navigate(`/app/instances/${createdId}`)}>
View Instance
</Button>,
<Button key="list" onClick={() => navigate('/app/instances')}>
Back to List
</Button>,
]}
/>
);
}
return (
<div style={{ maxWidth: 700, margin: '0 auto' }}>
<Space style={{ marginBottom: 24 }}>
<Button icon={<ArrowLeftOutlined />} onClick={() => navigate('/app/instances')} />
<Typography.Title level={3} style={{ margin: 0 }}>
Register Existing Instance
</Typography.Title>
</Space>
<Alert
message="Register an external CML instance"
description="Add an existing Changemaker Lite deployment that was set up outside the control panel. CCP will monitor it (health checks, services, logs) but will not manage its secrets, provisioning, or backups."
type="info"
showIcon
style={{ marginBottom: 24 }}
/>
{error && (
<Alert
message={error}
type="error"
showIcon
closable
onClose={() => setError(null)}
style={{ marginBottom: 16 }}
/>
)}
<Space direction="vertical" size="large" style={{ width: '100%' }}>
<Card title="Instance Info" size="small">
<Form layout="vertical">
<Form.Item label="Name" required>
<Input
value={data.name}
onChange={(e) => update({ name: e.target.value, slug: autoSlug(e.target.value) })}
placeholder="Better Edmonton"
/>
</Form.Item>
<Form.Item label="Slug" required help="URL-safe identifier (auto-generated from name)">
<Input
value={data.slug}
onChange={(e) => update({ slug: e.target.value })}
placeholder="betteredmonton"
/>
</Form.Item>
<Form.Item label="Domain" required>
<Input
value={data.domain}
onChange={(e) => update({ domain: e.target.value })}
placeholder="betteredmonton.org"
/>
</Form.Item>
<Form.Item label="Admin Email">
<Input
value={data.adminEmail}
onChange={(e) => update({ adminEmail: e.target.value })}
placeholder="admin@betteredmonton.org"
type="email"
/>
</Form.Item>
</Form>
</Card>
<Card title="Docker Configuration" size="small">
<Form layout="vertical">
<Form.Item
label="Base Path"
required
help="Absolute path to the directory containing docker-compose.yml"
>
<Input
value={data.basePath}
onChange={(e) => update({ basePath: e.target.value })}
placeholder="/home/bunker-admin/changemaker.lite"
/>
</Form.Item>
<Form.Item
label="Compose Project"
required
help="Docker Compose project name (from COMPOSE_PROJECT_NAME or directory name)"
>
<Input
value={data.composeProject}
onChange={(e) => update({ composeProject: e.target.value })}
placeholder="changemakerlite"
/>
</Form.Item>
</Form>
</Card>
<Card title="Port Mapping" size="small">
<Form layout="vertical">
<Space wrap size="large">
<Form.Item label="API Port">
<InputNumber
value={data.portConfig.api}
onChange={(v) => updatePort('api', v)}
min={1}
max={65535}
style={{ width: 120 }}
/>
</Form.Item>
<Form.Item label="Admin Port">
<InputNumber
value={data.portConfig.admin}
onChange={(v) => updatePort('admin', v)}
min={1}
max={65535}
style={{ width: 120 }}
/>
</Form.Item>
<Form.Item label="PostgreSQL Port">
<InputNumber
value={data.portConfig.postgres}
onChange={(v) => updatePort('postgres', v)}
min={1}
max={65535}
style={{ width: 120 }}
/>
</Form.Item>
<Form.Item label="Nginx Port">
<InputNumber
value={data.portConfig.nginx}
onChange={(v) => updatePort('nginx', v)}
min={1}
max={65535}
style={{ width: 120 }}
/>
</Form.Item>
</Space>
</Form>
</Card>
<Card title="Features (informational)" size="small">
<Space direction="vertical" style={{ width: '100%' }}>
<Space>
<Switch checked={data.enableMedia} onChange={(v) => update({ enableMedia: v })} />
<span>Media Manager</span>
</Space>
<Space>
<Switch checked={data.enableListmonk} onChange={(v) => update({ enableListmonk: v })} />
<span>Newsletter (Listmonk)</span>
</Space>
<Space>
<Switch checked={data.enableGancio} onChange={(v) => update({ enableGancio: v })} />
<span>Events (Gancio)</span>
</Space>
<Space>
<Switch checked={data.enableChat} onChange={(v) => update({ enableChat: v })} />
<span>Chat</span>
</Space>
<Space>
<Switch checked={data.enableMonitoring} onChange={(v) => update({ enableMonitoring: v })} />
<span>Monitoring</span>
</Space>
<Space>
<Switch checked={data.enableDevTools} onChange={(v) => update({ enableDevTools: v })} />
<span>Dev Tools (Code Server, Gitea, n8n, Homepage, Excalidraw)</span>
</Space>
<Space>
<Switch checked={data.enablePayments} onChange={(v) => update({ enablePayments: v })} />
<span>Payments (Vaultwarden)</span>
</Space>
</Space>
</Card>
<Card title="Notes" size="small">
<Input.TextArea
value={data.notes}
onChange={(e) => update({ notes: e.target.value })}
rows={3}
placeholder="Optional notes about this instance..."
/>
</Card>
<div style={{ display: 'flex', justifyContent: 'flex-end', gap: 12 }}>
<Button onClick={() => navigate('/app/instances')}>Cancel</Button>
<Button type="primary" loading={submitting} disabled={!canSubmit} onClick={handleSubmit}>
Verify & Register
</Button>
</div>
</Space>
</div>
);
}

View File

@ -0,0 +1,109 @@
import { useEffect, useState } from 'react';
import { Typography, Card, Form, Input, Button, Space, message, Descriptions } from 'antd';
import { api } from '@/lib/api';
export default function SettingsPage() {
const [settings, setSettings] = useState<Record<string, unknown>>({});
const [loading, setLoading] = useState(true);
const [savingPangolin, setSavingPangolin] = useState(false);
const [savingDefaults, setSavingDefaults] = useState(false);
useEffect(() => {
api.get('/settings').then(({ data }) => {
setSettings(data.data);
setLoading(false);
}).catch(() => {
message.error('Failed to load settings');
setLoading(false);
});
}, []);
const saveMultipleSettings = async (keys: string[], setSaving: (v: boolean) => void) => {
setSaving(true);
try {
await Promise.all(keys.map((key) => api.put(`/settings/${key}`, { value: settings[key] })));
message.success('Settings saved');
} catch {
message.error('Failed to save some settings');
} finally {
setSaving(false);
}
};
return (
<div>
<Typography.Title level={3}>Settings</Typography.Title>
<Space direction="vertical" size="large" style={{ width: '100%', maxWidth: 800 }}>
<Card title="Port Allocation Ranges" loading={loading}>
<Descriptions bordered column={1} size="small">
<Descriptions.Item label="API Ports">14000 - 14999</Descriptions.Item>
<Descriptions.Item label="Admin Ports">13000 - 13999</Descriptions.Item>
<Descriptions.Item label="PostgreSQL Ports">15400 - 15499</Descriptions.Item>
<Descriptions.Item label="Nginx Ports">10000 - 10999</Descriptions.Item>
</Descriptions>
<Typography.Text type="secondary" style={{ display: 'block', marginTop: 8 }}>
Port ranges are configured via environment variables and cannot be changed at runtime.
</Typography.Text>
</Card>
<Card title="Pangolin API Configuration" loading={loading}>
<Form layout="vertical">
<Form.Item label="Pangolin API URL">
<Input
value={(settings.pangolinApiUrl as string) || ''}
onChange={(e) => setSettings({ ...settings, pangolinApiUrl: e.target.value })}
placeholder="https://api.pangolin.example.com/v1"
/>
</Form.Item>
<Form.Item label="Pangolin API Key">
<Input.Password
value={(settings.pangolinApiKey as string) || ''}
onChange={(e) => setSettings({ ...settings, pangolinApiKey: e.target.value })}
placeholder="API key"
/>
</Form.Item>
<Form.Item label="Pangolin Org ID">
<Input
value={(settings.pangolinOrgId as string) || ''}
onChange={(e) => setSettings({ ...settings, pangolinOrgId: e.target.value })}
placeholder="Organization ID"
/>
</Form.Item>
<Button
type="primary"
loading={savingPangolin}
onClick={() => saveMultipleSettings(['pangolinApiUrl', 'pangolinApiKey', 'pangolinOrgId'], setSavingPangolin)}
>
Save Pangolin Settings
</Button>
</Form>
</Card>
<Card title="Instance Defaults" loading={loading}>
<Form layout="vertical">
<Form.Item label="Default Git Branch">
<Input
value={(settings.defaultGitBranch as string) || 'v2'}
onChange={(e) => setSettings({ ...settings, defaultGitBranch: e.target.value })}
/>
</Form.Item>
<Form.Item label="Instances Base Path">
<Input
value={(settings.instancesBasePath as string) || ''}
onChange={(e) => setSettings({ ...settings, instancesBasePath: e.target.value })}
/>
</Form.Item>
<Button
type="primary"
loading={savingDefaults}
onClick={() => saveMultipleSettings(['defaultGitBranch', 'instancesBasePath'], setSavingDefaults)}
>
Save Defaults
</Button>
</Form>
</Card>
</Space>
</div>
);
}
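The ranges shown in the Port Allocation card correspond to the `PORT_RANGE_*` environment variables. The allocator itself lives server-side and is not shown here; a hypothetical sketch of allocating the next free host port within such a range, skipping ports already recorded in the `port_allocations` table, might look like:

```typescript
// Scan a PORT_RANGE_*_START..END window and return the first port not in
// the taken set; throws when the range is exhausted.
function allocatePort(start: number, end: number, taken: Set<number>): number {
  for (let port = start; port <= end; port++) {
    if (!taken.has(port)) return port;
  }
  throw new Error(`No free port in range ${start}-${end}`);
}
```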

View File

@ -0,0 +1,142 @@
import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import { api, registerAuthCallbacks } from '@/lib/api';
interface User {
id: string;
email: string;
name: string;
role: string;
}
interface AuthState {
user: User | null;
accessToken: string | null;
refreshToken: string | null;
isAuthenticated: boolean;
isLoading: boolean;
error: string | null;
}
interface AuthActions {
login: (email: string, password: string) => Promise<void>;
logout: () => Promise<void>;
refresh: () => Promise<void>;
hydrate: () => Promise<void>;
clearAuth: () => void;
setTokens: (accessToken: string, refreshToken: string) => void;
}
export const useAuthStore = create<AuthState & AuthActions>()(
persist(
(set, get) => ({
user: null,
accessToken: null,
refreshToken: null,
isAuthenticated: false,
isLoading: true,
error: null,
login: async (email: string, password: string) => {
set({ error: null, isLoading: true });
try {
const { data } = await api.post('/auth/login', { email, password });
set({
user: data.user,
accessToken: data.accessToken || null,
refreshToken: data.refreshToken || null,
isAuthenticated: true,
isLoading: false,
});
} catch (err: unknown) {
const resp = (err as { response?: { data?: { error?: { message?: string } } } })
?.response?.data?.error;
set({ error: resp?.message || 'Login failed', isLoading: false });
throw err;
}
},
logout: async () => {
const { refreshToken } = get();
try {
if (refreshToken) {
await api.post('/auth/logout', { refreshToken });
}
} catch {
// Ignore logout errors
}
get().clearAuth();
},
refresh: async () => {
const { refreshToken } = get();
if (!refreshToken) {
get().clearAuth();
return;
}
try {
const { data } = await api.post('/auth/refresh', { refreshToken });
set({
user: data.user,
accessToken: data.accessToken || null,
refreshToken: data.refreshToken || null,
isAuthenticated: true,
});
} catch {
get().clearAuth();
}
},
hydrate: async () => {
const { accessToken } = get();
if (!accessToken) {
set({ isLoading: false });
return;
}
try {
const { data } = await api.get('/auth/me');
set({ user: data.user, isAuthenticated: true, isLoading: false });
} catch {
get().clearAuth();
}
},
clearAuth: () => {
set({
user: null,
accessToken: null,
refreshToken: null,
isAuthenticated: false,
isLoading: false,
error: null,
});
},
setTokens: (accessToken: string, refreshToken: string) => {
set({ accessToken, refreshToken });
},
}),
{
name: 'ccp-auth',
partialize: (state) => ({
accessToken: state.accessToken,
refreshToken: state.refreshToken,
}),
}
)
);
// Register callbacks to break circular dependency
registerAuthCallbacks({
getTokens: () => {
const state = useAuthStore.getState();
return { accessToken: state.accessToken, refreshToken: state.refreshToken };
},
onTokenRefresh: (accessToken, refreshToken) => {
useAuthStore.getState().setTokens(accessToken, refreshToken);
},
onAuthFailure: () => {
useAuthStore.getState().clearAuth();
window.location.href = '/login';
},
});
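The `@/lib/api` module is not part of this diff, so the counterpart of `registerAuthCallbacks` is assumed; a minimal sketch of how the API side could hold the callbacks and read the token lazily (hypothetical names and shapes, matching the callback object registered above):

```typescript
// Hypothetical registry in @/lib/api: the auth store registers these at
// module load, so api.ts never imports auth.store — breaking the cycle.
interface AuthCallbacks {
  getTokens: () => { accessToken: string | null; refreshToken: string | null };
  onTokenRefresh: (accessToken: string, refreshToken: string) => void;
  onAuthFailure: () => void;
}

let callbacks: AuthCallbacks | null = null;

export function registerAuthCallbacks(cb: AuthCallbacks): void {
  callbacks = cb;
}

// A request interceptor would call this per-request, reading the token
// lazily so a refreshed token is always picked up.
export function authHeader(): Record<string, string> {
  const token = callbacks?.getTokens().accessToken;
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```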

View File

@ -0,0 +1,135 @@
export interface Instance {
id: string;
slug: string;
name: string;
domain: string;
status: 'PROVISIONING' | 'RUNNING' | 'STOPPED' | 'ERROR' | 'DESTROYING';
statusMessage?: string;
basePath: string;
composeProject: string;
gitBranch: string;
gitCommit?: string;
portConfig: Record<string, number>;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
isRegistered: boolean;
adminEmail: string;
pangolinSiteId?: string;
pangolinNewtId?: string;
smtpHost?: string;
smtpPort?: number;
smtpUser?: string;
smtpFrom?: string;
emailTestMode: boolean;
notes?: string;
createdAt: string;
updatedAt: string;
lastHealthCheck?: string;
portAllocations?: PortAllocation[];
healthChecks?: HealthCheck[];
backups?: Backup[];
_count?: { healthChecks: number; backups: number };
}
export interface PortAllocation {
id: string;
port: number;
instanceId: string;
service: string;
notes?: string;
}
export interface HealthCheck {
id: string;
instanceId: string;
status: 'HEALTHY' | 'DEGRADED' | 'UNHEALTHY' | 'UNKNOWN';
serviceStatus: Record<string, unknown>;
totalServices: number;
healthyServices: number;
responseTimeMs?: number;
checkedAt: string;
}
export interface Backup {
id: string;
instanceId: string;
status: 'PENDING' | 'IN_PROGRESS' | 'COMPLETED' | 'FAILED';
archivePath?: string;
sizeBytes?: number;
manifest?: Record<string, unknown>;
startedAt: string;
completedAt?: string;
errorMessage?: string;
s3Uploaded: boolean;
}
// ─── Discovery Types ──────────────────────────────────────────────
export interface DiscoveredInstance {
name: string;
slug: string;
domain: string;
basePath: string;
composeProject: string;
portConfig: { api: number; admin: number; postgres: number; nginx: number };
adminEmail: string;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
emailTestMode: boolean;
source: 'parent' | 'docker';
isRunning: boolean;
runningContainers: number;
totalContainers: number;
isAlreadyRegistered: boolean;
existingInstanceId?: string;
isParentInstance: boolean;
}
export interface DiscoverySummary {
total: number;
newInstances: number;
alreadyRegistered: number;
running: number;
parentFound: boolean;
}
export interface DiscoveryResult {
instances: DiscoveredInstance[];
summary: DiscoverySummary;
}
export interface ImportResultItem {
slug: string;
success: boolean;
instanceId?: string;
error?: string;
}
export interface ImportResult {
results: ImportResultItem[];
summary: { total: number; succeeded: number; failed: number };
}
// ─── Audit Types ──────────────────────────────────────────────────
export interface AuditLogEntry {
id: string;
userId?: string;
instanceId?: string;
action: string;
details?: Record<string, unknown>;
ipAddress?: string;
createdAt: string;
user?: { id: string; email: string; name: string } | null;
instance?: { id: string; name: string; slug: string } | null;
}

View File

@ -0,0 +1 @@
/// <reference types="vite/client" />

View File

@ -0,0 +1,25 @@
{
"compilerOptions": {
"target": "ES2020",
"useDefineForClassFields": true,
"lib": ["ES2020", "DOM", "DOM.Iterable"],
"module": "ESNext",
"skipLibCheck": true,
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"isolatedModules": true,
"moduleDetection": "force",
"noEmit": true,
"jsx": "react-jsx",
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"forceConsistentCasingInFileNames": true,
"baseUrl": ".",
"paths": {
"@/*": ["src/*"]
}
},
"include": ["src"]
}

View File

@ -0,0 +1,21 @@
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import path from 'path';
export default defineConfig({
plugins: [react()],
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
},
},
server: {
port: 5100,
proxy: {
'/api': {
target: process.env.VITE_API_URL || 'http://localhost:5000',
changeOrigin: true,
},
},
},
});

File diff suppressed because it is too large

View File

@ -0,0 +1,45 @@
{
"name": "ccp-api",
"version": "1.0.0",
"description": "Changemaker Control Panel — API Server",
"main": "dist/server.js",
"scripts": {
"dev": "tsx watch src/server.ts",
"build": "tsc",
"start": "node dist/server.js",
"db:migrate": "prisma migrate deploy",
"db:migrate:dev": "prisma migrate dev",
"db:seed": "tsx prisma/seed.ts",
"db:studio": "prisma studio",
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@prisma/client": "^6.3.0",
"bcryptjs": "^2.4.3",
"compression": "^1.7.5",
"cors": "^2.8.5",
"dotenv": "^16.4.7",
"express": "^4.21.2",
"express-async-errors": "^3.1.1",
"express-rate-limit": "^7.5.0",
"handlebars": "^4.7.8",
"helmet": "^8.0.0",
"ioredis": "^5.4.2",
"jsonwebtoken": "^9.0.2",
"rate-limit-redis": "^4.2.0",
"winston": "^3.17.0",
"yaml": "^2.8.2",
"zod": "^3.24.1"
},
"devDependencies": {
"@types/bcryptjs": "^2.4.6",
"@types/compression": "^1.7.5",
"@types/cors": "^2.8.17",
"@types/express": "^5.0.0",
"@types/jsonwebtoken": "^9.0.7",
"@types/node": "^22.0.0",
"prisma": "^6.3.0",
"tsx": "^4.19.2",
"typescript": "^5.7.3"
}
}

View File

@ -0,0 +1,203 @@
-- CreateSchema
CREATE SCHEMA IF NOT EXISTS "public";
-- CreateEnum
CREATE TYPE "CcpRole" AS ENUM ('SUPER_ADMIN', 'OPERATOR', 'VIEWER');
-- CreateEnum
CREATE TYPE "InstanceStatus" AS ENUM ('PROVISIONING', 'RUNNING', 'STOPPED', 'ERROR', 'DESTROYING');
-- CreateEnum
CREATE TYPE "HealthStatus" AS ENUM ('HEALTHY', 'DEGRADED', 'UNHEALTHY', 'UNKNOWN');
-- CreateEnum
CREATE TYPE "BackupStatus" AS ENUM ('PENDING', 'IN_PROGRESS', 'COMPLETED', 'FAILED');
-- CreateEnum
CREATE TYPE "AuditAction" AS ENUM ('INSTANCE_CREATE', 'INSTANCE_UPDATE', 'INSTANCE_DELETE', 'INSTANCE_START', 'INSTANCE_STOP', 'INSTANCE_RESTART', 'INSTANCE_UPGRADE', 'BACKUP_CREATE', 'BACKUP_DELETE', 'PANGOLIN_SETUP', 'PANGOLIN_SYNC', 'USER_LOGIN', 'USER_CREATE', 'USER_UPDATE', 'USER_DELETE', 'SETTINGS_UPDATE');
-- CreateTable
CREATE TABLE "ccp_users" (
"id" TEXT NOT NULL,
"email" TEXT NOT NULL,
"password" TEXT NOT NULL,
"name" TEXT NOT NULL,
"role" "CcpRole" NOT NULL DEFAULT 'OPERATOR',
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
CONSTRAINT "ccp_users_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "ccp_refresh_tokens" (
"id" TEXT NOT NULL,
"token" TEXT NOT NULL,
"user_id" TEXT NOT NULL,
"expires_at" TIMESTAMP(3) NOT NULL,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "ccp_refresh_tokens_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "instances" (
"id" TEXT NOT NULL,
"slug" TEXT NOT NULL,
"name" TEXT NOT NULL,
"domain" TEXT NOT NULL,
"status" "InstanceStatus" NOT NULL DEFAULT 'PROVISIONING',
"status_message" TEXT,
"base_path" TEXT NOT NULL,
"compose_project" TEXT NOT NULL,
"git_branch" TEXT NOT NULL DEFAULT 'v2',
"git_commit" TEXT,
"port_config" JSONB NOT NULL,
"encrypted_secrets" TEXT NOT NULL,
"enable_media" BOOLEAN NOT NULL DEFAULT false,
"enable_chat" BOOLEAN NOT NULL DEFAULT false,
"enable_gancio" BOOLEAN NOT NULL DEFAULT false,
"enable_listmonk" BOOLEAN NOT NULL DEFAULT false,
"enable_monitoring" BOOLEAN NOT NULL DEFAULT false,
"admin_email" TEXT NOT NULL,
"pangolin_site_id" TEXT,
"pangolin_newt_id" TEXT,
"pangolin_newt_secret" TEXT,
"smtp_host" TEXT,
"smtp_port" INTEGER,
"smtp_user" TEXT,
"smtp_from" TEXT,
"email_test_mode" BOOLEAN NOT NULL DEFAULT true,
"notes" TEXT,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP(3) NOT NULL,
"last_health_check" TIMESTAMP(3),
CONSTRAINT "instances_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "port_allocations" (
"id" TEXT NOT NULL,
"port" INTEGER NOT NULL,
"instance_id" TEXT NOT NULL,
"service" TEXT NOT NULL,
"notes" TEXT,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "port_allocations_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "health_checks" (
"id" TEXT NOT NULL,
"instance_id" TEXT NOT NULL,
"status" "HealthStatus" NOT NULL,
"service_status" JSONB NOT NULL,
"total_services" INTEGER NOT NULL,
"healthy_services" INTEGER NOT NULL,
"response_time_ms" INTEGER,
"checked_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "health_checks_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "backups" (
"id" TEXT NOT NULL,
"instance_id" TEXT NOT NULL,
"status" "BackupStatus" NOT NULL DEFAULT 'PENDING',
"archive_path" TEXT,
"size_bytes" BIGINT,
"manifest" JSONB,
"started_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
"completed_at" TIMESTAMP(3),
"error_message" TEXT,
"s3_uploaded" BOOLEAN NOT NULL DEFAULT false,
"s3_key" TEXT,
CONSTRAINT "backups_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "audit_logs" (
"id" TEXT NOT NULL,
"user_id" TEXT,
"instance_id" TEXT,
"action" "AuditAction" NOT NULL,
"details" JSONB,
"ip_address" TEXT,
"created_at" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT "audit_logs_pkey" PRIMARY KEY ("id")
);
-- CreateTable
CREATE TABLE "ccp_settings" (
"key" TEXT NOT NULL,
"value" JSONB NOT NULL,
"updated_at" TIMESTAMP(3) NOT NULL,
CONSTRAINT "ccp_settings_pkey" PRIMARY KEY ("key")
);
-- CreateIndex
CREATE UNIQUE INDEX "ccp_users_email_key" ON "ccp_users"("email");
-- CreateIndex
CREATE UNIQUE INDEX "ccp_refresh_tokens_token_key" ON "ccp_refresh_tokens"("token");
-- CreateIndex
CREATE INDEX "ccp_refresh_tokens_user_id_idx" ON "ccp_refresh_tokens"("user_id");
-- CreateIndex
CREATE INDEX "ccp_refresh_tokens_expires_at_idx" ON "ccp_refresh_tokens"("expires_at");
-- CreateIndex
CREATE UNIQUE INDEX "instances_slug_key" ON "instances"("slug");
-- CreateIndex
CREATE UNIQUE INDEX "instances_domain_key" ON "instances"("domain");
-- CreateIndex
CREATE UNIQUE INDEX "instances_compose_project_key" ON "instances"("compose_project");
-- CreateIndex
CREATE UNIQUE INDEX "port_allocations_port_key" ON "port_allocations"("port");
-- CreateIndex
CREATE INDEX "port_allocations_instance_id_idx" ON "port_allocations"("instance_id");
-- CreateIndex
CREATE INDEX "health_checks_instance_id_checked_at_idx" ON "health_checks"("instance_id", "checked_at");
-- CreateIndex
CREATE INDEX "backups_instance_id_started_at_idx" ON "backups"("instance_id", "started_at");
-- CreateIndex
CREATE INDEX "audit_logs_instance_id_created_at_idx" ON "audit_logs"("instance_id", "created_at");
-- CreateIndex
CREATE INDEX "audit_logs_user_id_created_at_idx" ON "audit_logs"("user_id", "created_at");
-- CreateIndex
CREATE INDEX "audit_logs_action_created_at_idx" ON "audit_logs"("action", "created_at");
-- AddForeignKey
ALTER TABLE "ccp_refresh_tokens" ADD CONSTRAINT "ccp_refresh_tokens_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "ccp_users"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "port_allocations" ADD CONSTRAINT "port_allocations_instance_id_fkey" FOREIGN KEY ("instance_id") REFERENCES "instances"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "health_checks" ADD CONSTRAINT "health_checks_instance_id_fkey" FOREIGN KEY ("instance_id") REFERENCES "instances"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "backups" ADD CONSTRAINT "backups_instance_id_fkey" FOREIGN KEY ("instance_id") REFERENCES "instances"("id") ON DELETE CASCADE ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "audit_logs" ADD CONSTRAINT "audit_logs_user_id_fkey" FOREIGN KEY ("user_id") REFERENCES "ccp_users"("id") ON DELETE SET NULL ON UPDATE CASCADE;
-- AddForeignKey
ALTER TABLE "audit_logs" ADD CONSTRAINT "audit_logs_instance_id_fkey" FOREIGN KEY ("instance_id") REFERENCES "instances"("id") ON DELETE SET NULL ON UPDATE CASCADE;

View File

@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "instances" ADD COLUMN "is_registered" BOOLEAN NOT NULL DEFAULT false,
ALTER COLUMN "encrypted_secrets" DROP NOT NULL;

View File

@ -0,0 +1,3 @@
-- AlterTable
ALTER TABLE "instances" ADD COLUMN "enable_dev_tools" BOOLEAN NOT NULL DEFAULT false,
ADD COLUMN "enable_payments" BOOLEAN NOT NULL DEFAULT false;

View File

@ -0,0 +1,3 @@
# Please do not edit this file manually
# It should be added in your version-control system (e.g., Git)
provider = "postgresql"

View File

@ -0,0 +1,232 @@
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
// ─── CCP Users (control panel operators) ───────────────────
enum CcpRole {
SUPER_ADMIN
OPERATOR
VIEWER
}
model CcpUser {
id String @id @default(uuid())
email String @unique
password String
name String
role CcpRole @default(OPERATOR)
createdAt DateTime @default(now()) @map("created_at")
updatedAt DateTime @updatedAt @map("updated_at")
refreshTokens CcpRefreshToken[]
auditLogs AuditLog[]
@@map("ccp_users")
}
model CcpRefreshToken {
id String @id @default(uuid())
token String @unique
userId String @map("user_id")
expiresAt DateTime @map("expires_at")
createdAt DateTime @default(now()) @map("created_at")
user CcpUser @relation(fields: [userId], references: [id], onDelete: Cascade)
@@index([userId])
@@index([expiresAt])
@@map("ccp_refresh_tokens")
}
// ─── Managed Instances ─────────────────────────────────────
enum InstanceStatus {
PROVISIONING
RUNNING
STOPPED
ERROR
DESTROYING
}
model Instance {
id String @id @default(uuid())
slug String @unique
name String
domain String @unique
status InstanceStatus @default(PROVISIONING)
statusMessage String? @map("status_message")
basePath String @map("base_path")
composeProject String @unique @map("compose_project")
gitBranch String @default("v2") @map("git_branch")
gitCommit String? @map("git_commit")
// Allocated host ports (JSON: { api: 14001, admin: 13001, postgres: 15401, nginx: 10001 })
portConfig Json @map("port_config")
// AES-256-GCM encrypted JSON blob of all instance secrets (null for registered instances)
encryptedSecrets String? @map("encrypted_secrets")
// True if this instance was registered externally (not provisioned by CCP)
isRegistered Boolean @default(false) @map("is_registered")
// Feature flags
enableMedia Boolean @default(false) @map("enable_media")
enableChat Boolean @default(false) @map("enable_chat")
enableGancio Boolean @default(false) @map("enable_gancio")
enableListmonk Boolean @default(false) @map("enable_listmonk")
enableMonitoring Boolean @default(false) @map("enable_monitoring")
enableDevTools Boolean @default(false) @map("enable_dev_tools")
enablePayments Boolean @default(false) @map("enable_payments")
// Admin config
adminEmail String @map("admin_email")
// Pangolin tunnel
pangolinSiteId String? @map("pangolin_site_id")
pangolinNewtId String? @map("pangolin_newt_id")
pangolinNewtSecret String? @map("pangolin_newt_secret")
// SMTP
smtpHost String? @map("smtp_host")
smtpPort Int? @map("smtp_port")
smtpUser String? @map("smtp_user")
smtpFrom String? @map("smtp_from")
emailTestMode Boolean @default(true) @map("email_test_mode")
notes String?
createdAt DateTime @default(now()) @map("created_at")
updatedAt DateTime @updatedAt @map("updated_at")
lastHealthCheck DateTime? @map("last_health_check")
portAllocations PortAllocation[]
healthChecks HealthCheck[]
backups Backup[]
auditLogs AuditLog[]
@@map("instances")
}
// ─── Port Allocation ───────────────────────────────────────
model PortAllocation {
id String @id @default(uuid())
port Int @unique
instanceId String @map("instance_id")
service String
notes String?
createdAt DateTime @default(now()) @map("created_at")
instance Instance @relation(fields: [instanceId], references: [id], onDelete: Cascade)
@@index([instanceId])
@@map("port_allocations")
}
// ─── Health Checks ─────────────────────────────────────────
enum HealthStatus {
HEALTHY
DEGRADED
UNHEALTHY
UNKNOWN
}
model HealthCheck {
id String @id @default(uuid())
instanceId String @map("instance_id")
status HealthStatus
serviceStatus Json @map("service_status")
totalServices Int @map("total_services")
healthyServices Int @map("healthy_services")
responseTimeMs Int? @map("response_time_ms")
checkedAt DateTime @default(now()) @map("checked_at")
instance Instance @relation(fields: [instanceId], references: [id], onDelete: Cascade)
@@index([instanceId, checkedAt])
@@map("health_checks")
}
// ─── Backups ───────────────────────────────────────────────
enum BackupStatus {
PENDING
IN_PROGRESS
COMPLETED
FAILED
}
model Backup {
id String @id @default(uuid())
instanceId String @map("instance_id")
status BackupStatus @default(PENDING)
archivePath String? @map("archive_path")
sizeBytes BigInt? @map("size_bytes")
manifest Json?
startedAt DateTime @default(now()) @map("started_at")
completedAt DateTime? @map("completed_at")
errorMessage String? @map("error_message")
s3Uploaded Boolean @default(false) @map("s3_uploaded")
s3Key String? @map("s3_key")
instance Instance @relation(fields: [instanceId], references: [id], onDelete: Cascade)
@@index([instanceId, startedAt])
@@map("backups")
}
// ─── Audit Log ─────────────────────────────────────────────
enum AuditAction {
INSTANCE_CREATE
INSTANCE_UPDATE
INSTANCE_DELETE
INSTANCE_START
INSTANCE_STOP
INSTANCE_RESTART
INSTANCE_UPGRADE
BACKUP_CREATE
BACKUP_DELETE
PANGOLIN_SETUP
PANGOLIN_SYNC
USER_LOGIN
USER_CREATE
USER_UPDATE
USER_DELETE
SETTINGS_UPDATE
}
model AuditLog {
id String @id @default(uuid())
userId String? @map("user_id")
instanceId String? @map("instance_id")
action AuditAction
details Json?
ipAddress String? @map("ip_address")
createdAt DateTime @default(now()) @map("created_at")
user CcpUser? @relation(fields: [userId], references: [id], onDelete: SetNull)
instance Instance? @relation(fields: [instanceId], references: [id], onDelete: SetNull)
@@index([instanceId, createdAt])
@@index([userId, createdAt])
@@index([action, createdAt])
@@map("audit_logs")
}
// ─── CCP Settings ──────────────────────────────────────────
model CcpSetting {
key String @id
value Json
updatedAt DateTime @updatedAt @map("updated_at")
@@map("ccp_settings")
}

View File

@ -0,0 +1,49 @@
import 'dotenv/config';
import { PrismaClient } from '@prisma/client';
import bcrypt from 'bcryptjs';
const prisma = new PrismaClient();
async function main() {
const email = process.env.INITIAL_ADMIN_EMAIL || 'admin@example.com';
const password = process.env.INITIAL_ADMIN_PASSWORD || 'ChangeMe2025!!';
// Create initial admin user
const existing = await prisma.ccpUser.findUnique({ where: { email } });
if (!existing) {
const hashedPassword = await bcrypt.hash(password, 12);
await prisma.ccpUser.create({
data: {
email,
password: hashedPassword,
name: 'Admin',
role: 'SUPER_ADMIN',
},
});
console.log(`Created initial admin user: ${email}`);
} else {
console.log(`Admin user already exists: ${email}`);
}
// Create default settings
const defaults: Record<string, unknown> = {
defaultGitBranch: 'v2',
instancesBasePath: process.env.INSTANCES_BASE_PATH || 'instances',
};
for (const [key, value] of Object.entries(defaults)) {
await prisma.ccpSetting.upsert({
where: { key },
update: {},
create: { key, value: value as string },
});
}
console.log('Default settings seeded');
}
main()
.catch((e) => {
console.error('Seed error:', e);
process.exit(1);
})
.finally(() => prisma.$disconnect());

View File

@ -0,0 +1,80 @@
import 'dotenv/config';
import path from 'path';
import { z } from 'zod';
const envSchema = z.object({
// Server
NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
PORT: z.coerce.number().default(5000),
// Database
DATABASE_URL: z.string().url(),
// Redis
REDIS_URL: z.string().default('redis://localhost:6399'),
// JWT
JWT_ACCESS_SECRET: z.string().min(32),
JWT_REFRESH_SECRET: z.string().min(32),
JWT_ACCESS_EXPIRES_IN: z.string().default('15m'),
JWT_REFRESH_EXPIRES_IN: z.string().default('7d'),
// Encryption key for secrets at rest (64 hex chars = 32 bytes for AES-256)
ENCRYPTION_KEY: z.string().min(64).regex(/^[0-9a-f]+$/i, 'Must be hex-encoded (use: openssl rand -hex 32)'),
// Initial admin
INITIAL_ADMIN_EMAIL: z.string().email().default('admin@example.com'),
INITIAL_ADMIN_PASSWORD: z.string().min(12).default('ChangeMe2025!!'),
// CORS
CORS_ORIGINS: z.string().default('http://localhost:5100'),
// Instance management (resolved by setup.sh; fallback for local dev)
INSTANCES_BASE_PATH: z.string().default(
path.resolve(process.cwd(), '..', 'instances')
),
CML_SOURCE_PATH: z.string().default(''),
CML_GIT_REPO: z.string().default(''),
CML_GIT_BRANCH: z.string().default('v2'),
// Port allocation ranges
PORT_RANGE_API_START: z.coerce.number().default(14000),
PORT_RANGE_API_END: z.coerce.number().default(14999),
PORT_RANGE_ADMIN_START: z.coerce.number().default(13000),
PORT_RANGE_ADMIN_END: z.coerce.number().default(13999),
PORT_RANGE_POSTGRES_START: z.coerce.number().default(15400),
PORT_RANGE_POSTGRES_END: z.coerce.number().default(15499),
PORT_RANGE_NGINX_START: z.coerce.number().default(10000),
PORT_RANGE_NGINX_END: z.coerce.number().default(10999),
PORT_RANGE_EMBED_START: z.coerce.number().default(12000),
PORT_RANGE_EMBED_END: z.coerce.number().default(12499),
// Pangolin (optional)
PANGOLIN_API_URL: z.string().default(''),
PANGOLIN_API_KEY: z.string().default(''),
PANGOLIN_ORG_ID: z.string().default(''),
// Health checks
HEALTH_CHECK_INTERVAL_MS: z.coerce.number().default(300_000), // 5 min (0 to disable)
// Backups
BACKUP_STORAGE_PATH: z.string().default(
path.resolve(process.cwd(), '..', 'backups')
),
BACKUP_RETENTION_DAYS: z.coerce.number().default(30),
});
function validateEnv() {
const result = envSchema.safeParse(process.env);
if (!result.success) {
console.error('❌ Invalid environment variables:');
for (const [key, errors] of Object.entries(result.error.flatten().fieldErrors)) {
console.error(` ${key}: ${errors?.join(', ')}`);
}
process.exit(1);
}
return result.data;
}
export const env = validateEnv();
export type Env = z.infer<typeof envSchema>;
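
The `PORT_RANGE_*` pairs above carve out non-overlapping ranges so each managed instance gets one host port per service without collisions. A minimal sketch of how a range-based allocator might pick the next free port — the in-memory `used` set is an illustrative assumption; the real allocator (`port-allocator`) consults the `port_allocations` table:

```typescript
// Hypothetical range-based allocation sketch (not the real port-allocator).
function allocatePort(used: Set<number>, start: number, end: number): number {
  // Scan the configured range for the first port not already allocated.
  for (let port = start; port <= end; port++) {
    if (!used.has(port)) {
      used.add(port);
      return port;
    }
  }
  throw new Error(`No free ports in range ${start}-${end}`);
}
```

Keeping the ranges disjoint (API 14000–14999, admin 13000–13999, etc.) means a port number alone identifies which service it belongs to.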

View File

@ -0,0 +1,14 @@
import Redis from 'ioredis';
import { env } from './env';
import { logger } from '../utils/logger';
export const redis = new Redis(env.REDIS_URL, {
maxRetriesPerRequest: 3,
retryStrategy(times) {
if (times > 10) return null;
return Math.min(times * 200, 5000);
},
});
redis.on('connect', () => logger.info('Redis connected'));
redis.on('error', (err) => logger.error('Redis error:', err.message));
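
The `retryStrategy` above backs off linearly (200 ms per attempt, capped at 5 s) and gives up after ten attempts by returning `null`. The same logic extracted as a pure function, for illustration:

```typescript
// Mirror of the inline retryStrategy: linear backoff, capped, null to stop.
function retryDelay(times: number): number | null {
  if (times > 10) return null; // give up after 10 attempts
  return Math.min(times * 200, 5000);
}
```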

View File

@ -0,0 +1,3 @@
import { PrismaClient } from '@prisma/client';
export const prisma = new PrismaClient();

View File

@ -0,0 +1,51 @@
import { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';
import { CcpRole } from '@prisma/client';
import { env } from '../config/env';
import { AppError } from './error-handler';
interface TokenPayload {
id: string;
email: string;
role: CcpRole;
}
declare global {
namespace Express {
interface Request {
user?: TokenPayload;
}
}
}
export function authenticate(req: Request, _res: Response, next: NextFunction) {
const header = req.headers.authorization;
if (!header?.startsWith('Bearer ')) {
throw new AppError(401, 'Authentication required', 'AUTH_REQUIRED');
}
const token = header.slice(7);
try {
const payload = jwt.verify(token, env.JWT_ACCESS_SECRET) as TokenPayload;
req.user = {
id: payload.id,
email: payload.email,
role: payload.role,
};
next();
} catch {
throw new AppError(401, 'Invalid or expired token', 'INVALID_TOKEN');
}
}
export function requireRole(...roles: CcpRole[]) {
return (req: Request, _res: Response, next: NextFunction) => {
if (!req.user) {
throw new AppError(401, 'Authentication required', 'AUTH_REQUIRED');
}
if (!roles.includes(req.user.role)) {
throw new AppError(403, 'Insufficient permissions', 'FORBIDDEN');
}
next();
};
}
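
The `authenticate` middleware accepts only `Bearer` tokens and strips the 7-character `"Bearer "` prefix with `slice(7)`. That extraction step as a standalone sketch (the helper name is illustrative, not part of the module):

```typescript
// Hypothetical helper mirroring the header handling in authenticate().
function extractBearer(header?: string): string | null {
  // Reject missing headers and non-Bearer schemes in one check.
  if (!header?.startsWith('Bearer ')) return null;
  return header.slice(7); // drop the "Bearer " prefix
}
```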

View File

@ -0,0 +1,51 @@
import { Request, Response, NextFunction } from 'express';
import { ZodError } from 'zod';
import { env } from '../config/env';
import { logger } from '../utils/logger';
export class AppError extends Error {
constructor(
public statusCode: number,
message: string,
public code?: string
) {
super(message);
this.name = 'AppError';
}
}
export function errorHandler(
err: Error,
_req: Request,
res: Response,
_next: NextFunction
) {
if (err instanceof AppError) {
res.status(err.statusCode).json({
error: { message: err.message, code: err.code },
});
return;
}
if (err instanceof ZodError) {
const fieldErrors = err.flatten().fieldErrors;
const errorCount = Object.keys(fieldErrors).length;
res.status(400).json({
error: {
message: 'Validation error',
code: 'VALIDATION_ERROR',
...(env.NODE_ENV === 'development' && { details: fieldErrors }),
...(env.NODE_ENV === 'production' && { fieldCount: errorCount }),
},
});
return;
}
logger.error('Unhandled error:', err);
res.status(500).json({
error: {
message: env.NODE_ENV === 'production' ? 'Internal server error' : err.message,
code: 'INTERNAL_ERROR',
},
});
}
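
Every error path above emits the same envelope, `{ error: { message, code } }`, which the frontend can match on without caring whether the failure was an `AppError`, a Zod error, or an unhandled exception. A self-contained sketch of the `AppError` branch (`DemoAppError` is a local stand-in for the exported class):

```typescript
// Stand-in for the exported AppError, to keep the sketch self-contained.
class DemoAppError extends Error {
  constructor(public statusCode: number, message: string, public code?: string) {
    super(message);
    this.name = 'AppError';
  }
}

// Build the JSON body the handler sends for an AppError.
function toErrorBody(err: DemoAppError) {
  return { error: { message: err.message, code: err.code } };
}
```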

View File

@ -0,0 +1,16 @@
import { Request, Response, NextFunction } from 'express';
import { ZodSchema } from 'zod';
export function validate(schema: ZodSchema) {
return (req: Request, _res: Response, next: NextFunction) => {
// Assign the parse result so Zod defaults and coercions reach the handlers.
req.body = schema.parse(req.body);
next();
};
}
export function validateQuery(schema: ZodSchema) {
return (req: Request, _res: Response, next: NextFunction) => {
// req.query is a getter in Express 5, so merge rather than reassign.
Object.assign(req.query, schema.parse(req.query));
next();
};
}

View File

@ -0,0 +1,30 @@
import { Router, Request, Response } from 'express';
import { AuditAction } from '@prisma/client';
import { authenticate, requireRole } from '../../middleware/auth';
import * as auditService from './audit.service';
const router = Router();
router.use(authenticate);
router.get('/', requireRole('SUPER_ADMIN', 'OPERATOR'), async (req: Request, res: Response) => {
const { action, instanceId, userId, from, to, page, limit } = req.query;
const filters = {
action: action && Object.values(AuditAction).includes(action as AuditAction)
? (action as AuditAction)
: undefined,
instanceId: instanceId as string | undefined,
userId: userId as string | undefined,
from: from as string | undefined,
to: to as string | undefined,
};
const pageNum = Math.max(1, parseInt(page as string, 10) || 1);
const limitNum = Math.min(100, Math.max(1, parseInt(limit as string, 10) || 50));
const result = await auditService.listAuditLogs(filters, pageNum, limitNum);
res.json(result);
});
export default router;
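
The paging logic in this route (repeated in the backups and instances routers) clamps raw query-string values: `page` is at least 1, and `limit` is forced into `[1, 100]` with a default of 50. Pulled out as a pure helper for illustration (the function name is an assumption; the routers inline this):

```typescript
// Clamp raw query-string paging values the way the audit route does.
function parsePaging(page?: string, limit?: string): { page: number; limit: number } {
  // parseInt yields NaN for garbage; `|| default` catches NaN and 0 alike.
  const pageNum = Math.max(1, parseInt(page ?? '', 10) || 1);
  const limitNum = Math.min(100, Math.max(1, parseInt(limit ?? '', 10) || 50));
  return { page: pageNum, limit: limitNum };
}
```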

View File

@ -0,0 +1,43 @@
import { AuditAction, Prisma } from '@prisma/client';
import { prisma } from '../../lib/prisma';
interface AuditFilters {
action?: AuditAction;
instanceId?: string;
userId?: string;
from?: string;
to?: string;
}
export async function listAuditLogs(
filters: AuditFilters,
page = 1,
limit = 50
) {
const where: Prisma.AuditLogWhereInput = {};
if (filters.action) where.action = filters.action;
if (filters.instanceId) where.instanceId = filters.instanceId;
if (filters.userId) where.userId = filters.userId;
if (filters.from || filters.to) {
where.createdAt = {};
if (filters.from) where.createdAt.gte = new Date(filters.from);
if (filters.to) where.createdAt.lte = new Date(filters.to);
}
const [data, total] = await Promise.all([
prisma.auditLog.findMany({
where,
orderBy: { createdAt: 'desc' },
skip: (page - 1) * limit,
take: limit,
include: {
user: { select: { id: true, email: true, name: true } },
instance: { select: { id: true, name: true, slug: true } },
},
}),
prisma.auditLog.count({ where }),
]);
return { data, total, page, limit };
}

View File

@ -0,0 +1,59 @@
import { Router, Request, Response } from 'express';
import { AuditAction } from '@prisma/client';
import { prisma } from '../../lib/prisma';
import { authenticate } from '../../middleware/auth';
import { validate } from '../../middleware/validate';
import { loginSchema, refreshSchema, logoutSchema, verifyPasswordSchema } from './auth.schemas';
import * as authService from './auth.service';
const router = Router();
router.post('/login', validate(loginSchema), async (req: Request, res: Response) => {
const { email, password } = req.body;
const result = await authService.login(email, password);
// Audit log the login
await prisma.auditLog.create({
data: {
userId: result.user.id,
action: AuditAction.USER_LOGIN,
details: { email: result.user.email },
ipAddress: req.ip,
},
});
res.json(result);
});
router.post('/refresh', validate(refreshSchema), async (req: Request, res: Response) => {
const { refreshToken } = req.body;
const result = await authService.refresh(refreshToken);
res.json(result);
});
router.post('/logout', validate(logoutSchema), async (req: Request, res: Response) => {
const { refreshToken } = req.body;
await authService.logout(refreshToken);
res.json({ message: 'Logged out' });
});
router.post(
'/verify-password',
authenticate,
validate(verifyPasswordSchema),
async (req: Request, res: Response) => {
const { password } = req.body;
const valid = await authService.verifyPassword(req.user!.id, password);
if (!valid) {
res.status(401).json({ error: { message: 'Invalid password', code: 'INVALID_PASSWORD' } });
return;
}
res.json({ verified: true });
}
);
router.get('/me', authenticate, async (req: Request, res: Response) => {
const user = await authService.getMe(req.user!.id);
res.json({ user });
});
export default router;

View File

@ -0,0 +1,18 @@
import { z } from 'zod';
export const loginSchema = z.object({
email: z.string().email(),
password: z.string().min(1),
});
export const refreshSchema = z.object({
refreshToken: z.string().min(1),
});
export const logoutSchema = z.object({
refreshToken: z.string().min(1),
});
export const verifyPasswordSchema = z.object({
password: z.string().min(1),
});

View File

@ -0,0 +1,131 @@
import bcrypt from 'bcryptjs';
import jwt, { SignOptions } from 'jsonwebtoken';
import crypto from 'crypto';
import { CcpRole } from '@prisma/client';
import { prisma } from '../../lib/prisma';
import { env } from '../../config/env';
import { AppError } from '../../middleware/error-handler';
interface TokenPayload {
id: string;
email: string;
role: CcpRole;
}
function signAccessToken(payload: TokenPayload): string {
return jwt.sign(payload, env.JWT_ACCESS_SECRET, {
expiresIn: env.JWT_ACCESS_EXPIRES_IN as SignOptions['expiresIn'],
});
}
function signRefreshToken(payload: TokenPayload): string {
return jwt.sign(payload, env.JWT_REFRESH_SECRET, {
expiresIn: env.JWT_REFRESH_EXPIRES_IN as SignOptions['expiresIn'],
});
}
function parseExpiry(expiresIn: string): Date {
const match = expiresIn.match(/^(\d+)([smhd])$/);
if (!match) return new Date(Date.now() + 7 * 24 * 60 * 60 * 1000); // default 7d
const [, num, unit] = match;
const multipliers: Record<string, number> = { s: 1000, m: 60000, h: 3600000, d: 86400000 };
return new Date(Date.now() + parseInt(num, 10) * multipliers[unit]);
}
export async function login(email: string, password: string) {
const user = await prisma.ccpUser.findUnique({ where: { email } });
if (!user) {
throw new AppError(401, 'Invalid credentials', 'INVALID_CREDENTIALS');
}
const valid = await bcrypt.compare(password, user.password);
if (!valid) {
throw new AppError(401, 'Invalid credentials', 'INVALID_CREDENTIALS');
}
const payload: TokenPayload = { id: user.id, email: user.email, role: user.role };
const accessToken = signAccessToken(payload);
const refreshToken = signRefreshToken(payload);
// Store refresh token
await prisma.ccpRefreshToken.create({
data: {
token: crypto.createHash('sha256').update(refreshToken).digest('hex'),
userId: user.id,
expiresAt: parseExpiry(env.JWT_REFRESH_EXPIRES_IN),
},
});
return {
user: { id: user.id, email: user.email, name: user.name, role: user.role },
accessToken,
refreshToken,
};
}
export async function refresh(refreshToken: string) {
let payload: TokenPayload;
try {
payload = jwt.verify(refreshToken, env.JWT_REFRESH_SECRET) as TokenPayload;
} catch {
throw new AppError(401, 'Invalid refresh token', 'INVALID_TOKEN');
}
const tokenHash = crypto.createHash('sha256').update(refreshToken).digest('hex');
// Atomic rotation: delete old, create new
const result = await prisma.$transaction(async (tx) => {
const existing = await tx.ccpRefreshToken.findUnique({ where: { token: tokenHash } });
if (!existing || existing.expiresAt < new Date()) {
throw new AppError(401, 'Refresh token expired or revoked', 'TOKEN_EXPIRED');
}
await tx.ccpRefreshToken.delete({ where: { token: tokenHash } });
const user = await tx.ccpUser.findUnique({ where: { id: payload.id } });
if (!user) {
throw new AppError(401, 'User not found', 'USER_NOT_FOUND');
}
const newPayload: TokenPayload = { id: user.id, email: user.email, role: user.role };
const newAccessToken = signAccessToken(newPayload);
const newRefreshToken = signRefreshToken(newPayload);
await tx.ccpRefreshToken.create({
data: {
token: crypto.createHash('sha256').update(newRefreshToken).digest('hex'),
userId: user.id,
expiresAt: parseExpiry(env.JWT_REFRESH_EXPIRES_IN),
},
});
return {
user: { id: user.id, email: user.email, name: user.name, role: user.role },
accessToken: newAccessToken,
refreshToken: newRefreshToken,
};
});
return result;
}
export async function logout(refreshToken: string) {
const tokenHash = crypto.createHash('sha256').update(refreshToken).digest('hex');
await prisma.ccpRefreshToken.deleteMany({ where: { token: tokenHash } });
}
export async function verifyPassword(userId: string, password: string): Promise<boolean> {
const user = await prisma.ccpUser.findUnique({ where: { id: userId } });
if (!user) {
throw new AppError(401, 'Authentication required', 'AUTH_REQUIRED');
}
return bcrypt.compare(password, user.password);
}
export async function getMe(userId: string) {
const user = await prisma.ccpUser.findUnique({ where: { id: userId } });
if (!user) {
throw new AppError(401, 'Authentication required', 'AUTH_REQUIRED');
}
return { id: user.id, email: user.email, name: user.name, role: user.role };
}
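
`parseExpiry` above turns short duration strings such as `15m` or `7d` into absolute `Date`s for the refresh-token row, falling back to seven days when the string does not match `<number><s|m|h|d>`. A standalone copy of that conversion:

```typescript
// Standalone copy of parseExpiry: "<number><s|m|h|d>" → absolute expiry Date.
function parseExpiry(expiresIn: string): Date {
  const match = expiresIn.match(/^(\d+)([smhd])$/);
  if (!match) return new Date(Date.now() + 7 * 24 * 60 * 60 * 1000); // default 7d
  const [, num, unit] = match;
  const multipliers: Record<string, number> = { s: 1000, m: 60_000, h: 3_600_000, d: 86_400_000 };
  return new Date(Date.now() + parseInt(num, 10) * multipliers[unit]);
}
```

Note this accepts only single-unit strings; compound values like `1h30m` hit the 7-day fallback rather than erroring.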

View File

@ -0,0 +1,71 @@
import { Router, Request, Response } from 'express';
import fs from 'fs';
import path from 'path';
import { authenticate, requireRole } from '../../middleware/auth';
import { env } from '../../config/env';
import * as backupService from '../../services/backup.service';
const router = Router();
router.use(authenticate);
// ─── Cross-Instance Backup Endpoints ────────────────────────────────
// List all backups (cross-instance)
router.get('/', async (req: Request, res: Response) => {
const { instanceId, page, limit } = req.query;
const pageNum = Math.max(1, parseInt(page as string, 10) || 1);
const limitNum = Math.min(100, Math.max(1, parseInt(limit as string, 10) || 50));
const result = await backupService.listBackups(
instanceId as string | undefined,
pageNum,
limitNum
);
res.json(result);
});
// Delete a backup
router.delete(
'/:backupId',
requireRole('SUPER_ADMIN'),
async (req: Request, res: Response) => {
await backupService.deleteBackup(req.params.backupId as string, req.user!.id, req.ip);
res.json({ message: 'Backup deleted' });
}
);
// Download a backup archive
router.get(
'/:backupId/download',
requireRole('SUPER_ADMIN'),
async (req: Request, res: Response) => {
const backup = await backupService.getBackup(req.params.backupId as string);
if (!backup.archivePath || backup.status !== 'COMPLETED') {
res.status(400).json({ error: { message: 'Backup not available for download', code: 'NOT_AVAILABLE' } });
return;
}
// Validate path is within backup storage (prevent traversal)
const normalized = path.resolve(backup.archivePath);
const normalizedStorage = path.resolve(env.BACKUP_STORAGE_PATH);
if (!normalized.startsWith(normalizedStorage + path.sep)) {
res.status(403).json({ error: { message: 'Access denied', code: 'FORBIDDEN' } });
return;
}
if (!fs.existsSync(normalized)) {
res.status(404).json({ error: { message: 'Backup file not found', code: 'FILE_NOT_FOUND' } });
return;
}
const filename = path.basename(normalized);
res.setHeader('Content-Disposition', `attachment; filename="${filename}"`);
res.setHeader('Content-Type', 'application/gzip');
const stream = fs.createReadStream(normalized);
stream.pipe(res);
}
);
export default router;
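
The download handler's traversal guard works by resolving both paths and requiring the archive to sit strictly inside the storage root, so `../` segments and sibling directories with a shared prefix (`backups-evil/`) are both rejected. A pure sketch of that check, with illustrative paths:

```typescript
import * as path from 'path';

// True only when candidate resolves to a location inside root (never root itself).
function isInside(root: string, candidate: string): boolean {
  const normalizedRoot = path.resolve(root);
  const normalized = path.resolve(candidate);
  // Appending path.sep blocks prefix tricks like /srv/backups-evil/x.
  return normalized.startsWith(normalizedRoot + path.sep);
}
```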

View File

@ -0,0 +1,46 @@
import { Router, Request, Response } from 'express';
import { prisma } from '../../lib/prisma';
import { authenticate } from '../../middleware/auth';
const router = Router();
// Public health endpoint for CCP itself
router.get('/', async (_req: Request, res: Response) => {
try {
await prisma.$queryRaw`SELECT 1`;
res.json({ status: 'healthy', timestamp: new Date().toISOString() });
} catch {
res.status(503).json({ status: 'unhealthy', timestamp: new Date().toISOString() });
}
});
// Authenticated: overview of all instances' health
router.get('/overview', authenticate, async (_req: Request, res: Response) => {
const instances = await prisma.instance.findMany({
select: {
id: true,
name: true,
slug: true,
domain: true,
status: true,
lastHealthCheck: true,
healthChecks: {
orderBy: { checkedAt: 'desc' },
take: 1,
},
},
});
const summary = instances.map((i) => ({
id: i.id,
name: i.name,
slug: i.slug,
domain: i.domain,
status: i.status,
lastHealthCheck: i.lastHealthCheck,
health: i.healthChecks[0] || null,
}));
res.json({ data: summary });
});
export default router;

View File

@ -0,0 +1,278 @@
import { Router, Request, Response } from 'express';
import { AuditAction } from '@prisma/client';
import rateLimit from 'express-rate-limit';
import { prisma } from '../../lib/prisma';
import { authenticate, requireRole } from '../../middleware/auth';
import { validate } from '../../middleware/validate';
import { createInstanceSchema, updateInstanceSchema, registerInstanceSchema, reconfigureInstanceSchema, importInstancesSchema } from './instances.schemas';
import * as instancesService from './instances.service';
import * as healthService from '../../services/health.service';
import * as backupService from '../../services/backup.service';
import { discoverInstances } from '../../services/discovery.service';
const secretsLimiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 10,
standardHeaders: true,
legacyHeaders: false,
message: { error: { message: 'Too many secrets requests, please try again later', code: 'RATE_LIMITED' } },
});
const router = Router();
// All instance routes require authentication
router.use(authenticate);
// ─── Discovery Endpoints ────────────────────────────────────────────
router.post(
'/discover',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (_req: Request, res: Response) => {
const result = await discoverInstances();
res.json({ data: result });
}
);
router.post(
'/import',
requireRole('SUPER_ADMIN', 'OPERATOR'),
validate(importInstancesSchema),
async (req: Request, res: Response) => {
const { instances } = req.body as { instances: Array<Record<string, unknown>> };
const results: Array<{ slug: string; success: boolean; instanceId?: string; error?: string }> = [];
for (const inst of instances) {
try {
const registered = await instancesService.registerInstance(
inst as Parameters<typeof instancesService.registerInstance>[0],
req.user!.id,
req.ip
);
results.push({ slug: inst.slug as string, success: true, instanceId: registered?.id });
} catch (err) {
results.push({ slug: inst.slug as string, success: false, error: (err as Error).message });
}
}
const succeeded = results.filter((r) => r.success).length;
const failed = results.filter((r) => !r.success).length;
res.json({
data: {
results,
summary: { total: results.length, succeeded, failed },
},
});
}
);
// ─── CRUD Endpoints ──────────────────────────────────────────────────
router.get('/', async (_req: Request, res: Response) => {
const instances = await instancesService.listInstances();
res.json({ data: instances });
});
// Register an existing (externally-managed) instance for monitoring
router.post(
'/register',
requireRole('SUPER_ADMIN', 'OPERATOR'),
validate(registerInstanceSchema),
async (req: Request, res: Response) => {
const instance = await instancesService.registerInstance(req.body, req.user!.id, req.ip);
res.status(201).json({ data: instance });
}
);
router.get('/:id', async (req: Request, res: Response) => {
const instance = await instancesService.getInstance(req.params.id as string);
res.json({ data: instance });
});
router.post(
'/',
requireRole('SUPER_ADMIN', 'OPERATOR'),
validate(createInstanceSchema),
async (req: Request, res: Response) => {
const instance = await instancesService.createInstance(req.body, req.user!.id, req.ip);
res.status(201).json({ data: instance });
}
);
router.put(
'/:id',
requireRole('SUPER_ADMIN', 'OPERATOR'),
validate(updateInstanceSchema),
async (req: Request, res: Response) => {
const instance = await instancesService.updateInstance(
req.params.id as string,
req.body,
req.user!.id,
req.ip
);
res.json({ data: instance });
}
);
router.delete(
'/:id',
requireRole('SUPER_ADMIN'),
async (req: Request, res: Response) => {
const result = await instancesService.deleteInstance(req.params.id as string, req.user!.id, req.ip);
res.json(result);
}
);
// Get decrypted secrets (SUPER_ADMIN only, rate limited)
router.get(
'/:id/secrets',
secretsLimiter,
requireRole('SUPER_ADMIN'),
async (req: Request, res: Response) => {
const secrets = await instancesService.getInstanceSecrets(req.params.id as string);
// Audit log: someone viewed secrets
await prisma.auditLog.create({
data: {
userId: req.user!.id,
instanceId: req.params.id as string,
action: AuditAction.INSTANCE_UPDATE,
details: { type: 'secrets_viewed' },
ipAddress: req.ip,
},
});
res.json({ data: secrets });
}
);
// ─── Reconfiguration ────────────────────────────────────────────────
router.post(
'/:id/reconfigure',
requireRole('SUPER_ADMIN', 'OPERATOR'),
validate(reconfigureInstanceSchema),
async (req: Request, res: Response) => {
const result = await instancesService.reconfigureInstance(
req.params.id as string,
req.body,
req.user!.id,
req.ip
);
res.json({ data: result });
}
);
// ─── Lifecycle Endpoints ─────────────────────────────────────────────
router.post(
'/:id/provision',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const result = await instancesService.provisionInstance(req.params.id as string, req.user!.id, req.ip);
res.json(result);
}
);
router.post(
'/:id/start',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const result = await instancesService.startInstance(req.params.id as string, req.user!.id, req.ip);
res.json(result);
}
);
router.post(
'/:id/stop',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const result = await instancesService.stopInstance(req.params.id as string, req.user!.id, req.ip);
res.json(result);
}
);
router.post(
'/:id/restart',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const service = req.query.service as string | undefined;
const result = await instancesService.restartInstance(
req.params.id as string,
req.user!.id,
req.ip,
service
);
res.json(result);
}
);
// ─── Services & Logs ─────────────────────────────────────────────────
router.get(
'/:id/services',
async (req: Request, res: Response) => {
const services = await instancesService.getInstanceServices(req.params.id as string);
res.json({ data: services });
}
);
router.get(
'/:id/logs',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const { service, tail, since } = req.query;
const tailNum = tail ? Math.min(Math.max(parseInt(tail as string, 10) || 200, 1), 2000) : 200;
const logs = await instancesService.getInstanceLogs(
req.params.id as string,
service as string | undefined,
tailNum,
since as string | undefined
);
res.json({ data: logs });
}
);
// ─── Health Checks ──────────────────────────────────────────────────
router.post(
'/:id/health-check',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const check = await healthService.checkInstanceHealth(req.params.id as string);
res.json({ data: check });
}
);
router.get(
'/:id/health-history',
async (req: Request, res: Response) => {
const page = Math.max(1, parseInt(req.query.page as string, 10) || 1);
const limit = Math.min(100, Math.max(1, parseInt(req.query.limit as string, 10) || 20));
const result = await healthService.getHealthHistory(req.params.id as string, page, limit);
res.json(result);
}
);
// ─── Backups ────────────────────────────────────────────────────────
router.post(
'/:id/backup',
requireRole('SUPER_ADMIN', 'OPERATOR'),
async (req: Request, res: Response) => {
const backup = await backupService.createBackup(req.params.id as string, req.user!.id, req.ip);
res.status(201).json({ data: backup });
}
);
router.get(
'/:id/backups',
async (req: Request, res: Response) => {
const page = Math.max(1, parseInt(req.query.page as string, 10) || 1);
const limit = Math.min(100, Math.max(1, parseInt(req.query.limit as string, 10) || 50));
const result = await backupService.listBackups(req.params.id as string, page, limit);
res.json(result);
}
);
export default router;

View File

@ -0,0 +1,82 @@
import { z } from 'zod';
export const createInstanceSchema = z.object({
name: z.string().min(2).max(100),
slug: z.string().min(2).max(50).regex(/^[a-z0-9-]+$/, 'Slug must be lowercase alphanumeric with hyphens'),
domain: z.string().min(3).max(255),
adminEmail: z.string().email(),
enableMedia: z.boolean().default(false),
enableChat: z.boolean().default(false),
enableGancio: z.boolean().default(false),
enableListmonk: z.boolean().default(false),
enableMonitoring: z.boolean().default(false),
enableDevTools: z.boolean().default(false),
enablePayments: z.boolean().default(false),
smtpHost: z.string().optional(),
smtpPort: z.coerce.number().optional(),
smtpUser: z.string().optional(),
smtpFrom: z.string().optional(),
emailTestMode: z.boolean().default(true),
enablePangolin: z.boolean().default(false),
notes: z.string().optional(),
});
export const updateInstanceSchema = z.object({
name: z.string().min(2).max(100).optional(),
enableMedia: z.boolean().optional(),
enableChat: z.boolean().optional(),
enableGancio: z.boolean().optional(),
enableListmonk: z.boolean().optional(),
enableMonitoring: z.boolean().optional(),
enableDevTools: z.boolean().optional(),
enablePayments: z.boolean().optional(),
smtpHost: z.string().optional(),
smtpPort: z.coerce.number().optional(),
smtpUser: z.string().optional(),
smtpFrom: z.string().optional(),
emailTestMode: z.boolean().optional(),
notes: z.string().nullable().optional(),
});
export const registerInstanceSchema = z.object({
name: z.string().min(2).max(100),
slug: z.string().min(2).max(50).regex(/^[a-z0-9-]+$/, 'Slug must be lowercase alphanumeric with hyphens'),
domain: z.string().min(3).max(255),
basePath: z.string().min(1),
composeProject: z.string().min(1),
portConfig: z.object({
api: z.coerce.number().int().min(1).max(65535),
admin: z.coerce.number().int().min(1).max(65535),
postgres: z.coerce.number().int().min(1).max(65535),
nginx: z.coerce.number().int().min(1).max(65535),
}),
adminEmail: z.string().email().optional().default('admin@localhost'),
enableMedia: z.boolean().default(false),
enableChat: z.boolean().default(false),
enableGancio: z.boolean().default(false),
enableListmonk: z.boolean().default(false),
enableMonitoring: z.boolean().default(false),
enableDevTools: z.boolean().default(false),
enablePayments: z.boolean().default(false),
notes: z.string().optional(),
});
export const reconfigureInstanceSchema = z.object({
enableMedia: z.boolean().optional(),
enableChat: z.boolean().optional(),
enableGancio: z.boolean().optional(),
enableListmonk: z.boolean().optional(),
enableMonitoring: z.boolean().optional(),
enableDevTools: z.boolean().optional(),
enablePayments: z.boolean().optional(),
});
export const importInstancesSchema = z.object({
instances: z.array(registerInstanceSchema).min(1).max(50),
});
export type CreateInstanceInput = z.infer<typeof createInstanceSchema>;
export type UpdateInstanceInput = z.infer<typeof updateInstanceSchema>;
export type RegisterInstanceInput = z.infer<typeof registerInstanceSchema>;
export type ReconfigureInstanceInput = z.infer<typeof reconfigureInstanceSchema>;
export type ImportInstancesInput = z.infer<typeof importInstancesSchema>;
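
The slug rule above (`min(2).max(50).regex(/^[a-z0-9-]+$/)`) matters downstream: the slug becomes the compose project name (`cml-<slug>`) and a filesystem path segment, so it must stay lowercase-alphanumeric-with-hyphens. The same constraint as a plain predicate (the helper name is illustrative; the schemas enforce this via Zod):

```typescript
// Mirrors the Zod slug rule: 2-50 chars, lowercase alphanumeric plus hyphens.
function isValidSlug(slug: string): boolean {
  return slug.length >= 2 && slug.length <= 50 && /^[a-z0-9-]+$/.test(slug);
}
```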

View File

@ -0,0 +1,607 @@
import { Prisma, InstanceStatus, AuditAction } from '@prisma/client';
import fs from 'fs/promises';
import { parse as parseDotenv } from 'dotenv';
import { prisma } from '../../lib/prisma';
import { env } from '../../config/env';
import { AppError } from '../../middleware/error-handler';
import { encryptJson, decryptJson } from '../../utils/encryption';
import { generateSecrets } from '../../services/secret-generator';
import { allocatePorts, releasePorts } from '../../services/port-allocator';
import * as docker from '../../services/docker.service';
import { provision } from './provisioner';
import { CreateInstanceInput, UpdateInstanceInput, RegisterInstanceInput, ReconfigureInstanceInput } from './instances.schemas';
import { buildTemplateContext, renderAllTemplates, clearTemplateCache } from '../../services/template-engine';
import { logger } from '../../utils/logger';
import path from 'path';
// ─── CRUD Operations ─────────────────────────────────────────────────
export async function listInstances() {
return prisma.instance.findMany({
orderBy: { createdAt: 'desc' },
omit: { encryptedSecrets: true },
include: {
portAllocations: true,
_count: { select: { healthChecks: true, backups: true } },
},
});
}
export async function getInstance(id: string) {
const instance = await prisma.instance.findUnique({
where: { id },
omit: { encryptedSecrets: true },
include: {
portAllocations: true,
healthChecks: { orderBy: { checkedAt: 'desc' }, take: 10 },
backups: { orderBy: { startedAt: 'desc' }, take: 10 },
},
});
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
return instance;
}
export async function createInstance(input: CreateInstanceInput, userId: string, ipAddress?: string) {
// Check uniqueness
const existing = await prisma.instance.findFirst({
where: { OR: [{ slug: input.slug }, { domain: input.domain }] },
});
if (existing) {
const field = existing.slug === input.slug ? 'slug' : 'domain';
throw new AppError(409, `Instance with this ${field} already exists`, 'DUPLICATE');
}
// Allocate ports
const ports = await allocatePorts();
// Generate secrets
const secrets = generateSecrets(input.adminEmail);
// Compute paths
const composeProject = `cml-${input.slug}`;
const basePath = path.join(env.INSTANCES_BASE_PATH, input.slug, 'changemaker.lite');
// Create instance record
const instance = await prisma.instance.create({
data: {
slug: input.slug,
name: input.name,
domain: input.domain,
status: InstanceStatus.PROVISIONING,
basePath,
composeProject,
gitBranch: env.CML_GIT_BRANCH,
portConfig: ports.config,
encryptedSecrets: encryptJson(secrets as unknown as Record<string, unknown>),
enableMedia: input.enableMedia,
enableChat: input.enableChat,
enableGancio: input.enableGancio,
enableListmonk: input.enableListmonk,
enableMonitoring: input.enableMonitoring,
enableDevTools: input.enableDevTools,
enablePayments: input.enablePayments,
adminEmail: input.adminEmail,
smtpHost: input.smtpHost,
smtpPort: input.smtpPort,
smtpUser: input.smtpUser,
smtpFrom: input.smtpFrom,
emailTestMode: input.emailTestMode,
notes: input.notes,
portAllocations: {
create: ports.allocations,
},
},
include: { portAllocations: true },
});
// Audit log
await prisma.auditLog.create({
data: {
userId,
instanceId: instance.id,
action: AuditAction.INSTANCE_CREATE,
details: { name: input.name, domain: input.domain, slug: input.slug },
ipAddress,
},
});
// Kick off provisioning asynchronously (fire-and-forget)
provision(instance.id).catch((err) => {
logger.error(`[instances] Provisioning failed for ${instance.slug}: ${err}`);
});
return instance;
}
export async function registerInstance(input: RegisterInstanceInput, userId: string, ipAddress?: string) {
// Check uniqueness (slug, domain, composeProject)
const existing = await prisma.instance.findFirst({
where: {
OR: [
{ slug: input.slug },
{ domain: input.domain },
{ composeProject: input.composeProject },
],
},
});
if (existing) {
const field = existing.slug === input.slug ? 'slug'
: existing.domain === input.domain ? 'domain'
: 'composeProject';
throw new AppError(409, `Instance with this ${field} already exists`, 'DUPLICATE');
}
// Verify basePath has a docker-compose.yml
try {
await fs.access(path.join(input.basePath, 'docker-compose.yml'));
} catch {
throw new AppError(400, `No docker-compose.yml found at ${input.basePath}`, 'INVALID_PATH');
}
// Detect running containers to determine initial status
let initialStatus: InstanceStatus = InstanceStatus.STOPPED;
try {
const containers = await docker.composePs(input.basePath, input.composeProject);
const runningCount = containers.filter((c) => c.state === 'running').length;
if (runningCount > 0) {
initialStatus = InstanceStatus.RUNNING;
}
} catch {
logger.warn(`[instances] Could not detect containers for ${input.composeProject}, defaulting to STOPPED`);
}
// Create instance record
const instance = await prisma.instance.create({
data: {
slug: input.slug,
name: input.name,
domain: input.domain,
status: initialStatus,
statusMessage: initialStatus === InstanceStatus.RUNNING ? 'Registered — containers running' : 'Registered — containers not running',
basePath: input.basePath,
composeProject: input.composeProject,
portConfig: input.portConfig,
encryptedSecrets: null,
isRegistered: true,
enableMedia: input.enableMedia,
enableChat: input.enableChat,
enableGancio: input.enableGancio,
enableListmonk: input.enableListmonk,
enableMonitoring: input.enableMonitoring,
enableDevTools: input.enableDevTools,
enablePayments: input.enablePayments,
adminEmail: input.adminEmail,
notes: input.notes,
},
});
// Create PortAllocation records (try-catch each for unique constraint)
for (const [service, port] of Object.entries(input.portConfig)) {
try {
await prisma.portAllocation.create({
data: { port, service, instanceId: instance.id },
});
} catch {
logger.warn(`[instances] Port ${port} (${service}) already allocated, skipping`);
}
}
// Audit log
await prisma.auditLog.create({
data: {
userId,
instanceId: instance.id,
action: AuditAction.INSTANCE_CREATE,
details: { name: input.name, domain: input.domain, slug: input.slug, registered: true },
ipAddress,
},
});
// Trigger immediate health check if running
if (initialStatus === InstanceStatus.RUNNING) {
import('../../services/health.service').then((healthService) => {
healthService.checkInstanceHealth(instance.id).catch((err) => {
logger.warn(`[instances] Initial health check failed for ${instance.slug}: ${(err as Error).message}`);
});
});
}
// Re-fetch with relations
return prisma.instance.findUnique({
where: { id: instance.id },
include: { portAllocations: true },
});
}
export async function updateInstance(id: string, input: UpdateInstanceInput, userId: string, ipAddress?: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
const updated = await prisma.instance.update({
where: { id },
data: input,
});
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_UPDATE,
details: input as unknown as Prisma.InputJsonValue,
ipAddress,
},
});
return updated;
}
export async function deleteInstance(id: string, userId: string, ipAddress?: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
// Registered instances: just remove from DB, never touch containers or files
if (instance.isRegistered) {
await releasePorts(id);
await prisma.instance.delete({ where: { id } });
await prisma.auditLog.create({
data: {
userId,
action: AuditAction.INSTANCE_DELETE,
details: { name: instance.name, domain: instance.domain, slug: instance.slug, unregistered: true },
ipAddress,
},
});
return { message: 'Instance unregistered' };
}
// Mark as destroying
await prisma.instance.update({
where: { id },
data: { status: InstanceStatus.DESTROYING, statusMessage: 'Shutting down containers...' },
});
// Stop containers and remove volumes
try {
await docker.composeDown(instance.basePath, instance.composeProject, true);
logger.info(`[instances] ${instance.slug}: Containers stopped and volumes removed`);
} catch (err) {
logger.warn(`[instances] ${instance.slug}: Docker cleanup warning: ${(err as Error).message}`);
// Continue with deletion even if docker cleanup partially fails
}
// Delete instance directory (with safety check)
const instanceDir = path.resolve(path.dirname(instance.basePath));
const expectedBase = path.resolve(env.INSTANCES_BASE_PATH);
if (instanceDir.startsWith(expectedBase + '/') && instanceDir !== expectedBase) {
try {
await fs.rm(instanceDir, { recursive: true, force: true });
logger.info(`[instances] ${instance.slug}: Directory ${instanceDir} removed`);
} catch (err) {
logger.warn(`[instances] ${instance.slug}: Directory cleanup warning: ${(err as Error).message}`);
}
} else {
logger.error(`[instances] ${instance.slug}: Refusing to delete path outside base: ${instanceDir}`);
}
// Release ports and delete instance from DB
await releasePorts(id);
await prisma.instance.delete({ where: { id } });
await prisma.auditLog.create({
data: {
userId,
action: AuditAction.INSTANCE_DELETE,
details: { name: instance.name, domain: instance.domain, slug: instance.slug },
ipAddress,
},
});
return { message: 'Instance deleted' };
}
export async function getInstanceSecrets(id: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
// CCP-provisioned instance: decrypt from DB
if (!instance.isRegistered && instance.encryptedSecrets) {
return decryptJson(instance.encryptedSecrets);
}
// Registered/discovered instance: read from .env on disk
// Path traversal protection: basePath must be within INSTANCES_BASE_PATH or CML_SOURCE_PATH
const resolvedBase = path.resolve(instance.basePath);
const allowedPaths = [
path.resolve(env.INSTANCES_BASE_PATH),
...(env.CML_SOURCE_PATH ? [path.resolve(env.CML_SOURCE_PATH)] : []),
];
const isAllowed = allowedPaths.some(
(allowed) => resolvedBase === allowed || resolvedBase.startsWith(allowed + '/')
);
if (!isAllowed) {
throw new AppError(400, 'Instance path is outside the allowed directory', 'INVALID_PATH');
}
const envPath = path.join(resolvedBase, '.env');
let envVars: Record<string, string> | null = null;
try {
const content = await fs.readFile(envPath, 'utf-8');
envVars = parseDotenv(content);
} catch {
envVars = null;
}
if (!envVars) {
throw new AppError(400, 'Could not read .env file for this instance', 'ENV_NOT_FOUND');
}
return {
initialAdminEmail: envVars.INITIAL_ADMIN_EMAIL || instance.adminEmail,
initialAdminPassword: envVars.INITIAL_ADMIN_PASSWORD || null,
};
}
// ─── Lifecycle Operations ────────────────────────────────────────────
export async function provisionInstance(id: string, userId: string, ipAddress?: string) {
// Registered instances cannot be provisioned
const check = await prisma.instance.findUnique({ where: { id }, select: { isRegistered: true } });
if (check?.isRegistered) {
throw new AppError(400, 'Cannot provision a registered instance', 'NOT_MANAGED');
}
// Atomic check-and-update to prevent concurrent provisioning
const { count } = await prisma.instance.updateMany({
where: { id, status: { in: [InstanceStatus.ERROR, InstanceStatus.STOPPED] } },
data: { status: InstanceStatus.PROVISIONING, statusMessage: 'Retrying provisioning...' },
});
if (count === 0) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) throw new AppError(404, 'Instance not found', 'NOT_FOUND');
throw new AppError(400, `Cannot provision instance in ${instance.status} state`, 'INVALID_STATE');
}
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_UPDATE,
details: { event: 'provision_retry' },
ipAddress,
},
});
// Fire-and-forget
provision(id).catch((err) => {
logger.error(`[instances] Re-provisioning failed for ${id}: ${err}`);
});
return { message: 'Provisioning started' };
}
export async function startInstance(id: string, userId: string, ipAddress?: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
if (instance.status !== 'STOPPED' && instance.status !== 'ERROR') {
throw new AppError(400, `Cannot start instance in ${instance.status} state`, 'INVALID_STATE');
}
try {
await docker.composeUp(instance.basePath, instance.composeProject);
await prisma.instance.update({
where: { id },
data: { status: InstanceStatus.RUNNING, statusMessage: 'All containers started' },
});
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_START,
details: { slug: instance.slug },
ipAddress,
},
});
return { message: 'Instance started' };
} catch (err) {
const errorMsg = (err as Error).message;
await prisma.instance.update({
where: { id },
data: { status: InstanceStatus.ERROR, statusMessage: `Start failed: ${errorMsg}` },
});
throw new AppError(500, `Failed to start instance: ${errorMsg}`, 'DOCKER_ERROR');
}
}
export async function stopInstance(id: string, userId: string, ipAddress?: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
if (instance.status !== 'RUNNING' && instance.status !== 'ERROR') {
throw new AppError(400, `Cannot stop instance in ${instance.status} state`, 'INVALID_STATE');
}
try {
await docker.composeStop(instance.basePath, instance.composeProject);
await prisma.instance.update({
where: { id },
data: { status: InstanceStatus.STOPPED, statusMessage: 'All containers stopped' },
});
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_STOP,
details: { slug: instance.slug },
ipAddress,
},
});
return { message: 'Instance stopped' };
} catch (err) {
const errorMsg = (err as Error).message;
throw new AppError(500, `Failed to stop instance: ${errorMsg}`, 'DOCKER_ERROR');
}
}
export async function restartInstance(id: string, userId: string, ipAddress?: string, service?: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
try {
await docker.composeRestart(instance.basePath, instance.composeProject, service);
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_RESTART,
details: { slug: instance.slug, service: service || 'all' },
ipAddress,
},
});
return { message: `${service || 'All services'} restarted` };
} catch (err) {
const errorMsg = (err as Error).message;
throw new AppError(500, `Failed to restart: ${errorMsg}`, 'DOCKER_ERROR');
}
}
export async function getInstanceServices(id: string) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
try {
return await docker.composePs(instance.basePath, instance.composeProject);
} catch {
// If compose ps fails (e.g. no containers), return empty array
return [];
}
}
export async function getInstanceLogs(
id: string,
service?: string,
tail = 200,
since?: string
) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
try {
return await docker.composeLogs(
instance.basePath,
instance.composeProject,
service,
tail,
since
);
} catch (err) {
throw new AppError(500, `Failed to get logs: ${(err as Error).message}`, 'DOCKER_ERROR');
}
}
// ─── Reconfiguration ─────────────────────────────────────────────────
export async function reconfigureInstance(
id: string,
features: ReconfigureInstanceInput,
userId: string,
ipAddress?: string
) {
const instance = await prisma.instance.findUnique({ where: { id } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
if (instance.isRegistered) {
throw new AppError(400, 'Cannot reconfigure an external instance', 'NOT_MANAGED');
}
if (!instance.encryptedSecrets) {
throw new AppError(400, 'Instance has no secrets — cannot reconfigure', 'NOT_MANAGED');
}
if (instance.status !== 'RUNNING' && instance.status !== 'STOPPED') {
throw new AppError(400, `Cannot reconfigure instance in ${instance.status} state`, 'INVALID_STATE');
}
// Update feature flags in DB
const updated = await prisma.instance.update({
where: { id },
data: {
...features,
statusMessage: 'Reconfiguring...',
},
});
// Clear template cache so updated templates are re-read
clearTemplateCache();
// Re-render templates with updated flags
const secrets = decryptJson<Record<string, string>>(instance.encryptedSecrets);
const context = buildTemplateContext(updated, secrets);
await renderAllTemplates(context, instance.basePath);
// If instance is running, apply changes via docker compose up
if (instance.status === 'RUNNING') {
try {
await docker.composeUp(instance.basePath, instance.composeProject);
// --remove-orphans (from composeUp) will clean up disabled services
await prisma.instance.update({
where: { id },
data: { statusMessage: 'Reconfiguration complete' },
});
} catch (err) {
const errorMsg = (err as Error).message;
await prisma.instance.update({
where: { id },
data: { statusMessage: `Reconfiguration failed: ${errorMsg}` },
});
throw new AppError(500, `Reconfiguration failed: ${errorMsg}`, 'DOCKER_ERROR');
}
} else {
await prisma.instance.update({
where: { id },
data: { statusMessage: 'Reconfiguration complete — start instance to apply' },
});
}
// Audit log
await prisma.auditLog.create({
data: {
userId,
instanceId: id,
action: AuditAction.INSTANCE_UPDATE,
details: { event: 'reconfigure', features } as unknown as Prisma.InputJsonValue,
ipAddress,
},
});
return updated;
}
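`deleteInstance` and `getInstanceSecrets` share the same containment test: resolve the candidate path and accept it only if it equals an allowed base or starts with the base plus `'/'` (the trailing separator is what stops `/data/instances-evil` from passing a check against `/data/instances`; deletion additionally rejects the base itself). A standalone sketch of that check; the `isUnderBase` name is illustrative:

```typescript
import path from 'node:path';

// Illustrative extraction of the path-containment check used before
// directory deletion and .env reads (assumes POSIX-style paths, as
// the service code does by appending '/').
function isUnderBase(candidate: string, base: string): boolean {
  const resolved = path.resolve(candidate);
  const resolvedBase = path.resolve(base);
  return resolved === resolvedBase || resolved.startsWith(resolvedBase + '/');
}

console.log(isUnderBase('/data/instances/site-a', '/data/instances'));   // true
console.log(isUnderBase('/data/instances-evil/x', '/data/instances'));   // false
```

Resolving before comparing is what defeats `..` segments: `/data/instances/../etc` resolves to `/data/etc` and fails the prefix test.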

View File

@ -0,0 +1,197 @@
import { InstanceStatus, AuditAction } from '@prisma/client';
import { exec as execCb } from 'child_process';
import { promisify } from 'util';
import fs from 'fs/promises';
import path from 'path';
import { prisma } from '../../lib/prisma';
import { env } from '../../config/env';
import { decryptJson } from '../../utils/encryption';
import { renderAllTemplates, buildTemplateContext } from '../../services/template-engine';
import * as docker from '../../services/docker.service';
import { logger } from '../../utils/logger';
const exec = promisify(execCb);
/**
* Directories/files to exclude when copying CML source to instance directory.
*/
const COPY_EXCLUDES = [
'node_modules',
'.git',
'.env',
'changemaker-control-panel',
'.claude',
];
/**
* Update instance status and statusMessage in the database.
*/
async function updateStatus(
instanceId: string,
status: InstanceStatus,
statusMessage: string
): Promise<void> {
await prisma.instance.update({
where: { id: instanceId },
data: { status, statusMessage },
});
}
/**
* Provision a CML instance: copy source, render configs, build and start Docker stack.
*
* This function runs asynchronously; call it without awaiting.
* Progress is tracked via instance.status and instance.statusMessage.
*/
export async function provision(instanceId: string): Promise<void> {
const totalSteps = 13;
let step = 0;
function stepMsg(description: string): string {
step++;
return `Step ${step}/${totalSteps}: ${description}`;
}
try {
// Load instance with all details
const instance = await prisma.instance.findUnique({
where: { id: instanceId },
include: { portAllocations: true },
});
if (!instance) {
throw new Error(`Instance ${instanceId} not found`);
}
const { basePath, composeProject } = instance;
const instanceDir = path.dirname(basePath); // parent of changemaker.lite
// ── Step 1: Create instance directory ───────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Creating instance directory'));
logger.info(`[provisioner] ${instance.slug}: Creating directory ${basePath}`);
await fs.mkdir(basePath, { recursive: true });
// ── Step 2: Copy CML source ────────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Copying CML source'));
logger.info(`[provisioner] ${instance.slug}: Copying from ${env.CML_SOURCE_PATH} to ${basePath}`);
const excludeFlags = COPY_EXCLUDES.map((e) => `--exclude='${e}'`).join(' ');
await exec(
`rsync -a ${excludeFlags} "${env.CML_SOURCE_PATH}/" "${basePath}/"`,
{ timeout: 120_000 }
);
// ── Step 2b: Create media directories ──────────────────────────
// Media API volume mounts use ./media as the read-only base with
// rw overlays for subdirectories. All must exist before Docker starts.
const mediaDirs = [
'media/local/inbox',
'media/local/thumbnails',
'media/local/photos',
'media/public',
];
for (const dir of mediaDirs) {
await fs.mkdir(path.join(basePath, dir), { recursive: true });
}
logger.info(`[provisioner] ${instance.slug}: Created media directories`);
// ── Step 3: Decrypt secrets ────────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Preparing configuration'));
if (!instance.encryptedSecrets) {
throw new Error('Instance has no encrypted secrets — cannot provision');
}
const secrets = decryptJson<Record<string, string>>(instance.encryptedSecrets);
// ── Step 4: Build template context ─────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Building template context'));
const context = buildTemplateContext(instance, secrets);
// ── Step 5: Render templates ───────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Rendering configuration files'));
logger.info(`[provisioner] ${instance.slug}: Rendering templates to ${basePath}`);
await renderAllTemplates(context, basePath);
// ── Step 6: Pull base images ───────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Pulling Docker images'));
logger.info(`[provisioner] ${instance.slug}: Pulling images`);
try {
await docker.composePull(basePath, composeProject);
} catch (err) {
// Pull failures are non-fatal if images already exist locally
logger.warn(`[provisioner] ${instance.slug}: Pull warning: ${(err as Error).message}`);
}
// ── Step 7: Build custom images ────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Building Docker images'));
logger.info(`[provisioner] ${instance.slug}: Building images`);
await docker.composeBuild(basePath, composeProject);
// ── Step 8: Start infrastructure (Postgres + Redis) ────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Starting database and cache'));
logger.info(`[provisioner] ${instance.slug}: Starting infrastructure services`);
await docker.composeUp(basePath, composeProject, ['v2-postgres', 'redis']);
// ── Step 9: Wait for infrastructure healthy ────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Waiting for database to be ready'));
logger.info(`[provisioner] ${instance.slug}: Waiting for Postgres + Redis healthy`);
// Container names match template's container_name: {{containerPrefix}}-postgres / {{containerPrefix}}-redis
const pgContainer = `${composeProject}-postgres`;
const redisContainer = `${composeProject}-redis`;
await Promise.all([
docker.waitForHealthy(pgContainer, 60_000),
docker.waitForHealthy(redisContainer, 60_000),
]);
// ── Step 10: Run database schema sync ─────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Setting up database schema'));
logger.info(`[provisioner] ${instance.slug}: Pushing Prisma schema to database`);
// Use composeRun with --entrypoint "" to skip the API's startup entrypoint
// (which would try migrate deploy + seed and fail on schema drift)
await docker.composeRun(basePath, composeProject, 'api', 'npx prisma db push --accept-data-loss', 180_000);
// ── Step 11: Seed database ─────────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Seeding database'));
logger.info(`[provisioner] ${instance.slug}: Seeding database`);
await docker.composeRun(basePath, composeProject, 'api', 'npx prisma db seed', 120_000);
// ── Step 12: Start all services ────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Starting all services'));
logger.info(`[provisioner] ${instance.slug}: Starting all services`);
await docker.composeUp(basePath, composeProject);
// ── Step 13: Health check ──────────────────────────────────────
await updateStatus(instanceId, InstanceStatus.PROVISIONING, stepMsg('Verifying instance health'));
logger.info(`[provisioner] ${instance.slug}: Waiting for API health check`);
const ports = instance.portConfig as Record<string, number>;
// Use host.docker.internal to reach ports exposed on the Docker host
// (localhost inside the CCP container refers to the CCP container itself)
await docker.waitForHttp(`http://host.docker.internal:${ports.api}/api/health`, 120_000);
// ── Done! ──────────────────────────────────────────────────────
await updateStatus(instanceId, InstanceStatus.RUNNING, 'Provisioning complete');
logger.info(`[provisioner] ${instance.slug}: Provisioning complete!`);
// Audit log
await prisma.auditLog.create({
data: {
instanceId,
action: AuditAction.INSTANCE_CREATE,
details: { event: 'provisioning_complete', slug: instance.slug },
},
});
} catch (err) {
const errorMsg = err instanceof Error ? err.message : String(err);
logger.error(`[provisioner] ${instanceId}: Failed — ${errorMsg}`);
await updateStatus(
instanceId,
InstanceStatus.ERROR,
`Provisioning failed at ${step > 0 ? `step ${step}/${totalSteps}` : 'startup'}: ${errorMsg}`
).catch(() => {});
// Audit log
await prisma.auditLog.create({
data: {
instanceId,
action: AuditAction.INSTANCE_CREATE,
details: { event: 'provisioning_failed', error: errorMsg, step },
},
}).catch(() => {});
}
}
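The health wait in step 9 leans on a deterministic naming scheme: `createInstance` sets the compose project to `cml-<slug>`, and the rendered templates give infrastructure containers a `container_name` of `<project>-<service>`. Sketched below as illustrative helpers (not actual exports of the codebase):

```typescript
// Illustrative helpers for the naming convention the provisioner
// relies on when it waits for container health.
function composeProjectFor(slug: string): string {
  return `cml-${slug}`;
}

function containerNameFor(slug: string, service: 'postgres' | 'redis'): string {
  return `${composeProjectFor(slug)}-${service}`;
}

console.log(containerNameFor('acme', 'postgres')); // cml-acme-postgres
```

Note the asymmetry: the compose *service* is `v2-postgres`, but the *container* is `cml-<slug>-postgres`, which is why `waitForHealthy` targets the latter.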

View File

@ -0,0 +1,43 @@
import { Router, Request, Response } from 'express';
import { AuditAction } from '@prisma/client';
import { prisma } from '../../lib/prisma';
import { authenticate, requireRole } from '../../middleware/auth';
const router = Router();
router.use(authenticate);
router.get('/', async (_req: Request, res: Response) => {
const settings = await prisma.ccpSetting.findMany();
const map: Record<string, unknown> = {};
for (const s of settings) {
map[s.key] = s.value;
}
res.json({ data: map });
});
router.put(
'/:key',
requireRole('SUPER_ADMIN'),
async (req: Request, res: Response) => {
const { key } = req.params as { key: string };
const { value } = req.body;
const setting = await prisma.ccpSetting.upsert({
where: { key },
update: { value },
create: { key, value },
});
await prisma.auditLog.create({
data: {
userId: req.user!.id,
action: AuditAction.SETTINGS_UPDATE,
details: { key, value },
ipAddress: req.ip,
},
});
res.json({ data: setting });
}
);
export default router;

View File

@ -0,0 +1,66 @@
import 'express-async-errors';
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import compression from 'compression';
import rateLimit from 'express-rate-limit';
import { env } from './config/env';
import { logger } from './utils/logger';
import { errorHandler } from './middleware/error-handler';
// Route imports
import authRoutes from './modules/auth/auth.routes';
import instanceRoutes from './modules/instances/instances.routes';
import settingsRoutes from './modules/settings/settings.routes';
import healthRoutes from './modules/health/health.routes';
import auditRoutes from './modules/audit/audit.routes';
import backupRoutes from './modules/backups/backup.routes';
import { startHealthScheduler } from './services/health.service';
import { autoDiscoverOnStartup } from './services/discovery.service';
const app = express();
// Global middleware
app.use(helmet());
app.use(compression());
app.use(express.json({ limit: '10mb' }));
app.use(
cors({
origin: env.CORS_ORIGINS.split(',').map((s) => s.trim()),
credentials: true,
})
);
// Rate limiters
const authLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 15, // 15 attempts per window
standardHeaders: true,
legacyHeaders: false,
message: { error: { message: 'Too many attempts, please try again later', code: 'RATE_LIMITED' } },
});
// Routes
app.use('/api/auth', authLimiter, authRoutes);
app.use('/api/instances', instanceRoutes);
app.use('/api/settings', settingsRoutes);
app.use('/api/health', healthRoutes);
app.use('/api/audit', auditRoutes);
app.use('/api/backups', backupRoutes);
// Error handler (must be last)
app.use(errorHandler);
app.listen(env.PORT, () => {
logger.info(`CCP API listening on port ${env.PORT} (${env.NODE_ENV})`);
startHealthScheduler(env.HEALTH_CHECK_INTERVAL_MS);
// Auto-discover parent CML instance on first boot (5s delay for DB readiness)
setTimeout(() => {
autoDiscoverOnStartup().catch((err) =>
logger.error(`[discovery] Auto-discovery failed: ${(err as Error).message}`)
);
}, 5_000);
});
export default app;
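`CORS_ORIGINS` is parsed at startup by splitting on commas and trimming each entry. A sketch of that parsing as a standalone helper (the name is illustrative), with one extra guard the inline version omits: dropping empty entries left by a trailing comma:

```typescript
// Illustrative version of the CORS origin parsing in server.ts,
// plus a filter so a trailing comma cannot yield an empty origin.
function parseOrigins(raw: string): string[] {
  return raw
    .split(',')
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

console.log(parseOrigins('http://localhost:5100, https://admin.example.com,'));
```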

View File

@ -0,0 +1,313 @@
import { Prisma, BackupStatus, AuditAction, InstanceStatus } from '@prisma/client';
import fs from 'fs/promises';
import path from 'path';
import crypto from 'crypto';
import { exec as execCb } from 'child_process';
import { promisify } from 'util';
import { prisma } from '../lib/prisma';
import { env } from '../config/env';
import { AppError } from '../middleware/error-handler';
import { decryptJson } from '../utils/encryption';
import * as docker from './docker.service';
import { logger } from '../utils/logger';
const exec = promisify(execCb);
/**
* Compute SHA-256 hash of a file.
*/
async function fileHash(filePath: string): Promise<string> {
const fileBuffer = await fs.readFile(filePath);
return crypto.createHash('sha256').update(fileBuffer).digest('hex');
}
/**
* Get file size in bytes.
*/
async function fileSize(filePath: string): Promise<number> {
const stat = await fs.stat(filePath);
return stat.size;
}
/**
* Create a backup for a given instance.
*/
export async function createBackup(instanceId: string, userId?: string, ipAddress?: string) {
const instance = await prisma.instance.findUnique({ where: { id: instanceId } });
if (!instance) {
throw new AppError(404, 'Instance not found', 'NOT_FOUND');
}
if (instance.status !== InstanceStatus.RUNNING) {
throw new AppError(400, `Cannot backup instance in ${instance.status} state`, 'INVALID_STATE');
}
if ((instance as { isRegistered?: boolean }).isRegistered) {
throw new AppError(400, 'Backups not managed by CCP for registered instances', 'NOT_MANAGED');
}
// Create backup record
const backup = await prisma.backup.create({
data: {
instanceId,
status: BackupStatus.PENDING,
},
});
// Run backup asynchronously
performBackup(backup.id, instance, userId, ipAddress).catch((err) => {
logger.error(`[backup] Backup ${backup.id} failed: ${(err as Error).message}`);
});
return backup;
}
async function performBackup(
backupId: string,
instance: { id: string; slug: string; basePath: string; composeProject: string; encryptedSecrets: string | null },
userId?: string,
ipAddress?: string
) {
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const backupDir = path.join(env.BACKUP_STORAGE_PATH, instance.slug, timestamp);
try {
// Update status to IN_PROGRESS
await prisma.backup.update({
where: { id: backupId },
data: { status: BackupStatus.IN_PROGRESS },
});
// Ensure backup directory exists
await fs.mkdir(backupDir, { recursive: true });
const manifestFiles: Array<{ name: string; size: number; sha256: string }> = [];
// 1. Dump PostgreSQL
try {
const secrets = instance.encryptedSecrets
? decryptJson<Record<string, string>>(instance.encryptedSecrets)
: {} as Record<string, string>;
const pgPassword = secrets.V2_POSTGRES_PASSWORD || secrets.postgresPassword || 'changemaker';
// Use docker compose exec to run pg_dump inside the container
// Pass PGPASSWORD via -e flag so pg_dump can authenticate
const dumpOutput = await docker.composeExec(
instance.basePath,
instance.composeProject,
'v2-postgres',
`pg_dump -U changemaker -d changemaker`,
300_000, // 5 min timeout for large DBs
{ PGPASSWORD: pgPassword }
);
const dumpPath = path.join(backupDir, 'v2-postgres.sql');
await fs.writeFile(dumpPath, dumpOutput);
// Gzip the dump
await exec(`gzip "${dumpPath}"`, { timeout: 120_000 });
const gzPath = dumpPath + '.gz';
manifestFiles.push({
name: 'v2-postgres.sql.gz',
size: await fileSize(gzPath),
sha256: await fileHash(gzPath),
});
logger.info(`[backup] ${instance.slug}: PostgreSQL dump complete`);
} catch (err) {
logger.warn(`[backup] ${instance.slug}: PostgreSQL dump failed: ${(err as Error).message}`);
// Continue with the backup — the failed dump is simply omitted from the manifest
}
// 2. Archive uploads if they exist
const uploadsDir = path.join(instance.basePath, 'uploads');
try {
await fs.access(uploadsDir);
const uploadsArchive = path.join(backupDir, 'uploads.tar.gz');
await exec(`tar -czf "${uploadsArchive}" -C "${instance.basePath}" uploads`, { timeout: 300_000 });
manifestFiles.push({
name: 'uploads.tar.gz',
size: await fileSize(uploadsArchive),
sha256: await fileHash(uploadsArchive),
});
logger.info(`[backup] ${instance.slug}: Uploads archive complete`);
} catch {
// No uploads directory, or the tar step failed — skip this artifact
logger.debug(`[backup] ${instance.slug}: uploads archive skipped`);
}
// 3. Generate manifest
const manifest = {
instanceId: instance.id,
instanceSlug: instance.slug,
timestamp: new Date().toISOString(),
files: manifestFiles,
};
const manifestPath = path.join(backupDir, 'manifest.json');
await fs.writeFile(manifestPath, JSON.stringify(manifest, null, 2));
// 4. Create final archive
const archiveName = `backup-${instance.slug}-${timestamp}.tar.gz`;
const archivePath = path.join(env.BACKUP_STORAGE_PATH, instance.slug, archiveName);
await exec(`tar -czf "${archivePath}" -C "${path.dirname(backupDir)}" "${path.basename(backupDir)}"`, {
timeout: 300_000,
});
const totalSize = await fileSize(archivePath);
// Cleanup the temp directory
await fs.rm(backupDir, { recursive: true, force: true });
// Update backup record
await prisma.backup.update({
where: { id: backupId },
data: {
status: BackupStatus.COMPLETED,
archivePath,
sizeBytes: BigInt(totalSize),
manifest: manifest as unknown as Prisma.InputJsonValue,
completedAt: new Date(),
},
});
// Audit log
if (userId) {
await prisma.auditLog.create({
data: {
userId,
instanceId: instance.id,
action: AuditAction.BACKUP_CREATE,
details: { backupId, archiveName, sizeBytes: totalSize },
ipAddress,
},
});
}
logger.info(`[backup] ${instance.slug}: Backup complete (${(totalSize / 1024 / 1024).toFixed(1)} MB)`);
} catch (err) {
// Update backup as failed
await prisma.backup.update({
where: { id: backupId },
data: {
status: BackupStatus.FAILED,
errorMessage: (err as Error).message,
completedAt: new Date(),
},
});
// Cleanup temp directory on failure
try {
await fs.rm(backupDir, { recursive: true, force: true });
} catch {
// Ignore cleanup errors
}
throw err;
}
}
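Backup directories and archive names embed a filesystem-safe timestamp. The transform used above can be isolated as a small helper — colons and dots from `Date.toISOString()` are replaced with hyphens so the result is valid in file names on every platform (a standalone sketch, duplicated here for illustration):

```typescript
// Sketch of the timestamp transform used for backup paths above.
function fsSafeTimestamp(d: Date): string {
  // e.g. '2026-02-21T11:51:45.123Z' -> '2026-02-21T11-51-45-123Z'
  return d.toISOString().replace(/[:.]/g, '-');
}
```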
/**
* Delete a backup (file + DB record).
*/
export async function deleteBackup(backupId: string, userId?: string, ipAddress?: string) {
const backup = await prisma.backup.findUnique({
where: { id: backupId },
include: { instance: { select: { id: true, slug: true } } },
});
if (!backup) {
throw new AppError(404, 'Backup not found', 'NOT_FOUND');
}
// Delete archive file
if (backup.archivePath) {
try {
await fs.unlink(backup.archivePath);
} catch {
logger.warn(`[backup] Could not delete file: ${backup.archivePath}`);
}
}
await prisma.backup.delete({ where: { id: backupId } });
if (userId) {
await prisma.auditLog.create({
data: {
userId,
instanceId: backup.instanceId,
action: AuditAction.BACKUP_DELETE,
details: { backupId, instanceSlug: backup.instance?.slug },
ipAddress,
},
});
}
}
/**
* List backups with optional instance filter and pagination.
*/
export async function listBackups(instanceId?: string, page = 1, limit = 50) {
const where = instanceId ? { instanceId } : {};
const [data, total] = await Promise.all([
prisma.backup.findMany({
where,
orderBy: { startedAt: 'desc' },
skip: (page - 1) * limit,
take: limit,
include: {
instance: { select: { id: true, name: true, slug: true } },
},
}),
prisma.backup.count({ where }),
]);
return { data, total, page, limit };
}
/**
* Get a single backup by ID.
*/
export async function getBackup(backupId: string) {
const backup = await prisma.backup.findUnique({ where: { id: backupId } });
if (!backup) {
throw new AppError(404, 'Backup not found', 'NOT_FOUND');
}
return backup;
}
/**
* Cleanup backups older than retention period.
*/
export async function cleanupOldBackups(retentionDays: number): Promise<number> {
const cutoff = new Date(Date.now() - retentionDays * 24 * 60 * 60 * 1000);
const oldBackups = await prisma.backup.findMany({
where: {
startedAt: { lt: cutoff },
status: { in: [BackupStatus.COMPLETED, BackupStatus.FAILED] },
},
});
let deleted = 0;
for (const backup of oldBackups) {
try {
if (backup.archivePath) {
await fs.unlink(backup.archivePath);
}
await prisma.backup.delete({ where: { id: backup.id } });
deleted++;
} catch (err) {
logger.warn(`[backup] Failed to cleanup backup ${backup.id}: ${(err as Error).message}`);
}
}
if (deleted > 0) {
logger.info(`[backup] Cleaned up ${deleted} old backups (>${retentionDays} days)`);
}
return deleted;
}
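The retention cutoff computed above is plain millisecond arithmetic — backups whose `startedAt` falls before the cutoff date are eligible for deletion. A minimal sketch of that calculation:

```typescript
// Sketch of the retention-cutoff arithmetic used by cleanupOldBackups.
// nowMs is parameterized only so the function is easy to test.
function retentionCutoff(retentionDays: number, nowMs: number = Date.now()): Date {
  return new Date(nowMs - retentionDays * 24 * 60 * 60 * 1000);
}
```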


@ -0,0 +1,388 @@
import fs from 'fs/promises';
import path from 'path';
import { exec as execCb } from 'child_process';
import { promisify } from 'util';
import { parse as parseDotenv } from 'dotenv';
import { prisma } from '../lib/prisma';
import { env } from '../config/env';
import { logger } from '../utils/logger';
import { registerInstance } from '../modules/instances/instances.service';
import * as docker from './docker.service';
const exec = promisify(execCb);
// ─── Types ──────────────────────────────────────────────────────────
export interface DiscoveredInstance {
name: string;
slug: string;
domain: string;
basePath: string;
composeProject: string;
portConfig: { api: number; admin: number; postgres: number; nginx: number };
adminEmail: string;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
emailTestMode: boolean;
// Discovery metadata (UI-only, not persisted)
source: 'parent' | 'docker';
isRunning: boolean;
runningContainers: number;
totalContainers: number;
isAlreadyRegistered: boolean;
existingInstanceId?: string;
isParentInstance: boolean;
}
export interface DiscoverySummary {
total: number;
newInstances: number;
alreadyRegistered: number;
running: number;
parentFound: boolean;
}
export interface DiscoveryResult {
instances: DiscoveredInstance[];
summary: DiscoverySummary;
}
interface ComposeProject {
Name: string;
Status: string;
ConfigFiles: string;
}
// ─── .env Parser ────────────────────────────────────────────────────
/**
 * Parse a CML instance's .env file. The full file is read into memory, but
 * only non-secret configuration is extracted downstream — secret values
 * (JWT keys, passwords, encryption keys) are never persisted or exposed.
*/
async function parseCmlEnv(envPath: string): Promise<Record<string, string> | null> {
try {
const content = await fs.readFile(envPath, 'utf-8');
return parseDotenv(content);
} catch {
return null;
}
}
function extractPortConfig(envVars: Record<string, string>): DiscoveredInstance['portConfig'] {
return {
api: parseInt(envVars.API_PORT || '4000', 10),
admin: parseInt(envVars.ADMIN_PORT || '3000', 10),
postgres: parseInt(envVars.V2_POSTGRES_PORT || '5433', 10),
nginx: parseInt(envVars.NGINX_HTTP_PORT || '80', 10),
};
}
function extractFeatureFlags(envVars: Record<string, string>) {
const isTrue = (val?: string) => val?.toLowerCase() === 'true';
return {
enableMedia: isTrue(envVars.ENABLE_MEDIA_FEATURES),
enableChat: isTrue(envVars.ENABLE_CHAT),
enableGancio: isTrue(envVars.GANCIO_SYNC_ENABLED),
enableListmonk: isTrue(envVars.LISTMONK_SYNC_ENABLED),
enablePayments: isTrue(envVars.ENABLE_PAYMENTS),
emailTestMode: isTrue(envVars.EMAIL_TEST_MODE),
};
}
function slugify(str: string): string {
return str
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-|-$/g, '')
.substring(0, 50);
}
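slugify's behavior on typical site names — lowercase, hyphen-collapse, trim, 50-character cap — is easy to verify with a standalone copy (duplicated here purely for illustration):

```typescript
// Standalone copy of slugify, for illustration only.
function slugify(str: string): string {
  return str
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics
    .replace(/^-|-$/g, '')       // trim a leading/trailing hyphen
    .substring(0, 50);           // cap length for DB/URL safety
}
```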
// ─── CML Fingerprint Test ───────────────────────────────────────────
/**
* Check if a directory is a CML instance by verifying key files exist
* and the .env has CML-specific variables.
*/
async function isCmlInstance(dirPath: string): Promise<boolean> {
try {
// Must have docker-compose.yml and api/prisma/schema.prisma
await fs.access(path.join(dirPath, 'docker-compose.yml'));
await fs.access(path.join(dirPath, 'api', 'prisma', 'schema.prisma'));
// Must have .env with DOMAIN and JWT_ACCESS_SECRET
const envVars = await parseCmlEnv(path.join(dirPath, '.env'));
if (!envVars) return false;
return !!(envVars.DOMAIN && envVars.JWT_ACCESS_SECRET);
} catch {
return false;
}
}
// ─── Container Status ───────────────────────────────────────────────
async function getContainerCounts(
projectDir: string,
composeProject: string
): Promise<{ running: number; total: number }> {
try {
// Use docker.composePs which validates the project name
const containers = await docker.composePs(projectDir, composeProject);
const running = containers.filter((c) => c.state === 'running').length;
return { running, total: containers.length };
} catch {
return { running: 0, total: 0 };
}
}
// ─── Compose Project Discovery ──────────────────────────────────────
/**
* List all Docker Compose projects on the host via `docker compose ls`.
*/
async function listComposeProjects(): Promise<ComposeProject[]> {
try {
const { stdout } = await exec('docker compose ls --format json', {
timeout: 15_000,
env: { ...process.env, COMPOSE_ANSI: 'never' },
});
if (!stdout.trim()) return [];
return JSON.parse(stdout);
} catch (err) {
logger.warn(`[discovery] Failed to list compose projects: ${(err as Error).message}`);
return [];
}
}
/**
* Derive the compose project name from a directory path.
* Docker Compose defaults to the directory basename (lowercased, special chars replaced).
*/
function deriveComposeProject(dirPath: string, projects: ComposeProject[]): string {
// Try to find project by matching config file path
const composePath = path.join(dirPath, 'docker-compose.yml');
for (const p of projects) {
// ConfigFiles can be comma-separated
const configs = p.ConfigFiles.split(',').map((s) => s.trim());
if (configs.some((c) => path.resolve(c) === path.resolve(composePath))) {
return p.Name;
}
}
// Fall back to directory basename convention
return path.basename(dirPath).toLowerCase().replace(/[^a-z0-9]/g, '');
}
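The fallback branch encodes a naming convention: directory basename, lowercased, with non-alphanumerics stripped. Note this is stricter than Docker Compose's own default project naming, which keeps `-` and `_`; the service relies on the config-file match above for exactness, and the fallback is only a best-effort guess. A standalone copy of the fallback:

```typescript
import path from 'path';

// Standalone copy of the fallback project-name convention, for illustration.
function fallbackProjectName(dirPath: string): string {
  return path.basename(dirPath).toLowerCase().replace(/[^a-z0-9]/g, '');
}
```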
// ─── Build Discovered Instance ──────────────────────────────────────
async function buildDiscoveredInstance(
dirPath: string,
source: 'parent' | 'docker',
composeProject: string,
existingInstances: Array<{ id: string; basePath: string; composeProject: string; domain: string }>
): Promise<DiscoveredInstance | null> {
const envVars = await parseCmlEnv(path.join(dirPath, '.env'));
if (!envVars) return null;
const domain = envVars.DOMAIN || path.basename(dirPath);
const portConfig = extractPortConfig(envVars);
const flags = extractFeatureFlags(envVars);
// Check container status
const { running, total } = await getContainerCounts(dirPath, composeProject);
// Infer monitoring/devtools from running containers (these aren't .env flags)
let enableMonitoring = false;
let enableDevTools = false;
try {
// Use docker.composePs which validates the project name (prevents shell injection)
const containers = await docker.composePs(dirPath, composeProject);
const serviceNames = containers.map((c) => c.service.toLowerCase());
enableMonitoring = serviceNames.some((n) => n.includes('prometheus') || n.includes('grafana'));
enableDevTools = serviceNames.some((n) => n.includes('code-server') || n.includes('gitea') || n.includes('n8n'));
} catch {
// Couldn't inspect services — leave as false
}
// Check deduplication against existing instances
const resolvedPath = path.resolve(dirPath);
const match = existingInstances.find(
(inst) =>
path.resolve(inst.basePath) === resolvedPath ||
inst.composeProject === composeProject ||
inst.domain === domain
);
const isParent = source === 'parent' ||
(!!env.CML_SOURCE_PATH && path.resolve(env.CML_SOURCE_PATH) === resolvedPath);
return {
name: envVars.SITE_NAME || domain.split('.')[0] || path.basename(dirPath),
slug: slugify(envVars.SITE_NAME || domain.split('.')[0] || path.basename(dirPath)),
domain,
basePath: resolvedPath,
composeProject,
portConfig,
adminEmail: envVars.INITIAL_ADMIN_EMAIL || 'admin@localhost',
...flags,
enableMonitoring,
enableDevTools,
source,
isRunning: running > 0,
runningContainers: running,
totalContainers: total,
isAlreadyRegistered: !!match,
existingInstanceId: match?.id,
isParentInstance: isParent,
};
}
// ─── Main Discovery Function ────────────────────────────────────────
export async function discoverInstances(): Promise<DiscoveryResult> {
const discovered: DiscoveredInstance[] = [];
const seenPaths = new Set<string>();
// Load existing instances for deduplication
const existingInstances = await prisma.instance.findMany({
select: { id: true, basePath: true, composeProject: true, domain: true },
});
// List all compose projects upfront
const composeProjects = await listComposeProjects();
// Strategy 1: Parent instance (from CML_SOURCE_PATH)
if (env.CML_SOURCE_PATH) {
const parentPath = path.resolve(env.CML_SOURCE_PATH);
if (await isCmlInstance(parentPath)) {
const project = deriveComposeProject(parentPath, composeProjects);
const inst = await buildDiscoveredInstance(parentPath, 'parent', project, existingInstances);
if (inst) {
// Ensure parent gets a sensible name
if (inst.isParentInstance && inst.name === path.basename(parentPath)) {
inst.name = `Parent (${inst.domain})`;
}
discovered.push(inst);
seenPaths.add(parentPath);
}
} else {
logger.debug(`[discovery] CML_SOURCE_PATH (${env.CML_SOURCE_PATH}) is not a valid CML instance`);
}
}
// Strategy 2: Docker scan — check all running compose projects
for (const project of composeProjects) {
// Skip CCP's own project
if (project.Name.startsWith('ccp-') || project.Name.startsWith('changemaker-control-panel')) {
continue;
}
// Extract project directory from config file path
if (!project.ConfigFiles) continue;
const configFile = project.ConfigFiles.split(',')[0].trim();
const projectDir = path.dirname(configFile);
const resolvedDir = path.resolve(projectDir);
// Skip if already found via parent strategy
if (seenPaths.has(resolvedDir)) continue;
// CML fingerprint test
if (!(await isCmlInstance(resolvedDir))) continue;
const inst = await buildDiscoveredInstance(resolvedDir, 'docker', project.Name, existingInstances);
if (inst) {
discovered.push(inst);
seenPaths.add(resolvedDir);
}
}
// Build summary
const newInstances = discovered.filter((d) => !d.isAlreadyRegistered).length;
const alreadyRegistered = discovered.filter((d) => d.isAlreadyRegistered).length;
const running = discovered.filter((d) => d.isRunning).length;
const parentFound = discovered.some((d) => d.isParentInstance);
return {
instances: discovered,
summary: {
total: discovered.length,
newInstances,
alreadyRegistered,
running,
parentFound,
},
};
}
// ─── Auto-Import on First Boot ──────────────────────────────────────
/**
* Checks if the CCP database has zero instances. If empty, discovers
* the parent instance and auto-registers it.
* Called 5s after server startup via setTimeout.
*/
export async function autoDiscoverOnStartup(): Promise<void> {
const count = await prisma.instance.count();
if (count > 0) {
logger.debug('[discovery] Instances already exist, skipping auto-discovery');
return;
}
logger.info('[discovery] No instances found — running auto-discovery...');
const result = await discoverInstances();
if (result.instances.length === 0) {
logger.info('[discovery] No CML instances discovered on this host');
return;
}
// Auto-register parent instance first, then any others
const sorted = [...result.instances].sort((a, b) => {
if (a.isParentInstance && !b.isParentInstance) return -1;
if (!a.isParentInstance && b.isParentInstance) return 1;
return 0;
});
// Get or create a system user ID for audit logging
const systemUser = await prisma.ccpUser.findFirst({
where: { role: 'SUPER_ADMIN' },
select: { id: true },
});
const userId = systemUser?.id || 'system';
let imported = 0;
for (const inst of sorted) {
if (inst.isAlreadyRegistered) continue;
try {
await registerInstance(
{
name: inst.name,
slug: inst.slug,
domain: inst.domain,
basePath: inst.basePath,
composeProject: inst.composeProject,
portConfig: inst.portConfig,
adminEmail: inst.adminEmail,
enableMedia: inst.enableMedia,
enableChat: inst.enableChat,
enableGancio: inst.enableGancio,
enableListmonk: inst.enableListmonk,
enableMonitoring: inst.enableMonitoring,
enableDevTools: inst.enableDevTools,
enablePayments: inst.enablePayments,
},
userId,
'auto-discovery'
);
imported++;
logger.info(`[discovery] Auto-registered instance: ${inst.name} (${inst.domain})`);
} catch (err) {
logger.warn(`[discovery] Failed to auto-register ${inst.name}: ${(err as Error).message}`);
}
}
logger.info(`[discovery] Auto-discovery complete: ${imported}/${sorted.filter((s) => !s.isAlreadyRegistered).length} instances registered`);
}


@ -0,0 +1,351 @@
import { exec as execCb } from 'child_process';
import { promisify } from 'util';
import http from 'http';
import { logger } from '../utils/logger';
const exec = promisify(execCb);
const EXEC_TIMEOUT = 120_000; // 2 minutes
const DOCKER_SOCKET = '/var/run/docker.sock';
/** Validate a service/project name to prevent shell injection. */
function validateName(name: string, label: string): string {
if (!/^[a-zA-Z0-9][a-zA-Z0-9_.-]*$/.test(name)) {
throw new Error(`Invalid ${label}: ${name}`);
}
return name;
}
/** Validate a Docker duration format (e.g., "1h", "30m", "24h"). */
function validateDuration(value: string): string {
if (!/^\d+[smhd]$/.test(value)) {
throw new Error(`Invalid duration: ${value}`);
}
return value;
}
/** Validate a tail count (positive integer, capped). */
function validateTail(value: number): number {
const n = Math.max(1, Math.min(value, 5000));
return Math.floor(n);
}
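These three validators are the injection barrier for every shell command this module builds — names and durations are whitelisted by regex, and tail counts are clamped to a sane range. Standalone copies (duplicated for illustration) behave like this:

```typescript
// Standalone copies of the validators above, for illustration only.
function validateName(name: string, label: string): string {
  // Compose-style names: alphanumeric start, then [a-zA-Z0-9_.-]
  if (!/^[a-zA-Z0-9][a-zA-Z0-9_.-]*$/.test(name)) {
    throw new Error(`Invalid ${label}: ${name}`);
  }
  return name;
}
function validateDuration(value: string): string {
  // Single unit only: "30m", "24h" — compound forms like "1h30m" are rejected
  if (!/^\d+[smhd]$/.test(value)) {
    throw new Error(`Invalid duration: ${value}`);
  }
  return value;
}
function validateTail(value: number): number {
  // Clamp to [1, 5000] and drop any fractional part
  return Math.floor(Math.max(1, Math.min(value, 5000)));
}
```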
/** Parsed container status from `docker compose ps --format json` */
export interface ContainerInfo {
name: string;
service: string;
status: string;
state: string;
health: string;
ports: string;
createdAt: string;
exitCode: number;
}
/**
* Execute a shell command with timeout and proper error handling.
*/
async function execCmd(
command: string,
cwd: string,
timeoutMs = EXEC_TIMEOUT
): Promise<{ stdout: string; stderr: string }> {
logger.debug(`[docker] exec: ${command} (cwd: ${cwd})`);
try {
const result = await exec(command, {
cwd,
timeout: timeoutMs,
maxBuffer: 10 * 1024 * 1024, // 10MB
env: { ...process.env, COMPOSE_ANSI: 'never' },
});
return result;
} catch (err: unknown) {
const error = err as { stdout?: string; stderr?: string; message?: string; killed?: boolean };
if (error.killed) {
throw new Error(`Command timed out after ${timeoutMs}ms: ${command}`);
}
// Include stderr in the error for debugging
const msg = error.stderr || error.message || 'Unknown exec error';
throw new Error(`Command failed: ${command}\n${msg}`);
}
}
/**
* Build the base compose command with project name.
*/
function composeCmd(project: string): string {
return `docker compose -p ${validateName(project, 'project')}`;
}
// ─── Docker Compose CLI Operations ───────────────────────────────────
export async function composeUp(
projectDir: string,
project: string,
services?: string[]
): Promise<string> {
const svc = services?.length ? ` ${services.map((s) => validateName(s, 'service')).join(' ')}` : '';
const orphanFlag = services?.length ? '' : ' --remove-orphans';
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} up -d${orphanFlag}${svc}`,
projectDir
);
return stdout || stderr;
}
export async function composeDown(
projectDir: string,
project: string,
removeVolumes = false
): Promise<string> {
const flags = removeVolumes ? ' -v' : '';
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} down${flags}`,
projectDir
);
return stdout || stderr;
}
export async function composeStop(
projectDir: string,
project: string
): Promise<string> {
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} stop`,
projectDir
);
return stdout || stderr;
}
export async function composeRestart(
projectDir: string,
project: string,
service?: string
): Promise<string> {
const svc = service ? ` ${validateName(service, 'service')}` : '';
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} restart${svc}`,
projectDir
);
return stdout || stderr;
}
export async function composePull(
projectDir: string,
project: string
): Promise<string> {
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} pull`,
projectDir,
300_000 // 5 min for pulls
);
return stdout || stderr;
}
export async function composeBuild(
projectDir: string,
project: string
): Promise<string> {
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} build`,
projectDir,
600_000 // 10 min for builds
);
return stdout || stderr;
}
/**
* List containers with status. Returns parsed container info.
*/
export async function composePs(
projectDir: string,
project: string
): Promise<ContainerInfo[]> {
const { stdout } = await execCmd(
`${composeCmd(project)} ps --format json`,
projectDir
);
if (!stdout.trim()) return [];
// docker compose ps --format json outputs one JSON object per line
const containers: ContainerInfo[] = [];
for (const line of stdout.trim().split('\n')) {
if (!line.trim()) continue;
try {
const raw = JSON.parse(line);
containers.push({
name: raw.Name || raw.name || '',
service: raw.Service || raw.service || '',
status: raw.Status || raw.status || '',
state: raw.State || raw.state || '',
health: raw.Health || raw.health || '',
ports: raw.Ports || raw.ports || '',
createdAt: raw.CreatedAt || raw.created_at || '',
exitCode: raw.ExitCode ?? raw.exit_code ?? 0,
});
} catch {
logger.warn(`[docker] Failed to parse container line: ${line}`);
}
}
return containers;
}
/**
* Get logs from a specific service.
*/
export async function composeLogs(
projectDir: string,
project: string,
service?: string,
tail = 200,
since?: string
): Promise<string> {
const parts = [composeCmd(project), 'logs', '--no-color'];
if (tail > 0) parts.push(`--tail=${validateTail(tail)}`);
if (since) parts.push(`--since=${validateDuration(since)}`);
if (service) parts.push(validateName(service, 'service'));
const { stdout, stderr } = await execCmd(parts.join(' '), projectDir);
return stdout || stderr;
}
/**
* Execute a command inside a running service container.
* Optionally pass environment variables via -e flags.
*/
export async function composeExec(
projectDir: string,
project: string,
service: string,
command: string,
timeoutMs = EXEC_TIMEOUT,
envVars?: Record<string, string>
): Promise<string> {
// Shell-quote values so secrets containing spaces or metacharacters
// cannot break out of the command line; keys are whitelisted by regex
const shellQuote = (v: string) => `'${v.replace(/'/g, `'\\''`)}'`;
const envFlags = envVars
? Object.entries(envVars)
.map(([k, v]) => `-e ${validateName(k, 'env var')}=${shellQuote(v)}`)
.join(' ') + ' '
: '';
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} exec -T ${envFlags}${validateName(service, 'service')} ${command}`,
projectDir,
timeoutMs
);
return stdout || stderr;
}
/**
* Run a one-off command in a service container (docker compose run).
* Uses --entrypoint "" to skip the service's entrypoint script.
* Useful for running setup commands (prisma db push, seed) without the entrypoint.
*/
export async function composeRun(
projectDir: string,
project: string,
service: string,
command: string,
timeoutMs = EXEC_TIMEOUT
): Promise<string> {
const { stdout, stderr } = await execCmd(
`${composeCmd(project)} run --rm --no-deps -T --entrypoint "" ${validateName(service, 'service')} ${command}`,
projectDir,
timeoutMs
);
return stdout || stderr;
}
// ─── Docker Socket API ───────────────────────────────────────────────
/**
* Make a request to the Docker Engine API via Unix socket.
*/
function dockerSocketRequest(path: string): Promise<string> {
return new Promise((resolve, reject) => {
const req = http.request(
{
socketPath: DOCKER_SOCKET,
path,
method: 'GET',
headers: { 'Content-Type': 'application/json' },
},
(res) => {
let data = '';
res.on('data', (chunk) => (data += chunk));
res.on('end', () => {
if (res.statusCode && res.statusCode >= 400) {
reject(new Error(`Docker API returned ${res.statusCode}: ${data}`));
} else {
resolve(data);
}
});
}
);
req.on('error', reject);
req.setTimeout(10_000, () => {
req.destroy();
reject(new Error('Docker socket request timed out'));
});
req.end();
});
}
/**
* Get status of a specific container by name via Docker socket API.
*/
export async function getContainerStatus(
containerName: string
): Promise<{ state: string; health: string; running: boolean } | null> {
try {
const data = await dockerSocketRequest(
`/containers/${encodeURIComponent(containerName)}/json`
);
const info = JSON.parse(data);
return {
state: info.State?.Status || 'unknown',
health: info.State?.Health?.Status || 'none',
running: info.State?.Running === true,
};
} catch {
return null;
}
}
/**
* Poll until a container reaches healthy state or timeout expires.
*/
export async function waitForHealthy(
containerName: string,
timeoutMs = 60_000,
pollIntervalMs = 2_000
): Promise<boolean> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const status = await getContainerStatus(containerName);
if (status?.health === 'healthy') return true;
if (status?.state === 'exited' || status?.state === 'dead') {
throw new Error(`Container ${containerName} exited unexpectedly`);
}
await new Promise((r) => setTimeout(r, pollIntervalMs));
}
throw new Error(`Container ${containerName} did not become healthy within ${timeoutMs}ms`);
}
/**
* Wait for an HTTP endpoint to respond with 200.
*/
export async function waitForHttp(
url: string,
timeoutMs = 120_000,
pollIntervalMs = 3_000
): Promise<boolean> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
try {
const response = await fetch(url, { signal: AbortSignal.timeout(5_000) });
if (response.ok) return true;
} catch {
// Expected while service is starting
}
await new Promise((r) => setTimeout(r, pollIntervalMs));
}
throw new Error(`HTTP endpoint ${url} did not respond within ${timeoutMs}ms`);
}


@ -0,0 +1,191 @@
import { InstanceStatus, HealthStatus } from '@prisma/client';
import { prisma } from '../lib/prisma';
import * as docker from './docker.service';
import { logger } from '../utils/logger';
import type { ContainerInfo } from './docker.service';
/**
* Determine overall health status from container list.
*/
function determineHealth(containers: ContainerInfo[]): {
status: HealthStatus;
serviceStatus: Record<string, { state: string; health: string }>;
totalServices: number;
healthyServices: number;
} {
if (containers.length === 0) {
return { status: HealthStatus.UNKNOWN, serviceStatus: {}, totalServices: 0, healthyServices: 0 };
}
const serviceStatus: Record<string, { state: string; health: string }> = {};
let healthyCount = 0;
let runningCount = 0;
for (const c of containers) {
serviceStatus[c.service || c.name] = {
state: c.state,
health: c.health,
};
const isRunning = c.state === 'running';
if (isRunning) runningCount++;
// A service is "healthy" if it's running AND either has no health check or passes it
const isHealthy = isRunning && (c.health === '' || c.health === 'healthy');
if (isHealthy) healthyCount++;
}
const total = containers.length;
let status: HealthStatus;
if (healthyCount === total) {
status = HealthStatus.HEALTHY;
} else if (runningCount === 0) {
status = HealthStatus.UNHEALTHY;
} else if (healthyCount >= total / 2) {
status = HealthStatus.DEGRADED;
} else {
status = HealthStatus.UNHEALTHY;
}
return { status, serviceStatus, totalServices: total, healthyServices: healthyCount };
}
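The aggregation rule above — all healthy, none running, or at-least-half healthy — can be sketched as a pure function using plain strings in place of the Prisma `HealthStatus` enum (an assumption for illustration only):

```typescript
// Minimal re-implementation of the health aggregation rule, for illustration.
type ContainerHealth = { state: string; health: string };

function aggregate(containers: ContainerHealth[]): string {
  if (containers.length === 0) return 'UNKNOWN';
  let healthy = 0;
  let running = 0;
  for (const c of containers) {
    const isRunning = c.state === 'running';
    if (isRunning) running++;
    // Healthy = running AND (no health check, or health check passing)
    if (isRunning && (c.health === '' || c.health === 'healthy')) healthy++;
  }
  if (healthy === containers.length) return 'HEALTHY';
  if (running === 0) return 'UNHEALTHY';
  return healthy >= containers.length / 2 ? 'DEGRADED' : 'UNHEALTHY';
}
```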
/**
* Check the health of a single instance. Returns the created HealthCheck record.
*/
export async function checkInstanceHealth(instanceId: string) {
const instance = await prisma.instance.findUnique({ where: { id: instanceId } });
if (!instance) {
throw new Error(`Instance ${instanceId} not found`);
}
if (instance.status !== InstanceStatus.RUNNING) {
throw new Error(`Instance ${instance.slug} is not running (status: ${instance.status})`);
}
const startTime = Date.now();
let containers: ContainerInfo[];
try {
containers = await docker.composePs(instance.basePath, instance.composeProject);
} catch (err) {
// If compose ps fails, record UNKNOWN status
const healthCheck = await prisma.healthCheck.create({
data: {
instanceId,
status: HealthStatus.UNKNOWN,
serviceStatus: {},
totalServices: 0,
healthyServices: 0,
responseTimeMs: Date.now() - startTime,
},
});
await prisma.instance.update({
where: { id: instanceId },
data: { lastHealthCheck: new Date() },
});
logger.warn(`[health] ${instance.slug}: compose ps failed: ${(err as Error).message}`);
return healthCheck;
}
const responseTimeMs = Date.now() - startTime;
const { status, serviceStatus, totalServices, healthyServices } = determineHealth(containers);
const healthCheck = await prisma.healthCheck.create({
data: {
instanceId,
status,
serviceStatus,
totalServices,
healthyServices,
responseTimeMs,
},
});
await prisma.instance.update({
where: { id: instanceId },
data: { lastHealthCheck: new Date() },
});
return healthCheck;
}
/**
* Check all running instances sequentially.
*/
export async function checkAllInstances(): Promise<void> {
const instances = await prisma.instance.findMany({
where: { status: InstanceStatus.RUNNING },
select: { id: true, slug: true },
});
if (instances.length === 0) {
logger.debug('[health] No running instances to check');
return;
}
let healthy = 0;
let degraded = 0;
let unhealthy = 0;
for (const inst of instances) {
try {
const check = await checkInstanceHealth(inst.id);
if (check.status === HealthStatus.HEALTHY) healthy++;
else if (check.status === HealthStatus.DEGRADED) degraded++;
else unhealthy++;
} catch (err) {
logger.warn(`[health] Failed to check ${inst.slug}: ${(err as Error).message}`);
unhealthy++;
}
}
logger.info(
`[health] Checked ${instances.length} instances: ${healthy} healthy, ${degraded} degraded, ${unhealthy} unhealthy`
);
}
/**
* Get paginated health history for an instance.
*/
export async function getHealthHistory(instanceId: string, page = 1, limit = 20) {
const [data, total] = await Promise.all([
prisma.healthCheck.findMany({
where: { instanceId },
orderBy: { checkedAt: 'desc' },
skip: (page - 1) * limit,
take: limit,
}),
prisma.healthCheck.count({ where: { instanceId } }),
]);
return { data, total, page, limit };
}
/**
* Start the periodic health check scheduler.
*/
export function startHealthScheduler(intervalMs: number): NodeJS.Timeout | null {
if (intervalMs <= 0) {
logger.info('[health] Automated health checks disabled (interval=0)');
return null;
}
logger.info(`[health] Starting health scheduler (interval: ${intervalMs}ms)`);
// Run initial check after a short delay (let services start)
setTimeout(() => {
checkAllInstances().catch((err) =>
logger.error(`[health] Initial check failed: ${(err as Error).message}`)
);
}, 10_000);
return setInterval(() => {
checkAllInstances().catch((err) =>
logger.error(`[health] Scheduled check failed: ${(err as Error).message}`)
);
}, intervalMs);
}


@ -0,0 +1,94 @@
import { prisma } from '../lib/prisma';
import { env } from '../config/env';
import { AppError } from '../middleware/error-handler';
interface PortRangeConfig {
service: string;
start: number;
end: number;
}
function getPortRanges(): PortRangeConfig[] {
return [
{ service: 'api', start: env.PORT_RANGE_API_START, end: env.PORT_RANGE_API_END },
{ service: 'admin', start: env.PORT_RANGE_ADMIN_START, end: env.PORT_RANGE_ADMIN_END },
{ service: 'postgres', start: env.PORT_RANGE_POSTGRES_START, end: env.PORT_RANGE_POSTGRES_END },
{ service: 'nginx', start: env.PORT_RANGE_NGINX_START, end: env.PORT_RANGE_NGINX_END },
{ service: 'embed', start: env.PORT_RANGE_EMBED_START, end: env.PORT_RANGE_EMBED_END },
];
}
async function findNextAvailablePort(
start: number,
end: number,
tx?: Parameters<Parameters<typeof prisma.$transaction>[0]>[0]
): Promise<number> {
const client = tx || prisma;
const allocated = await client.portAllocation.findMany({
where: { port: { gte: start, lte: end } },
select: { port: true },
orderBy: { port: 'asc' },
});
const usedPorts = new Set(allocated.map((a) => a.port));
for (let port = start; port <= end; port++) {
if (!usedPorts.has(port)) {
return port;
}
}
throw new AppError(503, `No available ports in range ${start}-${end}`, 'PORT_EXHAUSTED');
}
export interface AllocatedPorts {
config: Record<string, number>;
allocations: Array<{ port: number; service: string }>;
}
/**
* Allocate ports using a serializable transaction to prevent race conditions.
* Port allocation records are created immediately to act as a DB-level lock.
* They are linked to the instance later via instances.service.ts.
*/
export async function allocatePorts(): Promise<AllocatedPorts> {
return prisma.$transaction(async (tx) => {
const ranges = getPortRanges();
const config: Record<string, number> = {};
const allocations: Array<{ port: number; service: string }> = [];
for (const range of ranges) {
const port = await findNextAvailablePort(range.start, range.end, tx);
config[range.service] = port;
allocations.push({ port, service: range.service });
}
return { config, allocations };
  }, { isolationLevel: 'Serializable' }); // enforce the serializable guarantee documented above
}
export async function releasePorts(instanceId: string): Promise<void> {
await prisma.portAllocation.deleteMany({ where: { instanceId } });
}
export async function getPortUsage(): Promise<{
ranges: Array<{ service: string; start: number; end: number; used: number; total: number }>;
}> {
const ranges = getPortRanges();
const result = [];
for (const range of ranges) {
const used = await prisma.portAllocation.count({
where: { port: { gte: range.start, lte: range.end } },
});
result.push({
service: range.service,
start: range.start,
end: range.end,
used,
total: range.end - range.start + 1,
});
}
return { ranges: result };
}
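The core of `findNextAvailablePort` is a linear scan for the lowest free port in a range. A self-contained sketch, with the Prisma query replaced by a plain `Set` (the function name here is illustrative):

```typescript
// Given the set of already-allocated ports, return the lowest free port in
// [start, end], or throw once the range is exhausted — the same behavior
// the service maps to a 503 PORT_EXHAUSTED error.
function nextFreePort(allocated: Set<number>, start: number, end: number): number {
  for (let port = start; port <= end; port++) {
    if (!allocated.has(port)) return port;
  }
  throw new Error(`No available ports in range ${start}-${end}`);
}
```

In the real service this scan runs inside a database transaction, so two concurrent deployments cannot both observe the same port as free.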

View File

@ -0,0 +1,73 @@
import crypto from 'crypto';
function randomHex(bytes = 32): string {
return crypto.randomBytes(bytes).toString('hex');
}
function randomPassword(length = 16): string {
// Generate password meeting CML policy: 12+ chars, uppercase, lowercase, digit
const upper = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
const lower = 'abcdefghijklmnopqrstuvwxyz';
const digits = '0123456789';
// Avoid $, #, !, % — they break Docker Compose .env files
// ($=var expansion, #=comment, !=bash history, %=printf)
const special = '@&*_-+=';
const all = upper + lower + digits + special;
// Ensure at least one of each required class
const required = [
upper[crypto.randomInt(upper.length)],
lower[crypto.randomInt(lower.length)],
digits[crypto.randomInt(digits.length)],
special[crypto.randomInt(special.length)],
];
// Fill remaining with random chars
const remaining = Array.from({ length: length - required.length }, () =>
all[crypto.randomInt(all.length)]
);
// Shuffle
const chars = [...required, ...remaining];
for (let i = chars.length - 1; i > 0; i--) {
const j = crypto.randomInt(i + 1);
[chars[i], chars[j]] = [chars[j], chars[i]];
}
return chars.join('');
}
export interface InstanceSecrets {
postgresPassword: string;
redisPassword: string;
jwtAccessSecret: string;
jwtRefreshSecret: string;
encryptionKey: string;
initialAdminPassword: string;
nocodbAdminPassword: string;
grafanaAdminPassword: string;
listmonkAdminPassword: string;
listmonkApiToken: string;
giteaAdminPassword: string;
n8nEncryptionKey: string;
gancioAdminPassword: string;
}
export function generateSecrets(adminEmail: string): InstanceSecrets & { adminEmail: string } {
return {
adminEmail,
postgresPassword: randomHex(16),
redisPassword: randomHex(16),
jwtAccessSecret: randomHex(32),
jwtRefreshSecret: randomHex(32),
encryptionKey: randomHex(32),
initialAdminPassword: randomPassword(16),
nocodbAdminPassword: randomPassword(16),
grafanaAdminPassword: randomPassword(16),
listmonkAdminPassword: randomPassword(16),
listmonkApiToken: randomHex(16),
giteaAdminPassword: randomPassword(16),
n8nEncryptionKey: randomHex(32),
gancioAdminPassword: randomPassword(16),
};
}
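The policy `randomPassword` targets (12+ characters, uppercase, lowercase, digit, no Compose-hostile specials) can be checked independently. A minimal validator sketch — `meetsCmlPolicy` is a hypothetical name, not part of CML:

```typescript
// True when a password satisfies the CML policy the generator guarantees:
// length >= 12, at least one uppercase, lowercase, and digit, and none of
// the characters that break Docker Compose .env parsing ($, #, !, %).
function meetsCmlPolicy(pw: string): boolean {
  return (
    pw.length >= 12 &&
    /[A-Z]/.test(pw) &&
    /[a-z]/.test(pw) &&
    /[0-9]/.test(pw) &&
    !/[$#!%]/.test(pw)
  );
}
```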

View File

@ -0,0 +1,227 @@
import Handlebars from 'handlebars';
import fs from 'fs/promises';
import path from 'path';
import { logger } from '../utils/logger';
// Register helpers
Handlebars.registerHelper('ifEq', function (this: unknown, a: unknown, b: unknown, options: Handlebars.HelperOptions) {
return a === b ? options.fn(this) : options.inverse(this);
});
Handlebars.registerHelper('unless', function (this: unknown, condition: unknown, options: Handlebars.HelperOptions) {
return !condition ? options.fn(this) : options.inverse(this);
});
Handlebars.registerHelper('now', function () {
return new Date().toISOString();
});
Handlebars.registerHelper('math', function (a: number, op: string, b: number) {
switch (op) {
case '+': return a + b;
case '-': return a - b;
default: return a;
}
});
export interface TemplateContext {
// Instance identity
slug: string;
name: string;
domain: string;
containerPrefix: string; // e.g., "cml-better-edmonton"
networkName: string; // e.g., "cml-better-edmonton"
composeProject: string; // e.g., "cml-better-edmonton"
// Ports
ports: {
api: number;
admin: number;
postgres: number;
nginx: number;
embed: number;
};
// Secrets (decrypted)
secrets: {
postgresPassword: string;
redisPassword: string;
jwtAccessSecret: string;
jwtRefreshSecret: string;
encryptionKey: string;
initialAdminPassword: string;
adminEmail: string;
nocodbAdminPassword: string;
grafanaAdminPassword: string;
listmonkAdminPassword: string;
listmonkApiToken: string;
giteaAdminPassword: string;
n8nEncryptionKey: string;
gancioAdminPassword: string;
};
// Feature flags
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
// SMTP
smtpHost: string;
smtpPort: number;
smtpUser: string;
smtpFrom: string;
emailTestMode: boolean;
// Git
gitBranch: string;
}
/** Subset of Instance fields needed to build a TemplateContext. */
export interface InstanceForTemplate {
slug: string;
name: string;
domain: string;
composeProject: string;
portConfig: unknown;
enableMedia: boolean;
enableChat: boolean;
enableGancio: boolean;
enableListmonk: boolean;
enableMonitoring: boolean;
enableDevTools: boolean;
enablePayments: boolean;
smtpHost: string | null;
smtpPort: number | null;
smtpUser: string | null;
smtpFrom: string | null;
emailTestMode: boolean;
gitBranch: string;
}
/**
* Build the TemplateContext from an instance record and its decrypted secrets.
*/
export function buildTemplateContext(
instance: InstanceForTemplate,
secrets: Record<string, string>
): TemplateContext {
const ports = instance.portConfig as Record<string, number>;
return {
slug: instance.slug,
name: instance.name,
domain: instance.domain,
containerPrefix: instance.composeProject,
networkName: instance.composeProject,
composeProject: instance.composeProject,
ports: {
api: ports.api,
admin: ports.admin,
postgres: ports.postgres,
nginx: ports.nginx,
embed: ports.embed,
},
secrets: {
postgresPassword: secrets.postgresPassword,
redisPassword: secrets.redisPassword,
jwtAccessSecret: secrets.jwtAccessSecret,
jwtRefreshSecret: secrets.jwtRefreshSecret,
encryptionKey: secrets.encryptionKey,
initialAdminPassword: secrets.initialAdminPassword,
adminEmail: secrets.adminEmail,
nocodbAdminPassword: secrets.nocodbAdminPassword,
grafanaAdminPassword: secrets.grafanaAdminPassword,
listmonkAdminPassword: secrets.listmonkAdminPassword,
listmonkApiToken: secrets.listmonkApiToken,
giteaAdminPassword: secrets.giteaAdminPassword,
n8nEncryptionKey: secrets.n8nEncryptionKey,
gancioAdminPassword: secrets.gancioAdminPassword,
},
enableMedia: instance.enableMedia,
enableChat: instance.enableChat,
enableGancio: instance.enableGancio,
enableListmonk: instance.enableListmonk,
enableMonitoring: instance.enableMonitoring,
enableDevTools: instance.enableDevTools,
enablePayments: instance.enablePayments,
smtpHost: instance.smtpHost || '',
smtpPort: instance.smtpPort || 587,
smtpUser: instance.smtpUser || '',
smtpFrom: instance.smtpFrom || '',
emailTestMode: instance.emailTestMode,
gitBranch: instance.gitBranch,
};
}
const templateCache = new Map<string, HandlebarsTemplateDelegate>();
async function loadTemplate(templatePath: string): Promise<HandlebarsTemplateDelegate> {
if (templateCache.has(templatePath)) {
return templateCache.get(templatePath)!;
}
const source = await fs.readFile(templatePath, 'utf-8');
const compiled = Handlebars.compile(source, { noEscape: true });
templateCache.set(templatePath, compiled);
return compiled;
}
export async function renderTemplate(templateName: string, context: TemplateContext): Promise<string> {
const templatesDir = path.resolve(__dirname, '../..', 'templates');
const templatePath = path.join(templatesDir, templateName);
const template = await loadTemplate(templatePath);
return template(context);
}
export async function renderAllTemplates(context: TemplateContext, outputDir: string): Promise<void> {
const templatesDir = path.resolve(__dirname, '../..', 'templates');
const templateFiles = [
{ template: 'docker-compose.yml.hbs', output: 'docker-compose.yml' },
{ template: 'env.hbs', output: '.env' },
{ template: 'nginx/conf.d/default.conf.hbs', output: 'nginx/conf.d/default.conf' },
{ template: 'nginx/conf.d/api.conf.hbs', output: 'nginx/conf.d/api.conf' },
{ template: 'nginx/conf.d/services.conf.hbs', output: 'nginx/conf.d/services.conf' },
{ template: 'configs/pangolin/resources.yml.hbs', output: 'configs/pangolin/resources.yml' },
{ template: 'configs/prometheus/prometheus.yml.hbs', output: 'configs/prometheus/prometheus.yml' },
];
for (const { template, output } of templateFiles) {
const templatePath = path.join(templatesDir, template);
try {
await fs.access(templatePath);
} catch {
logger.warn(`Template not found: ${template}, skipping`);
continue;
}
const rendered = await renderTemplate(template, context);
const outputPath = path.join(outputDir, output);
// Ensure output directory exists
await fs.mkdir(path.dirname(outputPath), { recursive: true });
await fs.writeFile(outputPath, rendered, 'utf-8');
logger.debug(`Rendered ${template} -> ${outputPath}`);

}
// Copy static files (nginx.conf doesn't need templating)
const staticFiles = ['nginx/nginx.conf'];
for (const file of staticFiles) {
const srcPath = path.join(templatesDir, file);
try {
await fs.access(srcPath);
const destPath = path.join(outputDir, file);
await fs.mkdir(path.dirname(destPath), { recursive: true });
await fs.copyFile(srcPath, destPath);
} catch {
logger.warn(`Static file not found: ${file}, skipping`);
}
}
}
export function clearTemplateCache(): void {
templateCache.clear();
}
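The `math` helper registered above exists mainly so the compose template can derive the fourteen embed-proxy host ports from a single `ports.embed` base (`{{math ports.embed "+" n}}` published to internal ports 8881 and up). A sketch of the resulting mapping, assuming a count of 14 as in the generated compose file:

```typescript
// Compute the host -> container port pairs the compose template emits for
// the embed proxies: host ports are embedBase + offset, container ports
// are fixed starting at 8881.
function embedPortMap(
  embedBase: number,
  proxies = 14
): Array<{ host: number; container: number }> {
  return Array.from({ length: proxies }, (_, i) => ({
    host: embedBase + i,
    container: 8881 + i,
  }));
}
```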

View File

@ -0,0 +1,37 @@
import crypto from 'crypto';
import { env } from '../config/env';
const ALGORITHM = 'aes-256-gcm';
const IV_LENGTH = 16;
const TAG_LENGTH = 16;
function getKey(): Buffer {
return Buffer.from(env.ENCRYPTION_KEY, 'hex').subarray(0, 32);
}
export function encrypt(plaintext: string): string {
const iv = crypto.randomBytes(IV_LENGTH);
const cipher = crypto.createCipheriv(ALGORITHM, getKey(), iv);
const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();
// Format: iv:tag:ciphertext (all base64)
return [iv.toString('base64'), tag.toString('base64'), encrypted.toString('base64')].join(':');
}
export function decrypt(encryptedText: string): string {
const [ivB64, tagB64, ciphertextB64] = encryptedText.split(':');
const iv = Buffer.from(ivB64, 'base64');
const tag = Buffer.from(tagB64, 'base64');
const ciphertext = Buffer.from(ciphertextB64, 'base64');
const decipher = crypto.createDecipheriv(ALGORITHM, getKey(), iv);
decipher.setAuthTag(tag);
return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
export function encryptJson(data: Record<string, unknown>): string {
return encrypt(JSON.stringify(data));
}
export function decryptJson<T = Record<string, unknown>>(encryptedText: string): T {
return JSON.parse(decrypt(encryptedText)) as T;
}
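The `iv:tag:ciphertext` scheme above round-trips as follows. This self-contained sketch swaps `env.ENCRYPTION_KEY` for a throwaway random key; everything else mirrors the module:

```typescript
import crypto from 'crypto';

const key = crypto.randomBytes(32); // stand-in for the hex-decoded ENCRYPTION_KEY

function encrypt(plaintext: string): string {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag(); // GCM tag authenticates iv + ciphertext
  return [iv, tag, encrypted].map((b) => b.toString('base64')).join(':');
}

function decrypt(token: string): string {
  const [iv, tag, ciphertext] = token.split(':').map((s) => Buffer.from(s, 'base64'));
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // final() throws if the tag does not verify
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```

Because GCM is authenticated, any tampering with the stored token makes `decipher.final()` throw rather than return garbage.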

View File

@ -0,0 +1,13 @@
import winston from 'winston';
export const logger = winston.createLogger({
level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
process.env.NODE_ENV === 'production'
? winston.format.json()
: winston.format.combine(winston.format.colorize(), winston.format.simple())
),
transports: [new winston.transports.Console()],
});

View File

@ -0,0 +1,24 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "commonjs",
"lib": ["ES2022"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"baseUrl": ".",
"paths": {
"@/*": ["src/*"],
"@prisma/*": ["prisma/*"]
}
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}

View File

@ -0,0 +1,91 @@
# Changemaker Control Panel — Docker Compose
# Start: docker compose up -d
services:
ccp-postgres:
image: postgres:16-alpine
container_name: ccp-postgres
restart: unless-stopped
environment:
POSTGRES_USER: ccp
POSTGRES_PASSWORD: ${CCP_POSTGRES_PASSWORD:-ccp_secret}
POSTGRES_DB: ccp
volumes:
- ccp-postgres-data:/var/lib/postgresql/data
ports:
- "127.0.0.1:${CCP_POSTGRES_PORT:-5480}:5432"
networks:
- ccp-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ccp -d ccp"]
interval: 10s
timeout: 5s
retries: 5
ccp-redis:
image: redis:7-alpine
container_name: ccp-redis
restart: unless-stopped
command: redis-server --requirepass ${REDIS_PASSWORD:-ccp_redis_secret}
volumes:
- ccp-redis-data:/data
ports:
- "127.0.0.1:${CCP_REDIS_PORT:-6399}:6379"
networks:
- ccp-network
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD:-ccp_redis_secret}", "ping"]
interval: 10s
timeout: 5s
retries: 5
ccp-api:
build:
context: .
dockerfile: Dockerfile.api
container_name: ccp-api
restart: unless-stopped
depends_on:
ccp-postgres:
condition: service_healthy
ccp-redis:
condition: service_healthy
env_file: .env
environment:
DATABASE_URL: postgresql://ccp:${CCP_POSTGRES_PASSWORD:-ccp_secret}@ccp-postgres:5432/ccp
REDIS_URL: redis://:${REDIS_PASSWORD:-ccp_redis_secret}@ccp-redis:6379
ports:
- "${CCP_API_PORT:-5000}:5000"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- ./api:/app
- /app/node_modules
- ./templates:/app/templates:ro
- /var/run/docker.sock:/var/run/docker.sock
- ${INSTANCES_BASE_PATH}:${INSTANCES_BASE_PATH}
- ${CML_SOURCE_PATH}:${CML_SOURCE_PATH}:ro
- ${BACKUP_STORAGE_PATH}:${BACKUP_STORAGE_PATH}
networks:
- ccp-network
ccp-admin:
build:
context: .
dockerfile: Dockerfile.admin
container_name: ccp-admin
restart: unless-stopped
depends_on:
- ccp-api
ports:
- "${CCP_ADMIN_PORT:-5100}:5100"
networks:
- ccp-network
volumes:
ccp-postgres-data:
ccp-redis-data:
networks:
ccp-network:
driver: bridge

View File

@ -0,0 +1,91 @@
#!/usr/bin/env bash
# ============================================================
# Changemaker Control Panel — Setup Script
# ============================================================
# Detects the installation directory and configures .env with
# resolved absolute paths. Run once after cloning, or re-run
# any time you move the CCP directory.
#
# Usage:
# cd changemaker-control-panel
# chmod +x setup.sh
# ./setup.sh
# ============================================================
set -euo pipefail
# ── Resolve absolute CCP directory ──────────────────────────
CCP_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# ── Derive paths ────────────────────────────────────────────
INSTANCES_PATH="${CCP_DIR}/instances"
BACKUP_PATH="${CCP_DIR}/backups"
# Auto-detect CML source path (CCP is expected inside the CML repo)
CML_SOURCE="${CCP_DIR%/changemaker-control-panel}"
if [[ ! -f "${CML_SOURCE}/docker-compose.yml" ]]; then
CML_SOURCE=""
fi
echo ""
echo "Changemaker Control Panel — Setup"
echo "=================================="
echo " CCP directory: ${CCP_DIR}"
echo " Instances path: ${INSTANCES_PATH}"
echo " Backup path: ${BACKUP_PATH}"
[[ -n "${CML_SOURCE}" ]] && echo " CML source: ${CML_SOURCE}"
echo ""
# ── Create directories ──────────────────────────────────────
mkdir -p "${INSTANCES_PATH}"
mkdir -p "${BACKUP_PATH}"
echo "✓ Created instance and backup directories"
# ── Create .env from .env.example if it doesn't exist ───────
if [[ ! -f "${CCP_DIR}/.env" ]]; then
if [[ -f "${CCP_DIR}/.env.example" ]]; then
cp "${CCP_DIR}/.env.example" "${CCP_DIR}/.env"
echo "✓ Created .env from .env.example"
else
echo "⚠ No .env.example found — creating minimal .env"
touch "${CCP_DIR}/.env"
fi
fi
# ── Helper: set or update a key in .env ─────────────────────
update_env() {
local key="$1" value="$2" file="${CCP_DIR}/.env"
if grep -q "^${key}=" "$file" 2>/dev/null; then
sed -i "s|^${key}=.*|${key}=${value}|" "$file"
else
echo "${key}=${value}" >> "$file"
fi
}
# ── Set resolved paths ──────────────────────────────────────
update_env "INSTANCES_BASE_PATH" "${INSTANCES_PATH}"
update_env "BACKUP_STORAGE_PATH" "${BACKUP_PATH}"
[[ -n "${CML_SOURCE}" ]] && update_env "CML_SOURCE_PATH" "${CML_SOURCE}"
echo "✓ Updated .env with resolved absolute paths"
# ── Generate random secrets if still using placeholders ─────
generate_secret() {
openssl rand -hex 32
}
for key in JWT_ACCESS_SECRET JWT_REFRESH_SECRET ENCRYPTION_KEY; do
current=$(grep "^${key}=" "${CCP_DIR}/.env" 2>/dev/null | cut -d= -f2- || true)
if [[ "$current" == *"change-me"* ]] || \
[[ "$current" =~ ^[a]{32,}$ ]] || \
[[ "$current" =~ ^[b]{32,}$ ]] || \
[[ "$current" =~ ^[c]{32,}$ ]]; then
update_env "$key" "$(generate_secret)"
echo "✓ Generated random ${key}"
fi
done
echo ""
echo "Setup complete! Next steps:"
echo " 1. Review ${CCP_DIR}/.env and adjust settings as needed"
echo " 2. Run: docker compose up -d"
echo ""

View File

@ -0,0 +1,56 @@
# Pangolin Resources — Instance: {{name}}
# Maps subdomains to internal services via Newt tunnel
resources:
- name: app
subdomain: app
target: http://{{containerPrefix}}-nginx:80
isBaseDomain: false
- name: api
subdomain: api
target: http://{{containerPrefix}}-nginx:80
isBaseDomain: false
{{#if enableMedia}}
- name: media
subdomain: media
target: http://{{containerPrefix}}-nginx:80
isBaseDomain: false
{{/if}}
- name: docs
subdomain: docs
target: http://{{containerPrefix}}-mkdocs:8000
isBaseDomain: false
- name: db
subdomain: db
target: http://{{containerPrefix}}-nocodb:8080
isBaseDomain: false
{{#if enableListmonk}}
- name: listmonk
subdomain: listmonk
target: http://{{containerPrefix}}-listmonk:9000
isBaseDomain: false
{{/if}}
{{#if enableGancio}}
- name: events
subdomain: events
target: http://{{containerPrefix}}-gancio:13120
isBaseDomain: false
{{/if}}
{{#if enableMonitoring}}
- name: grafana
subdomain: grafana
target: http://{{containerPrefix}}-grafana:3000
isBaseDomain: false
{{/if}}
- name: root
subdomain: ""
target: http://{{containerPrefix}}-mkdocs:8000
isBaseDomain: true

View File

@ -0,0 +1,22 @@
# Prometheus — Instance: {{name}}
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: '{{composeProject}}-api'
static_configs:
- targets: ['{{containerPrefix}}-api:4000']
metrics_path: /api/metrics
{{#if enableMedia}}
- job_name: '{{composeProject}}-media-api'
static_configs:
- targets: ['{{containerPrefix}}-media-api:4100']
metrics_path: /api/metrics
{{/if}}
- job_name: '{{composeProject}}-redis'
static_configs:
- targets: ['{{containerPrefix}}-redis-exporter:9121']

View File

@ -0,0 +1,658 @@
# Changemaker Lite — Instance: {{name}}
# Compose project: {{composeProject}}
# Generated by CCP
services:
# ─── Core Infrastructure ───────────────────────────────────
v2-postgres:
image: postgres:16-alpine
container_name: {{containerPrefix}}-postgres
restart: unless-stopped
environment:
POSTGRES_USER: changemaker
POSTGRES_PASSWORD: {{secrets.postgresPassword}}
POSTGRES_DB: changemaker_v2
volumes:
- {{containerPrefix}}-postgres-data:/var/lib/postgresql/data
- ./api/prisma/init-nocodb-db.sh:/docker-entrypoint-initdb.d/10-init-nocodb.sh:ro
- ./api/prisma/init-gancio-db.sh:/docker-entrypoint-initdb.d/20-init-gancio.sh:ro
ports:
- "127.0.0.1:{{ports.postgres}}:5432"
networks:
- {{networkName}}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U changemaker -d changemaker_v2"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: {{containerPrefix}}-redis
restart: unless-stopped
command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy noeviction --requirepass {{secrets.redisPassword}}
volumes:
- {{containerPrefix}}-redis-data:/data
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "redis-cli", "-a", "{{secrets.redisPassword}}", "ping"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
# ─── Application Services ──────────────────────────────────
api:
build:
context: ./api
dockerfile: Dockerfile
target: development
container_name: {{containerPrefix}}-api
restart: unless-stopped
depends_on:
v2-postgres:
condition: service_healthy
redis:
condition: service_healthy
env_file: .env
environment:
DATABASE_URL: postgresql://changemaker:{{secrets.postgresPassword}}@{{containerPrefix}}-postgres:5432/changemaker_v2
REDIS_URL: redis://:{{secrets.redisPassword}}@{{containerPrefix}}-redis:6379
PORT: "4000"
NAR_DATA_DIR: /data
LISTMONK_URL: http://{{containerPrefix}}-listmonk:9000
ADMIN_URL: https://app.{{domain}}
API_URL: https://api.{{domain}}
{{#if enableGancio}}
GANCIO_URL: http://{{containerPrefix}}-gancio:13120
{{/if}}
ports:
- "{{ports.api}}:4000"
volumes:
- ./api:/app
- /app/node_modules
- ./assets/uploads:/app/uploads
- ./mkdocs:/mkdocs:rw
- ./data:/data:ro
- ./configs:/app/configs:ro
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:4000/api/health"]
interval: 15s
timeout: 5s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: '2'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
admin:
build:
context: ./admin
target: development
container_name: {{containerPrefix}}-admin
restart: unless-stopped
depends_on:
- api
environment:
DOMAIN: {{domain}}
NODE_ENV: production
VITE_API_URL: http://{{containerPrefix}}-api:4000
VITE_MKDOCS_URL: http://{{containerPrefix}}-mkdocs:8000
{{#if enableMedia}}
VITE_MEDIA_API_URL: http://{{containerPrefix}}-media-api:4100
{{/if}}
volumes:
- ./admin:/app
- /app/node_modules
ports:
- "{{ports.admin}}:3000"
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:3000/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 20s
{{#if enableMedia}}
media-api:
build:
context: ./api
dockerfile: Dockerfile.media
target: development
container_name: {{containerPrefix}}-media-api
restart: unless-stopped
depends_on:
v2-postgres:
condition: service_healthy
redis:
condition: service_healthy
env_file: .env
environment:
DATABASE_URL: postgresql://changemaker:{{secrets.postgresPassword}}@{{containerPrefix}}-postgres:5432/changemaker_v2
REDIS_URL: redis://:{{secrets.redisPassword}}@{{containerPrefix}}-redis:6379
MEDIA_API_PORT: "4100"
CORS_ORIGINS: https://app.{{domain}},http://localhost:{{ports.admin}}
ENABLE_MEDIA_FEATURES: "true"
MEDIA_ROOT: /media/local
MEDIA_UPLOADS: /media/uploads
volumes:
- ./api:/app
- /app/node_modules
- ./media:/media:ro
- ./media/local/inbox:/media/local/inbox:rw
- ./media/local/thumbnails:/media/local/thumbnails:rw
- ./media/local/photos:/media/local/photos:rw
- ./media/public:/media/public:rw
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:4100/health"]
interval: 15s
timeout: 5s
retries: 3
start_period: 30s
deploy:
resources:
limits:
cpus: '2'
memory: 1G
reservations:
cpus: '0.25'
memory: 256M
{{/if}}
# ─── Reverse Proxy ─────────────────────────────────────────
nginx:
image: nginx:alpine
container_name: {{containerPrefix}}-nginx
restart: unless-stopped
depends_on:
- api
- admin
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/conf.d:/etc/nginx/conf.d:ro
ports:
- "{{ports.nginx}}:80"
- "{{math ports.embed "+" 0}}:8881" # NocoDB embed proxy
- "{{math ports.embed "+" 1}}:8882" # n8n embed proxy
- "{{math ports.embed "+" 2}}:8883" # Gitea embed proxy
- "{{math ports.embed "+" 3}}:8884" # MailHog embed proxy
- "{{math ports.embed "+" 4}}:8885" # Mini QR embed proxy
- "{{math ports.embed "+" 5}}:8886" # Excalidraw embed proxy
- "{{math ports.embed "+" 6}}:8887" # Homepage embed proxy
- "{{math ports.embed "+" 7}}:8888" # Code Server embed proxy
- "{{math ports.embed "+" 8}}:8889" # MkDocs embed proxy
- "{{math ports.embed "+" 9}}:8890" # Vaultwarden embed proxy
- "{{math ports.embed "+" 10}}:8891" # Rocket.Chat embed proxy
- "{{math ports.embed "+" 11}}:8892" # Gancio embed proxy
- "{{math ports.embed "+" 12}}:8893" # Grafana embed proxy
- "{{math ports.embed "+" 13}}:8894" # Listmonk embed proxy
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:80/"]
interval: 30s
timeout: 5s
retries: 3
# ─── Supporting Services ───────────────────────────────────
nocodb-v2:
image: nocodb/nocodb:latest
container_name: {{containerPrefix}}-nocodb
restart: unless-stopped
depends_on:
v2-postgres:
condition: service_healthy
environment:
NC_DB: pg://{{containerPrefix}}-postgres:5432?u=changemaker&p={{secrets.postgresPassword}}&d=nocodb_meta
NC_ADMIN_EMAIL: {{secrets.adminEmail}}
NC_ADMIN_PASSWORD: {{secrets.nocodbAdminPassword}}
volumes:
- {{containerPrefix}}-nocodb-data:/usr/app/data
networks:
- {{networkName}}
mailhog:
image: mailhog/mailhog:latest
container_name: {{containerPrefix}}-mailhog
restart: unless-stopped
networks:
- {{networkName}}
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
mkdocs:
image: squidfunk/mkdocs-material:latest
container_name: {{containerPrefix}}-mkdocs
restart: unless-stopped
volumes:
- ./mkdocs:/docs:rw
- ./assets/images:/docs/assets/images:rw
user: "1000:1000"
environment:
SITE_URL: https://{{domain}}
ADMIN_PORT: "{{ports.admin}}"
ADMIN_URL: https://app.{{domain}}
BASE_DOMAIN: https://{{domain}}
API_URL: https://api.{{domain}}
API_PORT: "{{ports.api}}"
{{#if enableMedia}}
MEDIA_API_PUBLIC_URL: https://media.{{domain}}
MEDIA_API_PORT: "4100"
{{/if}}
{{#if enableGancio}}
GANCIO_URL: http://{{containerPrefix}}-gancio:13120
GANCIO_PORT: "8092"
{{/if}}
command: serve --dev-addr=0.0.0.0:8000 --watch-theme --livereload
networks:
- {{networkName}}
{{#if enableListmonk}}
listmonk-db:
image: postgres:17-alpine
container_name: {{containerPrefix}}-listmonk-db
restart: unless-stopped
environment:
POSTGRES_USER: listmonk
POSTGRES_PASSWORD: {{secrets.listmonkAdminPassword}}
POSTGRES_DB: listmonk
volumes:
- {{containerPrefix}}-listmonk-data:/var/lib/postgresql/data
networks:
- {{networkName}}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U listmonk"]
interval: 10s
timeout: 5s
retries: 6
listmonk-app:
image: listmonk/listmonk:latest
container_name: {{containerPrefix}}-listmonk
restart: unless-stopped
depends_on:
listmonk-db:
condition: service_healthy
command: [sh, -c, "./listmonk --install --idempotent --yes --config '' && ./listmonk --upgrade --yes --config '' && ./listmonk --config ''"]
environment:
LISTMONK_app__address: "0.0.0.0:9000"
LISTMONK_db__host: {{containerPrefix}}-listmonk-db
LISTMONK_db__port: "5432"
LISTMONK_db__user: listmonk
LISTMONK_db__password: {{secrets.listmonkAdminPassword}}
LISTMONK_db__database: listmonk
LISTMONK_db__ssl_mode: disable
TZ: Etc/UTC
LISTMONK_ADMIN_USER: admin
LISTMONK_ADMIN_PASSWORD: {{secrets.listmonkAdminPassword}}
volumes:
- ./assets/uploads:/listmonk/uploads:rw
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:9000/"]
interval: 30s
timeout: 5s
retries: 3
start_period: 30s
listmonk-init:
image: postgres:17-alpine
container_name: {{containerPrefix}}-listmonk-init
depends_on:
listmonk-app:
condition: service_started
restart: "no"
environment:
PGPASSWORD: {{secrets.listmonkAdminPassword}}
LISTMONK_API_USER: v2-api
LISTMONK_API_TOKEN: {{secrets.listmonkApiToken}}
LISTMONK_SMTP_HOST: {{containerPrefix}}-mailhog
LISTMONK_SMTP_PORT: "1025"
entrypoint: ["/bin/sh", "-c"]
command:
- |
echo "[listmonk-init] Waiting for Listmonk tables..."
for i in $$(seq 1 30); do
if psql -h {{containerPrefix}}-listmonk-db -U listmonk -d listmonk -c "SELECT 1 FROM users LIMIT 1" >/dev/null 2>&1; then
break
fi
sleep 2
done
if [ -n "$$LISTMONK_API_TOKEN" ]; then
echo "[listmonk-init] Upserting API user '$$LISTMONK_API_USER'..."
psql -h {{containerPrefix}}-listmonk-db -U listmonk -d listmonk -q <<SQL
INSERT INTO users (username, password, password_login, email, name, type, user_role_id, status)
VALUES ('$$LISTMONK_API_USER', '$$LISTMONK_API_TOKEN', true, '$$LISTMONK_API_USER@api.internal', '$$LISTMONK_API_USER', 'api', 1, 'enabled')
ON CONFLICT (username) DO UPDATE SET password = EXCLUDED.password, status = 'enabled', user_role_id = 1;
SQL
echo "[listmonk-init] API user configured"
else
echo "[listmonk-init] LISTMONK_API_TOKEN not set, skipping API user"
fi
MAILHOG_ENTRY='{"host":"{{containerPrefix}}-mailhog","port":1025,"username":"","password":"","tls_type":"none","auth_protocol":"none","enabled":true,"max_conns":5,"idle_timeout":"15s","wait_timeout":"5s","max_msg_retries":2,"tls_skip_verify":false,"email_headers":[],"hello_hostname":""}'
SMTP_VALUE="[$$MAILHOG_ENTRY]"
psql -h {{containerPrefix}}-listmonk-db -U listmonk -d listmonk -q <<SQL
UPDATE settings SET value = '$$SMTP_VALUE' WHERE key = 'smtp';
SQL
echo "[listmonk-init] SMTP configured"
echo "[listmonk-init] Done"
networks:
- {{networkName}}
{{/if}}
{{#if enableGancio}}
gancio:
image: cisti/gancio:latest
container_name: {{containerPrefix}}-gancio
restart: unless-stopped
depends_on:
v2-postgres:
condition: service_healthy
environment:
GANCIO_DATA: /home/node/data
NODE_ENV: production
GANCIO_DB_DIALECT: postgres
GANCIO_DB_HOST: {{containerPrefix}}-postgres
GANCIO_DB_PORT: "5432"
GANCIO_DB_DATABASE: gancio
GANCIO_DB_USERNAME: changemaker
GANCIO_DB_PASSWORD: {{secrets.postgresPassword}}
server__baseurl: https://events.{{domain}}
volumes:
- {{containerPrefix}}-gancio-data:/home/node/data
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:13120/', r => process.exit(r.statusCode < 400 ? 0 : 1)).on('error', () => process.exit(1))"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
{{/if}}
{{#if enableChat}}
nats-rocketchat:
image: nats:2.11-alpine
container_name: {{containerPrefix}}-nats
restart: unless-stopped
command: --http_port 8222
networks:
- {{networkName}}
mongodb-rocketchat:
image: mongo:6.0
container_name: {{containerPrefix}}-mongodb
restart: unless-stopped
command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
volumes:
- {{containerPrefix}}-mongodb-data:/data/db
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "mongosh", "--quiet", "--eval", "try { rs.status().ok } catch(e) { rs.initiate({_id:'rs0',members:[{_id:0,host:'{{containerPrefix}}-mongodb:27017'}]}).ok }"]
interval: 10s
timeout: 10s
retries: 10
start_period: 30s
rocketchat:
image: rocketchat/rocket.chat:7.9.7
container_name: {{containerPrefix}}-rocketchat
restart: unless-stopped
depends_on:
mongodb-rocketchat:
condition: service_healthy
nats-rocketchat:
condition: service_started
environment:
ROOT_URL: http://chat.{{domain}}
MONGO_URL: mongodb://{{containerPrefix}}-mongodb:27017/rocketchat?replicaSet=rs0
MONGO_OPLOG_URL: mongodb://{{containerPrefix}}-mongodb:27017/local?replicaSet=rs0
TRANSPORTER: monolith+nats://{{containerPrefix}}-nats:4222
PORT: "3000"
ADMIN_USERNAME: rcadmin
ADMIN_NAME: Admin
ADMIN_EMAIL: {{secrets.adminEmail}}
ADMIN_PASS: {{secrets.nocodbAdminPassword}}
CREATE_TOKENS_FOR_USERS: "true"
OVERWRITE_SETTING_Iframe_Integration_send_enable: "true"
OVERWRITE_SETTING_Iframe_Integration_receive_enable: "true"
OVERWRITE_SETTING_Iframe_Integration_receive_origin: http://app.{{domain}},https://app.{{domain}}
volumes:
- {{containerPrefix}}-rocketchat-uploads:/app/uploads
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://127.0.0.1:3000/api/info"]
interval: 30s
timeout: 10s
retries: 10
start_period: 90s
{{/if}}
# ─── Pangolin Tunnel ───────────────────────────────────────
newt:
image: fosrl/newt:latest
container_name: {{containerPrefix}}-newt
restart: unless-stopped
depends_on:
- nginx
environment:
PANGOLIN_ENDPOINT: ${PANGOLIN_ENDPOINT}
NEWT_ID: ${PANGOLIN_NEWT_ID}
NEWT_SECRET: ${PANGOLIN_NEWT_SECRET}
networks:
- {{networkName}}
# ─── Always-On Utilities ──────────────────────────────────
mini-qr:
image: ghcr.io/lyqht/mini-qr:latest
container_name: {{containerPrefix}}-mini-qr
restart: unless-stopped
networks:
- {{networkName}}
mkdocs-site-server:
image: nginx:alpine
container_name: {{containerPrefix}}-mkdocs-site
restart: unless-stopped
volumes:
- ./mkdocs/site:/usr/share/nginx/html:ro
networks:
- {{networkName}}
{{#if enableDevTools}}
# ─── Dev Tools ────────────────────────────────────────────
code-server:
image: lscr.io/linuxserver/code-server:latest
container_name: {{containerPrefix}}-code-server
restart: unless-stopped
environment:
PASSWORD: {{secrets.nocodbAdminPassword}}
SUDO_PASSWORD: {{secrets.nocodbAdminPassword}}
volumes:
- .:/config/workspace:rw
networks:
- {{networkName}}
gitea:
image: gitea/gitea:latest
container_name: {{containerPrefix}}-gitea
restart: unless-stopped
depends_on:
v2-postgres:
condition: service_healthy
environment:
GITEA__database__DB_TYPE: postgres
GITEA__database__HOST: {{containerPrefix}}-postgres:5432
GITEA__database__NAME: gitea
GITEA__database__USER: changemaker
GITEA__database__PASSWD: {{secrets.postgresPassword}}
GITEA__server__ROOT_URL: https://git.{{domain}}
GITEA__server__DOMAIN: git.{{domain}}
GITEA__security__INSTALL_LOCK: "true"
volumes:
- {{containerPrefix}}-gitea-data:/data
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "curl", "-fsSL", "http://localhost:3000/api/healthz"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
n8n:
image: n8nio/n8n:latest
container_name: {{containerPrefix}}-n8n
restart: unless-stopped
environment:
N8N_ENCRYPTION_KEY: {{secrets.n8nEncryptionKey}}
WEBHOOK_URL: https://n8n.{{domain}}
N8N_HOST: n8n.{{domain}}
N8N_PROTOCOL: https
volumes:
- {{containerPrefix}}-n8n-data:/home/node/.n8n
networks:
- {{networkName}}
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:5678/healthz"]
interval: 30s
timeout: 5s
retries: 3
homepage:
image: ghcr.io/gethomepage/homepage:latest
container_name: {{containerPrefix}}-homepage
restart: unless-stopped
volumes:
- {{containerPrefix}}-homepage-data:/app/config
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- {{networkName}}
excalidraw:
image: excalidraw/excalidraw:latest
container_name: {{containerPrefix}}-excalidraw
restart: unless-stopped
networks:
- {{networkName}}
{{/if}}
{{#if enableMonitoring}}
# ─── Monitoring Stack ──────────────────────────────────────
prometheus:
image: prom/prometheus:latest
container_name: {{containerPrefix}}-prometheus
restart: unless-stopped
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
volumes:
- ./configs/prometheus:/etc/prometheus:ro
- {{containerPrefix}}-prometheus-data:/prometheus
networks:
- {{networkName}}
grafana:
image: grafana/grafana:latest
container_name: {{containerPrefix}}-grafana
restart: unless-stopped
environment:
GF_SECURITY_ADMIN_PASSWORD: {{secrets.grafanaAdminPassword}}
GF_USERS_ALLOW_SIGN_UP: "false"
GF_SERVER_ROOT_URL: https://grafana.{{domain}}
GF_SECURITY_ALLOW_EMBEDDING: "true"
volumes:
- {{containerPrefix}}-grafana-data:/var/lib/grafana
- ./configs/grafana:/etc/grafana/provisioning
depends_on:
- prometheus
networks:
- {{networkName}}
alertmanager:
image: prom/alertmanager:latest
container_name: {{containerPrefix}}-alertmanager
restart: unless-stopped
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--storage.path=/alertmanager'
volumes:
- ./configs/alertmanager:/etc/alertmanager:ro
- {{containerPrefix}}-alertmanager-data:/alertmanager
networks:
- {{networkName}}
{{/if}}
# ─── Volumes ──────────────────────────────────────────────
volumes:
{{containerPrefix}}-postgres-data:
{{containerPrefix}}-redis-data:
{{containerPrefix}}-nocodb-data:
{{#if enableListmonk}}
{{containerPrefix}}-listmonk-data:
{{/if}}
{{#if enableGancio}}
{{containerPrefix}}-gancio-data:
{{/if}}
{{#if enableChat}}
{{containerPrefix}}-mongodb-data:
{{containerPrefix}}-rocketchat-uploads:
{{/if}}
{{#if enableDevTools}}
{{containerPrefix}}-gitea-data:
{{containerPrefix}}-n8n-data:
{{containerPrefix}}-homepage-data:
{{/if}}
{{#if enableMonitoring}}
{{containerPrefix}}-prometheus-data:
{{containerPrefix}}-grafana-data:
{{containerPrefix}}-alertmanager-data:
{{/if}}
# ─── Networks ─────────────────────────────────────────────
networks:
{{networkName}}:
driver: bridge
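The `{{#if enable*}}` blocks gate both the services and their named volumes on the same feature flags. As a rough illustration of that mapping (flag and volume names taken from the template above; the helper itself is hypothetical — the real rendering is done by the control panel's template engine, not this code):

```python
# Sketch: which named volumes each feature flag contributes, mirroring the
# {{#if enable*}} blocks in the volumes section above. Illustrative only.
BASE_VOLUMES = ["postgres-data", "redis-data", "nocodb-data"]

FLAG_VOLUMES = {
    "enableListmonk": ["listmonk-data"],
    "enableGancio": ["gancio-data"],
    "enableChat": ["mongodb-data", "rocketchat-uploads"],
    "enableDevTools": ["gitea-data", "n8n-data", "homepage-data"],
    "enableMonitoring": ["prometheus-data", "grafana-data", "alertmanager-data"],
}

def volumes_for(container_prefix: str, flags: dict) -> list[str]:
    """Return the fully prefixed volume names for a given flag set."""
    names = list(BASE_VOLUMES)
    for flag, vols in FLAG_VOLUMES.items():
        if flags.get(flag):
            names.extend(vols)
    return [f"{container_prefix}-{v}" for v in names]
```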


@ -0,0 +1,223 @@
# ============================================================
# Changemaker Lite — Instance: {{name}}
# Generated by CCP on {{now}}
# ============================================================
# Core
NODE_ENV=production
DOMAIN={{domain}}
USER_ID=1000
GROUP_ID=1000
DOCKER_GROUP_ID=984
# V2 PostgreSQL
V2_POSTGRES_USER=changemaker
V2_POSTGRES_PASSWORD={{secrets.postgresPassword}}
V2_POSTGRES_DB=changemaker_v2
V2_POSTGRES_PORT={{ports.postgres}}
DATABASE_URL=postgresql://changemaker:{{secrets.postgresPassword}}@{{containerPrefix}}-postgres:5432/changemaker_v2
# Redis
REDIS_PASSWORD={{secrets.redisPassword}}
REDIS_URL=redis://:{{secrets.redisPassword}}@{{containerPrefix}}-redis:6379
# JWT Auth
JWT_ACCESS_SECRET={{secrets.jwtAccessSecret}}
JWT_REFRESH_SECRET={{secrets.jwtRefreshSecret}}
JWT_ACCESS_EXPIRY=15m
JWT_REFRESH_EXPIRY=7d
# Encryption
ENCRYPTION_KEY={{secrets.encryptionKey}}
# Initial Admin
INITIAL_ADMIN_EMAIL={{secrets.adminEmail}}
INITIAL_ADMIN_PASSWORD={{secrets.initialAdminPassword}}
# API
API_PORT=4000
PORT=4000
API_URL=https://api.{{domain}}
CORS_ORIGINS=https://app.{{domain}},http://localhost:{{ports.admin}},http://localhost
ADMIN_URL=https://app.{{domain}}
# Admin GUI
ADMIN_PORT=3000
# Nginx
NGINX_HTTP_PORT={{ports.nginx}}
NGINX_HTTPS_PORT=443
# SMTP / Email
{{#if emailTestMode}}
SMTP_HOST={{containerPrefix}}-mailhog
SMTP_PORT=1025
SMTP_USER=
SMTP_PASS=
EMAIL_TEST_MODE=true
{{else}}
SMTP_HOST={{smtpHost}}
SMTP_PORT={{smtpPort}}
SMTP_USER={{smtpUser}}
SMTP_PASS=
EMAIL_TEST_MODE=false
{{/if}}
SMTP_FROM={{smtpFrom}}
SMTP_FROM_NAME={{name}}
TEST_EMAIL_RECIPIENT={{secrets.adminEmail}}
# NocoDB
NOCODB_V2_PORT=8080
NOCODB_URL=http://{{containerPrefix}}-nocodb:8080
NC_ADMIN_EMAIL={{secrets.adminEmail}}
NC_ADMIN_PASSWORD={{secrets.nocodbAdminPassword}}
# Listmonk
{{#if enableListmonk}}
LISTMONK_SYNC_ENABLED=true
LISTMONK_URL=http://{{containerPrefix}}-listmonk:9000
{{else}}
LISTMONK_SYNC_ENABLED=false
LISTMONK_URL=
{{/if}}
LISTMONK_PORT=9000
LISTMONK_DB_USER=listmonk
LISTMONK_DB_PASSWORD={{secrets.listmonkAdminPassword}}
LISTMONK_DB_NAME=listmonk
LISTMONK_WEB_ADMIN_USER=admin
LISTMONK_WEB_ADMIN_PASSWORD={{secrets.listmonkAdminPassword}}
LISTMONK_API_USER=v2-api
LISTMONK_API_TOKEN={{secrets.listmonkApiToken}}
LISTMONK_ADMIN_USER=v2-api
LISTMONK_ADMIN_PASSWORD={{secrets.listmonkApiToken}}
LISTMONK_PROXY_PORT=9002
# Media
{{#if enableMedia}}
ENABLE_MEDIA_FEATURES=true
{{else}}
ENABLE_MEDIA_FEATURES=false
{{/if}}
MEDIA_API_PORT=4100
MEDIA_ROOT=/media/local
MEDIA_UPLOADS=/media/uploads
MAX_UPLOAD_SIZE_GB=10
# NAR Data
NAR_DATA_DIR=/data
# Platform Service URLs (used for health checks)
MINI_QR_URL=http://{{containerPrefix}}-mini-qr:8080
EXCALIDRAW_URL=http://{{containerPrefix}}-excalidraw:80
HOMEPAGE_URL=http://{{containerPrefix}}-homepage:3000
VAULTWARDEN_URL=http://{{containerPrefix}}-vaultwarden:80
# Geocoding
MAPBOX_API_KEY=
GOOGLE_MAPS_API_KEY=
GOOGLE_MAPS_ENABLED=false
# Represent API
REPRESENT_API_URL=https://represent.opennorth.ca
# Pangolin Tunnel
PANGOLIN_API_URL=
PANGOLIN_API_KEY=
PANGOLIN_ORG_ID=
PANGOLIN_SITE_ID=
PANGOLIN_ENDPOINT=
PANGOLIN_NEWT_ID=
PANGOLIN_NEWT_SECRET=
# Gancio
{{#if enableGancio}}
GANCIO_SYNC_ENABLED=true
GANCIO_URL=http://{{containerPrefix}}-gancio:13120
{{else}}
GANCIO_SYNC_ENABLED=false
GANCIO_URL=
{{/if}}
GANCIO_BASE_URL=https://events.{{domain}}
GANCIO_ADMIN_USER=admin
GANCIO_ADMIN_PASSWORD={{secrets.gancioAdminPassword}}
GANCIO_PORT=8092
# Chat (Rocket.Chat)
{{#if enableChat}}
ENABLE_CHAT=true
ROCKETCHAT_URL=http://{{containerPrefix}}-rocketchat:3000
ROCKETCHAT_ADMIN_USER=rcadmin
ROCKETCHAT_ADMIN_PASSWORD={{secrets.nocodbAdminPassword}}
{{else}}
ENABLE_CHAT=false
ROCKETCHAT_URL=
ROCKETCHAT_ADMIN_USER=
ROCKETCHAT_ADMIN_PASSWORD=
{{/if}}
# Monitoring
GRAFANA_ADMIN_PASSWORD={{secrets.grafanaAdminPassword}}
GRAFANA_ROOT_URL=https://grafana.{{domain}}
PROMETHEUS_PORT=9090
GRAFANA_PORT=3000
# MkDocs / Code Server
MKDOCS_PORT={{math ports.embed "+" 8}}
CODE_SERVER_PORT={{math ports.embed "+" 7}}
BASE_DOMAIN=https://{{domain}}
# Gitea
GITEA_URL=http://{{containerPrefix}}-gitea:3000
GITEA_DB_PASSWD={{secrets.giteaAdminPassword}}
GITEA_DB_ROOT_PASSWORD={{secrets.giteaAdminPassword}}
GITEA_ROOT_URL=https://git.{{domain}}
GITEA_DOMAIN=git.{{domain}}
# n8n
N8N_HOST=n8n.{{domain}}
N8N_URL=http://{{containerPrefix}}-n8n:5678
N8N_ENCRYPTION_KEY={{secrets.n8nEncryptionKey}}
N8N_USER_EMAIL={{secrets.adminEmail}}
N8N_USER_PASSWORD={{secrets.nocodbAdminPassword}}
# MailHog
MAILHOG_URL=http://{{containerPrefix}}-mailhog:8025
MAILHOG_SMTP_PORT=1025
MAILHOG_WEB_PORT=8025
# Dev Tools
{{#if enableDevTools}}
ENABLE_DEV_TOOLS=true
{{else}}
ENABLE_DEV_TOOLS=false
{{/if}}
# Payments
{{#if enablePayments}}
ENABLE_PAYMENTS=true
{{else}}
ENABLE_PAYMENTS=false
{{/if}}
# Vite (admin build)
VITE_API_URL=http://{{containerPrefix}}-api:4000
VITE_MKDOCS_URL=http://{{containerPrefix}}-mkdocs:8000
{{#if enableMedia}}
VITE_MEDIA_API_URL=http://{{containerPrefix}}-media-api:4100
{{/if}}
# Embed proxy ports (nginx proxy for iframe embedding in admin GUI)
NOCODB_EMBED_PORT={{math ports.embed "+" 0}}
N8N_EMBED_PORT={{math ports.embed "+" 1}}
GITEA_EMBED_PORT={{math ports.embed "+" 2}}
MAILHOG_EMBED_PORT={{math ports.embed "+" 3}}
MINI_QR_EMBED_PORT={{math ports.embed "+" 4}}
EXCALIDRAW_EMBED_PORT={{math ports.embed "+" 5}}
HOMEPAGE_EMBED_PORT={{math ports.embed "+" 6}}
CODE_SERVER_EMBED_PORT={{math ports.embed "+" 7}}
MKDOCS_EMBED_PORT={{math ports.embed "+" 8}}
VAULTWARDEN_EMBED_PORT={{math ports.embed "+" 9}}
ROCKETCHAT_EMBED_PORT={{math ports.embed "+" 10}}
GANCIO_EMBED_PORT={{math ports.embed "+" 11}}
GRAFANA_EMBED_PORT={{math ports.embed "+" 12}}
LISTMONK_EMBED_PORT={{math ports.embed "+" 13}}
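Each embed port above is a fixed offset from the instance's base embed port, allocated via the `{{math ports.embed "+" N}}` expressions. A minimal sketch of that scheme, assuming the `math` helper simply adds integers (names copied from the env vars above; the function itself is illustrative):

```python
# Sketch of the embed-port allocation: fixed offsets from the per-instance
# base port, matching the {{math ports.embed "+" N}} expressions above.
EMBED_OFFSETS = {
    "NOCODB_EMBED_PORT": 0,
    "N8N_EMBED_PORT": 1,
    "GITEA_EMBED_PORT": 2,
    "MAILHOG_EMBED_PORT": 3,
    "MINI_QR_EMBED_PORT": 4,
    "EXCALIDRAW_EMBED_PORT": 5,
    "HOMEPAGE_EMBED_PORT": 6,
    "CODE_SERVER_EMBED_PORT": 7,
    "MKDOCS_EMBED_PORT": 8,
    "VAULTWARDEN_EMBED_PORT": 9,
    "ROCKETCHAT_EMBED_PORT": 10,
    "GANCIO_EMBED_PORT": 11,
    "GRAFANA_EMBED_PORT": 12,
    "LISTMONK_EMBED_PORT": 13,
}

def embed_ports(base: int) -> dict[str, int]:
    """Map each embed env var to base + offset."""
    return {name: base + off for name, off in EMBED_OFFSETS.items()}
```

Because the offsets are contiguous, a single 14-port window per instance is enough, and the host-side values line up one-to-one with the internal listeners 8881-8894 in the embed proxy config.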


@ -0,0 +1,2 @@
# API-specific configuration placeholder
# (Primary API routing is in default.conf)


@ -0,0 +1,81 @@
# Changemaker Lite Nginx — Instance: {{name}}
# Routes all traffic through single entry point
server {
listen 80 default_server;
server_name localhost _;
    add_header X-Frame-Options "SAMEORIGIN" always;
    # Note: declaring add_header at this level stops inheritance of the
    # http-level security headers for this server block (nginx semantics).
# Admin GUI + Public pages (default)
location / {
set $upstream_admin http://{{containerPrefix}}-admin:3000;
proxy_pass $upstream_admin;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
{{#if enableMedia}}
# Media API admin routes (must come BEFORE /api/ for longest prefix match)
# Rewrites /api/media/* to /api/* on media-api
location /api/media/ {
set $upstream_media http://{{containerPrefix}}-media-api:4100;
rewrite ^/api/media/(.*) /api/$1 break;
proxy_pass $upstream_media;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Large upload support
client_max_body_size 10G;
proxy_read_timeout 3600s;
proxy_connect_timeout 75s;
proxy_request_buffering off;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Media API routes rewrite (matches Vite dev proxy behavior)
# Rewrites /media/* to /api/* on media-api (port 4100)
location /media/ {
set $upstream_media_default http://{{containerPrefix}}-media-api:4100;
rewrite ^/media/(.*) /api/$1 break;
proxy_pass $upstream_media_default;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Large upload support
client_max_body_size 10G;
proxy_read_timeout 3600s;
proxy_connect_timeout 75s;
proxy_request_buffering off;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
{{/if}}
# Express API
location /api/ {
set $upstream_api http://{{containerPrefix}}-api:4000;
proxy_pass $upstream_api;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600s;
}
}
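The two media location blocks map `/api/media/*` and `/media/*` onto `/api/*` of the media API; the `/api/media/` block wins for its paths because nginx picks the longest matching prefix location. The equivalent path rewriting, expressed as plain regex substitutions (patterns copied from the `rewrite` directives; this helper is illustrative, not part of the deployment):

```python
import re

# The same path rewrites the two media location blocks perform, applied in
# longest-prefix-first order, as nginx location matching would select them.
def rewrite_media_path(path: str) -> str:
    path = re.sub(r"^/api/media/(.*)", r"/api/\1", path)
    path = re.sub(r"^/media/(.*)", r"/api/\1", path)
    return path
```

Paths already under `/api/` (other than `/api/media/`) pass through untouched and fall to the Express API block.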


@ -0,0 +1,243 @@
# Changemaker Lite — Instance: {{name}}
# Embed proxy ports for iframe embedding in admin GUI.
# These strip X-Frame-Options and CSP so services can be iframed.
# Internal ports 8881-8894 are mapped to host ports via docker-compose.
# NocoDB embed proxy (internal 8881)
server {
listen 8881;
location / {
set $upstream_nocodb http://{{containerPrefix}}-nocodb:8080;
proxy_pass $upstream_nocodb;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# n8n embed proxy (internal 8882)
server {
listen 8882;
location / {
set $upstream_n8n http://{{containerPrefix}}-n8n:5678;
proxy_pass $upstream_n8n;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# Gitea embed proxy (internal 8883)
server {
listen 8883;
client_max_body_size 2048M;
location / {
set $upstream_gitea http://{{containerPrefix}}-gitea:3000;
proxy_pass $upstream_gitea;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# MailHog embed proxy (internal 8884)
server {
listen 8884;
location / {
set $upstream_mailhog http://{{containerPrefix}}-mailhog:8025;
proxy_pass $upstream_mailhog;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# Mini QR embed proxy (internal 8885)
server {
listen 8885;
location / {
set $upstream_miniqr http://{{containerPrefix}}-mini-qr:8080;
proxy_pass $upstream_miniqr;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Excalidraw embed proxy (internal 8886)
server {
listen 8886;
location / {
set $upstream_excalidraw http://{{containerPrefix}}-excalidraw:80;
proxy_pass $upstream_excalidraw;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
}
# Homepage embed proxy (internal 8887)
server {
listen 8887;
location / {
set $upstream_homepage http://{{containerPrefix}}-homepage:3000;
proxy_pass $upstream_homepage;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Code Server embed proxy (internal 8888)
server {
listen 8888;
location / {
set $upstream_code http://{{containerPrefix}}-code-server:8080;
proxy_pass $upstream_code;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# MkDocs embed proxy (internal 8889)
server {
listen 8889;
location / {
set $upstream_mkdocs http://{{containerPrefix}}-mkdocs:8000;
proxy_pass $upstream_mkdocs;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# Vaultwarden embed proxy (internal 8890)
server {
listen 8890;
location / {
set $upstream_vaultwarden http://{{containerPrefix}}-vaultwarden:80;
proxy_pass $upstream_vaultwarden;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
}
}
# Rocket.Chat embed proxy (internal 8891)
{{#if enableChat}}
server {
listen 8891;
location / {
set $upstream_rocketchat http://{{containerPrefix}}-rocketchat:3000;
proxy_pass $upstream_rocketchat;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
client_max_body_size 100m;
}
}
{{/if}}
# Gancio embed proxy (internal 8892)
{{#if enableGancio}}
server {
listen 8892;
location / {
set $upstream_gancio http://{{containerPrefix}}-gancio:13120;
proxy_pass $upstream_gancio;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
{{/if}}
# Grafana embed proxy (internal 8893)
{{#if enableMonitoring}}
server {
listen 8893;
location / {
set $upstream_grafana http://{{containerPrefix}}-grafana:3000;
proxy_pass $upstream_grafana;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
{{/if}}
# Listmonk embed proxy (internal 8894)
{{#if enableListmonk}}
server {
listen 8894;
location / {
set $upstream_listmonk http://{{containerPrefix}}-listmonk:9000;
proxy_pass $upstream_listmonk;
proxy_hide_header X-Frame-Options;
proxy_hide_header Content-Security-Policy;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
{{/if}}
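The fourteen server blocks above differ only in listen port, upstream service, and upstream port, so they could be emitted from a single template. A sketch of such a generator (not the control panel's actual generator; a trimmed header set is shown for brevity):

```python
# Sketch: emit one embed-proxy server block per (service, port, listen) tuple,
# matching the repeated pattern above. Header list abbreviated for brevity.
BLOCK = """server {{
    listen {listen};
    location / {{
        set $upstream http://{prefix}-{service}:{port};
        proxy_pass $upstream;
        proxy_hide_header X-Frame-Options;
        proxy_hide_header Content-Security-Policy;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }}
}}"""

def embed_block(prefix: str, service: str, port: int, listen: int) -> str:
    """Render one embed-proxy server block."""
    return BLOCK.format(prefix=prefix, service=service, port=port, listen=listen)
```

Keeping the blocks expanded in the template, as above, trades repetition for the ability to vary individual blocks (WebSocket upgrades, `client_max_body_size`, feature-flag guards) per service.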


@ -0,0 +1,47 @@
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 50m;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;
    # Security headers (applied globally; X-Frame-Options is set per server block)
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Permissions-Policy "geolocation=(self), microphone=(), camera=()" always;
# Docker internal DNS enables runtime resolution so nginx starts
# even when optional services are not running
resolver 127.0.0.11 valid=30s;
include /etc/nginx/conf.d/*.conf;
}