changemaker.lite/api/scripts/backfill-hls.ts
bunker-admin 21208b58c7 feat(media): HLS adaptive bitrate streaming with MP4 fallback
Replaces single-MP4 + range-request streaming with HLS multi-bitrate
segments to fix video stutter through the Newt tunnel. Range-request
bursts were the root cause; HLS chunks are small and tunnel-friendly,
plus the player adapts bitrate to bandwidth.

Backend
- New BullMQ `hls-transcode` queue (in-process worker, concurrency 1)
- FFmpeg single-pass transcode → 360p/720p/1080p variants with aligned
  keyframes; output at /media/local/hls/{id}/master.m3u8
- New /api/{videos|public}/{id}/hls/* routes serving signed manifests
  and segments (URLs emitted as /media/* so nginx rewrites to media-api)
- Prisma: HlsStatus enum + 6 fields on Video + index, migration
- Upload + yt-dlp fetch paths enqueue transcode jobs
- ENABLE_HLS_TRANSCODE flag (default off; gates enqueue only)
- Backfill script: `npm run backfill:hls`
- media-api bumped to 4 CPU / 2G for FFmpeg headroom
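The "single-pass transcode → 360p/720p/1080p variants with aligned keyframes" step above maps naturally onto ffmpeg's `split` filter plus the HLS muxer's `-var_stream_map`. A minimal sketch of how the worker might assemble the invocation — the function name, bitrate ladder, and GOP length here are illustrative, not the values this commit ships:

```typescript
// Sketch: one ffmpeg pass producing three variants plus a master playlist.
// Fixed GOP + disabled scene-cut keyframes keep segment boundaries aligned
// across variants, which is what makes mid-play bitrate switches seamless.
function buildHlsArgs(input: string, outDir: string): string[] {
  return [
    '-i', input,
    // Decode once, split, and scale each branch (-2 preserves aspect ratio).
    '-filter_complex',
    '[0:v]split=3[v1][v2][v3];' +
      '[v1]scale=-2:360[v360];[v2]scale=-2:720[v720];[v3]scale=-2:1080[v1080]',
    '-map', '[v360]', '-map', '0:a:0',
    '-map', '[v720]', '-map', '0:a:0',
    '-map', '[v1080]', '-map', '0:a:0',
    '-c:v', 'libx264', '-g', '48', '-keyint_min', '48', '-sc_threshold', '0',
    '-b:v:0', '800k', '-b:v:1', '2800k', '-b:v:2', '5000k',
    '-c:a', 'aac', '-b:a', '128k',
    '-f', 'hls', '-hls_time', '4', '-hls_playlist_type', 'vod',
    '-hls_segment_filename', `${outDir}/%v/seg_%03d.ts`,
    '-master_pl_name', 'master.m3u8',
    '-var_stream_map', 'v:0,a:0 v:1,a:1 v:2,a:2',
    `${outDir}/%v/index.m3u8`,
  ];
}
```

`%v` expands to the variant index, so each rendition gets its own playlist and segment directory under `/media/local/hls/{id}/`, with `master.m3u8` on top.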

Frontend
- New useHls hook: lazy-imports hls.js (kept out of main bundle),
  native HLS on Safari/iOS, gives up after 2 NETWORK_ERRORs so MP4
  fallback engages cleanly
- VideoPlayer, VideoViewerModal, ShortsPage, ProductDetailPage now
  prefer HLS when ready; MP4 fallback is automatic
- ShortsPage prefetches next-3 master manifests via <link rel="prefetch">
- PublicVideoCard hover preview stays MP4 (avoids hls.js init latency)
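hls.js tags fatal errors with an `ErrorTypes` value (`'networkError'`, `'mediaError'`, …), so the "gives up after 2 NETWORK_ERRORs" rule above reduces to a small counter. A sketch of that decision in isolation — the class and method names are illustrative, not the hook's real internals:

```typescript
// Sketch of the give-up rule: after two fatal network errors, stop retrying
// so the <video> element's MP4 src can take over cleanly.
type HlsErrorType = 'networkError' | 'mediaError' | 'otherError';

class HlsFallbackGate {
  private networkErrors = 0;
  constructor(private readonly maxNetworkErrors = 2) {}

  /** Decide how to handle a *fatal* hls.js error. */
  onFatalError(type: HlsErrorType): 'retry' | 'fallback' {
    if (type === 'networkError') {
      this.networkErrors++;
      return this.networkErrors >= this.maxNetworkErrors ? 'fallback' : 'retry';
    }
    // Media errors are often recoverable in place (hls.recoverMediaError());
    // anything else is treated as unrecoverable.
    return type === 'mediaError' ? 'retry' : 'fallback';
  }
}
```

Capping only *network* errors matters through a tunnel: a transient segment fetch failure gets one retry, but a link that is genuinely dropping requests hands off to the plain MP4 path instead of stalling the player.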

Bunker Admin
2026-04-30 19:03:29 -06:00


/**
 * Backfill HLS transcoding for existing videos.
 *
 * Finds every Video record that has never been queued for HLS (hlsStatus is
 * NULL) and is otherwise transcodable, then enqueues a transcode job for
 * each. Idempotent — re-runs only pick up still-NULL rows. Skips invalid or
 * zero-duration videos.
 *
 * Bypasses the ENABLE_HLS_TRANSCODE flag (calls forceSubmitTranscode)
 * because the flag is meant to gate the *upload-time* enqueue; once an
 * operator runs this script they're explicitly asking for transcoding.
 *
 * Usage:
 *   docker compose exec api tsx scripts/backfill-hls.ts
 *   # or after building:
 *   docker compose exec api node dist/scripts/backfill-hls.js
 */
import { prisma } from '../src/config/database';
import { hlsTranscodeQueueService } from '../src/services/hls-transcode-queue.service';
import { logger } from '../src/utils/logger';

async function main() {
  const candidates = await prisma.video.findMany({
    where: {
      hlsStatus: null,
      isValid: true,
      durationSeconds: { gt: 0 },
      width: { gt: 0 },
      height: { gt: 0 },
    },
    select: { id: true, filename: true, durationSeconds: true },
    orderBy: { id: 'asc' },
  });

  if (candidates.length === 0) {
    logger.info('[backfill-hls] No videos require HLS transcoding.');
  } else {
    logger.info(`[backfill-hls] Enqueueing ${candidates.length} video(s) for HLS transcoding`);
    let enqueued = 0;
    for (const video of candidates) {
      try {
        const jobId = await hlsTranscodeQueueService.forceSubmitTranscode(video.id);
        enqueued++;
        logger.info(`[backfill-hls] Enqueued video ${video.id} (${video.filename}) → job ${jobId}`);
      } catch (err) {
        logger.error(
          `[backfill-hls] Failed to enqueue video ${video.id}: ${err instanceof Error ? err.message : String(err)}`,
        );
      }
    }
    logger.info(
      `[backfill-hls] Done. ${enqueued}/${candidates.length} jobs enqueued. Worker concurrency is 1, so total wall time depends on per-video transcode duration (~2 min per 1080p video).`,
    );
  }

  // Give BullMQ a moment to flush, then release connections and exit cleanly
  // (shared cleanup path, so the no-candidates case disconnects too).
  await new Promise((r) => setTimeout(r, 500));
  await hlsTranscodeQueueService.close();
  await prisma.$disconnect();
  process.exit(0);
}

main().catch(async (err) => {
  logger.error(`[backfill-hls] Fatal: ${err instanceof Error ? err.stack : String(err)}`);
  await prisma.$disconnect().catch(() => {});
  process.exit(1);
});