Everything we’ve built — the ML scoring engine, the chat agent, the 18 cron jobs, the real-time alert pipeline — deploys as a single Next.js application on AWS Amplify. This chapter covers how the build works, why certain decisions were made, and the configuration that makes it all fit together.
## The Build Spec

`amplify.yml` defines the build pipeline:
```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci --legacy-peer-deps
    build:
      commands:
        - |
          for var in DB_HOST DB_PORT DB_NAME DB_USER DB_PASSWORD DB_SSL JWT_SECRET \
            HUBSPOT_CLIENT_ID HUBSPOT_CLIENT_SECRET HUBSPOT_REDIRECT_URI \
            GROQ_API_KEY GROQ_MODEL GROQ_MODEL_GOLDILOCKS \
            SERPER_API_KEY REDIS_URL UPSTASH_REDIS_REST_URL \
            UPSTASH_REDIS_REST_TOKEN CRON_SECRET ...; do
            val=$(printenv "$var" 2>/dev/null || echo "")
            if [ -n "$val" ]; then echo "$var=\"$val\"" >> .env; fi
          done
        - rm -rf .next/cache
        - npm run build
  artifacts:
    baseDirectory: .next
    files: ['**/*']
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*
```

### Why Write Env Vars to .env?
Amplify sets environment variables in the AWS Console, but Next.js needs them in a `.env` file at build time for `process.env` to work in API routes. The shell loop iterates over every expected variable and writes it with shell quoting:

```bash
echo "$var=\"$val\"" >> .env
```

The double quoting is important: values containing `#`, `$`, `!`, or spaces would break without it. A database password like `p@ss#word!` would be silently truncated at the `#` without quotes.
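The truncation is easy to see with a toy parser that mimics common `.env` semantics (illustrative only; this is not the parser Next.js or Amplify actually uses): an unquoted `#` starts a comment and cuts the value short, while a double-quoted value survives intact.

```javascript
// Toy .env parser (illustrative; not the real Next.js/Amplify parser).
// Unquoted values are cut at the first '#', mimicking comment handling;
// double-quoted values keep everything between the quotes.
function parseEnvLine(line) {
  const eq = line.indexOf('=');
  const key = line.slice(0, eq);
  const val = line.slice(eq + 1);
  if (val.length >= 2 && val.startsWith('"') && val.endsWith('"')) {
    return [key, val.slice(1, -1)]; // quoted: keep everything inside
  }
  return [key, val.split('#')[0].trim()]; // unquoted: truncated at '#'
}

console.log(parseEnvLine('DB_PASSWORD=p@ss#word!')[1]);   // 'p@ss'  (truncated)
console.log(parseEnvLine('DB_PASSWORD="p@ss#word!"')[1]); // 'p@ss#word!' (intact)
```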
### Legacy Peer Deps

```bash
npm ci --legacy-peer-deps
```

This flag relaxes npm's strict peer dependency resolution, which would otherwise fail on conflicts between MUI v6, React 19, and various chart/UI libraries. Without it, `npm ci` exits with peer dependency resolution errors.
### Cache Strategy

```yaml
cache:
  paths:
    - node_modules/**/*
    - .next/cache/**/*
```

Both `node_modules` and `.next/cache` are cached between builds. The build also runs `rm -rf .next/cache` before `npm run build`: this clears stale build artifacts, while the cache config ensures the cache directory structure persists for future builds.
## The Next.js Configuration

```javascript
// next.config.js
const isMarketingOnly = process.env.NEXT_PUBLIC_MARKETING_ONLY === 'true';

const nextConfig = {
  typescript: { ignoreBuildErrors: isMarketingOnly },
  eslint: { ignoreDuringBuilds: true },
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.resolve.fallback = {
        fs: false, dns: false, net: false, tls: false,
        pg: false, 'pg-native': false,
      };
    }
    return config;
  },
};

module.exports = nextConfig;
```

### The Webpack Fallback Problem
The fallback block is one of the most important parts of the entire config, and it deserves a detailed explanation.
Astrelo uses barrel exports (`index.ts` files) that re-export both server-side services and client-side components from the same directory:

```typescript
// src/features/alerts/index.ts
export { AlertFeed } from './components/AlertFeed';      // Client component
export { evaluateAlertJob } from '../../domain/alerts';  // Server service
```

The server service imports `pg` (the PostgreSQL client), which in turn imports Node.js builtins like `fs`, `dns`, `net`, and `tls`. When Next.js bundles the client-side code, it follows these imports and tries to include `pg` in the browser bundle, which fails because browsers have no `fs`.
The webpack fallback tells the bundler: “When you encounter fs, dns, net, tls, pg, or pg-native in a client bundle, replace them with false (empty module).” The actual server code that uses these modules runs in API routes (server-side only), so the fallback never affects runtime behavior — it only fixes the build.
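For reference, the other common fix is to split the barrel so client code never imports the server entry point at all. A hypothetical layout is sketched below (Astrelo instead keeps the single barrel and relies on the fallbacks):

```js
// index.client.js: only browser-safe exports
export { AlertFeed } from './components/AlertFeed';

// index.server.js: imported exclusively from API routes and server jobs
export { evaluateAlertJob } from '../../domain/alerts';
```

The fallback approach trades this file discipline for a one-time webpack config change, which suits a codebase that already has many mixed barrels.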
### Marketing-Only Mode

```javascript
typescript: { ignoreBuildErrors: isMarketingOnly },
```

The NEXT_PUBLIC_MARKETING_ONLY flag allows deploying a stripped-down marketing site without the full app. When set to `true`, TypeScript errors in the app code (API routes, features, services) are ignored, letting the marketing pages deploy independently.
This is useful for staging: you can deploy the marketing site to a separate Amplify environment while the app is still under development.
## The Migration Runner

Database migrations run separately from the application deploy. The migration runner (`scripts/migrate.js`) is invoked manually or from a CI step:

```bash
node scripts/migrate.js            # Run all pending safe migrations
node scripts/migrate.js --all      # Include destructive (200-205)
node scripts/migrate.js --from 250 # Only migrations >= 250
node scripts/migrate.js --only 293 # Run exactly one migration
node scripts/migrate.js --dry-run  # Preview without executing
```

### How It Works
1. Connect to PostgreSQL
2. Ensure the `schema_migrations` table exists
3. Load all applied migrations from `schema_migrations`
4. Filter candidates: SAFE_MIGRATIONS (206+) minus already-applied
5. For each pending migration:
   a. `BEGIN` transaction
   b. Execute the SQL file
   c. `INSERT INTO schema_migrations (filename, applied_at)`
   d. `COMMIT`
6. On error: `ROLLBACK`, then continue to the next migration

### The "Already Exists" Safety Net
```javascript
if (err.message.includes('already exists') || err.message.includes('duplicate key')) {
  // Schema is already in expected state — mark as applied and continue
  await client.query(
    'INSERT INTO schema_migrations (filename) VALUES ($1) ON CONFLICT DO NOTHING',
    [migration.filename]
  );
}
```

If a migration fails because the table, column, or index already exists, the runner marks it as applied anyway. This handles the case where a migration was partially applied (it created the table but failed before recording itself in `schema_migrations`). On re-run, the runner won't try to create the table again; it simply records that the migration is done.
### Two Migration Categories

**Destructive (200-205):** DROP and RECREATE tables. Only run with the `--all` flag. A 5-second countdown gives you time to cancel:

```
⚠️ DESTRUCTIVE MIGRATIONS DETECTED: 200, 201, 202, 203, 204, 205
These will DROP and RECREATE tables. Data will be lost.
Starting in 5 seconds... (Ctrl+C to cancel)
```

**Safe (206-298):** All additive: CREATE TABLE IF NOT EXISTS, ALTER TABLE ADD COLUMN IF NOT EXISTS, CREATE INDEX IF NOT EXISTS. These are idempotent by design: running them twice is harmless.
## The Monolith Advantage

Astrelo deploys as a monolith. The ML scoring engine, the chat agent, the cron handlers, the React frontend, and the marketing site all live in one Next.js application. This is a deliberate architectural choice:

**Shared code:** The `evaluateAlertJob` function is used by both the webhook handler (an API route) and the cron processor (another API route). In a microservice architecture, this would require a shared library, a message queue, or code duplication. In a monolith, it's just an import.

**Shared database pool:** The pool singleton is imported everywhere. Cron jobs, API routes, and service functions all share the same connection pool. No separate infrastructure for each service.
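The singleton is essentially a lazily created, globally cached instance. A dependency-free sketch (the real module would pass a factory like `() => new Pool({ host: process.env.DB_HOST, ... })` from pg; the `__astreloPool` key here is a made-up name):

```javascript
// Dependency-free sketch of a shared-pool singleton. Caching on globalThis
// keeps one instance across Next.js dev hot reloads, which re-evaluate
// modules but preserve the global object.
function getPool(createPool) {
  if (!globalThis.__astreloPool) {
    globalThis.__astreloPool = createPool();
  }
  return globalThis.__astreloPool;
}

const a = getPool(() => ({ query: async () => [] }));
const b = getPool(() => ({ query: async () => [] }));
console.log(a === b); // true: every importer shares one pool
```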
**Single deploy:** One git push deploys everything. No coordinating multiple services, no version compatibility matrix, no distributed deployment orchestration.

**The tradeoff:** Scaling is all-or-nothing. You can't scale the chat agent independently of the scoring engine. For Astrelo's current scale (hundreds of users, not millions), this is the right tradeoff. The monolith can be decomposed later if scale demands it.
## Environment Variables

The application depends on roughly 30 environment variables:

| Category | Variables |
|---|---|
| Database | DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD, DB_SSL |
| Auth | JWT_SECRET, CRON_SECRET |
| HubSpot | HUBSPOT_CLIENT_ID, HUBSPOT_CLIENT_SECRET, HUBSPOT_REDIRECT_URI |
| Salesforce | SALESFORCE_CLIENT_ID, SALESFORCE_CLIENT_SECRET, SALESFORCE_REDIRECT_URI |
| Slack | SLACK_CLIENT_ID, SLACK_CLIENT_SECRET, SLACK_REDIRECT_URI |
| LLM | GROQ_API_KEY, GROQ_MODEL, GROQ_MODEL_GOLDILOCKS |
| Search | SERPER_API_KEY |
| Cache | UPSTASH_REDIS_REST_URL, UPSTASH_REDIS_REST_TOKEN |
| Email verification | REACHER_API_BASE_URL |
All secrets are stored in the AWS Amplify Console’s environment variable settings, never in code or .env files committed to git.
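Of these, CRON_SECRET is worth a note: cron handlers are ordinary API routes reachable over HTTP, so each one would typically verify that the caller presents the shared secret before doing any work. A hypothetical guard (the exact header name and wiring in Astrelo may differ):

```javascript
// Hypothetical cron-route guard: only callers presenting the shared
// secret (e.g. an external scheduler) are allowed through.
function isAuthorizedCron(authHeader, secret) {
  return Boolean(secret) && authHeader === `Bearer ${secret}`;
}

console.log(isAuthorizedCron('Bearer s3cret', 's3cret')); // true
console.log(isAuthorizedCron('Bearer nope', 's3cret'));   // false
console.log(isAuthorizedCron('Bearer ', ''));             // false: empty secret never authorizes
```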
## Key Takeaways

- **Single Next.js deploy on AWS Amplify:** the entire platform (frontend, API, cron handlers) in one unit.
- **Webpack fallbacks** solve the barrel-export problem where server-side imports leak into client bundles.
- **Env var shell loop with quoting** handles special characters that would break simpler approaches.
- **Migration runner** with idempotent SQL and "already exists" safety nets ensures database changes are applied reliably.
- **Monolith advantage:** shared code, shared pool, single deploy. The simplicity outweighs the scaling limitations at current scale.
This concludes the technical walkthrough of Astrelo. From JWT authentication to ML scoring, from webhook pipelines to LLM-powered chat agents, from rate limiting to production deployment — you’ve seen how every piece fits together.
The system is complex, but it’s built from simple patterns composed together: parameterized queries, Promise.all for parallelism, Promise.race for timeouts, barrel exports for modularity, JSONB for flexibility, and two-model LLM architecture for cost optimization. Master these patterns and you can build anything.