Part 2: Intelligence Engine · Ch 9: Composite Score

You now understand Fit (how similar is this company to your winners?) and Intent (how actively are they engaging?). The Composite Score combines them into a single number that ranks your entire pipeline.

Composite Score

Composite = (Fit × 50%) + (Intent × 50%)

| Component | Weight | Description |
| --- | --- | --- |
| Fit | 50% | How well the company matches your ICP (NAICS + size) |
| Intent | 50% | Buying signal strength (volume + recency + topic) |

The Composite Formula

Composite Score = (Fit × fitWeight) + (Intent × intentWeight)

At its simplest, with equal weights:

Composite = (Fit × 0.5) + (Intent × 0.5)

A company with Fit 90 and Intent 30 scores (90 × 0.5) + (30 × 0.5) = 60. A company with Fit 60 and Intent 60 also scores (60 × 0.5) + (60 × 0.5) = 60.

Same composite, very different stories. The first is a great-fit company that isn’t engaging. The second is a moderate-fit company that’s actively interested. The composite tells you “these are equally worth your time” — but the recommended action differs. More on that later.
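The arithmetic above can be sketched as a small helper. This is an illustrative sketch; `computeComposite` and its default weights are not taken from Astrelo's codebase:

```typescript
// Hypothetical helper for the weighted blend described above.
// Defaults to the equal-weight (50/50) case.
function computeComposite(
  fit: number,
  intent: number,
  fitWeight = 0.5,
  intentWeight = 0.5,
): number {
  return fit * fitWeight + intent * intentWeight;
}

// Both companies from the text land on the same composite:
computeComposite(90, 30); // 60: great fit, little engagement
computeComposite(60, 60); // 60: moderate fit, active interest
```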

Buying Stage Weights

The equal-weight default is a starting point. In practice, the optimal weight depends on where your prospect is in the buying journey:

```typescript
// src/domain/scoring/constants/index.ts, lines 124-131
export const COMPOSITE_STAGE_WEIGHTS: Record<BuyingStage, { fit: number; intent: number }> = {
  awareness:     { fit: 0.65, intent: 0.35 }, // "Is this the right company?"
  consideration: { fit: 0.50, intent: 0.50 }, // Balanced
  decision:      { fit: 0.35, intent: 0.65 }, // "Are they actively buying?"
  retention:     { fit: 0.60, intent: 0.40 }, // Relationship value matters
};
```

Why the shift?

In the awareness stage, you’re prospecting — looking for companies that match your winning profile. Fit dominates because you haven’t established a relationship yet. Intent data is sparse (they might not even know you exist).

In the decision stage, fit is already established (they wouldn’t be evaluating your product if they weren’t a reasonable fit). What matters now is momentum — are they replying to emails? Attending demos? Downloading contracts? Intent dominates.

In retention, fit circles back — you’re asking “should we invest in keeping this customer?” A bad-fit customer that was won through heavy discounting might not be worth retaining.
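The stage shift can be seen by rescoring one company under different stages. A minimal sketch, reusing the `COMPOSITE_STAGE_WEIGHTS` values from the excerpt above; the `stageComposite` helper is illustrative:

```typescript
type BuyingStage = 'awareness' | 'consideration' | 'decision' | 'retention';

// Values copied from the excerpt above.
const COMPOSITE_STAGE_WEIGHTS: Record<BuyingStage, { fit: number; intent: number }> = {
  awareness:     { fit: 0.65, intent: 0.35 },
  consideration: { fit: 0.50, intent: 0.50 },
  decision:      { fit: 0.35, intent: 0.65 },
  retention:     { fit: 0.60, intent: 0.40 },
};

// Hypothetical helper: blend fit and intent with the stage's weights.
function stageComposite(fit: number, intent: number, stage: BuyingStage): number {
  const w = COMPOSITE_STAGE_WEIGHTS[stage];
  return fit * w.fit + intent * w.intent;
}

// The same 90-fit / 30-intent company ranks very differently by stage:
stageComposite(90, 30, 'awareness'); // ≈ 69: fit-heavy weighting rewards the match
stageComposite(90, 30, 'decision');  // ≈ 51: intent-heavy weighting punishes the silence
```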

Sales Strategy Presets

For users who prefer simpler controls:

```typescript
// src/config/scoring.ts, lines 59-65
export const RANKING_SCORE = {
  PRESETS: {
    NET_NEW:   { FIT: 0.7, INTENT: 0.3 }, // Prospecting: find new segments
    EXPANSION: { FIT: 0.3, INTENT: 0.7 }, // Upsell: focus on engaged accounts
    BALANCED:  { FIT: 0.5, INTENT: 0.5 }, // Default
  },
};
```

A rep doing outbound prospecting selects “Net New” → Fit gets 70% weight, surfacing companies that match the ICP regardless of current engagement. A rep managing existing accounts selects “Expansion” → Intent gets 70%, surfacing accounts that are actively showing buying signals.
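A preset choice can flip the ranking order outright. A minimal sketch using the preset values from the excerpt above; the sample accounts and the `rank` helper are hypothetical:

```typescript
// Preset values copied from the excerpt above.
const PRESETS = {
  NET_NEW:   { FIT: 0.7, INTENT: 0.3 },
  EXPANSION: { FIT: 0.3, INTENT: 0.7 },
  BALANCED:  { FIT: 0.5, INTENT: 0.5 },
} as const;

type Preset = keyof typeof PRESETS;
interface Account { name: string; fit: number; intent: number }

// Hypothetical helper: order accounts by the preset-weighted composite.
function rank(accounts: Account[], preset: Preset): string[] {
  const { FIT, INTENT } = PRESETS[preset];
  return [...accounts]
    .sort((a, b) => (b.fit * FIT + b.intent * INTENT) - (a.fit * FIT + a.intent * INTENT))
    .map((a) => a.name);
}

const accounts: Account[] = [
  { name: 'GoodFitQuiet', fit: 90, intent: 30 },
  { name: 'EngagedOkFit', fit: 60, intent: 85 },
];

rank(accounts, 'NET_NEW');   // ['GoodFitQuiet', 'EngagedOkFit']
rank(accounts, 'EXPANSION'); // ['EngagedOkFit', 'GoodFitQuiet']
```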

The Goldilocks Zone

The most important concept in Astrelo’s scoring: Goldilocks Accounts.

Goldilocks = Fit ≥ 70 AND Intent ≥ 70 AND Composite ≥ 75

These are accounts that are both a good fit AND actively engaging. They’re “just right” — not too cold (high fit, no intent), not too improbable (high intent, bad fit).

Goldilocks Account Thresholds

Fit ≥ 70 AND Intent ≥ 70 AND Composite ≥ 75

| Condition | Requirement | Description |
| --- | --- | --- |
| Fit ≥ 70 | Required | Strong company-ICP match |
| Intent ≥ 70 | Required | Active buying signals |
| Composite ≥ 75 | Required | Overall priority threshold |

Why three conditions?

  • Fit ≥ 70 eliminates companies that don’t match your winning profile, no matter how engaged they are
  • Intent ≥ 70 eliminates companies that match your profile but aren’t showing buying signals
  • Composite ≥ 75 provides a floor that catches edge cases (a company with exactly 70/70 scores composite 70 with equal weights — below the 75 threshold)

Goldilocks accounts are flagged throughout the UI: gold badges in the ranking table, priority placement in the command center, and preferential treatment in Cosmo’s recommendations.
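The three-condition check reduces to a one-line predicate. A minimal sketch; `isGoldilocks` is an illustrative name, not a confirmed function in the codebase:

```typescript
interface Scores { fit: number; intent: number; composite: number }

// All three thresholds must hold; any single miss disqualifies the account.
function isGoldilocks({ fit, intent, composite }: Scores): boolean {
  return fit >= 70 && intent >= 70 && composite >= 75;
}

isGoldilocks({ fit: 80, intent: 78, composite: 79 }); // true
isGoldilocks({ fit: 70, intent: 70, composite: 70 }); // false: fails the 75 composite floor
```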

The Scoring Orchestrator

All three scores (fit, intent, composite) are calculated in a single batch operation:

```typescript
// src/domain/scoring/services/orchestration/mlScoringJobProcessor.ts (simplified)
async function processScoring(userId: string) {
  // 1. Build winning profile from closed deals
  const winningProfile = await buildWinningProfile(userId);

  // 2. Load all companies for this user
  const companies = await loadCompanies(userId);

  // 3. Calculate fit scores (NAICS + size + tech)
  const fitScores = await fitScoringProcessor.scoreBatch(companies, winningProfile);

  // 4. Calculate intent scores (volume + recency + topics)
  const intentScores = await intentScoringProcessor.scoreBatch(companies, userId);

  // 5. Combine into composites
  const composites = companies.map((company, i) => {
    const fit = fitScores[i] || 0;
    const intent = intentScores[i] || 0;
    const composite = (fit * 0.5) + (intent * 0.5);
    return { companyId: company.id, fit, intent, composite };
  });

  // 6. Batch upsert to scores table
  await scorePersistenceService.batchUpsert(userId, composites);
}
```

The key insight is batch processing. Instead of scoring one company at a time (N database queries), the orchestrator loads all data upfront, processes everything in memory, and writes results in a single batch upsert. For 500 companies, this takes ~30 seconds instead of the 10+ minutes that individual scoring would require.

Batch Upsert: Writing Scores Efficiently

The scores are written using PostgreSQL’s UNNEST for batch insertion:

```sql
INSERT INTO scores (user_id, company_id, fit_score, intent_score, composite_score, scored_at)
SELECT $1, unnest($2::uuid[]), unnest($3::numeric[]), unnest($4::numeric[]), unnest($5::numeric[]), NOW()
ON CONFLICT (user_id, company_id) DO UPDATE SET
  fit_score = EXCLUDED.fit_score,
  intent_score = EXCLUDED.intent_score,
  composite_score = EXCLUDED.composite_score,
  scored_at = NOW()
```

One SQL statement handles 500 inserts-or-updates. The ON CONFLICT clause (Chapter 3’s upsert pattern) means this works whether the company has been scored before or not.
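Invoking that statement from TypeScript is mostly a matter of pivoting row objects into parallel arrays. A sketch, assuming a node-postgres-style client with a `query(text, values)` method; the helper names are hypothetical:

```typescript
interface CompositeRow { companyId: string; fit: number; intent: number; composite: number }

// Any client exposing pg's query(text, values) shape will do.
type Queryable = { query(text: string, values: unknown[]): Promise<unknown> };

// Pivot rows into the five parallel arrays the UNNEST statement expects.
function buildUnnestParams(
  userId: string,
  rows: CompositeRow[],
): [string, string[], number[], number[], number[]] {
  return [
    userId,
    rows.map((r) => r.companyId),
    rows.map((r) => r.fit),
    rows.map((r) => r.intent),
    rows.map((r) => r.composite),
  ];
}

async function batchUpsertScores(db: Queryable, userId: string, rows: CompositeRow[]): Promise<void> {
  // One round trip: the driver serializes the JS arrays into Postgres arrays,
  // and UNNEST zips them back into rows server-side.
  await db.query(
    `INSERT INTO scores (user_id, company_id, fit_score, intent_score, composite_score, scored_at)
     SELECT $1, unnest($2::uuid[]), unnest($3::numeric[]), unnest($4::numeric[]), unnest($5::numeric[]), NOW()
     ON CONFLICT (user_id, company_id) DO UPDATE SET
       fit_score = EXCLUDED.fit_score,
       intent_score = EXCLUDED.intent_score,
       composite_score = EXCLUDED.composite_score,
       scored_at = NOW()`,
    buildUnnestParams(userId, rows),
  );
}
```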

Score Caching with Redis

Scoring is expensive — LLM calls, embedding lookups, and statistical computation. Results are cached in Upstash Redis with a 24-hour TTL:

```
Cache key: ml:scores:{userId}:{icpProfileId}
TTL:       24 hours
Value:     { scores: [...], scoredAt: timestamp }
```

When you open the ranking page:

  1. React Query sends GET /api/ranking/calculate
  2. The API checks Redis cache
  3. If cached AND less than 24 hours old → return cached scores
  4. If stale → trigger background re-scoring, return cached scores while it runs
  5. When re-scoring completes → update Redis cache

This means the first load after 24 hours shows slightly stale data while fresh scores calculate in the background. The user sees an instant response and then a refresh when the new scores arrive.
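The five steps above amount to a stale-while-revalidate pattern. A minimal sketch under stated assumptions: the `Cache` interface stands in for Upstash Redis, the cold-start branch (no cached scores at all) is an inference not spelled out in the text, and `getScores`/`rescore` are illustrative names:

```typescript
interface CachedScores { scores: unknown[]; scoredAt: number }
interface Cache {
  get(key: string): Promise<CachedScores | null>;
  set(key: string, value: CachedScores): Promise<void>;
}

const TTL_MS = 24 * 60 * 60 * 1000; // 24-hour TTL from the text

async function getScores(
  cache: Cache,
  userId: string,
  icpProfileId: string,
  rescore: () => Promise<CachedScores>,
  now: () => number = Date.now,
): Promise<CachedScores> {
  const key = `ml:scores:${userId}:${icpProfileId}`;
  const cached = await cache.get(key);

  // Fresh hit: serve directly.
  if (cached && now() - cached.scoredAt < TTL_MS) return cached;

  if (cached) {
    // Stale: kick off background re-scoring, serve the stale copy immediately.
    void rescore().then((fresh) => cache.set(key, fresh)).catch(() => {});
    return cached;
  }

  // Cold start (assumption): nothing cached, so wait for one scoring run.
  const fresh = await rescore();
  await cache.set(key, fresh);
  return fresh;
}
```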

The Scoring Trigger

Scoring doesn’t run continuously. It runs:

  1. On login — if scores are stale (fire-and-forget, Chapter 2)
  2. After CRM sync — new data from HubSpot/Salesforce might change scores
  3. On demand — user clicks “Recalculate” in the ranking view
  4. Via cron — scheduled nightly for active users
```typescript
// From the login handler (src/pages/api/auth/login.ts, lines 126-129)
const orchestrator = createScoringOrchestrator(pool);
orchestrator.runScoringIfStale(user.id)
  .then(() => {})
  .catch(() => {});
```

The .then(() => {}).catch(() => {}) pattern means “fire and forget.” Login returns immediately. Scoring runs in the background. If it fails, the user never knows — they just see their last-known scores.

Key Takeaways

  1. Composite = Fit × weight + Intent × weight. The simplest formula with the most impact. Weights can shift by buying stage or sales strategy.

  2. Goldilocks accounts (Fit ≥ 70, Intent ≥ 70, Composite ≥ 75) are the highest-probability prospects. Focus here first.

  3. Batch processing scores all companies at once instead of one at a time. This takes 30 seconds instead of 10+ minutes for 500 companies.

  4. Redis caching with 24-hour TTL makes the ranking page load instantly. Background re-scoring keeps data fresh without blocking the UI.

  5. Scoring runs on triggers (login, sync, demand, cron), not continuously. This balances freshness with computational cost.

Next chapter: the Discovery Engine — how Astrelo finds NEW companies outside your CRM using an 80/20 explore/exploit strategy.
