Chapter 10: Discovery Engine

The scoring engine ranks companies already in your CRM. But what about companies you haven’t found yet? The Discovery Engine searches for new prospects that match your winning profile, and it deliberately explores outside that profile to prevent tunnel vision.

The Explore/Exploit Dilemma

This is a fundamental problem in machine learning and decision theory: should you exploit what you know works, or explore something new?

Exploit only: You find more companies exactly like your past winners. Safe, predictable, but you’ll never discover new markets. If your winning profile says “100-200 person SaaS companies in North America,” you’ll never find that 500-person European fintech that’s actually a great fit.

Explore only: You cast a wide net, testing every possible company type. You’ll discover new segments, but waste enormous time on bad-fit prospects. Most explorations fail.

The solution: 80/20 split. 80% of discovered companies match your winning profile (exploit). 20% are deliberate experiments outside your profile (explore). This is inspired by Google’s “20% time” and the multi-armed bandit problem in statistics.
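
In code, the split is just a budget allocation. Here’s a minimal sketch; the names are illustrative, not Astrelo’s actual implementation:

const EXPLOIT_RATIO = 0.8; // 80% exploit, 20% explore

function splitDiscoveryBudget(totalProspects: number) {
  const exploit = Math.round(totalProspects * EXPLOIT_RATIO);
  return {
    exploit,                           // companies matching the winning profile
    explore: totalProspects - exploit, // deliberate experiments outside it
  };
}

splitDiscoveryBudget(20); // => { exploit: 16, explore: 4 }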

How Discovery Works

// src/domain/scoring/services/discovery/mlProspectDiscoveryService.ts (simplified)
async discoverProspectsForUser(options: DiscoverProspectsOptions): Promise<DiscoveryResult> {
  // 1. Get/build winning profile from closed deals
  const winningProfile = await mlFitScoreService.getOrBuildWinningProfile(options.userId);
  if (!winningProfile || winningProfile.confidence === 'insufficient') {
    return {
      success: false,
      error: 'Insufficient deal data. Need at least 5 closed deals for discovery.',
    };
  }

  // 2. Get existing domains to exclude (don't discover companies already in CRM)
  const existingDomains = await getExistingDomains(options.userId);

  // 3. Build LLM prompt from winning profile
  const promptContext = await this.buildPromptContextFromPatternAnalyzer(
    options.userId,
    winningProfile,
    config,
    existingDomains,
    options
  );

  // 4. Run discovery strategies in parallel
  // ... 7 strategies, each using Groq LLM
}

The Safety Check: Minimum 5 Closed Deals

Discovery requires at least 5 closed deals. Why? The winning profile is built from statistical analysis (weighted means, standard deviations, embedding centroids). With fewer than 5 data points, the statistics are unreliable: your “winning profile” might just be random noise. Five deals is the minimum for a meaningful pattern.

Domain Deduplication

Before running discovery, we load every domain already in the user’s CRM. Discovered companies are filtered against this set to prevent duplicates. If “acme.com” is already in your pipeline, the LLM won’t suggest it again.
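
A sketch of that filter, assuming domains are stored lowercase and the existing set is loaded up front (the helper name here is hypothetical):

// Hypothetical helper: drop suggestions whose domain is already in the CRM.
function dedupeByDomain<T extends { domain: string }>(
  suggestions: T[],
  existingDomains: Set<string>
): T[] {
  return suggestions.filter((s) => !existingDomains.has(s.domain.toLowerCase()));
}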

The Seven Discovery Strategies

Discovery runs seven strategies, each targeting a different dimension (a code sketch of the catalog follows the list):

Exploit Strategies (80%)

1. Profile Matches: Companies that closely match your winning profile across all dimensions (industry, size, geography).

2. Goldilocks Matches: Companies that would score Composite ≥ 70 based on available signals. These are the “high probability” prospects.

Explore Strategies (20%)

3. Smaller Companies: Companies 50-75% of your typical deal size. Tests whether your solution works at a lower price point.

4. Larger Companies: Companies 150-300% of your typical deal size. Tests whether you can sell upmarket.

5. Adjacent Industries: Companies in industries related to but not identical to your winners. If you win in “Computer Systems Design,” this explores “Data Processing” and “IT Consulting.”

6. New Geographies: Companies in regions where you haven’t closed deals. If all your wins are in North America, this explores European or APAC companies.
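
The strategies above could be modeled as a simple catalog. This sketch assumes a shape like the following; the type and names are illustrative, but the modes and ranges come straight from the descriptions above:

type DiscoveryStrategy = {
  name: string;
  mode: 'exploit' | 'explore';
  description: string;
};

const STRATEGIES: DiscoveryStrategy[] = [
  { name: 'profile_matches',     mode: 'exploit', description: 'Close match on industry, size, and geography' },
  { name: 'goldilocks',          mode: 'exploit', description: 'Signals suggest Composite >= 70' },
  { name: 'smaller_companies',   mode: 'explore', description: '50-75% of typical deal size' },
  { name: 'larger_companies',    mode: 'explore', description: '150-300% of typical deal size' },
  { name: 'adjacent_industries', mode: 'explore', description: 'Related but not identical industries' },
  { name: 'new_geographies',     mode: 'explore', description: 'Regions with no closed deals yet' },
];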

Each explore strategy has a minimum of 2 companies, enough to test the segment without over-investing:

const MIN_PER_CATEGORY = 2;   // Minimum 2 per exploration category
const EXPLORE_OVERFETCH = 4;  // Request 4, expect 2 after filtering

We request 4 and keep 2 because the LLM sometimes suggests duplicates or companies that fail domain verification.
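
Putting the over-fetch together, here’s a sketch of one explore category; the helpers are hypothetical stand-ins for the real LLM call and domain verification:

type Company = { name: string; domain: string };

// Hypothetical stand-ins for the real LLM call and domain verification.
declare function requestFromLlm(category: string, count: number): Promise<Company[]>;
declare function hasVerifiedDomain(c: Company): boolean;

const MIN_PER_CATEGORY = 2;
const EXPLORE_OVERFETCH = 4;

async function fetchExploreCategory(
  category: string,
  existingDomains: Set<string>
): Promise<Company[]> {
  const raw = await requestFromLlm(category, EXPLORE_OVERFETCH);        // ask for 4
  const verified = raw.filter(hasVerifiedDomain);                       // drop unverifiable domains
  const unique = verified.filter((c) => !existingDomains.has(c.domain)); // drop CRM duplicates
  return unique.slice(0, MIN_PER_CATEGORY);                             // keep 2
}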

The LLM Prompt

The discovery engine uses Groq (Llama 3.1/3.3) to generate prospect lists. The prompt is built from your winning profile:

You are a B2B sales intelligence expert. Find companies matching this profile:

WINNING PROFILE:
- Top industries: Computer Systems Design (45% of wins), Software Publishing (30%)
- Employee range: 50-300 (sweet spot: 150)
- Revenue range: $5M-$50M (sweet spot: $20M)
- Top regions: California (35%), New York (20%), Texas (15%)
- Average deal value: $85,000
- Win rate: 42%

EXCLUDE these domains (already in CRM):
acme.com, bigcorp.io, techstart.com, ...

Return 10 companies matching this profile. For each, provide:
- Company name, domain, industry, employee count, revenue range
- Why they match the winning profile
- A confidence score (0-100)

Respond in JSON format.

The LLM returns structured JSON (using Groq’s JSON mode), which is parsed, validated, and scored before being stored.
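
For the curious, here’s a sketch of what that call could look like against Groq’s OpenAI-compatible chat completions endpoint. The model name and the handling around it are assumptions for illustration, not Astrelo’s exact code:

// Sketch of a Groq call in JSON mode (model name is an example).
async function runDiscoveryPrompt(prompt: string) {
  const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'llama-3.3-70b-versatile',
      messages: [{ role: 'user', content: prompt }],
      response_format: { type: 'json_object' }, // Groq's JSON mode
    }),
  });
  const data = await res.json();
  // JSON mode guarantees syntactically valid JSON, but not the schema:
  // parse, then validate fields before scoring/storing.
  return JSON.parse(data.choices[0].message.content);
}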

Tracking Exploration Outcomes

The explore/exploit split isn’t static. It adapts based on results:

-- exploratory_segments table tracks each exploration
exploratory_segments (20 cols)
├── segment_type            -- e.g., 'industry', 'employee_size', 'geography'
├── segment_value           -- e.g., 'Fintech', '>500 employees', 'Europe'
├── times_shown INT         -- How many times this segment was presented
├── times_acted_on INT      -- How many times the user engaged
├── deals_created INT       -- How many deals came from this segment
├── deals_won INT           -- How many were won
├── deals_lost INT          -- How many were lost
├── total_deal_value NUMERIC
├── status                  -- 'exploring', 'absorbed', 'abandoned'
├── confidence_score NUMERIC

Each exploration segment accumulates data over time, moving through one of three statuses (sketched in code after the list):

  • Exploring → The system is still testing this segment
  • Absorbed → The segment proved successful and is now part of the winning profile (the ICP expands)
  • Abandoned → After enough data, the segment clearly doesn’t work (too many losses or no engagement)
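
A sketch of how that status decision might look. The 3-win absorption threshold echoes the example in the next section; the other thresholds are illustrative assumptions, not Astrelo’s actual values:

type SegmentStats = {
  timesShown: number;
  timesActedOn: number;
  dealsWon: number;
  dealsLost: number;
};

function nextStatus(s: SegmentStats): 'exploring' | 'absorbed' | 'abandoned' {
  if (s.dealsWon >= 3) return 'absorbed';                             // proven: fold into the ICP
  if (s.timesShown >= 10 && s.timesActedOn === 0) return 'abandoned'; // no engagement
  if (s.dealsLost >= 5 && s.dealsWon === 0) return 'abandoned';       // consistent losses
  return 'exploring';                                                 // keep testing
}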

Bell Curve Shifts

When an exploratory segment is “absorbed,” it shifts the winning profile:

-- bell_curve_shifts table records how the ICP changed
bell_curve_shifts (12 cols)
├── shift_type            -- e.g., 'expansion', 'contraction'
├── dimension             -- e.g., 'employee_size', 'industry'
├── previous_range JSONB  -- { min: 50, max: 300 }
├── new_range JSONB       -- { min: 50, max: 500 } (expanded!)
├── trigger_reason        -- e.g., "3 deals won in 300-500 employee segment"
├── supporting_deals INT  -- How many deals support this shift

If you start exploring larger companies (300-500 employees) and win 3 deals there, the system proposes expanding your ICP’s employee range from 50-300 to 50-500. The “bell curve” of your winning profile literally shifts to accommodate the new data.
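
A sketch of the proposal step for that exact scenario; the row shape mirrors the bell_curve_shifts columns above, while the function itself is an assumption:

// Hypothetical proposal step: an absorbed segment produces a shift row.
function proposeEmployeeRangeExpansion(dealsWonInSegment: number) {
  if (dealsWonInSegment < 3) return null; // not enough supporting deals yet
  return {
    shift_type: 'expansion',
    dimension: 'employee_size',
    previous_range: { min: 50, max: 300 },
    new_range: { min: 50, max: 500 },
    trigger_reason: `${dealsWonInSegment} deals won in 300-500 employee segment`,
    supporting_deals: dealsWonInSegment,
  };
}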

This is the learning loop in action:

  1. Winning profile → Discovery finds companies
  2. Some are explored (outside the profile)
  3. User converts some explores to deals
  4. Deals close → Winning profile updates
  5. Updated profile → Better discovery

The Discovery Pipeline

Discovery results go through a pipeline before reaching the user:

LLM generates companies
  → Domain verification (does the domain actually exist?)
  → Dedup against existing CRM data
  → Score each company (quick fit estimate)
  → Filter by minimum confidence (>50)
  → Store in discovery_results table
  → Present in "Ready to Engage" queue
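
As code, the pipeline is a chain of filters. This sketch uses hypothetical stage functions, one per step in the diagram:

type Discovered = { domain: string; mlFitScore: number };

// Hypothetical stage functions, one per step in the diagram above.
declare function generateCompanies(userId: string): Promise<Discovered[]>;
declare function verifyDomains(cs: Discovered[]): Promise<Discovered[]>;
declare function dedupeAgainstCrm(cs: Discovered[], userId: string): Promise<Discovered[]>;
declare function scoreCompanies(cs: Discovered[]): Promise<Discovered[]>;
declare function storeDiscoveryResults(cs: Discovered[]): Promise<void>;

async function runDiscoveryPipeline(userId: string) {
  let companies = await generateCompanies(userId);        // LLM output
  companies = await verifyDomains(companies);             // domain actually exists?
  companies = await dedupeAgainstCrm(companies, userId);  // drop known domains
  companies = await scoreCompanies(companies);            // quick fit estimate
  companies = companies.filter((c) => c.mlFitScore > 50); // minimum confidence
  await storeDiscoveryResults(companies);                 // rows start as status 'new'
}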

The discovery_results table tracks the lifecycle:

discovery_results (37 cols)
├── status DEFAULT 'new'   -- new → viewed → saved/dismissed/converted
├── viewed_at              -- When the user first saw it
├── saved_at               -- When saved for later
├── dismissed_at           -- When rejected
├── dismiss_reason TEXT    -- "Bad fit", "Already contacted", etc.
├── converted_at           -- When added to the CRM pipeline
├── ml_fit_score NUMERIC   -- Quick fit estimate at discovery time
├── discovery_source       -- 'groq_llm', 'web_search', etc.
├── explored_segment       -- Which explore category, if any
├── explore_category       -- 'smaller', 'larger', 'adjacent_industry', etc.
├── deviation_reason TEXT  -- Why this deviates from the profile

The Ready to Engage Queue

Discovered companies surface in the Command Center as a prioritized queue:

┌────────────────────────────────────────────┐
│ Ready to Engage                            │
│                                            │
│ ★ CloudTech Solutions    Fit: 87  New      │
│   Computer Systems Design, 200 emp         │
│   "Strong profile match: industry,         │
│    size, and geography align"              │
│   [Save] [Dismiss] [Add to Pipeline]       │
│                                            │
│ ◆ FinanceAI Corp         Fit: 72  Explore  │
│   Financial Software, 450 emp              │
│   "Testing larger company segment"         │
│   [Save] [Dismiss] [Add to Pipeline]       │
│                                            │
└────────────────────────────────────────────┘

Profile matches (★) are shown first. Explore picks (◆) are labeled so the user knows they’re experimental. The dismiss and save actions feed back into the segment tracking system.
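
A sketch of that ordering, assuming a queue item shape like the one below: exploit picks sort ahead of explore picks, and ties break on fit score:

type QueueItem = { mode: 'exploit' | 'explore'; fitScore: number };

// Profile matches (exploit) first, then by descending fit score.
function sortReadyToEngage(items: QueueItem[]): QueueItem[] {
  return [...items].sort((a, b) => {
    if (a.mode !== b.mode) return a.mode === 'exploit' ? -1 : 1;
    return b.fitScore - a.fitScore;
  });
}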

Key Takeaways

  1. 80/20 explore/exploit balances known winners with deliberate experimentation. Without exploration, your pipeline ossifies. Without exploitation, it’s random.

  2. Seven strategies cover profile matches, Goldilocks prospects, and four exploration dimensions (size up, size down, adjacent industries, new geographies).

  3. A minimum of 5 closed deals is required. The winning profile is statistical; it needs data.

  4. Exploration outcomes are tracked. Successful experiments get absorbed into the ICP. Failed ones are abandoned. The system learns over time.

  5. Bell curve shifts are the payoff: the winning profile literally expands when explorations prove successful. Discovery makes your scoring smarter over time.

Next chapter: we leave the scoring engine and enter the integrations layer, starting with how Astrelo connects to HubSpot through OAuth.
