We started by sprinting
We had the tools, we had the ability, so we built what we thought we needed - a scoring model on intent, behavior, and company data, scored zero to a hundred. No data science in the first pass because we had no historical data. Just which offers converted at higher rates, feedback from the sales team, and a lot of arbitrarily assigned values: 20 points to this, 40 to that. If you looked at it you'd giggle.
We sent the scored leads to sales. They said: we just need all of them.
So we reverted. Back to binary. Good enough company data, good enough activity - yes or no. That's where I'd recommend most people start. It's tempting to pick seven or eight attributes and run a complex model. Nobody used ours. It was way above where we actually needed to be.
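That binary starting point is simple enough to sketch. The field names and thresholds below are invented for illustration - the real criteria depend on your ICP - but the shape is the point: two yes/no checks, no weights.

```python
def is_qualified(lead: dict) -> bool:
    """Binary yes/no: good-enough company data AND good-enough activity.
    Thresholds and fields here are hypothetical examples."""
    good_fit = (
        lead.get("employee_count", 0) >= 50          # illustrative firmographic bar
        and lead.get("industry") in {"saas", "fintech"}  # illustrative ICP list
    )
    good_activity = lead.get("sessions_last_30d", 0) >= 2  # illustrative activity bar
    return good_fit and good_activity
```

Swapping an attribute in or out is a one-line change, which is exactly why this version got used when the weighted model didn't.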
Three layers
What we ended up with was three layers that worked together:
Layer 1: Key activities. Binary. Four specific actions, any one of which sets the score to 100 and sends the lead straight to sales. Speed to lead for the ones we know are ready.
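The layer 1 logic is a short allowlist of high-intent events. The event names below are made up - the idea is that any single match short-circuits everything else and routes to sales immediately.

```python
# Hypothetical high-intent actions; the real four were specific to our product.
KEY_ACTIONS = {"requested_demo", "started_trial", "viewed_pricing", "invited_teammate"}

def route(lead_events: set) -> tuple:
    """Any key action -> score 100, straight to sales. Otherwise fall through
    to the other layers (represented here as 'nurture')."""
    if lead_events & KEY_ACTIONS:  # non-empty intersection = at least one match
        return (100, "sales")
    return (0, "nurture")
```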
Layer 2: Self-selection. If a lead scores low on company data, we serve up a questionnaire on-site right after they enter their email. A few questions about whether they'd be a good fit. If their answers indicate a fit, they still get surfaced to the SDR team. Data isn't perfect - if you rely on it as the only way to route leads, you miss good ones.
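The fail-safe amounts to letting self-reported answers override a low firmographic score. A minimal sketch, with invented question fields and an invented threshold:

```python
from typing import Optional

def should_surface_to_sdr(company_score: int, questionnaire: Optional[dict]) -> bool:
    """Low company-data score can be rescued by self-reported fit.
    The threshold and questionnaire fields are hypothetical."""
    if company_score >= 70:        # illustrative cutoff for good-enough data
        return True
    if questionnaire is None:      # no answers, no rescue
        return False
    # Self-selection: answers that indicate fit surface the lead anyway
    return (
        questionnaire.get("team_size", 0) >= 10
        and questionnaire.get("has_budget", False)
    )
```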
Layer 3: Engagement scoring. All events collected through Segment, modeled in dbt, scored zero to a hundred. Each offer gets categorized as product or content, unioned together, and we look at the last 60 days. That activity score syncs to Salesforce nightly via Census so the team can filter and prioritize leads that have been heavily engaged but maybe didn't hit the binary threshold.
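The real layer 3 model lived in dbt SQL, but the logic reads fine in Python: keep events from the last 60 days, weight them by whether the offer was product or content, and cap at 100. The weights here are invented placeholders.

```python
from datetime import datetime, timedelta

# Assumed per-category weights -- tune these against what actually converts.
CATEGORY_WEIGHTS = {"product": 10, "content": 3}

def engagement_score(events, now):
    """events: iterable of (timestamp, category) tuples, e.g. from Segment.
    Sums weighted events inside a 60-day window, capped at 100."""
    cutoff = now - timedelta(days=60)
    raw = sum(
        CATEGORY_WEIGHTS.get(category, 0)
        for timestamp, category in events
        if timestamp >= cutoff
    )
    return min(raw, 100)
```

In the real pipeline this number is what Census syncs to a Salesforce field nightly, so reps can sort on it without ever seeing the underlying events.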
The feedback loop
We eventually moved to MadKudu for machine-learning-based scoring on the company data side. But the model is only as good as your feedback. We got the sales team to log disqualification (DQ) reasons and follow the process, then reviewed quarterly.
Through that analysis we found leads we'd scored low that came back and converted. We'd scored them wrong on the company data. That's the signal that tells you where your model is wrong.
Updating the model isn't the hard part technically - it's five minutes to change the attributes. The real work is getting buy-in across sales and marketing on what the ICP should be. At a small company that's a quick conversation. As we doubled the team, that timeline stretched. You need sales in the room, because watching a salesperson go through a lead list is the most informative thing you can do as someone building a scoring model.
What I learned
- Start simple. We went from sprint to crawl and that's how it should have been from the start. Binary yes/no on company fit plus key activity is enough to get going.
- Build fail-safes. No scoring model is perfect. Questionnaires, nurture emails, engagement scoring - build ways for good leads to surface even when the model misses them.
- Sales buy-in is the whole game. I sent an outbound list to sales and they sent it back and said "this is crap." I watched them go through it, show me why each lead wasn't good, and learned more from that than any data analysis. The data is the same - the context is different.
- It's mostly art. Unless you're Reddit or eBay, good luck getting anything statistically significant in B2B Enterprise. Data helps identify attributes you missed. But given our size, sales cycle, and customer base - it was more art than I'd like to admit.