
How To Build A Lead Score That Actually Works

An effective lead score is a thing of beauty. Leads pour into a CRM, and behind the scenes a scoring model quantifies their quality. Marketing can then determine how many leads to pass to sales and signal which should be contacted immediately. Sales can receive new leads and prioritize those who are most likely to close. Voila! Sounds easy, right?

In reality, most businesses fail to implement an effective scoring model, wasting precious resources in the process.

Why do lead score projects fail so often?

Here are the most common pitfalls we've seen across hundreds of businesses:

  • Teams don't know where to start, or there are too many ideas and analysis paralysis ensues
  • There's a disconnect between the purpose of the lead score and how it's actually being used
  • The scoring model becomes a monster of endless attributes beyond anyone's control or comprehension
  • Worse yet, after all the time and effort spent creating a scoring model, it proves ineffective at predicting positive outcomes, no better than randomly assigning values and hoping for the best

If you are facing similar challenges, don't worry, we've got you covered. This article will guide you through our proven step-by-step process for creating a lead score that actually works.

Here's what we'll cover and what you can expect to learn today:

1. Where to start before you start

2. How to analyze existing data

3. Why you should use a lead score planner

4. Common pitfalls to avoid

5. How to successfully implement a lead score

6. Ongoing optimization

 

Where to start before you start

In order to implement an effective lead score, you first need to critically answer these three questions:

  1. Why do we need a lead score?
  2. Who needs to be involved?
  3. How will we know if it's working?

Let’s break down each of these questions, starting with the first.

 

A. Why do we need a lead score?

The most common purpose of a scoring model is to identify which leads are most likely to convert into customers, enabling sales to prioritize their outreach efforts. While that purpose can feel applicable to any organization, it’s important to recognize the underlying challenge a lead score seeks to solve:


Sales does not have enough time to adequately manage lead volume.


If this statement doesn't resonate with you, it's a strong signal that the time isn't right to focus energy on building a lead score. Your time would be better spent growing lead volume, since sales already has enough time to adequately follow up with every new lead. If you are an early stage startup, you also might need more data in order to build a reliable scoring model. Bookmark this article for later if lead volume is manageable and/or you are just starting to grow your customer base.

Pro Tip: If you are a HubSpot customer and have Marketing Hub Enterprise or Sales Hub Enterprise, it's also worth mentioning HubSpot's predictive score properties, powered by machine learning. If your sole goal is the prioritization of leads, you should consider the 'likelihood to close' property either in place of, or in addition to, a manual lead scoring model. Learn more about predictive lead scoring here.

 

B. Who needs to be involved?

After establishing the value a lead score will bring to your business, the next step is to involve other key stakeholders. These stakeholders need to share the vision of how a lead score will help the company and commit to the project plan.

The two departments that are most heavily invested in the success of a lead score project are marketing and sales. Be sure to involve leaders from both departments early on, as they’ll determine whether the lead score is implemented & adopted effectively.

Let's take a moment to reflect on the importance of this step. Remember when we said a common reason for failure is a disconnect between the vision and the reality of how the lead score is being used? So often the underlying reason for the disconnect is that only one department sees the value of a scoring model. Either it's the marketing team creating a lead score that nobody on sales actually uses, or a sales team creating their own score without a full view into the attributes collected by marketing.

 

C. How will we know if it's working?

With leaders from both teams bought in, the final step is to define what success looks like. While the ultimate goal is growing revenue, you need to be cognizant of other outside factors impacting your measurement plan, such as the delay between when the score has been fully adopted and how long your buyer's journey takes to complete a cycle. If it typically takes around 5 weeks for a new lead to become a buyer, then it's unrealistic to expect a boost in sales after just the first month. Better leading indicators of success early on are deal creation or pipeline progression rates.

Once you’ve aligned on a measurement plan, the final step is to establish benchmarks. Use performance from trailing months, as well as year-over-year comparisons to control for seasonality. If enough time passes and benchmarks are being exceeded consistently, it's time to celebrate! You've built an effective lead scoring model.

Now let's cover the executional steps on how to turn this vision into reality.

 

How to analyze existing data

When you announce your intention to build a lead score to your colleagues, one thing you can expect is that everyone will have an opinion about what attributes they think should be included.

  • “We should score leads based on their company size”
  • “We should track which have been engaging with our emails recently”
  • “We should score certain form submissions higher than others”


None of these are necessarily bad ideas, but often a flood of well-intentioned ideas from every direction can become overwhelming and counter-productive. To avoid this, start by selecting your audience carefully. Sales reps and service reps are often the closest to your buyers, and are therefore most plugged into what attributes quality leads & customers possess.

Shadow a few tenured sales reps who consistently hit their quota. Whether they know it or not, these reps have developed an attuned ability to prioritize quality leads and deprioritize others. Meet individually and unpack what’s going on in their head as they scan through their lists:

  • Where do they start? How do they sort and filter information? Do they look at the person, the business, or both?
  • What signals do they look for when identifying a hot lead? What about a junk lead that isn't worth their time?
  • When thinking about their recent wins, what commonalities come to mind?

 

Take note of everything they say and do. Once you've met with each of the top performers, clean up your notes and write down all the data points that can be tracked. What you'll be left with is a manageable list of lead scoring attributes that you'll be able to analyze as you move onto the next phase of the project.

 

Data Analysis: Five Steps To Effectively Analyze Scoring Attributes

The goal of the data analysis is to verify which attributes truly indicate a lead that's more likely to become a customer. If you don't analyze your existing data, you are merely relying on gut instinct to inform your scoring model.

Follow the steps below to complete an effective analysis on new or existing attributes you are looking to validate.

 

I. Prioritize the most relevant datapoints. It's easy to get lost in a sea of scoring attributes when you look at all the data that's available. Thankfully, we have a north star to help guide us - the shortlist of attributes from the interviews we conducted. Highlight the attributes that top-performing reps focused on the most - these are the 20% of attributes that will drive 80% of the scoring model's effectiveness.

 

II. Understand the data available to you. Scoring attributes commonly fall into three buckets: behavioral, demographic, and firmographic data. Behavioral traits are things like page views, form submissions, and email engagements. Demographic and firmographic traits are characteristics about the person or organization, like job title or industry. Your short-list of attributes likely contains a mix of behavioral and demographic/firmographic attributes. In this step, you need to investigate which of these data points are available to you, where they are stored, and how they've been collected. If you are using HubSpot, the majority of these data points are likely stored in the CRM with behavioral data points captured automatically, and everything else either provided by the lead, entered by the sales rep, or populated via HubSpot Insights.

 

III. Prepare the data for analysis. As you move into the data analysis, begin with the demographic and firmographic attributes. The reason we start here is to avoid redundant logic (which we'll discuss more later on), and because characteristics about a person or business tend to have the greatest impact on the scoring model's efficacy. Once you've gathered your data, export everything to a spreadsheet. While we could use reporting within the CRM to perform the data analysis, it's typically more efficient to use pivot tables in a spreadsheet to sort through the data. In your spreadsheet, be sure to also include fields that indicate which records went on to become quality customers. For example, lifecycle stage and/or total revenue.
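
If your export lands in a CSV, the pivoting can also be scripted. Here's a minimal pandas sketch, assuming a hypothetical export with columns named lifecycle_stage and job_title (adjust the names to match your own file):

```python
import pandas as pd

# Hypothetical CRM export: one row per contact, with the shortlisted
# attributes plus an outcome column (column names are assumptions).
df = pd.read_csv("contacts_export.csv")

# Flag which contacts went on to become customers.
df["is_customer"] = df["lifecycle_stage"].eq("customer")

# Pivot a demographic attribute against the outcome to compare how
# each value is distributed among customers vs. all contacts.
pivot = df.pivot_table(index="job_title", values="is_customer",
                       aggfunc=["count", "sum"])
pivot.columns = ["all_contacts", "customers"]
print(pivot.sort_values("customers", ascending=False))
```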

 

IV. Analyze the attribute index score. The most common mistake during data analysis is failing to calculate the attribute index score. This is also the step that's talked about the least, if at all, in other resources about building a lead score model. We'll spend the majority of our time on this step so you don't fall into this trap.

So, what is the attribute index score? And why is it imperative you integrate this concept into your lead score analysis?

Attribute Index Score: A measure used to calculate the correlation between a lead score attribute and the desired outcome.

 

In other words, how "likely" or "unlikely" it is that a lead will become a customer if they possess a certain attribute. An index score of 100 is average, above 100 is positively correlated with a successful outcome, and anything below 100 is negatively correlated with a successful outcome. As another point of reference, think about an attribute with a score of 200 as one that indicates leads who are twice as likely to convert into customers, whereas a score of 50 indicates leads who are only half as likely to convert.

With your data prepared for analysis, here's how you would calculate the attribute index score:

Attribute Index Score Calculation

Using the image above, we can see 3 out of 10 (30%) of customers share a common attribute. Among all contacts, including those customers, there are 5 out of 50 (10%) who have the same attribute. To calculate the attribute index score, we take the customer percentage and divide it by the contact percentage (30% / 10% = 3). The final step is to multiply by 100, which gives us an attribute index score of 300.
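
Expressed as a small Python function, the same calculation looks like this, using the numbers from the example above:

```python
def attribute_index_score(customers_with_attr, total_customers,
                          contacts_with_attr, total_contacts):
    """Attribute index score = (customer % / contact %) x 100."""
    customer_pct = customers_with_attr / total_customers
    contact_pct = contacts_with_attr / total_contacts
    return (customer_pct / contact_pct) * 100

# 3 of 10 customers vs. 5 of 50 contacts, as in the image above.
print(attribute_index_score(3, 10, 5, 50))  # 300.0
```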

Now that we've defined what an attribute index score is and how it can be calculated, let's use an example to further illustrate why it is so important. In the fictitious scenario below, we see the distribution of a demographic characteristic, favorite color, among a set of customer data. Take a moment to review the chart.

Partial Lead Score Analysis

What did you notice? Is it that almost half of all customers have green as their favorite color? Pretty remarkable. Surely we can include "favorite color is equal to green" as a positive attribute within our lead scoring model to support this insight, right?

Wrong. Well, at least not yet. We've only completed one half of the analysis. Now we need to look at the customer data in relation to the entire audience.

Let's take another look at the analysis, using the completed picture below.

Lead Score Data Analysis

What do you see this time?

Looking at the entire group of contacts, it's clear the color green is very popular among both customers and the audience at large. In fact, there's an even higher percentage of people in the general audience pool whose favorite color is green when we compare it against the customer data. Now let's look at the color blue. In the entire group of contacts, 14% of respondents said their favorite color was blue. Among customers that percentage rose to 27% - almost twice as high, for an attribute index score of roughly 193!

So, what does all of this mean in the context of building an effective lead scoring model?

Let's go back to the beginning and pretend we stopped short of running the full analysis. We saw almost half of all customers said their favorite color was green, and decided it should be a positive attribute. What was seemingly a logical addition to the lead score turns out to be quite the opposite. What we actually built was a lead score that's negatively correlated with our desired outcome. Sales would be better off ignoring the lead score entirely!

As extreme as this scenario might sound, it's the sad reality many organizations face without even knowing it. There's a rush of excitement when a trend is discovered among customers, and teams don't think to compare how that new attribute holds up against the audience at large. Thankfully, you are now aware of this pitfall and can apply the concept of an attribute index score to effectively analyze your data.

 

V. Ensure results are stable. We've now gone from conducting initial interviews, to understanding what data is available, to analyzing customer data using attribute index scores. But our work isn't done just yet. Before the data analysis is concluded, your final step is to validate the results are stable.

If the initial interviews were considered more of an art, this last step is much more of a science. You'll be applying proven methods of statistical analysis against the datapoints with high attribute index scores to validate the results are trustworthy.

The good news is you don't need to be a practiced statistician or data scientist to perform this analysis. There are dozens of online tools available that perform the heavy lifting. Here's a calculator created by HubSpot as part of their A/B Testing Kit that's free to download. All you need to do is make a copy of the worksheet, and relabel the following:

  • Variation A to "Customers"
  • Variation B to "Contacts"
  • Visitors to "Count of All"
  • Conversions to "Count of [Attribute X]"

AB Test Calculator

Here's an image of what the final result would look like for the "favorite color" example shared above. In case you were wondering - yes, the results for the color blue were statistically significant!
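
If you'd rather script the check than use the calculator, here's a minimal Python sketch of a two-proportion z-test. It's simplified: the counts below are hypothetical, and a more rigorous comparison would test customers against non-customers rather than against the full contact pool (which includes the customers):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(x1, n1, x2, n2):
    """Compare the attribute rate in group 1 (customers) against
    group 2 (all contacts); return the z statistic and p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))  # two-tailed p-value

# Hypothetical counts behind the 27% vs. 14% "favorite color is blue" split.
z, p = two_proportion_z_test(27, 100, 140, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # statistically significant if p < 0.05
```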

 

Why you should use a lead score planner

A lead score planner is any type of document, usually a workbook, that helps organize every scoring attribute in one place. When completed, teams will be able to see all of their logic statements with individually assigned point values, making the actual construction of the lead score itself a simple task.

Now if you've come this far, it's only natural to be wondering:

"Don't we have all the insights we need already? Why do we have to take an extra step to plan everything out?"

Fair questions. In this section, we'll break down the top three reasons why you should use a lead score planner.

 

1. A lead score planner will *save* you time.

If you are concerned about wasting time on this extra step, listen closely. Using a lead score planner before attempting to build the lead score will actually save you time.

To illustrate this point, let's walk through a scenario we've seen play out time and time again when the planning step is skipped.

Start the clock.

  • Minute 0: Your team has just completed a comprehensive data analysis. Momentum is strong.
  • Minute 1: The decision is made to begin building the lead scoring model in your CRM. An admin logs into the system.
  • Minute 5: The team watches the admin start a new scoring model. The blank canvas appears.
  • Minute 10: The raw data analysis is shared with the admin. After some discussion, transposing begins from top to bottom.
  • Minute 60: After nearly an hour, the work is complete. All of the attributes have been entered. But wait, someone says.
  • Minute 90: Point values were assigned incorrectly. Each is either the same or was picked at random to start. Now that the team sees the entire scoring model, it's clear scores need to be readjusted. The problem is identified, deliberation ensues.
  • Minute 120: Going through all the attributes takes time. It's hard to see more than a few attributes in a single screen, so there's lots of scrolling, adjusting, and second-guessing.
  • Minute 150: It doesn't seem possible, but the team has now spent another hour rescoring all of the attributes. Each attribute reopened discussions about all the other attributes, and why some were scored higher or lower than others. It was a painstaking process that, thankfully, is now complete. But wait, someone says.
  • Minute 180: Another issue is identified. There's unintended overlap between some of the logic statements. A contact can qualify for two different attributes, when they should really just meet one statement. The closer everyone looks, the larger the problem becomes. The majority of individual attributes need to be bundled together in order to keep the score in a desired range. Momentum is weakening.

While some of the specifics in this story might vary, the underlying challenge is the same. Lead scoring builders are not an effective space for planning. Going directly from the data analysis into the lead score builder will end up costing you more time. Instead, use a lead score planner to draft all of the logic statements and scoring.

 

2. A lead score that's understood is more likely to be adopted.

Building an effective lead score is a team sport. You might have one individual who takes ownership of the project and others who step up during certain phases, but ultimately the entire team needs to be involved in order for the project to be a success.

A great time to involve the broader team is after the lead score planner has first been completed. For the people who drafted the lead score, it's a chance to present findings and receive feedback on the logic statements. For stakeholders who haven't been as close to the project, it's a chance to better understand how & why leads will be scored a certain way.

Throughout this meeting, it's important to lean heavily on the data analysis. Those who are presenting their findings should start by explaining the steps that got them to this point. The attributes that have already been analyzed are not subject to opinion; this avoids regressing back to gut instinct. Valuable feedback during this meeting is about identifying gaps and reviewing the logic itself. When individual attributes are combined into a logic statement, does it make sense? Is the range of point values easy to understand? Is anything else missing? Building off that last question, if a new attribute is uncovered that hasn't been analyzed yet, simply take note and follow up with findings after the data has been analyzed.

After completing this step, all key stakeholders should understand the lead score well enough to give a high-level overview to someone else. The lasting impact is a scoring model that's better adopted, as everyone from the management team down to the individual reps will know how the lead score is working for them.

 

3. A lead score planner helps avoid common pitfalls.

In the story above, we saw an example of a team who skipped the planning step, only to realize missteps had been made and work had to be redone. We saw this happen twice, for two common reasons:

  • Arbitrary Score Values

and

  • Redundant Logic

The use of a lead score planner can help avoid both of these common pitfalls by shining a brighter light on the problems earlier on: it's easier to see the bigger picture when assigning point values, and to spot redundancies when all of the logic is displayed on a single worksheet.

After either of these pitfalls is identified, a lead score planner also makes resolving issues less painful. Remember, the planner is simply a worksheet, which means it's easy to take notes, save different versions, and ultimately make updates with a few keystrokes.

In summary, the use of a lead score planner will help your team save time, increase adoption through understanding, and resolve problems that are easier to miss and harder to fix within the actual score builder.

[RESOURCE] Here's a workbook we created that's free to download. On the third tab you'll see an example planner. Adjust as you see fit.

 

Common pitfalls to avoid

Let's take a moment to appreciate the progress made so far. You've now planned for success by answering the three starting questions, completed a comprehensive data analysis, and drafted all of the logic statements within a lead score planner.

With all of this work complete, you are ready to build an effective scoring model.

Thankfully, there are plenty of resources to assist you in this step: existing step-by-step articles, help documentation, and more.

Unfortunately, what most of these resources fail to mention are the mistakes teams make when building a lead score.

The purpose of this section is to highlight these common pitfalls. What we won't cover is how to build the lead score itself.

Wait, what?! I thought this was an article about how to build a lead score?

Building, meaning the actual point and click, is incredibly simple. Do this, do that, hit save. As mentioned above, there's already plenty of how-to resources available for your system of choice. Avoiding pitfalls is more difficult, and is arguably the only thing that matters during the building phase. Remember, the data has been gathered, attributes analyzed, and the logic is drafted. Now you just need to avoid mistakes as you enter everything into the system.

So without further ado, here are the top five pitfalls to avoid when building a lead score.

 

1. Too Many Logic Statements

Picture this scene. A team has just graduated from spreadsheets into their first CRM. As they explore their new system, it's like a honeymoon phase. Activity tracking that used to be manual is now automatic. Web behavior they would have never imagined tracking is now at their fingertips. Excitedly, the team decides to take advantage of all the CRM has to offer and build a lead score.

"Look at all of this amazing data," the team thinks, "we have to build a scoring model that aggregates all of these insights."

What nobody told this team is that just because you can score something doesn't mean you should. In fact, when it comes to building a lead score, less is often more.

How can too many logic statements become a problem?

In a word, dilution. When too many independent logic statements are included in a scoring model, the most powerful predictors become intermixed with average indicators. As a result, the larger scoring model regresses to the mean, and the output blends into something average. Even in a case where the team followed the five step data analysis, the output might still only be moderately effective at predicting positive outcomes if too many statements are included. Good, but not great.

The way to avoid this pitfall is to trust in the data analysis and the 80/20 rule: 80% of the effectiveness of the scoring model is going to come from 20% of the attributes analyzed. You only need a few key measures to build an effective lead score. Once you've found those powerful predictors of success, resist the temptation to add new statements just because you can.

 

2. Arbitrary Score Values

We touched on this pitfall during our story of the team who skipped the planning step. They began assigning point values at random, only to realize everything had to be adjusted afterwards. Using a lead score planner can help make this easier to manage, but it doesn't prevent the problem entirely. So, what exactly are 'arbitrary score values' and how can they be avoided?

Arbitrary score values occur when logic statements are given point values without reason. This undermines the effectiveness of the scoring model, as the strongest statements aren't given the proper weighting. The best way to avoid this pitfall is to approach the planning phase in the following order:

  1. Input all the logic statements into the planner, without assigning point values.
  2. Go back to the data analysis, and bring up all of the attribute index scores.
  3. Starting small, score each logic statement from lowest to highest based on the related attribute index scores.

This process helps your team do two things.

First, you are able to see the full picture before point values are assigned. This is important because you need to understand the relative weight of each individual statement as compared to all the others. For instance, how do you know if one statement should be a "2" when looking at it in isolation? The answer is, you don't. You need to see the broader range, and statements of equal, greater, and lesser value.

Second, you are able to use a data-driven approach when determining point values. The attribute index scores are your deciding factor for what's scored lowest vs highest. This takes gut instinct and guess work out of the equation.

Lastly, the reason we recommend starting small and going from lowest to highest is that it's an easier way to get started. Find the statement with the lowest or most moderate attribute index score. Let's say it was a score of 120. Make that statement a 1 and work your way up.

And yes, we did say 1. Not 5, not 10, not 100. I see you there ratcheting up that score value. Don't do it!

There's a natural temptation to go big with scoring, but as we'll explain in the next section it's best to keep the scoring output simple.
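
Putting the steps above together, here's a minimal Python sketch of that scoring pass. The statements and index scores are hypothetical; in practice you might also give equal points to statements with similar index scores:

```python
# Hypothetical logic statements with their attribute index scores.
statements = {
    "Job title contains 'Director'": 120,
    "Company size is 51-200": 180,
    "Submitted the demo request form": 300,
}

# Rank from the lowest index score to the highest, assigning small
# point values that start at 1 and work their way up.
ranked = sorted(statements.items(), key=lambda item: item[1])
points = {name: rank for rank, (name, _) in enumerate(ranked, start=1)}
for name, value in points.items():
    print(f"{value} point(s): {name}")
```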

 

3. Undefined Distribution

When it comes to operationalizing your lead score, an undefined distribution is a silent killer. You can make no other mistakes and have a beautifully built model, yet still have a hard time fully utilizing the lead score. Why is this? 

We'll answer this in a moment. First, let's define the undefined distribution.

Undefined distributions are scoring models that don't have an acknowledged minimum and maximum, nor any well-defined thresholds within the scoring range. In these cases, the model could still score quality leads correctly, allowing sales to sort views from high to low and prioritize effectively. But nobody can really say what values are low vs average vs high. Even if they can, definitions are vague and differ widely from person to person.

This lack of definition makes it difficult to integrate the score into lead distribution rules and more advanced outreach strategies. Additionally, an undefined range makes it hard for sales reps to understand how the logic works, and therefore harder to trust that the scoring actually is working. These operational and usability challenges can hold back an otherwise beautifully built scoring model.

Thankfully, we've prepared three simple steps you can follow to avoid this common pitfall.

i. Decide which leads should always be passed to sales. Usually these are leads who complete high-intent actions, like 'hand-raisers' filling out a demo request form. If applicable, also decide which leads should never be passed to sales. These might be people who identify themselves as non-buyers, like prospective job applicants filling out the contact us form.

ii. Next, define the scoring range. This is your minimum and maximum. Keep the entire range as small as possible, ideally between 1-10 or even 1-5. Many of us are hard-wired from our school days to think of 100 as the max, but if you can break this temptation and keep the range small, the point values will carry more significance. With time, the sales team might start referring to leads by their score (for instance, a new "10" reached out). This is great for adoption.

iii. Finally, determine the key thresholds within the distribution. In many cases, you might just have one threshold, where anything above a certain point value means a sales rep is assigned. While there are many factors to consider, the most important thing to keep in mind is the rules that were established in the first step. Make sure any leads who should always be assigned are above the threshold you choose.
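
As a quick sanity check, you can encode the decisions from these three steps and verify they agree with each other. A minimal sketch, with hypothetical statements and values:

```python
# Hypothetical range and threshold from steps ii and iii.
SCORE_MIN, SCORE_MAX = 1, 10
MQL_THRESHOLD = 7

# Statements from step i that should always put a lead above the
# threshold, with the points each contributes on its own.
always_pass = {"Submitted the demo request form": 8}

for statement, points in always_pass.items():
    assert SCORE_MIN <= points <= SCORE_MAX, f"'{statement}' is out of range"
    assert points >= MQL_THRESHOLD, f"'{statement}' would not reach sales"
print("All hand-raiser statements clear the MQL threshold.")
```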

By using the steps above to define your scoring distribution, you'll be better equipped to integrate the lead score into the team's process and maximize the impact it will have across the business.

 

4. Redundant Logic

Redundant logic occurs when two or more scoring statements unintentionally overlap. As a result, score values are higher or lower than expected. This creates problems for the defined scoring range and distribution rules you just worked so hard to establish.

In this section we'll go over some of the most common redundancies and explain how you can spot check your own scoring model to avoid this mistake.

As a quick note, let's put to the side some of the more obvious redundancies - those that can be easily spotted upon a second review, like mistyped statements or identical statements entered twice by accident. What we'll focus on instead are the harder-to-spot redundancies.

Here are some non-obvious scenarios to be aware of as you create your scoring model:

  • Overlap Across Attribute Types - To recap, the three attribute types are behavioral, demographic, and firmographic. In this scenario, a behavioral statement unintentionally overlaps with either a demographic or firmographic statement. The most common example is a form where key fields are populated upon submission. A team might have planned for a certain range based on the answers provided to required questions. Then a separate statement around the form submission itself is added, without realizing the activity and the answers are now being double-counted.
  • Single Property, Multiple Values - In this scenario, one property could contain multiple values and therefore be scored multiple times if each logic statement is entered independently. The multiple checkbox property is the primary culprit here, as a lead can have more than one value selected, as opposed to a drop-down where only a single value can be selected.
  • One-To-Many Associations - Redundancies can also occur if your database structure utilizes a one-to-many relationship between newly created contacts and another record type. If any scoring logic relates to the other record type, you'll need to watch out for scenarios where a single lead has two or more associated records. A common scenario here happens when automation creates a new deal when a lead submits a high-intent form. Since there can be "many" deals associated to one contact, you need to be cognizant of statements that reference information on the deal record as separate deals could trigger separate statements.

Now that you know some of the less obvious redundancies, let's discuss how you can effectively spot check your scoring model. In short, just remember: when in doubt, test it out. As with many things in revenue operations, it's best practice to run experiments before releasing an update. The same is true when building a lead score. Be sure to test the logic before sharing it with the team.
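
Here's a sketch of what such a test might look like for the single-property, multiple-values scenario. The rules and sample contact are hypothetical; the point is to score a known record and confirm the total matches your intent:

```python
# Each rule is (name, test, points); logic and names are hypothetical.
rules = [
    ("Industry includes manufacturing",
     lambda c: "manufacturing" in c["industries"], 2),
    ("Industry includes construction",
     lambda c: "construction" in c["industries"], 2),
]

# A multiple-checkbox property lets one contact match several rules.
contact = {"industries": ["manufacturing", "construction"]}

matched = [(name, pts) for name, test, pts in rules if test(contact)]
total = sum(pts for _, pts in matched)
print(matched, "->", total)  # both rules fire: 4 points, not the intended 2
```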

 

5. Not Effectively Tracking Results

Lead scores are dynamic, ever-changing models. As time goes by, they should evolve based on new learnings and adapt to changes in the environment. Teams who treat the lead score as a one-and-done project and fail to improve upon the initial model are making a huge mistake. Not only will some statements need maintenance, but top-performing teams know there is always room for improvement. Still, even well intentioned teams who want to optimize their lead score often have a hard time doing so. Why is this?

The reason teams struggle to make improvements is because from the beginning they haven't been effectively tracking results. Without proper tracking methods put in place, reporting on the lead score results can become a treacherous task.

If you are building your first lead score, the good news is that now is the perfect time to put effective tracking in place. For others who have an existing lead score, pay attention, as the measures you are using to analyze performance might be misleading you.

To start, let's talk about the underlying challenge that can corrupt an effective lead score analysis: self-fulfilling prophecies. Teams who don't have effective tracking measures in place can fall victim to two different types of self-fulfilling prophecies.

The first type is a self-fulfilling score, meaning that by the time a lead becomes a customer, the lead score has increased to a higher value. This can be misleading when you are analyzing lead score values among customers.

Let's use an example to illustrate this point.

  • A new lead, let's call her Linda, converts on a high-intent form. At the time, she's scored as a 7, and is assigned to a sales rep.
  • As Linda continues in the buying process, she completes activities on the website. For instance, reviewing terms & conditions.
  • These web behaviors cause her lead score to increase. By the time she's done, her score goes from 7 to 10, the highest value.

In this scenario, Linda ended up becoming a customer, and many others like her who started out with a moderate value ended up with a higher score by the time they had bought. The team analyzing the lead score results might look at people like Linda and conclude the lead score model is well calibrated. After all, the majority of customers are 9s and 10s. But in reality, Linda had been a 7 at the time she first became a qualified lead. This was the moment outreach was first prioritized, and when the lead score mattered the most.

Thankfully there's a relatively simple solution to this type of self-fulfilling prophecy. Create a secondary property and copy the initial lead score into that field. If you are using HubSpot, this is something you can do within the same workflow where sales is being assigned. First, create a new property, and then use the copy property value workflow action to preserve the original score value within the new field.

Now when you analyze score values later, the secondary property will pinpoint what the score value was when sales first started their outreach. If you see more and more 7s and 8s like Linda becoming customers, and fewer of the "real" 9s and 10s progressing past the lead stage, then there's an opportunity to improve the scoring logic. Your final steps are to figure out which scoring statements need to be refined and adjust so that the most valuable leads are given the highest scores.
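
With the secondary property in place, the later analysis is straightforward. A minimal pandas sketch, assuming a hypothetical export where initial_lead_score holds the preserved value and lead_score holds the current one:

```python
import pandas as pd

df = pd.read_csv("contacts_export.csv")  # hypothetical CRM export
customers = df[df["lifecycle_stage"] == "customer"]

# Compare the preserved initial score against the current score to
# see how much self-fulfilling inflation occurred after the handoff.
print(customers[["initial_lead_score", "lead_score"]].describe())
print(customers["initial_lead_score"].value_counts().sort_index())
```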

The second type of self-fulfilling prophecy is self-fulfilling results. In this scenario, teams are once again analyzing the performance of the lead score by looking at the full distribution of score values across all leads and customers. Self-fulfilling results skew the score values higher among customers, as leads with higher point values are the ones who get more immediate attention from sales. Conversely, if a new lead converts and doesn't possess a high enough value, they won't get the same personal attention and are therefore less likely to convert into a customer.

Unlike the first type of self-fulfilling prophecy, there's no quick fix to address this dynamic. What you should focus on instead is an effective rollout of the lead score, and building reports that analyze both the effectiveness of the lead score and your team's overall performance.

We'll cover both of these topics in the concluding sections, starting with how you can effectively bring your lead score to life.

 

How to successfully implement a lead score

With a pitfall-free scoring model built, you are now ready to share it with the team.

This can be a scary moment, as the project has largely been in your control up until this point. Now you are relying on others to actually adopt what you've built.

In this section, we'll guide you through three key steps to ensure a successful rollout.

 

Validate The Scoring Model

There's a magical moment when you hit save on a lead score for the first time. The gears start turning, and attributes begin calculating. Once complete, every record in the database has been assigned its proper point value.

And yet. It's untouched.

You have yet to reveal the lead score to the team at large, and for a brief moment in time the self-fulfilling prophecies we touched on above are yet to be a concern.

This is the perfect time to analyze the lead score and validate the model is working properly.

The goal of the analysis is to confirm higher points correlate to positive outcomes. The measures you use will depend on what outcome you are tracking towards. Most often, it will be converting more customers and/or generating more revenue.

Your method of analysis can also vary. Our preferred method is to use a scatter plot report, like what's shown below.

Lead Score vs Lifecycle Stage Distribution

This report was built inside of HubSpot. Both axes display the lead score, broken down by the default lifecycle stage property. The sizing is based on count of contacts, so we're able to see how many people fall into each lifecycle stage value at a glance.

In the example above the lead score is very well calibrated. People who are in the lower lifecycle stages like lead have lower values, and as we move towards more positive outcomes like opportunity and customer we see the values increase.
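
If you'd rather validate outside the CRM, a rough equivalent can be built from an export with pandas and matplotlib. A sketch, with hypothetical column names and stage values:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("contacts_export.csv")  # hypothetical CRM export

# Order lifecycle stages from earliest to latest.
stages = ["lead", "mql", "sql", "opportunity", "customer"]
df["stage_rank"] = df["lifecycle_stage"].map({s: i for i, s in enumerate(stages)})

# Bubble size = number of contacts at each (stage, score) combination.
counts = df.groupby(["stage_rank", "lead_score"]).size().reset_index(name="n")
plt.scatter(counts["stage_rank"], counts["lead_score"], s=counts["n"])
plt.xticks(range(len(stages)), stages)
plt.xlabel("Lifecycle stage")
plt.ylabel("Lead score")
plt.show()
```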

Imagine a scoring model built around statements like company size, job title, location, along with other custom properties. Even after completing an effective data analysis, there's still a certain degree of mystery around how everything comes together. It's a special moment when you see the output of all those attributes, and validate everything correlates with your desired outcome.

Hopefully this is the moment you experience, but if not, it's better to catch it now before the rollout begins.

 

Enable Early Adopters

An effective rollout strategy includes at least two distinct phases. Phase one is the initial rollout to early adopters, and phase two is the broader rollout to the team at large.

First, find your early adopters.

Depending on your team size, this could be a couple sales reps who you know to be influential amongst their peers and open to adopting new tools & tactics. If your organization has dozens or hundreds of sales reps, your early adopters could be a sales manager and their entire team.

Once you've found your early adopters, make sure they are set up for success.

Start by sharing the purpose of the lead score, and explain how the early adopters will play a key role in determining if the overall project is a success. Then schedule a meeting to formally kick off, and demonstrate how to use the lead score. Make sure everyone is ready to go by the end of the meeting.

Next, create a space for feedback. If you have Slack or Microsoft Teams, start a channel where the early adopters can chat amongst each other and have questions answered by a moderator.

Finally, end the early adopter period with another meeting. You'll want to schedule this meeting at the very beginning of the project, and let everyone know that they'll be responsible for bringing their thoughts and feedback to the meeting.

Once the initial rollout period has concluded, you'll ideally be left with three things:

1. Valuable feedback. 

2. Valuable insights.

3. Valuable allies.

Inevitably there will be questions you didn't anticipate, new ideas, and things that might be missing. Your early adopters can supply you all this valuable feedback, as they are actually using the lead score for the first time. Accept their feedback as a gift, and act on any improvements you can make before the next phase of the rollout.

The early adoption period also creates a unique opportunity to analyze the performance of the scoring model. During this time, you'll have an isolated group of people using the lead score alongside the rest of the team who aren't using it. This provides a solid baseline for analysis. You can see if the early adopters' performance improves, and to what degree they outperform the rest of the team during the same time period.

Finally, having an initial rollout period will leave you with valuable allies. Convincing everyone to adopt the lead score by yourself is difficult. You can explain the benefits of the lead score, showcase the results from the early rollout, and make it as easy as possible for everyone across the organization to adopt. But ultimately, you aren't a peer. Because of this, you don't carry the same credibility as one of their teammates. Recognize your early adopters are your best advocates, and give them opportunities to speak to the team as the rest of the rollout continues.

 

Operationalize The Lead Score

You've probably heard the saying:

  • If a tree falls in a forest and no one is around to hear it, does it make a sound?

Well, here's a new version you haven't heard:

  • If a lead score is built and not operationalized, does it make an impact?

 

An effective scoring model has the potential to significantly impact results. Whether this potential is realized or not comes down to how well the lead score has been operationalized.

Use the list of implementation ideas below to unlock the lead score's greatest potential.

 

A. Lead Volume

In the very beginning of this article, we talked about a challenge your team is facing. Sales doesn't have enough time to adequately manage lead volume.

In short, you have a quantity problem. Thankfully, the lead score is ready to act as your quality control.

Before we get into the specifics, let's zoom out and talk big picture. In the image below we see an example customer journey.

Growth Operations _ Lifecycle Stage Overview

When a CRM is effectively implemented, there are a few key fields accurately tracking the buyer's journey. In HubSpot, arguably the most important field is called Lifecycle Stage. This field is meant to track the end-to-end journey from lead to customer, and as the middle of the image above illustrates, it plays a central role in orchestrating internal processes.

Now let's zoom in on the starred area. Here we can see the transition from a lead to a marketing qualified lead (MQL). This is where the lead score acts as quality control. Every new lead enters at the top, but only those with a high lead score will become MQLs.

Turning this picture into a reality requires a system with automation. In HubSpot, that's the workflows tool. If you are a HubSpot admin, the workflow itself is pretty simple. You'll build a contact-based workflow where the enrollment trigger is having a score greater than a certain threshold. The primary action within the workflow is then to update the contact's lifecycle stage to marketing qualified lead (MQL).

If you are following along and have a workflow drafted, there's one important question you should be asking yourself.

"What's the score threshold I should use?"

To answer this question, you need to understand three things. First, you need to know your sales team's capacity for handling new leads. For example, if you have a sales team of five (5) reps and each can handle 40 new leads per day, that's 200 leads per day.

Next, you need to analyze your existing lead volume. How many leads are you generating on average in a given day or week?

And finally, you need to analyze existing volume based on lead score values. Let's say your lead score is on a scale of 0-10. At what threshold would you typically be passing 200 leads per day? If you saw total volume was 500 leads per day, but the combination of 7, 8, 9 and 10 would average around 200 leads per day, then you've found a good starting threshold of 7+ for the enrollment criteria.
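
Here's that arithmetic as a quick Python sketch, using hypothetical daily volumes by score value that mirror the example above:

```python
# Hypothetical average daily lead volume at each score value.
daily_volume = {10: 30, 9: 45, 8: 55, 7: 70, 6: 90, 5: 210}
capacity = 5 * 40  # five reps x 40 leads per day = 200

# Walk down from the top score until adding the next score band
# would exceed the team's daily capacity.
passed, threshold = 0, None
for score in sorted(daily_volume, reverse=True):
    if passed + daily_volume[score] > capacity:
        break
    passed += daily_volume[score]
    threshold = score
print(f"Starting enrollment threshold: {threshold}+ ({passed} leads/day)")
```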

 

B. Lead Distribution

With quality controls in place, the next item on the checklist is lead distribution. Lead distribution is all about where leads are routed. Are they immediately assigned to a seasoned sales rep? Does someone else reach out first to confirm their fit? Or is nobody assigned, and instead quality leads hit an inbox where whoever responds first can claim a new lead?

Any of these can be viable lead distribution models. What works best for you will depend on your size, structure, sales culture, and most importantly, what provides the best experience for your prospective customers.

That being said, the most popular model is for leads to be distributed to a named sales rep. The reason this is the most common is that there's more accountability from the start, and greater trust that the system is fair & equitable.

With a distribution model in mind, the next step is to think about how the lead score could factor into the mechanics.

Should the highest quality leads go directly to more seasoned account executives, and the rest be divided among an SDR/BDR team for further qualification?

As the question above suggests, the lead score can help determine both when to assign someone and who specifically to assign.
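
In code, such a routing rule might be as small as this sketch (the threshold and team names are illustrative, not a prescribed setup):

```python
def route_lead(score: int, ae_threshold: int = 9) -> str:
    """Send the highest scoring leads straight to account executives;
    everyone else goes to the SDR/BDR team for further qualification."""
    return "account_executive" if score >= ae_threshold else "sdr_team"

print(route_lead(10))  # account_executive
print(route_lead(6))   # sdr_team
```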

 

C. Lead Alerts

There's a saying in sales: time kills all deals. This is especially true when a prospect first converts into a new lead. New leads are likely evaluating you alongside other vendors, and in competitive head-to-head situations the first to reply will often win the business.

When every second counts, you need an alert system that will instantly let the team know when a high quality lead shows interest.

Thankfully, there's no shortage of alert options if you are using HubSpot. In fact, your team might even want to turn off certain notifications. For instance, two alerts will trigger simultaneously if an owner is assigned to a lead at the same moment a task is created.

In addition to what alerts your team receives, there's also flexibility in where alerts are sent. Do they want a pop-up notification on their computer? Emails in their inbox that they can action later? Real-time push notifications sent to their phone via the mobile app?

The good news is you don't have to choose for everyone; each rep can customize alerts to their own liking. The main thing you should care about is whether the alerts are enabling fast response times. If a rep is underperforming on their speed-to-lead metric, part of your coaching playbook can be to talk about their alerts. If you discover they've customized their alerts to only show in their email inbox, but they don't check their email inbox throughout most of the day, it's likely not an effective system.

 

D. Lead Outreach

Lead outreach is all about how the sales team contacts new leads.

  • How should the sales team get in touch with quality leads? Phone call? Email?
  • How soon should they reach out? How often should they follow-up?
  • How many times before marking the lead as unresponsive?

Another way to maximize the lead score's impact is to vary lead outreach based on score.

Let's unpack what this might look like using an example scenario with Acme Inc.

Acme Inc is a software company selling 3D Printing Technology to widget manufacturers.

They have a team of 10 sales reps. On average they generate 2,000 inbound leads per day.

The majority of these inbound leads are converting on top-of-the-funnel offers.

However, 200 leads are actually starting a free 14-day trial of the Acme Inc software.

A poor outreach strategy would be to contact all of these new leads the same way.

But the Acme Inc sales team is smarter than that.

They know that the 1,800 top-of-the-funnel leads are better left with marketing, so they can focus more on the higher intent sign-ups. Only when they have time to spare are they reaching out to leads who have yet to activate a trial.

During the free trial sign-up, leads are required to answer key qualifying questions.

These answers inform a scoring model.

A decent outreach strategy would be to prioritize the trial sign-ups, yet reach out to everyone in the same way.

But the Acme Inc sales team is smarter than that.

They know to reach out to the highest quality sign-ups first, using the lead score to sort new leads from high to low.

Moreover, the Acme Inc sales team has established different outreach processes based on predetermined score thresholds.

For example, leads with the highest scores are called immediately. If there's no answer, a voicemail is left in their inbox.

A good outreach strategy stops here, prioritizing the highest quality leads at the moment of initial contact only.

But the Acme Inc sales team is smarter than that.

Not only is more attention given from the start; sales reps also focus more attention on the higher quality leads all throughout their free trial period.

Sales leadership at Acme Inc is also smart enough to know that it's unrealistic to ask the sales team to give this type of personal attention to every new sign-up.

A free implementation call for every user, for instance. Only for the highest quality leads is this the expectation.

This is what a great outreach strategy looks like.

 

Underpinning Acme Inc's example is the reality that there's only so much time in a given day. An effective outreach strategy maximizes the impact of that time by allocating more effort towards higher quality leads and less effort to lower quality leads.

One final note. Less does not mean none. There is a point of diminishing returns with high quality leads, and such a thing as too much outreach. For example, rather than trying a 5th outbound call to make contact with a high quality lead, it might be a better use of time to place the 1st outbound call with a lower quality lead.

In summary, a well implemented lead score is woven into an effective outreach strategy. Consider your current approach, and use the lead score to help your sales team optimize their outreach efforts.

 

E. Lead Management

The final item on the checklist is to incorporate the lead score into your team's lead management system. Before we talk about how to add in the lead score, let's start by defining lead management and outline what a strong system looks like.

Lead management is how your sales team stays organized with the leads they've reached out to, need to follow up with, and so on.

If you aren't sure what the team's lead management system looks like, ask. For instance, how does a rep keep track of all the people they've talked to, yet need to follow up with again since the prospect hasn't made a decision on whether to move forward or not?

Vague answers or no answers to questions like these indicate there's not a set system in place. If this rings true, use the lead score rollout as an opportunity to introduce a set of standards for how to manage leads; those standards are what the lead management system will be built to support.

So, what does a strong lead management system look like?

To start, let's revisit the image below.

Growth Operations _ Lifecycle Stage Overview

We introduced this in section A) above, focusing on the starred area where the lead score determines which leads are quality leads.

As you define or refine your lead management system, focus on everything your sales team is doing after receiving a lead in the CRM.

  • Where does the sales team find the new leads they need to reach out to?
  • How are they contacting new leads? Is outreach always logged inside the CRM?
  • When they connect with a lead, how are they tracking whether the lead is qualified or unqualified?
  • If someone is qualified, what do they do when it's clear there's a potential sale?
  • How are they managing active deals within their pipeline?

Your lead management system should make the answers to questions like these clear and obvious.

If this is your first time implementing a lead management system, or you are starting anew, the most important thing to remember is that complexity is the enemy. Keep the system simple and easy to use. Centralize wherever possible.

The image above illustrates this in action. During the early parts of the sales process, there's a single field that answers the question a manager might ask someone on their team: "how's it going with that prospect?"

That field is called lead status. Default values include "new" for someone who hasn't been contacted, "attempting to contact" for those where outreach has been attempted, and towards the end, values like "qualified" and "unqualified" would ultimately determine if a prospect should move to the next phase of the sales process or not.

Think of it as a mini-funnel within the MQL/SQL lifecycle stages. Every MQL enters as new. Only the sales qualified leads exit. A similar concept then repeats for the deal stage field, as the qualified leads who exit the MQL/SQL funnel then enter into the deal pipeline.

These two fields alone can serve as the backbone to a highly effective lead management process. What makes it highly effective is a) that you've adapted each field for your own business context, b) that you have training in place so everyone knows what each field value means, and c) that you are leveraging automation where appropriate to eliminate the need for manual updates - thereby mitigating subjectivity and human error.

Each of these points warrants further discussion, but that's for another post on another day.

For now, let's return to the topic at hand - how to implement the lead score into your lead management system.

At every phase in the sales process, your team will have a combination of views and tasks helping guide their next action. Make sure the lead score is visible in all of these places. For instance:

  • When a new MQL converts and a task is created, add a token to display the lead score within the task title and/or task body.
  • Within views, introduce a lead score column towards the left so it's easy to see and sort upon.
  • On board views (i.e. a deal board), include the lead score as one of the properties that's displayed.
  • On an individual record, add the lead score into the left sidebar and/or middle column areas.

These changes might seem small, but in aggregate the influence on adoption is big. Simply put, you can't use what you can't see.

 

Phew! We covered a ton of ground in this section. Before we move on, let's do a quick review of the main points.

  1. First, validate the scoring model. It's new and untouched. The perfect time to verify the score's efficacy.
  2. Second, enable your early adopters. You need to learn from them, and you'll need them as allies as the rollout continues.
  3. And lastly, operationalize the lead score. Use the score to optimize everything from assignment to outreach and beyond.

 

Ongoing optimization

Congratulations! You've made it to the final section. Give yourself a pat on the back for making it this far.

A ton of time and effort has been spent building an effective lead score that the team is actually utilizing. 

The goal of this final section is to make sure all that hard work doesn't go to waste.

In this concluding section, you'll learn how to both maintain and optimize your lead score on an ongoing basis. Because a lead score is not a static, one-and-done project. It's a living, breathing entity that needs to be nurtured.

To stay organized, we've grouped everything into three frequency-based subsections.

 

1. As-Needed Maintenance

As-needed maintenance covers all of the unplanned, one-off tasks that arise as a result of changes in the environment.

Said differently, your lead score logic statements can become outdated if the subject of the statement is modified. You need to be ready to react to these shifts and adjust impacted scoring statements.

So, what exactly are these environment changes? Let's review some common examples.

  • The subject of a statement is no longer in use. If you score based on job title and that field is removed, you are going to have some outdated statements on your hands that need to be adjusted. If something new is added instead, you might want to create new statements; for instance, a job title text entry field is replaced on website forms with a new job level drop-down field. For behavioral statements, a modification could be an asset that's no longer being used, e.g. a page that's been archived. If there's a new version of an asset, you'd need to update the related scoring statements.
  • Field values are added, edited, or removed. This type of environment change is less obvious, and often more common than an entire field being modified. That combination means you need to be especially mindful of these shifts. To illustrate an example, let's say a persona-mapped question was asked on a form. "What best describes you?" with values like "I'm a HubSpot admin", "I'm a HubSpot solutions partner", etc. If a new audience emerges and another value is added - "I'm a HubSpot user" - you'd need to rethink your existing statements that don't take this new value into consideration.
  • The data source updating a field is changed. Said differently, how you gather answers to a relevant question changes in a material way. Let's focus first on that last part - a material change. If a demo request form asked a certain question (e.g. what products are you interested in), and a cloned version of that form is then used on another demo request page, that's nothing noteworthy. However, if that question had never been asked on a lead capture form, and is suddenly added to something like a webinar registration, that's a different story. Why? Your scoring model could rightfully have assumed that anyone answering the products-of-interest question was filling out a high-intent form (i.e. a demo request), and therefore scored those leads high enough to always be seen by sales. If a flood of webinar sign-ups comes in answering that same question, you could be misallocating the sales team's time on leads that aren't as sales-ready.

These updates might seem minor and insignificant, especially when you have dozens of scoring statements and a change only impacts one specific statement. But make no mistake, even a single outdated statement can compromise the integrity of the entire scoring model. Completing as-needed maintenance is critical to keeping your lead score operating effectively.
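To make these audits less manual, you could keep a simple inventory of which field or asset each statement depends on. Below is a minimal Python sketch of that idea; the statement registry and the list of active subjects are illustrative placeholders, not anything HubSpot produces out of the box:

```python
# Hypothetical sketch: flag scoring statements whose subject is no longer in use.
# Both dictionaries are placeholders; substitute whatever inventory your CRM
# or forms tool can export.

# Each scoring statement mapped to the field or asset it depends on
scoring_statements = {
    "Job title contains 'VP'": "job_title",
    "Job level is 'Director+'": "job_level",
    "Visited pricing page": "page_pricing",
}

# Fields and assets currently live in your CRM / website
active_subjects = {"job_level", "page_pricing", "industry"}

for statement, subject in scoring_statements.items():
    if subject not in active_subjects:
        print(f"OUTDATED: '{statement}' references '{subject}', which is no longer in use")
```

Running a check like this whenever forms or fields change turns a memory exercise into a thirty-second task.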

 

2. Quarterly Reviews

At least once per quarter, you should proactively be asking yourself the following questions:

  • Is our lead score well calibrated? Are higher scores correlating to better outcomes?
  • How many leads were passed to sales recently? Were volumes too high or too low?
  • What commonalities are we seeing for the highly rated leads that didn't move forward?

Below, we'll show you how to build reports in HubSpot that answer all of these questions.

 

Is our lead score well calibrated? 

This first question should sound familiar. That's because we asked this earlier at the start of the lead score implementation section.

The report view below validated the model's effectiveness when the lead score was first built.

We can also use this report on an ongoing basis to validate that higher lead scores are correlated with better outcomes.

Report Settings

Report Visual

Lead Score Distribution

Pro Tip: Create two versions of the report above. One version uses the score itself as the axis measure. The other uses the secondary property we covered earlier. As a reminder, the secondary property preserves whatever the score value was at the time a lead converted. The cloned report is therefore a more accurate indication of the score's predictive power at the moment outreach was first prioritized.
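If you'd like to sanity-check calibration outside of HubSpot as well, here's a minimal pandas sketch. It assumes a CSV export of contacts with a lead_score column and a became_customer flag; the file name, column names, and score bands are all placeholders to adapt to your own data:

```python
import pandas as pd

# Assumed CSV export of contacts with their score and whether they
# ultimately became a customer. Rename columns to match your own export.
df = pd.read_csv("contacts_export.csv")  # columns: lead_score, became_customer (0/1)

# Bucket scores into bands and compare conversion rates across bands
df["score_band"] = pd.cut(
    df["lead_score"],
    bins=[-1, 20, 40, 60, 80, 100],
    labels=["0-20", "21-40", "41-60", "61-80", "81-100"],
)
calibration = df.groupby("score_band", observed=True)["became_customer"].agg(["count", "mean"])
calibration.columns = ["leads", "conversion_rate"]
print(calibration)

# A well-calibrated model shows conversion_rate rising with each band
if not calibration["conversion_rate"].is_monotonic_increasing:
    print("Warning: higher score bands aren't converting better. Time to recalibrate.")
```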

 

How many leads were passed to sales recently?

Let's zoom out for a moment. Why did we decide to build a lead score in the first place? What problem were we trying to solve?

Sales does not have enough time to adequately manage lead volume.

This starting statement validated the lead score's purpose: helping sales manage lead volume and optimizing their available time.

It is only logical, then, that lead volumes continue to be monitored. Otherwise, we run the risk of passing too few or too many leads to sales on an ongoing basis, and straying unknowingly from the score's primary purpose.

Thankfully the report setup here is pretty straightforward. If you followed along through the lead score operationalization steps, then you'll already have a workflow updating a contact's lifecycle stage when leads are passed to sales.

In HubSpot's standard definition, this is the Marketing Qualified Lead (MQL) stage. In the images below, we use this default value to show what the report settings and report visual should look like.

Report Settings

Report Visual

Pro Tip: Add a goal to the report so you can easily spot time periods where volumes were above or below the ideal quantity.
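For teams that prefer to monitor volumes programmatically, here's a minimal pandas sketch of the same idea. The export file, column name, and monthly target are assumptions; swap in your own values:

```python
import pandas as pd

# Assumed export of contacts with the date they entered the MQL stage.
# The column name and monthly target below are illustrative placeholders.
df = pd.read_csv("mqls_export.csv", parse_dates=["became_mql_date"])

# Count MQLs passed to sales per month
monthly = df.set_index("became_mql_date").resample("MS").size().rename("mqls_passed")

TARGET, TOLERANCE = 200, 0.2  # e.g. 200 MQLs/month, +/- 20% acceptable band
for month, volume in monthly.items():
    if volume > TARGET * (1 + TOLERANCE):
        flag = "ABOVE target band: consider raising the score threshold"
    elif volume < TARGET * (1 - TOLERANCE):
        flag = "BELOW target band: consider lowering the score threshold"
    else:
        flag = "within target band"
    print(f"{month:%Y-%m}: {volume} MQLs ({flag})")
```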

 

What commonalities are we seeing?

To answer this last question, we first need to take a step back and isolate the individual attributes that make up the scoring logic. This means deconstructing the scoring model back down to its foundational elements. The goal is then to analyze each core attribute to understand whether it's working as intended.

To illustrate this, let's say some of the key underlying attributes within a scoring model are as follows:

  • Number of employees
  • Industry
  • Persona
  • Country

Let's take the first attribute, number of employees, as an example. We'll say the original data analysis revealed significantly higher attribute index scores for the larger employee size ranges, meaning leads at larger companies were more likely to convert.

But that data analysis drew on quarters' or even years' worth of historical data.

What has the data looked like recently within the present quarter?

Is the underlying assumption still true - that leads at larger companies convert at a higher rate into paying customers?

The goal of the report below is to answer this exact question.

Within the report, the lead status property is being used to understand which leads progressed beyond their initial conversations with sales, and more importantly, which did not (i.e. are marked unqualified). The X-axis then displays the specific attribute being analyzed, which in this case is the contact's associated company size.

What this extreme example reveals is a massive departure from what had historically indicated a high-quality lead. Recently, every single lead at a company with more than 10,000 employees has failed to progress past their initial conversations with sales. Whereas historically those were leads of the highest quality.

What's changed?

This is where you'd dig in further. Talk to the sales team. See if they've observed a recent trend. Analyze any additional data the team might be tagging, like a disqualification reason, to aggregate additional insights.

The point is, you were able to spot a lead score attribute that's no longer working as intended, then dig in and course correct.

Report Settings

Report Visual
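Here's a minimal pandas sketch of the same breakdown, assuming a CSV export of this quarter's leads with lead_status and employee_range columns (the file name, column names, and status value are placeholders):

```python
import pandas as pd

# Assumed export of this quarter's leads with their lead status and the
# attribute under review (associated company size). Names are placeholders.
df = pd.read_csv("leads_this_quarter.csv")  # columns: lead_status, employee_range

# Share of leads marked unqualified within each employee-size range
breakdown = (
    df.assign(unqualified=df["lead_status"].eq("Unqualified"))
      .groupby("employee_range")["unqualified"]
      .agg(leads="count", unqualified_rate="mean")
      .sort_values("unqualified_rate", ascending=False)
)
# A 100% unqualified rate for the '10,000+' range would mirror the
# extreme example described above
print(breakdown)
```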

Pro Tip: Use a dashboard to group together all of the quarterly reports outlined above. Then create a recurring email to send this dashboard to key stakeholders at the end of every quarter. Now you have a built-in reminder with everything you need to perform the quarterly review.

 

3. Annual Analysis

The final component of an effective ongoing optimization plan is the annual analysis. During the analysis, the scoring model's attributes, logic statements, and point values are all validated, with adjustments made to ensure everything is as finely tuned as possible.

The best part?

You already know how to complete the annual analysis.

It starts with the same five steps we outlined earlier on how to complete an effective data analysis.

The difference is you are now pulling in a refreshed data set with information from the past year.

As you compare the previous data analysis to the current one, look for departures between attribute index scores. Meaning, if some scores were moderate and are now high, that's noteworthy. If some were high and are now moderate or low, that's noteworthy too.
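If you want to automate that comparison, here's a minimal pandas sketch. It assumes an index convention where 100 equals the overall conversion rate (adjust to match however the index was defined in your original analysis), and the file and column names are placeholders:

```python
import pandas as pd

# Hypothetical year-over-year comparison of attribute index scores,
# assuming index 100 = the overall conversion rate.
def index_scores(df, attribute):
    overall_rate = df["became_customer"].mean()
    segment_rates = df.groupby(attribute)["became_customer"].mean()
    return (segment_rates / overall_rate * 100).round()

last_year = pd.read_csv("leads_last_year.csv")  # columns: employee_range, became_customer (0/1)
this_year = pd.read_csv("leads_this_year.csv")

comparison = pd.DataFrame({
    "previous_index": index_scores(last_year, "employee_range"),
    "current_index": index_scores(this_year, "employee_range"),
})
comparison["departure"] = comparison["current_index"] - comparison["previous_index"]

# The largest departures (in either direction) are the statements worth revisiting
print(comparison.sort_values("departure", key=abs, ascending=False))
```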

Next, open up the latest version of your lead score planner, and save a new version for this year's analysis. Find any statements within the planner where a notable change has occurred. Adjust point values accordingly, while ensuring there is still a defined distribution.

With all of the planned edits ready to go, the last step is to enter the updates into the scoring model. Click save, and that's it!

One final note. If you are worried about the ongoing time commitment here, don't be. On average, the entire annual analysis only takes a few hours, because the departures from one year to the next are often relatively minor. In these cases, updates to the scoring model are more akin to tightening loose bolts, as opposed to rebuilding entire parts of the machine.

 

And with that, we've now covered everything from how to get off to a successful start through maintaining & optimizing your lead score on an ongoing basis.

As we conclude, let's take inventory of everything you accomplished.

Because you didn't just build a lead score. You took the time to do it right.

You made sure:

  • It's actually going to make an impact
  • It's actually informed by data
  • It's actually been validated
  • It's actually being used by the team
  • It's actually maintained & optimized

It's a lead score that actually works.
