Ahh, the “Demand Waterfall.” Otherwise known as “The Funnel,” this framework introduced by the research and advisory firm SiriusDecisions defines a shared view between marketing and sales of the lead management process. Though many of us may have our own versions of the funnel defined within our organizations, here’s the official and most updated version of the SiriusDecisions Demand Waterfall.
Image used with permission from SiriusDecisions, The Demand Waterfall®, Rearchitected
The context for commonly-used terms in B2B like “marketing qualified leads” (MQLs) and “sales qualified leads” (SQLs) is found within this framework. Performance of marketing and sales teams is typically measured by KPIs aligned along the funnel, with target/quota metrics that are calculated (or oftentimes assumed!) by working backward from the bottom of the funnel (i.e., closed won customers) all the way to the top (i.e., inquiries or prospects). Many demand generation marketers will then plan campaigns and programs meant to drive conversions throughout the funnel with the aim of meeting or exceeding these performance goals.
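That backward calculation is simple arithmetic: divide the downstream target by each stage’s conversion rate to get the volume needed upstream. Here’s a minimal sketch; the stage names and conversion rates are hypothetical illustrations, not benchmarks.

```python
def required_at_each_stage(target_wins, conversion_rates):
    """Walk the funnel bottom-up: divide the downstream target by each
    stage's conversion rate to get the volume needed at that stage."""
    needed = target_wins
    plan = {"closed_won": target_wins}
    for stage, rate in conversion_rates:
        needed = needed / rate
        plan[stage] = round(needed)
    return plan

# Hypothetical stage-to-stage conversion rates, ordered bottom-up.
rates = [
    ("sqls", 0.25),       # 25% of SQLs become closed-won deals
    ("mqls", 0.20),       # 20% of MQLs become SQLs
    ("inquiries", 0.10),  # 10% of inquiries become MQLs
]

plan = required_at_each_stage(50, rates)
# A 50-win target implies 200 SQLs, 1,000 MQLs, and 10,000 inquiries.
```

Notice how quickly the required volume compounds at the top of the funnel, which is exactly why a miss on any single conversion rate sends marketers scrambling for more leads.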
However, as the saying goes, “the best-laid plans of mice and men often go awry.” According to studies by SiriusDecisions, about 98% of MQLs never result in closed business. If this is the case, and if we have a certain number of wins we need to acquire at the bottom of the funnel as our target, then the easiest thing for a marketer to do is pour more into the top of the funnel. More leads! I want more leads! Said no salesperson ever. Sales wants ready-to-close deals with a neatly tied bow on top. If they can’t get that, then the next best thing is more opportunities.
[Tweet "98% of MQLs never result in closed business."]
However, with the growth of marketing technology, we (the marketers) have gotten better and better at creating larger and larger volumes of leads. The problem is, of course, that not all leads convert, and there are numerous reasons why. But what inadvertently happens is that in order for marketing to meet their MQL targets, they generate more and more leads. They throw them over the wall to sales in the hopes that, if conversion rates stay stable, then more will flow through the middle and bottom of the funnel.
In reality, this torrent of leads contains a high proportion of crappy leads, with some great-fit leads sprinkled in here and there and no reliable way to identify them. So, it’s not uncommon to hear sales tell marketing that the leads they’re passing over are crap. As a result, sales stops following up on all of the leads, and marketing begins blaming sales for not calling on them. It’s a vicious cycle that breeds mistrust between the two teams, which leads to the stereotypical misalignment between marketing and sales. Junior sales or business development reps (SDRs and BDRs) are hired as a stopgap to crank out a high volume of prospecting calls with the goal of finding the good leads to pass on to their account executives.
The fact of the matter is that there are still a lot more wasted calls on poor leads than on the good ones. So as a way to prioritize leads for follow-up, the concept of lead scoring was born. According to Marketo’s Definitive Guide to Lead Scoring, lead scoring is a methodology that ranks leads in order to determine their sales readiness.
With greater adoption of marketing automation, the built-in lead scoring functionality in these tools is used as an attempt to predict which leads have a higher propensity to buy or likelihood of converting to the next buying stage. I say “attempt” because while lead scoring in today’s marketing automation platforms has provided tremendous benefits, it also has some serious limitations. Setting up a lead scoring system that works (i.e., is accurate) takes A LOT of time and effort. If you’ve ever taken part in building a lead scoring model, you’ll know that to be true. Why is lead scoring so hard to get right? I think there are a number of reasons:
Not Enough Data Is Available For Accurate Scoring
The first reason is that most of us don’t have enough data on which to build an accurate scoring model. Much of the data is simply captured from form fills on a website or landing page. Best practices tell us that the shorter the form, the better the on-page conversion. However, the less data you collect, the less you know about your prospect. And that’s assuming the prospect isn’t entering incorrect info, either purposefully (e.g., “email@example.com”) or unwittingly (“fat-fingering” it while typing on a mobile device).
Traditional Scoring Models Are Often Based On Guesswork
Secondly, we don’t know if the data points we're using to score are actually the right ones. We follow general rules of thumb, such as scoring visits to our pricing page or treating long on-page dwell times as a qualifier. Thus, we say to ourselves, “let’s assign 50 points to any leads that engaged in these activities.” This is essentially basing our scoring on gut feeling (aka guessing). The problem is that we don't know if these actions are truly predictive or how much weight they should carry.
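To make the guesswork concrete, here’s a minimal sketch of what traditional rule-based scoring looks like under the hood. The activities, point values, and MQL threshold are hypothetical “gut feel” rules of exactly the kind described above:

```python
# Hypothetical rule-based point assignments (gut-feel weights).
RULES = {
    "visited_pricing_page": 50,
    "downloaded_whitepaper": 20,
    "opened_email": 5,
}

MQL_THRESHOLD = 60  # hypothetical cutoff for "marketing qualified"

def score_lead(activities):
    """Sum the points for each scored activity the lead performed."""
    return sum(RULES.get(a, 0) for a in activities)

def is_mql(activities):
    return score_lead(activities) >= MQL_THRESHOLD

score = score_lead(["visited_pricing_page", "downloaded_whitepaper"])
```

The fragility is plain to see: nothing in the model tells you whether 50 points for a pricing-page visit is right, or whether that visit predicts anything at all.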
Irrelevant Data Points Used In Scoring Will Give Inaccurate Results
Thirdly (and this is related to the second reason above), our scoring may be based on false correlations because we performed what I call “eyeballing-analysis.” What this refers to is basically the fallacy of placing emphasis on specific data points because we see them as common data points across our customers, which may not be great indicators. For example, let's say we noticed in our data that four out of the last five customers who purchased our CRM software had red hair. So, we decide to score any new prospects with red hair really high. We even decide to get this critical info by asking what hair color they have on our landing page forms. This is obviously a very silly example, but many lead scoring models are based on this type of shoot-from-the-hip analysis.
Predictive Lead Scoring - Restoring Sales’ Trust In Marketing
So while lead scoring as a concept is meant to be predictive in nature, the reality is that traditional lead scoring models aren't very accurate predictors of sales readiness. This is because of the challenges inherent in doing surface-level analysis on the small pool of data we build our lead scoring upon.
The recent increase in popularity of predictive marketing vendors (including the company that I work with) is proof that traditional lead scoring methodologies aren’t cutting it. What’s missing are all the data points describing your ideal customer profile that aren’t found in your CRM or marketing automation platform. And what’s needed is a systematic and scientific way to identify which of these data points truly matter in determining your ideal customer profile.
Many predictive lead scoring systems scour the web for potential buying signals that can be found about your customers. These can include:
- Looking at source code on their websites.
- Gathering information found on job boards and press releases about their company.
- Knowing what kind of technology stack they use.
- Understanding how much they spend on PPC ads.
- And perhaps thousands of other data points from all over the web.
Then, all of this data is fed into predictive models to uncover the data points that actually matter for your business and have the highest correlation across your customers. These commonly-shared data points or traits then give you a good, full picture of your ideal customer profile. You can therefore match your leads up against this profile - the closer the match, the higher the predictive lead score, which means the lead looks very similar to customers who purchased from you before.
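The profile-matching idea can be sketched in a few lines. This is a toy illustration only - real predictive vendors use far richer data and statistical models - and the trait names are hypothetical. It finds traits shared across most past customers, then scores a lead by how closely it matches that profile:

```python
from collections import Counter

def ideal_profile(customers, min_share=0.8):
    """Keep traits present in at least `min_share` of past customers."""
    counts = Counter(trait for cust in customers for trait in cust)
    cutoff = min_share * len(customers)
    return {trait for trait, n in counts.items() if n >= cutoff}

def predictive_score(lead_traits, profile):
    """Score 0-100: fraction of ideal-profile traits this lead exhibits."""
    if not profile:
        return 0
    return round(100 * len(lead_traits & profile) / len(profile))

# Hypothetical traits gathered about past customers from around the web.
customers = [
    {"uses_saas_crm", "runs_ppc_ads", "hiring_sales_reps"},
    {"uses_saas_crm", "runs_ppc_ads"},
    {"uses_saas_crm", "runs_ppc_ads", "has_blog"},
]

profile = ideal_profile(customers)  # traits shared by >=80% of customers
score = predictive_score({"uses_saas_crm", "runs_ppc_ads"}, profile)
```

Note how the rare traits (“hiring_sales_reps,” “has_blog”) drop out of the profile automatically - the data, not gut feel, decides which traits matter, which is the key difference from the red-hair example earlier.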
This is obviously a very simplified explanation of how predictive lead scoring works, but the end benefit should be clear - lead scoring that truly identifies the best-fit prospects most likely to purchase or convert. To learn more about predictive lead scoring, check out our on-demand webinar titled “Demystifying Predictive Lead Scoring.”