I read an interesting exchange on LinkedIn recently.
The post asked how sellers engage with prospects who don’t yet know they have a problem, or who don’t yet understand the problem or how they want to solve it.
The sales folks who answered leaned toward the view that their time was valuable, and they weren’t inclined to waste it on prospects who aren’t ready to buy.
Which, as a marketer, I found disappointing—and yes, a bit maddening—until I stopped to think about it.
Shouldn’t actively engaging and showing interest in solving the problem be the core qualification for a marketing qualified lead (MQL)? And arguably for a marketing qualified account (MQA)?
And, before you start yelling that you “have” to send everyone to sales, I’ve heard this antiquated notion from marketers at a variety of companies. Usually, these are sales-driven companies that tolerate marketing as a fluffy cost center but don’t trust marketers with prospects beyond the form fill.
Don’t get me started on how ridiculous that is, or how much revenue they’re sacrificing with those blinders on. Especially if the attitude those sales folks shared above holds true for the majority.
So, let’s look at a few reasons why sales reps shun marketing leads and what (most of) you can do about it.
Most Marketing Qualified Leads Are Viewed as Imposters by Sales
Here are three reasons that, when compounded, make the case.
Lack of Agreement
When Gartner asked recently whether sales and marketing teams had a common lead definition, only 49% said they did. The analyst firm had expected that between 60% and 85% of organizations would have this common definition. This is 2021, after all.
This gap highlights the problem. If there’s no agreement about what constitutes a lead, there’s no alignment and no acceptance criteria. Sales therefore has every right to push back and call marketing leads imposters, because they don’t fit whatever definition the sales team is using.
Arbitrary Scoring
Because there’s no common lead definition and a shaky ICP, if any, no matter how lead scoring is implemented, it’s going to be substandard.
Here’s why. Lead scoring combines firmographic and demographic fit with behavioral scoring. Firmographics identify right-fit companies: industry, employee size, revenue, and geography. Demographics score the person based on title and role.
Behavior is a bit trickier. In most lead scoring schemas, you award higher points for form fills, downloads, and event registration and attendance, and lower points for web page views, frequency, and recency. Product pages can fall somewhere in the middle. If a lead submits a demo request, that’s an immediate trigger to MQL and a push to sales.
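To make that concrete, here’s a minimal sketch of the kind of points-based schema I’m describing. The attribute names, point values, and the 100-point threshold are illustrative assumptions, not the defaults of any particular marketing automation platform.

```python
# A minimal sketch of the points-based schema described above. The attribute
# names, point values, and the 100-point threshold are illustrative
# assumptions, not any platform's defaults.

FIT_POINTS = {
    "industry_match": 15,        # firmographics: right industry
    "employee_range_match": 10,
    "revenue_range_match": 10,
    "geo_match": 5,
    "title_match": 20,           # demographics: buying-role title
}

BEHAVIOR_POINTS = {
    "form_fill": 25,
    "content_download": 20,
    "event_registration": 15,
    "event_attendance": 25,
    "product_page_view": 10,     # product pages land in the middle
    "web_page_view": 2,
}

MQL_THRESHOLD = 100

def score_lead(fit_attributes, activities):
    """Return (score, is_mql) given a lead's fit attributes and activity history."""
    score = sum(FIT_POINTS.get(attr, 0) for attr in fit_attributes)
    score += sum(BEHAVIOR_POINTS.get(act, 0) for act in activities)
    # A demo request is an immediate MQL trigger regardless of score.
    is_mql = "demo_request" in activities or score >= MQL_THRESHOLD
    return score, is_mql

# Example: enough activity alone can clear the bar, with zero firmographic fit.
print(score_lead([], ["form_fill", "content_download", "event_attendance",
                      "product_page_view", "web_page_view"] * 2))  # (164, True)
```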
The issue with this kind of scoring is that it’s based mostly on activity. It doesn’t depend on topic, or on viewing related topical pages that build a problem-to-solution storyline and would indicate the level of interest.
Heck, if I view five web pages in three days and download a paper, I’d be an MQL at a lot of websites and get routed to sales. I know this from experience, given the amount of research I do; it happens often. Although I can’t figure out how I fit the firmographic score or match an ICP for most of the company websites I visit. Then again, they probably haven’t agreed on what constitutes a lead.
Refusal to Engage
I’m sure you know where I’m going here. People who score up to the threshold automatically become MQLs and are routed to sales. Sales ops assigns them to a rep, and the rep reaches out to…crickets. Much of the time the outreach is lukewarm, because the rep perceives the lead as a waste of time. This lackluster outreach and an ineffective score combine to reinforce the notion that MQLs are not qualified.
Put Some Substance Behind Your MQLs
B2B buyers consume at least 13 content assets during their buying process. I’ve seen it as high as 17. How much of a share of that information consumption do you have? If buyer engagement isn’t telling you enough to understand their intent and where they are in the process, perhaps it’s time to re-evaluate how a lead becomes a marketing qualified lead.
Consider adjusting your scoring for the following factors (a rough sketch of how they might be weighted follows the list):
- Persona-related content engagement
  - Are they viewing the content designed for the role they play in the buying decision?
  - What stage is the content designed to address, or what question does it answer?
  - Note that this assumes you’ve got a taxonomy or tagging structure that’s well maintained. Most companies do not.
- Problem-to-solution storylines
  - Did they read 3 out of 5 pieces that make up a story?
  - Did they read them in the same website session?
  - Or did they engage over time via your nurturing program?
- Pre-sales cycle content
  - Have they accessed pre-sales content that answers their change management questions and helps them decide to fix the problem?
- Increased dwell time requirements
  - 30 seconds is what I usually see, yet most content takes longer than 30 seconds to read. Are they only skimming?
  - Are they staying long enough to read the content on the page? On any page? Which ones?
- Source of sessions
  - Are they only visiting your website when prompted by an email you send?
  - Do they only arrive on your site via search or a targeted display or social ad?
  - Or do they come to your site as direct traffic, on purpose?
- Downloads and events (webinars)
  - For a download, did they click the link and download the resource when you sent it?
  - For a webinar, did they attend or watch it on demand? Or only register?
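Here’s the rough sketch promised above. Every tag, weight, and threshold in it, including the 60-second dwell minimum, is a hypothetical placeholder; what matters is the shape of the logic: topic, storyline completion, dwell time, and session source all adjust the score, rather than raw activity counts alone.

```python
# A rough sketch of how the factors above might adjust a behavioral score.
# Every tag, weight, and threshold is a hypothetical placeholder.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class ContentEvent:
    url: str
    persona_tag: str | None     # requires a maintained content taxonomy
    storyline_id: str | None    # which problem-to-solution story it belongs to
    stage: str | None           # e.g. "problem", "solution", "pre-sales"
    dwell_seconds: int
    source: str                 # "direct", "email", "search", "paid", ...

def adjusted_behavior_score(events: list[ContentEvent]) -> int:
    score = 0
    storyline_counts: dict[str, int] = {}
    for e in events:
        # Only count a view if they stayed long enough to plausibly read it.
        if e.dwell_seconds < 60:            # assumed minimum, vs. the usual 30s
            continue
        score += 5 if e.persona_tag else 2  # persona-matched content counts more
        if e.stage == "pre-sales":
            score += 10                     # change-management content
        if e.source == "direct":
            score += 5                      # came to the site on purpose
        if e.storyline_id:
            storyline_counts[e.storyline_id] = storyline_counts.get(e.storyline_id, 0) + 1
    # Storyline completion: reward reading 3+ pieces of the same story,
    # whether in one session or over time via nurture.
    score += sum(15 for n in storyline_counts.values() if n >= 3)
    return score
```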
There’s a reason intent data platforms, like 6Sense, Bombora, and now ZoomInfo, are gaining in popularity. It’s helpful to know whether an account is searching for information related to the problem you solve, your category, and your competitors. It’s also helpful to know whether it’s one person or more than one, which indicates account-level interest.
The trick is tuning the keywords you monitor tightly enough that they’re truly indicative of buying intent. It’s also important to understand how the algorithms are weighted: the more engagement accounts have with you (via branded keywords), your website, and your content, the higher the intent.
Combining intent data with your marketing automation platform (MAP) data is imperative to get the most useful picture. Does your lead score show the same growth trend as the contact’s intent score? Or the account score? Are they in the same sphere?
What I mean is that if the “lead” is reading your blog content but not looking at your product, solution, or Why Us? pages, they’re likely just interested in the information. Intent scores are also weighted by a scoring schema that scores company pages, product pages, and designated campaigns higher than other types of content.
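As a sketch of what “the same trend” could look like in practice, something like the check below is the idea, assuming you can export periodic score snapshots from both systems. The weekly cadence and the growth threshold are illustrative assumptions, not anything 6Sense, Bombora, or ZoomInfo prescribes.

```python
# A simple version of the "same trend" check, assuming periodic score
# snapshots pulled from both the MAP and the intent platform. The cadence
# and growth threshold are illustrative assumptions only.

def trends_align(lead_scores: list, intent_scores: list, min_growth: int = 10) -> bool:
    """True if both score series grew by at least min_growth over the window."""
    if len(lead_scores) < 2 or len(intent_scores) < 2:
        return False
    lead_growth = lead_scores[-1] - lead_scores[0]
    intent_growth = intent_scores[-1] - intent_scores[0]
    return lead_growth >= min_growth and intent_growth >= min_growth

# Example: weekly snapshots for one contact and their account.
print(trends_align([20, 35, 60, 90], [12, 30, 45, 70]))  # True: both climbing
```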
Give Sales a Reason to Engage
If your sales team is skeptical of the marketing qualified leads you send them, or is outright shunning them, consider giving them evidence and a reason to care about engaging.
What triggered the handoff? Simply crossing a score threshold isn’t a great incentive. What did the lead do that indicates they’re ready for sales engagement? If it wasn’t a demo request or a request to speak with a sales rep, you need to give sales a reason to spend their time on outreach.
The problem here is that automation has removed the human element from much of this workflow. And that’s a shame. Data can tell you what someone did, but not why they did it.
However, if you’ve orchestrated your content to guide buyers from the start and created content experiences that build attention, engagement, and intent, then you have the evidence to provide the reasoning behind the status change.
Instead of routing MQLs to sales with no explanation, insert a review step where you can add the information that sales reps need to take the next step. Not only will this help with lead reception, but you’ll gain a whole lot of insight about what’s working with your content and digital buying experiences…and what needs improvement.