The “sinking boat” approach to RevOps data quality

With all that has changed since 2002, one trend we see in the RevOps world is a return to fundamentals, with focus shifting from shiny new tools back to data quality and data management. This is not surprising for several reasons:

  • CxOs now demand solid ROI, and many GTM technologies have not lived up to their promises, partly because they don’t have access to good enough data for their features to work as designed. It’s the proverbial garbage-in, garbage-out problem.
  • Every team is now trying to be more efficient, getting more done with fewer resources. This means brute-force GTM efforts with money and people are no longer an option. Efficient operations require good decision making and resource allocation, and neither is possible with poor data quality.
  • AI killer apps for GTM don’t exist yet, but there is great anticipation that some will emerge in 2025, so CxOs are mandating that Ops teams get ready. Since AI is the ultimate data-driven solution, many organizations are now investing in data quality to prepare for it.

Ops professionals with data quality chops are already in short supply because there is only one way to learn data quality expertise: by doing. RevOps teams without experienced data management personnel often struggle with how to approach this renewed focus on data fundamentals. Nothing looks more daunting than a few million records of bad data that seem to have a life of their own.

Openprise has been in the RevOps data quality business for over a decade, and we have seen the good (very, very few), the bad (mostly), and the ugly (more than we would like). Along the way, we have developed data management best practices and have advised our customers on them. At the highest level, we advocate a “sinking boat” model for addressing RevOps data quality challenges without getting overwhelmed and paralyzed. The sinking boat model recommends a three-phase approach to dealing with most RevOps problems, but especially data quality problems. The metaphor is obvious: if you have a sinking boat with water coming in, how do you get yourself out of the jam? The model has three phases, in the following order:

  1. Plug the leak in the boat
  2. Bail water out of the boat
  3. Fix and improve the boat

First, plug the leak

Strangely, when most people are faced with a sinking boat, their first instinct is to start bailing water out of it. The problem is, you often can’t bail fast enough to keep the situation from getting worse. So before you jump to bailing, investigate where the leak is and plug it, or as we advise our customers, stop the bleeding first.

Your ugly data quality problem didn’t happen overnight. It’s the result of years of neglect and bad data coming into your databases. To plug the leak, you need to find the sources where the bad data is pouring in. Some of the most common ones we have seen are list loading and your existing data quality solutions.

There is one fine point worth highlighting: many people confuse plugging the leak with fixing the boat. Plugging the leak is about finding the quickest way to drastically slow the problem’s progression. The resolution may well be temporary and unable to scale, but it buys you time to bail the water and fix the problem properly.

1. List loading

GTM data—from your content syndication vendor, your field events, your partners, or your data vendors—is usually loaded without quality checks or improvements, by a variety of people with direct access to the CRM or marketing automation platform (MAP). List loading is often the largest leak because of the sheer volume and frequency of data that comes in through this channel. The fastest way to plug it is to limit who can load lists and require that small team, usually RevOps, to follow a data quality process: in other words, clean, dedupe, and enrich the list data before it is imported. Yes, this means more work for the RevOps team, but it stops a constant stream of bad data from entering your databases.
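
Even a very small pre-import script can enforce the clean-dedupe-enrich step when list loading is centralized. Below is a minimal sketch in Python, assuming a CSV export from an event vendor; the file name, field names, and rules are illustrative examples, not a prescription of any particular tool’s process.

```python
import csv

# Hypothetical file and field names; the rules are deliberately simple.
REQUIRED = ("email", "first_name", "last_name", "company")

def clean_list(rows):
    seen = set()
    cleaned = []
    for row in rows:
        if any(not (row.get(f) or "").strip() for f in REQUIRED):
            continue                      # drop rows missing required fields
        email = row["email"].strip().lower()
        if "@" not in email:
            continue                      # drop rows with no usable email
        if email in seen:
            continue                      # dedupe within the list itself
        seen.add(email)
        row["email"] = email
        # normalize the fields you care about before they ever reach the CRM/MAP
        row["country"] = (row.get("country") or "").strip().upper()
        cleaned.append(row)
    return cleaned

with open("event_list.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

ready = clean_list(rows)
print(f"{len(ready)} of {len(rows)} rows passed the pre-import checks")
```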

2. Your existing data quality solutions

This is counterintuitive but true. Some data quality solutions you have can actually cause more harm than good. Here are three common sources we see:

  • Dupe blocker and CRM data validation rules

Dupe blocker and data validation rules in your CRM are supposed to catch bad data and prevent it from being created in the first place, but they don’t work well because of human nature. When a salesperson is blocked from entering the data he wants, he has two choices: spend the time and effort to figure out what the problem is and do it right, or just find a way around the blocker. Which option do you think your average salesperson will pick? To stop the bleeding, remove these point solutions and let the data come in so you can fix it afterwards. Don’t create perverse incentives for users to intentionally enter bad data.

  • Picklists on marketing forms

Picklists on marketing forms for fields like state and country are intended to improve quality at the point of data collection. The problem is that the person filling out the form has no incentive to provide accurate data beyond her email address. Everything else depends on her willingness to share her data and spend time doing so, and the quickest way to fill out a form full of picklists is to pick the first item on each list. Picklists encourage people to be lazy and give you bad data. To stop the bleeding, make these fields type-in. It’s actually more work for people to intentionally make up bad data, and with browser autofill so prevalent, you’re more likely to get better data and create less work for the form filler by not using picklists.

  • ABM and other RevOps solutions

ABM and other RevOps solutions that help you find new prospects or scrape data from sources like calendars, email, and web pages often have simplistic or nonexistent matching capabilities, so they consistently add duplicate records to your CRM. It’s impractical to shut these solutions off, so to stop the bleeding, mark the records they create so they can be quarantined and flagged for review and remediation.

Second, bail water

Once you have plugged the major leaks and slowed the rate of sinking, it’s time to bail water so you are in a better position to execute the permanent fixes. In the data quality context, there are quick and simple things you can do that will drastically improve the quality of life for your GTM and analytics teams, and get your CxOs to stop saying they don’t trust the data. Pursuing perfection here will sink so much time and so many resources that you will never get to the last phase and implement scalable, permanent solutions. Here is some typical low-hanging fruit that we recommend:

1. Standardize state, country, phone, and lead source fields

Simply standardizing these fields not only improves the appearance of data quality for all users, it actually drastically reduces the work involved in building reports and dashboards and analyzing data. Nothing flags bad data quality more than showing a report to management that has separate data bars for “United States”, “US”, and “U.S.A.”
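
To show how mechanical this first pass can be, here is a minimal Python sketch of country standardization; the alias table is a tiny made-up sample, and the same pattern applies to state, phone-format, and lead source values.

```python
# A minimal sketch of country standardization. The alias table is a small
# illustrative sample; a real mapping would cover every variant you see.
COUNTRY_ALIASES = {
    "us": "United States",
    "u.s.": "United States",
    "usa": "United States",
    "u.s.a.": "United States",
    "united states": "United States",
    "uk": "United Kingdom",
    "u.k.": "United Kingdom",
    "great britain": "United Kingdom",
}

def standardize_country(raw: str) -> str:
    key = (raw or "").strip().lower()
    return COUNTRY_ALIASES.get(key, (raw or "").strip())

for value in ["US", "U.S.A.", "united states", "Canada"]:
    print(f"{value!r} -> {standardize_country(value)!r}")
# 'US' -> 'United States'
# 'U.S.A.' -> 'United States'
# 'united states' -> 'United States'
# 'Canada' -> 'Canada'   (unknown values pass through for review)
```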

2. Flag and fix common phone and email problems

Two of the data fields the sales team cares about most are phone numbers and emails, and these are often bad or outdated. You can quickly fix or remove them by correcting email typos; identifying fake, personal, and disposable emails; confirming deliverability with a validation service; and flagging bad or invalid phone numbers, such as numbers that have not been assigned by the telecom carriers or that don’t fit the phone number format of any known country.
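
As a concrete illustration, here is a hedged Python sketch of the kinds of checks involved; the typo and domain lists are made-up samples, and the phone check is only a plausibility test, not a substitute for a carrier lookup or a validation service.

```python
import re

# Illustrative rules only: the typo, personal, and disposable domain lists are
# tiny samples, and a real workflow would also call a deliverability
# validation service (not shown here).
DOMAIN_TYPOS = {"gmial.com": "gmail.com", "gamil.com": "gmail.com", "yaho.com": "yahoo.com"}
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}

def check_email(email: str) -> dict:
    local, _, domain = email.strip().lower().partition("@")
    domain = DOMAIN_TYPOS.get(domain, domain)          # fix common typos
    fixed = f"{local}@{domain}"
    return {
        "email": fixed,
        "looks_valid": bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", fixed)),
        "personal": domain in PERSONAL_DOMAINS,
        "disposable": domain in DISPOSABLE_DOMAINS,
    }

def plausible_phone(raw: str) -> bool:
    digits = re.sub(r"\D", "", raw or "")
    return 7 <= len(digits) <= 15                      # E.164 caps numbers at 15 digits

print(check_email("Jane.Doe@gmial.com"))   # typo fixed, flagged as personal
print(plausible_phone("+1 415 555 0100"))  # True
print(plausible_phone("12345"))            # False
```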

3. Enrich missing data

If you have a lot of missing information in your database, even a one-time enrichment exercise can drastically improve your data quality.

4. Dedupe both records and fields

Duplicate records for the same person or account are a common data quality problem. Most of them are trivial to identify and merge, while others can be trickier. Focus on identifying and removing the trivial ones, which are typically 80% of your duplicate record problems. Duplicate field problems don’t get as much attention but are equally disruptive. Duplicate fields include having multiple phone numbers, emails, and addresses within the same record, often from different enrichment sources. It’s not uncommon to see customers with 5 to 10 phone numbers on each CRM record. Taking these numbers down not only makes the sales team’s job easier, but also helps you discover additional sources of data quality challenges.
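
For the trivial duplicates, a simple deterministic key often goes a long way. Here is a minimal Python sketch assuming a normalized email address is the match key and “most recently updated wins” is the survivorship rule; both are assumptions for the example, not the only sensible choices.

```python
from collections import defaultdict

# A minimal sketch of catching the "trivial 80%" of duplicates: group records
# by a normalized email key and keep the most recently updated one. The record
# shape and survivorship rule are illustrative assumptions.
records = [
    {"id": 1, "email": "Jane.Doe@Acme.com ", "updated": "2024-01-10"},
    {"id": 2, "email": "jane.doe@acme.com",  "updated": "2024-06-02"},
    {"id": 3, "email": "raj@globex.com",     "updated": "2024-03-15"},
]

groups = defaultdict(list)
for rec in records:
    groups[rec["email"].strip().lower()].append(rec)

winners, to_merge = [], []
for recs in groups.values():
    recs.sort(key=lambda r: r["updated"], reverse=True)   # newest first
    winners.append(recs[0])                                # surviving record
    to_merge.extend(recs[1:])                              # merge these into the winner

print("keep:", [r["id"] for r in winners])    # keep: [2, 3]
print("merge:", [r["id"] for r in to_merge])  # merge: [1]
```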

Third, fix and improve

Once you’ve slowed the pace of degradation and temporarily improved the overall quality of your data, you will finally have the time to properly design and implement permanent, scalable solutions. This will not only further improve your data quality, but also reduce cost and improve the ROI of every GTM investment. Here are some key points to consider.

1. Implement a “data firewall”

In order to prevent bad and irrelevant data from polluting your CRM and MAP, consider implementing data firewall solutions (sketched briefly after this list) to:

  • Improve the raw data coming in from different sources. Clean and enrich it before it is introduced into your systems.
  • Hold back raw data that is not needed in your CRM and MAP and archive it separately for potential needs later. For example, you don’t need all 400 data fields from each of your data enrichment vendors.
  • Reduce the number of data entry points and channel them through your data firewall solutions to maintain control and scalability.
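
The exact architecture will depend on your stack, but the core idea can be sketched in a few lines of Python; the field list and record shape below are illustrative assumptions, not a specific product’s configuration.

```python
# An illustrative "data firewall" pass: archive the full raw payload, forward
# only the fields the CRM/MAP actually needs. Field names are assumptions.
CRM_FIELDS = {"email", "first_name", "last_name", "company", "country", "lead_source"}

def firewall(raw_record: dict, archive: list) -> dict:
    archive.append(dict(raw_record))          # keep the full payload for potential later needs
    slim = {k: str(v).strip() for k, v in raw_record.items() if k in CRM_FIELDS}
    # cleaning and enrichment hooks (country standardization, email checks, ...)
    # would run here before the record moves on to the CRM/MAP
    return slim

archive_store = []
incoming = {"email": "raj@globex.com", "company": "Globex", "country": "USA",
            "firmographic_score": 87, "intent_topic_42": 0.61}   # stand-in for a 400-field payload
print(firewall(incoming, archive_store))
# {'email': 'raj@globex.com', 'company': 'Globex', 'country': 'USA'}
```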

2. Enable self-service and delegated administration

It’s hard to scale data quality programs if the RevOps team is doing all the work. To scale the program, consider enabling self-service and delegated administration so the work can be pushed as close as possible to the users or admins who own the problem. For example, rather than having marketing users submit files for the Ops team to load, create a self-service list loading solution with a centralized, automated process that ensures the data is onboarded properly. Similarly, create review and approval applications so business users or regional ops teams can approve new account creation, review duplicates and select winning records, or customize account hierarchies.

3. Remediate quickly

Fast remediation is often better than prevention. Prevention solutions tend to work only if you have ideal human behavior, which is rare in practice. While it’s theoretically better to enforce data quality as far upstream as possible, it’s often more effective to let the bad data come in, quarantine it, then clean it promptly with automation. A corollary is to use as few technologies as you can to enable data quality: the more technologies involved, the more likely you are to create inconsistency and manageability issues with too many moving pieces.
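
As a trivial illustration of the “let it in, quarantine it, clean it fast” pattern, here is a short Python sketch; the quarantine structure, issue labels, and fix rules are all assumptions made for the example.

```python
# A minimal quarantine-then-remediate loop. The issue labels and fix rules are
# illustrative assumptions; the point is that known issues get fixed
# automatically and fast, and only the rest waits for a human.
quarantined = [
    {"id": 101, "email": "jane.doe@gmial.com", "issue": "email_typo"},
    {"id": 102, "email": "info@mailinator.com", "issue": "disposable_email"},
]

KNOWN_TYPOS = {"gmial.com": "gmail.com"}

def remediate(record: dict) -> dict:
    if record["issue"] == "email_typo":
        local, _, domain = record["email"].partition("@")
        record["email"] = f"{local}@{KNOWN_TYPOS.get(domain, domain)}"
        record["status"] = "auto_fixed"
    else:
        record["status"] = "needs_review"      # the hard cases go to a human queue
    return record

for rec in quarantined:
    print(remediate(rec))
```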

4. Follow the 80/20 rule

We often tell customers RevOps teams must follow the 80/20 rule religiously or they will not be able to keep up with the pace of change the business requires. This is especially true when it comes to data quality because:

  • It’s easy to automate 80% of the common scenarios. Handle the other 20% of edge cases with human processes, because that’s where all the complexity resides that would make your automation too complex to scale and manage.
  • It’s easy to clean and standardize 80% of the data quickly. The last 20% requires so much time and effort to get right that by the time you’re done, another 20% of the data has already gone bad. 100% perfect data is an unattainable illusion in practice.
  • It’s easy to get your stakeholders to agree on 80% of the business requirements. That last 20% will drive you up the wall. Own and deliver on the 80%, and let the stakeholders handle their “holy cow” requirements.

5. Set proper expectations

Setting proper expectations is just as important as delivering results, and it goes hand-in-hand with the 80/20 rule. Clearly communicate to the business users:

  • what level of data quality you can deliver and what it means to the business.
  • the benefit of 80% results now vs. 100% results that will never get done.
  • that they are expected to be part of the solution and you’re looking to make their involvement as painless as possible.

Are you ready to fix a sinking boat?

The only way to pick up data quality and data management skill sets is by learning by doing. While the task may seem daunting and technical, it becomes much more manageable if you follow the simple sinking boat model we have outlined here. We would love to hear your feedback.

Phillip Swan

I help CEOs reimagine businesses delivering billion-dollar ROI with the power of AI | "the GTM Unleashed guy" | Built for scale

8 months ago

Great article, Ed! I continue to witness firsthand how many companies still need help with RevOps data quality and alignment, hampering their ability to measure revenue performance accurately. Taking a pragmatic "sinking boat" approach of prioritizing the most critical data gaps while also measuring ROI to justify further investments can help advance RevOps maturity. Focusing on standardizing data and improving cross-functional collaboration is key to realizing the full potential of RevOps.

Andrew Smith

Operations Consultant | Analytics, Automation, AI & Growth | Salesforce Certified Consultant | Make Automation Certified Consultant

8 months ago

Great article thanks Ed King. Very practical and honest advice and approach.

Ruby Raley

Strategic Sales Leader who GETS Marketing | Growing Revenue | Executive Member @ Pavilion

8 months ago

Ed King excellent advice for today and tomorrow. Because if we don't fix the data quality issue, the new AI engine purchased to help sales will produce erroneous and inconsistent results at speed
