Future Proof Your Time Series Data
If you record a time series of the things that happened, you’ll get better analysis


Time series measure change "from x to y by when": a value captured at the beginning of a time range and again at the end. The continuous line created by connecting these points shows movement over time.

You see graphs like this in sales charts that track performance, in marketing metrics that count leads, and in customer success graphs that count the number of active tickets for important customers. You might also see comparison or snapshot charts that compare period-over-period data for metrics, as in a week-over-week or month-over-month comparison.

Here’s the problem. When you see a change in an important metric, it’s often very hard to see why it changed and what you might have been thinking at the time.

But there’s a key problem in trying to analyze most metrics in a typical system: you don’t easily see the history of past values. And even when you do see that history, you don’t have the proper context for what you were thinking at the time.

Turning Data into a Data Layer

Ok, so you have a point-in-time problem. When you look at the metric today you don’t have a good view of how it changed. If you are lucky enough to have a time series of your data you can at least see how things have moved through time, but you don’t know what else was changing simultaneously. One way of expanding your view from a single point in time (the current view of your metrics) to a continuous stream of time is to create a metadata layer around your data.
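One minimal way to sketch that idea in code (the table and metric names here are illustrative, not a prescribed schema) is to append timestamped snapshots of a metric instead of overwriting its current value:

```python
import sqlite3
from datetime import datetime, timezone

# Append-only snapshots: the current value of a metric is never overwritten,
# so the history stays queryable as a time series.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metric_snapshots (
        metric_name TEXT NOT NULL,
        value       REAL NOT NULL,
        captured_at TEXT NOT NULL   -- ISO-8601 UTC timestamp
    )
""")

def record_snapshot(conn, metric_name, value):
    """Record the value of a metric as a point-in-time snapshot."""
    conn.execute(
        "INSERT INTO metric_snapshots VALUES (?, ?, ?)",
        (metric_name, value, datetime.now(timezone.utc).isoformat()),
    )

record_snapshot(conn, "open_opportunities", 42)
record_snapshot(conn, "open_opportunities", 45)

# Read the metric back as a series instead of a single current number.
rows = conn.execute(
    "SELECT value FROM metric_snapshots WHERE metric_name = ?"
    " ORDER BY captured_at, rowid",
    ("open_opportunities",),
).fetchall()
print([v for (v,) in rows])  # → [42.0, 45.0]
```

The same append-only pattern works whether the store is a spreadsheet tab, a warehouse table, or a dedicated time series database.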

It sounds fancy, but it boils down to a few truths about your data:

  1. What is the structure of the data? What are the fields that make up an “opportunity”, for example, and how do you store them?
  2. How does that structure change over time? Is there an internal decision that records the meaning of a stage name, or that changes your metrics when those definitions change?
  3. What is that data layer related to? What entity does this represent and what other items does it reference? In the example of an opportunity, we’re tracking a sale, but we’re also referencing the company in the sales cycle, the contacts at that company, and any requests happening during that same cycle.
  4. When the data changes, what do we know and what gets notified? Every metric goes up and down. How do you know which changes are good and which need to trigger an alert or an alarm? When that data changes, how do other systems get notified?
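The four questions above can be sketched as a tiny metadata layer. This is a hypothetical illustration in Python (none of these class or field names come from a real schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Opportunity:
    # 1. Structure: the fields that make up an "opportunity"
    opportunity_id: str
    stage: str
    amount: float
    # 3. Relations: the other entities this record references
    company_id: str
    contact_ids: list = field(default_factory=list)

@dataclass
class StageDefinition:
    # 2. Structure over time: record when the *meaning* of a stage changed,
    # so old metrics can be read with the definition in force at the time.
    stage_name: str
    definition: str
    effective_from: datetime

class ChangeLog:
    # 4. When the data changes, record what we know and notify other systems.
    def __init__(self):
        self.entries = []
        self.subscribers = []  # callables invoked on every recorded change

    def record(self, entity_id, old_value, new_value):
        entry = {
            "entity_id": entity_id,
            "old": old_value,
            "new": new_value,
            "at": datetime.now(timezone.utc),
        }
        self.entries.append(entry)
        for notify in self.subscribers:
            notify(entry)

# A downstream system subscribes to changes, e.g. to raise alerts.
log = ChangeLog()
alerts = []
log.subscribers.append(lambda e: alerts.append(e["entity_id"]))
log.record("opp-123", old_value="Discovery", new_value="Negotiation")
print(alerts)  # → ['opp-123']
```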

What does a data layer mean to you? I believe it’s an essential idea to match the data you’re looking at now to the meaning of that data in your organization. By defining items in a data layer, you are clarifying what matters to your business.

Andy Mowat, CEO of Gated, does a great job of explaining the concept of using a data layer to learn more about the core metrics you see in a system and also to enable you to gain context about past metrics.

Listen from 06:30 to 07:45 in the clip below to hear Andy’s explanation:

What do you put in your Data Layer?

Now, the inevitable question should be in your head: what data makes sense to put in your data layer? The answer can’t be nothing and probably shouldn’t be everything.

Here’s a heuristic for thinking about what to include:

  1. The actual data. Duh. What do you want to track on an ongoing basis, and at what interval? This one’s pretty easy.
  2. Some way to link notes or observations with a series of data. This means having a common way to reference a timestamp and a piece of data, so that you can later line up the notes you make with a series of data in the same time frame or time grain.
  3. Descriptive metadata about each object - think of this as a “version” of an entity – indicating the time when things were added, modified, or changed. This is crucial for playing the part of data detective and investigating the view of past data given the lens of the current state of that data.
  4. A concept of thresholds for important metrics, along with versioning for those thresholds. What is “good” now may be “great” or “horrible” in the future, so it’s important to know not only that our measurement changed but also when we made a decision about it and why.
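Items 2 and 4 in particular lend themselves to a small sketch: notes keyed by timestamp so they can be lined up with a metric series, and thresholds that carry their own history. (The dates, notes, and function names below are hypothetical.)

```python
from bisect import bisect_right
from datetime import datetime

# Observations keyed by timestamp, so they can later be aligned
# with any metric series covering the same window.
notes = [
    (datetime(2024, 3, 1), "Changed lead-scoring model"),
    (datetime(2024, 4, 15), "Raised the 'good' threshold after a pricing update"),
]

# Versioned thresholds: (effective_from, value that counts as "good").
thresholds = [
    (datetime(2024, 1, 1), 100),
    (datetime(2024, 4, 15), 150),
]

def threshold_at(when):
    """Return the threshold that was in effect at a given moment."""
    idx = bisect_right([t for t, _ in thresholds], when) - 1
    return thresholds[idx][1]

def notes_between(start, end):
    """Return the observations recorded inside a metric's time window."""
    return [text for t, text in notes if start <= t <= end]

print(threshold_at(datetime(2024, 2, 1)))  # → 100
print(threshold_at(datetime(2024, 5, 1)))  # → 150
print(notes_between(datetime(2024, 3, 1), datetime(2024, 4, 30)))
```

Because both the notes and the threshold history share the metric’s timeline, a later reader can ask not just “what was the value?” but “what did we consider good at the time, and what were we doing about it?”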

The data layer, then, viewed holistically, is a way to line up the data in your business with the data about your business so that you can assess change over time. It makes the data layer itself a perfect way to future-proof your time series data by providing a link to know what happened, when, and why.

What’s the takeaway? The solution to future-proofing your time series data is to create a way to track observations about that data over time. By having these wayposts (think of them as guides on a map), you’ll be able to know where you’re going and where you’ve been, and measuring how long each change takes will give you a sense of whether your progress is expected or not.

Sean Byrnes

General Partner, Near Horizon

1y

Most analytics tools support annotations these days, which function like that in a way. The problem with a feed of "interesting decisions" is that I'm not sure it would be possible to identify all the interesting decisions a priori, you have to look back to see what ended up being interesting. For example, sometimes a minor feature change introduced a bug which tanked engagement.

Andreas Drakos

Unlocking GTM Excellence Through the Power of RevOps

1y

What an interesting and great idea there Greg Meyer: a time series for your critical decisions. Already trying to imagine combinations and impact! Definitely saving this and putting it in our backlog

Noam Cohen

I'll tame your AI, so it plays nice in production | Founder @ Revrod

1y

Meaningful business events are typically ignored in analytics. Quote from design partner - "There is always someone in the org, that remembers all releases/changes to add the right context when looking at time series data". Great opportunity that we are actually prioritizing :)

Tracy G.

Higher Education professional with expertise in program management, marketing strategy, operations, and budget management.

1y

This same notion applies to understanding recruiting, event attendance, and application numbers in higher education. Oftentimes a decision is made but not recorded, and the context for why the data is what it is isn't known. The story is disjointed. All that to say, I totally get the problem and would love an integrated solution! Even more important for when there is turnover in an organization.

Aaron Howerton

Co-Founder @ Partner Foundations, a Native Salesforce App for Partner Management | Partner Operations & Partner Experience Leader | Podcast Host | Home Remodel Junkie

1y

One of the big selling points of SFDC is 'dynamic reporting' - one of the huge drawbacks is that people didn't (and maybe still don't) understand that 'dynamic reporting' inherently disqualifies you from 'historical reporting' and 'trending.' The result? Admins have to build snapshot reports and workflow to create the historical trending. This includes:

  - Custom date stamps on all data they want to track trending against
  - Custom workflows to stamp those dates
  - Custom workflows to capture the snapshot data against time/event triggers
  - Custom object to store the snapshot records (for every record you want to trend)
  - Custom report type to be able to report on your fancy new custom object
  - Custom reports and dashboards to visualize the data

Depending on your licensing level (edition), you are also consuming precious capacity for these items within your org. Larger companies are absolutely shifting to data lakes and outside reporting to meet these needs, but small to midsize companies often lack the resources to get this done efficiently and keep it maintained.
