Data team productivity, all about metrics layer, and more
Picture by Pawel Czerwinski on Unsplash


Welcome to this week's edition of the Metadata Weekly newsletter.

Every week I bring you my recommended reads and share my (meta) thoughts on everything metadata! If you’re new here, subscribe to the newsletter and get the latest from the world of metadata and the modern data stack.

Spotlight: Data Team Productivity

Last week, in the Analytics Engineering Roundup, Tristan talked about the value of data work inside organizations, touching on the importance of measuring both spend on the modern data stack and the data team’s time.

“Are you good at measuring and attributing your spend across the modern data stack? No? Very few of us are...

I’m going to say something that some of you may not want to hear: I think that there is too little scrutiny of the value of data teams today. Six years ago I had to fight like hell to convince some startups that they should hire any full-time data people at all; today I run into data people who feel like they don’t totally understand how their work maps to the priorities of the business, whose time is not prioritized well. I talk to folks who have started to think of their outputs as ticket completion or model creation, not as business problems solved. Others spend major chunks of their time blocked and waiting on upstream dependencies.

These are talented data professionals who care about their work. But they’re working inside of organizations that don’t consider this inefficiency to be a big problem.

You should want your company to care tremendously about the value you create.”

I echo Tristan’s thoughts that most data teams are not as productive as they could be, and most data teams are not driving as much value as they should be driving. Most data teams spend an inordinate amount of time on tasks that they shouldn’t be spending time on, like finding access to the “right data”, responding to questions like “what does this variable mean?” or “which data should I use for this dashboard?”, troubleshooting issues on broken dashboards, or simply answering questions from the business about whether they can trust the data.

These issues easily consume 50-60% of a data practitioner's time. That’s expensive, not just because of the cost of the time itself, but because of the value that time could otherwise generate.

I’ve been personally very interested in the “developer productivity engineering” movement in the software engineering world, with companies like Netflix building out entire Developer Productivity Engineering teams. Even sales teams invest in sales operations and sales enablement to improve their productivity.

I believe we need a version of that in the data world: a data enablement function focused on automation, team productivity, and supporting the data-adjacent roles and functions inside a company.

VP of Data Enablement, anyone?

Fave Link from Last Week

I highly recommend this article from Emily Thompson! Below is an excerpt:

“Keep in mind that a typical data stack doesn’t just have one end customer. Data engineers, analytics engineers, data scientists, and analysts are also going to use the platform they are building, and you should consider them as important customers with real needs as well. For example, a data platform engineer responsible for provisioning/maintaining a cluster in an AWS shop will have different needs than a data scientist focused on understanding customer behavior, and they will have different needs than an analyst creating daily reports for a marketing executive. It also helps to frame the personas of your data customers by what skills each of them has, and what their ideal workflows look like. Later, when making tooling choices, you can look back at your full set of data customers and make sure you aren’t over-rotating on support for one while cutting the arm off another.”

Image by Emily Thompson

More from my reading list

I’ve also added some more resources to my data stack reading list. If you haven’t checked out the list yet, you can find and bookmark it here.

All about the “metrics layer”

Metrics are critical to assessing and driving a company’s growth, but managing them has been a struggle for years. They’re often split across different data tools, with different definitions for the same metric across different teams or dashboards. How do you make sure people across the organization have a common understanding of metrics? How do you set the right definitions? Should metrics be designed based on standard practices or on the popularity of usage?
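The core idea behind a metrics layer is to define each metric once, centrally, and have every downstream consumer (BI tool, notebook, dashboard) derive its query from that single definition instead of re-implementing it. Here is a minimal sketch of that idea in Python; all names here (`Metric`, `fct_orders`, `compile_query`) are hypothetical illustrations, not any particular vendor's API:

```python
# A toy "metrics layer": one canonical definition per metric, shared by
# every team, with queries generated from the definition rather than
# hand-written per dashboard. All names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Metric:
    name: str          # canonical metric name used everywhere
    table: str         # source table the metric is computed from
    expression: str    # SQL aggregation expression
    time_column: str   # column used for time-based grouping


# The single source of truth: change a definition here and every
# consumer picks it up, so "revenue" means the same thing on every dashboard.
METRICS = {
    "monthly_revenue": Metric(
        name="monthly_revenue",
        table="fct_orders",
        expression="SUM(order_amount)",
        time_column="ordered_at",
    ),
}


def compile_query(metric_name: str, grain: str = "month") -> str:
    """Generate identical SQL for any consumer of the metric."""
    m = METRICS[metric_name]
    return (
        f"SELECT DATE_TRUNC('{grain}', {m.time_column}) AS period, "
        f"{m.expression} AS {m.name} "
        f"FROM {m.table} GROUP BY 1"
    )


print(compile_query("monthly_revenue"))
```

Real metrics layers (dbt's semantic layer, Transform's MetricFlow) are far richer than this, with dimensions, filters, and governance, but the design principle is the same: the definition lives in one governed place, and the SQL is derived from it.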

I’ve personally been very interested in the conversation around metrics, and it looks like this week I’m getting a ton of opportunities to dive deep!

→ I’m super excited to host Drew Banin (co-founder, dbt Labs) and Nick Handel (co-founder, Transform) on Thursday 21st of April to discuss the future of the metrics layer and metadata. Come join us!

On the 26th of April, I’ll also be joining Barry McCardel from Hex, Kashish Gupta from Hightouch, Chetan Sharma from Eppo, Oliver Laslett from Lightdash, and Allegra Holland from Transform at Transform’s Metrics Store Summit to talk about the evolving metrics layer.

I'll see you next week with more interesting updates from the modern data stack! Meanwhile, you can subscribe to the newsletter on Substack and connect with me on LinkedIn here.

