Career Wins & Mutating Mental Models
Tim Metz - Mental Models


My friend Benn Stancil wrote a piece on Kahneman's System 1 vs. System 2 thinking in an organization. A simplified take is that "executives' System 1 assumptions" always win, and that "[t]he much more powerful thing that we [data analysts] could do is teach a company a theory—or as Abhi Sivasailam put it, our job shouldn't be to look for insights, but to mutate mental models."


This take made me think of a career highlight, one that really had nothing to do with me.


In 2010, I joined Yammer, Inc., an enterprise social network, as a data leader (of a team of two). Some very raw but great systems were already in place; in fact, we had built a very early (and much less functional) version of Mozart Data.


At the time, people called Yammer "Twitter for Business": users at a company chose to follow other employees to receive useful information. The follow model was assumed to be right, as the largest social networks used something like it. Yammer cared deeply about its active user count and tried to iterate on the product to improve it. One of the most powerful levers for improving the product experience was improving the content feed algorithm, unsurprisingly, because every user interacts with it.


In my first three months, someone was working on an improvement to the new-user feed algorithm that extended beyond content from followed employees (joining a sparse network was a new-user retention problem). The CTO decided that the new algorithm should square off not only against the existing product, but also against a variant that showed additional content (to build a richer understanding of the mechanism behind the potential improvement). To stand up that comparison arm quickly, an engineer, rather than reasoning about a competing algorithm, decided to simply show all company content.


As the team suspected, the new algorithm was an improvement over the old model, but it got trounced by the naive algorithm (show everything). In fact, that change was one of the five most impactful experiments in my four years at Yammer (though the naive algorithm was eventually improved upon).


As a data analyst, I had to compare the averages of metrics across conditions A, B, and C. The work was (very) straightforward, but the impact was large.
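
To make that concrete, here is a minimal sketch of that kind of analysis: a mean per condition, plus a pairwise significance check. The data, the column names, and the choice of Welch's t-test are my illustrative assumptions, not a record of Yammer's actual pipeline.

```python
# Hypothetical per-user experiment data: one row per user, with the
# condition they were assigned and their value for the metric of interest.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "condition": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "days_active": [3, 5, 4, 4, 6, 5, 8, 9, 7],  # e.g., days active in first 30 days
})

# The core of the work: the average of the metric for each condition.
print(df.groupby("condition")["days_active"].agg(["mean", "count"]))

# A pairwise check that an observed lift is unlikely to be noise
# (Welch's t-test, which does not assume equal variances).
a = df.loc[df["condition"] == "A", "days_active"]
c = df.loc[df["condition"] == "C", "days_active"]
t_stat, p_value = stats.ttest_ind(c, a, equal_var=False)
print(f"C vs. A: t={t_stat:.2f}, p={p_value:.3f}")
```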


First, we rolled out the improvement to all companies, which showed a lift consistent with the initial test (and which we built upon further).


Second, it shook blind confidence in any product feature (there was still a vision and plenty of product opinions). We learned to experiment and build incrementally.


Third, it led to greater investment in the analytics/data team. Though all we did was take the mean of each condition, the company did the reverse of shooting the messenger. A strong belief took hold in the value of analytics infrastructure to aid product development (and to help assess tests, which uncovered more wins).


The Yammer Analytics Team was probably the most impactful data team I've worked on. The belief in investing in data was already there, but it got supercharged by a single (somewhat accidental) experiment. Had that not happened in my first three months at the company, we might never have been as successful as a team.


The coincidence of a once-a-year-level win (very early in my tenure), counterintuitive results, and a demonstrated ability to assess changes shifted my team's outlook and my own career, though none of it was directly driven by me. We had updated the company's heuristic for how much testing matters (from somewhat to a lot, which proved mostly right). Rather than being solely a machine for reports of averages, we had infected the company's thinking about how best to do product development: we mutated mental models.
