The Eightfold Path of Page Speed Measurement
Photo credit: Saamiblog -- https://flic.kr/p/cb8LYQ


One of an occasional series of articles about the technical fundamentals which underpin the ability to meet the demand in your market or information space (the latter being what my normal work is all about). This one's about achieving astonishingly fast load times for web pages, and is a companion article to In Search of the Dark Bounce and The Need for Speed.

As Bananarama almost sang: It ain't what you measure, it's the way that you measure it. That's what gets results.

There are lots of different ways to measure page load speed: navigation timings, resource timings, even homebrew ‘time to interaction’ (scroll or click) scripts. You can report them as averages (yuck) or as bar-chart distributions (histograms, really). You can show how page load speed affects bounce rate and conversion rate. You can show which changes will get the biggest bang for your buck (answer: all of them at once). If you want to improve web page speed performance, your measurement choices matter.
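To make the ‘homebrew’ option concrete, here is a minimal sketch of capturing a completed page load with the browser's Navigation Timing API and beaconing it somewhere for later analysis. It's illustration only: the /speed-beacon endpoint and the choice of milestones are my assumptions, not a recommendation of any particular tool.

    // Minimal sketch: capture Navigation Timing milestones for a completed page
    // load and beacon them to a collection endpoint (/speed-beacon is hypothetical).
    window.addEventListener('load', () => {
      // loadEventEnd is only populated after the load handler returns,
      // so defer the read by one tick.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType(
          'navigation'
        ) as PerformanceNavigationTiming[];
        if (!nav) return;

        const payload = {
          page: location.pathname,
          ttfbMs: Math.round(nav.responseStart),               // time to first byte
          domContentLoadedMs: Math.round(nav.domContentLoadedEventEnd),
          loadEventMs: Math.round(nav.loadEventEnd),           // 'page load complete'
        };

        // sendBeacon is fire-and-forget and survives navigating away.
        navigator.sendBeacon('/speed-beacon', JSON.stringify(payload));
      }, 0);
    });

Per-asset resource timings come from the same API family (performance.getEntriesByType('resource')) if you want a breakdown by script, image or stylesheet.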

It’s tempting to compare yourself to the competition. If the competition is as slow as you are, why improve? But you aren't competing with your competition. You are competing with human distraction. The real question is “how much more money do we make if we can get even faster?”

With that in mind, here are my top tips on reporting page load speed.

  1. Report the change in the number of non-bounced (we would say ‘engaged’) pageviews. You may wish to show this as a rolling 12-month total to account for seasonality.
  2. Remember that you’re only reporting timings for page loads that completed. As discussed in In Search of the Dark Bounce, the loads that were abandoned never show up in your data, so make that explicit in any labels or graph titles.
  3. Report the absolute number of pageviews per time bucket. Time buckets might be 0-to-0.5s, 0.5-to-1s, 1-to-1.5s, 1.5-to-2s, 2-to-2.5s, 2.5-to-3s, 3-to-4s, 4-to-5s and so on (a sketch of this bucketing, together with the centiles from the next tip, follows this list). If you manage to bring some dark bounces into the light, you will see it as an increase in the absolute number of pageviews landing in a bucket.
  4. Graph by time bucket, and report the mode and key centiles. Show the distribution, not just a single average. For example: “25% of customers who completed a page load waited longer than 6 seconds for the home page to load”. Page load time at the 95th centile is a great reality check – 1 in 20 users experiences it slower than that.
  5. Separate out timings for the first pageview in the session. You would expect the first page in a session to load (lots) slower than pages later in the session, because those later pageviews can load some of their resources from the browser’s cache (a minimal way to flag first-in-session pageviews is sketched after this list).
  6. Segment by page template. Different page templates (i.e. home, category, product listing, product) need different resources. Report them separately.
  7. Measure the effects of measurement. Some measurement, tracking and reporting technologies introduce their own delays to the page load time. Examples I have met include Kenshoo, Coremetrics, Dynatrace and Tealeaf (though they may have improved by now). Test every script or tool for its effect on load times. You could use WebPageTest for this.
  8. Convert to value. Present speed improvements (and slow-downs) in terms of money gained or lost. Ensure that everyone sees it.
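As promised in tips 3 and 4, here is a minimal sketch of the bucketing and centile reporting, assuming you already have an array of completed load times in milliseconds. The bucket boundaries mirror the example above, and the nearest-rank centile is just one reasonable choice.

    // Sketch of tips 3 and 4: absolute pageview counts per time bucket,
    // plus a nearest-rank centile. Load times are assumed to be in milliseconds.
    const BUCKET_UPPER_BOUNDS_MS = [500, 1000, 1500, 2000, 2500, 3000, 4000, 5000, Infinity];

    function bucketCounts(loadTimesMs: number[]): number[] {
      const counts = new Array(BUCKET_UPPER_BOUNDS_MS.length).fill(0);
      for (const t of loadTimesMs) {
        counts[BUCKET_UPPER_BOUNDS_MS.findIndex((upper) => t < upper)] += 1;
      }
      return counts; // e.g. counts[0] = pageviews that loaded in under 0.5s
    }

    function centile(loadTimesMs: number[], p: number): number {
      // Nearest-rank: the load time below which roughly p% of completed loads fall.
      const sorted = [...loadTimesMs].sort((a, b) => a - b);
      const rank = Math.ceil((p / 100) * sorted.length) - 1;
      return sorted[Math.min(sorted.length - 1, Math.max(0, rank))];
    }

    // Example: "1 in 20 completed loads was slower than the 95th centile."
    const sampleLoads = [420, 980, 1430, 2100, 3400, 6100, 750, 1900];
    console.log(bucketCounts(sampleLoads));
    console.log(`95th centile: ${centile(sampleLoads, 95)} ms`);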
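And for tip 5 (and tip 6), one cheap way to tag first-in-session pageviews at capture time is a sessionStorage flag. The data-template attribute used for the page template here is a hypothetical convention, not something your pages will have unless you add it.

    // Sketch of tip 5: flag whether this is the first pageview of the browser
    // session, so cold-cache timings can be reported separately from warm ones.
    function isFirstPageviewInSession(): boolean {
      const seenBefore = sessionStorage.getItem('speed:seen') === 'yes';
      sessionStorage.setItem('speed:seen', 'yes');
      return !seenBefore;
    }

    // Attach to the beacon payload, along with the page template (tip 6).
    const beaconExtras = {
      firstPageviewInSession: isFirstPageviewInSession(),
      pageTemplate: document.body.dataset.template ?? 'unknown', // hypothetical data-template attribute
    };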

The top tips in action.

***

Optional further thinking

There's plenty more you can do to visualise improvements to page load speed, of course, though it gets very technical from here on in. Still, for the sake of demonstrating where you can get to once you start taking the possibility of the sub-second page load seriously...


Below: use packet traces (captured with pcap, processed through Wireshark) to measure the effect of changes to server TCP/IP settings (initial congestion window, congestion control algorithm). It's no good having more bandwidth if you never fill it!


Below: use packet modelling to identify better asset load orders under different transfer protocols (SPDY, HTTP/1.1, HTTP/2). This is all about getting the critical data down the pipe first.



All of the above were Kaiasm labs projects. We actually spend most of our time radically aligning websites with their markets; page speed is a personal side interest. Stay tuned for musings on how cognitive science helps you implement complicated Information Architectures, drawn from my experiences with Screwfix, the NHS and more. Thanks for reading, and spread the good word!

Liam McGee

Making data fit for human use. Keeping it real (ontologically). Building tools for the future of infrastructure.

PS: Ah, it was actually The Fun Boy Three and Bananarama. My bad.
