Chrome UX Report: Another Dataset or is there more to it?
Source for the Image: https://web.dev/


Introduction

The web development and performance monitoring community has seen a significant shift over the last decade in how applications are developed, shipped, and monitored.

We have witnessed the evolution of DevOps and SRE practices and an increased focus on making the experience of the "end user" better.

If we look at some of the core themes that define the applications of today's world, some of these would resonate with all of us:

(Image: Themes that define today's modern applications)

When it comes to the shifts, some of these are pretty prominent:

  • Shift from Monolithic to Microservices first
  • Shift from On-Prem to SaaS
  • Shift from Waterfall to Agile/Scrum/Kanban etc.
  • Shift from traditional infrastructure monitoring to monitoring from an end user's perspective

If we step back a little and focus on the "why", we realize that most of the shifts we see are aimed at a single objective.

The objective being: "Making the life of the buyer/user/consumer of the tool we build a little easier."

Coming from a monitoring background, on a day-to-day basis I deal with a lot of datasets to detect, triage, mitigate, and resolve issues.

These datasets are related to:

  • BGP feed telemetry
  • Network telemetry
  • Device telemetry
  • Telemetry from CDNs, Load balancers, Global Traffic Managers etc.
  • Logs, Metrics and Traces from APM solutions
  • Real User (Field) and Synthetic (Lab) telemetry

So, when I first encountered CrUX, my first reaction was: oh no, not another dataset! From a dataset standpoint, I already have my Synthetic and Real User Monitoring solutions helping me understand Reachability, Availability, and Reliability issues as well as performance bottlenecks.


After fiddling with CrUX for some time, one use case where I felt the dataset would help me immensely was performance benchmarking.

Synthetic measurements are what I look at when creating benchmarks, for the following reasons:

  • Measurements from Synthetic are noise-free, stable, and actionable
  • Since the measurements are run from the same source every time, benchmark reports from Synthetic can be trusted
  • No instrumentation required and easy to set up
  • With a tool like Catchpoint, it becomes easy to run global benchmarks because of the distributed, global points of presence it offers. Also, "No Cloud nodes because... c'mon, we all know why"

However, something was missing from the benchmarks I ran: all the measurements were Synthetic. Though they were actionable and helped paint a really clear picture of the rankings, it was difficult to correlate them against what end users were seeing in the "field".

Correlation is always the most important aspect when you deal with data, and this is exactly where CrUX came into the picture. Let us now spend some time understanding what CrUX is, and how it can be queried and used.

Chrome User Experience Report (CrUX)

According to developer.chrome.com: "CrUX data is collected from real browsers around the world, based on certain browser options which determine user eligibility. A set of dimensions and metrics are collected which allow site owners to determine how users experience their sites."

The data collected by CrUX is available publicly through a number of tools and is used by Google Search to inform the page experience ranking factor. Both Catchpoint and WebPageTest allow you to query the CrUX datasets and build custom reporting, benchmarks, and alerting on top of them.

The CrUX dataset primarily focuses on the Core Web Vitals metrics: Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and First Input Delay (FID). It also includes diagnostic metrics like Time to First Byte (TTFB/Wait) and First Contentful Paint (FCP).

The CrUX datasets enable web developers, performance engineers and enthusiasts to compare their real user performance against the competition and industry.
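To make such comparisons concrete, the published "good" thresholds for these metrics can be captured in a small lookup table. A minimal Python sketch, with thresholds as documented on web.dev (milliseconds everywhere except CLS, which is a unitless score):

```python
# "Good" thresholds per web.dev guidance; they match the boundaries of the
# first histogram bin in the CrUX API output. Milliseconds, except CLS.
GOOD_THRESHOLDS = {
    "largest_contentful_paint": 2500,
    "first_input_delay": 100,
    "cumulative_layout_shift": 0.1,
    "first_contentful_paint": 1800,
    "experimental_time_to_first_byte": 800,
    "experimental_interaction_to_next_paint": 200,
}

def is_good(metric: str, p75: float) -> bool:
    """A site passes a metric when its 75th percentile is at or below the threshold."""
    return p75 <= GOOD_THRESHOLDS[metric]
```

For example, the LCP p75 of 1675 ms in the sample output further below falls comfortably inside the "good" range.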

Querying the CrUX datasets

There are a few different ways in which the data can be accessed.

  • CrUX Dashboard: The CrUX Dashboard is a customizable data visualization tool, built on Data Studio, for exploring a website's historical performance.
  • PageSpeed Insights
  • CrUX on BigQuery
  • WebpageTest: The UI as well as the WebPageTest API supports CrUX natively allowing you to build your reporting on top of it.
  • CrUX API: The CrUX API is probably my preferred option here because it allows me to build my own queries against the datasets and fetch the metrics and sites that I care about the most. The CrUX API is what I query using Catchpoint to pull the metrics. The queries are straightforward and the documentation is pretty neat, which makes querying the API a lot easier.

CrUX API: Sample Queries

There is a single endpoint for the CrUX API, which accepts POST HTTP requests.

A sample query would look like this:

POST https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=[YOUR_API_KEY]

with the following POST body:

{
  "url": "https://www.dhirubhai.net/",
  "formFactor": "DESKTOP"
}

The output of this call would look something like this:


{
  "record": {
    "key": {
      "formFactor": "DESKTOP",
      "url": "https://www.dhirubhai.net/"
    },
    "metrics": {
      "experimental_time_to_first_byte": {
        "histogram": [
          {
            "start": 0,
            "end": 800,
            "density": 0.89203436157750893
          },
          {
            "start": 800,
            "end": 1800,
            "density": 0.0835611089418196
          },
          {
            "start": 1800,
            "density": 0.024404529480671448
          }
        ],
        "percentiles": {
          "p75": 426
        }
      },
      "first_contentful_paint": {
        "histogram": [
          {
            "start": 0,
            "end": 1800,
            "density": 0.90857255210381438
          },
          {
            "start": 1800,
            "end": 3000,
            "density": 0.054463232402674013
          },
          {
            "start": 3000,
            "density": 0.036964215493511636
          }
        ],
        "percentiles": {
          "p75": 1000
        }
      },
      "first_input_delay": {
        "histogram": [
          {
            "start": 0,
            "end": 100,
            "density": 0.93289224952741023
          },
          {
            "start": 100,
            "end": 300,
            "density": 0.044990548204158785
          },
          {
            "start": 300,
            "density": 0.022117202268431037
          }
        ],
        "percentiles": {
          "p75": 17
        }
      },
      "largest_contentful_paint": {
        "histogram": [
          {
            "start": 0,
            "end": 2500,
            "density": 0.87517268600749953
          },
          {
            "start": 2500,
            "end": 4000,
            "density": 0.072330767712650471
          },
          {
            "start": 4000,
            "density": 0.052496546279850051
          }
        ],
        "percentiles": {
          "p75": 1675
        }
      },
      "cumulative_layout_shift": {
        "histogram": [
          {
            "start": "0.00",
            "end": "0.10",
            "density": 0.901910577112468
          },
          {
            "start": "0.10",
            "end": "0.25",
            "density": 0.037029741973606464
          },
          {
            "start": "0.25",
            "density": 0.0610596809139255
          }
        ],
        "percentiles": {
          "p75": "0.02"
        }
      },
      "experimental_interaction_to_next_paint": {
        "histogram": [
          {
            "start": 0,
            "end": 200,
            "density": 0.77618442548556865
          },
          {
            "start": 200,
            "end": 500,
            "density": 0.15175167907061166
          },
          {
            "start": 500,
            "density": 0.072063895443819789
          }
        ],
        "percentiles": {
          "p75": 178
        }
      }
    }
  }
}
        

You can also change the POST body to query for specific metrics.

Example:


{
  "url": "https://linkedin.com/",
  "formFactor": "DESKTOP",
  "metrics": [
    "largest_contentful_paint",
    "experimental_time_to_first_byte"
  ]
}        
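Putting the two query shapes together, here is a minimal Python sketch of how such a request could be built using only the standard library. The endpoint is the one documented above; `build_query` and the key placeholder are illustrative names, not part of the API:

```python
import json
import urllib.request

API_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_query(api_key, url, form_factor="DESKTOP", metrics=None):
    """Build (but do not send) the POST request for the CrUX API."""
    body = {"url": url, "formFactor": form_factor}
    if metrics:
        body["metrics"] = metrics  # optional: restricts the response to these metrics
    return urllib.request.Request(
        f"{API_ENDPOINT}?key={api_key}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it requires a valid API key and network access:
# with urllib.request.urlopen(build_query("YOUR_API_KEY", "https://www.linkedin.com/")) as resp:
#     record = json.load(resp)
```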

Output/Result

Once you have your queries set up and running, the next step is to parse the data, store it, and build some amazing visualizations on top of it.
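As an illustration of the parsing step, here is a minimal sketch that flattens a CrUX record into p75 values and "good" densities. The `summarize` helper is my own naming and assumes the response shape shown earlier; note that CLS values come back as strings and may need converting:

```python
def summarize(record):
    """Flatten a CrUX API response into {metric: {"p75": ..., "good_density": ...}}.

    The first histogram bin is the "good" bucket, so its density is the
    fraction of page loads with a good experience for that metric.
    """
    out = {}
    for name, data in record["record"]["metrics"].items():
        out[name] = {
            "p75": data["percentiles"]["p75"],  # CLS p75 arrives as a string
            "good_density": float(data["histogram"][0]["density"]),
        }
    return out
```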

In my case, I used Catchpoint to query the datasets via its API test and used the same datastore offered by the solution.

Here are some dashboards that I set up:


To summarize, for me CrUX ended up being not just another dataset but a pretty powerful one, allowing me to correlate Synthetic measurements (lab measurements) with Real User measurements (field measurements).

This correlation helps track not just the individual metrics but also the changes observed in the CrUX measurements as a result of optimization efforts at our end. In short, it gives me a perfect combination of Synthetic and Real User benchmarks in the same place, while also allowing me to correlate them with the other datasets I deal with day to day.
