MeasureCamp London 2023
Alban Gérôme
Founder, SaaS Pimp and Automation Expert, Intercontinental Speaker. Not a Data Analyst, not a Web Analyst, not a Web Developer, not a Front-end Developer, not a Back-end Developer.
Yesterday, I had the pleasure of attending MeasureCamp London. London is where Peter O'Neill started MeasureCamp 11 years ago. Although many international MeasureCamps have sprung up since, many hardcore attendees still make the trip. One of them flew in all the way from Sydney, even though there will be a MeasureCamp in Sydney at the end of this month! He was a sponsor, so the trip was a business expense, but he still sat through the flight. Speaking of international MeasureCamps, this one kicks off a long streak of MeasureCamps across Europe and beyond, with not a single weekend without a MeasureCamp somewhere until November 4th: Stockholm next week, then Warsaw, Istanbul, Bratislava, Sydney, and Baku. There will also be MeasureCamps in Vienna and Brussels a little later. With little notice, we found out about a national train strike in the UK on the same day as MeasureCamp London. Many were concerned about the potential impact on the event, but Keely Jacob revealed that we smashed the attendance record with a little over 300 attendees. Many of us had managed to, or had planned to, travel to London the day before and leave on Sunday. As a result, the pre-MeasureCamp drinks at the Walrus and Carpenter pub were very well attended.
Data Layer Philosophy - Matt Bentley at Loop Horizon
A data layer is a data repository, but rather than a whole data lake, data warehouse, schema or database, it is a small chunk of JavaScript code embedded in the web page. The data layer, as initially intended by the World Wide Web Consortium (W3C), was to act as a tool-agnostic data structure containing things such as the page name, the site section and the logged-in status of the visitor, stored in one place so that all the tools we want to implement on the website can use the same data. Before data layers, someone would soon notice that every tool used a different value for the page name and that these values had to be mapped to each other. A data layer eliminates all of this because all tools now use the same page name. What goes for the page name goes for many other data points. Data layers are a huge time saver. More recently, the data layer has evolved to become a queue. The old-style data layer is not gone; we call it the "computed state" of the data layer.
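To make the queue versus computed-state distinction concrete, here is a minimal sketch assuming a Google Tag Manager-style dataLayer array and a CEDDL-style digitalData object; the event name and field names are illustrative, not taken from any specific implementation.

```javascript
// Queue-style data layer: an array of messages pushed over time. The tag
// management system wraps push() so it can react to every new message.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: "page_view",          // illustrative event name
  pageName: "homepage",
  siteSection: "home",
  loggedInStatus: false
});

// Old-style "computed state": one object describing the page as it is right now.
window.digitalData = {
  page: { pageName: "homepage", siteSection: "home" },
  user: { loggedInStatus: false }
};
```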
Matt Bentley made the case for a few things regarding the data layer. His approach reminds me of the "form follows function" mantra well known to architects. You can have the most creative building ever, but if it is impossible, or even dangerous, to live, work or shop in it, then the project has failed. Data layers face the constraint of a lack of people able to do the job of creating them. The people building these data layers also work with several clients or internal teams, all with their own priorities. The result is a need for simplicity, and Matt posited that we should only track interactions, decisions and views. Although it may be tempting to make the data layer platform-specific (mobile, web, other device types), Matt also recommends removing bespoke data layer features. Data layers are supposed to be tool-agnostic but also platform-agnostic, or, to put it simply, standardised.
The need for more straightforward data layers evolves further towards the concept of modular data layers. It is possible to split a data layer into chunks. You would have a global chunk that every web page requires, plus one or more other chunks, depending on the context. For example, product pages may require data layer information that the homepage does not. The same goes for the thank you page. We can stitch these data layer chunks into a full data layer and even flatten it through code.
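As an illustration of stitching chunks together, here is a minimal sketch; the chunk names and their contents are hypothetical, and Object.assign performs only a shallow merge, so a production implementation might merge nested branches more carefully.

```javascript
// Hypothetical chunks: a global chunk required on every page and a product
// chunk that only product pages need.
const globalChunk = {
  page: { pageName: "product:blue-widget", siteSection: "shop" },
  user: { loggedInStatus: true }
};

const productChunk = {
  product: { id: "SKU-123", name: "Blue Widget", price: 19.99 }
};

// Stitch the chunks into the full data layer. Object.assign is a shallow
// merge; deep branches that appear in more than one chunk would need a
// deep merge instead.
window.digitalData = Object.assign({}, globalChunk, productChunk);
```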
Flattening a data layer relies on data layers having a tree-like structure, with branches spawning other branches and eventually ending with a leaf, i.e. a data value. For example, we often find a page branch grouping all the page attributes under the root of the data layer, often called digitalData: in digitalData.page.pageName = "homepage", digitalData and page are branches, and "homepage" is the leaf. Flattening a data layer consists of building a two-column table with all the values in the right column and, in the left column, the sequence of branches required to find each value. I like to describe these as driving instructions for squirrels. A flattened data layer lists all the routes to each value starting from the ground, i.e. the trunk of the data layer. I know squirrels can jump between branches, but I hope you get the idea. Replace squirrels with slugs, if you must.
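Here is a minimal sketch of the flattening step, assuming the data layer is a plain nested JavaScript object; the function name is mine, not Matt's.

```javascript
// Recursively walk the data layer tree and emit [path, value] pairs,
// i.e. the "driving instructions" from the trunk to each leaf.
function flattenDataLayer(node, path = "") {
  const rows = [];
  for (const [key, value] of Object.entries(node)) {
    const branch = path ? path + "." + key : key;
    if (value !== null && typeof value === "object") {
      rows.push(...flattenDataLayer(value, branch)); // keep climbing the branch
    } else {
      rows.push([branch, value]);                    // reached a leaf
    }
  }
  return rows;
}

// Example:
// flattenDataLayer({ page: { pageName: "homepage" } })
// -> [["page.pageName", "homepage"]]
```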
Matt explained how you can build validation checks that will compare the values you pass to a data layer with the data layer schema. A data layer schema describes the expected data types like a traditional database table schema. For example, suppose you try using a number as a page name. In that case, the validation step should return an error because it expects a string, i.e. a sequence of letters, possibly digits and a few other characters. You might also enforce a maximum number of characters for that page name, and exceeding this maximum length would also return an error.
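Matt did not share his tooling, but a hand-rolled version of such a validation check might look like the following sketch; the schema shape, paths and limits are illustrative, and in practice an off-the-shelf JSON Schema validator would do the same job.

```javascript
// A hypothetical, hand-rolled schema: expected type and optional max length
// per flattened data layer path.
const schema = {
  "page.pageName": { type: "string", maxLength: 100 }
};

function validate(path, value) {
  const rule = schema[path];
  if (!rule) return [];                                    // nothing to check
  const errors = [];
  if (typeof value !== rule.type) {
    errors.push(path + " should be a " + rule.type + ", got " + typeof value);
  }
  if (rule.maxLength && String(value).length > rule.maxLength) {
    errors.push(path + " exceeds " + rule.maxLength + " characters");
  }
  return errors;
}

// validate("page.pageName", 42)
// -> ["page.pageName should be a string, got number"]
```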
Circling back to the idea that data layer specialists are scarce compared to developers, one such simple, standardised and modular data layer would then be ready to share with the developers. The developers would add the global data layer chunk into a CMS module that is also global, i.e. required to load on every page. The other data layer chunks can go into other CMS modules with the same scope, i.e. loading together on the same page type.
SPA approaches - Nathaniel Weiss, CTO at Conductrics
SPAs were the topic of the first MeasureCamp session I gave over the years, and not the last. For that first session, I wrote a basic SPA using Google Analytics, Angular (aka AngularJS, version 1) and a few Angular routing modules. An SPA is a single-page application. Imagine you need to print 20 pages, and all of them share the same header and footer. You could print them all on blank sheets. Or you could ask a printing company to sell you sheets with the header and footer printed in advance, so all you have to print is the part unique to each of the 20 pages. SPAs work on the same principle. You have a footer and a header that you want to keep identical from one page to the next; perhaps the same goes for a navigation menu on the side of the screen. Some parts of the screen never need to change, and others do. So why load a new page with all the content that doesn't change? Can't we keep the content that does not change on the screen and load and replace only the content that does? The answer is yes, and that's precisely what an SPA does. Why do SPAs matter? Because the web pages load faster, since you only load the content that changes. It's great on mobile phones because it loads less data and is kinder to your data plan. Search engines penalise slow websites with lower search engine ranking positions.
But since there is no full page load, you only get a page view for the first page. The visitors see what looks like a sequence of new pages, and they should be tracked as such, firing so-called virtual page views. Most Digital Analytics vendors support virtual page views. So the question becomes: how do we detect that the page has changed and respond to that change with a virtual page view? Over time, people have found many answers to this question. Nate recognised two main approaches: "paint on top" and "within". "Paint on top" means that whatever solution or JavaScript framework your developers choose to develop the SPA, there are tracking solutions entirely independent of that technical context. If your developers decided to upgrade their framework or replace it with another one, it would have zero impact on your tracking abilities. "Within" relates to options where we need to work closely with the developers to have them embed our tracking code inside their SPA code. That code will be specific to the SPA framework they chose.
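As a rough illustration, a virtual page view often boils down to pushing a message onto the data layer queue; the helper, event and field names below are hypothetical, and the exact payload depends on your vendor and TMS.

```javascript
// Hypothetical helper that reports a virtual page view by pushing a message
// onto a GTM-style data layer queue.
function trackVirtualPageView(pageName, pagePath) {
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: "virtual_page_view",   // illustrative event name
    pageName: pageName,
    pagePath: pagePath
  });
}

// Called from the SPA router ("within") or from an observer ("paint on top"):
trackVirtualPageView("product detail", "/products/blue-widget");
```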
Because triggering our tracking at the right time is critical, Nate favours the options where the tracking code fires from within the SPA code rather than being painted on top. However, the advantage of having an SPA-framework-agnostic tracking solution is hard to deny. A Digital Analytics implementation team can implement it in the tag management system (TMS) without speaking to the developers or waiting for their turn. We all know how developers treat Digital Analytics work as low priority because it happens under the hood, out of sight of the visitor, and the site does not break when the Digital Analytics code fails. Here, Nate suggested two "paint on top" approaches:
DOM mutation observers are something I discovered perhaps five years ago, if not more, and this is my favourite approach. It is entirely SPA-framework-agnostic and, therefore, requires no discussion with the developers and does not depend on a triage process that is unfair to us. As the web page needs updating, the content structure under the hood will change. Some copy and images will disappear, and new ones will come into view. The document object model (DOM) describes the webpage's structure, and the DOM will mutate when the screen updates. With an observer, we can be notified of such mutations, and one of them will coincide with the page name changing. We can filter through the cascade of DOM mutation events to find the one that also caused the page name to change. If no such event appears, we consider that we are still on the same page even though the screen has changed, so there is no need for a new page name. But if we find it, then we need to fire a virtual page view.
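A minimal sketch of this approach, assuming the page name can be read from document.title and that the SPA renders into a container such as #app; both assumptions are mine, not Nate's.

```javascript
// Watch the SPA's content container and fire a virtual page view only when a
// DOM mutation coincides with a page name change.
window.dataLayer = window.dataLayer || [];
let lastPageName = document.title;

const observer = new MutationObserver(function () {
  const currentPageName = document.title;
  if (currentPageName === lastPageName) {
    return; // the screen changed, but the page name did not: same page, fire nothing
  }
  lastPageName = currentPageName;
  window.dataLayer.push({
    event: "virtual_page_view",      // illustrative event name
    pageName: currentPageName,
    pagePath: window.location.pathname
  });
});

observer.observe(document.querySelector("#app") || document.body, {
  childList: true,   // watch for nodes being added or removed...
  subtree: true      // ...anywhere under the container
});
```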
Intersection observers are something I was aware of, but I had failed to see their potential for detecting SPA screen changes. DOM mutation observers are a fantastic solution, in my view, but you will run into SPAs where the DOM does not mutate. All you get is a page loading all the potential screen changes at once, showing the first one while hiding all the others. Then, in reaction to visitor behaviour, it hides the first screen, shows the second one, and so on. You get zero DOM mutations, unless the display state is toggled with a JavaScript class name change, which only registers as a mutation if you add attribute changes to the observer options. Otherwise, your DOM mutation observer would be blind. The situation I am describing here is a carousel. As the current slide shifts to the left and the new one comes in from the right, the new slide intersects with the carousel viewport, and we can use an intersection observer to detect this. If the slideshow is manual, you can track the clicks on the forward and backward buttons instead. But if you have a slideshow that runs automatically, the intersection observer might be your solution.
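A minimal sketch of the carousel case, assuming hypothetical .carousel and .slide selectors, an illustrative event name and a 50% visibility threshold.

```javascript
// Fire a tracking event when a carousel slide becomes at least half visible
// inside the carousel viewport.
window.dataLayer = window.dataLayer || [];

const carousel = document.querySelector(".carousel");        // hypothetical selector
const slides = document.querySelectorAll(".carousel .slide"); // hypothetical selector

const slideObserver = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      window.dataLayer.push({
        event: "slide_view",                                  // illustrative event name
        slideId: entry.target.id || "unknown-slide"
      });
    }
  });
}, { root: carousel, threshold: 0.5 });

slides.forEach(function (slide) {
  slideObserver.observe(slide);
});
```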
A participant asked a question about rehydration. Be careful when searching for content about SPAs and rehydration, as it may not return what you expect. Rehydration is also a page performance optimisation technique. You load a full static page with very little code on it, so the page loads fast and reaches the "first contentful paint" milestone sooner. The page is not "interactive" yet, but I assume the first contentful paint status matters more to SEO indexing bots. The little code that the static version of the page contains acts like a foot in the door for the rest of the code that also needs to load on the page. It's a bit like those call centres where an agent picks up the phone immediately, only to put you straight on hold. It games the metrics, and Nate admitted that using DOM mutation observers on such pages may lead to unexpected results, even though DOM mutation observers are framework-agnostic. I have no experience with rehydration, but I understand that the rehydration process should only load code that does not impact the DOM, i.e. not result in visible screen changes. But if it ever does, DOM mutations would happen, and the page name might change. Here, as well, an intersection observer could work in some cases.
From intuition to insight: Building a data-driven culture by Eugen Potlog at UX Studio
Despite GDPR, whose Article 5 treats data hoarding as a breach of the data minimisation principle, companies are collecting a lot of data they don't use, and the stakeholders fail to realise the value of much of that data. Companies may not use more than 1 to 5% of it. Many stakeholders rely on gut feeling because that's how they, or their company, have always done things. But data-informed decision-making has clear advantages.
Eugen sees three pillars to achieve data transformation:
Data Accessibility
The benefits are increased decision speed, more effective collaboration with a single version of the truth, and more inter-departmental trust thanks to transparency. Companies must invest in user-friendly platforms and tools, centralised data sources with easy access, and proper data governance and protocols to achieve these benefits.
Data Literacy
Teams become empowered to generate insights from data, reduce the risk of data misinterpretation, and gain confidence in data-driven strategies. Companies can deliver this with regular training and workshops, a culture that incentivises discussions about data, and data visualisation for a better understanding of the data.
Decision-making processes
Companies can benefit from evidence-based decisions rather than guesswork and intuition, leading to improved confidence in business outcomes. Companies must define a transparent process that involves data before any significant decision, encourage brainstorming about the data, and engage in a Plan-Do-Check-Act (PDCA, DMAIC, Deming's Wheel) loop of constant refinement.
The challenges are many. People will prefer the status quo over new methodologies. When people believe knowledge is power, teams hoard the data, and multiple versions of the truth emerge. The data will also have varying degrees of quality across the silos. Data literacy remains rare, and people do not understand how to use the data. Circling back to the initial point, companies are hoarding data in volumes that people find overwhelming to work with, leaving them unsure where to start and prone to the infamous analysis paralysis.
To overcome these challenges, Eugen recommended embracing change management. People should take advantage of training and workshop opportunities, engage in open communication and evangelise the data-driven approach's benefits. Too many teams are still operating inside silos; companies should invest in integrated data platforms that let people from different teams cooperate. Without good data, getting people to trust data-informed decisions will remain elusive. Companies need data governance, clean and up-to-date data and datasets following the same quality standards.
Doing good with data by Jono Alderson, technical SEO consultant; Valentin Radu, Omniconvert Founder and CEO; and Alice Jennifer Moore, digital analytics consultant
Due to a post-lunch coma, my notes for the first half of the session are incomplete. Lydia mentioned a UK charity called DataKind, with which she is involved. Maggie Petrova had a similar experience back home in Bulgaria during the pandemic. She worked on projects such as verifying electoral results and analysing children's care data. Maggie explained how she has found a lot of purpose in doing this and gained exposure to tools that she would not have encountered while working agency-side.
Nataly suggested that data transparency could do a lot of good and spotlight cases of perverse outcomes. But companies might not be so keen to damage their brands in the pursuit of transparency. Another participant mentioned how data could help fight short-termism, which he holds responsible for many issues.
Nicholas Redding suggested creating and adopting a code of conduct for data practitioners, something akin to the Hippocratic Oath for doctors. This resonates with many data practitioners who want to distance themselves from the Cambridge Analytica scandal. Data practitioners should always refuse to collect personally identifiable information (PII) in breach of GDPR and other national legislation. But we should also challenge requirements to collect sensitive data points such as salary. Ton Wesseling told us that one such code of conduct already exists in the Netherlands. It provides agencies with a unique selling proposition that helps them stand out from the competition and helps clients who share similar values identify worthy partners.
Ellie Hughes suggested tackling the environmental impact of data centres using unorthodox methods. It is also a concern of mine whenever a stakeholder asks to track everything: data centres will need to crunch all that data. Jono mentioned that ChatGPT requires a lot of water to cool its servers. An article published by Euronews in April 2023 discussed a survey revealing that for every 20 to 50 ChatGPT requests, the computers analysing the request and providing the answer required half a litre of water to cool down. A Gizmodo article published in May 2023 also revealed that "training ChatGPT required enough water to fill a nuclear reactor's cooling tower", according to a study. Ellie added that we should look into creating a guild of data practitioners, potentially using the guild of software developers as a template.
Alice, Valentin and Jono agreed to share a plan for a code of conduct soon. That's a session that Camille Chaudet, Siobhan S. and Stephane Hamel would have enjoyed a lot, and perhaps we will see a similar code of conduct in their respective countries, i.e. France, Greece and Canada.
Centralised Implementation: Who is responsible for Digital Analytics Implementation? by Andrew at Loop Horizon, yep, them again!
Andrew explained how implementation work covered working with data layers and tags, but the "and" somehow became an "or" for no good reason. Implementation specialists should be able to work with both. Some of this might result from early marketing comms about tag management systems (TMS) and how TMS users no longer need to liaise with their dev team. As his colleague Matt pointed out, how we work must follow the talent constraints we face. For every single TMS user, there are roughly 58 software developers. Andrew estimates the total number of Digital Analytics practitioners in the UK at around 7,000. The number of people who can do implementation will be an even smaller fraction. If it's a data layer or a firewall issue, then that's for the dev team to deal with, but everything else should be fair game for an implementation specialist.
Andrew mentioned a few ways implementation specialists can work more closely with the developers. He will cover the whole session in a separate post soon, but here are some of the points I found most relevant. When tracking a website, we should strive for a minimum viable product (MVP); the data layer and the tags should be simple and collect only what is strictly necessary. From there, we can scale up. Reusability of components is also something that Andrew recommended, which resonates with Matt's session on how data layers should be simple, standardised and split into chunks that you can reuse across projects and clients. One point that Andrew could not stress enough was fighting technical debt at all costs. I agree that it's hard to debug old code. If, in addition to this, you use your TMS to implement quick fixes that the devs have no time to work on, you get another problem on your hands. Don't let the TMS team become a shadow IT function managing processes that have nothing to do with Digital Analytics and of which IT has no visibility.
Regarding Andrew's primary TMS, Adobe Launch, he outlined the pros and cons of the different places where you can save custom code. Custom code can live inside a data element, a reusable block of code returning one value. Rules can also contain reusable custom code, but only if you can bulk copy rules across properties, which is what Adobe calls a container of rules and settings linked to a specific domain. There is also the option of putting custom code inside an Adobe Launch extension, public or private, but preferably private. Andrew is okay with complex components as long as they make day-to-day work easier and do not require frequent maintenance.
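For illustration, a custom code data element in Adobe Launch is essentially a snippet that the editor wraps in a function and that must return a single value; the CEDDL-style digitalData lookup below is an assumption of mine, not Andrew's example.

```javascript
// Body of a hypothetical Custom Code data element in Adobe Launch.
// The editor wraps this code in a function, so a bare return is expected here.
// Falling back to document.title when the data layer is missing is illustrative.
var dataLayer = window.digitalData || {};
var page = dataLayer.page || {};
return page.pageName || document.title;
```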
Analytics Career Choices by Steen Rasmussen at IIH
In Alice's Adventures in Wonderland, Alice arrives at a fork in the path and does not know which way to go. The Cheshire Cat appears and tells her that if she doesn't know where she wants to go, it doesn't matter which way she takes. Every year, Harnham publishes its salary survey for Digital Analytics, and one of the top reasons why people leave their roles is the lack of career progression. So Steen's session touched on a sensitive topic for many attending MeasureCamp.
Steen remarked that we all fell into Digital Analytics by accident rather than design. Steen's background is in Public Relations; mine is Languages. According to Steen, the field is reasonably safe because companies are desperate to prove that they are data-driven.
Steen referred to a professor from Lund in Sweden who proposed a way of categorising people that maps well onto Digital Analytics practitioners:
The first category, the career-oriented professionals, will achieve career progression, something which, as mentioned above, is relatively rare in our field.
The insecure specialists will be on the bleeding edge, reading blog posts as soon as they are published. They are also driven by the pursuit of mastery, finding that the job is its own reward. Many companies have made the costly mistake of promoting them in recognition of their talents, only for them to resign two months later when they found out they hated managing people.
Finally, the shamans are evangelists. They will post content often, speak at conferences, publish books, etc. Becoming a shaman requires a lot of work and persistence. Steen gave Simo Ahava and Mark Edmondson as examples.
Many of us know the visualisation of the Martech sector, with loads of different vendors and tools, a list that never stops expanding. According to Steen, there is at least one niche in this landscape where everybody can become a shaman, i.e. the specialist that everybody recognises, the big fish in a small pond. To become one such shaman, however, one must be prepared to relocate to another country to seize the opportunities where they are.
I attended Ellie Hughes's and Chiara C.'s session, but I joined too late to write a summary that does it justice. Ellie and Chiara gave advice on how to coach and how to get coached. I'll keep my eyes peeled for a post where they share their deck or the content they covered.
This concludes my takeaways from yesterday's MeasureCamp in London. Like Neo in The Matrix, you might feel that you know kung fu, with hours of sessions condensed into a 15-20 minute read. I want to thank Keely Jacob, Anna Lewis, and Charles Meaden, who are part of the organisation team. You managed to break the record on a national train strike day! Thank you to all the volunteers and the sponsors. Shouts to, oh my God, where do I begin? That's even harder than writing the whole article: Nicolas Malo, Tim Ceuppens, Arianne Leijenaar, Eddie May, Jomar Reyes, Matt Gershoff, David Vallejo, Russel M, Bhavik Patel, Adrian Kingwell, Alexander Holman-Butt, Michael Fong, Sarah Hector, Samia ABARA, Jean-Philippe Jover, François Joly, Sébastien Monnier, Tom Robbins, Thijs van Oirschot, Daphne Tideman, Mirna van Slooten, there's method to this madness, I swear! Patrick Oliver Mohr, Marcus Stade, can't find Bea? Moritz Bauer, Carsten Minnecker, Yael Farkas, Mihail Dobrinski, Mark Pinkerton, Grant Kemp, Gerry White, @James, I can't remember how you spell your last name, Cotterell, Catterill or Cotterill, Gideon Delayahu, Clarice Lin, Iwona Posel, Gabriela 'G' Szpalerska-Denison, Craig Sullivan, Dion Jones, James Lovell, Adam Gutteridge, Mahdi Aden, Alexandre N. Koletsis, Phillip Law, Andrew Hood, Ibrahim Elawadi. If I forgot anybody, I apologise! Vi ses i Stockholm nästa vecka för #MeasureCampSTHLM!
For more articles like these, follow me on LinkedIn, or X: @albangerome
#MeasureCampLDN #MeasureCamp #London #DigitalAnalytics #WAWCPH #CBUSWAW