10 commandments for success in online marketing I wish I had learned in school

Just after finishing my nine-month trainee programme at the Swedish price comparison site PriceRunner, I was asked by my manager to reflect upon my personal development and learnings over that period. Together we carved out these commandments, which form our modus operandi for traffic acquisition. Hopefully, these insights can help you get a head-start in this business whilst avoiding some of the mistakes we have made along the way.

Always on is the holy grail

When starting out learning the ropes of traffic acquisition, my mind was focused on fixed-period campaigns and budgets with well-defined strategic objectives, much like the work at an advertising agency. It turned out that the reality at PriceRunner was quite different…

Traffic acquisition at PriceRunner is all about finding and optimising “always on” opportunities, which can be run with uncapped budgets optimised on ROI targets. The benefit of pursuing these kinds of initiatives is their longevity, which allows you to focus more on campaign optimisation and exploration.

For something to have potential as an always on initiative we evaluate three aspects:

  • Volume potential on short- and long-term basis
  • ROI potential, taking into account the share of new users and branding potential
  • Automation and scalability, across both categories and markets

Aside from saving the overhead of negotiating, setting up, optimising and reporting on ad-hoc campaigns, the always on strategy gives us the benefit of more stable traffic volumes, which is welcomed by both our management and sales teams.

What I came to learn over time is that these types of projects can seem straightforward initially, but you often have to be prepared to take things in micro-steps, test hypotheses, iterate and adjust your approach several times to reach that always on holy grail.

A clear example of an “always on” initiative I’ve had a lot of hands-on involvement with over the course of the trainee program was the development of our content marketing campaigns and strategy. We took everything in gradual steps in the beginning, re-scoped and adjusted how we approached these campaigns through several rounds of tests and optimisations over several months. Now these campaigns make up a significant portion of our monthly traffic and we’ve expanded the content campaigns to include large media site partnerships.

No such thing as voodoo

Voodoo what? The first time I heard this phrase, I didn't know what to think. Today, I've learned never to accept vague explanations or assumptions when problems or anomalies occur in our traffic-related field of work – things always have a root cause.

In this context, “voodoo” means explanations that are rooted in either speculation or assumption. It’s all too easy when you’re managing loads of campaigns and working on multiple projects like we do at PriceRunner, to see occasional anomalies or unexpected results and fall into the trap of guesswork in trying to contextualise potential causes for them. 

So, how do you know when you are in voodoo land? That's when you start saying or hearing things like:

  • It was probably just a fluke
  • Well, let's hope it doesn't happen again
  • It might have been [X third party system] causing it
  • It works now, let's not bother…

I’ve learnt that anomalies are usually the by-product of a minor error that has occurred somewhere earlier in the workflow or process. For example, you’ve probably just made an input error or forgotten to check a box early in the process. Being able to identify and troubleshoot those errors is essential, not only to ensure that mistakes are mitigated but also to develop clear, defined processes that leave you better equipped to tackle anomalies when you see them.

Otherwise, your outlook becomes distorted and fixated on an abstract issue that is impossible to pinpoint, let alone solve. That is problematic when you work in a fast-paced, KPI-driven business like PriceRunner, where reliable reporting is essential. Another big issue is that if these problems are left unresolved, they might affect other campaigns, systems or processes, creating a chain reaction that makes them even harder to solve.

In the Traffic Team at PriceRunner we apply the following process when troubleshooting anomalies we come across:

  • Frame the problem - i.e. identify the discrepancy and take action to resolve it immediately
  • Go as granular as you can in the numbers to see if the discrepancy occurs for a specific device type, browser, landing page type, etc.
  • Verify the discrepancy in different analytical dimensions. For example, the top-down aggregate session numbers are correct, but the bottom-up landing page numbers are not
  • Consider which systems could have caused it and which can be ruled out
  • Decide on which end of the systems the problem was caused - i.e. tracking on our website vs. campaign setup error in platform
  • Reverse engineer the process applied in those systems and review the sequential steps taken from start to finish. Also, check the tech team release logs.
  • Analyse the effects of the anomalies on reporting and other affected systems and take the necessary actions to mitigate these effects, so as not to risk making decisions on faulty data
  • Apply the 5 Whys technique (from Lean) and make sure that the root cause is fixed, even if not in your team
  • Put safeguards and checkpoints in place to not risk re-occurrence in the future

Always have 3 tests running

When starting my career at PriceRunner, the concept of continuous testing was new to me. In school we had learnt to work with campaigns and initiatives in a linear, waterfall manner, grounded in marketing and consumer behaviour theory. 

The work at PriceRunner is much more fast-paced and we tend to prioritise getting real data faster to base our decisions on. One way we do this is by running proof-of-concept projects. The beauty of this approach is that it takes the guesswork and debate out of the picture. It also gives us a baseline to iterate upon.

In our team we have an ambition to always have three kinds of tests running:

1. Learning and staying at the forefront

These tests are primarily centred around learning something completely new to gauge potential opportunities and methods of working that help in keeping us ahead of the curve. The online marketing space is moving fast, and new channels, formats, targeting and optimisation methods emerge all the time!

2. Improve performance of core traffic channels

Our largest paid channels account for a substantial part of our total traffic. Optimising these by just a couple of percentage points leads to substantial traffic gains in absolute numbers and justifies always having a test initiative going. We also find it important to be proactive in these channels, making sure that we are protecting our market share.

3. Identifying new always-on traffic channels

Depending on your business model, only a few traffic channels might be available as prospects for always-on strategies. The same goes for PriceRunner, and we constantly need to push ourselves to find, test and tweak new potential channels.

Establishing our presence in new traffic channels has several benefits:

  • It increases our footprint in multiple channels that complement each other
  • Reduces our dependency on the major traffic channels
  • Unlocks access to new, untapped users

During the trainee program at PriceRunner I’ve been heavily involved in all three kinds of tests. I have worked both on developing new always on channels in the form of content marketing initiatives and on leading our work to improve our retargeting creatives for higher CTR.

Accept that you will make mistakes and fail

As a young person fresh out of university, I came to PriceRunner ready to dig into the work and demonstrate my worth – naturally that meant being somewhat risk-averse as a total newcomer. However, I discovered that I’d better overcome this feeling, because working in the traffic team requires being bold and testing things with a small chance of instant success.

At PriceRunner we apply the rule of six - the expectation that one out of six tests will succeed with enough margin and scale to be developed further and deemed worthwhile. This does not mean that the other five were a waste of time. Rather, it is from these failed tests that we learn the antitheses of what works, develop new hypotheses and find ideas to iterate upon.

Some might argue that 1 out of 6 is too low a success ratio to stay motivated when testing new things. That could be a fair point, but one needs to consider that PriceRunner is a well-optimised business without much low-hanging fruit. Also, lowering the expected success rate invites riskier projects to be tested, which might give the company a stronger competitive advantage if they are successfully implemented.

Naturally, when doing things that are new, either entirely or just to you, you will fail. The key, I’ve learned, is to fail fast, adapt and iterate. At PriceRunner this is put into practice by constantly asking team members for micro-feedback.

The difference between micro-feedback and regular feedback is that you actively request it at the smallest point of iteration or milestone reached, rather than after having carved out a larger piece of work. This quality-assures your work at a much earlier point and gives the rest of your team transparency into what you are doing and the delivery they can expect. When planning projects, I am constantly challenged by my manager asking what the earliest iteration I can present and get feedback on looks like.

$0.67 on the first iteration can be turned into profitability

Online marketers know all too well that a positive return on investment (ROI) is what you’re always hoping to achieve in any campaign work you spearhead. When you work with as many different types of paid traffic campaigns, ad formats and content types as we do at PriceRunner, and are always testing out new ones, you quickly learn that this can be more complicated than just setting up a POC and letting it run. That’s where iterative learning and optimisation come into play.

We generally believe that if we can make $0.67 on every dollar spent in the very first iteration of a POC or a campaign, that’s usually a good indicator that you can optimise and find ways to increase that ROI to break-even or positive levels. 

This concept originates from the pioneers of direct mail. The key is figuring out where you can drive the performance to reap the benefit - this is where the art of the trade comes to the forefront!
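To make the arithmetic concrete, here is an illustrative sketch of how a $0.67-per-dollar first iteration can be compounded past break-even through a series of optimisation rounds. The uplift percentages are made-up assumptions for illustration, not actual PriceRunner figures.

```python
# Hypothetical sketch: compounding small optimisation gains on top of a
# $0.67 first-iteration ROI. The uplift figures are illustrative only.

def apply_uplifts(initial_roi, uplifts):
    """Compound a series of percentage uplifts onto an initial ROI."""
    roi = initial_roi
    for name, uplift in uplifts:
        roi *= 1 + uplift
        print(f"after {name:<24} ROI = ${roi:.2f} per $1 spent")
    return roi

# Assumed gains from each optimisation round (placeholders)
uplifts = [
    ("creative optimisation", 0.10),
    ("category optimisation", 0.12),
    ("bidding optimisation", 0.08),
    ("placement optimisation", 0.07),
    ("targeting optimisation", 0.06),
]

final = apply_uplifts(0.67, uplifts)
print(f"final ROI: ${final:.2f} per $1 spent")  # compounds just past break-even
```

The point of the sketch is that no single knob gets you there; five modest, plausible gains multiplied together are what carry $0.67 over the $1.00 line.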

For us, it is mostly a matter of twisting and turning many knobs and reaping a couple of percentage points here and there rather than finding a silver bullet. Our standard procedure goes something like this:

Creative optimisation

We always run a/b-tests on both ad copy and creatives. Initially we focus on the heading and the images. For larger campaigns we test three different versions of each, giving us nine combinations.
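The test matrix above is just a cross product: three headings times three images. A minimal sketch, with placeholder copy and image names rather than real PriceRunner creatives:

```python
# Sketch of the creative test matrix: 3 headings x 3 images = 9 variants.
# The heading copy and image file names are placeholders for illustration.
from itertools import product

headings = ["Compare prices now", "Find the best deal", "Save on every purchase"]
images = ["product_grid.png", "price_chart.png", "lifestyle_shot.png"]

variants = [
    {"heading": h, "image": img}
    for h, img in product(headings, images)
]

print(len(variants))  # 9 combinations to a/b-test
```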

Category optimisation

At PriceRunner we have the benefit of working with more than two million products spread over 400 categories. We usually test 3-10 categories from the outset in order to learn which work best for the specific channel, audience and format.

Bidding optimisation

As soon as we get some initial numbers, we can start optimising our bids on actual ROI. For some of our platforms this is a built-in feature, as with tROAS, for example.

Placement optimisation

We prune our campaign placement and exposure based on the data we have at hand. For example, by adding negative keywords, excluding certain publishers or running placement bid adjustments.

Targeting optimisation

We look at the converting audience groups, apply negative audience lists and work with the interest- or demographic based targeting.

Landing page optimisation

Here we work with our UX and developer teams trying to lower bounce rate and increase conversion. As this depends on other teams at our company, we tend to base campaigns on already optimised landing pages in order to stay lean in our approach.

Just a word of caution - when working with campaign optimisation, always make sure that you base your decisions on statistically significant data over a sufficiently long time period, to avoid being fooled by short-term trends or inconclusive numbers. This is something I have had to learn the hard way…

Hypothesis and lines in the sand

Prior to joining PriceRunner, my only experience of hypotheses was from statistics class at university. In our team it’s a central concept, inspired by the book The Lean Startup by Eric Ries, and something I have come to appreciate as a guiding principle for my work.

A common pitfall when running tests and POCs is to sit back, wait for the data to come in and then try to react to it. However, this is a missed opportunity to validate and improve your understanding of your business and users. Let me explain why…

  • A well-formed hypothesis defines what is to be built or tested and makes it easy for the team to get a shared view of the objective
  • It aligns your team on the current state of affairs and beliefs
  • It forces you to set a goal for your project beforehand
  • You get the antithesis learning for free, i.e. if the hypothesis isn’t validated it implies that…
  • It lets you iterate and formulate new hypotheses for further testing
  • The learning becomes more universal and sharable within your organisation

A prerequisite for reaping the benefits above is that the hypothesis used is well formulated. A simple template is: Because we believe X, if we do Y, we expect Z to happen.

So far, so good…but there’s one piece missing – drawing a line in the sand. This concept is less well known and originates from the book Lean Analytics by Croll & Yoskovitz. A line in the sand is a predefined KPI threshold that needs to be met for the test to be deemed successful or implementable.

Without that line, it’s easy to find yourself in a vacuum post-testing or attempting to rationalise why a weak numeric result still makes sense. A line in the sand gives you the following benefits:

  • It forces you to be true to yourself with regards to success or failure by taking away the opportunity to retroactively make a call based on the data at hand
  • It tells you if you are making enough progress at enough pace to reach your defined goal, i.e. you can react much earlier if early test results are not satisfactory
  • It allows you to properly size the investment and resources needed to conduct the test based on the expected business value
  • It creates full alignment with regards to the threshold the tests need to meet for a continuation or implementation to be realized
  • It allows you to break down the desired result into components which can be tracked and acted upon

So, how do you define this line in the sand? That’s a tricky question…it’s a combination of previous experience, benchmarking, guesswork and general aspirations. Given that it is a line in the sand, it is moveable, and sometimes you need to adjust to reality as you learn more about the problem at hand.
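The hypothesis template and the line in the sand fit together naturally. Below is a minimal sketch of the idea; the field names, the example hypothesis and the 1.2% CTR threshold are all assumptions for illustration, not a PriceRunner system.

```python
# Sketch of the "Because we believe X, if we do Y, we expect Z" template
# combined with a line in the sand. All values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str              # Because we believe X...
    action: str              # ...if we do Y...
    expectation: str         # ...we expect Z to happen
    kpi: str                 # the metric the line in the sand is drawn on
    line_in_the_sand: float  # predefined threshold, set BEFORE the test runs

    def verdict(self, observed: float) -> str:
        # The call is made against the predefined threshold, never
        # retroactively against whatever the data happens to show.
        return "validated" if observed >= self.line_in_the_sand else "not validated"

h = Hypothesis(
    belief="users respond to price-drop messaging",
    action="run price-drop retargeting creatives",
    expectation="CTR rises above our display baseline",
    kpi="CTR",
    line_in_the_sand=0.012,  # 1.2% CTR, decided before launch
)

print(h.verdict(0.009))  # below the line: not validated, so iterate or kill
```

Writing the threshold down before launch is the whole point: it removes the temptation to rationalise a weak result after the fact.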

Tracking and naming are key to correct attribution

When I started the trainee programme with PriceRunner, I had some basic understanding of UTM tracking and attribution from a digital marketing course I took at university. However, the examples I had come across previously were far less complex than those in real life.

I came to learn, through my work with our paid traffic campaigns, that our campaign tracking and naming serve the purpose of ensuring reliable and accurate reporting. This is essential not only for our own work and KPIs in the traffic team, but also for the management and finance team. It’s a hygiene factor that we couldn’t work effectively without!

What’s the drawback of not having well-defined tracking and naming rules and conventions to live by? Well, there are many, but here are a few key ones worth noting from our perspective:

  • Traffic may not be logged correctly in your analytics platform – highly problematic not only for measuring paid campaign results, but also for cost reporting and follow-ups with ad network platforms
  • Incorrect naming can make it much more difficult to attribute the costs of your campaigns to the correct revenue streams; investigating, identifying and resolving this takes loads of time
  • Reliable decision making becomes very difficult when you’re dealing with faulty or incomplete data

We generally apply a standardised naming format to each ad network where we buy traffic to ensure we can clearly distinguish the type of campaign, when it’s running and the ad content. 

We also use specific tracking codes for different campaign types (e.g. one for retargeting, another for display, another for content campaigns and so on). The last thing you want is your retargeting traffic getting lumped in with your content campaigns, or vice versa!

Covering these bases ensures our reporting is accurate and correctly attributed, both for our own analysis and for cost reporting.
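As an illustration, a standardised naming and UTM tagging convention of the kind described above could be sketched like this. The naming format, parameter values and ad network name are assumptions for the example, not PriceRunner's actual conventions.

```python
# Hedged sketch of a campaign naming + UTM tagging helper. The naming
# format and all example values are illustrative assumptions.
from urllib.parse import urlencode

def build_tracked_url(base_url, campaign_type, market, month, content):
    # A fixed naming format makes campaign type, run period and ad
    # content distinguishable at a glance in the analytics platform.
    campaign_name = f"{campaign_type}_{market}_{month}_{content}"
    params = {
        "utm_source": "adnetwork",    # placeholder ad network name
        "utm_medium": campaign_type,  # keeps retargeting and content apart
        "utm_campaign": campaign_name,
        "utm_content": content,
    }
    return f"{base_url}?{urlencode(params)}"

url = build_tracked_url(
    "https://www.pricerunner.com/category/tv",
    "retargeting", "se", "2021-05", "price_drop_banner",
)
print(url)
```

Because `utm_medium` always carries the campaign type, retargeting traffic can never be lumped in with content campaigns in the analytics platform, which is exactly the failure mode the convention guards against.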

The art of batching and why it matters

PriceRunner is a fast-paced, ever-evolving working environment, so I knew early on that learning to manage my time effectively would be essential to building a future career in the traffic team.

Batching is a secret planning weapon for getting tasks done in a timely and efficient manner, especially when you’ve got several projects running in parallel. The idea is to map out and group your tasks, timeboxing them into set periods for optimal time management. Failing to batch tasks can easily result in one getting swamped with a load of projects clashing together – nobody wants that!

This is how the batching process looks for me:

1. Review the list of tasks to be completed in order of priority

2. Identify the low-effort/high-outcome tasks which can be done to push the needle forward

3. Group similar tasks together and carve out time in the calendar in 1-3-hour blocks for each task (depending on the volume or complexity)

4. Repeat sequentially until the entire project / task is completed
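The grouping logic in the steps above could be sketched roughly like this. The task list, the grouping key and the three-hour cap are made-up illustrations, not a PriceRunner tool.

```python
# Toy sketch of batching: order tasks by priority, group similar ones,
# and carve them into blocks capped at 3 hours. All data is illustrative.
from itertools import groupby

tasks = [
    {"name": "May retargeting set-up", "type": "campaign_setup", "hours": 1.0, "priority": 1},
    {"name": "June retargeting set-up", "type": "campaign_setup", "hours": 1.0, "priority": 2},
    {"name": "Edit ad template A", "type": "creative_editing", "hours": 0.5, "priority": 3},
    {"name": "Edit ad template B", "type": "creative_editing", "hours": 0.5, "priority": 3},
]

# Step 1: order by priority within each task type; steps 2-3: group
# similar tasks together and cap each calendar block at 3 hours.
tasks.sort(key=lambda t: (t["type"], t["priority"]))
blocks = []
for task_type, group in groupby(tasks, key=lambda t: t["type"]):
    block = {"type": task_type, "tasks": [], "hours": 0.0}
    for t in group:
        if block["hours"] + t["hours"] > 3:  # start a new 1-3 hour block
            blocks.append(block)
            block = {"type": task_type, "tasks": [], "hours": 0.0}
        block["tasks"].append(t["name"])
        block["hours"] += t["hours"]
    blocks.append(block)

print(len(blocks))  # one block per task type for this small example
```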

Batching isn’t a difficult concept to grasp at face value, but it takes some time and a bit of trial and error to master, as I came to learn during the trainee programme. I found my time and energy were used more efficiently on recurring tasks, like our monthly campaign set-ups or the creation and editing of our retargeting ad templates, because batching makes the delivery timeline more manageable.

Batching also makes it a lot easier and less daunting to map out tasks that are more expansive and less straightforward in nature, which is helpful when you take on a bold new project (or several simultaneously).

Manage those dependencies early

Active teamwork is often a given, and PriceRunner is no different in this regard. It’s not uncommon for us to run shared projects with other teams, as we like leveraging the extensive knowledge and expertise of our talented colleagues within the company.

Naturally, this means you’re going to occasionally run into dependencies on others, which isn’t a problem, provided you manage those dependencies proactively.

I had to learn to adopt this proactive approach on several occasions, since as a total newcomer I was initially hesitant to ask others for help, which ultimately wasn’t an effective or considerate strategy. When everyone has their own busy schedule, nobody likes being approached frantically with an urgent request you probably should have raised sooner!

In the traffic team we have a general framework to help us mitigate those kinds of scenarios:

1. Map out and anticipate the need or dependency very early on in the planning process

2. Book your resources – this way you can explain to whomever will be assisting you what’s needed, what the expectations are and when you need that input at the latest.

3. Add a hefty margin for delivery – unanticipated things can always pop up, so it’s always wise to give yourself and the person you’re dependent on ample time to deliver, so you can avoid stressful situations.

4. For critical objectives, or when the dependency is of an uncertain nature, work with two parallel tracks. A great example: if you are reliant on new app tracking software to deliver on your app downloads target, source and negotiate with two different providers in parallel.

I’ve found myself on both ends of the dependency continuum. For example, there have been times when I’ve needed input on ad creatives from our Design team, or have been asked to assist with content translation on a narrow turnaround window.

What’s the simplest solution? Reaching out, communicating and managing those dependencies early on!

Treat agencies as an extended team

I think I missed the class covering in-housing vs. outsourcing online marketing work – not something you come across much in academia. Coming to PriceRunner, I landed in a lean and efficient mix of the two models, with a fuzzy line between internal and agency work.

At PriceRunner, external agencies and partners play a big role in our traffic acquisition execution and longevity. Not relying on external agencies would require us to hire a large team of specialists and make us more personnel dependent, increasing operational risk.

However, how much value we can extract from external partners depends a lot on our interactions and collaboration. Below are our guiding principles at PriceRunner:

  • We are not buying a product or a fixed service, we are buying someone’s time and passion – respect that!
  • Acknowledge that people in this field of work are performance driven and want to deliver results – give them clearly defined goals to strive towards
  • Have a solid routine for recurring meetings and reporting with clear expectations – expect nothing less than from your ordinary team members
  • Invite your partners to be a part of your company’s journey and vision – be transparent with what you want to achieve
  • People have shortcomings and will make mistakes – apply the same level of acceptance to your external partners as to your team members in-house
  • Be candid – give feedback, both positive and negative, as you would to in-house team members. People in our line of business want to grow and develop new skill sets
  • Make sure that your partners have the same data as you and be overly generous with communicating what goes on in your business – remember that you are the expert on this!
  • Don't hesitate to connect the different agencies working for you directly with each other – this avoids unnecessary communication overhead and needless misunderstandings

Have you experienced similar learnings and want to take the next step in your career?


We are currently looking for a new junior team member in the traffic team at PriceRunner. Read more at our career website (https://jobs.pricerunner.com/jobs/908940-junior-traffic-acquisition-specialist) or connect with us on LinkedIn (https://www.dhirubhai.net/company/165537/) to learn more!

 
