Optimization vs Personalization: How They Differ, Can Collaborate, and Learn from Each Other

While website optimization, achieved through A/B testing and various research methods, is a commonly practiced approach, personalization is still emerging. More and more companies are embracing Conversion Rate Optimization (CRO), testing, and research, but it is safe to say that most organizations have yet to fully harness the potential of personalization. This article discusses why a strong CRO program is essential for personalization and how the two practices can benefit each other.

Differences and similarities

The primary distinction between optimization and personalization lies in their respective objectives. Optimization seeks to enhance the user experience for all website visitors, while personalization aims to tailor the experience to specific segments. The crucial divergence lies in how hypotheses are formulated. Personalization necessitates a clear understanding of the target audience, diving deeper than general A/B tests focused on the entire population. Although the latter also requires consideration of visitor needs and preferences, it is less specific.

Another difference lies in the implementation process. After an A/B test in optimization proves successful, the changes can be directly applied. In contrast, personalization relies on segmentation through a Customer Data Platform (CDP). Consequently, a hard-coded implementation is often impractical, and the personalization continues through the CDP, as it relies on creating visitor profiles and segments to effectively target them.

Despite the differing goals, the working methodologies are, or at least should be, similar. Learning through testing is fundamental to both approaches. Hypotheses are grounded in theory or research, whether qualitative or quantitative. A test plan is formulated, encompassing clear KPIs, target demographics, and the expected duration of the test. Subsequently, a test is developed, launched, and followed by a thorough analysis. However, the extent to which this process is adhered to in personalization can be a subject of debate.

How they can collaborate

Optimization and personalization should work together. As illustrated in the pyramid, we consider personalization as the ultimate phase in e-commerce. Prior to delving into personalization, it’s imperative to establish a solid foundation of usability, which is accomplished through optimization. Even the most sophisticated personalization efforts can’t salvage a poorly performing website. Defining baseline usability can be somewhat elusive, but it essentially involves ensuring a website is free of glitches, implementing best practices, and addressing major obstacles in the checkout process. Furthermore, this baseline encompasses developing a comprehensive understanding of your customers’ desires, needs, and the factors impeding conversion.

Optimization vs Personalization

When an organization is ready to run both general optimization and personalized experiences, it becomes essential to keep these activities in sync. For instance, imagine running an A/B test on the product page for all website visitors that fails to produce the anticipated improvement. A closer analysis, however, reveals variations in test performance among different segments (such as devices, number of pages viewed, new versus returning users, and traffic sources). This opens the door to personalization: the same adjustment to the product page can be retested on a specific segment. The reverse scenario is also plausible; if a personalization A/B test proves successful within a particular segment, it can be considered for the entire website user base, provided it is first tested there.

Furthermore, alignment is crucial to prevent conflicts between traditional A/B tests and personalization A/B tests that may inadvertently run simultaneously. While this might seem straightforward, things get easily tangled up when different teams or individuals are collaborating.

How personalization can evolve

Previously, we emphasized that the working methods between optimization and personalization are, or should be, quite similar. However, the motivation behind this article is the observation that in many companies, these similarities often do not hold true. CRO practices, such as formulating well-founded hypotheses, conducting user research, implementing SRM checks, or performing in-depth test analyses, are not standard procedures within personalization projects.

A more pressing concern is the fact that systematic A/B testing is not yet a widespread practice in many organizations. Personalization initiatives are frequently launched without validating their impact or testing the optimal means of visualizing or presenting the personalization. Additionally, we have noticed that some organizations carry out random personalization experiments without a clear strategic framework behind them.

When starting with personalization, it is crucial that the organization has a strong CRO program in place. This entails not only having a well-defined process that encompasses research, ideation, test development, and analysis but also necessitates the establishment of a clear strategy. This strategy should outline how both optimization and personalization can contribute to achieving the business goals or targets. Failing to establish such a strategy runs the risk of investing considerable time and effort on things that may ultimately not contribute to overall success. Although single personalization efforts can be effective, when all personalizations work together toward a shared goal, the chances of achieving better results are higher.

To sum up

Many organizations are increasingly embracing CRO, which includes both optimization and personalization. However, there’s room for improvement in how organizations approach this. The crucial factor is to understand the distinctions and subtleties of personalization while maintaining a process similar to traditional CRO. Most importantly, personalization should not be isolated but integrated with optimization, picking up where optimization leaves off. All these efforts should align with a long-term strategy that leverages experimentation to contribute to business goals.

Author: Nick Schaperkotter, Team Lead CRO & Design at Yellowgrape.

Introducing the Experimentation Loop

Take a look at the history of technological progress: advanced technology did not come out of the blue. It evolved, with one advancement becoming the foundation for another. The smartphone industry, for instance, stands on numerous technological breakthroughs. From the initial landline telephones came the concept of cordless phones, followed by the integration of mobile communication with computing power.

Over time, we witnessed an evolution from personal digital assistants, such as BlackBerry devices, to the advent of the iPhone, which paved the way for the smartphone industry. It’s a loop: each advancement created new opportunities that, in turn, led to further progress. This loop has revolutionized our technology because no advancement was ever left as a loose end.

What if we followed the same approach toward experimentation on digital properties? 

Experimentation can sometimes lift your conversion rate beyond expectations and at other times fall short, even for a promising hypothesis. That’s part and parcel of the process. But if you stick to a linear approach, closing a test after getting its results and moving on to test something new, it will rarely give you breakthroughs. You’ll miss out on chances to improve conversion rates and overlook valuable insights for future success. At best, your growth rate will plateau.

That is why it’s time to move on from the linear approach and take a strategic approach with the Experimentation Loop to realize the true conversion potential of your websites and mobile apps.

But what is an Experimentation Loop? Let’s delve into this fascinating concept.

What is an Experimentation Loop?

An Experimentation Loop starts with identifying a problem through behavior analysis and creating a solution in the form of a hypothesis. Then, you run experiments to test the hypothesis. You either win or lose; with a linear approach, the experimentation cycle stops here. With the Experimentation Loop, however, you investigate the test results to uncover valuable insights. These insights can yield new hypotheses, which lead to further experiments, creating a continuous cycle of learning and optimization.
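The loop just described can be sketched in a few lines of Python. Everything below is illustrative scaffolding; the function bodies stand in for real research and testing work, not any actual tool’s API:

```python
# Sketch of the Experimentation Loop as code. Every function here is an
# illustrative placeholder, not a real testing-tool API.

def run_experiment(hypothesis):
    # In practice: launch an A/B test and wait for statistical significance.
    return {"hypothesis": hypothesis, "winner": True}

def analyze_results(result):
    # In practice: dig into segments and anomalies behind the result.
    # Win or lose, the analysis yields at least one follow-up idea.
    return ["follow-up idea from: " + result["hypothesis"]]

def experimentation_loop(initial_problem, max_cycles=3):
    """Identify -> hypothesize -> test -> analyze -> repeat."""
    backlog = [initial_problem]
    learnings = []
    for _ in range(max_cycles):
        if not backlog:
            break
        problem = backlog.pop(0)
        hypothesis = "Addressing '%s' will lift the primary KPI" % problem
        result = run_experiment(hypothesis)
        insights = analyze_results(result)   # the step a linear approach skips
        learnings.append((hypothesis, insights))
        backlog.extend(insights)             # insights seed the next cycle
    return learnings
```

The key line is the last one in the loop body: feeding the insights back into the backlog is exactly what distinguishes the loop from the linear approach.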

Here’s a visual illustration of how the Experimentation Loop works:

Experimentation Loop

With Experimentation Loops, you are not just stopping at the results but diving deeper to understand the reasons behind the results, identifying anomalies, and discovering if particular audiences (or participants of the experiment) react differently from others. This becomes the foundation for your new hypothesis and experiments.

It is especially critical in today’s ever-changing digital landscape, where user behavior is constantly evolving. By embracing the continuous learning and optimization provided by Experimentation Loops, you can stay ahead of the curve and keep improving your conversion rate.

Understanding the Experimentation Loop with an example

Here is a hypothetical example that explains how the Experimentation Loop functions:

Consider a landing page created with the intent to generate leads. The original version of the page has a description of the offering in the first fold, followed by the call-to-action (CTA) button that will lead to the contact form.

Let’s say that the behavioral analysis of the landing page reveals many visitors dropping off on the first fold. This leads to the hypothesis that adding a CTA above the fold will improve engagement. So you create an A/B test comparing the original version with a variation that has an additional CTA above the fold.

Here is the visual representation of the original and the variation of the landing page:

Visual representation of the original and the variation of the landing page

Let’s assume that the test ends with the variation outperforming the original in terms of conversion rate (i.e. the number of clicks on the CTA). Here, the traditional approach concludes the test. But with the Experimentation Loop, we analyze the results to come up with more hypotheses and open up multiple opportunities for improvement.

Suppose we narrow in on a hypothesis about the CTA button itself. The second round then involves creating multiple variations of the CTA text and color to optimize the button. To find the best combination, we can run a multivariate test comparing the original version against multiple variations.

Multivariate test

At the end of the test, there can be an uplift in conversion, which would not have been possible with the traditional approach. And even if the test fails to produce an uplift in conversion rate, it still yields insights that help you learn more about your users.

Likewise, we can check the results to see whether a particular audience segment engaged with the button more than others (and whether they share common attributes). If so, this could lead to a hypothesis for a personalization campaign, such as personalizing the heading or subheading before the CTA based on the behavioral, demographic, or geographic attributes of that segment.

Thus, an Experimentation Loop opens up the opportunity to improve, which is not possible with a siloed and linear approach.

But how can you carry out the successful execution of the Experimentation Loop?

The experimentation loop consists of three steps, and we will delve into each of these steps in the upcoming section.

Three steps in the Experimentation Loop

Following are the three key steps in the Experimentation Loop for improving conversions.

Three key steps in the Experimentation Loop for improving conversions

Step 1: Identify problems

The Experimentation Loop starts with identifying existing problems in the user experience. First, you perform a quantitative analysis, going through key metrics like conversion rate, bounce rate, and page views to identify the low-performing pages in the user journey.

Once you zero in on the weak links, you can perform a qualitative analysis to understand the pain points. Session recordings and heatmaps show how each element that affects the conversion rate is performing.

Once you identify the problems associated with these elements, you can draft a hypothesis.
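As a minimal sketch of that first quantitative pass, assuming made-up per-page metrics and purely illustrative thresholds:

```python
# Hypothetical per-page metrics; the thresholds are illustrative, not benchmarks.
pages = [
    {"url": "/home",     "bounce_rate": 0.35, "conversion_rate": 0.041},
    {"url": "/pricing",  "bounce_rate": 0.72, "conversion_rate": 0.008},
    {"url": "/checkout", "bounce_rate": 0.58, "conversion_rate": 0.012},
]

def find_weak_pages(pages, max_bounce=0.6, min_conversion=0.01):
    """Flag pages that exceed the bounce ceiling or fall below the conversion floor."""
    return [p["url"] for p in pages
            if p["bounce_rate"] > max_bounce or p["conversion_rate"] < min_conversion]

# Candidates for qualitative follow-up with heatmaps and session recordings.
weak = find_weak_pages(pages)  # → ["/pricing"]
```

The real work happens afterwards in the qualitative analysis; this filter only decides where to point the heatmaps and recordings.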

Step 2: Build hypothesis from insights

After identifying elements that are affecting the conversion negatively, you can start digging into the insight data to make sense of it.

For example, suppose the quantitative and qualitative analyses identified the banner image’s position as the reason for a blog page’s high bounce rate. You can then build a hypothesis about this image’s position that offers a solution to the high bounce rate.

While framing the hypothesis, you should specify the key performance indicator (KPI) to be measured, the expected uplift, and the element to test.
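When specifying the expected uplift, it also helps to check whether that uplift is even detectable with your traffic. A rough per-variant sample-size estimate using the standard normal approximation at 95% confidence and 80% power (the baseline and uplift figures below are invented):

```python
import math

def sample_size_per_variant(baseline_cr, expected_uplift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative uplift.

    z_alpha ~ 95% confidence (two-sided), z_beta ~ 80% power.
    """
    p1 = baseline_cr
    p2 = baseline_cr * (1 + expected_uplift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate, hoping for a 10% relative uplift:
n = sample_size_per_variant(0.03, 0.10)  # tens of thousands of visitors per variant
```

If the number that comes out is far beyond your realistic traffic, the hypothesis needs a bolder change or a higher-traffic page before it is worth testing.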

Next, you move forward to run the experiment.

Step 3: Run experiments

Based on the hypothesis, you choose a test type: an A/B test, multivariate test, split URL test, or multipage test. You run it until the test reaches statistical significance.
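For a simple conversion-rate A/B test, "reaching statistical significance" can be checked with a two-proportion z-test. A minimal sketch with invented visitor and conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 300/10,000 conversions on the control, 380/10,000 on the variation:
z = two_proportion_z(300, 10000, 380, 10000)
significant = abs(z) > 1.96  # ~95% confidence, two-sided
```

Note that repeatedly peeking at this statistic and stopping on the first significant value inflates false positives; in practice the sample size should be fixed in advance or a sequential method used.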

The test may result in a change in the conversion rate, and the insights about the user behavior toward the new experience can open doors to identify areas for the second cycle of the experimentation.

Thus, the Experimentation Loop will constantly carve a path to improve conversion.

Experimentation Loop and sales funnel

Running Experimentation Loops at every stage of the funnel can substantially improve the conversion rate and provides a strategic framework for testing hypotheses rather than a haphazard approach. You can run an Experimentation Loop to enhance the conversion rate of the same element, as in the earlier example of moving from an A/B test to a multivariate test.

Alternatively, you can analyze the insights from a test that improved a metric to see how it affected other metrics, which could lead to the second cycle of the test.

For instance, let’s take the awareness stage. The goal in this stage is to attract users and introduce them to products or services on a digital platform.

Suppose you ran an A/B test on search ads to get more users to the website and monitored metrics like the number of visitors.

Let’s say the test led to an improvement in traffic. Now, you can move on to analyze other metrics, such as % scroll depth and bounce rate for the landing page, and identify areas for improvement. To pinpoint the specific areas where users are leaving, you can use tools such as scroll maps, heat maps, and session recordings. The analysis can lead you to create hypotheses for the second leg of the experiment. It could involve improving user engagement by testing a visual element or a catchy headline.

Likewise, running the Experimentation Loop at other stages of the funnel can optimize the micro journey that the customer takes at each funnel stage. Moreover, the Experimentation Loop can lead to hypotheses creation from one funnel stage to another, resulting in a seamless experience that is hard to achieve with a siloed approach.

How Frictionless Commerce uses Experimentation Loops for conversion copywriting

Frictionless Commerce, a digital agency, has relied on VWO for over ten years to conduct A/B testing on new buyer journeys. They have established a system where they build new experiments based on their previous learnings. Through iterative experimentation, they have identified nine psychological drivers that impact first-time buyer decisions.

Recently, they worked with a client in the shampoo bar industry, for whom they created landing page copy that incorporated all nine drivers. After running the test for five weeks, they saw a 5.97% increase in conversion rate, resulting in 2,778 new orders.

Shampoo bar client example: how Experimentation Loops can bring valuable insights

It just shows how Experimentation Loops can bring valuable insights and take your user experience to the next level.

You can learn more about Frictionless Commerce’s experimentation process in their case study.

Conclusion

Embracing the continuous learning and optimization provided by Experimentation Loops is crucial for businesses looking to stay ahead of the curve and improve their conversion rates.

To truly drive success from your digital property, it’s time to break the linear mold and embrace the Experimentation Loop. By using a strategic framework for testing hypotheses, rather than a haphazard approach, businesses can continuously optimize and improve their digital offerings.

Author: Ketan Pande, Content Marketer at VWO

Automation in Experimentation: 8 ways to automate your CXO Program

One of the largest obstacles when expanding experimentation programmes and speeding up change is maintaining high programme quality, especially when working with multiple teams at different maturity levels. Here, automation can unlock a lot of opportunities for optimizers. In this article, we list some of its benefits and the automation opportunities you can use in each phase of your CXO program.

Mark de Winter is the Director of Product at Clickvalue. Edwin de Brouwer is Lead Conversion Specialist at Clickvalue. Clickvalue is one of the sponsors of DDMA Experimentation Heroes 2023 on October 31st. Curious about the very best experiments of 2023? Secure your ticket now at: experimentationheroes.com.

The benefits of CXO automation

The benefits of automation within experimentation programs can be divided into four categories:

  • Efficiency: automating parts of your CXO program helps lower costs and creates opportunities to reallocate resources and increase experimentation and change velocity.
  • Scale: when scaling towards double or triple digits in monthly experiments, you can’t do without automation; otherwise you’ll quickly lose your grip on the program.
  • Process: many aspects of the experimentation process can be highly standardized, from detecting winners to implementation requests, MDE calculations, priority calculation, and so on.
  • More fun: automation minimizes boring, cumbersome tasks, freeing team members to focus on the work they like most.
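Several of the standardized process steps mentioned above, priority calculation in particular, are trivially scriptable. A sketch using a simple ICE-style score; the backlog items and scores are invented:

```python
def ice_score(impact, confidence, ease):
    """Classic ICE prioritization: each input scored 1-10, higher is better."""
    return impact * confidence * ease

backlog = [
    {"idea": "Sticky checkout CTA", "impact": 8, "confidence": 6, "ease": 9},
    {"idea": "New mega-menu",       "impact": 7, "confidence": 4, "ease": 3},
]
# Highest-priority idea first; the top of the list is the next experiment.
backlog.sort(key=lambda i: ice_score(i["impact"], i["confidence"], i["ease"]),
             reverse=True)
```

Once scoring is a function rather than a meeting, it can run automatically whenever a new opportunity lands in the backlog.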

When considering the potential applications of these automations, we look at the four phases of the CXO process, each of which presents numerous opportunities for automation – 8 in total.

  • Phase 1: Problem Discovery: Exploratory research to identify opportunities
  • Phase 2: Problem Validation: Prioritizing opportunities by stacking evidence
  • Phase 3: Solution Discovery: Creation of solutions
  • Phase 4: Solution Validation: Validating solutions (through experimentation)

The benefits of CXO automation

Phase 1: Problem Discovery (Research)

In the Problem Discovery phase, where the goal is to uncover as many problems and optimization prospects as possible, several tasks can be automated. If you don’t have access to a continuous flow of opportunity research to fill your opportunity backlog, automation can help you continuously discover opportunities, accelerating experimentation and increasing the pace of change.

Some examples:

  • Automated anomaly detection: Anomalies are large deviations or changes in your data. Many UX tools, such as Contentsquare, Fullstory, and Decibel, offer some form of anomaly detection, but alerts can also be set up on predetermined thresholds in your own analytics stack. These anomaly reports can be automated into your workstream by making sure they automatically end up in your insights backlog, ready to be processed by a researcher.
  • AI in research: AI is quickly finding its way into opportunity research, whether for finding the rationale behind user behaviour or for rapidly processing user data. Parts of this process can be automated as well. For example, you can run a simple language-processing model over your customer insights, opportunities, and/or experiment database to generate optimization opportunities from your own data. These suggestions still need review by an experienced researcher, but the approach becomes increasingly powerful as your database grows and the underlying models improve.
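The threshold-based variant of anomaly detection mentioned above can be a few lines of code. A sketch that flags days deviating more than three standard deviations from the trailing week; the daily numbers are invented:

```python
import statistics

def detect_anomalies(series, window=7, z_threshold=3.0):
    """Return indices whose value deviates > z_threshold sigmas from the trailing window."""
    anomalies = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mean = statistics.mean(trailing)
        stdev = statistics.stdev(trailing)
        if stdev > 0 and abs(series[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies

# Daily conversions; the sudden drop on the last day should be routed
# to the insights backlog for a researcher to investigate.
daily = [120, 118, 125, 122, 119, 121, 124, 120, 123, 40]
```

In a real pipeline this check would run on a schedule against your analytics export, with flagged indices written to the insights backlog rather than returned to a caller.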

To increase the number of discovered problems and opportunities, it is crucial to move away from ad-hoc research towards always-on research, where opportunities find you instead of you repeatedly restarting expensive user research. Use those research resources to interpret data, improve algorithms, and stack evidence for opportunity prioritization.

Phase 2: Problem Validation

During the Problem Validation phase of the CXO Process, insights and problems found in the problem discovery phase are validated. Any discovered user problems are looked at closely to determine if they actually exist and if they need to be fixed or acted upon. This is done by stacking evidence from multiple research sources and prioritising problems and opportunities. Several aspects of this validation can be automated.

Some examples:

  • Prediction models: To predict which opportunity will have the largest impact or the highest chance of success, you can use prediction models. AI is one option, but predefined queries on your insight, opportunity, or experiment database also work. For example, if you want to know whether urgency-based opportunities on a homepage have more impact than social-proof opportunities, you can set up predefined queries that surface the highest-impact opportunities based on experiment or research data. A customer knowledge bank containing opportunity and experiment data makes this possible.
  • Automated opportunity mapping: Think of Customer Journey Maps, Opportunity Solution Trees, goal trees, etc. Opportunities are not one-dimensional or linear; they are dynamic and have clear relationships with each other. Mapping them in a structured manner, for example in an Opportunity Solution Tree, can drastically improve your grip on the research process. One way to do this is to record the relations between opportunities, experiments, and solutions in your customer knowledge bank. Once these relations are established, a visual representation can be generated to present opportunities and opportunity areas to your team, ensuring a better understanding of what research is delivering.

Phase 3: Solution Discovery

During the Solution Discovery phase, solutions are gathered to fix the issues found during problem discovery and validation. UX designers, copywriters, and psychologists think about the best ways to solve the issues and create the best-performing user experience.

Some examples:

  • Prediction models: As in the problem validation phase, prediction models can be used here to determine which solution will likely have the highest impact or chance of success. If your customer knowledge database is set up so that solution designs and proposed changes can be processed quickly, you can leverage that information to prioritize your solutions and UX designs based on, for example, expected impact or past success.
  • AI in solution design: Many design tools, such as Figma, embrace AI to quickly generate multiple designs based on certain elements or a single design. AI can also be used to quickly generate solution ideas for a specific problem or prompt, saving large amounts of precious UX design time.

Phase 4: Solution Validation

The Solution Validation phase, in which solutions are validated through – for example – experiments, is a phase where automation is omnipresent. Many parts of this process are highly standardized and can be easily automated.

Some examples:

  • Automated experiment reporting: Because experiments are so standardized, reporting can be automated easily. For example, Google Sheets can be connected to predefined data sources to include a hypothesis, KPI impact dashboard, revenue impact, imagery, conclusions, and recommendations. This saves a lot of work creating decks. Many workflow tools also offer APIs to automatically generate large parts of reports, leaving more room and time for optimizers to add interpretation and context.
  • Automated monitoring & decision support: Monitoring your own first-party experiment data from analytics can become quite a hassle, especially when running a high-velocity program. Because experiments are set up in a standardized way, with predetermined primary goals, this task can be easily automated.
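A sketch of what that monitoring and decision support can look like: each running experiment, with its predetermined primary goal, is labeled either "keep running" or "ready for a decision". The experiment records and thresholds below are hypothetical stand-ins for what you would fetch from a testing tool’s API:

```python
def monitoring_report(experiments, min_sample=5000, z_threshold=1.96):
    """Label each running experiment: keep running, or ready for a decision.

    Assumes each experiment dict carries a per-variant sample size and a
    z-score on its predetermined primary goal (both invented here).
    """
    report = []
    for exp in experiments:
        if exp["visitors_per_variant"] < min_sample:
            status = "keep running: sample too small"
        elif abs(exp["z_score"]) > z_threshold:
            status = "ready: significant on primary goal"
        else:
            status = "ready: no detectable effect on primary goal"
        report.append((exp["name"], status))
    return report

running = [
    {"name": "PDP sticky CTA", "visitors_per_variant": 12000, "z_score": 2.4},
    {"name": "New footer",     "visitors_per_variant": 3000,  "z_score": 0.5},
]
```

A scheduled job producing this report daily replaces the manual round of opening every test in the tool, while leaving the actual ship/kill decision with the optimizer.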

Automations in solution validation can bring a lot: from automatically fetching all active experiments from testing tools to creating the needed segments on the fly and extracting all data daily. In short, you benefit in three ways:

  • Reduces the manual work of monitoring tests by 80%
  • Increases quality, thanks to a richer view of the data and a lower chance of human error
  • Works in coherence with any A/B testing tool

To conclude

Automation in CXO is an important step toward the future, and it will continue to grow as capabilities and innovations increase exponentially with AI. Automation creates room for optimizers to be creative and to work on expanding an experiment-driven culture within the organisation. The fun stuff!

Experimentation Heroes 2023: How to Prepare Your Case

Have you conducted a brilliant experiment (or a series of experiments) with a carefully thought-out design and impressive results? Then you deserve recognition for your work. Perhaps Experimentation Heroes ’23 is the perfect opportunity for you. During this event, a select group of experimentation specialists will have the chance to share their cases on stage. Do you want to be a part of it? Then you can submit your very best experiment for consideration.

To stand on the stage of Experimentation Heroes, your case must first pass through an expert jury. These leading experts will determine which cases proceed and shape the program of Experimentation Heroes 2023. But how can you best prepare? In this article, we provide valuable tips and tricks to make a lasting impression on the jury with your case. Who knows, you might find yourself on stage and be awarded as a winner in one of the categories.

Why participate?

There are several reasons why participating in Experimentation Heroes is a no-brainer for every experimentation specialist:

  1. Participation is an excellent opportunity to learn from other companies and be inspired by how they have tackled challenges within their organizations. With more and more companies and departments embracing experiment-driven work, this method of validation and data-informed decision-making fosters a transparent culture that values knowledge sharing, which is also evident in companies’ eagerness to share their expertise in the field.
  2. By participating, you build knowledge and have the opportunity to test your case with other professionals.
  3. As a nominated Experimentation Hero, you receive recognition for your work, which can contribute to a significant boost in confidence and recognition for both you and your entire organization, leading to further adoption of an experiment-driven culture.

Three main goals: entertain, inform, and motivate

When submitting a case, it is important to keep three main goals in mind:

  1. Entertain the reader: A jury is no different from a consumer; you need to continuously engage their attention. If a case fails to capture their interest from the start, you risk losing their attention. Therefore, structure your case like a story and provide reasons for the reader to keep reading.
  2. Inform the reader: It is essential to clearly explain what you have done, how you have done it, and when it happened in your case submission. Support your case with data and appropriate visualizations.
  3. Motivate the reader: It is crucial to emotionally engage the readers with your story, thereby inspiring them to take action. After all, you want the jury to choose your case submission as the winner!

Failure to consider the above points may result in the jury perceiving your case as dull, uninspiring, or even unreliable. Keeping these three main goals in mind will guide you towards developing a high-quality case submission.

Important Do’s and Don’ts

If you present to the jury that you discovered a Type 1 error in the data during a multivariate test and you aim to increase the percentage of micro-conversions in the sales funnel by displaying USPs, the jury will be confused. You must provide a clear narrative that leaves no room for questions. Here are some do’s and don’ts to consider when preparing your case.

Do’s

  1. Thoroughly review the criteria and categories: Before you start writing your case submission, carefully read through the criteria and categories. This way, you will know exactly what is expected of you and in which category your case submission belongs.
  2. Use the submission form: In the submission form of each category, you will find a checklist of what the jury wants to know about the case.
  3. Tell a story: Stories can often be divided into three parts. For example, a hero who needs to defeat a dragon (1). Think of a customer trying to complete their purchase, but encountering a problem or conflict/new twist (2). Fortunately, there is also a solution (3). Maintain this storyline in your case submission. Tip: the template aligns with this storyline, so you can easily adopt it.
  4. Be honest and provide evidence: The jury often sees case submissions with significant conversion increases. While this is good for the company, without evidence or logical reasoning, the jury doesn’t know what to believe. Therefore, always use your data, such as visitor or transaction numbers, device category ratios, A/B test results, or durations, and be transparent to support your claims.
  5. Show what went wrong during the experiment: Don’t be afraid to show what went wrong during the experiment. Also, demonstrate what you have learned from these mistakes.
  6. Include a powerful conclusion; a WOW factor: Every year, dozens of case submissions are received. If your submission stands out from the rest, it will rise to the top of the stack more quickly. Therefore, think about a WOW factor. For example, did you use a new approach, involve different departments, or is your problem-solving innovative or the lessons learned groundbreaking? Highlight these aspects! The effectiveness of emphasizing your WOW factor is also evident from previous award winners.

    The value of the WOW factor: VodafoneZiggo won an award last year with their case in which they involved the legal department in the planning, execution, and follow-up of the experiment. Watch the case-recording here.

  7. Utilize a second reader: Have someone who was not involved in the experiment review the case before submission. An extra pair of eyes never hurts.

Don’ts

  1. Avoid using jargon: Sentences like “We conducted an NHST test on the POP to improve UX” or “We want to ensure the integration of our design system” can raise unnecessary questions for the jury. Assume that people without knowledge of UX and CRO are reading your story. Therefore, never write in jargon and make it easily readable.
  2. Be honest about the value added by your experiments: Calculate and communicate the value of your experiments honestly. For example, it is not realistic to simply multiply a conversion increase by your annual revenue and present that as the experiment’s contribution to revenue. Account for effects that diminish over time and apply Bayesian or other statistical corrections.
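For illustration, the kind of correction meant here can be sketched in a few lines of Python. All figures, the 50% shrinkage factor, and the monthly decay rate below are hypothetical assumptions chosen for the example, not a prescribed method:

```python
# Naive vs. corrected estimate of an experiment's revenue contribution.
# Every number here is hypothetical; the decay model is one possible correction.

annual_revenue = 10_000_000  # yearly revenue of the optimized funnel (EUR)
measured_uplift = 0.05       # 5% conversion uplift measured in the A/B test

# Naive claim: multiply the measured uplift straight through a full year.
naive_contribution = annual_revenue * measured_uplift

# More honest: shrink the measured effect toward zero (a winner's-curse /
# Bayesian-style correction) and let the remaining effect fade each month.
shrinkage = 0.5              # assume only half the measured effect is real
monthly_decay = 0.9          # assume the effect fades ~10% per month

corrected_uplift = measured_uplift * shrinkage
monthly_revenue = annual_revenue / 12
corrected_contribution = sum(
    monthly_revenue * corrected_uplift * monthly_decay**m for m in range(12)
)

print(f"naive:     EUR {naive_contribution:,.0f}")
print(f"corrected: EUR {corrected_contribution:,.0f}")
```

The corrected figure comes out at a fraction of the naive one, which is exactly the gap the jury wants you to acknowledge.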
  3. Don’t unnecessarily complicate things: Keep it concise. This may be stating the obvious, but unfortunately, the jury still receives cases regularly where the story is lengthy and unclear. This demotivates the jury from reading further. Therefore, always keep your case submission short and concise.
  4. Don’t leave out important information: During the evaluation of case submissions, we often come across entries of 1 to 2 pages for the entire experiment. This is insufficient for the jury to get a sense of the experiment’s quality. We recommend keeping your story concise, but ensure that essential information is included. If you have amazing results, definitely mention them, but don’t forget that the idea, execution, and follow-up are even more important. Let these aspects shine!
  5. Avoid making large leaps in reasoning and don’t jump from one topic to another: It still frequently happens that the jury is unclear about why certain choices were made. This is unfortunate. For example, not all case submissions contain a clear trigger. Which problem has been identified, based on which data, and how does the experiment expect to solve it?

An example of a hypothesis that makes too large leaps and skips information is: “We observe a high bounce rate on the product detail page, so by adding persuasive information, we increase the motivation of a user to make a purchase, resulting in higher conversion.” This does not provide the jury with sufficient justification for why persuasive information was chosen, why specifically that information, and why it is persuasive for the visitor. Also, do not introduce new conclusions at the end of your story that have not been mentioned before. This raises unnecessary questions.

Finally: with these tips and tricks in mind, you are well on your way to creating a stunning case submission for Experimentation Heroes ’23. By entertaining, informing, and motivating with your story, you ensure that the jury stays interested and perhaps selects your case as the winner. Go through all the do’s and don’ts carefully, and who knows, you might shine on stage with your case and walk away with the victory.

3 steps to determine the maturity of your CRO strategy

Until very recently, you could define conversion rate optimization (CRO) as the practice of increasing the percentage of users who perform a desired action on a website, such as buying a product or service, signing up for a newsletter, or simply clicking on certain links. However, for leading CRO practitioners, this definition doesn’t cut it anymore. It’s too short-term and too tactical. As the saying goes, what got you here won’t get you there. 

Becoming successful at CRO today means going beyond short-term, tactical thinking. To drive digital transformation in your organization and influence the C-suite, you need to think strategically, not tactically. But what does it mean to think strategically about CRO? What’s the difference between CRO strategy and tactics? Most importantly, what steps should you take to become a strategic CRO mastermind? This article by Katie Leask, Global Head of Content at Contentsquare, sponsor of the DDMA Dutch CRO Awards, defines the questions you need to ask yourself to determine the maturity of your CRO activities, and helps you diagnose where you are now on that journey.

The key takeaways

  • While tactical CRO is important, strategic CRO helps to optimize ROI and lead digital transformation across your organization.
  • Strategic CRO entails a mindset shift towards long-term rather than short-term thinking. You must gain a deeper understanding of your customers and align CRO goals with the broader goals of your business.
  • To diagnose where you are on your journey towards strategic CRO, work out where the gaps are. These could be in the skillset of your team, in your processes, or in the data points you currently have access to.

According to some industry experts Contentsquare interviewed, there are 7 key steps you can take towards becoming a strategic CRO mastermind:

  1. Align your optimization goals with the wider goals of your business
  2. Focus on generating long-term insights rather than short-term metrics
  3. Create a CRO roadmap and focus on testing one thing at a time
  4. Build a culture of continuous experimentation and improvement
  5. Use a combination of software tools
  6. Build a multi-disciplinary CRO team
  7. Use multiple data points to gain a deeper understanding of your customers

Check out the full statements of the industry experts Contentsquare interviewed here.

Defining strategic CRO

The first step in becoming a strategic CRO mastermind is to define what strategic CRO is.

Strategic CRO vs tactical CRO

Marketers and CRO practitioners are used to thinking tactically. They run multiple A/B tests across a range of variables and then try to draw conclusions from the results. This tactical CRO mindset focuses on conversion percentages, averages, and benchmarks. But having such a data-led approach can lead to not focusing enough on long-term goals. It can also lead to a lack of joined-up thinking.

A strategic CRO mindset instead starts with the bigger picture. On the one hand, that means trying to gain a deeper understanding of your customers and prospects. On the other, it means looking at where CRO fits into the broader strategic goals of the business.

Strategic CRO is about defining long-term goals, and then working out what data points you need to help reach those goals. Then focusing on the KPIs and tactics that will get you closer to those goals.

Long-term vs short-term thinking

Moving from short-term goals to a long-term vision means shifting focus. Rather than focus purely on front-end tests—for example optimizing landing page conversion—strategic CRO defines the long-term goals and then works backward.

To get a long-term view you need to:

  • Understand visitors’ intentions
  • Identify and resolve any user experience issues or friction on your website
  • Understand and overcome visitors’ objections

Here’s one example. Tactical CRO might aim to maximize conversions to a sale. Strategic CRO aims to attract and convert customers who are more likely to spend more money with you. These customers buy more of your products or services and stay with you longer.

As another example, tactical CRO might aim to maximize click-throughs from a specific page. Strategic CRO looks at the entire customer journey and aims to optimize every stage of the customer experience.

KPIs for strategic CRO

Here are examples of the kinds of KPIs that help drive strategic CRO:

Annual recurring revenue (ARR) – used to work out the annual value of a subscription or contract. Because ARR is the amount of revenue that a company expects to repeat, you can use it to predict future growth.

Customer lifetime value (CLV) – the total value to your business of a customer over the whole period of their relationship with you.
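A common simplification estimates CLV as the customer's yearly value times the expected length of the relationship. A minimal sketch, with purely hypothetical figures:

```python
def customer_lifetime_value(avg_order_value: float,
                            orders_per_year: float,
                            avg_years_retained: float) -> float:
    """Simplified CLV: yearly customer value times expected relationship length."""
    return avg_order_value * orders_per_year * avg_years_retained

# Hypothetical customer: EUR 80 average order, 4 orders per year,
# retained for 3 years on average.
print(customer_lifetime_value(80, 4, 3))  # 960.0
```

More elaborate CLV models add margins, discount rates, and churn probabilities, but this simple form already shifts the focus from a single conversion to the whole relationship.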

Sales pipeline – a representation of your prospective customers/clients, what stage they are at in the sales process, and how much revenue you expect to earn from them.

Sales velocity – how quickly sales move through your pipeline and generate revenue, based on four metrics:

  • Number of opportunities
  • Average deal value
  • Win rate
  • Length of sales cycle
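These four metrics combine into the commonly used sales velocity formula: (number of opportunities × average deal value × win rate) ÷ length of sales cycle. A minimal sketch with hypothetical numbers:

```python
def sales_velocity(opportunities: int, avg_deal_value: float,
                   win_rate: float, sales_cycle_days: float) -> float:
    """Expected revenue generated per day by the current pipeline."""
    return (opportunities * avg_deal_value * win_rate) / sales_cycle_days

# Hypothetical pipeline: 40 open opportunities, EUR 5,000 average deal,
# 25% win rate, 60-day sales cycle.
per_day = sales_velocity(40, 5_000, 0.25, 60)
print(f"EUR {per_day:,.2f} of expected revenue per day")
```

Because the cycle length sits in the denominator, shortening the sales cycle raises velocity just as effectively as winning more or bigger deals.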

Domain authority – an SEO concept that describes the strength of a given web domain and how findable it is on search engines, usually measured as a score out of 100 using specific digital tools.

Sentiment analysis – an analysis based on aggregated reviews or social media mentions, which indicates whether your audience feels positive, negative or neutral about your brand. There are a variety of digital tools that can do this for you.

Digital happiness index – a combination of specific KPIs from 5 key pillars that measure overall customer satisfaction:

  • Flawlessness: Are customers enjoying a smooth experience free of technical performance issues?
  • Engagement: Are customers engaging with and satisfied with your content?
  • Stickiness: Are visitors loyal and returning frequently to your website?
  • Intuitiveness: Does your site navigation make it easy for visitors to enjoy a complete experience?
  • Empowerment: How easy is it for customers to find the products and services they want?

Diagnosis: Where are you now?

To become a more strategic CRO practitioner, you need to work out where you are on your strategic CRO journey. Every organization is different, and you’ll need to work out where you are in the context of the strategic priorities in your business. But there are some simple rules that everyone can adapt to suit their circumstances. Here’s a handy set of questions you can use to diagnose where you are with your CRO, split into three parts:

Step 1: People and skills

  • Do you have someone on your team to act as an advocate for users/customers?
  • Do you have someone on your team who understands the business priorities of your organization?
  • Do you have someone to carry out UX research?
  • Do you have someone to carry out UX design?
  • Do you have someone to carry out UX analysis?
  • Do you have a web engineer on your team?
  • How data literate is your team?
  • What skills gaps can you identify?
  • How can you fill those gaps? (Based on your resources, can you hire new staff, develop the skills of existing staff, or access those skills on a freelance or contract basis?)

Step 2: Process

  • Are you clear on your overall strategic goals?
  • Can you measure your progress towards those goals?
  • Do you have a quarterly or monthly CRO roadmap?
  • Are you constantly optimizing the customer journey?
  • Are you surveying or interviewing your customers to get feedback about the user experience on your website?
  • Are you surveying or interviewing your customers to understand what drove them to the site and whether they were able to achieve what they wanted?
  • Are you regularly generating new ideas and design concepts to test?
  • Do you leverage expertise and insights from other parts of your organization, e.g. Marketing, Commercial/Sales, Product, Support/Customer Services, etc?
  • Do you share your results with other parts of your organization and encourage feedback?
  • How far ahead do you plan your CRO testing?
  • How do you prioritize which issues to solve?

Strategic CRO optimization works best when you are clear about your strategic goals and can measure your progress towards those goals. Business priorities and external pressures change constantly. So you do need to revise your goals periodically, as well as update them in line with your progress. Once you reach your goals, it’s time to set new ones. Strategic CRO is about establishing a process of continuous improvement.

Step 3: Technology and data

  • Are you able to measure every step of the customer journey?
  • Can you assess where visitors are leaving your website?
  • Can you identify common usability issues across your website?
  • Can you identify frustration on your site?
  • Can you identify your most frequent conversion issues and opportunities?
  • Are you measuring the ROI of your content efforts?
  • What tech tools are you using right now?
  • What knowledge gaps do you have?
  • What data do you need to fill in those knowledge gaps?
  • Which tech tools could help you collect or analyze that data?

Tactical CRO often falls into the trap of testing what you know you can measure. Strategic CRO focuses on working out what you want to test and then finding the data you need to be able to carry out those tests. It’s important to understand where you may need more data, and then look into how you can get access to it.

Now that you know where you are, it’s time to look at some practical steps you can take on the path to becoming a strategic CRO mastermind. Contentsquare reached out to leading CRO experts to get their insights. Check out their responses on Contentsquare’s website and start elevating your CRO strategy.

Katie Leask is Global Head of Content at Contentsquare, sponsor of the DDMA Dutch CRO Awards 2022. During the award ceremony on November 3 at B. Amsterdam, we will crown the very best CRO cases the Dutch marketing industry has to offer. Do you want to attend? Get your tickets at: dutchcroawards.nl/koop-tickets

The current task on the CRO specialist’s plate: focus on knowledge building

The CRO field is steadily maturing. And although a true ‘culture of experimentation’ – a culture in which testing is applied organization-wide – is not yet a reality at many organizations, the movement toward it has been set in motion. A positive development, of course. Yet knowledge of the field and the use of testing tools often proves substandard. That is where the CRO specialist’s task lies today: building knowledge of the entire CRO process within their organizations.

Author: Joshua Kreuger (DPG Media)

CRO is gaining broader company-wide support

The movement toward maturity is, of course, a positive development. To make well-founded decisions and optimize customer journeys, organizations must accept and acknowledge the value of experimentation within data-driven marketing. This awareness, in which the importance of testing is increasingly recognized, is becoming more common, as the DDMA CRO Maturity Test shows. 40.5% of CRO specialists feel supported at management level and have colleagues who are highly interested in test results. At 19% of organizations, the value of CRO is even fully endorsed by the C-level and backed with budget. In those cases we can speak of a genuine ‘culture of experimentation’: an organizational culture in which the importance of testing is recognized throughout the organization and ideas are continuously validated with experiments. The general impression is that the CRO specialist often stands alone, but the CRO Maturity Test now shows that awareness and enthusiasm among colleagues is surprisingly high and that the C-level is increasingly involved. This is an important step in the growing maturity of CRO in the Netherlands.

The CRO specialist is no longer (always) alone

Another positive development is that, compared to the average CRO maturity, at a relatively large share of organizations (22%) funding for the CRO program is almost never a problem. Management (54%) is also relatively often a supporter of the program, and 11% even employ a ‘chief of experimentation’ who operates at the highest level within the organization. Most CRO specialists therefore do not stand alone: in 65% of cases there is a small CRO team, which usually also includes a UX designer and a developer.

CRO knowledge still lags behind, testing tools underdeveloped

The support and attention for CRO activities is of course fantastic, but to make CRO even more valuable, a number of steps are still needed. The level of knowledge, for instance, is relatively low. At 46% of organizations there is only basic knowledge of the CRO process, and little information about test results is exchanged. Only 13.5% of respondents describe their organization as mature in this respect, with continuous knowledge exchange between teams on a range of topics. In addition, testing tooling is relatively underdeveloped. 57% of CRO specialists use a tool that supports only simple A/B tests and rough analyses. Just a tenth have a testing platform that enables personalized testing across multiple channels based on user behavior. Improvement is also needed in research and analysis, for example to arrive at a solid hypothesis or to optimize an experiment. While slightly more than half of organizations are well on their way here, a third still does hardly any research.

As a CRO specialist, focus on building the marketer’s knowledge

The growing enthusiasm and awareness for testing within organizations is therefore very positive, and it can only benefit the level of knowledge. After all, without the right knowledge, solid research, and proper tools, there is a risk of drawing incorrect conclusions, for example due to a test sample that is too small. This is where the CRO specialist’s task lies. Instead of setting up every test themselves, CRO specialists should increasingly focus on guiding people and the CRO process as a whole. It is a good development that marketers increasingly set up tests themselves, but you always need someone who looks critically at the added value and results of a test, just as every campaign first gets a compliance check. After all, not every question can be tested, for example because there is simply not enough data to collect within a reasonable timeframe.
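The risk of a too-small test sample can be made concrete with a standard pre-test sample-size calculation. A minimal sketch using the two-proportion normal approximation; the baseline rate and target uplift are hypothetical:

```python
import math

def sample_size_per_variant(baseline: float, relative_uplift: float) -> int:
    """Visitors needed per variant to detect a relative uplift in a
    conversion rate (two-proportion normal approximation,
    alpha = 0.05 two-sided, power = 0.80)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = 1.96  # z-score for a two-sided alpha of 0.05
    z_beta = 0.84   # z-score for a power of ~0.80
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Hypothetical: a 3% baseline conversion rate and a hoped-for 10% relative
# uplift already require tens of thousands of visitors per variant.
print(sample_size_per_variant(0.03, 0.10))
```

Running a quick calculation like this before a marketer launches a test is exactly the kind of critical check the CRO specialist can add.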

The DDMA Maturity Test can still be completed! Take the test and find out how you score compared to the rest of the data-driven marketing sector. After completing the test you will also receive direct recommendations for your CRO practice. Take the test at: cromaturitytest.nl

Maarten Plokker (SiteSpect): “The cookieless world hangs over the CRO market like a sword of Damocles”

The approaching cookieless world, server-side tagging, difficulties around Single Page Apps and Progressive Web Apps: all developments that companies need to be aware of and act on, according to Maarten Plokker, Managing Director Europe at martech vendor SiteSpect, one of the sponsors of the DDMA Dutch CRO Awards.

Cookieless world: the sword of Damocles

Companies need to deal more consciously with all three of the developments mentioned above, Plokker believes. “There is a good chance these developments will have consequences, to a greater or lesser extent, for most companies. Although third-party cookies will not disappear tomorrow, the cookieless world hangs over the CRO market like a sword of Damocles. This will certainly have an impact on how you can handle data. The first changes have already been set in motion, which has shifted what the data means. All the more reason for organizations to start thinking now about how to deal with this. Not every organization is aware of it. That is partly due to poor advice and unfamiliarity with the phenomenon, but also because the largest browser, Google Chrome, does not yet act on it, although changes have been announced. As a result, the sense of urgency is lacking, and companies remain nonchalant about it for now. I sometimes hear stories of organizations that, when it comes to A/B testing, simply exclude ‘stricter’ browsers from their tests.”

In A/B testing, the difficulty of the cookieless world lies in recognizing users. If you buy a sweater in a webshop via Firefox and want to buy the same sweater in a different color a week later, the webshop will identify you as two unique visitors. Drawing conclusions from those kinds of numbers can backfire. Ultimately, the impact on CRO and A/B testing will differ per company. That partly has to do with tooling, and partly with the data that matters to you, Plokker explains. “In terms of tools, it is important to realize that there are alternatives that can cope just fine with a world without (or with fewer) cookies. Since data is the foundation of CRO and A/B testing, it is essential that you understand what the data you collect represents, both now and in the future. Do you have enough to substantiate the decision you are making? Should you look for new sources of data? Or do you adjust your decision-making? There is no one-size-fits-all solution, so it is wise to start thinking about the future now, however far away it may still seem.”

Server-side tagging: a permanent solution?

Server-side tagging is one of the ways to prepare for a future without cookies. With server-side tagging, you no longer place your tags on the client side, which is subject to increasing privacy regulation. A good step to take, according to Plokker, because you can retain data longer and you need that data to test properly. Yet server-side tagging is not a permanent solution, Plokker explains. “After all, you are still measuring the consumer in the same way, and that is exactly what restrictive regulation was devised for. Ultimately, the cookie question remains a long-running contest between the browsers, which try to safeguard consumer privacy as well as possible, and the CRO practitioners, analysts, and marketers who want to deliver an optimal user experience based on data. So there is no perfect solution, and certainly no permanent one; it remains a compromise, with the balance continuing to shift between privacy and a good experience. The art is to find that balance and give the visitor control. All the more important to keep looking ahead at what may be coming.”

Single Page Apps and Progressive Web Apps: a challenge for CRO practitioners

Single Page Apps and Progressive Web Apps are modern techniques for delivering (parts of) a (mobile) website. These (parts of) websites behave much like native apps, and visitors therefore get a similar user experience, says Plokker. “The advantage of SPAs and PWAs is that users don’t have to download an app and the user experience is more pleasant. They are used especially for forms. When a user navigates to the mobile website, all steps of the form are delivered at once on a single page, so interaction with the web server is not needed and the journey is more efficient and much faster.”

The downside, unfortunately, is that running an experiment can become considerably more complex for the CRO practitioner. For various steps that users perform, there is no interaction with a web server that you can measure. Moreover, an SPA is technically constructed in such a way that it is very difficult to make changes, such as implementing A/B test variations. Plokker: “That means you have to involve other resources in setting up experiments (such as developers) and that you need tools that work and think along with you, rather than being an obstacle in themselves.”

How do you make A/B testing part of the company culture?

“One word: involvement,” says Plokker. “Focus on the goal of the A/B testing and involve the relevant stakeholders. Communication is essential here; make sure you align your A/B tests with business objectives and KPIs. That is the key to generating interest, support, and collaboration. It is also important, regardless of the audience or medium, to emphasize the ‘why’. Why are you sharing this information? Why did you set up this A/B test? Why did visitors behave the way they did? And the ‘what’. What did it deliver for the organization? What is the added value? Keep these guidelines in mind to make communication as effective as possible, so that testing can become part of the company culture. Create added value by generating valuable insights for everyone involved in a company. Where you may start with a small team running A/B tests, the next step can be to share the results more broadly within the organization. Gradually, the organization can then start using the data, and you ultimately create a need, a desire among others to be involved in the testing process themselves.”

It also depends on how an organization is structured, Plokker emphasizes. “CRO is usually allocated within an organization in one of three ways. First, decentralized, where tests are run here and there in smaller teams. At smaller organizations there is often a central team that handles testing for an entire website. And finally there is the center of excellence, especially suitable for large organizations. Somewhere within the organization sits a single center of excellence, a central body that informs and trains the rest of the organization on CRO. This way, knowledge is drawn from the organization itself, and all parts of the organization are enabled to grow in their CRO maturity.”

Maarten Plokker is Managing Director Europe at SiteSpect, one of the sponsors of the DDMA Dutch CRO Awards 2021, which will be presented on November 4 at Het Sieraad in Amsterdam. Do you want to be there? Order your ticket now.