From Isolated Tests to Intelligent Optimisation: Monetate on the Future of Experimentation

With experimentation becoming a given in today’s digital landscape, the real question is no longer whether organisations should experiment, but how they can make it an integral part of their business across marketing, sales, product, and even IT. Added to that is the rise of AI, which is reshaping the way we work and opening up new opportunities for testing and optimisation.

We spoke with Maarten Plokker, Senior Leader at Monetate, about how the newly merged organisation (Monetate and SiteSpect) is redefining experimentation, and what the future holds for AI-driven optimisation.

Building a foundation for continuous optimisation

Maarten has worked in the experimentation and personalisation space for over a decade. In his role at Monetate, he focuses on building and managing strategic client relationships across EMEA, helping organisations embed experimentation as a structural part of their operations.

The recent merger between Monetate and SiteSpect under the unified Monetate brand marked an important step in that journey.

“Unifying under the Monetate brand signals our commitment as a leader in digital experience optimisation,” Maarten explains. “Every interaction is an opportunity to build loyalty. By combining two pioneers in personalisation and experimentation, we help businesses move beyond isolated tests to continuous optimisation, where every interaction is guided by data and intelligence.”

That combination has strengthened Monetate’s ability to deliver what modern enterprises need most: AI-driven personalisation, secure, enterprise-grade experimentation, and the performance and scale to optimise complex environments across channels.

Embedding experimentation across the business

For Maarten, experimentation can no longer be treated as a side project. It needs to live at the heart of how a company operates.

“Experimentation must be part of an organisation’s operations, not a one-off initiative. The companies that succeed are those that embed optimisation into everyday decision-making, across every team.”

At Monetate, this means enabling organisations to design, test, and optimise real customer interactions, from classic A/B tests to dynamic, AI-driven personalisation. By combining personalisation, advanced recommendations, and testing in a single platform, experimentation becomes repeatable, scalable, and relevant to every function. And it’s precisely that cross-functional value that ensures buy-in beyond marketing.

“Experimentation resonates when it delivers value to every function. For marketing, it’s about personalising journeys; for product, validating new features; for sales, reducing friction in the buying process; and for IT, ensuring methods are secure and reliable.”

To make this shift sustainable, Maarten emphasises alignment and empowerment over restructuring. Clear governance ensures teams are working toward shared goals, while tools and training empower them to test, learn, and apply insights independently.

“When leaders champion experimentation and teams see real results, it naturally becomes part of everyday decision-making,” he adds.

The rise of AI in experimentation

Few developments have accelerated experimentation as much as artificial intelligence. At Monetate, AI acts as a catalyst, not a replacement for human creativity.

“AI strengthens experimentation by making it faster and smarter,” says Maarten. “It helps identify opportunities, refine segments, accelerate analysis, and deliver relevant experiences in real time. But it’s still humans who shape the strategy and creativity behind it.”

Within Monetate’s platform, AI works hand in hand with testing and recommendations to turn insights into action. It can surface new test ideas, identify granular audiences, adapt traffic allocation in real time, and personalise experiences at the individual level.
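Real-time traffic allocation of the kind described above is commonly implemented with a multi-armed bandit. The sketch below uses Thompson sampling with Beta posteriors; it is a generic illustration of the technique, not Monetate’s actual (proprietary) algorithm, and the variant names and counts are made-up assumptions:

```python
import random

def thompson_allocate(stats, n_visitors):
    """Assign each incoming visitor to the variant whose sampled
    conversion rate (drawn from its Beta posterior) is highest."""
    assignments = {v: 0 for v in stats}
    for _ in range(n_visitors):
        # One draw per variant from Beta(conversions + 1, non-conversions + 1)
        draws = {v: random.betavariate(s["conversions"] + 1,
                                       s["visitors"] - s["conversions"] + 1)
                 for v, s in stats.items()}
        winner = max(draws, key=draws.get)
        assignments[winner] += 1
    return assignments

# Hypothetical running totals for two variants
stats = {
    "control":    {"visitors": 1000, "conversions": 50},
    "challenger": {"visitors": 1000, "conversions": 65},
}
random.seed(42)
print(thompson_allocate(stats, 100))
```

Because each visitor is routed by a fresh posterior draw, traffic drifts toward the better-performing variant as evidence accumulates while weaker variants still receive some exploratory exposure.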

But Maarten cautions against relying on AI blindly: “Transparency and governance are essential. AI should enhance decision-making, not replace it. The goal is to remove repetitive manual work so teams can focus on higher-value contributions, while ensuring that user experiences remain relevant, trustworthy, and valuable.”

Looking ahead: hybrid ecosystems and AI agents

Predicting the future of experimentation has become more complex and more exciting. Maarten believes the next five years will bring a fundamental shift.

“Experimentation will evolve beyond human-centric digital experiences to hybrid ecosystems where AI agents play an active role. These agents will transact, compare, and even advocate on behalf of users, often through APIs and service layers rather than traditional interfaces.”

In this new environment, companies will compete not only on brand and design but on how well their systems, data, and trust signals are exposed and optimised for AI consumption. That means experimentation will increasingly extend to APIs, metadata, and personalisation logic, optimising both human and AI interactions.

Monetate’s offering, which unites personalisation, recommendations, testing, and AI-native intelligence, is designed for exactly that future.

“Organisations that start preparing now for experiences tailored to both people and AI agents will be in the best position to build loyalty, trust, and differentiation.”

Advice for scaling experimentation today

For companies looking to grow their experimentation maturity, Maarten’s advice is simple: focus on impact, not volume.

“Prioritise the areas of your customer journey where optimisation and personalisation will have the biggest effect on loyalty, revenue, and retention. Start small, scale fast, and invest in a platform that can grow with you.”

Equally important is building a culture where results are shared openly.

“When teams across the organisation see that experimentation drives outcomes, not just tests, it becomes part of everyday thinking. That’s when you see exponential returns.”

The bottom line

Experimentation, personalisation, and AI are converging. For Maarten Plokker and Monetate, the future of optimisation lies in creating a continuous, intelligent loop – where data, insight, and creativity work together to drive growth.

“AI doesn’t replace experimentation,” Maarten concludes. “It supercharges it, making it smarter, faster, and more meaningful. The future belongs to organisations that embrace that combination.”

Author
Maarten Plokker | Director, Customer Success EMEA | Monetate

 

Experimentation Hero in Focus: Timo Stegeman – From isolated tests to working in themes

Our next Experimentation Hero in our interview series is Timo Stegeman. Since February, Timo has been working at Heineken as a freelance specialist in Product Discovery and Research. He deliberately takes on one assignment at a time, positioning himself close to product teams. “I work with one foot in Experimentation, but my main focus is research that directly feeds into what product teams do,” he explains. In large corporate environments, that means joining early in the process, prioritising themes, and defining together which “levers” can be pulled within a theme – for example, loyalty.

Have you got your ticket for Experimentation Heroes 2025 yet? Or do you want to submit a case and get the chance to present on stage? Visit experimentationheroes.nl/tickets for tickets and more details.

Superpower: positioning yourself as a resource

Timo’s strength is not about running A/B tests faster, but about how he positions himself within the product apparatus. Instead of requesting scarce dev capacity, he acts as the right hand of the Product Owner. “I don’t come to take something from the PO; I come to give. I position research & experimentation as a resource in product teams,” he says.

Concretely, this can mean co-creating a quarterly narrative: one clear theme, core problems within it, and only then potential solutions and experiments. He sharpens problems through research, works with UX to design solutions, and validates them through the right method, whether that’s an A/B test, a prototype, or a user test. “Experiments are part of a wider story, not isolated occurrences that happen in a separate stream.”

The biggest enemy: the Feature Factory

The recurring counterforce Timo encounters in many organisations is the “Feature Factory”: delivery-driven teams where output trumps outcome. “Features get built because they’re on the backlog, not because they add proven value,” he explains. Symptoms that experimentation teams struggle with, like fights over dev resources or experiments not making the roadmap, often come back to this root cause: teams working towards different goals with little shared context.

Timo’s answer is to structure around themes and shared goals to bridge that gap. His own reading list reflects this philosophy: Outcomes over Output (Josh Seiden), Escaping the Build Trap (Melissa Perri), and Continuous Discovery Habits (Teresa Torres).

Gamechangers in his journey

Two shifts have been pivotal in Timo’s hero’s journey:

  1. From CRO discipline to way of working.
    Timo started out at a specialised CRO agency but realised experimentation is not an island. “Sustainable impact requires embedding it in product and IT,” he says. Since then, he has deliberately chosen roles where experimentation is part of product work, not a separate service.
  2. Opportunity Solution Trees (OSTs) as backbone.
    OSTs make a clear distinction between goals, problems, and solutions, and connect them visually. “It prevents solutioneering and makes thematic work concrete,” Timo explains. He always delivers his Product Owners an OST, so choices for features and experiments fall into place logically.

AI in practice: coach and grunt-work eliminator

AI has brought both inspiration and a minor “existential crisis” for Timo. “It’s a great sparring partner. Writing prompts is like rubber-ducking; halfway through, your own thoughts become clearer,” he says. He uses AI to cluster survey answers, draft outlines, or provide first text versions. “For genuine idea generation or sharp selection, the quality isn’t always consistent yet. And while agents and flow automation are promising, in my current practice they don’t create the biggest impact yet.”

The place of experimentation in organisations: part of product

Timo observes major differences between clients: sometimes experimentation is placed next to product, sometimes it’s fully integrated. “In complex organisations, this often still leads to separate backlogs and priorities,” he notes. The ideal, according to him, is tight integration with product. “But structures, mandates, and management layers take time to change, especially if there isn’t a ‘burning platform’ creating urgency.”

Tip for future Experimentation Heroes

Timo’s advice is clear: position yourself as a resource for the Product Owner. “Throw away your own agenda, help the PO build the roadmap and backlog, work in themes, and deliver via an Opportunity Solution Tree. If trust grows, you’ll shift from contributing to co-defining, and you’ll avoid the trap of random tests and resource fights. You can’t build a roadmap out of a few isolated and/or random A/B tests; you build it from goals, problems, and solutions, with experiments as a natural part of that journey.”

Experimentation Hero in Focus: Erwin Vinke (a.s.r. verzekeringen)

With a new edition of Experimentation Heroes on the horizon, we’re once again giving the floor to a number of specialists who truly live up to our event’s title. Each one a genuine ‘Hero’ in the field. Today, we speak with Erwin Vinke, Technical Web Analyst at a.s.r. verzekeringen.


From CRO to Technical Web Analyst

Three months ago, Erwin Vinke joined a.s.r. verzekeringen as a Technical Web Analyst, after holding similar positions at other companies. His career, however, started in CRO.
“I’ve always had a passion for optimisation, online persuasion, and psychology,” he says. “But I realised I get truly energised by technical marketing. That’s when I took the turn from CRO into a more technical role.”

Today, Erwin works on setting up and implementing measurement plans, data collection and tracking, analytics, CRO, and personalisation. His daily toolkit includes GTM, GA4, BigQuery, and now at a.s.r., Tealium.
“You could call me a data facilitator. My job is to make sure data is both available and reliable. If the data isn’t trustworthy, wrong decisions are made faster, and that’s something you want to avoid.”

A Technical Superpower

Erwin describes his superpower as the technical side of experimentation. He builds tagging and tracking setups, sometimes contributes to test builds in VWO, and writes SQL queries for reports and dashboards.
“At a previous employer, we built an automated A/B test dashboard in BigQuery using raw GA4 data. It made it so much easier for CRO specialists to monitor and analyse their tests.”

Not every test produces a win and that’s okay. Erwin recalls a series of tests for a website targeting older users with incontinence products. “We thought we were making the content easier to understand, but test after test came back negative. Sometimes you learn the most from what doesn’t work. It shows how important it is to think through your hypotheses.”

Challenges Beyond the Tech

For Erwin, the biggest challenges in experimentation often lie outside the technical realm.
“The success of CRO depends heavily on organisational commitment. If there’s no UX capacity or if development sees experimentation as a delay, the CRO train struggles to move. You need to involve departments, be transparent, and show the value of a ‘no effect’ outcome, not just the big uplifts.”

Lessons That Changed the Game

For Erwin, one of the biggest game changers in his work has been learning to work with GA4 data in BigQuery for A/B test analysis.

“If you want to analyse A/B test results sharply, I think it’s a must to work with GA4 data in BigQuery. Either yourself or with a colleague who can. It lets you create very refined analyses, with precise segments and the ability to exclude certain users. You can go much deeper than you ever could within GA4 itself.”

Another key learning: base your A/B tests on users rather than sessions, and use a maximum time window for conversions.

“In the end, you’re dealing with a user, a real person, not a session. In BigQuery, you can set rules like: the user must convert within one or two days of seeing the test, if that’s relevant to what you’re testing. That gives you much more reliable results.”

This approach addresses a common issue: visitors often see a test in their first session, then return later – perhaps hours or even days afterwards – to complete the purchase. If your analysis is session-based, you often miss linking that test exposure to the eventual conversion.

“When you control the timeframe yourself, you get results that are closer to reality. The closer to the truth, the better.”
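The user-based, time-windowed approach Erwin describes can be sketched outside BigQuery as well. The snippet below is a minimal plain-Python illustration under assumed inputs (the field layout and the two-day window are hypothetical, not a.s.r.’s actual schema): it credits a conversion to a variant only when the same user converts within the window after first seeing the test.

```python
from datetime import datetime, timedelta

def user_level_results(exposures, conversions, window=timedelta(days=2)):
    """Count conversions per variant at the user level (not per session),
    crediting only conversions within `window` of first exposure."""
    # user_id -> (variant, timestamp of first exposure)
    first_seen = {}
    for user, variant, ts in exposures:
        if user not in first_seen or ts < first_seen[user][1]:
            first_seen[user] = (variant, ts)

    results = {}
    for variant, _ in first_seen.values():
        bucket = results.setdefault(variant, {"users": 0, "conversions": 0})
        bucket["users"] += 1

    converted = set()  # count each user at most once
    for user, ts in conversions:
        if user in first_seen and user not in converted:
            variant, seen = first_seen[user]
            if seen <= ts <= seen + window:
                results[variant]["conversions"] += 1
                converted.add(user)
    return results

t = datetime(2024, 5, 1, 12, 0)
exposures = [("u1", "A", t), ("u2", "B", t), ("u3", "B", t)]
conversions = [
    ("u1", t + timedelta(hours=30)),  # within 48h window: counted
    ("u2", t + timedelta(days=5)),    # outside window: ignored
]
# A: 1 of 1 users converted; B: 0 of 2
print(user_level_results(exposures, conversions))
```

This captures the scenario above: u1 sees the test, returns thirty hours later, and still gets linked to the exposure, while u2’s late conversion is excluded by the analyst-controlled window.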

At the same time, Erwin recognises the limitations in today’s privacy-first world:

“You’re never going to be able to trace data perfectly back to a single user. A cookie is tied to one device, after all. But you can at least get a more realistic picture. Analysts sometimes strive for this utopian 100% truth, but with all the constraints we face today, that’s simply not possible. Still, you should do your best to get as close as possible – while weighing the effort it takes and the costs involved.”

For Erwin, this mindset – aiming for precision without falling into perfectionism – is essential to delivering trustworthy, actionable insights for experimentation.

AI as a Daily Assistant

AI already plays a big role in Erwin’s work.

“I use it to generate code for tracking scripts and test builds, and to improve SQL queries. In the next decade, much of the technical work might be taken over entirely by AI, but that will just shift my role towards prompting and directing. The key is to evolve with it.”

Advice for Future Experimentation Heroes

Erwin’s advice is twofold: keep developing yourself and understand the value of a technical web analyst in CRO.

“Keep developing your knowledge and skills and don’t get discouraged by setbacks, technical challenges, or resistant stakeholders. Find common ground and keep going. And as a technical web analyst, make sure your data quality and definitions are consistent. Your colleagues need to trust your output blindly, because bad data leads to bad decisions. That principle keeps me sharp every single day.”

Author:
Robert de Kok
CRO & Experimentation Specialist | DeltaFiber

Maarten Plokker (SiteSpect): Bridging the gap between Client- and Server-side Testing

In the run-up to Experimentation Heroes 2024, we got the chance to talk with Maarten Plokker, Managing Director Europe at SiteSpect, one of this year’s sponsors. He shared his expertise on the evolving practice of hybrid experimentation, which combines both client-side and server-side testing in a single, unified platform. This approach allows different teams across an organization to work simultaneously on distinct parts of the tech stack, creating a cohesive and efficient UX optimization process.

Got your tickets for Experimentation Heroes 2024 yet? Get them here.

What is Hybrid Experimentation?

“Hybrid experimentation is an approach that combines client-side and server-side testing under a single platform,” Plokker explains. “More importantly, it enables both client- and server-side to be leveraged simultaneously by different teams, all in support of a unified UX optimization effort.”

As an example, Plokker describes how a product and development team might be working on a server-side optimization for a checkout flow, while at the same time, a marketing or UX team could be iterating on the same platform using client-side functionality. This dual approach, he notes, greatly increases test velocity and reduces the time to achieve meaningful results.

“Hybrid isn’t just about offering both client- and server-side testing in a separate, disjointed way,” he emphasizes. “It’s about delivering all these functions in a unified platform, giving teams a single view into all testing activities, analytics, and reducing potential conflicts across tests.”

The Key Benefits of Hybrid Experimentation

One of the most significant advantages of hybrid experimentation is the flexibility it provides. Plokker points out that by not limiting testing to either the client or server side—or requiring a sequential process—organizations can test and iterate more quickly. “Server-side is great for complex changes, but it often requires more development resources and longer cycles,” he says. “By isolating critical elements to server-side testing and using client-side for rapid iteration, organizations can innovate faster and gain insights in a shorter period.”

Another key advantage lies in performance improvements. “Shifting some aspects of an experiment to the server side reduces the impact on page load times,” Plokker explains. This is particularly important for high-traffic websites, such as in e-commerce, where speed directly affects conversion rates. Hybrid experimentation can also provide more reliable results, as server-side changes are less vulnerable to browser variations and client-side inconsistencies.

Starting with Hybrid Experimentation: Key Considerations

For companies just beginning to explore hybrid experimentation, Plokker offers practical advice on how to start. “Think about integration, team alignment, performance, and privacy from the start,” he advises. He stresses the importance of ensuring that the platform integrates seamlessly with both front-end systems—like CMS, analytics, and tag managers—and backend elements, such as APIs and CDNs.

He also emphasizes the need for teams to be well-organized and agile. “Your teams should be empowered to collaborate across client- and server-side tests, and it’s important to ensure you have the right skill sets in place,” Plokker says. If those capabilities are lacking, he recommends seeking a provider that offers training or consulting services.

Performance is another critical factor to consider. Plokker notes that hybrid platforms relying on JavaScript tags for client-side testing can slow down websites and potentially introduce “flicker” in the user experience. “Choose a hybrid platform that doesn’t use JavaScript, like our unique proxy-based platform at SiteSpect,” he recommends. “This will avoid slow load times and provide a smoother user experience.”

He also underscores the importance of data privacy and security, urging companies to select platforms that comply with regulations such as GDPR. “Ensuring data privacy not only protects customer information but also builds brand trust,” Plokker adds.

Features to Look for in a Hybrid Experimentation Platform

When it comes to choosing the right hybrid experimentation platform, Plokker stresses the significance of finding a solution that offers all functionalities within a single interface. “Throw out anything that separates client-side and server-side into different platforms or interfaces,” he warns. “That separation creates silos, slows testing velocity, and removes the benefits of a true hybrid model.”

In addition, Plokker advises against platforms that rely on JavaScript tags, as they can greatly slow down page load speeds and create flicker. “Look for a natively flicker-free solution,” he recommends, “and make sure the platform supports a wide range of testing methods, such as A/B and multivariate testing, across multiple devices—from web to mobile and IoT.”

Data is another crucial element of a successful hybrid experimentation platform. “Choose a platform that offers comprehensive analytics and reporting tools, and integrates with major analytics providers,” Plokker says. “This will allow you to thoroughly analyse experiment outcomes and track performance in real time.”

Beyond platform capabilities, Plokker emphasizes the importance of fostering an experimentation culture within the organization. “Support and expertise from your vendor are essential for success,” he notes. “Companies should look for vendors with a strong track record, high renewal rates, and credible references, ensuring they have the support needed to run a sustainable and successful hybrid experimentation program.”

In conclusion

Maarten Plokker’s insights reveal that hybrid experimentation offers a powerful and flexible solution for organizations looking to optimize user experiences while maintaining high performance and security standards. “A true hybrid model allows you to innovate faster, deliver better user experiences, and see results more quickly,” he says. “It’s about bringing the best of both client-side and server-side testing together in a way that maximizes value and efficiency.”

Experimentation Hero in Focus: Jonas Timmer (Nu.nl)

As we prepare for Experimentation Heroes 2024, we’re highlighting specialists in the field who embody what it means to be a true “Experimentation Hero.” Today, we speak with Jonas Timmer, a seasoned CRO specialist at Nu.nl.

Have you secured your ticket for Experimentation Heroes 2024 yet? Visit experimentationheroes.com/tickets for more details and to grab your spot.

Who is Jonas Timmer?

Jonas Timmer is a Conversion Rate Optimization (CRO) specialist at Nu.nl, part of DPG, who has transitioned from a generalist to an expert in A/B testing. “I aim to improve Nu.nl’s platform, both app and web, focusing on user engagement and monetization,” Jonas explains. Over the past year, Jonas and his team have conducted nearly 100 experiments, a personal milestone he’s proud of.

Jonas brings a wealth of experience from his previous role at an agency where he focused on e-commerce. He found transitioning from an agency to an in-house role a significant shift, allowing for more in-depth specialization. “I taught myself basic coding within two months because we had a development capacity issue,” he says, illustrating his proactive approach to overcoming challenges in the experimentation field.

From Coding to Optimizing: Jonas’s Journey at Nu.nl

The switch from an agency to client-side work has allowed Jonas to delve deeper into CRO practices. “At an agency, you often need to be a ‘T-shaped marketer’—knowing a little about everything,” Jonas says. “But at Nu.nl, I became a CRO specialist, which was exactly what I wanted.”

Jonas’s passion for learning didn’t stop there. He taught himself HTML, CSS, and JavaScript out of necessity and interest, reducing dependence on external developers and speeding up the testing process. “I wanted to be less dependent on others,” he reflects, highlighting how this new skill set enabled him to run more complex and timely experiments.

Experimentation Superpower: Enthusiasm and Analytical Prowess

When asked about his “superpower” in the experimentation field, Jonas had to think for a moment. “I’d say my ability to get people excited about CRO is a key strength,” he eventually shares. While he is skilled in analysis, Jonas believes his enthusiasm for results—whether positive or negative—encourages others to listen and engage with the data. “I can provide the data, but it’s my excitement over any outcome that keeps people interested,” he adds.

Once a month, Jonas hosts a “CRO and UX moment” with his UX research colleague Arie Bart, where they present recent learnings to the team. These sessions, which even have their own theme tune, have become a staple at Nu.nl, showcasing both successes and failures. “When a hypothesis doesn’t work out, it’s an opportunity to understand why,” Jonas explains, emphasizing the value of negative results in driving further experimentation.

A Proud Experimentation Moment: From Frustration to Innovation

Reflecting on his proudest experiment, Jonas recalls a project for a client in the beauty industry. “It stemmed from my personal frustration with discount codes on e-commerce sites,” Jonas laughs. He noticed that placing a discount code field on the checkout page often led customers to abandon their cart in search of a code. Jonas proposed a creative solution: a special page accessible only through Google search results, offering a small discount to users who actively searched for codes.

The experiment didn’t stop there. Jonas also implemented a system where, after two failed attempts at entering a discount code, users were automatically given a 5% discount. “This led to a significant increase in conversion rates,” he shares.

Overcoming Challenges: The Battle Against “Gut Feelings”

Like any experimentation journey, Jonas’s path is not without challenges. One major obstacle he faces is stakeholders making decisions based on instinct rather than data. “People sometimes say, ‘I think this button should be here,’ without any reasoning,” Jonas explains. To counter this, he encourages deeper questioning and testing. “I often ask, ‘Why do you think that?’ and then suggest we test it to see what the data says,” he says, demonstrating his commitment to data-driven decisions.

Jonas believes in collaborating closely with UX researchers to gain deeper insights into user behavior. “Sometimes, data alone isn’t enough,” he notes. “We combine quantitative results with qualitative research to provide a full picture.”

The Role of AI in Experimentation: A Game Changer

Jonas also sees AI as a revolutionary tool in his experimentation work. Initially skeptical, he began using ChatGPT to help with coding issues and found it invaluable. “I tested it with some code, and it not only fixed my problem but explained the solution,” Jonas shares. This new capability doubled his output, allowing him to run about 100 experiments a year.

He sees further potential for AI in performing statistical analyses and automating processes within Nu.nl. “AI-generated summaries have already proven helpful in making longer articles more accessible,” Jonas adds, showing his enthusiasm for expanding AI’s role in experimentation.

Advice for Aspiring Experimentation Heroes

For those looking to follow in his footsteps, Jonas offers this advice: “Surround yourself with people who are enthusiastic about experimentation. It’s a lonely job otherwise,” he warns. He emphasizes the importance of resilience, learning from both positive and negative results, and not taking things personally. “Experimentation is about trying, failing, and trying again,” he concludes.

With his innovative mindset and passion for experimentation, Jonas Timmer exemplifies what it means to be an Experimentation Hero. His journey at Nu.nl continues to inspire and pave the way for future heroes in the field.

Experimentation Hero in Focus: Jan Karel Ekkel (Kees Smit Tuinmeubelen)

In the run up to Experimentation Heroes 2024, we’re highlighting specialists in the field who embody what it means to be a true “Experimentation Hero.” Today, we speak with Jan Karel Ekkel, CRO Specialist at Kees Smit Tuinmeubelen. 

Jan Karel Ekkel started as a CRO specialist at Kees Smit Tuinmeubelen five years ago. He soon expanded his focus and became the product owner of the webshops. He has always remained active in the field of experimentation and optimization, with a strong focus on data-driven work. He does everything he can to get the organization on board with this approach. For example, together with his colleagues, he set up the ‘Brainstorm Bistro’—a cross-departmental brainstorming session to come up with ideas for improving the webshop, aiming to create more support within the organization. 

According to Jan Karel, experimentation must go hand in hand with product ownership: “You can’t be a product owner without knowing what needs to be improved.” It is important not only to look at the data but also to gather the right people around you to collectively set the right priorities. 


What is your superpower? What makes you a real experimentation hero? Can you give a concrete example of a successful test/experiment and explain how your superpower helped? Why are you proud of it? 

“My strength? Rationality and a constant drive for improvement—both in my personal life and at work. Everything can always be better, but ‘better’ doesn’t just mean what you think; it’s what you can back up with facts.”

A concrete example: We applied the anchoring effect on the category pages by showcasing a more expensive, popular product first. This worked well, but another department felt the selected products came across as too pricey. Instead of sticking with what we had tested, I took a critical look: too low had already been proven ineffective, but what happens with an anchor just above the average order value? I didn’t have an answer, so we tested again. The result? No significant differences between those two anchors. This way, we found a middle ground that satisfied both the data and the organization.

Who or what is your biggest enemy in your approach to experimentation? What are your biggest challenges? What do you struggle with? 

At the moment, enough support is being generated for experimentation. Internally, results are shared, and input is gathered. “When we receive suggestions for optimizations, the challenge is to remain neutral, so you can truly listen to the input from colleagues and departments.” It’s often difficult to stay neutral, but you can solve this by gathering the right people around you. A team with a mix of personalities helps to leverage different perspectives. For example, I work with Dianthe Forkink, the creative force behind designing and setting up the tests, and Lena Groothuis, who executes and analyzes the tests with precision. This combination ensures a good balance and better results.

Self-awareness is essential here. It’s important to know your strengths and weaknesses. For instance, I tend to be quite direct and result-oriented, so it helps to have colleagues who are more open and connecting.

Can you name one or two key learnings that were true game-changers in your hero’s journey through the field of experimentation?

There are three learnings in the field that have been game-changers for the CRO maturity we have achieved today:

  1. Gather the right (and therefore diverse) people around you.
  2. Be data-driven, but with an open mind.
  3. Let people draw their own conclusions from factual data, as this will help you reach the same conclusion faster and avoid debate. When people draw their own conclusions from the data, little to no persuasion is needed—they’ve already convinced themselves.

To what extent does AI influence your work? Are you already applying it concretely, and in what way?

AI has a significant influence on my daily work and has allowed me to perform many tasks more quickly. Additionally, I use AI a lot for data analysis and data processing, as well as for generating personas for brainstorming sessions. 

What role does experimentation and testing play within your organization? Is it still isolated in silos, or is there already a true experimentation culture? 

Experimentation has now taken on a prominent role within both the e-commerce and performance departments. Nothing goes live unless it’s been tested.

What is the tip you would give to future Experimentation Heroes? How do you become an Experimentation Hero? 

My tip for future Experimentation Heroes is: stay curious, ask questions, and keep asking until you understand. Every question you ask now will help you later. Don’t just do what you have to do—aim for continuous improvement and adopt an attitude of never being fully satisfied. Of course, look for topics that interest you; that will make it happen more naturally. Always assume that the answer lies in the data, not necessarily within yourself.


Experimentation Hero in Focus: Daan van Vliet, Pioneering Experimentation at Alleo

As Experimentation Heroes 2024 approaches, we continue our series by interviewing experts in the field who exemplify what it means to be a true Experimentation Hero. Today, we have a conversation with Daan van Vliet, who recently joined Alleo in a growth-focused role.

Have you secured your ticket for Experimentation Heroes 2024 yet? Visit experimentationheroes.com/tickets for more details and to grab your spot.

A New Role at Alleo

Since April 2024, Daan van Vliet has been making waves at Alleo, a flexible benefits platform that allows employees to manage their benefits—from leasing a bike to purchasing additional vacation days—directly from a dashboard. “It’s a great platform where employees can choose how to spend their benefits,” he shares enthusiastically. Before joining Alleo, Daan was with Plus, a popular supermarket brand, where he honed his skills in A/B testing.

Reflecting on his previous experience, Daan notes, “Plus was fantastic because of the high traffic and opportunities to run numerous A/B tests. However, user testing sometimes got overshadowed.” At Alleo, he finds himself in a growth-oriented role, actively engaging in user research and testing. “Right after my onboarding, we set up a customer panel and conducted user interviews with three clients. It was an incredible start, offering immediate insights,” he says. This panel will now serve as a continuous feedback loop for testing new designs, features, and ideas to ensure alignment with customer needs.

The Fascination with Data and Psychology

When asked what he finds most intriguing about experimentation and testing, Daan points to the interplay between data and psychology. “It all started for me after watching a VPRO Tegenlicht documentary featuring Bart Schutz from Online Dialogue,” he recalls. The methodology of combining data with psychological insights struck a chord with him. “As long as it’s done correctly, it’s incredibly inspiring,” he adds.

However, Daan is critical of a more rigid approach to experimentation. “Some people take a formulaic approach, applying Cialdini’s Six Principles of Persuasion or some FOMO tricks from Booking.com and thinking they’ve nailed it. I don’t enjoy working with people who have a fixed mindset and aren’t open to testing or evolving their methods,” he emphasizes. For Daan, the beauty of experimentation lies in its ability to surprise and challenge preconceived notions. “When I first started at Plus, I noticed customers reacted differently to experiments that had previously been successful elsewhere. It taught me to go back to basics and learn what works specifically for your audience.”

Navigating Ethical Boundaries in Experimentation

Daan has been fortunate to work with brands that prioritize long-term relationships over short-term gains. When asked if he’s ever been pressured to apply questionable persuasion techniques, often referred to as “dark patterns,” he confidently responds, “No, I’ve been lucky with the brands I’ve worked with.” At Plus, the focus was on optimizing for loyal, long-term customers—the ‘champions’—by enhancing order value, purchase frequency, and overall convenience. “It’s all about creating genuine value and building a lasting relationship. Gimmicks won’t get you far in that environment,” he explains.

At Alleo, the same principles apply. “We aim to be true partners with our clients, building relationships that last for years,” Daan says. For him, the focus is always on delivering real value, not employing transparent tricks.

Exciting New Developments in Experimentation

Looking ahead, Daan is particularly excited about the growing emphasis on user testing. “Personally, I’m looking forward to doing more user testing,” he says. He’s also thrilled about new advancements in experimentation tools, especially those involving AI. “Imagine being able to ask an AI agent questions about your test database. After a few years at a company, you have all this knowledge—like knowing when to use nudging on a checkout page versus another type of page in the journey. How amazing would it be for a newcomer to query an AI about what’s been learned about product detail pages or returning visitors? It’s fascinating,” he exclaims.

Advice for Aspiring Experimentation Heroes

Daan has valuable advice for those just starting in the field: “Attend events like Experimentation Heroes, Elite, or The Conference Formerly Known as Conversion Hotel. That’s where you hear the real stories through interactions and conversations. You’ll see that even experienced professionals are open to new techniques and aren’t stuck in their ways,” he suggests. This openness to learning and evolving is, according to Daan, crucial for growth in the experimentation field.

With a passion for data-driven decision-making and a commitment to ethical experimentation, Daan van Vliet is a testament to the qualities of an Experimentation Hero. His journey at Alleo continues to inspire, offering valuable lessons for those looking to make their mark in this ever-evolving field.

 

Experimentation Hero in Focus: Simonluca Definis

As we approach Experimentation Heroes 2024, we are interviewing several specialists in experimentation who exemplify the qualities of an actual Experimentation Hero. First up: Simonluca Definis.

Have you got your ticket for Experimentation Heroes 2024 yet? Or do you want to submit a case and get the chance to present on stage? Visit experimentationheroes.nl/tickets for tickets and more details.

Who is Simonluca Definis?

Simonluca Definis is a Digital Experience Designer at frog, part of Capgemini Invent NL, currently consulting at Philips D2B in the healthcare system sector. With a passion for transforming insights into innovative ideas, Simonluca believes his superpower lies in “rapidly prototyping product or service solutions for market MVPs.” In this interview he will explain why.

Prototyping in healthcare

In 2018, Simonluca and his team collaborated with a global healthcare leader to develop a service strategy for a nasal spray product. They conducted four weeks of preliminary international data-driven research, focusing on perceived pollution in some of Europe’s most polluted cities. “This experiment revealed that in certain countries, customers rely heavily on pharmaceutical professionals for quick healthcare tips,” Simonluca explains. This insight allowed the team to prototype a solution tailored to this reliance, marking a successful experiment that Simonluca is particularly proud of, and which shows his experimentation superpower.

Experimentation at Philips

Within Philips, experimenting and testing have become increasingly integral to their processes. Over the past few years, Simonluca’s team has worked diligently to help business and product owners understand the value of their data-driven and design processes. “We’ve emphasized how an experimental and iterative approach can significantly enhance our final deliverables,” he explains. Although they are still striving for the ideal, fully integrated cross-disciplinary scenario, substantial cultural improvements have been made.

UX and AI: The Start of a Massive Revolution

When it comes to the influence of AI on his work, Simonluca observes that many design professionals in the UX industry are currently exploring AI tools. These tools are used to complete some tasks or create basic foundations, such as suggesting questions for user interviews, creating personas, or generating variant designs for A/B testing, but this is only the start of it. “We are just at the beginning of a massive revolution,” he remarks, expressing both curiosity and apprehension about how AI will impact many current jobs.

Convincing stakeholders: Never discuss data in isolation

Despite his successes, Simonluca’s experimentation journey has not been without its challenges. Convincing senior stakeholders to adhere to the integrated optimization cycle, which includes data-driven research and customer validations, has been a significant obstacle. “Often, stakeholders are tempted to bypass these steps to save time, resources, or budget when launching an experience,” Simonluca notes. Even so, he has demonstrated that the optimization cycle, based on both quantitative and qualitative validation methods, is crucial for organizations to achieve a high return on investment (ROI). Additionally, ensuring that end-to-end research insights are effectively deployed from the initial pre-analysis phase through to development and implementation requires close collaboration with multiple teams.

Through this experimentation journey, Simonluca has learned a valuable lesson: “Never discuss data in isolation; always contextualize it with a narrative and tailored storytelling,” he emphasizes. This approach has proven to be a game changer in his experience of convincing stakeholders.

Use storytelling to make data compelling

Finally, Simonluca offers valuable advice for aspiring experimentation heroes. “A true experimentation hero is first of all an advocate for users and a champion of the design process,” he advises. They bridge business conversations and user needs through design thinking methodologies, facilitating decision-making and co-creation with multidisciplinary teams, especially those focused on data and optimization. “Most importantly, an experimentation hero brings fun and positivity to the team and uses storytelling to make data compelling.”

Navigating the Evolving Landscape of Experimentation: Challenges and Insights

The realm of experimentation is in constant evolution, shaped by a multitude of influences, perspectives, and advancements. From the difficulty of procuring reliable data under increasingly stringent privacy regulations, to major corporations relinquishing their foothold in experimentation ventures, these factors collectively create a complex backdrop. For enterprises venturing into experimentation for the first time, these dynamics certainly don’t pave an easy path, and they serve as the basis for a rich discussion. In this interview, we talk with Maarten Plokker (SiteSpect) as we delve into these topics.

Maarten Plokker is the Managing Director of SiteSpect Europe. SiteSpect is one of the sponsors for the 2023 edition of DDMA Experimentation Heroes, scheduled for October 23rd. On this day, the most outstanding experimentation heroes from the Dutch Market will gather to showcase their finest experiments on stage. If you’re keen on participating or attending, you can find additional details at experimentationheroes.com.

Q: Gathering reliable data nowadays is easier said than done. What leads to this difficulty?

A: Privacy laws have and will continue to tighten. With a rising focus on user privacy, browsers now have built-in privacy features activated by default. We appreciate this as users, however, for those curating user experiences, these changes have impacted data previously used for enhancing and A/B testing digital interactions. Browsers have all but eliminated the use of third-party cookies. This has been an issue for the ad-serving industry, but a non-issue for experimentation since testing platforms deal with first-party cookies. However, many browsers now also delete first-party client-side cookies after 7 days, some in as little as 24 hours. This means a returning user might be repeatedly treated as a new user if they leave the site and don’t return for a period of time. This not only dramatically skews data on things like loyalty and new user acquisition, it also creates a whole segment for which you can’t provide a personalized, sticky user experience. And without any evident errors, everything appears to be working fine in terms of personalization, data collection, and data accuracy, but it’s not – far from it. Skewed data, bad user experience, the inability to personalize and experiment on a segment of users…it’s very counterproductive to experimentation and CRO. With client-side tools, you also have issues like JavaScript errors, flicker, and latency, and these all can negate the “improvements” you’re trying to test. While SiteSpect natively addresses all these issues, most tools don’t, unless you move to a more server-side approach.
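To make the cookie-lifetime point concrete: a cookie written from JavaScript (`document.cookie`) can be capped at 7 days or even 24 hours by browser tracking prevention, while a first-party cookie delivered in an HTTP `Set-Cookie` response header is generally longer-lived. The sketch below builds such a header; the cookie name and duration are illustrative, and exact browser behaviour varies and changes over time:

```python
from datetime import datetime, timedelta, timezone

def build_set_cookie(name: str, value: str, days: int = 365) -> str:
    """Build a long-lived first-party cookie as an HTTP Set-Cookie header.

    Server-set cookies are generally not subject to the 7-day/24-hour
    lifetime caps some browsers apply to cookies written via JavaScript
    (document.cookie), though browser policies differ and evolve.
    """
    expires = (datetime.now(timezone.utc) + timedelta(days=days)).strftime(
        "%a, %d %b %Y %H:%M:%S GMT")
    return (f"{name}={value}; Expires={expires}; Max-Age={days * 86400}; "
            f"Path=/; Secure; HttpOnly; SameSite=Lax")

print(build_set_cookie("visitor_id", "abc123"))
```

This is one reason server-side (or proxy-based) delivery tends to preserve returning-visitor identification better than purely client-side tooling.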

Q: You used the terms client-side and server-side a bit just now. Could you educate us a bit on the distinctions between the two and why some organizations or agencies might choose one or the other?

A: Sure, let’s start with some quick definitions. Client-side pertains to changes and tests done at the browser, while server-side involves changes and tests delivered from the servers. SiteSpect is capable of doing both, because it uniquely alters things in the middle of that flow, but more on that later. In traditional client-side experimentation, a browser receives a web page, and the page gets rendered in the browser. Then, code is used to reach out to a third party and modify the experience after the page is loaded and collect data on it. A tool like the now-retired Google Optimize is a prime example of a client-side tool. Client-side tools are typically user-friendly and easy to get going but have limitations. While they can assess elements like copy, images, and promotions, they can’t optimize more complex elements like checkout flow, shipping thresholds, search and recommendation algorithms, or test mobile. They also suffer from the data and performance issues I outlined a few minutes ago.

Server-side experimentation addresses those performance and data issues and allows for more advanced use cases. However, server-side is far more resource-intensive, requiring coding and developers, and typically constrains experimentation to release cycles. Often a transition to server-side moves control from Marketers to Developers, with the hope that the Product team can and will balance marketing, feature, and UX priorities equitably. Still, whether it’s client-side, server-side, or both, we’re often talking about siloed tools that confine teams, because the tools themselves are almost always offered with different implementations, logins, and interfaces, and they are designed based on different technical skill sets. SiteSpect is the only tool that straddles these two worlds by making changes between the client and server, and so it combines the benefits of both client and server-side and doesn’t drag with it the issues of either, and it does so in a single user interface which is something few tools do.
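One building block shared by most server-side (and hybrid) experimentation platforms is deterministic bucketing: hashing a user ID together with the experiment name so that any server assigns the same variant without shared state. The following is a minimal, generic sketch of that idea, not SiteSpect’s actual implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically assign a user to a variant.

    The same (experiment, user) pair always hashes to the same bucket,
    so assignment needs no stored state and is consistent across servers.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return variants[int(bucket * len(variants)) % len(variants)]

print(assign_variant("user-42", "checkout-flow-v2"))
```

Because the experiment name is part of the hash input, a user’s bucket in one test is independent of their bucket in any other.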

Q: You mentioned Google Optimize and its recent sunset, and I recently heard Oracle is sunsetting its Maxymiser product as well. What do you make of that? Is it a coincidence?

A: I don’t think one necessarily has to do with the other, but that doesn’t mean it’s only a coincidence. Some of these larger companies like Google, Oracle, and I could even throw Adobe in there as well, they do many things, but experimentation has never been at their core. It has seemed more like an afterthought. To their credit, these companies keep a keen eye on their longer-term strategies and their core focus and are willing to bail on things that aren’t part of that. There are companies out there that are 100% focused on experimentation and have really honed their offerings. I think that plays into a larger company’s appetite to continue to develop, market, sell, and support what has become a set of sub-par experimentation tools. So that’s what I think happened there. We’ve already converted a ton of former Google Optimize customers. I’m almost sad that the sunset deadline has passed because it’s been good for business [smiles]. I’ll be watching the movement with Oracle, as their announcement is fairly recent. Separately there’s a host of companies like Optimizely that were once heavily focused on experimentation, but they’ve gone through acquisitions and mergers, etc. and the market feedback shows they are diversifying their R&D and sales focus significantly towards the other offerings that are now part of their portfolio. It’s all pretty interesting.

Q: It certainly sounds like there is a lot of movement out there. So what guidance can you give organizations who are embarking on experimentation for the first time, who have been displaced by their tool sunsetting, or who are looking to expand from pure client-side to server-side?

A: There are things you want to consider about your organization, your UX priorities, your culture, and your technology disposition. We actually have a short and fairly solid eBook we put out to help organizations walk through that process. Anyone can contact me for a copy of that. Most organizations have a build-or-buy question to answer, and the answer may differ based on the technology and what it’s providing to their competitive edge and overall strategy. We sometimes see super large enterprises with a ton of developers build their own experimentation platforms, but it’s rare, and when we see it, it’s almost always of the server-side variety. However, those companies are still often functionally behind a fairly mature experimentation space. So while we may see that with a Netflix or Booking.com, for the most part, we still see most organizations, midsize, large, and even very large enterprises buying solutions like ours, and that’s because they want something that already brings Marketers, Developers, Product Managers, and CRO Specialists under one platform umbrella. They want client-side, server-side, personalization, API transformation, and product recommendations in one interface, and they want to include agency-level optimization consulting as extensions of their own teams.

To conclude

In the dynamic landscape of experimentation, where influences, regulations, and technological advancements shape the path forward, the challenges and insights discussed in this interview provide a profound understanding of the intricacies involved. We hope that Maarten Plokker’s insights shed light on the evolving challenges of data privacy regulations, the transition from client-side to server-side experimentation, and the changing dynamics of major players in the field. As organizations embrace experimentation to enhance user experiences and drive growth, the complexities of choosing the right approach, tools, and strategies become paramount. With the sunsetting of tools like Google Optimize and Oracle Maxymiser, a pivotal shift is observed, reflecting the market’s demand for dedicated, specialized solutions. As the experimentation journey continues, organizations must carefully consider their unique needs, technological landscapes, and collaborative aspirations, aiming to strike a harmonious balance between innovation and stability. In this era of transformation, this combination illuminates the path forward.

How can you innovate continuously within CRO?

CRO is all about growth: growth through optimisation and innovation. CRO as a discipline is growing too, and is therefore in full development. That is why it is important for a CRO organisation to continuously optimise and innovate, according to Maurice Beerthuyzen, Managing Consultant at Clickvalue (sponsor of the DDMA Dutch CRO Awards 2022). In this interview we asked him how to make that happen.

Hi Maurice, you say the CRO way of working is a good method for innovating within your organisation. How do you set up your CRO practice for that?

‘CRO specialists optimise and innovate continuously, whether it concerns platforms, email, or other forms of communication. Every CRO practitioner knows how important it is to substantiate optimisation opportunities with data, to distinguish problems from solutions, and to validate those solutions. You can apply this same CRO way of working to your innovation practice. Base your growth opportunities on data; think of trend reports and stakeholder interviews, for example. What are the most important market developments? How are your services valued inside and outside the organisation? Then prioritise your growth opportunities carefully and validate them together with stakeholders. This is very important: you have to innovate together. Only then does everyone involved understand what you are working towards and which innovation is intended. We do this ourselves through “Change OKRs”. Our discipline leads draw these up and present them to the team. The underlying initiatives can be picked up by any team member. This way they set a clear course and create involvement and commitment within our teams. They ensure that everyone on the team is responsible for some part of the innovation process.’

What do you see happening within the CRO field that offers room for innovation?

‘CRO is developing at a tremendous pace, and you increasingly see it reappear as a methodology within existing product, design, and development teams. CRO as a term does not really exist within our organisation, because to us it paints too generalist a picture of the field and does not do justice to the other essential specialisations within it. Think of specialisations such as research, design analysis, design, psychology, copywriting, project management, and so on. Innovation also takes place within these disciplines, and it is important to focus on them specifically as well. That is why we offer our experts the opportunity to specialise further in their own field, and not just in CRO broadly.

UX design is a good example. Because we train specialists in their own field within CRO, they fit well into existing CRO, optimisation, or product-owner teams, and everyone has an equivalent level of knowledge. This makes it easy to assemble hybrid teams and to fill any knowledge or competence gaps.’

Within the market there is a growing need for a centrally available, uniform view of the customer, something we pay a lot of attention to at DDMA. Are there innovations for achieving this?

‘We do indeed see that a large part of the market does not store the customer insights it collects in a uniform or central way. This often still happens in folders, with legacy software or with (overly) complex systems. The result is insufficient time to unlock that information in the right way, even when the right tools are available. This reduces the (re)usability of the insights collected.

Choosing a tool is therefore not enough; you need to set up a clear, simple workflow around it. We naturally have our tool preferences, but we can also apply our knowledge-base workflow to software that clients already use. In it we can bundle insights from decades of A/B testing and collect input for specific parts of the CRO process. Think, for example, of the opportunity-refinement process, where such a knowledge base helps us prioritise and sharpen opportunities. It can also help solution designers find the best solution. A knowledge base adds value everywhere in the CRO process. Structuring the data logically, but also checking that data against predefined criteria, is part of this workflow.’

We are optimising more and more within the data-driven marketing sector. Along with that, marketing practices in most disciplines are becoming ever more advanced and complicated, and differences in maturity across the market are large. Think, for example, of complex AI models and machine learning. Is this also the case in CRO? And if so, are there innovations that help you keep up?

‘Yes, in CRO too, execution is becoming increasingly complicated. The growing complexity of experiments, and the accompanying differences in maturity within CRO teams, is clearly visible. This can affect the speed and output of your programme. What helps here is building flexibility into your approach, for example by embracing multiple validation methods in your CRO practice. That way you can assess situationally which type of validation delivers the most value at which moment, keep delivering insights, keep learning, and avoid grinding to a halt.

Some examples:

  • Attitudinal or qualitative validation: depending on the hypothesis, this can help brand teams assess which change contributes to their goals. You can also set it up as a pre-validation instrument to increase the chance of success of a subsequent quantitative experiment.
  • Fast & slow track validation: by defining different types of experiments in your CRO pipeline, you can run several types of experiments in parallel and ensure continuous programme output.
  • Behavioural validation: if you are purely interested in behaviour, you can consider deploying customer-experience tools alongside your analytics stack, which often gives you richer insights into behaviour.
  • Weighing LEARN versus EARN in your experiment design: are you experimenting to learn, or experimenting to earn? Making this trade-off more often allows you to serve a wider variety of CRO goals.

This way you can weigh, per situation, which type of validation is most suitable, which can sometimes yield real gains: lower costs, faster learning, more output, and so on.’

Are there any pitfalls to watch out for in taking your CRO practice to the next level?

‘You have to be careful not to stray too far from the substance. Applying CRO properly also involves a lot of project and process management, often manual work. For some people this is even the less enjoyable part of the job. However, many activities within CRO can perfectly well be automated. Think, for example, of:

  • Automatically monitoring tests
  • Automatically evaluating tests
  • Automatically running, pausing, and re-running tests
  • Automatically sending out “decision-supporting” insights
  • Automatically launching and releasing.

That way you keep more time for the substance and for the human side of the work, which is exactly what makes CRO so enjoyable.’
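As an illustration of the first automation idea, automatic test monitoring, one common guardrail is a sample-ratio-mismatch (SRM) check: flagging a running test whose observed traffic split drifts from the configured allocation. A minimal sketch, with an illustrative alpha threshold:

```python
from statistics import NormalDist

def srm_check(n_control: int, n_variant: int,
              expected_split: float = 0.5, alpha: float = 0.001) -> bool:
    """Sample-ratio-mismatch guardrail for an automated test monitor.

    Returns True when the observed traffic split deviates from the
    configured allocation more than chance plausibly allows, which is
    a common signal to pause the test and investigate.
    """
    n = n_control + n_variant
    observed = n_control / n
    se = (expected_split * (1 - expected_split) / n) ** 0.5
    z = (observed - expected_split) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return p < alpha

print(srm_check(10_000, 10_000))  # balanced split: False
print(srm_check(10_000, 11_000))  # skewed split: True
```

Running a check like this on a schedule (and wiring it to a pause action) is one concrete way to automate test monitoring.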

Maurice Beerthuyzen is Managing Consultant at Clickvalue, one of the sponsors of the DDMA Dutch CRO Awards 2022, which will be presented on 3 November at B. Amsterdam. Want to know which nominees will win? Order your ticket now.

How to build a successful optimization program? This is how you do it

Building a successful optimization program is the holy grail for many companies. ‘It depends on careful planning, implementation, and measurement’, according to Maarten Plokker, Managing Director at SiteSpect Europe (Sponsor of the DDMA Dutch CRO Awards 2022). Easy to say, you might point out. How do you achieve this then?

Basically, you need to include three critical elements when implementing a profitable optimization program:

  1. Forming a Great Experimentation Team
  2. Creating an A/B Testing Plan
  3. Getting Your Stakeholders on Board

In this article I’ll elaborate on each of these elements one by one.

Forming a Great Experimentation Team

Experimentation involves a lot of different skills coming together in unison. This should always happen under some degree of project management and supervision, with the right central processes to hold and oversee all skills and stakeholders, to guarantee that everything runs on time. Most companies start out with a conversion manager/specialist, or a similar person, whose background is likely to be in marketing, analytics, UX, or a mix of these. If your company wishes to have an in-house Experimentation Team (no outsourcing to agencies), you need to consider all the skills required for this team:

  • Analytics & Research – the ability to listen to and understand customers using multiple different sources of data, and to be able to translate the data into insight and ideas and generate hypotheses for experimentation.
  • Design & Creative – the ability to both visualize and realize the application of a test hypothesis on the front-end user experience.
  • Development and Technology – the ability to build and produce the designed experience, as well as support the implementation and maintenance of the various technologies required for the whole operation.

Can your company afford to hire someone for each of these roles? Or do you need to outsource? Once your company has determined how these skills will be represented, you can create an image of the team you want to put together with the allocated resources. Once you have this image there are two more things you need to consider to actually start building your team: gathering leadership support and socializing team efforts.

  1. Leadership support includes gathering digitally minded executives who champion A/B testing projects and see the opportunities available for it. You want to convince them of wanting their names associated with the financial gains that A/B testing so commonly produces. They also need to recognize the value of using A/B testing to prevent mistakes from happening so they can report saving the company millions as they are reporting incremental revenue. Having leadership support for the Experimentation Team will help sustain your optimization program.
  2. Finally, make sure to socialize and monetize the Experimentation Team’s efforts throughout the company, top down and bottom up. Doing this provides clarity to other internal teams about why their content is changing in a seemingly random way, and it creates accountability for the Team to produce more. It gives them the platform they need to talk about their successes and failures.

Creating an A/B Testing Plan

Companies with highly successful A/B testing and optimization programs have a very formal process for requesting and planning A/B tests. Despite the fact that some in our industry love to proclaim, “A/B test early, A/B test often, A/B test aggressively,” the reality is that good A/B testing can be quite an intensive process, and the results really depend on the effort that is put in. Without a structured plan for A/B testing, it is incredibly easy to end up with meaningless data, wasted time, and frustrated internal stakeholders.

Still, developing an A/B testing plan is quite simple. The following questions are a good place to start.

  1. What is being A/B tested?
  2. Why is it being A/B tested?
  3. What are the expectations for the A/B test?
  4. What are the measures of success for the A/B test?
  5. What are the risks associated with running the A/B test?
  6. What internal resources are required to run the A/B test?
  7. Who is requesting the A/B test?
  8. By when are results needed?

Individually, each of these questions is relatively easy to answer. Some are technical (#5 & #6), some are theoretical (#3), and some are political (#7 & #8). The best answers are not page-long explanations; rather, they are concise explanations designed to help the Experimentation Team best plan for the deployment of the A/B test.
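As an illustration (not part of the original process description), the eight planning questions could be captured in a simple intake record that the Experimentation Team fills in per request. All names and field choices here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ABTestPlan:
    """One intake record per requested A/B test; the fields mirror
    the eight planning questions above."""
    what: str                 # 1. what is being A/B tested
    why: str                  # 2. rationale / hypothesis
    expectations: str         # 3. expected direction and size of effect
    success_metrics: list     # 4. measures of success
    risks: str                # 5. known risks of running the test
    resources: list           # 6. internal resources required
    requested_by: str         # 7. who is requesting the test
    results_needed_by: str    # 8. deadline for results

    def is_complete(self) -> bool:
        # Concise answers are fine; empty answers are not.
        return all([self.what, self.why, self.expectations,
                    self.success_metrics, self.risks, self.resources,
                    self.requested_by, self.results_needed_by])
```

A record like this makes the prioritization step concrete: requests with empty fields are sent back before they ever reach the schedule.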

Most people initially get stuck answering questions three, four, and five. Measures of success and risks associated with A/B testing are important enough issues that they merit their own best practices. Expectations are tough to set, at least until you start to get the hang of A/B testing, because it is nearly impossible to predict whether a change will result in a substantial improvement, a small improvement, or a net decline.
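Expectations still matter, though, because the effect size you hope to detect drives how long a test must run. As a back-of-the-envelope sketch (not from this article), the standard two-proportion sample-size approximation shows the trade-off; all numbers below are illustrative:

```python
from statistics import NormalDist

def required_sample_size(p_base, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion
    z-test, given a baseline conversion rate and a minimum
    detectable effect (both as absolute fractions, e.g. 0.05 = 5%)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_var = p_base + mde
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / mde ** 2
    return int(n) + 1  # round up to whole visitors

# Detecting a 0.5-point lift on a 5% baseline needs far more
# traffic per variant than detecting a 5-point lift:
subtle = required_sample_size(0.05, 0.005)
obvious = required_sample_size(0.05, 0.05)
```

This is why "what are the expectations?" is worth answering even roughly: a team hoping to detect a subtle lift on a low-traffic page may discover the test cannot finish before the deadline in question eight.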

Your Experimentation Team should plan to have a formal A/B testing plan documented and ready to go when you start to socialize the group with senior stakeholders. The presence of this document and a few examples of the kind of information you’re looking for will go a long way towards demonstrating that you are serious about A/B testing. Most senior executives have seen enough ad hoc exercises designed to drive incremental improvement during their tenure to appreciate the level of consideration a plan conveys and understand the likelihood of failure in the absence of an A/B testing plan.

Requiring a formal A/B Test Planning document from anyone in the organization wanting to leverage the Experimentation Team will allow the team to insert A/B tests into a long-term schedule prioritized by opportunity, risk, and political considerations. While I don't recommend creating a timeline so rigid that real opportunities are lost (A/B testing is frequently an opportunistic endeavor, especially when there is a high level of awareness about A/B testing efforts), having this "roadmap" for A/B testing projects dramatically improves each A/B test's likelihood of being successfully executed.

Ultimately, the goal of requiring a formal A/B test plan is to drive home an appropriate level of seriousness and rigor about A/B testing in your organization. Especially if your results are similar to those of the companies interviewed for this research, your successes will breed the desire to create more successes. If any product manager who walks through the door can have his or her A/B test jump the queue with little more than a wave of the hands and "make the button more blue," then you are destined to struggle to get your A/B testing program off the ground.

Conversely, if you provide clear guidance about what is required and how the requirements will be evaluated and slotted, at least in our experience, you will soon exceed your expectations and be well on your way to success.

Getting Your Stakeholders on Board

As mentioned above, management and leadership support for A/B testing projects is critical. Having buy-in from members of the senior management team / C-Level will make or break A/B testing efforts. Work with these stakeholders from the beginning—in concert with the executive sponsor—and directly solicit their feedback, suggestions, and ideas that can be A/B tested by the newly formed Experimentation Team.

Another consideration is to establish a “Multivariate Testing Steering Committee” made up of senior members who are helping to decide what will be A/B tested, when, and how. I recommend socializing the A/B testing program with senior management early on. You will undoubtedly need their support to assemble the Experimentation Team and will often need budget, approval, or assistance getting A/B testing technology implemented. By approaching management with a clear plan for success, you are far more likely to gain their critical support and validation for your work.

Summary

By creating and implementing the three critical elements outlined, your company will be set up for a higher rate of success when it comes to optimizing the customer experience. With a team, a plan, and stakeholder support in place, the next phase is making sure your tech stack can support your goals and objectives for experimentation.

If you are considering an optimization platform, we encourage you to download our Ebook: Choosing an Optimization Platform.

Maarten Plokker is Managing Director of SiteSpect Europe, sponsor of the DDMA Dutch CRO Awards 2022. During the award ceremony on November 3 in B. Amsterdam, we will crown the very best CRO cases the Dutch marketing industry has to offer. Do you want to attend? Get your tickets at: dutchcroawards.nl/koop-tickets

Berber Bijlsma (GrandVision Benelux) & Thecla Goossen (ClickValue): "Pay more attention to optimizing the service-oriented parts of your website"

A/B tests are often run with an eye to increasing hard conversions. For many companies, however, it can also be well worth optimizing the online service side of the organization, especially for products that are more complex to purchase, such as glasses and lenses, say Berber Bijlsma, online analyst at GrandVision Benelux, and Thecla Goossen, UX Designer & Researcher at ClickValue, one of the sponsors of the DDMA Dutch CRO Awards 2021.

CRO at Pearle and ClickValue

Pearle is one of the brands operating under GrandVision Benelux, which is part of the global optical retail group GrandVision. Within GrandVision Benelux, an e-commerce team works across these four brands, Bijlsma explains. "On the back end, many of GrandVision's brands are set up and configured in the same way, which means we can apply the learnings we gain at one brand to the other brands as well."

ClickValue works with GrandVision (Global) to help all the countries as effectively as possible with their CRO programs, says Goossen. "We meet with each country every other week to discuss our ideas and those of the countries themselves. From ClickValue we then assess whether those plans can really add value for the country in question, but also whether they could be carried out in multiple countries. If we want to roll something out internationally, we usually test it in three countries first and compare the results, so that from the global level we can give advice to all countries."

CRO at Pearle is not only about revenue, but also about service

An important KPI for Pearle is, of course, revenue, but it is not the only one that matters, says Berber. "Alongside revenue, the number of people who book an appointment online, for example for an eye test, is very important, because consumers can only buy glasses or lenses with a recent eye measurement. It is therefore one of the gauges we use to judge whether or not we are doing well. We are also increasingly steering towards an omnichannel Pearle experience. It then doesn't matter much whether customers buy online or offline, as long as they buy from us. On top of that, we notice that optimizing service-oriented parts of the Pearle website, such as our online eye test, also helps drive hard conversions. You serve the customer better and you generate more revenue. A real win-win."

Goossen also notes that Pearle is moving further away from pushing for hard conversions as quickly as possible and is focusing more on the service side of its website. Goossen: "It can really be worth reserving part of your website to put service front and center, and optimizing it by investigating how you can weave it into the e-commerce flow as well. That way you actually help people make a choice. Not every e-commerce company may be suited to this, but at Pearle it is certainly paying off, given the complexity of the products."

CRO programs depend on an organization's maturity

How ClickValue approaches CRO with its clients is a matter of maturity, Goossen explains: "At the start you have to build support. That can be very simple: test a lot, evaluate, and discuss results. You really have to take clients by the hand and go through the process with them. That way you naturally get everyone on board. When organizations are somewhat more mature, we often take a more critical look at each opportunity and work it out in more depth to decide whether a test should be run. By building such a business case around a solid opportunity, you put yourself in a position to learn from a test and carry those learnings into follow-up tests. That creates a cycle that keeps contributing to further optimization of the website through deeper learnings."

At Pearle, CRO has now had its place for four years, first locally and now within the global program in collaboration with ClickValue, says Berber.

CRO and cookies

Across ClickValue's sister labels, Goossen sees changes coming in data collection related to third-party cookies. Goossen: "The way we collect data is going to shift from client-side to server-side. These are definitely options to explore so that we can keep using the data to test properly. Despite any changes, I am glad the cookie discussion is being held. It shines a light on data collection and why exactly we do it. If consumers understand this better, and it becomes clear that we do not track them as individually as the debate often suggests, people will be more understanding about it."

The most fun A/B tests: surprising results that drive change

As a CRO specialist you naturally run a lot of tests, some of which show no differences or call for follow-up research. A test that produces a number confirming that a change really should be made, after plenty of debate about it, is therefore the most fun of all, Bijlsma finds. "My favorite tests are, of course, the ones that show the biggest differences."

Goossen finds the tests with the most surprising results the most interesting. "As a researcher, I find it exciting when a result catches me off guard. For example, we once tested a 'sticky check-out button' on the shopping-cart page, keeping the button to the check-out always visible, expecting that this would make it easier to reach the next step. Nothing could be further from the truth. It turned out that because the price scrolled along as well, we were actually drawing attention to the part the visitor enjoys least, so the change did not immediately produce a positive result. We did see a change in behavior, though, which indicates that the 'sticky check-out button' does do something to the visitor. So we will simply keep testing, building on this learning."

Let customers take a longer path, and hire an online analyst

The shortest route bears the most fruit: that is the classic assumption among CRO specialists. But it is not entirely true. Goossen's tip is that, certainly for the service-oriented parts of a website, the shortest route is not necessarily the best: "Limiting the number of clicks visitors need to complete a conversion as much as possible is not always the best approach. Sometimes, when a part of a website leans more on service and information provision, it can actually pay off to take someone by the hand and let them walk a somewhat longer path, instead of showing them a button as quickly as possible. Within the CRO world we really could pay more attention to this. It simply serves the customer better."

Finally, Bijlsma stresses the value of the online analyst, a role you absolutely cannot do without for A/B testing. Bijlsma: "Although this role is still often underestimated, we are becoming more and more important. An online analyst can spot the pain points in the route consumers take and map out exactly where they drop off."

Berber Bijlsma is an online analyst at GrandVision Benelux. Thecla Goossen is a UX Designer & Researcher at ClickValue, one of the sponsors of the DDMA Dutch CRO Awards 2021, which will be presented on November 4 at Sieraad in Amsterdam.