Starting a Formal CRO Program & The Business Case for CRO

Beyond growing KPIs, a dedicated conversion rate optimization (CRO) program can help validate marketing approaches and test competing internal opinions while mitigating risk.

Grow ARR / Key Business Metrics

The most obvious reason to start a formal Conversion Rate Optimization (CRO) program is to increase Annual Recurring Revenue (ARR) and other key business metrics such as website leads, MQL (Marketing Qualified Lead) conversion rates, opportunity creation and sales pipeline, and e-commerce revenue. The best way to start a formal CRO program is with a detailed web analytics breakdown to identify key website growth areas, a running list of test ideas and hypotheses, and a prioritization of those ideas by estimated complexity and business impact.
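As a minimal sketch of that prioritization step (the fields, scales, and weighting below are illustrative, not from any specific tool), each idea can be scored by estimated impact relative to complexity and sorted into a backlog:

```typescript
// Illustrative prioritization sketch: score each test idea by estimated
// business impact relative to implementation complexity (both on a 1-5 scale).
interface TestIdea {
  name: string;
  hypothesis: string;
  estimatedImpact: number;     // 1 (low) to 5 (high)
  estimatedComplexity: number; // 1 (simple) to 5 (major build)
}

function prioritize(ideas: TestIdea[]): TestIdea[] {
  // Higher impact and lower complexity float to the top of the backlog.
  return [...ideas].sort(
    (a, b) =>
      b.estimatedImpact / b.estimatedComplexity -
      a.estimatedImpact / a.estimatedComplexity
  );
}

const backlog = prioritize([
  { name: "Pricing page CTA copy", hypothesis: "Benefit-led CTA lifts clicks", estimatedImpact: 4, estimatedComplexity: 1 },
  { name: "Homepage redesign", hypothesis: "New hero improves lead rate", estimatedImpact: 5, estimatedComplexity: 5 },
]);
console.log(backlog.map((idea) => idea.name));
```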

However, a qualified website experimentation leader can also influence other areas of the business by using experimentation in other ways.

Validate Strategic Marketing Approaches and Product Messaging

A/B testing and experimentation are also great ways to validate marketing strategy, existing audience assumptions, and product differentiation. High-traffic pages such as homepages, pricing, and product overviews are usually the best targets for this type of testing.

Example: Validating Free Trial Sign-Up Reasons

A major software company I worked with had a free trial sign-up flow for a remote computer support tool that ended on an email activation page, a screen that is consistent across SaaS companies with a freemium motion. Analyzing global Google Analytics data made it obvious that a large majority of users left this page open for 5+ minutes (we assumed they were activating their account on their phone or in another tab). I added an optional survey that popped up after 15+ seconds, deployed via Google Tag Manager to avoid integrating test code into the product code base, asking prospects to confirm their primary sign-up reasons. The idea came out of internal conversations about free trial login rates and observations that many customers would sign up for the trial and never log in again. Because users hitting this page had already signed up for the product, and because we actively monitored email activation rates for the test segment, the potential risk was limited.

On the original confirmation page, after 15 seconds we prompted users to confirm their primary reason for signing up for our remote support tool and gave them five potential answers, plus a sixth optional custom field.

  1. Solve a one-time support issue
  2. Compare to current software
  3. Evaluate against competitors
  4. Accidentally signed up for the wrong product
  5. Did not intend to sign up for a trial
  6. Other (custom field)

I also randomized the order of the answers to avoid position bias, where users simply select the first option.
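Below is a minimal sketch of how that kind of delayed survey can be injected client-side, for example through a Google Tag Manager custom HTML tag; the selectors, markup, and the /survey-response endpoint are illustrative assumptions, not the exact implementation we shipped.

```typescript
// Illustrative sketch of a delayed sign-up-reason survey injected client-side
// (e.g., via a Google Tag Manager custom HTML tag), so no product code changes
// are required. The endpoint and markup are assumptions for the example.
const ANSWERS = [
  "Solve a one-time support issue",
  "Compare to current software",
  "Evaluate against competitors",
  "Accidentally signed up for the wrong product",
  "Did not intend to sign up for a trial",
];

// Fisher-Yates shuffle so the answer order is randomized per visitor,
// reducing position bias toward the first option.
function shuffle<T>(items: T[]): T[] {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

function showSurvey(): void {
  const container = document.createElement("div");
  container.innerHTML =
    "<p>What is your primary reason for signing up?</p>" +
    shuffle(ANSWERS)
      .map((a) => `<label><input type="radio" name="reason" value="${a}"> ${a}</label><br>`)
      .join("") +
    '<label><input type="radio" name="reason" value="other"> Other: <input type="text" name="reason-other"></label><br>' +
    '<button id="survey-submit">Submit</button>';
  document.body.appendChild(container);

  document.getElementById("survey-submit")?.addEventListener("click", () => {
    const selected = document.querySelector<HTMLInputElement>('input[name="reason"]:checked');
    // Send the response to an (assumed) collection endpoint for later syncing
    // into the CRM as a custom field.
    navigator.sendBeacon("/survey-response", selected?.value ?? "");
    container.remove();
  });
}

// Only prompt users who keep the activation page open for 15+ seconds.
window.setTimeout(showSurvey, 15_000);
```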

Of the 93% of users who were eligible (those who stayed on the page for 15+ seconds), 19.2% submitted the survey, significantly higher than the 5%-10% response rate we expected. Over a few weeks of running the survey we received 215+ responses and were able to analyze the results in Salesforce via a custom field.

The results were surprising and helped us significantly improve both the product and free trial funnel, though that was not necessarily our initial goal with testing.

  • The largest share of users, 44%, selected "Solve a one-time support issue". Interestingly, these were primarily non-business/personal users, the least qualified for our sales team, and unfortunately often represented abuse of the product's intended use.
  • 23% to 26% selected either "Compare to my current remote support software" or "Evaluate against other competitors for a potential purchase". Notably, these respondents had the highest opportunity conversion rates and average opportunity value.
  • The most popular custom answer was that users were signing up as part of an active sales process. For this product, prospects had to create a free account before account managers (AMs) or product teams could upgrade their accounts.

Based on the survey results, we made significant changes to the free sign-up form to prioritize qualified leads and to score down one-time users so they were not routed to sales unless they interacted significantly with the product (a simplified scoring sketch follows the list below).

  • We created a custom sales referral URL to segment sales-assisted leads from our overall marketing traffic and automatically tier those accounts to a custom plan without the free account limits.
  • We added a permanent checkbox field to the form to determine whether users were solving a one-time problem, scoring those trials down so sales teams were not prioritizing leads that were solving a one-time issue and were unlikely to purchase.
    • We also scored down geographies that were especially prone to abuse and limited the number of remote sessions those accounts could initiate.
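The sketch below illustrates the kind of scoring-down logic those changes imply; the field names, point values, and abuse-prone-geography list are hypothetical stand-ins, not the actual scoring model we used.

```typescript
// Hypothetical lead-scoring sketch: de-prioritize one-time-issue trials and
// abuse-prone geographies unless the account shows meaningful product usage.
interface TrialLead {
  signedUpViaSalesReferralUrl: boolean; // came through the custom sales referral URL
  oneTimeIssueChecked: boolean;         // the "solving a one-time problem" checkbox
  country: string;
  remoteSessionsStarted: number;        // product usage since sign-up
}

const ABUSE_PRONE_COUNTRIES = new Set(["XX", "YY"]); // placeholder country codes

function scoreLead(lead: TrialLead): number {
  let score = 50; // illustrative baseline trial score

  // Sales-assisted accounts are segmented out and tiered to a custom plan,
  // so they keep a high score regardless of the other signals.
  if (lead.signedUpViaSalesReferralUrl) return 100;

  // Score down self-declared one-time users unless they show real usage.
  if (lead.oneTimeIssueChecked && lead.remoteSessionsStarted < 3) score -= 30;

  // Score down abuse-prone geographies.
  if (ABUSE_PRONE_COUNTRIES.has(lead.country)) score -= 20;

  // Significant product interaction earns the lead back into the sales queue.
  if (lead.remoteSessionsStarted >= 3) score += 20;

  return score;
}
```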

Test Conflicting / Competing Internal Opinions

Major website redesigns or rebranding efforts are often months-long projects with significant internal iterations and changes, especially when multiple department stakeholders are involved. Using A/B testing to validate ideas not only helps avoid project gridlock from competing internal stakeholders but can also help validate direction from different departments, ultimately ensuring better organizational buy-in across teams for a formal CRO program.

During these projects, I've created a backlog of post-launch experimentation ideas drawn from competing opinions. Occasionally these ideas have led to significant web growth, and publicly telling someone in another department that the test idea of theirs that was shot down turned out to be a winner does wonders for building cross-department buy-in, involvement, and excitement about experimentation.

Validate Impact Before Larger Development Investment

A/B tests are by nature built to run for a fixed amount of time and are faster and easier to launch or pause before standardizing the changes in a global website codebase or marketing website CMS. By testing changes with a smaller audience, you can also validate whether more significant development effort is warranted before integrating changes into the website (such as validating tests against desktop traffic only before building mobile-specific versions).
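As a minimal sketch of that desktop-only gating (the breakpoint and the activateVariant call are assumptions, standing in for whatever your testing tool provides), eligibility can be checked client-side before enrolling a visitor:

```typescript
// Illustrative sketch: only enroll desktop visitors in the experiment so the
// mobile-specific build can wait until the desktop variant proves its value.
const DESKTOP_MIN_WIDTH = 1024; // assumed breakpoint; adjust to your site

function isEligibleForDesktopOnlyTest(): boolean {
  const isDesktopWidth = window.innerWidth >= DESKTOP_MIN_WIDTH;
  const isTouchDevice = "ontouchstart" in window || navigator.maxTouchPoints > 0;
  return isDesktopWidth && !isTouchDevice;
}

if (isEligibleForDesktopOnlyTest()) {
  // activateVariant("new-hero-desktop"); // hypothetical testing-tool call
}
```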

Isolate Changes from Existing Code-Base

Unless engineering or web teams have already built an alternate experience and you are either redirecting or using feature flags to show/hide different experiences, A/B tests should be isolated from the standard website code base. CSS, JavaScript, HTML, and other assets (images/videos) do not need to be handled in the standard web build process, leading to faster creation, test launches, and test iteration. This is also another benefit of separating web operations and web growth functions.
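A minimal sketch of that isolation, assuming the challenger is applied client-side by the testing tool rather than shipped through the site's build, might express the variant as injected CSS plus a small DOM change; the selector and copy below are purely illustrative:

```typescript
// Illustrative sketch: a challenger expressed entirely as client-side changes
// (injected CSS plus a headline swap), so nothing touches the site's standard
// build or deploy process. The selector and copy are assumptions.
function applyChallenger(): void {
  const style = document.createElement("style");
  style.textContent = ".hero-cta { background: #0a7d32; font-size: 1.25rem; }";
  document.head.appendChild(style);

  const headline = document.querySelector<HTMLElement>(".hero-headline");
  if (headline) {
    headline.textContent = "Start your free trial in under a minute";
  }
}
```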

De-Risk Potentially Costly Changes

For high-risk/high-reward conversion testing, such as pricing page or form tests, A/B testing lets companies de-risk and validate changes without pushing them to 100% of traffic. By comparison, pre/post analysis is more cumbersome and less accurate. By limiting a test to a portion of traffic and then ramping up the split, or by running versions side by side, it's much easier to isolate and ultimately identify the business impact of changes.
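One common way to ramp exposure safely is deterministic bucketing: hash a stable visitor ID into a bucket and compare it against the current rollout percentage, so the same visitor keeps seeing the same version as the split widens. The hash and ID source below are illustrative assumptions, not a specific tool's implementation.

```typescript
// Illustrative sketch of ramping a test: hash a stable visitor ID into 100
// buckets and expose the challenger only to buckets below the current rollout
// percentage. Raising the percentage (10 -> 50 -> 100) widens exposure without
// reassigning visitors who were already bucketed.
function bucketFor(visitorId: string): number {
  let hash = 0;
  for (let i = 0; i < visitorId.length; i++) {
    hash = (hash * 31 + visitorId.charCodeAt(i)) >>> 0; // simple 32-bit hash
  }
  return hash % 100; // bucket 0-99
}

function inChallenger(visitorId: string, rolloutPercent: number): boolean {
  return bucketFor(visitorId) < rolloutPercent;
}

// Start at 10% of traffic, then ramp to 50% once early metrics look safe.
console.log(inChallenger("visitor-abc-123", 10));
```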

Example: GitHub Sign-Up Process for Open-Source Software

I launched a sign-up form test for a high-traffic, open-source product with hundreds of sign-ups per day. The product had recently launched a significant GitHub integration, but prospects rarely added the integration manually after signing up. We discussed and ultimately launched a flow that made account creation via GitHub the default, even though this led to a longer sign-up process and ultimately reduced the number of free account sign-ups.

I first launched this at 10% of traffic for a few weeks before ramping up to 50% to compare the funnels and lead quality in Salesforce. Initially, the challenger with the GitHub sign-up appeared to be a net loss (a 22% decrease in total sign-ups), but after a few weeks we saw 175% growth in trial-to-opportunity conversion rate against our default sign-up flow.
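As a rough back-of-the-envelope illustration of why the sign-up drop was still a net win, the reported deltas can be applied to an assumed baseline (the 1,000 sign-ups and 2% trial-to-opportunity rate below are placeholders for scale only, not our actual figures):

```typescript
// Rough back-of-the-envelope using the reported deltas. The baseline numbers
// (1,000 sign-ups, 2% trial-to-opportunity rate) are assumptions for scale only.
const baselineSignups = 1000;
const baselineOppRate = 0.02;

const challengerSignups = baselineSignups * (1 - 0.22); // 22% fewer sign-ups
const challengerOppRate = baselineOppRate * (1 + 1.75); // 175% higher conversion

const baselineOpps = baselineSignups * baselineOppRate;       // 20 opportunities
const challengerOpps = challengerSignups * challengerOppRate; // ~42.9 opportunities

console.log({ baselineOpps, challengerOpps }); // challenger wins despite fewer sign-ups
```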

Had we launched the new flow to everyone by default, the business would almost certainly have reverted to the original version out of concern about the sign-up decrease, since this flow fed our largest sales team. By testing, we mitigated risk and ultimately proved the business value of the new flow, justifying the development investment.