Why A-B split content testing is so good

Published April 9, 2015

If you already use subject line testing for your email campaigns, you will easily appreciate what A-B split content testing can do for you. As I tried to emphasise in my guide to testing, these types of tests are vital to your assessment of success, especially when calculating ROI. The principle is simple. Consider this example: as an email marketer, you have a database of 2,000 people and wish to send an email containing a discount code calculated to encourage sales from your e-commerce site.

How to get the best results from split content testing

You create two emails differing only in the nature of the Call to Action (CTA). In one, the CTA might read "Offer ends by 30th July 2012"; in the other, it would merely say "Limited Offer". By splitting the database into two halves and sending a different message to each, you can easily establish which CTA has the greater impact, giving you a guide for future broadcasts.

By carefully limiting the scope of what you change in each message, you can easily see where, if at all, you are gaining in terms of additional opens or increased click-throughs. The trick is to keep the differences between the two messages to a minimum; otherwise you risk being unable to identify which piece of content or which link produced the differences in the metrics. The example above, with its changed CTAs, is a perfect illustration of this: by limiting the differences to that single element, it becomes quite clear what caused the change in the metrics.

In the context of ongoing reporting, assessment and subsequent email auditing, this is a valuable tool. Regular split testing lets you assess what your audience is responding to and make whatever changes are required to maintain the desired level of response to your emailing.

Split testing automation

Now let's take this a stage further.
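Before automating anything, the baseline is simply a random split of the list. Here is a minimal Python sketch of that step, assuming a plain list of email addresses; the addresses, the fixed seed and the 50/50 split are all illustrative choices, not something the article specifies:

```python
import random

def split_list(contacts, seed=42):
    """Shuffle and split a contact list into two equal halves for an A-B test.

    Shuffling first avoids bias from the order the list was built in
    (e.g. by sign-up date). The seed makes the split reproducible.
    """
    shuffled = list(contacts)
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical 2,000-contact database, as in the example above.
contacts = [f"user{i}@example.com" for i in range(2000)]
group_a, group_b = split_list(contacts)
# group_a would receive the "Offer ends by 30th July 2012" CTA,
# group_b the "Limited Offer" CTA.
```

Randomising before splitting matters: if the list is ordered by sign-up date or purchase history, taking the first and second halves as-is would compare unlike audiences rather than unlike messages.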
What I have described thus far can easily be done as a simple manual process: split a list into two, send a different message to each half at approximately the same time, and wait for the reports. But what if this could be an automated process, operating in a similar way to the subject line selector you should already be using? What if you could choose a specific list and, let us say, two messages, then set up the system to send each message out alternately to a given percentage of the list? Once the test has been sent and the relevant metrics analysed by the system, the remainder of the list can be sent the more successful message. Of course, the system must let the user choose which metrics the success measurement is based on, and a degree of set-up choice must be available to decide what portion of the list receives the initial test. Here, then, is your automated process.

I have already indicated its usefulness in reporting, assessments and email audits; now consider how flexible the feature could be. You could test different aspects of a service as a CTA, the selling points of a product, or different banner headings; the possibilities are endless. I consider this tool an absolute must for all email marketing teams, from the smallest to the largest. With the right amount of user choice and automation, it is elegantly simple and extremely effective.
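The automated flow described above, where each message goes out alternately to a test slice of the list and the winner is then sent to the remainder, can be sketched in a few lines of Python. The `send` and `get_clicks` callbacks are placeholders for an email platform's delivery and reporting hooks, which the article does not name, and click-through rate stands in as the user-chosen success metric:

```python
def run_split_test(contacts, test_fraction, send, get_clicks):
    """Automated A-B split test: trial two messages on part of the list,
    then roll out the better performer to everyone else.

    send(address, variant)  -- deliver message "A" or "B" to one address
    get_clicks(variant)     -- total clicks recorded for a variant so far
    Both are hypothetical stand-ins for a real platform's API.
    """
    n_test = int(len(contacts) * test_fraction)
    test_pool, remainder = contacts[:n_test], contacts[n_test:]

    # Send the two messages out alternately across the test pool.
    variant_a, variant_b = test_pool[0::2], test_pool[1::2]
    for addr in variant_a:
        send(addr, "A")
    for addr in variant_b:
        send(addr, "B")

    # Compare click-through rates (the example success metric).
    ctr_a = get_clicks("A") / len(variant_a)
    ctr_b = get_clicks("B") / len(variant_b)
    winner = "A" if ctr_a >= ctr_b else "B"

    # Send the remainder of the list the more successful message.
    for addr in remainder:
        send(addr, winner)
    return winner
```

In a real system the comparison step would wait a set period for opens and clicks to accumulate before picking a winner, and both the metric and the test fraction would be user-configurable, as argued above.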