There’s a lot written by online marketing experts about A/B testing: showing two variants at random and seeing which performs better. That’s great if you have lots of opportunities to run the test. Unfortunately for those of us in engineering and scientific marketing, we don’t usually have enough interactions on our websites or in our emails for any test to reach statistical significance.
You may be designing a landing page on a website where you’re expecting 10 people a day to look at it, and maybe 3 or 4 a week to actually respond to your offer. Unless the A/B test runs for months, it’s going to be hard to claim that A is better than B.
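To put a rough number on “months”, the standard two-proportion sample-size formula shows how much traffic a test like this really needs. Here’s a quick sketch in Python; the 5% baseline conversion rate and the 7.5% target are my illustrative assumptions, not figures from this page.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(p_a, p_b, alpha=0.05, power=0.8):
    """Visitors needed in EACH variant to detect the difference
    between conversion rates p_a and p_b (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p_a + p_b) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return ceil(numerator / (p_a - p_b) ** 2)

# Illustrative numbers: a 5% baseline conversion rate, and we want
# to detect an improvement to 7.5% (a 50% relative lift).
n = visitors_per_variant(0.05, 0.075)
days = ceil(2 * n / 10)  # both variants combined, at 10 visitors/day
print(f"{n} visitors per variant, roughly {days} days at 10 visitors/day")
```

Even detecting a generous 50% lift takes well over a thousand visitors per variant, which at ten a day is the better part of a year.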
I have worked on websites where the volume of interactions did allow us to draw sensible conclusions quite quickly. Nowadays I tend to apply the lessons I learned then to low-volume pages rather than reinvent the wheel. Perhaps the main one is to keep things linear and predictable: if you have a form, take people through it one step at a time, in a single column, then put a button at the end which says exactly what will happen (e.g. “Send me the brochure”). Let them focus on the task, and don’t distract them along the way.
We worry about what we’re committing ourselves to when we agree to something, however trivial it may be. It’s human nature. So the clearer we make the transaction, the more people are likely to accept it.
If you want to know more, there’s some great material out there on the psychological principles of high-converting websites.