
How not to make motivated participants quit - yes, this sounds backward, but it's not.

In the previous blog post we discussed how hygienic factors and motivators can influence participant satisfaction, engagement, and drop-out in the context of web-experiments. We're still continuing with the analogy of web-experiments as digital media products, so if you haven't read the previous post, go do that now!

Some participants are excited about your research, but they drop out when they're disappointed with the execution. The first thing you need to do is take care of the hygienic factors. Polish your user experience to the point where nothing intrinsic to the design irritates the user enough to make them quit. The user may still drop out because the battery is too long or the task is too difficult, but if they drop out because your instructions are poorly written or because your web page lacks usability, you lose a participant for absolutely no reason - and you know exactly how valuable that data would have been. Additionally, data from the participants who do persist despite the unfavorable conditions will be littered with artifacts that may be there simply because of limited usability.

It saves you a lot of time to do even the simplest usability studies before launch. Ask for a bit of feedback, try the think-aloud protocol with a fellow researcher, but do not launch blindly, because this is where you differ from production in a very important way: you have no iterative development process. Once your experiment is launched, it's up. No one goes altering their instrumentation in the middle of sampling.

Last week I participated in a poorly designed experiment where I had to click dots to draw lines between them on a map, with the instruction to connect all of the dots so that the total length of the line was as short as possible. The stimulus was crammed into the lower left quadrant of the screen, the dots were very small and required quite a bit of maneuvering to click, and if I clicked in the wrong place, the entire figure would disappear. On top of this, a distracting clock was constantly ticking in the left side panel, the colors clashed badly, and after four trials I noticed there were buttons with features that had never been presented in the training trials, such as an option to import a previously used pattern (which, upon clicking, deleted the pattern I had just completed, as my previous trial had been submitted with an empty response due to the disappearing-pattern bug). Needless to say, it was infuriating. I dropped out and sent an e-mail to the authors recommending some improvements.

Your participants will do the same. This is where you do customer support. Respond to them. During my previous research, I received a few bug reports and requests for improvements. I tried my best to answer them within the same day, whether it was something I could act on or not. If it was due to an intrinsic characteristic of the task - e.g. dissatisfaction with the wording of a standardized instrument or irritation at how difficult the mental rotation tasks were - I would respond saying that I very much appreciated their feedback and their participation despite the task being mentally strenuous, explain the idea behind the task, and admit that it was something I unfortunately could not fix right now. If it was a problem with the page, I'd ask for more details and we could typically resolve the problem very quickly (e.g. the participant had scripts turned off in their browser). If it was a genuine bug report that I simply could not reproduce, I apologised for the inconvenience and promised to look into it further. If it was general curiosity about the study, with questions about the instruments, I would answer as soon as possible, in language as clear as I could manage.

This is all very, very important, especially if you're sampling from several sources and hoping for a diverse, volunteering participant pool, because it promotes a positive experience with your product, and a positive experience promotes virality, which is what gives web-experiments their natural scope. I got very positive responses from communicating with participants, and many of the ones I talked to directly said they forwarded the experiment to their friends. The moral of the story is that you can turn a negative product experience, such as frustration with a particular task, into engagement and virality by making personal contact.

But as I mentioned, some things are out of your control. Test batteries are long and cognitive tasks are mentally straining, by definition. There will always be some level of drop-out that you cannot optimize away, which is why it's a good idea to take it into account when ordering your test battery. It's best to pipe data to your server throughout the experiment so that incomplete sessions can supplement your full completions, and it's wise to put your demographic items at the start if you want to know who dropped out.
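To make that last point concrete, here is a minimal sketch of what per-trial data piping can look like in a browser-based experiment. Everything here is an assumption for illustration - the /api/trial endpoint, the TrialRecord fields, and the session id scheme are hypothetical and would need to match your own backend - but the idea is simply to send each trial's data as soon as it finishes, rather than holding everything until the end.

```typescript
// Hypothetical sketch: send each trial's data immediately so that
// partial sessions from drop-outs are still usable.

interface TrialRecord {
  sessionId: string;
  trialIndex: number;
  taskName: string;   // e.g. "demographics", "mental_rotation"
  response: unknown;  // whatever the trial produced
  timestamp: string;  // ISO time, lets you reconstruct where drop-out happened
}

async function saveTrial(record: TrialRecord): Promise<void> {
  try {
    // Fire the request right after the trial instead of batching at the end.
    await fetch("/api/trial", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(record),
      keepalive: true, // lets the request complete even if the tab is closed
    });
  } catch (err) {
    // Never let a failed save interrupt the participant's session.
    console.warn("Trial save failed", err);
  }
}

const sessionId = crypto.randomUUID();
let trialIndex = 0;

// Demographics go first, so even early drop-outs tell you who left.
async function submitDemographics(answers: Record<string, string>) {
  await saveTrial({
    sessionId,
    trialIndex: trialIndex++,
    taskName: "demographics",
    response: answers,
    timestamp: new Date().toISOString(),
  });
}
```

The design choice worth noting is the ordering: because the demographics block is saved as its own trial before any cognitive tasks begin, every incomplete session still carries enough information to characterize who dropped out and when.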