Recruiting your 'control' group – linchpin or afterthought?

Paul Wicks on the potential problem, and an innovative solution.

18 January 2007

Where do you get your control participants from? Posters? Newspaper ads? Anyone who's carried out research with human volunteers knows they can be hard to reach, and that meagre research funding often leaves little for incentives. But getting a group to take part could only be the start of your problems: through the recruitment methods you employ, you could inadvertently introduce a source of systematic bias which renders your findings meaningless. 'There is no correct method for inference from data haphazardly collected with bias of unknown size. Fancy formulas cannot rescue badly produced data' (Moore, 1995).

Perhaps because of this, control groups can seem like something of a guilty secret. Journal articles often go into great detail about the lengths the authors went to in order to recruit their patient groups, but control participants are usually given a brief mention as having been recruited 'through advertising' or 'a volunteer panel'. Control participants appear to be a mere afterthought. In this article I will argue that this part of the research process is in fact crucial to a successful study, and I will highlight a new strategy in place at the Institute of Psychiatry.

There are both extrinsic and intrinsic factors to consider in attempting to recruit a control sample with as little bias as possible. Intrinsic factors consist of the plans made by researchers in designing a study, allocating funds to recruitment, advertising, and selecting participants. Extrinsic factors are constraints imposed upon researchers by external agencies such as research ethics committees, funders, or their institutions. We can do more to control the intrinsic factors, but as a discipline, we must at least be vigilant for the possible biases that external constraints can exert upon sampling techniques.

Intrinsic factors – Is control recruitment a priority?

There are three key intrinsic factors to consider. First, the design of the project: what should a representative control group really look like? Second, advertising: how will you attract participants? Third, monitoring the participants you have recruited so far and adapting your recruitment techniques accordingly.

Before data collection gets underway, careful thought is required to construct a study which maintains 'internal validity', defined by Kazdin (2003) as 'when results can be attributed with little or no ambiguity to effects of the independent variable'. For example, at a basic level we might expect participant groups to be matched on age or years of education for a study using a neuropsychological test battery; however, this is not always the case, even in large published studies in prestigious academic journals.

For instance, in a study of cognitive change in people with motor neurone disease, Ringholz et al. (2005) recruited a control group that were, on average, 12 years older and had two more years of education than the patient group. Although these variables can be entered as covariates into the analysis it is preferable to recruit a matched control group, rather than assume a homogeneous distribution of test data.
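
As a minimal illustration of this point (the file name, column names and data below are hypothetical, not drawn from the study), one might first check how well the groups are matched and, where matching has failed, fall back on covariate adjustment:

    # A sketch, assuming a CSV with columns: group, age, education, test_score.
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    df = pd.read_csv('study_data.csv')
    patients = df[df['group'] == 'patient']
    controls = df[df['group'] == 'control']

    # How well are the groups matched on age and years of education?
    for var in ['age', 'education']:
        t, p = stats.ttest_ind(patients[var], controls[var])
        print(f"{var}: patients {patients[var].mean():.1f} "
              f"vs controls {controls[var].mean():.1f} (p = {p:.3f})")

    # Fallback: enter the mismatched variables as covariates (ANCOVA),
    # though a properly matched control group remains preferable.
    model = smf.ols('test_score ~ group + age + education', data=df).fit()
    print(model.summary())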

Furthermore, matching on a few selected variables is not sufficient to dismiss more fundamental questions about participant selection. There are many, sometimes surprising, potential sources of bias that any given recruitment method could produce. For instance, if looking at cognition in Parkinson's disease, a representative sample might be one composed of participants who appear virtually identical to the patient group in every way except the absence of illness; aged 50–75, in good health, with no previous history of other medical conditions. But is this group truly comparable?

On the basis of recent evidence one might argue that there may be a set of traits which define a 'Parkinson's personality' many years before the appearance of motor symptoms (Evans et al., 2006). Furthermore, Goldman et al. (2005) found a preponderance of Parkinson's patients with an occupational background in farming, teaching or health care. Would a truly comparable control group be matched only on basic demographic variables, or should the researchers seek to recruit healthy participants who share risk-averse personality traits and a similar mix of occupational backgrounds? To maintain internal validity we should always act to reduce ambiguity, which may mean thinking harder about who we recruit.

Once a suitable target population has been identified, the next key factor is advertising. When researchers wish to investigate a clinical population they can normally rely upon convenient and relatively well-maintained lists from hospitals, GP surgeries, or charitable organisations. By contrast, there are no corresponding lists of healthy people in the community who are willing to take part in research; they must be actively sought.

The internet is a cheap way to advertise, but is suitable only for those with a computer and internet connection. Posters in the community are also cheap, but might only attract those who are not at work during the day. Paid ads in newspapers may have a greater reach, but what sort of people read a particular newspaper, or indeed the classifieds section where such ads are typically displayed? The use of any one of these sources introduces an element of systematic bias which can't be easily untangled in the complete dataset.

In fact, the primary difficulty in control recruitment is usually awareness; not enough potential volunteers in the community are aware that they can participate in research or know where they can go for more information. Researchers have to reach out and think about what motivates volunteers to take part in research.

They may have altruistic reasons, such as wanting to help find out more about the mind or cure a particular disease. They may be particularly open to new experiences and find research interesting in itself. For a lot of volunteers though, it boils down to money. There are no national guidelines on how much a participant should be paid for doing what, but as a rule of thumb the more invasive or inconvenient a study is, the greater the reimbursement.

Researchers are discouraged from, or explicitly prohibited from, placing advertisements that emphasise how much money participants will be paid, but there is nothing to say that researchers can't produce higher-quality advertisements for their studies. Most advertising for research projects is frankly quite unappealing: Saatchi & Saatchi have little to fear from 'Volunteers wanted' typed in bold print on a piece of white A4 paper pinned to a noticeboard. The use of colour, aesthetic design, lamination to protect posters from damage, and branding should all be considered when creating appealing advertising materials.

To paraphrase Helmuth von Moltke, 'No plan survives first contact with the data'. For instance, it is quite common to find that as data collection approaches completion, there are significant differences in IQ between groups. It is important to monitor such discrepancies closely and make sure your recruitment policies react accordingly. For instance, you might decide not to accept any more participants who have completed a university degree, or you might want to redeploy your advertising materials.
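
A rough sketch of such a monitor, again with hypothetical file and column names and an illustrative threshold, might simply flag any demographic variable on which the accumulating groups are drifting apart:

    import pandas as pd

    def check_balance(df, variables, threshold=2.0):
        """Warn about any variable whose group means differ by more than threshold."""
        flagged = []
        for var in variables:
            means = df.groupby('group')[var].mean()
            gap = abs(means['patient'] - means['control'])
            if gap > threshold:
                flagged.append((var, gap))
                print(f"Warning: groups differ on {var} by {gap:.1f} - review recruitment")
        return flagged

    # Columns assumed: group, iq, age, education.
    recruits = pd.read_csv('recruits_so_far.csv')
    check_balance(recruits, ['iq', 'age', 'education'])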

Finally, any consideration of participants in research would not be complete without mentioning one convenient sample who are not only willing to participate, but in many cases have no choice: undergraduates. Whisper it, but findings from undergraduates may not generalise to the population as a whole.

And there is also recent evidence to suggest that undergraduates may make poor participants for other reasons: they may attempt to deceive experimenters about their naivety to specific tests, or they may pass on sensitive information to their peers about studies, despite signing non-disclosure statements (Tindell & Bohlander, 2005). Whilst this is potentially true of any population, it is less likely to occur in participants recruited from a variety of sources.

Extrinsic factors – Serving the public interest?

So, you have the perfect design for an important study. Your research worker happens to be a freelance graphic artist and creates some eye-catching posters and a website to attract participants. You have a pile of demographic data from a friendly advertising agency which tells you exactly how to recruit the people you want from the places you want at the lowest prices. What could remain in your way?

There are various bodies in the UK tasked with ensuring participant safety, minimising expenditure, or ensuring compliance with statutory regulations. It can sometimes feel like one is jumping through bureaucratic hoops, and inconveniences such as long ethics forms are frequently cited as barriers to research.

Overcoming such barriers is more than just an idle grumble; it can cost significant amounts of time, money and effort. Ward et al. (2004) discussed their experience of recruiting healthy participants for their UK case-control study of Creutzfeldt-Jakob disease (CJD). They had initially planned to recruit participants from GPs' surgeries, and had to apply through the COREC system for approval from 213 separate ethics committees.

Unfortunately, they suffered a poor response rate, and were prevented from following up their initial contacts with a telephone call by the research ethics committees, who wanted to avoid the possibility of coercion to participate. This and other factors meant that whilst it cost approximately £300 to recruit each case with CJD (a very rare condition), the corresponding cost of recruiting control participants was £1100 each.

A more concerning possibility is not just that external factors prove irritating, but that they can lead you to carry out your research in a way that systematically biases your data. An illuminating study by Junghans et al. (2005) investigated two methods of obtaining consent from potential volunteers. In a double-blind design, participants were randomised to be recruited using one of two procedures.

During the course of their regular follow-up appointment, those in the 'opt-in' condition (as dictated by most research ethics committees) were told they were eligible for a research study, and if they wanted to take part they had to telephone the research team to opt themselves in. If they hadn't responded in two weeks' time they were sent a reminder letter, but were not contacted further if there was no response. By contrast, those in the 'opt-out' condition were told that they were eligible for the study and that they would be receiving a phone call from the research team to enter the study unless they specifically opted themselves out.

Unsurprisingly, the response rate in the opt-out condition was significantly higher than the opt-in condition (50 per cent vs. 38 per cent). However, there was evidence that the use of an opt-in protocol introduced an element of systematic bias. Those in the opt-in arm who agreed to participate were significantly healthier than those recruited from the opt-out arm, suggesting that the recruitment methods dictated by external agencies on ethical grounds may in fact lead to a poorer set of data, with potentially serious consequences when research is translated into clinical practice.

As a profession, we should open up a dialogue with policymakers in the fields of ethics and R&D to ensure that research can take place in an efficient manner and that funders (including the research councils and by extension the taxpayer) are getting value for money.

Top of the pops?

The Institute of Psychiatry in Camberwell, south London, is the largest academic community in Europe devoted to the study of mental illness and brain diseases, and as a whole requires upwards of 5000 participants per year. In the past there have been attempts to maintain a 'subject pool' of research volunteers using index cards or a paper-based folder.

However, as the institute grew larger, such a system became unmanageable, and the subject pool was only updated sporadically. In the absence of a centralised system, our colleagues invested a lot of their time and effort in overcoming the challenges of participant recruitment in order to publish high-quality research.

Since last year, a small team at the Institute of Psychiatry has been working on a dedicated recruitment system to alleviate these problems. Funded by a grant from the Psychiatry Research Trust, we have constructed a computerised database called 'MindSearch'. The database itself is fairly simple, but it provides structure to the recruitment of participants.

Previously, every researcher at the institute would have to spend their own time and resources independently advertising for control participants. If a potential volunteer didn't match their inclusion/exclusion criteria, they were simply rejected and the contact was lost. Now these potential volunteers can be directed to one point of contact through multiple routes: telephone voicemail, e-mail, a postal address and a website.

Within our local community, we are now free to develop MindSearch as a brand, using more engaging marketing, and perhaps targeting different populations. For example, we might aim to target ethnic minorities with different advertising campaigns, as has been done by the National Blood Service in their efforts to get more donors from black and minority ethnic communities. Since we have no inclusion/exclusion criteria, we are effectively open to any potential volunteer over the age of 18. Therefore, more expansive and costly strategies such as mail shots, radio, leafleting, the internet and adverts in national newspapers can be more cost-effective for us than for smaller projects.

Although a truly representative sample might never be achieved, we believe that the volunteers registered on MindSearch should suffer from a lesser degree of selection bias than those obtained using existing techniques.

Having consulted extensively with our colleagues during the planning phases of MindSearch, we have been able to introduce several novel features. For example, if we know that participant John Smith has already taken part in a study involving a particular test, and that Dr Susan Jones wants to recruit participants naive to that test, the database can automatically exclude participants like John Smith.
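
MindSearch's internal workings are not described here in detail, so the following is only a sketch of that exclusion logic under assumed data structures: given a record of which tests each volunteer has already completed, return only those still naive to a particular test.

    from dataclasses import dataclass, field

    @dataclass
    class Volunteer:
        name: str
        completed_tests: set = field(default_factory=set)

    def naive_to(volunteers, test_name):
        """Volunteers who have never taken part in a study using test_name."""
        return [v for v in volunteers if test_name not in v.completed_tests]

    pool = [Volunteer('John Smith', {'Wisconsin Card Sorting Test'}),
            Volunteer('Jane Doe')]
    print([v.name for v in naive_to(pool, 'Wisconsin Card Sorting Test')])  # ['Jane Doe']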

Another area of future development involves the recording of specific test scores on the database itself. The consent forms provided to participants currently include a clause allowing us to store data from individual tests which could be useful in selecting closely matched control groups on the basis of IQ or even personality traits. As our pool of volunteers gets bigger, we will also be able to choose a random sub-sample that is passed on to our researchers to avoid creating an over-researched population. On the advice of our colleagues who carry out longitudinal studies, we have also included a feature which allows us to reserve any given participant for a set period of time.
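
The sub-sampling and reservation features might look something like the sketch below; the data structures and the 90-day reservation period are assumptions made for illustration, not MindSearch's actual implementation.

    import random
    from datetime import date, timedelta

    reservations = {}  # volunteer name -> date the reservation expires

    def draw_sample(eligible, n, reserve_days=90):
        """Randomly pick up to n volunteers who are not currently reserved, then reserve them."""
        today = date.today()
        available = [v for v in eligible if reservations.get(v, today) <= today]
        chosen = random.sample(available, min(n, len(available)))
        for v in chosen:
            reservations[v] = today + timedelta(days=reserve_days)
        return chosen

    print(draw_sample(['John Smith', 'Jane Doe', 'Ada Lovelace'], n=2))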

Currently, MindSearch is in the early stages of its development and is only starting to come online at the IOP. In its first few months, more than 200 potential volunteers registered through the website at www.mindsearch.co.uk. Because we ask participants where they heard about us, we hope by the end of the year to be able to compare the cost-effectiveness of the different advertising strategies we have used.

We also hope to carry out a randomised controlled trial within one of our departments or another institution to test the hypothesis that using a centralised control database can save a significant amount of resources in terms of research time and money. Once the economic advantages have been established, we might then be able to persuade other research institutions to consider adopting a participant recruitment strategy like MindSearch.

Paul Wicks is at the Institute of Psychiatry

Drugs cause withdrawal?

In March of 2006, eight men made headline news after a trial they had volunteered for caused a severe immune response in at least four of the volunteers. The Phase I clinical trial of the monoclonal antibody TGN1412 caused symptoms including severe headache, fever, burning pain, and nausea. Despite this, the number of volunteers for clinical trials increased significantly during this period. Damien Gough, director of www.entertrials.co.uk, was quoted in The Guardian as saying that web traffic to the site had increased three-fold since the news broke.

Damien suggests that a core problem in volunteer recruitment is a general lack of public awareness about the benefits of research and about how or where one can register as a volunteer. That said, this event will have introduced fears into the minds of some potential volunteers, and when advertising psychology studies you should consider emphasising that your research is non-invasive and does not involve drugs.

Discuss and debate

  • Is it ethical to publish findings from a study with a poorly matched control group?
  • How much time do you waste each month trying to track down participants for your study?
  • Does your department have a centralised volunteer pool? Is it truly representative of the population you need? How could you improve it?
  • Why do people agree to participate in research? How can we encourage more people to get involved without being coercive?

References

  • Evans, A.H., Lawrence, A.D., Potts, J. et al. (2006). Relationship between impulsive sensation-seeking traits, smoking, alcohol and caffeine intake, and Parkinson's disease. Journal of Neurology, Neurosurgery & Psychiatry, 77, 317–321.
  • Goldman, S.M., Tanner, C.M., Olanow, C.W. et al. (2005). Occupation and parkinsonism in three movement disorders clinics. Neurology, 65, 1430–1435.
  • Junghans, C., Feder, G., Hemingway, H., Timmis, A. & Jones, M. (2005). Recruiting patients to medical research: Double blind randomised trial of 'opt-in' versus 'opt-out' strategies. British Medical Journal, 331, 940.
  • Kazdin, A.E. (2003). Research design in clinical psychology (4th edn). Needham Heights, MA: Allyn & Bacon.
  • Moore, D.S. (1995). The basic practice of statistics. New York: W.H. Freeman.
  • Ringholz, G.M., Appel, S.H., Bradshaw, M. et al. (2005). Prevalence and patterns of cognitive impairment in sporadic ALS. Neurology, 65, 586–590.
  • Tindell, D. & Bohlander, R. (2005). Participants' naivete and confidentiality in psychological research. Psychological Reports, 96, 963–969.
  • Ward, H.J.T., Cousens, S.N., Smith-Bathgate, B. et al. (2004). Obstacles to conducting epidemiological research in the UK general population. British Medical Journal, 329, 277–279.