Abstract
Pilot testing is crucial when preparing any community-based vaccination coverage survey. In this paper, we use the term pilot test to mean informative work conducted before a survey protocol has been finalized for the purpose of guiding decisions about how the work will be conducted. We summarize findings from seven pilot tests and provide practical guidance for piloting similar studies. We selected these particular pilots because they are excellent models of preliminary efforts that informed the refinement of data collection protocols and instruments. We recommend survey coordinators devote time and budget to identifying aspects of the protocol where testing could mitigate project risk and ensure a timely assessment yields credible estimates of vaccination coverage and related indicators. We list specific items that may benefit from pilot work and provide guidance on how to prioritize what to pilot test when resources are limited.
1. Introduction
Community-based vaccination coverage assessments are a unique type of household data collection involving the coordination of numerous logistical, administrative, and data management factors [1]. Assessments of this kind are often surveys that employ a rigorous sampling protocol to estimate coverage for a subnational or national population. In other situations, a targeted set of households in a more limited geographical area is interviewed, and the resulting data are used to inform local or regional vaccination program management. In this paper, we employ the word survey informally to mean either sort of project. Vaccination coverage interviews involve recording subjects’ (usually children’s) personal vaccination history from home-based records (HBRs), from caregiver recollection using probing [2], and sometimes from health facility-based records (FBRs).
Pilot testing can help survey coordinators strike a favorable balance among pressures on the project budget, timeline, and quality of implementation, analysis, and reporting [2]. Survey practitioners use the term pilot test in a variety of ways—ranging from early small-scale tests of a specific data collection feature to a full test of the entire study protocol [3,4]. In this paper, we use the term pilot test to mean informative work carried out before the survey protocol is finalized for the purpose of guiding decisions about how the work will be conducted. Pilot testing has the potential to benefit all stages of the work. We organize lessons learned and recommendations around four areas:
- Sampling
- Survey instruments
- Field procedures
- Data management, data quality, and data analysis
This paper provides practical guidance for informing decisions regarding each project stage. We first present examples of pilot findings we used to refine survey protocols and instrumentation for childhood vaccination coverage assessments. After describing the pilot test examples, we present general recommendations followed by a thorough list of survey elements that might benefit from pilot work. We then conclude with a framework for prioritizing what to pilot when resources are limited. Although there is likely benefit to piloting potential methods of disseminating results to stimulate action, that topic is beyond the scope of this paper.
2. Illustrative Examples of Vaccination Coverage Pilot Tests
This section describes illustrative vaccination coverage data collection pilot efforts and corresponding lessons learned. We selected these seven pilots because they are excellent models of preliminary efforts that informed the refinement of data collection protocols and instruments. All pilot procedures described in this paper are in line with the World Health Organization’s (WHO) current guidance on vaccination coverage surveys [2,5]. An ethical review for each survey was carried out in accordance with the requirements of each country. Table 1 lists the specific activities, components explored during pilot testing, and findings that informed future work.
Table 1. Illustrative pilot studies.
2.1. American Red Cross (AmCross) 5-Point Plan Rural Pilot
The 5-Point Plan (5PP) is a methodology developed by AmCross to identify pockets of zero-dose and under-vaccinated children and understand the reasons they are under-served [6]. In November 2019, AmCross, the Kenya Red Cross Society (KRCS), the Kenyan Expanded Program on Immunization (EPI) team, and the Ministry of Health (MoH) piloted the 5PP in Bobasi, a rural sub-county in Western Kenya, with the intention of scaling up the 5PP program in Kenya and elsewhere. A total of 293 Red Cross volunteers working in 29 teams aimed to conduct face-to-face interviews with every household in the sub-county. The teams visited over 60,000 households in a week-long period. The pilot identified several pockets of children who lacked cards or HBRs or were missing one or more age-appropriate vaccinations. To learn why these children were un- or under-vaccinated, a small team conducted focus group discussions and one-on-one interviews with the children’s caregivers, complemented by interviews of frontline healthcare workers who provide vaccination services in Bobasi.
During the 5PP rural pilot, the project team evaluated several logistical and administrative procedures and project strategies, including adequacy of community awareness, project team training, the household questionnaire, and data management and storage procedures (Table 1). Following the pilot test, written and verbal feedback was obtained from supervisors, independent monitors, AmCross staff and consultants, KRCS, and the MoH. Project leaders documented lessons learned, including a need to standardize volunteer training materials and pedagogy; translate the Open Data Kit (ODK) household questionnaire into the local language; refine, add, and remove selected questions; upgrade equipment and procedures to avoid data collection errors and equipment malfunctions (e.g., phone crashes); address deficits in data quality; and develop a standardized process to revisit a select subset of the survey households (Table 1).
2.2. AmCross 5-Point Plan Urban Test
In November 2021, 5PP project staff piloted a protocol for dense urban settings with informal settlements and high-rise buildings in the Pipeline neighborhood of Nairobi, Kenya. Over three days, two teams of Red Cross data collection volunteers visited 2791 households. Items of interest included data quality, volunteer training, revisits, and challenges of navigating in a densely populated urban context. One of the priorities that stemmed from the earlier Bobasi pilot with respect to data quality was to develop the capacity to validate the accuracy of the date of birth and vaccination status information recorded by interviewers. In this later urban exercise, interviewers took digital photos of each available HBR. Project staff, who were different from the interviewers and knowledgeable about vaccination, later reviewed every photo and scored the accuracy of the data entered. Staff also assessed the feasibility of navigating and conducting interviews in poorly lit, high-rise buildings, including interviewers’ ability to take clear photographs of HBRs and record the data needed to facilitate revisiting households where no eligible respondent was at home during the initial visit. Standardized volunteer training was also tested during this pilot in response to one of the main findings from the earlier Bobasi pilot.
Project leaders convened a workshop of representatives from KRCS and the MoH, Red Cross volunteers, and AmCross staff in early November 2021 to collect lessons learned from the urban pilot. Participants first completed an anonymous assessment of their opinions related to their experience. Attendees next split into small discussion groups and responded to three questions: What went well? What did not go well? What could be done differently in the future? Workshop leaders summarized this feedback thematically in a final report [7]. Findings from the workshop that were used to improve future implementation efforts included identifying the need to amend the ODK form to tag non-residential locations (e.g., stores) or vacant units within multi-family dwellings (e.g., high-rise buildings); encouraging teams to rely more on landmarks and line list information for revisits; expanding the volunteer training to provide more information about revisits; and identifying a need for trusted community escorts to address safety and security concerns (Table 1).
2.3. Bangladesh Survey Pilot
In 2014, the team responsible for updating the WHO’s Vaccination Coverage Cluster Surveys Reference Manual [2] conducted pilot work in Bangladesh to shed light on recommendations being considered for the manual. The team worked in the district of Bogra, northwest of Dhaka on the west bank of the Jamuna River (Brahmaputra River), in five rural clusters in the Upazila of Sariakandi and five semi-urban clusters in the town of Bogra. The work examined two cohorts, children aged 0–11 months and 12–23 months, and used an adapted version of the latest EPI national survey questionnaire [8]. The pilot study explored a variety of dimensions, including household selection and sampling methods; concordance of HBR and caregiver recall data; feasibility of visiting health facilities to collect vaccination data; use of electronic data forms to capture vaccination evidence; and operational measures, including study-related time, cost, and logistics (Table 1). Findings from the pilot used to refine subsequent study protocols and instruments included identifying a need for additional supervision for field teams, a determination that FBR data did not provide much additional vaccination evidence beyond HBRs or caregiver recall in this setting, and a need for a long lead time when requesting copies of official maps from government offices (Table 1).
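As a concrete illustration of the concordance analyses such pilots feed, the sketch below computes crude agreement and Cohen's kappa between HBR-documented and recall-reported status for a single dose; the data structure and values are hypothetical and not taken from the Bangladesh study.

```python
# Minimal sketch: quantifying agreement between HBR evidence and caregiver
# recall for one dose. Field names and data are hypothetical.

def concordance(records):
    """Return percent agreement and Cohen's kappa for two binary sources."""
    n = len(records)
    agree = sum(1 for r in records if r["hbr"] == r["recall"])
    p_obs = agree / n
    # Expected agreement under chance, from the marginal proportions.
    p_hbr = sum(r["hbr"] for r in records) / n
    p_rec = sum(r["recall"] for r in records) / n
    p_exp = p_hbr * p_rec + (1 - p_hbr) * (1 - p_rec)
    kappa = (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else 1.0
    return p_obs, kappa

# Toy data: 1 = dose documented/reported, 0 = not.
children = [
    {"hbr": 1, "recall": 1},
    {"hbr": 1, "recall": 0},
    {"hbr": 0, "recall": 0},
    {"hbr": 1, "recall": 1},
]
p_obs, kappa = concordance(children)
print(f"Observed agreement: {p_obs:.2f}, Cohen's kappa: {kappa:.2f}")
```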
2.4. Gambia Operational Study
In September 2022, a team affiliated with the Gambia MoH and Gambia Bureau of Statistics conducted an operational study to test field procedures in preparation for an upcoming routine vaccination coverage survey. Study staff completed a three-day training covering the project’s purpose, tools, maps, and field testing, followed by 10 days of data collection and a 14-day transcription period. The focus of the data collection was evidence of routine vaccination (using HBR and recall) among children aged 12–35 months in 19 urban and 13 rural clusters. The pilot study assessed the agreement between HBRs and caregiver recall for 20 vaccines, coverage differences between HBR and recall, differences in transcription time for data entered in the field versus in an office setting, and barriers and enablers to transcription. Lessons learned from the pilot included the need for good planning, thorough testing and re-testing of electronic data collection tools, and the importance of high-quality training and supervision. On average, transcription of information from photographed HBRs conducted in an office setting took roughly half as much time as transcription conducted from the original HBRs in the field (Table 1).
2.5. Mali Survey Pilot
In preparation for the second phase of a routine vaccination coverage survey in Mali, a team from Appui au Développement Sanitaire (ADS) Côte d’Ivoire and l’Institut National de la Statistique (INSTAT) Mali conducted a pilot study to address issues known to be challenges in previous vaccination coverage surveys. The first phase, which followed the pilot described here, took place from June to September 2022 and covered six districts of Bamako (the capital) and three districts of the Koulikoro Region. This first phase surveyed 9570 children aged 12 to 35 months.
The objectives of the pilot included cataloging all types of vaccination documentation in circulation to ensure staff training materials included photographs of each type of documentation and instructions tailored to each. As part of the pilot, the team tested a two-level process for cross-checking data entry accuracy and a training protocol designed to give field staff experience interpreting challenging vaccination cards. The pilot also explored the correspondence between vaccination status assessed via caregiver recall and via health FBRs. During the pilot phase, the team encountered up to 20 different types of vaccination documentation—a stark contrast to the single official vaccination card or HBR—much of it outdated. Staff responsible for cross-checking data entry accuracy noted errors in the transcription of vaccination information from notebooks to the study database. Lessons learned from the Mali pilot that investigators will use to inform future vaccination coverage surveys include the following: catalog and train on all variations of HBRs that interviewers are likely to encounter; train interviewers to look for improvised entries when older cards do not list newer vaccine doses; double- and triple-check the date of birth and dates of vaccination; and use colloquial names for vaccines.
2.6. Measles and Rubella Vaccination Campaign Evaluation Pilot Study, Burundi
In 2022, the Institute of Statistics and Economic Studies of Burundi (ISTEEBU), in collaboration with the Ministry of Public Health and the Fight against AIDS and its technical and financial partners, evaluated an approach for conducting a post-measles and rubella vaccination campaign coverage survey. The primary objective of the pilot was to assess the feasibility of visiting health facilities to obtain vaccination records or FBRs for children whose caregivers said they were vaccinated but were lacking an HBR.
Study staff participated in three days of training and a four-day data collection period. One phase of the pilot involved testing the methods and data collection tools planned for use in the main survey. Five field teams collected data in households and health facilities in ten enumeration areas of five provinces (Bujumbura Mairie, Cibitoke, Kirundo, Makamba and Ruyigi). Teams subsequently participated in a plenary session in which they provided a status report on the data collection for their assigned area, the difficulties they encountered, and corresponding solutions. Study leaders used information from the plenary session to make improvements to the questionnaires and computer applications that will be used in the upcoming main survey. Helpful lessons included increasing the number of staff available for listing households in large clusters. For children without cards whose records were found in the health facility, pilot findings indicate the value of transcribing the FBR dose dates onto a card, photographing the card, and presenting the card to the caregiver when feasible (Table 1).
2.7. National Stop Transmission of Polio (NSTOP) Pilot, Nigeria
The NSTOP program was established in 2012 to accelerate polio eradication efforts in Nigeria. The program places staff at national, state, and local government area (LGA; equivalent to district) levels to strengthen routine vaccination service delivery with the aim of eradicating polio [9]. In preparation for conducting routine vaccination coverage data collection in 40 polio high-risk LGAs across eight states in Northern Nigeria, the field team conducted household visits to pilot test the survey instrument and data collection procedures, including locating assigned clusters using global positioning system (GPS) navigation. Leaders from NSTOP and the U.S. Centers for Disease Control and Prevention first trained senior and team supervisors during a three-day period in Abuja and conducted a pre-pilot (field testing) of the data tools in an Abuja suburb. Following initial minor modifications to the field guides and data collection instruments, the master trainers then conducted a two-day interviewer training in one of the eight states (Kaduna State) as part of the survey pilot. During the pilot phase, a surplus of interviewers was recruited and trained. Only interviewers who performed well during the training and post-training evaluation were retained for the primary data collection effort. The pilot test took place in two Kaduna LGAs with varied characteristics (urban/rural, geographic/population size, administrative vaccination coverage estimates, etc.). Lessons learned from the pilot used to improve the data collection protocol included strengthening the supervisory structure of data collection teams; reducing the number of clusters per day each interview team was expected to cover; leveraging GPS navigation to help survey teams locate assigned clusters; and deploying a remote monitoring system that used GPS information and satellite imagery to confirm household structures remotely because security concerns prevented command center staff from traveling outside Abuja (Table 1) [9].
3. Recommendations
This section begins with some general recommendations, followed by a thorough list of survey elements that could benefit from pilot work, and concludes with considerations for how to prioritize which elements to pilot when time and resources are limited.
3.1. General Recommendations
The following list of general recommendations was informed by all seven of the illustrative case studies featured in Section 2. These recommendations are applicable to any pilot or full-scale household survey project.
Enlist assistance from analysts with local vaccination expertise. For vaccination coverage surveys and vaccination modules in multi-indicator surveys, we recommend advanced analyses be conducted by specialists in the field of vaccination to ensure analyses account for elements such as rapidly evolving vaccination schedules; variability in vaccination cards or HBRs and idiosyncrasies in how they are completed (e.g., writing in pencil the next scheduled visit date versus writing in pen the dates of vaccination doses); and common terminology used in a country to refer to vaccines and vaccine-preventable diseases [10].
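As one small illustration of the terminology issue, the sketch below normalizes colloquial or legacy vaccine names to schedule antigens before analysis; the alias table is invented for illustration, and a real one would be compiled with local vaccination experts.

```python
# Hedged sketch: mapping colloquial or legacy vaccine names onto current
# schedule antigens before analysis. The alias table is hypothetical.
ALIASES = {
    "dpt": "penta",      # legacy antigen name folded into pentavalent
    "measles": "mcv",    # colloquial name for measles-containing vaccine
}

def normalize_dose_name(raw_name, dose_number):
    """Return a standardized antigen-dose label, e.g., 'penta1'."""
    antigen = ALIASES.get(raw_name.strip().lower(), raw_name.strip().lower())
    return f"{antigen}{dose_number}"

print(normalize_dose_name("DPT", 1))      # penta1
print(normalize_dose_name("Measles", 2))  # mcv2
```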
Solicit feedback on earlier data collection efforts from vaccination stakeholders. Teams should meet with key stakeholders to ask about the credibility of similar surveys conducted in the same country: Were the goals achieved? Were the results believable? Controversial? Why? Did the survey influence important decisions and how is that influence regarded now with the benefit of hindsight?
Learn from survey teams with recent in-country experience. Teams will benefit from meeting with others who faced similar challenges or used the same resources they will use (e.g., same sampling frame, primary sampling unit [PSU] maps, data collection hardware and software, logistics and data management teams). Helpful insight may be gleaned even if the topic of the earlier team’s work was not vaccination coverage. Ask: What went well? What went poorly? What do they wish they had known at the start? If appropriate, consider asking teams with prior experience to provide constructive feedback on the new project’s plans.
Identify key decisions that need insight from pilot work. Survey teams will want to synthesize feedback on earlier work with a short list of their project’s goals and make a list of possible pilot study topics and questions such as the following: Which earlier weaknesses and pitfalls do you want to avoid? What protocol elements could realistically be added to improve the credibility and utility of the study’s outcomes? What could be removed or simplified without incurring risk or compromising quality? Which of the proposed tools are unfamiliar and untested?
Scope the pilot testing effort appropriately. To be as informative as possible, a pilot might follow the 5PP model and test all procedures before the full survey begins [11]. Piloting in as many different settings as possible (especially dense urban versus sparse rural settings) can be particularly valuable: different settings present varying challenges that may require the field teams to depart from the survey protocol.
Start early. Ideally, the project team should conduct pilot testing far enough in advance of the primary data collection effort to refine survey instruments and protocol elements, provide retraining [2], and obtain (and train on) better equipment if needed. When the pilot is a small token effort or an afterthought, there is little opportunity for the team to adjust the survey design or tools.
Engage community leaders. If the pilot includes substantial fieldwork, it will be helpful to engage community leaders early in the survey planning process. This is important for security and for encouraging community members to be willing participants. It is also critical to provide leaders with sufficient opportunity to ask questions.
Allocate sufficient time and money to pilot testing. Piloting key aspects of the protocol is an investment in the credibility of the survey results and the project organizers; thus, it often warrants a notable place in the project’s schedule and budget.
Document pilot findings. Vaccination coverage data collection teams should document findings in final survey reports and strongly consider publishing findings based on their pilot testing experience to help other teams who plan similar work.
3.2. Survey Elements to Pilot Test
In contrast to the general recommendations, which are largely applicable across projects, determining which specific survey elements would benefit most from pilot testing is a more nuanced process. The tables in this next section list elements for teams to consider pilot testing, organized by project stage:
- Table 2 presents options for pilot testing sampling procedures, which can be critical for ensuring that the survey sample is representative.
Table 2. Elements to pilot test: sampling procedures.
- Table 3 lists options for pilot testing survey instruments, which will increase the likelihood that respondents will interpret the questions as intended and that interviewers will capture responses correctly.
Table 3. Elements to pilot test: survey instruments.
- Table 4 includes a list of field procedures for pilot testing, which can maximize the likelihood that data collection processes will be carried out efficiently.
Table 4. Elements to pilot test: field procedures.
- Table 5 provides data management, data quality, and data analysis components to consider for pilot testing so data will be captured, stored, and interpreted accurately.
Table 5. Elements to pilot test: data management, data quality, and data analysis.
3.3. How to Prioritize Survey Elements to Pilot Test When Resources Are Limited
Every survey occurs in a very specific context and is carried out by a team that has unique strengths and weaknesses. Our seven examples represent a variety of real-world situations but do not, of course, span all possible contexts. Each survey steering committee will need to assess its team’s capabilities, survey-related goals, and resources to discern which items from Table 2, Table 3, Table 4 and Table 5 are most relevant and warrant priority consideration for pilot study investigation. When it is not practical to pilot every process in the protocol, we suggest that high-priority candidates include (a) design choices that could substantially increase data quality but also increase the budget or lengthen the survey implementation timeline, (b) protocol components that are untested or new to the project team, and (c) hardware and protocols for electronic data collection.
Two high-cost (and possibly high-reward) field procedure decisions are whether to collect data from FBRs and how to collect and curate photographs of HBRs. In countries with low HBR availability and facilities that keep vaccination records well organized, visiting health facilities and matching FBRs with records from household interviews may substantially boost stakeholders’ confidence in coverage outcomes and the base of data for timeliness outcomes. To be carried out successfully, this process requires considerable organizational liaising and attention to technical detail. It also requires understanding where to look for records of children who received vaccination services at multiple facilities. Research teams that plan to add this component to their survey design should spend adequate time during the pilot visiting health facilities and learning how to optimize the yield of good data. In some contexts, FBRs are organized by workday rather than by recipient child, making it nearly impossible to reconstruct an individual child’s longitudinal vaccination history and match it with records from household interviews. In that situation, the FBR component should be dropped from the data collection protocol.
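To make the matching challenge concrete, the sketch below pairs a household interview record with candidate FBR entries by child name and approximate date of birth; the field names and tolerance are invented for illustration, and a real protocol would also handle name variants and children vaccinated at multiple facilities.

```python
# Illustrative sketch (not a prescribed protocol): matching a household
# interview record to facility-based record (FBR) entries. All fields
# and the 30-day tolerance are hypothetical.
from datetime import date

def match_fbr(household_child, fbr_entries, dob_tolerance_days=30):
    """Return FBR entries that plausibly belong to this child."""
    matches = []
    for entry in fbr_entries:
        same_name = (entry["name"].strip().lower()
                     == household_child["name"].strip().lower())
        dob_gap = abs((entry["dob"] - household_child["dob"]).days)
        if same_name and dob_gap <= dob_tolerance_days:
            matches.append(entry)
    return matches

child = {"name": "Amina O.", "dob": date(2021, 4, 12)}
fbrs = [
    {"name": "Amina O.", "dob": date(2021, 4, 15), "dose": "mcv1"},
    {"name": "Joseph K.", "dob": date(2021, 4, 12), "dose": "mcv1"},
]
print(match_fbr(child, fbrs))  # only the first entry matches
```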
Well-curated, clear photographs of HBRs can improve data quality without the need to revisit households to correct mistakes. However, including photographs in the protocol requires data management workflows and a sufficient budget to have humans review those photos with careful attention to detail. Photos will require extra storage capacity on data collection devices, extra bandwidth for uploading to the server, procedures for matching multiple photographs to an individual child, and extra procedures to compare the dates entered in households with what is seen on the images. One novel idea, illustrated in the Gambia example described above, is to have interviewers collect data from caregiver recall and capture excellent HBR photos. In lieu of asking field-based staff to enter HBR dates into touchscreen devices while visiting respondents’ homes, office-based staff subsequently examine the HBR photos in an office setting—possibly at a multi-screen workstation—and enter dates there using either a touchscreen device or a keyboard. This approach can reduce survey costs, save time in the field, and mitigate data entry errors from touchscreens, but it requires careful attention to obtaining high-quality photos while in the respondent’s home [23].
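A minimal sketch of that comparison step follows, assuming field-entered and photo-transcribed dose dates are stored as simple mappings; the field names are hypothetical.

```python
# Minimal sketch: flag children whose field-entered dose dates disagree
# with dates transcribed from HBR photos. Structure is an assumption.

def flag_discrepancies(field_entry, photo_entry):
    """Return (dose, field date, photo date) tuples that need review."""
    flags = []
    for dose, field_date in field_entry.items():
        photo_date = photo_entry.get(dose)  # None if absent from photo set
        if photo_date != field_date:
            flags.append((dose, field_date, photo_date))
    return flags

field = {"bcg": "2022-01-05", "penta1": "2022-02-16"}
photo = {"bcg": "2022-01-05", "penta1": "2022-02-18"}
for dose, f, p in flag_discrepancies(field, photo):
    print(f"Review {dose}: field={f}, photo={p}")
```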
If one of the data collection objectives is to characterize vaccination timeliness or the prevalence of missed opportunities for simultaneous vaccination, the research team needs a plan to minimize date data entry errors and to check dates that seem illogical or impossible. These objectives depend on accurately recording each child’s date of birth and the date of each vaccine dose. If the protocol includes using a calendar of local events to narrow down the interval that includes the child’s birthdate, data collectors should practice this in a pilot to learn whether caregivers understand it. Teams should validate the responses, when possible, by comparing dates of birth from caregiver recollections with those on HBRs, FBRs, or other official documents.
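The sketch below illustrates the kind of automated plausibility checks such a plan might include; the rules and field names are illustrative assumptions rather than a prescribed standard.

```python
# Sketch of automated date checks: a dose dated before birth or after the
# interview is impossible and should be flagged for follow-up.
from datetime import date

def date_flags(dob, interview_date, dose_dates):
    """Return human-readable flags for illogical or impossible dates."""
    flags = []
    if dob > interview_date:
        flags.append("date of birth is after the interview date")
    for dose, d in dose_dates.items():
        if d < dob:
            flags.append(f"{dose} given before date of birth")
        if d > interview_date:
            flags.append(f"{dose} dated after the interview")
    return flags

flags = date_flags(
    dob=date(2022, 3, 10),
    interview_date=date(2023, 6, 1),
    dose_dates={"bcg": date(2022, 3, 1), "mcv1": date(2023, 1, 15)},
)
print(flags)  # ['bcg given before date of birth']
```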
Any tools or elements of the proposed data collection workflow that are new to the team should be prioritized for testing. Complex data collection protocols and measures that are central to survey goals will warrant thorough testing. Research teams will benefit from contacting other teams who have used the same tools and should try the tools in realistic settings early enough to adapt them or drop them in favor of alternatives if they do not work.
Electronic devices for data capture have become the default standard in recent years, even in low-income countries. Although tremendous progress has been made in simplifying many aspects of using touchscreen phones and tablets, a recurring theme in our conversations was problems with hardware, software, and connectivity, together with a post hoc sense that those problems could have been mitigated or eliminated with additional rounds of testing before going to the field. Our collective advice is to hire key project staff who are hardware and software savvy; test all the processes that involve the devices in realistic settings; and re-test after each change in the software and each change in the electronic questionnaire.
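One lightweight way to act on this advice, assuming key form logic such as the eligibility rule is mirrored in a testable helper function, is a small regression test re-run after every software or questionnaire change; the 12–35-month window below is borrowed from the Gambia and Mali examples purely for illustration.

```python
# Hedged sketch: a regression test for questionnaire eligibility logic,
# re-run after every change to the form or software.

def is_eligible(age_in_months):
    """Example eligibility rule: children aged 12-35 months."""
    return 12 <= age_in_months <= 35

def test_eligibility_rule():
    assert not is_eligible(11)   # just below the window
    assert is_eligible(12)       # lower boundary included
    assert is_eligible(35)       # upper boundary included
    assert not is_eligible(36)   # just above the window

test_eligibility_rule()
print("eligibility rule checks passed")
```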
Finally, if there are team members who are new to vaccination coverage surveys or new in their roles, then extra investment in training and testing is warranted. Some data collectors may need to be dropped or assigned other duties if they do not show promise. If training is brief with little time in the field, or if training is diluted through a cascade or train-the-trainer model, experience suggests that data collected for the first week or more may be of very poor quality because staff are still learning their roles. We consider practical training to be a separate issue from piloting, but it is important enough to mention here.
3.4. Limitations and Additional Considerations
Key considerations for implementation of any pilot test are the time and monetary costs of the pilot relative to the knowledge likely to be gained. Unfortunately, we are not able to comment on this issue because none of our featured case studies explored time or cost factors systematically.
4. Conclusions
Pilot testing aspects of a vaccination coverage survey protocol is a prudent investment in the success, credibility, and eventual influence of the survey [4,51,52,53,54,55]. Lu Ann Aday put it well:
No survey should ever go into the field without a trial run of the questionnaire and data collection procedures to be used in the final study. One can be sure that something will go wrong if there is not adequate testing of the procedures in advance of doing the survey. Even when such testing is done, situations can arise that were not anticipated in the original design of the study. The point with testing the procedures in advance is to anticipate and eliminate as many of these problems as possible and, above all, to avert major disasters in the field once the study is launched [15].
We echo these sentiments and add that piloting is helpful for deciding which high-cost protocol elements are likely to add valuable insights to the survey. We recommend that stakeholders devote time and budget in the early stages of planning to use pilot work to inform decisions about the survey protocol. Further, project leaders should foster a team-wide expectation that the project plans will likely be revised because of pilot work and that some things may need to be piloted more than once. Ideally, project leaders should share their experiences and insights to benefit downstream teams who may carry out similar work.
Author Contributions
The authors confirm contribution to the paper as follows: conception and design: D.A.R., F.T.C., M.C.D.-H. and M.G.-D.; data collection: C.B.C., J.B. and M.K.T.; manuscript preparation: D.A.R., J.B. and C.B.C.; case studies: M.A., K.C., M.C.D.-H., D.K., J.C.M., A.S., R.G. and I.U.O. All authors contributed to critical review and revisions of the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
D.A.R., F.T.C., C.B.C., J.B. and M.K.T. were funded by consultancies with the Bill & Melinda Gates Foundation. D.A.R., M.K.T. and C.B.C. were also funded by a consultancy with the American Red Cross. D.K. was funded by consultancies with WHO and Gavi. The other authors were engaged in this work for their respective employers. R.G. and I.U.O. were employed by the U.S. Centers for Disease Control and Prevention.
Institutional Review Board Statement
An ethical review for each pilot study was carried out in accordance with the requirements from each country.
Informed Consent Statement
Informed consent was obtained from all subjects involved in each pilot study.
Acknowledgments
The authors are very grateful to the staff who carried out the surveys described here. The Bangladesh trial was conducted under the supervision of Bappi Majumder and Maqbul Bhuiyan at the firm Data Management Aid. Amina Ismail, James Ooko, Phanice Omondi, and James Noe assisted in the implementation of the American Red Cross pilots of the 5-Point Plan, working in close partnership with Kenneth Kamande of the Kenyan Red Cross Society.
Conflicts of Interest
The authors declare no conflict of interest. Biostat Global Consulting is a for-profit statistical consultancy firm. The funders of this work did not unduly influence the recommendations in this manuscript. The authors alone are responsible for the views expressed in this article, which do not necessarily represent the views, decisions, or policies of the institutions with which they are affiliated.
Dedication
We dedicate this paper to our late friend, Pierre Claquin, who passed away in 2021. Pierre designed and managed the Bangladesh pilot survey described here and was a coauthor of the revised World Health Organization cluster survey reference manual that the pilot informed. He was a gentleman and scholar, a compassionate physician, a proud smallpox warrior, and a talented photographer who appreciated people, fine food, wine, and conversation. We are grateful to have known him and labored alongside him.
References
- Cutts, F.T.; Izurieta, H.S.; Rhoda, D.A. Measuring Coverage in MNCH: Design, Implementation, and Interpretation Challenges Associated with Tracking Vaccination Coverage Using Household Surveys. PLoS Med. 2013, 10, e1001404.
- World Health Organization. Vaccination Coverage Cluster Surveys: Reference Manual; World Health Organization: Geneva, Switzerland, 2018; Available online: https://apps.who.int/iris/handle/10665/272820 (accessed on 18 November 2023).
- Biemer, P.P.; Lyberg, L. Introduction to Survey Quality; Wiley Series in Survey Methodology; Wiley: Hoboken, NJ, USA, 2003; 402p.
- Thabane, L.; Ma, J.; Chu, R.; Cheng, J.; Ismaila, A.; Rios, L.P.; Thabane, M.; Giangregorio, L.; Goldsmith, C.H. A tutorial on pilot studies: The what, why and how. BMC Med. Res. Methodol. 2010, 10, 1.
- World Health Organization. Harmonizing Vaccination Coverage Measures in Household Surveys: A Primer; WHO: Geneva, Switzerland, 2019; Available online: https://cdn.who.int/media/docs/default-source/immunization/immunization-coverage/surveys_white_paper_immunization_2019.pdf?sfvrsn=7e0fb0ae_9 (accessed on 22 February 2023).
- Agócs, M.; Ismail, A.; Kamande, K.; Tabu, C.; Momanyi, C.; Sale, G.; Rhoda, D.A.; Khamati, S.; Mutonga, K.; Mitto, B.; et al. Reasons why children miss vaccinations in Western Kenya; A step in a five-point plan to improve routine immunization. Vaccine 2021, 39, 4895–4902.
- American Red Cross. 5PP Nairobi Field Test—Dense Urban Setting with Highrise Buildings (Pipeline) Lessons Learned Workshop; Final Report; 2021; unpublished.
- World Health Organization. Immunization Coverage Cluster Survey: Reference Manual; World Health Organization: Geneva, Switzerland, 2005; Available online: https://apps.who.int/iris/handle/10665/69087 (accessed on 18 November 2023).
- Gunnala, R.; Ogbuanu, I.U.; Adegoke, O.J.; Scobie, H.M.; Uba, B.V.; Wannemuehler, K.A.; Ruiz, A.; Elmousaad, H.; Ohuabunwo, C.J.; Mustafa, M.; et al. Routine Vaccination Coverage in Northern Nigeria: Results from 40 District-Level Cluster Surveys, 2014–2015. PLoS ONE 2016, 11, e0167835.
- Rhoda, D.A.; Wagai, J.N.; Beshanski-Pedersen, B.R.; Yusafari, Y.; Sequeira, J.; Hayford, K.; Brown, D.W.; Danovaro-Holliday, M.C.; Braka, F.; Ali, D.; et al. Combining cluster surveys to estimate vaccination coverage: Experiences from Nigeria’s Multiple Indicator Cluster Survey/National Immunization Coverage Survey (MICS/NICS), 2016–2017. Vaccine 2020, 38, 6174–6183.
- Smith, P.G.; Morrow, R.H.; Ross, D.A. Field Trials of Health Interventions: A Toolbox, 3rd ed.; International Epidemiological Association, Wellcome Trust (London, England), Eds.; Oxford University Press: Oxford, UK, 2015; 444p.
- Moser, C.; Kalton, G. Survey Methods in Social Investigation; Routledge: London, UK, 2017; Available online: https://nls.ldls.org.uk/welcome.html?ark:/81055/vdc_100041339659.0x000001 (accessed on 1 March 2021).
- Thomson, D.R. Designing and Implementing Gridded Population Surveys; Rhoda, D.A., Ed.; Dana Thomson Consulting: Piermont, NY, USA, 2022; Available online: www.gridpopsurvey.com (accessed on 7 October 2022).
- Cutts, F.T.; Claquin, P.; Danovaro-Holliday, M.C.; Rhoda, D.A. Monitoring vaccination coverage: Defining the role of surveys. Vaccine 2016, 34, 4103–4109.
- Aday, L.A. Designing and Conducting Health Surveys: A Comprehensive Guide, 2nd ed.; Jossey-Bass Publishers: San Francisco, CA, USA, 1996; 535p.
- Koch, A. Within household selection of respondents. In Advances in Comparative Survey Methods: Multinational, Multiregional, and Multicultural Contexts (3MC); Wiley Series in Survey Methodology; Wiley: Hoboken, NJ, USA, 2018; p. 93.
- Gaziano, C. Comparative Analysis of Within-Household Respondent Selection Techniques. Public Opin. Q. 2005, 69, 124–157.
- Wagenaar, B.H.; Augusto, O.; Ásbjörnsdóttir, K.; Akullian, A.; Manaca, N.; Chale, F.; Muanido, A.; Covele, A.; Michel, C.; Gimbel, S.; et al. Developing a representative community health survey sampling frame using open-source remote satellite imagery in Mozambique. Int. J. Health Geogr. 2018, 17, 37.
- Lowther, S.A.; Curriero, F.C.; Shields, T.; Ahmed, S.; Monze, M.; Moss, W.J. Feasibility of satellite image-based sampling for a health survey among urban townships of Lusaka, Zambia. Trop. Med. Int. Health 2009, 14, 70–78.
- Wagner, J.; Olson, K.; Edgar, M. The Utility of GPS data in Assessing Interviewer Travel Behavior and Errors in Level-of-Effort Paradata. Surv. Res. Methods 2017, 11, 218–233.
- Edwards, B. Cross-cultural considerations in health surveys. In Handbook of Health Survey Methods; Wiley Handbooks in Survey Methodology; Johnson, T.P., Ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; pp. 243–274.
- Harkness, J.; Stange, M.; Cibelli, K.L.; Mohler, P.; Pennell, B.E. Surveying cultural and linguistic minorities. In Hard-to-Survey Populations; Tourangeau, R., Edwards, B., Johnson, T.P., Wolter, K.M., Bates, N., Eds.; Cambridge University Press: Cambridge, UK, 2014; pp. 245–269.
- Mansour, Z.; Brandt, L.; Said, R.; Fahmy, K.; Riedner, G.; Danovaro-Holliday, M.C. Home-based records’ quality and validity of caregivers’ recall of children’s vaccination in Lebanon. Vaccine 2019, 37, 4177–4183.
- Brown, D.W.; Tabu, C.; Sergon, K.; Shendale, S.; Mugoya, I.; Machekanyanga, Z.; Okoth, P.; Onuekwusi, I.U.; Ogbuanu, I.U. Home-based record (HBR) ownership and use of HBR recording fields in selected Kenyan communities: Results from the Kenya Missed Opportunities for Vaccination Assessment. PLoS ONE 2018, 13, e0201538.
- Kaboré, L.; Méda, C.Z.; Sawadogo, F.; Bengue, M.M.; Kaboré, W.M.F.; Essoh, A.T.; Gervaix, A.; Galetto-Lacour, A.; Médah, I.; Betsem, E. Quality and reliability of vaccination documentation in the routine childhood immunization program in Burkina Faso: Results from a cross-sectional survey. Vaccine 2020, 38, 2808–2815.
- Fowler, F.J. Survey Research Methods, 5th ed.; Applied Social Research Methods Series; SAGE: Los Angeles, CA, USA, 2013; 171p.
- Kite, J.; Soh, L.K. An Intelligent Survey Framework Using the Life Events Calendar. In Proceedings of the 2005 IEEE International Conference on Electro Information Technology, Lincoln, NE, USA, 22–25 May 2005; IEEE: Lincoln, NE, USA, 2005; pp. 1–6. Available online: http://ieeexplore.ieee.org/document/1627033/ (accessed on 16 November 2023).
- Glasner, T.; Van Der Vaart, W.; Belli, R.F. Calendar Interviewing and the Use of Landmark Events—Implications for Cross-cultural Surveys. Bull. Sociol. Methodol./Bull. de Méthodologie Sociol. 2012, 115, 45–52.
- Glasner, T.; Van Der Vaart, W. Applications of calendar instruments in social surveys: A review. Qual. Quant. 2007, 43, 333–349.
- Belli, R.F. The Structure of Autobiographical Memory and the Event History Calendar: Potential Improvements in the Quality of Retrospective Reports in Surveys. Memory 1998, 6, 383–406.
- Willis, G.B. Cognitive Interviewing: A Tool for Improving Questionnaire Design; Sage Publications: Thousand Oaks, CA, USA, 2004.
- Willis, G. Pretesting of Health Survey Questionnaires: Cognitive Interviewing, Usability Testing, and Behavior Coding. In Handbook of Health Survey Methods; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015.
- Chipchase, J. The Field Study Handbook, 3rd ed.; Field Institute: Toronto, ON, USA, 2018.
- Donaldson, P.J. Using Photographs to Strengthen Family Planning Research. Fam. Plan. Perspect. 2001, 33, 176.
- Gong, W. Developing a Systematic Approach to Better Monitor Vaccine Coverage Using Multiple Data Sources. Ph.D. Thesis, Johns Hopkins University, Baltimore, MD, USA, 2017. Available online: http://jhir.library.jhu.edu/handle/1774.2/60920 (accessed on 6 November 2023).
- Bjärkefur, K.; De Andrade, L.C.; Daniels, B. iefieldkit: Commands for primary data collection and cleaning. Stata J. 2020, 20, 892–915.
- Mohadjer, L.; Edwards, B. Paradata and dashboards in PIAAC. QAE 2018, 26, 263–277.
- DeMaio, T.J.; Mathiowetz, N.; Rothgeb, J.; Beach, M.E.; Durant, S. Protocol for Pretesting Demographic Surveys at the Census Bureau; United States Census Bureau: Washington, DC, USA, 1993. Available online: https://www.census.gov/library/working-papers/1993/adrm/sm93-04.html (accessed on 18 November 2023).
- Burnett, E.; Wannemuehler, K.; Ngoie Mwamba, G.; Yolande, M.; Guylain, K.; Muriel, N.N.; Cathy, N.; Patrice, T.; Wilkins, K.; Yoloyolo, N. Individually Linked Household and Health Facility Vaccination Survey in 12 At-risk Districts in Kinshasa Province, Democratic Republic of Congo: Methods and Metadata. J. Infect. Dis. 2017, 216 (Suppl. S1), S237–S243.
- Barchard, K.A.; Freeman, A.J.; Ochoa, E.; Stephens, A.K. Comparing the accuracy and speed of four data-checking methods. Behav. Res. Methods 2019, 52, 97–115.
- Barchard, K.A.; Scott, J.; Weintraub, D.; Pace, L.A. Better Data Entry: Double Entry Is Superior to Visual Checking: (516032008-001); American Psychological Association: Worcester, MA, USA, 2008; Available online: https://tinyurl.com/better-data-entry (accessed on 18 November 2023).
- Kozak, M.; Krzanowski, W.; Cichocka, I.; Hartley, J. The effects of data input errors on subsequent statistical inference. J. Appl. Stat. 2015, 42, 2030–2037.
- Zimmerman, L.; OlaOlorun, F.; Radloff, S. Accelerating and improving survey implementation with mobile technology: Lessons from PMA2020 implementation in Lagos, Nigeria. Afr. Popul. Stud. 2015, 29, 1699.
- Loft, J.D.; Murphy, J.; Hill, C.A. Surveys of health care organizations. In Handbook of Health Survey Methods; Johnson, T.P., Ed.; Wiley Handbooks in Survey Methodology; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2015; pp. 545–560.
- Pagel, C.; Prost, A.; Lewycka, S.; Das, S.; Colbourn, T.; Mahapatra, R.; Azad, K.; Costello, A.; Osrin, D. Intracluster correlation coefficients and coefficients of variation for perinatal outcomes from five cluster-randomised controlled trials in low and middle-income countries: Results and methodological implications. Trials 2011, 12, 151.
- Kaiser, R.; Woodruff, B.A.; Bilukha, O.; Spiegel, P.B.; Salama, P. Using design effects from previous cluster surveys to guide sample size calculation in emergency settings. Disasters 2006, 30, 199–211.
- Campbell, M.K.; Grimshaw, J.M.; Elbourne, D.R. Intracluster correlation coefficients in cluster randomized trials: Empirical insights into how should they be reported. BMC Med. Res. Methodol. 2004, 4, 9.
- Rowe, A.K.; Lama, M.; Onikpo, F.; Deming, M.S. Design effects and intraclass correlation coefficients from a health facility cluster survey in Benin. Int. J. Qual. Health Care 2002, 14, 521–523.
- Kalton, G.; Brick, J.M.; Lê, T. Chapter VI: Estimating components of design effects for use in sample design. In Household Sample Surveys in Developing and Transition Countries; United Nations: New York, NY, USA, 2005; p. 27.
- Pettersson, H.; Nascimento Silva, P.L. Chapter VII: Analysis of design effects for surveys in developing countries. In Household Sample Surveys in Developing and Transition Countries; United Nations: New York, NY, USA, 2005; p. 21.
- Lancaster, G.A.; Dodd, S.; Williamson, P.R. Design and analysis of pilot studies: Recommendations for good practice. J. Eval. Clin. Pract. 2004, 10, 307–312.
- Arain, M.; Campbell, M.J.; Cooper, C.L.; Lancaster, G.A. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med. Res. Methodol. 2010, 10, 67.
- Malmqvist, J.; Hellberg, K.; Möllås, G.; Rose, R.; Shevlin, M. Conducting the Pilot Study: A Neglected Part of the Research Process? Methodological Findings Supporting the Importance of Piloting in Qualitative Research Studies. Int. J. Qual. Methods 2019, 18, 160940691987834.
- Hassan, Z.A.; Schattner, P.; Mazza, D. Doing A Pilot Study: Why Is It Essential? Malays. Fam. Physician 2006, 1, 70–73.
- van Teijlingen, E.; Hundley, V. The importance of pilot studies. Nurs. Stand. 2002, 16, 33–36.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).