The Goldilocks Challenge by Mary Kay Gugerty & Dean Karlan



Oversight

Just as good data are integral to good program management, good management goes a long way toward producing valid, reliable data. Monitoring and survey activities should be subject to data quality assurance. We use the following three methods extensively in our fieldwork to ensure that surveyors ask questions consistently and deliver reliable data:

Accompanying: In the first few days of the data collection activity, a team leader should directly observe a surveyor for the duration of the survey. The team leader may also complete the instrument alongside the surveyor and check the surveyor’s work against the team leader’s own record; discussing any discrepancies helps the surveyor learn how answers should be recorded and helps improve reliability. The team leader should also discuss any issues—such as how the surveyor asks questions and his or her body language and tone of voice—that could result in inconsistently collected data. As data collection staff become more experienced, accompanying can cover a smaller portion of the survey. For organizations using an external survey firm, the firm’s staff will likely be responsible for accompanying surveyors. Regardless, it is good practice to accompany 10% of all surveys, including those conducted by an external firm.

Random spot checks: Spot checks are unannounced visits by the team leader or the field manager to observe surveyors administering a survey. Spot checks help make sure that data collectors are actually out collecting data when they say they are (rather than fabricating surveys or monitoring forms under a tree near the office). While accompanying can be predictable, spot checks are surprise visits. One needs to vary the order of visits, times of day, and who does the spot checks so that data collection staff understand that any instrument could be checked at any time.
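
The varying of visits can be automated with a simple randomized plan. Below is a minimal sketch in Python, assuming a small field team; the surveyor names, checker roles, days, and time slots are illustrative placeholders, not details from the book.

```python
# Minimal sketch of an unpredictable spot-check schedule.
# Surveyor names, checker roles, and time slots are illustrative only.
import random

surveyors = ["A. Mwangi", "B. Osei", "C. Perez", "D. Tran"]
checkers = ["team leader", "field manager"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
time_slots = ["morning", "midday", "afternoon"]

rng = random.Random()  # seed only if you need a reproducible plan for review

schedule = []
for day in days:
    schedule.append({
        "day": day,
        "surveyor": rng.choice(surveyors),   # vary who gets visited
        "time": rng.choice(time_slots),      # vary the time of day
        "checker": rng.choice(checkers),     # vary who does the check
    })

for visit in schedule:
    print(visit)
```

Because the draw is random each week, data collection staff cannot infer who will be checked, when, or by whom.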

Back-checks: Back-checks (also called audits) are performed by an independent team that visits a random subset of respondents and asks them a few questions from the data collection instrument. The person responsible for analysis matches the answers to this subset of questions against the originals to catch any discrepancies. This provides a way to assess data reliability, serving as a check on both the quality of the data collection instrument and surveyor performance. A good rule of thumb is to conduct random back-checks for 10% of surveys. Back-checks may reveal patterns in the data signaling that surveyors are pre-filling answers or falsely claiming that a respondent cannot be found. Back-checking is only useful if there is an efficient system for analyzing the data quickly and providing constructive feedback to the team; planning for back-checking should therefore start well before any field activities.
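
A minimal sketch of how such a system might work is shown below, in Python, assuming the original surveys and back-check responses are stored as simple records keyed by a respondent ID; the field names (respondent_id, surveyor, q1, q2) and the data structures are illustrative assumptions, not the authors' tooling. The 10% sampling rate follows the rule of thumb above.

```python
# Minimal sketch: draw a ~10% back-check sample and flag discrepancies.
# Field names (respondent_id, surveyor, q1, q2) are illustrative assumptions.
import random

def draw_backcheck_sample(surveys, rate=0.10, seed=None):
    """Randomly select roughly 10% of completed surveys for back-checking."""
    rng = random.Random(seed)
    k = max(1, round(len(surveys) * rate))
    return rng.sample(surveys, k)

def discrepancy_rates(originals, backchecks, questions):
    """Compare back-check answers to the originals, per surveyor."""
    originals_by_id = {s["respondent_id"]: s for s in originals}
    mismatches, totals = {}, {}
    for bc in backchecks:
        orig = originals_by_id.get(bc["respondent_id"])
        if orig is None:
            continue  # respondent missing from original data; investigate separately
        surveyor = orig["surveyor"]
        for q in questions:
            totals[surveyor] = totals.get(surveyor, 0) + 1
            if orig.get(q) != bc.get(q):
                mismatches[surveyor] = mismatches.get(surveyor, 0) + 1
    return {s: mismatches.get(s, 0) / totals[s] for s in totals}

# Toy usage: one of B. Osei's back-checked answers disagrees with the original.
originals = [
    {"respondent_id": 1, "surveyor": "A. Mwangi", "q1": "yes", "q2": 3},
    {"respondent_id": 2, "surveyor": "B. Osei", "q1": "no", "q2": 5},
]
backchecks = [{"respondent_id": 2, "q1": "yes", "q2": 5}]
print(discrepancy_rates(originals, backchecks, ["q1", "q2"]))  # {'B. Osei': 0.5}
```

Reporting discrepancy rates per surveyor, rather than only in aggregate, makes it easier to spot individuals whose back-checked answers diverge systematically from the originals and to give them the constructive feedback described above.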


