
Gather Credible Evidence

Now that you have your evaluation questions and design, the research can begin!

Gathering evidence for an evaluation is similar to the process experienced in any public health research endeavor.

The following section provides a high-level summary of the process, paying particular attention to areas where it differs from other research or data-intensive projects.

The CDC manual for health evaluation highlights several key considerations for the evidence gathering process:

  • Indicators
  • Sources of evidence/methods of data collection
  • Quality
  • Quantity
  • Logistics


Indicators are specific, observable and measurable tools or metrics to help you answer your research questions.

For example, if your question asks how much more adherent participants on chronic asthma medications were than a comparator group, you will need to identify an indicator that allows you to answer it.

For this example, it would likely be a specific metric on adherence such as percentage of days covered or perhaps mean days of coverage. These types of outcome indicators will help with identifying what is needed for data collection.
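As a concrete sketch, percentage of days covered (PDC) can be computed from pharmacy fill records by marking each calendar day in the measurement window that is covered by at least one fill. The fill dates, days supplied, and window below are invented for illustration:

```python
from datetime import date, timedelta

def pdc(fills, start, end):
    """Percentage of days covered: share of days in [start, end]
    covered by at least one fill. fills = [(fill_date, days_supply)]."""
    total_days = (end - start).days + 1
    covered = set()
    for fill_date, days_supply in fills:
        for i in range(days_supply):
            day = fill_date + timedelta(days=i)
            if start <= day <= end:
                covered.add(day)
    return len(covered) / total_days

# Three 30-day fills across a 91-day window, with a 10-day gap in late January
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 10), 30), (date(2024, 3, 11), 30)]
print(round(pdc(fills, date(2024, 1, 1), date(2024, 3, 31)), 2))  # -> 0.89
```

Because each day is counted at most once, PDC never exceeds 1.0 even when refills overlap.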

There are also process indicators, which help you explore the activity-based process questions in your evaluation, such as whether staff members were properly trained or whether participants received high-quality materials.

It will be important to review the literature at this stage to understand if there are already available indicators that can be adapted and applied to your evaluation. For example, for an intervention targeted at lessening depressive symptoms there may be a questionnaire that could help with measuring depression experiences and helping to define severity.
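For instance, the widely used PHQ-9 depression questionnaire is one such existing indicator: it yields a total score from 0 to 27 with published severity bands. A minimal sketch of mapping a total score to its standard band:

```python
def phq9_severity(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to its standard severity band."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total must be between 0 and 27")
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"

print(phq9_severity(12))  # a score of 12 falls in the "moderate" band
```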

However, you may also have to develop your own outcome and/or process indicators to fit your specific evaluation situation.

Scenario: Improving Medication Adherence for Type II Diabetic Adults

Let's imagine that you are an evaluator just getting ready to evaluate a program focused on improving medication adherence for adults with type II diabetes. The goal of the program was to improve on existing low adherence rates. Now, you as the evaluator need to decide on your indicator for adherence, as this term could be interpreted many different ways. Please explore the possible adherence measurements you could use as your indicator (PubMed will likely have a lot of options to explore). Choose the three that you think could be most relevant for this research. For each, describe the metric, any important ranges or cutoffs to consider, and the pros and cons of this metric. Consider in your pros and cons how easy or difficult it would be to gather such data.

  • For Adherence Metric 1, indicate the pros and cons.
  • For Adherence Metric 2, indicate the pros and cons.
  • For Adherence Metric 3, indicate the pros and cons.
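One trade-off this exercise often surfaces: the medication possession ratio (MPR) simply sums days supplied over the window, so overlapping early refills can push it above 100%, whereas PDC caps each calendar day at one. A small sketch with invented fill data, using the common 80% adherence cutoff:

```python
def mpr(days_supplied: list[int], window_days: int) -> float:
    """Medication possession ratio: total days' supply / days in window.
    Overlapping early refills can push this above 1.0."""
    return sum(days_supplied) / window_days

# Four 30-day fills in a 90-day window, some filled early (overlapping)
ratio = mpr([30, 30, 30, 30], 90)
print(round(ratio, 2))  # -> 1.33, exceeding 1.0 in a way PDC cannot
print(ratio >= 0.8)     # -> True: meets the common 80% adherence cutoff
```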

Sources of Evidence/Methods of Data Collection

After selecting the indicators/metrics you will use to answer your evaluation questions, you will next need to consider how to find and collect your data.

Key Question: What existing data sources may be available to you to answer your question using your indicator?

This secondary data should always be considered, particularly when time and resources are constrained. Examples include US Census files, state vital statistics, and any internal databases utilized by the program.

Review any available secondary data and decide what can be utilized for the evaluation versus what you will need to collect yourself (primary data collection).

The type of data (secondary or primary) necessary to answer evaluation questions depends entirely on the evaluation, the chosen methods, and the resources available.

Primary data collection may be a strong option, particularly if your program has no internal databases or you are looking at a very small target population. There are many different types of primary data, such as surveys, interviews, focus groups, observation, and document review.

Deciding on your data sources will depend not only on your questions but, as with many things in evaluation, on the context of the research itself. Primary data collection can often be expensive and time consuming, so all your resources and timelines will have to be taken into consideration.


Quality

Evaluation research, even though it has real-world constraints, should always strive for a high level of quality and rigor. Much of how your evaluation will be interpreted will be based on the validity and quality of your data sources.

According to the CDC program evaluation for public health introduction:

An evaluation is reliable to the extent that it repeatedly produces the same results, and it is valid if it measures what it is intended to measure.

There are many situations that could influence data quality. Every effort should be made to reduce error and increase quality during the design, collection, and analysis phases of your evaluation.

The CDC suggests a pretest for any primary data collection, to help pinpoint any issues with the data collection instrument.
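As one concrete pretest check, the internal-consistency reliability of a multi-item instrument is often summarized with Cronbach's alpha. A minimal sketch using only the standard library, with invented pilot responses:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of respondents x columns of items:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented pilot data: 4 respondents answering 3 items that track together
pilot = [[1, 2, 1], [2, 3, 2], [3, 4, 3], [4, 5, 4]]
print(cronbach_alpha(pilot))  # perfectly consistent items give alpha = 1.0
```

A low alpha on pilot data is one signal that items may be measuring different things, pointing to instrument revision before full data collection.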


Quantity

Evaluations can generate a large amount of data from many sources. It will be important to consider how much data are needed to answer your research questions.

Particularly if you are interested in seeing a specific effect, it may be helpful to discuss with a statistician how much sample is needed to provide sufficient statistical power to observe any effect.
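That power conversation usually reduces to a calculation like the following: the classic two-proportion sample-size formula, here with invented adherence rates of 50% versus 60%, a 5% two-sided alpha, and 80% power.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs p2 with a two-sided
    z-test of two proportions (classic pooled formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # value for desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a 50% -> 60% improvement in adherence
print(n_per_group(0.50, 0.60))  # -> 388 participants per group
```

Even this rough figure helps scope recruitment and budget before data collection begins.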


Logistics

Consider what physical resources are needed to collect your data, as well as how much time and staffing will be necessary.

Also keep in mind ethical considerations for data collection, such as confidentiality, as well as cultural and personal preferences. For example, it may not be culturally appropriate to survey a random sample at the Braintree mall about sexual education in the school district; most people you ask will not deem it an appropriate location for such a question, given the sensitivity of the topic.

Gather Evidence Checklist

From the CDC manual for Health Program Evaluation

  • Identify indicators for activities and outcomes in the evaluation focus.
  • Determine whether existing indicators will suffice or whether new ones must be developed.
  • Consider the range of data sources and choose the most appropriate one.
  • Consider the range of data collection methods and choose those best suited to your context and content.
  • Pilot test new instruments to identify and/or control sources of error.
  • Consider a mixed-method approach to data collection.
  • Consider quality and quantity issues in data collection.
  • Develop a detailed protocol for data collection.