CDC Program Evaluation Framework

When conducting a survey, it is vital to collect additional contextual information to fully understand state health department capacity. These combined approaches provided a more comprehensive and valid picture of state health department capacity by the end of the funding period. Second, in each step of the framework and when applying the standards, evaluators and interest holders can consider how to advance equity and the potential effects of their decisions. Doing so creates an intentional process in which evaluation discussions can yield insights about how the program can address drivers, which are “factors that create, perpetuate, or exacerbate a health inequity” (14,39). For example, when describing the program (Step 2), evaluators can ask how the program’s underlying theory of change addresses the drivers of health inequities. If the theory of change does not address these drivers, interest holders can discuss what opportunities exist to do so.

Comparing Formative and Summative Evaluation

To advance equity through evaluation, these discussions also can include conversations about whether (or how) the activities advance equity, whether the program has (or could have) any long-term intended health equity outcomes, and the pathway to achieving those outcomes. Logic models and the narratives that accompany them are “living documents,” and it is important to update them as program changes occur or are anticipated. Evaluation can contribute to advancing equity and eliminating health inequities in multiple ways (14). First, by using collaborative and equitable evaluation approaches, evaluators can create environments where everyone is respected and heard (39). Such environments can advance equity by creating forums where interest holders who might otherwise not have been as involved are able to share their perspectives (14,29,30).

Evaluation Standards

In contexts similar to DOL-supported vocational training or employment programs, formative evaluation is used to study the complexities of implementing new strategies, identify factors influencing progress, and suggest adaptations needed to optimize success. In educational settings, teachers frequently use formative assessment to monitor student understanding during lessons and adapt their teaching strategies in real time to meet student needs. Such changes need to be documented throughout an evaluation so that persons using the findings can make a well-informed decision about the quality and trustworthiness of the work performed. Documenting modifications provides the level of transparency required of high-quality evaluations (13,16,26).

The Benefits of an Evaluation Plan

  • For example, engaging a diverse set of interest holders can improve the program evaluation process and outcomes by providing a more complete understanding of the context, including its complexities (28), and help decrease overemphasis on values held by specific persons or groups (29).
  • By calculating expected durations based on these estimates, teams can identify the critical path and forecast project timelines more accurately.
  • Recommendations are actions for consideration resulting from the evaluation and can suggest how improvements could be made and how existing successes and strengths can be leveraged (88).
  • Breaking down the project into specific tasks ensures nothing is overlooked and helps clarify dependencies between activities.
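The three-point estimates behind the bullets above are conventionally combined with the standard PERT weighting, te = (O + 4M + P) / 6, where O, M, and P are the optimistic, most likely, and pessimistic durations. A minimal sketch (the task names and estimates are hypothetical illustrations, not from any real project):

```python
# PERT expected duration: te = (optimistic + 4 * most_likely + pessimistic) / 6
# Task names and three-point estimates below are hypothetical examples.
def pert_expected(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

estimates = {
    "design": (2, 4, 8),
    "build": (5, 7, 12),
    "test": (1, 2, 4),
}

for task, (o, m, p) in estimates.items():
    print(f"{task}: {pert_expected(o, m, p):.2f} days")
```

Summing the expected durations along each dependency chain is what lets a team identify the critical path and forecast the overall timeline.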

Depending on circumstances, the importance of one evaluation standard might need to be balanced against the relative importance of another, and decisions made on the basis of this balance might need to be revisited if the relative importance of the standards changes during an evaluation. Early in the BARE analysis of year 1 annual progress reports, it became evident that many grantees were not offering enough detail about their strategies or activities to determine the evidence base. In the Table 2 example, without information on what was meant by educating policy makers, it was impossible to determine whether the activity was well informed by the evidence base. If the education was related to home visitation, for example, it would be coded as an activity supportive of BARE. If the education was simply delivering a fact sheet related to child abuse and neglect, it would be coded as “none,” and technical assistance (TA) would be delivered to shift toward activities that program evaluation evidence supports as having a desired impact on health outcomes.
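A first pass at this kind of coding can be automated before analysts review edge cases by hand. The sketch below mirrors the “supportive” vs “none” distinction described above; the keyword list and category labels are illustrative assumptions, not the actual BARE codebook:

```python
# Hypothetical first-pass coding of grantee activity descriptions by
# evidence base. The keywords and labels are illustrative assumptions,
# not the actual BARE codebook.
EVIDENCE_KEYWORDS = {"home visitation", "parent training", "safe sleep education"}

def code_activity(description: str) -> str:
    """Return a provisional evidence-base code for an activity description."""
    text = description.lower()
    if any(keyword in text for keyword in EVIDENCE_KEYWORDS):
        return "supportive"
    return "none"  # flag for technical assistance follow-up

print(code_activity("Educating policy makers about home visitation"))  # supportive
print(code_activity("Distributed a fact sheet on child abuse"))        # none
```

In practice, descriptions coded “none” by such a filter would still be reviewed by an analyst, since sparse reporting (as in the Table 2 example) is exactly the case that keyword matching cannot resolve.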

People Who Might Use the Evaluation Findings


For instance, if a new local initiative is launched, an informed citizen can ask whether formative evaluations are planned to monitor its early progress and keep it on track. Similarly, if a long-standing public program is under review, one can ask about the findings and recommendations from its most recent summative evaluation. Evaluation findings, recommendations, and lessons learned are crucial for improving programs; however, they do not automatically translate into action for informed decision-making. Ensuring that evaluation insights are used requires early planning, collaboration, and commitment from the evaluator and all interest holders to act on the findings and recommendations.

While the theoretical frameworks and approaches discussed earlier served the needs of the Core VIPP well, they were not without room for improvement. In this section, lessons learned and planned enhancements for future program evaluations are discussed.

Coordinating organizations can also learn from the RNCO structure and how it was modified to be a more deliberate mechanism to support a community of practice with resources that reduce the individual burden for the members themselves. RNCOs demonstrate that allowing coordinating organizations to have either a regional focus of peer-to-peer collaboration or a national focus on specific topics and initiatives (eg, research-to-practice and practice-to-research) can enhance group utility and participation. Finally, FOA applications requiring the production of an evaluation plan and requisite revision to that plan during the first year postaward build a more deliberate process for continuous quality improvement from the very first planning stage. All of these changes combined can lead to greater success and understanding of the impact of a program.

  • This involves determining which sequence of tasks has the longest total expected duration in the PERT diagram.
  • Formative evaluation functions like a series of check-ups or progress reports conducted while a program is still being developed or implemented.
  • Secure timesheets feed actual effort data into the system, enabling accurate tracking and more informed decisions.
  • During the evaluation, these include in-process data sharing, user check-ins, and feedback sessions with interest holders.
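The first bullet above, finding the sequence of tasks with the longest total expected duration, amounts to computing the longest path through the task dependency graph, weighting each task by its expected duration. A minimal sketch using the standard library (the task graph and durations are hypothetical):

```python
# Critical path = longest path through a task DAG, found in topological order.
# Tasks, durations, and dependencies below are hypothetical illustrations.
from graphlib import TopologicalSorter

durations = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

order = list(TopologicalSorter(predecessors).static_order())
finish = {}  # earliest finish time of each task
via = {}     # predecessor of each task on its longest path

for task in order:
    start = 0
    via[task] = None
    for pred in predecessors[task]:
        if finish[pred] > start:
            start, via[task] = finish[pred], pred
    finish[task] = start + durations[task]

# Walk back from the latest-finishing task to recover the critical path.
end = max(finish, key=finish.get)
path = []
while end is not None:
    path.append(end)
    end = via[end]
print("critical path:", " -> ".join(reversed(path)), "| duration:", finish[path[0]])
```

Because every node is processed after all of its predecessors, a single pass in topological order suffices, and the same `finish` values are what a schedule would use as earliest-start inputs for each downstream task.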

This article delves into the essential components of program evaluation, offering a comprehensive guide to help organizations maximize their potential and achieve sustainable success. The journey through program evaluation not only illuminates strengths and weaknesses but also opens doors to increased funding opportunities and improved performance. Programs without evaluation plans in place can experience significant challenges during evaluations. If a program does not have an evaluation plan, an evaluability assessment can help determine whether the program can be evaluated and whether an evaluation will produce useful results. A program with an evaluation plan also can benefit from an evaluability assessment, which can gauge how well the evaluation plan was put into action and its effectiveness in preparing the program for an evaluation. Evaluation is more difficult and less meaningful after the program ends, because stakeholders cannot use information gathered from the evaluation to alter the program’s implementation or to justify continued funding.

In addition, aggregate evaluation reports exceeding 50 pages were deemed not useful and were often not read by stakeholders. Reports were frequently not finalized and disseminated to states until 9 months after the end of the previous reporting period. As a result, shorter, timelier TA reports were developed and delivered to states within 2 months of annual report submission. In addition, state versions of the previously mentioned interactive dashboards were shared back with state partners when possible.

In the realm of grant-funded initiatives, program evaluation emerges as a pivotal tool for organizations striving to enhance their effectiveness and impact. By systematically assessing a program’s design, implementation, and outcomes, organizations can unlock valuable insights that inform decision-making and foster accountability. Putting the plan in writing helps ensure that the process is transparent and that all stakeholders agree on the goals of both the program and the evaluation. It serves as a reference when questions arise about priorities, supports requests for program and evaluation funding, and informs new staff. An evaluation plan also can help stakeholders develop a realistic timeline for when the program will (or should) be ready for evaluation. Much of the planning for acting on the findings and recommendations has been discussed in previous steps of the framework.

Interviews involve in-depth conversations with program staff, participants, administrators, and other relevant stakeholders to gather detailed perspectives.