Evaluation of impact and developments of

The key challenge in impact evaluation is that the counterfactual cannot be directly observed and must be approximated with reference to a comparison group. A range of accepted approaches exist for determining an appropriate comparison group for counterfactual analysis, using either a prospective (ex ante) or retrospective (ex post) evaluation design. Retrospective evaluations are usually conducted after the implementation phase and may exploit existing survey data, although the best evaluations will collect data as close to baseline as possible to ensure comparability of the intervention and comparison groups.
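As an illustration of how a comparison group approximates the counterfactual, the following is a minimal sketch of a difference-in-differences calculation, one common quasi-experimental design. The function name and all figures are invented for illustration; this is not a full analysis, only the core arithmetic of the design.

```python
# Illustrative sketch only: difference-in-differences, a common retrospective
# (quasi-experimental) design. The comparison group's change over time stands
# in for the unobservable counterfactual trend of the intervention group.
# All numbers below are invented.

def diff_in_diff(t_base: float, t_end: float, c_base: float, c_end: float) -> float:
    """Impact estimate: change in the intervention group's mean outcome
    minus the change in the comparison group's mean outcome."""
    return (t_end - t_base) - (c_end - c_base)

# Hypothetical mean household incomes at baseline and endline:
impact = diff_in_diff(t_base=100.0, t_end=130.0, c_base=98.0, c_end=110.0)
print(impact)  # (130 - 100) - (110 - 98) = 18.0
```

The subtraction of the comparison group's change is what makes baseline comparability matter: if the two groups would have trended differently anyway, the estimate is biased.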

A PDF copy is available here. Table of contents: What is a theory of change? What is the problem? A summary of the problems. And a word in defence.

Framing the discussion around the six problems, and the possible ways forward, is a good way to organize the presentation. The documentation and links that you present will be greatly appreciated, as will the graphical illustrations of the different approaches.

Without getting into too much detail, the following are a few general thoughts on this very useful paper. First, theories of change rarely build in potential unintended consequences: for example, there is an extensive literature documenting negative consequences for women of political and economic empowerment interventions, often including increased domestic violence.

So these could be built into the TOC, but in many cases they are not. Many TOCs implicitly assume that the project and its environment remain relatively stable throughout the project lifetime. Of course, many of the models you describe do not assume a stable environment, but it might be useful to flag the challenges of emergence.

Many agencies are starting to become interested in agile project management to address the emergence challenge. Given the increasing recognition that most evaluation approaches do not adequately address complexity, and the interest in complexity-responsive evaluation approaches, you might like to focus more directly on how TOCs can address complexity.

Complexity is, of course, implicit in much of your discussion, but it might be useful to highlight the term.

Do you think it would be useful to include a section on how big data and data analytics can strengthen the ability to develop more sophisticated TOCs? Many agencies may feel that many of the techniques you mention would not be feasible with the kinds of data they collect and their current analytical tools.

Related to the previous point, it might be useful to include a brief discussion of how accessible the quite sophisticated methods that you discuss would be to many evaluation offices. What kinds of expertise would be required? Often the TOC is constructed at the start of a project with major inputs from an external consultant.

The framework is then rarely consulted again until the final evaluation report is being written, and there are even fewer instances where it is regularly tested, updated and revised.

There are of course many exceptions, and I am sure experience may be different with other kinds of agencies. There is probably very little appetite among many implementing agencies, as opposed to a few funding agencies such as DFID, for more refined models. Among agencies where this is the case, it will be necessary to demonstrate the value added of investing time and resources in more refined TOCs.

So it might be useful to expand the discussion of the very practical, as opposed to the broader theoretical, justifications for investing in the existing TOC. In addition to the above considerations, many evaluators tend to be quite conservative in their choice of methodologies and are often reluctant to adopt new ones, particularly approaches with which they are not familiar.

Guidelines for Project and Programme Evaluations

While experimental impact evaluation methodologies have been used to assess nutrition and water and sanitation interventions in developing countries for decades, the first and best-known application of experimental methods to a large-scale development program is the evaluation of the Conditional Cash Transfer (CCT) program Progresa.
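A minimal sketch of the experimental logic behind evaluations such as Progresa's: under random assignment, the average treatment effect can be estimated as a simple difference in mean outcomes between treatment and comparison units. The variable names and all figures below are invented for illustration.

```python
# Illustrative sketch only: with randomized assignment, a difference in means
# is an unbiased estimate of the average treatment effect. Data are invented.

def mean(values):
    return sum(values) / len(values)

def average_treatment_effect(treated, control):
    """Difference in mean outcomes between randomized treatment and control units."""
    return mean(treated) - mean(control)

# Hypothetical school-enrolment rates for treatment and control villages:
treated_enrolment = [0.82, 0.79, 0.88, 0.85]
control_enrolment = [0.74, 0.71, 0.77, 0.70]

print(round(average_treatment_effect(treated_enrolment, control_enrolment), 3))  # 0.105
```

Randomization is what licenses this simple comparison: it makes the control group's mean outcome a valid stand-in for the counterfactual, without the parallel-trends assumption that quasi-experimental designs require.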

Manuscripts published in Practical Assessment, Research & Evaluation are scholarly syntheses of research and ideas about methodological issues and practice. They are designed to help members of the community keep up to date with effective methods, trends, and research developments.

Policy impact evaluation is one of the three main phases of policy evaluation (Brief 5: Evaluating Policy Impact, in Step by Step – Evaluating Violence and Injury Prevention Policies); its relation to the policy development phases is illustrated in Figure 1.

Monitoring, Evaluation and Impact Evaluation: Some Basic Characteristics. Monitoring is periodic, using data gathered to provide management and stakeholders of an ongoing development intervention with indications of the extent of progress, achievement of objectives, and progress in the use of allocated funds.


Source: Impact evaluation – Wikipedia