
CMHC Evaluation Function

Evaluation's Role

The mission of the Evaluation function is to provide evidence-based, credible, neutral and timely information on the ongoing relevance, results and value of initiatives, policies and programs (hereafter collectively referred to as "programs"), as well as on alternative ways of achieving expected results, for the purposes of learning and decision making.

Defined by the Canadian Evaluation Society as the systematic assessment of the design, implementation or results of a program for the purposes of learning or decision making, Evaluation is uniquely positioned to provide both an analytical and an advisory capacity to support program management.

Our objective is to help our clients achieve:

  • Demonstrated value of their program to clients and stakeholders
  • Increased visibility of their program and its achievements with senior management
  • Enhanced qualitative and quantitative data for identifying and managing potential risks and issues
  • Enhanced accountability through reporting on results

Evaluation Clients and Stakeholders

Internal Clients and Stakeholders

Within CMHC, Evaluation provides support primarily through timely information for the purposes of learning and decision making. Key internal clients and stakeholders include:

  • Audit Committee
  • Board of Directors
  • Management Committees supporting programs
  • Program Managers and Team Leads

External Clients and Stakeholders

External to CMHC, Evaluation supports clients and stakeholders through active collaboration and coordination, delivering information that can inform sound decision making for all parties. Key external clients and stakeholders may include:

  • Federal Departments and Agencies
  • Crown Corporations
  • Partners in housing, including Provinces, Territories, First Nations, and Municipalities

Evaluation Services

Evaluation provides our clients and stakeholders with the following services:

Program Planning

Logic Model

Depicts the logical relationships among the inputs, activities, outputs and outcomes of a program. Because the logic model is intended to be a visual depiction of the program, it should be detailed enough to adequately describe the program yet concise enough to capture the key details on a single page.

Theory of Change

Identifies how certain activities or actions are intended to produce results. The Theory of Change depicts a set of assumptions, risks and external factors that describes how and why the program is intended to work. This theory connects the program's activities with its goals. It is inherent in the program design and is often based on knowledge and experience of the program, research, evaluations, best practices and lessons learned.

Advisory Services on Program Design

Provision of advisory services such as input on the development of expected results, performance measures, or indicators for a planned program. It should be noted that the advisory role is not incompatible with the independence and neutrality of Evaluation; Evaluators are not involved in decision making, are careful to make their evaluation advisory role apparent, and are vigilant to guard against bias in accordance with the ethical principles of Evaluation.

Program Implementation

Formative Evaluation

Examines the process of implementing the program and the extent to which early outcomes are being achieved. A formative evaluation can be conducted continuously over the full life of the program, or as a one-time assessment, typically during the program's earlier stages. Results can be used to adjust the program during its lifetime.

Needs Assessment

Determines who needs the program, how great the need is, and what can be done to best meet the need. A Needs Assessment can be used to identify audiences that are not currently being adequately served by existing programs, and provide recommendations for ways to reduce these gaps.

Program Review

Considers a program's operations, processes and systems to identify potential efficiencies, cost savings, and opportunities for realignment with another level of government and/or other delivery options. A Program Review is typically undertaken when cost savings are sought or when there are concerns about the program's operations or ongoing relevance.

Developmental Evaluation

Reviews the activities of a program operating in dynamic, novel environments with complex interactions. It focuses on innovation and strategic learning rather than standard outcomes.

Case Study

Reviews components of a program that may be impacted by unpredictable circumstances, or result in unique outcomes. A Case Study is not based on the assumption that the program follows a measurable path, and instead treats the components of the program as a reaction to a series of events with adaptive strategies and outcomes. This allows for greater latitude in assessing program impacts.

Success Case Method

Identifies the most and least successful cases in a program and examines them in detail. Results may be used to enable an informed decision as well as to justify program effectiveness. This type of assessment is generally recommended for later stages of a program to enable the use of actual outcome data.

Rapid Impact Assessment

Provides a structured way to gather expert assessments of a program’s impact. A Rapid Impact Assessment engages a number of experts to provide a balanced perspective on the impacts of a program and ultimately increase acceptance and adoption of the findings. Each expert assesses program outcomes relative to a counterfactual, which is an alternative program design or situation, in order to assess the program’s impact relative to alternatives.

Economic Impact Assessment

Identifies the value or benefit of a program in relation to its costs. An Economic Impact Assessment may review the cost-effectiveness or cost-benefit of a program relative to alternative options or models. Results may be used to assess the program model against best practices or options for program change, to enable an informed decision and to justify program relevance. This type of assessment is generally recommended for later stages of a program to enable the use of actual outcome data.

Program Closure

Summative Evaluation

Investigates the extent to which the program is achieving results and delivering on intended outcomes. A summative evaluation can be used not only to make improvements to a program at a later stage in its life, but also to inform summative decisions such as whether the program should continue as-is, expand, be reduced, or be eliminated.
