Our approach to evidence generation, led by Blueprint, involves identifying solutions with the most promise to move the dial on pressing challenges, and supporting sustained, high-quality implementation as they build evidence to inform scaling decisions.

We call this our “evidence pipeline”. For each funded project, we develop and execute a customized evaluation plan that is linked to a larger learning agenda that helps promising interventions improve their performance and impact over time.

Instead of simply asking whether a particular project works in the present, our evidence pipeline approach ensures that our evaluation findings provide a clear path towards the growth and development of our most promising projects, and clearly identify which approaches are less successful. By connecting our evaluations to a broader evidence generation strategy, we ensure we are capturing the right evidence at the right moment to move an intervention forward.

Bringing a Multidimensional Approach

Our evidence pipeline starts from the premise that evidence generation is a multidimensional process. In addition to more traditional evaluation activities, our process includes support for performance improvement and assessment of scaling potential as integral parts of the evidence-building journey.

As interventions grow from initial ideas to full-scale programs, we assess three dimensions:

The evidence journey begins with a rigorous assessment of an intervention’s logic model and theory of change. Once interventions demonstrate preliminary evidence of success, they become candidates for more rigorous evaluation, with the ultimate aim of conducting impact evaluations and cost-benefit analyses that generate the quality of evidence needed to inform scaling decisions.

As interventions mature, we set quality benchmarks and use techniques such as rapid-cycle evaluation to support projects through an ongoing cycle of learning and program improvement.

As evaluation findings emerge, we update our assessment of each intervention’s relevance to our mandate and its potential to have an impact at a pan-Canadian scale. This assessment process is conducted in collaboration with provinces and territories and other key stakeholders in Canada’s skills development ecosystems. The results of this assessment are a critical input to reinvestment decisions.

Evaluation Subcommittee

To ensure our evidence generation strategy is best-in-class, we are establishing an International Evaluation Advisory Subcommittee that brings together leading academics, evaluators, policy makers, and practitioners to provide strategic advice and guidance.

Evaluation Approach

Rigorous evaluation of innovation projects is a critical part of the Future Skills Centre’s mandate to generate evidence about what works to strengthen Canada’s skills development ecosystem.

Our aim is to generate high-quality evidence about the performance of innovation projects that helps practitioners and policy makers understand how they can learn, improve, and achieve impact.

Our strategy combines a systematic approach to measuring outcomes across innovation projects with the flexibility to customize designs to the purpose, context, and goals of each project. We work closely with innovation project partners to ensure we are measuring what matters most – ensuring that findings not only contribute to a broader evidence base on what works, but also inform the day-to-day practices and decision-making of our partners.

Through our evaluations, we seek to foster a culture of evidence-informed decision-making, ensuring that learning and evidence are embedded in policy, program, and practice decisions throughout the skills development ecosystem. We work closely with service providers and policy makers to ask the right questions about the performance of skills development initiatives and produce answers using rigorous methods and approaches. This enables us to create systems where data and evidence are continually leveraged to address pressing skills development challenges.


Key Features

We are measuring a set of core labour market outcomes for all of the projects we fund. This allows us to measure and compare the performance of individual projects and groups of projects based on project type, sector, or target population, and estimate the collective impact of all funded projects.
Download details on our outcomes framework

In addition to outcomes measurement, our approach focuses on progress metrics that support continuous learning. We work closely with project partners to build capacity to use their own data to generate insights to drive program improvement.
Download details on our approach to supporting improvement

Evaluation designs are selected to align with a project’s purpose, goals, and context. Broadly speaking, project evaluation designs fall into one of three categories: 1) evaluations to understand and strengthen program effectiveness; 2) evaluations to test the causal effects of interventions; and 3) evaluations to improve the performance of systems change initiatives.
Download details on our evaluation designs

Evaluation Process

We use a consistent process for evaluating each innovation project.
1. Discovery

Our evaluation process begins with a discovery phase. We work closely with partner organizations to introduce our evaluation approach, develop a better understanding of their model and learning goals, and co-design elements of the evaluation plan. This ensures our approach aligns with the context and goals of the model being evaluated and that findings will inform the day-to-day practices and decision-making of our partners.

Our activities in the discovery phase are also informed by a Gender-based Analysis Plus lens that ensures we are considering from the beginning how each project is experienced by – and has impacts for – diverse groups of people across multiple identity factors.

2. Evaluation plan development

Informed by the discovery phase, the next step is to design an evaluation plan that includes a robust logic model and theory of change, well-specified evaluation questions, and a detailed research design and analytical framework. Once the plan is developed, we build data collection and reporting tools, including surveys, interview and focus group protocols, participant consent forms, monitoring dashboards, and other reporting instruments.

3. Ethics review

Next, we apply for ethics clearance to ensure compliance with the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans.

4. Data collection, reporting, and learning

Throughout the course of each project, we collect data, actively monitor data quality, and communicate frequently with partners to share learnings and address issues. We continually adapt our approach as needed to ensure data quality and minimize the burden of the data collection process. We also produce interim reports to communicate early findings and support course corrections and adaptations as needed.

5. Final report

At the conclusion of the project, we develop a final report outlining our findings and summarizing the most important insights and learnings from the project to inform future practice and policy decisions.

Ongoing Project Evaluations

Evaluations for each innovation project will be detailed here soon.


Indigenous ICT Development Centre

Exploring approaches to build awareness and capacity in the information and communications technology sector for…

For-Credit InSTEM Program

Testing a culturally-based approach to essential employability skills training for Indigenous and Northern youth.

FAST: Facilitating Access to Skilled Talent

Testing expanded occupation streams for an online skills assessment and development platform to help newcomers…
View all Projects