Welcome
This guide is designed to introduce you to Canada’s Algorithmic Impact Assessment—or AIA.
The AIA takes the form of an open-source questionnaire platform, which is available via the Government of Canada on GitHub. The goal of this guide is to give communities, scholars, governments, and you—dear reader—the context and tools needed to build on this platform and help develop more inclusive approaches to AI governance.
Using this guide
Canada's AIA is a messy assemblage of people, ideas, technical objects, and bureaucratic processes. Instead of condensing that complexity into a single narrative, this guide is organized into three modules. Each module offers us a different way to think about the AIA, which in turn offers us different ways to think about creative interventions and engagements.
What's an AIA?
An algorithmic impact assessment, or AIA, refers to a broad group of processes for defining, evaluating, and mitigating risks or harms posed by algorithmic systems—including AI systems. The concept borrows from "impact assessments" in other domains, including privacy impact assessments and environmental impact assessments.
Platform
Let’s set aside the content of the AIA questionnaire and the policy environment the AIA is meant to be used in. Instead, let's consider Canada’s AIA as an open-source policy platform.
Module Resources
GitHub
The GitHub repository for the AIA contains the source code for the AIA itself, as well as a few useful developer tools. This repository (repo for short) is the actual development environment for the live AIA, which means that anyone can participate in the development of the platform.
Languages and Frameworks
The primary AIA platform is built using the Vue JavaScript framework. But older versions have been built using Python and Django, C# and .NET, and pure JavaScript. When the project was in development, early versions even hosted the AIA as a spreadsheet!
The questionnaire portion of the AIA is built using a modified version of SurveyJS—a JavaScript library for building surveys and questionnaires. One particularly useful feature of SurveyJS is that it provides a web-based visual editor for building and modifying questionnaires.
The repo also includes a Python script for converting AIA surveys from JSON files to CSV. This can be a useful way to visualize the entire questionnaire, as well as the weights assigned to specific questions. Click here to download a sample CSV based on the most recent questionnaire.
Note: the script provided in the repo is set up for use on Windows and uses backslashes for file paths. To run the script on a Mac or another UNIX-based system, replace the backslashes with slashes, or download this pre-modified version.
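Under the hood, the conversion is a simple tree walk over the questionnaire JSON. Here's a minimal Python sketch of the same idea, assuming the questionnaire follows the standard SurveyJS shape (a top-level `pages` list, each page holding an `elements` list of question objects); this is an illustration, not the repo's actual script:

```python
import csv
import json

def survey_to_csv(json_path, csv_path):
    """Flatten a SurveyJS questionnaire into one CSV row per question.

    Assumes the standard SurveyJS shape: a top-level "pages" list, each
    page holding an "elements" list of question objects.
    """
    with open(json_path, encoding="utf-8") as f:
        survey = json.load(f)

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["page", "name", "type", "title"])
        for page in survey.get("pages", []):
            for question in page.get("elements", []):
                writer.writerow([
                    page.get("name", ""),
                    question.get("name", ""),
                    question.get("type", ""),
                    question.get("title", ""),
                ])
```

Opening a CSV like this in a spreadsheet gives you the whole questionnaire at a glance, one question per row.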
A note on licenses
The AIA is released as open-source under an MIT License. This license grants anyone the right to use, redistribute, and modify the AIA, so long as any derivatives also include a copy of the MIT license.
Making changes
There are several ways you can start working with and modifying Canada's AIA platform.
Clone or fork the AIA-EIA-JS GitHub Repo. The repo also includes instructions for running a local version of the AIA in Docker or Node.
You can load the questionnaire into SurveyJS's survey builder:
- Open SurveyJS’s free survey builder in your web browser and navigate to the "JSON Editor" tab.
- Copy the code from the JSON file on this page into the SurveyJS JSON Editor, then navigate back to the "Designer" tab.
- The survey should now be loaded and editable.
Note: Canada’s AIA uses a modified version of the SurveyJS library, so not all features of the survey are reflected using this method. In particular, impact and mitigation scores can’t be viewed this way.
You can directly download the questionnaire as a JSON file or a CSV (spreadsheet).
Additional Resources
Walkthrough
If the AIA is a platform, what are the platform's features?
We can think about this by "walking through" the platform itself.
Context and Landing Pages
When you open the live version of the AIA, the first thing you’ll notice is the landing page. This page provides context for the AIA tool, including how it is intended to be completed, who is expected to complete it, and what to do after completion. This page is the bridge between the AIA platform and its regulatory context. It expresses a vision of how the AIA is intended to be used, and how it’s expected to function as a regulatory tool.
After clicking the "Start your assessment" button, you’ll be brought to the beginning of the questionnaire itself. The first page is, again, descriptive. It provides a summarized description of the tool itself, as well as instructions for actually using the tool.
Sections
The platform is divided into two major sections, and several subsections. We can refer to the major sections as the "impact section" and the "mitigation section." Questions in each section contribute to different scores. Each of these sections has subsections, which focus on particular aspects of the system. The dropdown menu at the top of the page allows you to quickly jump between these subsections.
Scoring
You’ve probably noticed an (initially) green bar at the bottom of your screen. This bar displays your score as you progress through the AIA. There are 3 scores displayed: a raw impact score, a mitigation score, and a "current score". The "current score" is the raw impact score minus 15% if the system scores at least 80% of the maximum mitigation score. We'll go more in-depth on the scoring system in the Policy module.
The colour of the scoring bar corresponds to the impact level displayed on the left of the bar. The bar turns green for impact level 1, blue for level 2, yellow for level 3, and red for level 4.
In this case, impact levels correspond to requirements set out in the Canadian Treasury Board Secretariat’s Directive on Automated Decision-Making.
Questions and Input Types
At the heart of the AIA platform are the questions themselves. We'll cover the content of the AIA in a different module, so let’s consider the different kinds of questions and input types that are available—and how they impact (get it?) the AIA.
Scoreable Inputs
We can divide the AIA’s inputs into two categories: scoreable and unscoreable. Scoreable inputs allow the result to be quantified, which means that their values can contribute to the AIA’s scoring system. Not all uses of a scoreable input type need to actually be scored, but from a platform and design perspective, these inputs always have the potential to be scored.
Canada’s AIA platform allows for three kinds of scoreable input:
- Radio buttons
Radio buttons allow the user to select only one option from a group. They’re often used for “yes/no” questions or other binary choices.
- Checkboxes
Checkboxes are similar to radio buttons, but they allow the user to select more than one option. They’re often used for itemized lists and “check all that apply” questions.
- Dropdown lists
Dropdown lists allow the user to select a single item from a collapsible list. They’re often used for very long lists (like selecting a country) or when the option names are too long for a radio button.
Scoreable inputs!
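To illustrate how a scoreable input might carry weights, here's a hypothetical radio-button question sketched in Python as SurveyJS-style JSON, with a tiny scoring lookup. The `score` property and the question itself are assumptions for illustration; the AIA's modified SurveyJS build may attach weights differently:

```python
# A hypothetical scoreable radio-button question, shaped like SurveyJS JSON.
# The "score" property here is an illustrative assumption: the real AIA
# uses a modified SurveyJS build whose exact schema may differ.
question = {
    "type": "radiogroup",
    "name": "decisionReversibility",
    "title": "Are the impacts of the decision reversible?",
    "choices": [
        {"value": "reversible", "text": "Reversible", "score": 1},
        {"value": "difficult", "text": "Difficult to reverse", "score": 2},
        {"value": "irreversible", "text": "Irreversible", "score": 4},
    ],
}

def score_response(question, value):
    """Return the weight attached to the selected choice."""
    for choice in question["choices"]:
        if choice["value"] == value:
            return choice["score"]
    raise ValueError(f"unknown choice: {value!r}")
```

Because the choices are a fixed, enumerable set, every possible answer can be assigned a number in advance—that's what makes the input scoreable.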
Unscoreable Inputs: Free-Text Fields
Free-text fields allow the user to enter any text they want into the questionnaire. Free-text fields allow for much more expression, as they don’t prescribe specific options. But the information entered into free-text fields is also unquantified and unscoreable.
There are two kinds of free-text fields on the AIA platform. Single-line text fields are primarily used to collect discrete pieces of information—like the project title and the respondent’s name. Multi-line text fields are primarily used to let users provide further explanations about a response, or to describe a particular feature of a system.
Free text!
Info buttons
Info buttons appear as an "i" in a circle next to some questions. When clicked, they bring up an overlay providing additional context for the question. These info boxes are the newest feature to be added to the AIA platform.
Import/Export
One feature to note on the top of the first page is the "Upload JSON File" button. This is accompanied by an info box, informing users that the AIA platform and the Government of Canada don’t store any information that users submit.
Exporting responses
Once you begin the questionnaire, an additional "Save" button appears. This allows you to download a JSON file with your responses up to that point.
After an AIA is completed, you can also download responses as a PDF in either English or French.
Importing responses
Clicking "Upload JSON File" allows you to re-upload one of these saved JSON files and load those answers back into the survey. This allows you either to resume your own questionnaire, or to view and modify a questionnaire completed by someone else (like the completed AIAs uploaded to Canada’s Open Government Portal).
Policy
Canada’s AIA is a platform, and platforms need content. Let’s consider the AIA’s intended policy context, and the contents of the current questionnaire.
Directive on Automated Decision-Making
Canada’s AIA primarily exists in relation to the Treasury Board Secretariat’s Directive on Automated Decision-Making. The Directive guides most Canadian federal government agencies and departments' use of ADM (automated decision-making) systems. The Directive doesn't apply to "national security systems" or to the work of most agents of parliament, like Elections Canada and the Office of the Privacy Commissioner.
Directives are a specific kind of policy in the Treasury Board’s policy instrument hierarchy. Directives are meant to explain how a policy objective should be met. They're also mandatory policy within the Canadian government, but they don't create any actionable rights outside of government.
What’s the Treasury Board?
The Treasury Board of Canada Secretariat (TBS for short) is a central branch of the Canadian federal government. TBS’s role is largely the management and administration of the Canadian public service itself. For example, TBS oversees the federal access to information system. TBS is also home to Canada’s Office of the Chief Information Officer, which developed and oversees the Directive on Automated Decision-Making and the AIA.
Defining "ADM"
The Directive defines "Automated Decision System" as:
Includes any technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.
AIAs are often discussed in the context of artificial intelligence (AI). But as the TBS’s definition demonstrates, AIAs often focus on how the algorithm is used, rather than how it works. The TBS definition includes systems that use "AI" methods, but TBS’s focus is mostly on whether or not the system is used to replace human judgment, not what technology it's using to do so.
Coming Into Force and Consultations
The Directive was first formally introduced in 2019. It came into force a year later, in April 2020, and only applies to systems implemented or substantially changed after that date. But both the Directive and Canada’s AIA can trace their origins to 2016, with the drafting of a government white paper on "Responsible Artificial Intelligence in the Government of Canada." A more recent version of that white paper is still available on Google Docs.
Early parts of this policy project were conducted in an exceptionally open process. Early drafts of both the white paper and the AIA were hosted on Google Docs, and they were shared for comment via Twitter and the Government’s GCCollab. The project team also led consultations with internal departments and external experts. In principle, the open drafts were open to public comment, but in practice the process mostly engaged experts. No public consultations were held.
Principles of Administrative Law
The Directive is primarily concerned with ensuring that automated decision-making systems are implemented in a way that is "compatible with core principles of administrative law," including "transparency, accountability, legality, and procedural fairness."
Understanding the Directive’s grounding in administrative law is fundamental to understanding the Directive. These principles are reflected in the design of the AIA platform, and in the AIA’s questionnaire. They also inform the design of the procedures and remedies established by the Directive, like its focus on having a human-in-the-loop (accountability/procedural fairness) and on disclosure of automated decisions (transparency).
It bears noting that (as Teresa Scassa demonstrates) just because the goal is to translate administrative law principles to an automated decision-making context doesn’t mean that these principles can be straightforwardly translated.
A note on periodic reviews
The Directive (and by extension the AIA) undergoes mandatory periodic reviews. The 3rd review of the Directive is underway, and the changes are expected to come into effect in April 2023. This site currently reflects the version of the Directive and AIA which were in effect as of March 1st, 2023.
Process
Enough preamble. How does the Directive itself actually work? What’s the policy system that Canada’s AIA was designed to be a part of? And what role does the AIA play in that system?
We can think about the Directive as having 3 stages: design, implementation, and post-implementation. The design and implementation stages both share the same basic process.
General Procedure
This is the general procedure for using the AIA, as laid out in the Directive. This procedure is repeated during both the design and implementation stages.
- Fill out an AIA
- Determine the impact level
Depending on the final score of the AIA, the platform assigns the system an impact level from 1 to 4. This corresponds with the impact levels defined in Appendix B of the Directive:
- Level 1 (Low): "Level I decisions will often lead to impacts that are reversible and brief."
- Level 2 (Moderate): "Level II decisions will often lead to impacts that are likely reversible and short-term."
- Level 3 (High): "Level III decisions will often lead to impacts that can be difficult to reverse, and are ongoing."
- Level 4 (Very High): "Level IV decisions will often lead to impacts that are irreversible, and are perpetual."
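To make the mapping concrete, here's a sketch in Python of how a score might translate into one of these four impact levels. The quartile bands used here are illustrative assumptions, not the Directive's official thresholds:

```python
def impact_level(current_score, max_score=107):
    """Map a current score onto an impact level from 1 to 4.

    The quartile bands below are illustrative assumptions; the actual
    thresholds are defined by TBS.
    """
    fraction = current_score / max_score
    if fraction <= 0.25:
        return 1  # Low: impacts reversible and brief
    if fraction <= 0.50:
        return 2  # Moderate: impacts likely reversible and short-term
    if fraction <= 0.75:
        return 3  # High: impacts difficult to reverse, ongoing
    return 4      # Very High: impacts irreversible, perpetual
```

The key point is structural: a continuous score gets bucketed into a small number of discrete levels, and it's the level, not the raw score, that triggers requirements.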
Who fills out an AIA?
The AIA is generally completed by the team implementing an ADM system, but the AIA questionnaire is designed to ensure that teams have to consult with other groups within the government—like their department’s legal team and the system’s developers.
- Determine impact level requirements
Each impact level corresponds to a different set of requirements, which are established in Appendix C of the Directive. The requirements are divided into 7 categories:
- Peer review
- Notice
- Human-in-the-loop for decisions
- Explanation requirement
- Training
- Contingency planning
- Approval for the system to operate
Within each category, the specific requirements scale proportionally with the system’s impact level. For example, level 1 and 2 systems don’t require a human-in-the-loop at all, whereas level 3 and 4 systems require "specific human intervention points" throughout the decision-making process, and the final decision for systems at these impact levels must be made by a human.
Note: It isn’t possible to "fail" the algorithmic impact assessment, and the Directive itself doesn’t disallow systems of any kind. The requirements for level 4 systems are quite onerous, however, and they are designed to strongly disincentivize very high risk systems.
- Implement requirements
The final step of the procedure is, of course, to actually implement the Directive’s requirements. This includes specific requirements based on the system’s impact level, as well as several baseline requirements for all systems.
This step changes the most depending on what stage in the development process the system is at (more on that in a second). As we discuss in the Function module, one of the ways that the AIA can function is by encouraging best practices and de-risking a system. If a team doesn’t want to comply with a requirement, they can try to modify the system to lower its impact level instead.
Read the full requirements
The full list of requirements set out by the directive is split between general requirements (Section 6) and impact level specific requirements (Appendix C).
Design Stage
The first application of the Directive’s procedure begins in the design stage of a project. Once the scope and general design of the system are established, an initial AIA needs to be completed before the system goes into production.
The design-stage AIA is not published or made public. Instead, these initial AIAs do two things:
- They guide the implementation of risk mitigation processes, based on responses to the mitigation section of the AIA.
- Based on the impact level, they guide what requirements need to be put in place. Some requirements—like peer review—need to take place before the system goes live, while others—like having a human-in-the-loop—may need to be planned for and built into the system itself.
Implementation Stage
The Directive’s procedure is applied for a second time after the system is built, but before it goes live. Assuming that no major changes have been made to the system, it is unlikely that any new requirements will be introduced during the implementation stage. Instead, the AIA is completed again to validate the design-stage AIA and to reflect any changes to the system during the development process.
The final AIA, completed at the implementation stage, then must be published on Canada’s Open Government Portal.
Post-Implementation Stage
Many of the Directive’s requirements create ongoing obligations. For example, the Directive requires that processes be developed for ongoing monitoring of a system’s compliance and decision outcomes.
The AIA may also need to be updated after the system goes live. Any time that the system’s functionality substantially changes, or the scope of the automated decision changes, the AIA needs to be updated.
Scope of the Directive?
Before applying any of the Directive’s processes and requirements, public servants need to determine if the system in question is in scope of the Directive to begin with.
Broadly speaking, the Directive applies to "any system, tool, or statistical models used to recommend or make an administrative decision about a client" (§5.2). But there are additional limitations to the Directive’s scope:
- The Directive currently only applies to external-facing systems. ADM systems that are only used internally by the government, for the government, are currently exempt.
- The Directive doesn’t apply to "national security systems."
- The Directive doesn’t apply to systems that aren’t in production, like systems operating in test environments.
- The Directive only applies to systems developed or procured after the Directive came into force on April 1, 2020.
AIA Questionnaire
The Directive on Automated Decision-Making provides a framework and procedure for regulating ADMs in the government of Canada. But as we now know, that procedure relies heavily on "impact levels" established by the AIA. So what is the AIA actually assessing, and how are the Directive’s impact levels determined?
Questionnaire Overview
We can break down the AIA questionnaire itself into two sections—risk and mitigation. Each section has several sub-sections, and each sub-section contains one or more questions to which the applicant must provide a response.
Based on the responses provided by the applicant, the system calculates two scores: a "raw impact score" based on the risk section, and a "mitigation score" based on the mitigation section. The raw impact score draws on questions about how the system works and what decision is going to be automated, while the mitigation score measures what practices have been put in place to mitigate those risks.
Read the full description
This module is intended to be a digestible overview of the AIA’s questions and scoring system. If you’re interested in learning more, the Treasury Board has an excellent detailed description on the AIA’s landing page.
Impact and Mitigation Scores
The system calculates the raw impact and mitigation scores based on the value assigned to each question in its section. The "riskier" an answer is, the higher its score. Ultimately, the score assigned to any given question is arbitrary and determined by the Treasury Board.
Reminder: Questions associated with "free-text" inputs can’t contribute to the AIA’s scoring.
The risk section has a total of 48 questions and a maximum possible raw impact score of 107. Within the risk section, the "Data" and "Impact" sub-sections contain the most questions, and accordingly contribute the most to the raw impact score.
The mitigation section has a total of 33 questions, and a maximum section score of 45. There are only two mitigation sub-sections: "Consultations" and "De-risking and mitigation measures." 31 of the 33 mitigation questions are in the latter section.
Calculating Current Score
The raw impact score and mitigation score are used to calculate an overall score. This overall score is referred to as the "current score" in the AIA. The final impact level is based on how close the final "current score" is to the maximum possible score. A higher current score means a higher impact level, which in turn means more stringent requirements (per the Directive).
The current score doesn’t just combine the raw impact score and mitigation score. Instead, the current score is based on the following formula:
- "If the mitigation score is less than 80% of the maximum attainable mitigation score, the current score is equal to the raw impact score."
- "If the mitigation score is 80% or more than the maximum attainable mitigation score, 15% is deducted from the raw impact score to yield the current score."
Question weights
There’s no straightforward way to view the weight assigned to each question. TBS provides a per-sub-section breakdown of these scores here. We can view a full list of weights using the AIA’s JSON to CSV converter. Click here to download a CSV based on the most recent questionnaire.
Risk Section Overview
| Risk Sub-Section | Sub-Section Description | Example Question or Prompt |
|---|---|---|
| Project | Description of the project, motivation for automating the decision, and "high-level risk indicators for the project." | Is the project within an area of intense public scrutiny (e.g. because of privacy concerns) and/or frequent litigation? |
| System | "Capabilities of the system (that is, image recognition, risk assessment)" | Please check which of the following capabilities apply to your system. |
| Algorithm | "Transparency of the algorithm, whether it is easily explained" | The algorithmic process will be difficult to interpret or to explain |
| Decision | "Classification of the decision being automated (that is, health services, social assistance, licensing)" | Does the decision pertain to any of the categories below (check all that apply): |
| Impact | "Duration, reversibility and area impacted (freedom, health, economy or environment)" | Will the system be replacing a decision that would otherwise be made by a human? |
| Data | What type of data is being used, and where the data came from. | Will the system require the analysis of unstructured data to render a recommendation or a decision? |
Mitigation Section Overview
| Mitigation Sub-Section | Sub-Section Description | Example Question |
|---|---|---|
| Consultation | "Internal and external stakeholders consulted, such as privacy and legal experts" | Will you be engaging with any of the following groups? |
| De-risking and mitigation measures | Processes to ensure data quality, procedural fairness, and privacy of personal information. | Will you design and build security and privacy into your systems from the concept stage of the project? |
Function
One way we can approach Canada’s AIA is by thinking about its "functions." What are the mechanisms by which the AIA effects change? How do different mechanisms affect the design, audience, and use of the AIA? How does the AIA contribute to or help achieve a given goal?
Why It Matters
In this context, we can think about "functions" like a hypothesis: a theory of causation, and a way of thinking about both what we’re trying to achieve and (crucially) how we expect to achieve it. If our goal is to make sure that AI systems are trustworthy, and we’ve decided to use an AIA, how do we expect the AIA to help achieve that goal?
The answer is less straightforward than it might seem. There are at least 5 different ways that Canada’s AIA can function within a given context. And as we’ll see in a moment, even a relatively straightforward example—like an AIA in the context of the Directive on Automated Decision-Making—can function in several different ways at the same time.
Having a clear understanding of how we expect the AIA to work is important for two reasons:
- It gives us a way to measure the system’s success. (A key performance indicator, if you will.)
- It gives us a framework for suggesting, designing, and evaluating changes and new features.
Official Functions
There are at least 3 ways that we can think about Canada’s AIA "functioning" in the context of the Directive on Automated Decision-Making.
Defining the Problem Statement
Before considering how the AIA functions to achieve a particular goal, we need to define what that goal is.
Section 4 of the Directive on Automated Decision-Making provides quite a clear objective for the policy. It states:
The objective of this Directive is to ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.
Put more concisely, we can say that the stated goal of the Directive (and by extension the government’s use of the AIA) is to ensure safe, fair, and responsible use of ADMs. How, then, does the AIA help achieve this goal?
Nudge Function
Going sequentially, the first way that the AIA can function is by acting as a "nudge" and encouraging best practices (whatever those may be in a given context). In the Government of Canada’s implementation of the AIA, this largely occurs during the design stage. The design stage is where the most substantial changes to the system can be made, and it’s also the first time in a project’s lifecycle that the AIA is used.
This function occurs in two ways:
- Encouraging consultation
The Government of Canada’s AIA questionnaire is intentionally designed to require consultation outside of the group developing or implementing a particular ADM system. In some cases this is done overtly, like the "consultations" sub-section which asks whether respondents "will be engaging with" either internal or external stakeholders.
In other cases, the scope of knowledge required to respond to a question acts as a more subtle nudge. The "about the data" sub-section of the questionnaire, for example, asks extremely specific technical questions which effectively require discussions between policy makers and developers.
- Rewarding Best Practices
The mitigation score in particular is designed to reward certain practices without making them mandatory. Unlike the impact score, where every scored question adds to or detracts from the total, the mitigation score only helps the final score if the system achieves 80% or more of the maximum possible mitigation score. This creates a strong incentive to adopt not just one, but almost all of the practices endorsed in the mitigation sub-section.
We could also likely consider the nudge effects of gamifying the AIA via the scoring system in general, but that discussion is beyond the scope of this guide…
These cases share two important features:
- In both cases, it is the process of filling out the AIA where the AIA performs its function. The intended audience in this function is the respondents themselves.
- In both cases, the AIA functions by nudging respondents towards certain actions, rather than explicitly mandating them. Even when the nudges are extremely overt, they’re still voluntary.
Fill-in the Blanks: Nudge Function
The AIA will ensure safe, fair, and responsible use of ADMs by promoting consultation and best practices.
Enforcement Function
The second way that the AIA can function is by acting as a measurement tool, which informs formal and enforceable requirements. In the Government of Canada’s implementation of the AIA platform, this is the primary purpose of the scoring system and the corresponding impact levels in the Directive.
In this case, the AIA almost acts like a contract. The respondent describes the system, and then the AIA score determines which set of requirements (see: impact levels) apply. After the AIA is completed, it still functions as a reference for what standards should be enforced for a given system. The AIA serves as an agreement between the respondent and an oversight body as to what standards the system should be held to.
This function occurs after an AIA is completed, and it necessarily involves some kind of oversight or regulatory body. Within the Directive on Automated Decision-Making, enforcement and non-compliance are managed by the Treasury Board. Potential consequences and enforcement actions are determined by the Treasury Board’s Framework for the Management of Compliance.
Fill-in the Blanks: Enforcement Function
The AIA will ensure safe, fair, and responsible use of ADMs by measuring risk and proportionally scaling enforcement.
Accountability Function
The third way that the AIA can function is as an object that serves as a focal point for discussion and critique of the underlying ADM system being assessed. This function is enabled by two features of the Government of Canada’s AIA system:
- The fact that the system produces a PDF as an output
By generating a PDF, the completed AIA becomes a "thing" in the world. We can imagine a version of Canada’s AIA which simply outputs a score rather than a document of the completed questionnaire. The "thing-ness" of completed AIAs allows them to have effects after their completion.
- The fact that completed AIAs are published and public
A "thing" isn’t very useful without someone to use it. The Directive requires implementation-stage AIAs to be published on Canada’s Open Government Portal. And it’s the publication of completed AIAs that realizes the potential of the AIA’s output.
Fill-in the Blanks: Accountability Function
The AIA will ensure safe, fair, and responsible use of ADMs by engaging and informing the public.
A Note on Conflicting Functions
The mechanisms which contribute to the enforcement and accountability functions also draw attention to how different functions can conflict with each other.
Free-text fields in the AIA questionnaire are an important tool for accountability, but they don’t contribute to the AIA’s score. If new questions and areas of inquiry are only added to the AIA as free-text fields, they can’t contribute to the AIA’s enforcement function.
Potential Functions and Future Directions
In the context of the Directive on Automated Decision-Making, the AIA functions in these three ways: nudges, enforcement, and accountability. But there are other ways that we can imagine the AIA functioning.
For example, Data & Society’s excellent work on AIAs draws attention to the distinction between "impacts" and "harms." Impacts are what an AIA measures, but harms are what is felt on the ground. Ideally, the relationship between impact and harm in an AIA should be as close as possible, but that isn’t always the case. How, then, might we re-imagine or repurpose Canada’s AIA to have a harm-reduction function?
We can also imagine a consultative function for Canada’s AIA. Could the AIA be used to engage publics at the design stage? Or perhaps we could flip the script and allow publics to design questionnaires and scoring sheets for policy makers to use?
What functions can you imagine for Canada’s AIA?
The AIA will ________ by ________.
Acknowledgments
About the Author
Nick Gertler is a graduate student at Concordia University whose work focuses on the policy and politics of AI in Canada. His thesis research focuses on algorithmic impact assessments and community-led AI governance. He is a member of the Machine Agencies working group at the Milieux Institute.
This guide was developed in collaboration with the Media Governance After AI project.
This guide draws on research supported by the Social Sciences and Humanities Research Council.
This guide draws on research funded in part by a master's training scholarship from the FRQSC.