User Guide: Ethical AI Canvas


Unfamiliar with the Ethical AI Canvas and how it can benefit your organisation?

Inspired by the Lean Canvas model, the Ethical AI Canvas is a practical template to facilitate ethical considerations at every stage of AI development. It comprises nine core sections, each focusing on a different aspect of ethical AI, from data ethics and transparency to fairness and environmental impact. The canvas serves as both a guide and a documentation tool, ensuring that your AI project is ethical, transparent, and aligned with best practices from inception to deployment.

“The Ethical AI Canvas: A Lightweight Entry Point for Considering Ethical AI”


Getting Started

Download the Ethical AI Canvas


ⓘ Supporting Artefacts

Deepen and enrich your understanding and application of Ethical AI principles by developing the suggested ‘Supporting Artefacts’. These artefacts can, in turn, help underpin your Ethical AI Canvas, which is a high-level overview of your approach to addressing the ethics surrounding a specific innovation involving AI or machine learning.


1. Problem Identification

Guidance: Clearly define the problem your AI system aims to solve and assess any ethical implications. Knowing the problem inside out will help you anticipate ethical dilemmas.

Example

An AI system that predicts criminal behaviour based on social media activity raises ethical concerns around privacy and discrimination.

Checklist

  • Define the problem in one sentence: A concise problem statement guides the project’s focus.
  • Evaluate ethical implications: Identify any ethical concerns and the groups potentially affected.

Supporting Artefacts

  • Problem Statement Document: A deeper dive into the problem you’re solving, explaining its scope and ramifications.
  • Ethical Review: A summary of the ethical landscape, risks, and proposed mitigations.
    • Ensure research and development using AI delivers societal benefit
    • Scientifically understand and minimise any potential harm to society

Start by creating a simple Problem Statement document that expands on your one-sentence problem definition. Then, consult with ethical experts or champions within or outside your organisation to produce an Ethical Review. Both documents should be periodically reviewed and updated.


2. Data Ethics

Guidance: Evaluate your data sources, how data is collected, and any potential biases. Ethical use of data is fundamental in AI, given the risk of perpetuating existing biases.

Example

If you’re developing a facial recognition system and your data primarily consists of one ethnic group, this can lead to significant bias.

Checklist

  • Data sources: Identify the origin of your data and ensure it’s obtained ethically.
  • Data collection methods: Specify how data is gathered—automatically, through user input or otherwise.
  • Data bias assessment: Check for underrepresentation or overrepresentation of certain groups (see the sketch after this list).
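
Below is a minimal sketch of one such assessment, assuming a pandas DataFrame with a hypothetical `ethnicity` column and externally sourced reference proportions (for example, from census data); the column name and figures are illustrative, not prescriptive.

```python
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare group shares in the dataset against reference population shares."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({"group": group, "dataset_share": share,
                     "reference_share": expected, "gap": share - expected})
    return pd.DataFrame(rows)

# Illustrative usage: group "A" is heavily overrepresented relative to the
# assumed reference population, flagging a potential source of bias.
df = pd.DataFrame({"ethnicity": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})
print(representation_gap(df, "ethnicity", {"A": 0.6, "B": 0.25, "C": 0.15}))
```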

Supporting Artefacts

  • Data Governance Policy Framework: Guidelines and rules for data management and usage.
  • GDPR Compliance Checklist: A list of GDPR rules applicable to your data and how your system adheres to them.

Produce a Data Governance Policy Framework that details your commitment to ethical data practices. Complement this with a GDPR Compliance Checklist. Regularly review these artefacts for compliance.


3. Fairness and Equity

Guidance: Ensuring that your AI system treats all user groups fairly is crucial for ethical and business reasons.

Example

A loan approval AI system must not favour a specific gender or race.

Checklist

  • Error rate comparisons: Check that the system performs consistently across different demographic groups.
  • Fairness metrics: Use statistical methods to measure and quantify fairness (see the sketch after this list).
  • Distribution of errors: Ensure the system is tested thoroughly, pre-release, on sample data representative of the inputs the system will likely receive when live. Continue to monitor post-release.
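
A minimal sketch of two such checks follows, assuming NumPy arrays of binary predictions, true labels, and a demographic group label per record. The statistical parity difference shown matches the fairness metric listed in section 9.1; the synthetic data is purely illustrative.

```python
import numpy as np

def statistical_parity_difference(y_pred, groups, a, b):
    """Difference in positive-prediction rates between groups a and b."""
    return y_pred[groups == a].mean() - y_pred[groups == b].mean()

def error_rate_by_group(y_true, y_pred, groups):
    """Per-group error rate, for comparing performance across demographics."""
    return {g: float((y_true[groups == g] != y_pred[groups == g]).mean())
            for g in np.unique(groups)}

# Illustrative usage with synthetic data:
rng = np.random.default_rng(0)
groups = rng.choice(["x", "y"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
print(statistical_parity_difference(y_pred, groups, "x", "y"))
print(error_rate_by_group(y_true, y_pred, groups))
```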

Supporting Artefacts

  • Fairness Audit and Remediation Plan: Evaluate how well the system adheres to your fairness metrics and develop a strategy for improving fairness where shortcomings are discovered.

Create a Fairness Audit to analyse system fairness in depth. If issues are found, develop a Remediation Plan outlining the steps to correct them.


3.1 Mitigating Bias

Practical Steps

Mitigating the risk of bias in AI and data analytics tools is imperative. Creating code, applications, and tools with accessibility and inclusivity at their core ensures that technology serves a broad spectrum of society without discrimination. Below is a non-exhaustive breakdown of steps and mechanisms that help to address bias from inception to deployment and beyond:

1. Education and Awareness

Stay abreast of the latest research, methodologies, and best practices concerning AI and data analytics bias mitigation.

2. Diverse Data Collection

Strive for a dataset representing a broad population spectrum to ensure inclusivity. Analysing the class distribution in the data can provide insights into whether the data represents the population it is meant to serve.

3. Transparent Data Sources

Maintain transparency about the data sources and allow external audits to identify and rectify biases.

4. Bias Assessment and Mitigation Tools

Leverage tools and frameworks to identify, measure, and mitigate bias at various data processing and model training stages.

5. Multi-disciplinary Teams

Compose teams with diverse expertise, including technical, cultural, demographic and ethical backgrounds, to review and advise on bias mitigation strategies.

6. Testing and Validation

Conduct thorough testing to identify biases and validate the effectiveness of bias mitigation strategies before deployment. This includes reviewing the class distribution to ensure the model performs well across all statistical classes.
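
As a minimal sketch, assuming scikit-learn, the per-class report below breaks precision and recall out by class, surfacing weaknesses that a single accuracy figure would hide; the synthetic imbalanced dataset stands in for your own test data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic 90/10 imbalanced dataset as a stand-in for real data.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-class precision, recall, and F1 expose imbalance-driven weaknesses.
print(classification_report(y_test, model.predict(X_test)))
```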

7. Continuous Monitoring and Feedback Loops

Establish mechanisms for ongoing monitoring of systems for biases and incorporate real-time feedback loops to improve and adapt continually.

8. Ethics Committees and Review Boards

Set up internal ethics committees or engage external review boards to assess and advise on bias mitigation strategies, nurturing a culture of ethical considerations.

9. Regulatory Compliance and Industry Standards

Ensure compliance with legal and industry standards concerning fairness, bias, and discrimination while also contributing to the development of such standards.

10. Public Reporting and Transparency

Share methodologies, findings, and actions taken to mitigate bias with the public and stakeholders, maintaining a high level of transparency about the limitations and potential biases within the technology.

11. Engagement with the Wider Community

Collaborate with the broader community, including other businesses, academia, and advocacy groups, to share knowledge and work collectively on addressing bias.

12. Inclusive Coding Practices

Inclusive coding practices aim to make technology more respectful and welcoming for all users by eliminating non-inclusive language and assumptions from codebases. This involves replacing problematic terms, such as “master” and “slave,” with more inclusive alternatives like “primary” and “secondary.” Additionally, system commands with violent connotations, such as “kill,” should be aliased to neutral terms like “stop.” Error messages should be crafted sensitively, avoiding any language that could be considered offensive. Adopting these practices promotes diversity, equity, and inclusivity in the tech industry.

Beyond internal code, you can review libraries and dependencies for non-inclusive language and maintain a mindful and respectful tone in code comments and interpersonal communications.

Ideally, you should integrate inclusive coding practices into your continuous manual and automated verification processes to ensure that the data and the code remain free from bias.
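
Below is a minimal sketch of such an automated check that could run as a CI step; the term list and suggested replacements are illustrative assumptions to adapt to your organisation’s own guidance.

```python
import re
import sys
from pathlib import Path

# Illustrative term list; extend to match your organisation's guidance.
REPLACEMENTS = {
    r"\bmaster\b": "primary",
    r"\bslave\b": "secondary",
    r"\bwhitelist\b": "allowlist",
    r"\bblacklist\b": "denylist",
}

def lint_file(path: Path) -> list[str]:
    """Return one warning per non-inclusive term found, with line numbers."""
    warnings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, suggestion in REPLACEMENTS.items():
            if re.search(pattern, line, re.IGNORECASE):
                warnings.append(f"{path}:{lineno}: consider '{suggestion}'")
    return warnings

if __name__ == "__main__":
    findings = [w for p in sys.argv[1:] for w in lint_file(Path(p))]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # non-zero exit fails the CI step
```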


Bias Arising in the Data

The following biases arise from how data is collected, selected, sorted, cleaned and used in the training of machine learning models. Identifying and mitigating these biases is crucial to ensure that AI models and data analytics tools produce fair, accurate, and reliable outcomes. Addressing biases in the data is a fundamental step towards achieving ethical AI and machine learning solutions, particularly in sensitive fields like MedTech or LegalTech, where biases could have profound implications.

1. Selection Bias

Arises when the data collected or selected for a project does not represent the population or phenomena it’s supposed to represent.

2. Sampling Bias

A form of selection bias where the data collected is skewed due to the method of sampling used.

3. Historical Bias

Occurs when historical prejudices, stereotypes, or social norms are reflected in the data.

4. Measurement Bias

Arises from faulty data collection instruments or human error during data collection, leading to systematic distortion.

5. Confirmation Bias

Data might be collected or selected in a way that confirms pre-existing beliefs or hypotheses.

6. Exclusion Bias

Occurs when certain groups or data points are excluded from the data, intentionally or unintentionally.

7. Observer Bias

The observer’s presence might influence the behaviour of the observed subjects, leading to biased data.

8. Overfitting or Underfitting

Overfitting occurs when the model learns the detail and noise in the training data to the extent that it performs poorly on new data. Underfitting occurs when the model is too simple to handle the complexity of the data.

9. Class Imbalance

Occurs when the classes in the target variable are not represented equally or nearly equally (see the detection sketch after this list).

10. Non-response Bias

Arises when the individuals who participate in the study differ significantly from those who do not.

11. Recall Bias

Occurs when participants do not remember past events accurately.

12. Survivorship Bias

Arises when the data only includes survivors or successful individuals or processes, thus ignoring failures or outliers.
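
As referenced under item 9, below is a minimal sketch for detecting class imbalance, assuming a pandas Series of target labels; the 20% threshold is an illustrative assumption rather than a standard.

```python
import pandas as pd

def check_imbalance(labels: pd.Series, min_share: float = 0.2) -> pd.Series:
    """Flag classes whose share of the data falls below `min_share`."""
    shares = labels.value_counts(normalize=True)
    for cls, share in shares.items():
        if share < min_share:
            print(f"Warning: class '{cls}' is only {share:.1%} of the data")
    return shares

# Illustrative usage: the 'deny' class is badly underrepresented.
print(check_imbalance(pd.Series(["approve"] * 950 + ["deny"] * 50)))
```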


4. Transparency

Guidance: Transparency helps build trust among users and stakeholders. This involves clear communication about how the AI system makes its decisions.

Example

A chatbot for mental health should explain how it generates responses and stores user data.

Checklist

  • User-facing explanations: Ensure the user understands the system’s outputs.
  • Algorithmic transparency: Publish or explain the logic, within legal and intellectual property limits, behind the AI system’s decisions.

Supporting Artefacts

  • System Guide: A user-friendly explanation of how the system works.
  • Release Notes: Information about updates, including those affecting:
    • Transparency
    • Fairness and equity
    • Data ethics, privacy
    • Accessibility
    • Environmental impact

Start with a System Guide accessible to non-experts, followed by regularly updated Release Notes that clarify and add context around any changes.


5. Accountability

Guidance: Clearly defining responsibility is crucial. This involves understanding who takes ownership if the system fails or produces erroneous outputs.

Example

For a self-driving car, clear lines of accountability must be established for different scenarios.

Checklist

  • Responsible parties: Identify who is ultimately accountable for the AI system’s actions, outcomes or impacts.
  • Escalation paths: Define the chain of escalation in case of errors or issues and have a pre-rehearsed and pre-discussed remediation plan based on likely scenarios.
  • Feedback channels: Identify how users will provide pre- and post-release feedback and how the development and business teams will effectively collect, make sense of (e.g. sort, visualise, prioritise) and act on this valuable information.

Supporting Artefacts

  • RACI Matrix: A matrix specifying who is Responsible, Accountable, Consulted, and Informed during AI system development and for its performance when live.
  • Decision-making and Remediation Process Map: A flowchart detailing decision-making processes and pre-agreed remediation decisions.

Use a RACI Matrix to allocate responsibility for each aspect of the AI system. Complement this with a Decision-making and Remediation Process Map that outlines procedures for critical decisions.


6. User Consent and Privacy

Guidance: Consent and privacy are essential when users provide data or when the AI system handles sensitive information.

Example

A health diagnosis app needs explicit user consent to collect and use medical data.

Checklist

  • Consent mechanisms: Outline the methods for securing and documenting user consent, rectifying inaccuracies in personal data, and facilitating consent withdrawal.
  • Data storage and encryption methods: State how user data will be secured.

Supporting Artefacts

  • Privacy Policy: A public document detailing how user data will be used and protected, compliant with relevant standards such as GDPR or CCPA.

Draft a Privacy Policy suitable for AI and make it easily accessible to users. Complement this with relevant compliance documents, such as GDPR or California Privacy documentation.


7. Accessibility


Guidance: Making your AI system accessible to as many users as possible, including those with disabilities, is an ethical and practical concern.

Example

A voice-activated system should also offer a text-based interface for deaf and hard-of-hearing people.

Checklist

  • Accessibility features: List all the system’s accessibility features, identify features which have no accessible option and develop a committed plan to address accessibility gaps.
  • Compliance with accessibility standards: Are you meeting guidelines like WCAG 2.2?
  • Inclusive access: Evaluate potential hurdles to access such as costs or technological barriers that could limit the reach and inclusivity of your AI system, and plan for ways to mitigate these challenges.

Supporting Artefacts

  • Accessibility Audit: An evaluation of the system against established accessibility standards.
  • User Guide for Accessibility Features: Instructions on how to use the system’s accessibility features.

Perform an Accessibility Audit to evaluate your system’s accessibility features. Produce a User Guide that walks through each accessibility feature in detail.


8. Environmental Impact

Guidance: Assess the environmental impact of your AI system, including energy usage and carbon footprint.

Example

A data centre running large-scale AI algorithms needs to monitor its energy consumption.

Checklist

  • Energy usage: Quantify the system’s energy consumption (see the measurement sketch after this list).
  • Carbon footprint: Measure the system’s carbon emissions.
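
One way to quantify both items is a software emissions tracker. Below is a minimal sketch assuming the third-party codecarbon package (`pip install codecarbon`); the workload function is a stand-in for your own training or inference job.

```python
from codecarbon import EmissionsTracker

def run_workload():
    # Stand-in for your own training or inference workload.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="ethical-ai-canvas-demo")
tracker.start()
try:
    run_workload()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```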

Supporting Artefacts

  • Environmental Impact Summary: A comprehensive document detailing the AI system’s environmental impact.
  • Carbon Offset Plan: Steps to offset or mitigate the system’s environmental impact.

Prepare an Environmental Impact Summary that breaks down all aspects of the AI system’s environmental impact. Create a Carbon Offset Plan to address any negative impacts.


9. Monitoring and Auditing

Guidance: Ongoing monitoring ensures the system maintains its ethical standing. This includes tracking key ethical metrics and regular audits.

Example

A predictive policing system needs regular audits to assess biases and inaccuracies.

Checklist

  • Key ethical metrics: Track metrics such as error rates and energy consumption.
  • Audit schedule: A timetable for regular system reviews.

Supporting Artefacts

  • Monitoring Dashboard: A real-time dashboard for tracking key metrics.
  • Audit Reports: Detailed reports outlining findings from each audit.

Create a Monitoring Dashboard to monitor key metrics. Schedule regular audits and produce Audit Reports after each, summarising the findings and proposing actions.


9.1 Metrics

When implementing, monitoring, and auditing AI systems, it’s crucial to have a set of metrics that help evaluate the system’s performance, fairness, transparency, and other crucial aspects. Here is a comprehensive list of metrics to monitor, measure, and audit your organisation’s AI innovations.

1. Performance Metrics

  • Accuracy: Measure of correct predictions among the total number of cases.
  • Precision: Proportion of true positive predictions among all positive predictions.
  • Recall (Sensitivity): Proportion of true positive predictions among all actual positives.
  • F1 Score: Harmonic mean of precision and recall, providing a balance between the two.
  • Area Under ROC Curve (AUC-ROC): Represents the likelihood of the model distinguishing between a positive and a negative class.
  • Log-Loss: Measures the performance of a classification model outputting probabilities.
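
As a minimal sketch, the metrics above can be computed with scikit-learn, assuming binary labels and predicted probabilities; the data and 0.5 decision threshold are illustrative.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, log_loss)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_prob = [0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6]  # model probabilities
y_pred = [int(p >= 0.5) for p in y_prob]           # assumed 0.5 threshold

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_prob))
print("Log-loss :", log_loss(y_true, y_prob))
```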

2. Fairness Metrics

  • Disparate Impact: Measures the ratio of favourable outcome rates between different groups.
  • Equal Opportunity Difference: Difference in true positive rates among groups.
  • Statistical Parity Difference: Difference in the probability of positive decisions among groups.

3. Transparency and Explainability Metrics

  • Feature Importance: Indicates the contribution of each feature to the model prediction.
  • Local Interpretable Model-agnostic Explanations (LIME): Provides local explanations for individual predictions.
  • SHAP Values (SHapley Additive exPlanations): Provides a measure of the impact of each feature on the prediction.
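
Below is a minimal sketch of generating SHAP values, assuming the third-party shap package (`pip install shap`) and a tree-based model; the synthetic dataset is illustrative only.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # exact, fast explainer for tree models
shap_values = explainer.shap_values(X[:10])  # per-feature impact on each prediction
print(shap_values)
```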

4. Bias Detection Metrics

  • Mean Difference: Mean score difference between different groups.
  • Median Difference: Median score difference between different groups.
  • Cohort Analysis: Examining how different cohorts are affected by the model.

5. Robustness Metrics

  • Adversarial Robustness: Measures the model’s performance against adversarial attacks.
  • Model Uncertainty: Evaluates the model’s confidence in its predictions.

6. Privacy Metrics

  • Differential Privacy: Measures the privacy safeguarding in data analysis and machine learning algorithms.
  • Re-Identification Risk: The risk of identifying individuals within a dataset.

7. Maintainability Metrics

  • Technical Debt: Measures the cost of future changes and improvements in the system.
  • Code and Architecture Maintainability: Evaluates the ease of maintaining and updating the system.

8. Monitoring and Auditing Metrics

  • Model Drift: Monitors the change in model performance over time.
  • Data Drift: Monitors the change in data distribution over time.
  • Anomaly Detection: Identifies unusual patterns that do not conform to expected behaviour.
  • Version Control: Tracks changes in model version, data, and configurations.
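
As a minimal sketch of a data drift check, the Population Stability Index (PSI) below compares a live feature distribution against its training-time baseline; PSI itself and the 0.2 alert threshold are common industry conventions, assumed here rather than prescribed by the canvas.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)           # avoid division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)    # training-time feature distribution
live = rng.normal(0.5, 1.0, 10_000)        # shifted live distribution
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.2 commonly triggers investigation
```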

9. Economic Metrics

  • Cost-Benefit Analysis: Evaluates the economic value and cost of the AI system.
  • Return on Investment (ROI): Measures the gain or loss generated relative to the amount invested.

10. User Experience and Acceptance Metrics

  • User Satisfaction: Measures the satisfaction and acceptance of the end-users.
  • Usability: Evaluates the ease of use of the AI system.

11. Regulatory Compliance Metrics

  • Compliance Rate: Measures adherence to laws, regulations, and standards.
  • Incident Response Time: Measures the time taken to respond to and resolve compliance incidents.

User Guide: Ethical AI Canvas by Mat Wade is licensed under CC BY-NC-SA 4.0
