Building Responsible AI Algorithms

Chapter 4 Fairness

Another mitigation strategy is adversarial sampling, in which samples are evaluated for discrimination and bias in the model's outputs and then used to retrain the model, ensuring that discriminatory or unfair outputs are excluded.
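As a minimal sketch of this loop (not the book's own implementation), the following Python snippet trains a scikit-learn classifier on synthetic placeholder data, flags samples judged unfair by a hypothetical flag_unfair audit function, and retrains with those samples excluded:

import numpy as np
from sklearn.linear_model import LogisticRegression

def flag_unfair(y_pred, sensitive):
    # Hypothetical audit: flag positive predictions in any group whose
    # selection rate is far above the overall rate (placeholder logic).
    flags = np.zeros(len(y_pred), dtype=bool)
    for group in np.unique(sensitive):
        mask = sensitive == group
        if y_pred[mask].mean() > y_pred.mean() + 0.2:
            flags |= mask & (y_pred == 1)
    return flags

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # features
y = rng.integers(0, 2, size=200)                # labels
sex = rng.choice(["female", "male"], size=200)  # sensitive attribute

model = LogisticRegression().fit(X, y)
unfair = flag_unfair(model.predict(X), sex)

# Retrain with the flagged (unfair) samples excluded from the data.
model = LogisticRegression().fit(X[~unfair], y[~unfair])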

Fairness Tools

There are several tools that you can use to measure fairness. Some of these are described here:

• IBM’s AI Fairness 360 tool,109 a Python toolkit.
• Google’s Learning Interpretability Tool,110 which provides interpretability into natural language processing model behavior and helps identify unintended biases.
• Microsoft’s Fairlearn,111 an open source toolkit designed to assess and improve the fairness of AI systems.
• PwC’s Responsible AI Toolkit,112 which covers the various parts of responsible AI, ranging from AI governance to policy and risk management, including fairness.
• Pymetrics’ audit-AI tool,113 which is used to measure and mitigate the effects of potential biases in training data.
• Synthesized’s Fairlens,114 an open source library designed to tackle fairness and bias issues in ML.
• Fairgen,115 a toolbox that provides data solutions with respect to fairness.


Let’s take a sneak peek at one of these tools: Fairlearn by Microsoft. Fairlearn is an open source toolkit for analyzing fairness issues in ML systems. It examines the negative impacts and harms of models on different groups of people, with clear focus on attributes such as race, sex, age, and disability status. Its metrics and dashboard provide insights into the specific groups of people who could be negatively impacted by a model. Fairlearn also includes “unfairness” mitigation algorithms that can help reduce unfairness in classification and regression models. Fairlearn’s two components, an interactive dashboard and a set of mitigation algorithms, are aimed at helping developers navigate tradeoffs between fairness and model performance: fairness metrics appropriate to the developer’s setting are selected, and the most suitable unfairness mitigation algorithm is chosen for the developer’s needs. Here are more details about its two components:

• The interactive dashboard: This has two general uses, (a) to help users assess groups that might be negatively impacted by a model and (b) to compare multiple models in relation to fairness and performance. As an example, Fairlearn’s classification metrics include “demographic parity, equalized odds, and worst-case accuracy rate” (see the first sketch after this list).

• The unfairness mitigation algorithms: These come in two varieties, postprocessing and reduction algorithms. The postprocessing algorithm evaluates trained models and adjusts their predictions to satisfy the constraints specified by the selected fairness metric while working toward maximizing the model’s performance, for example, ensuring a good accuracy rate. The reduction algorithms “treat any standard classification or regression algorithm as a black box” and iteratively re-weight the training data and retrain the model until the constraints implied by the selected fairness metric are satisfied while model performance is maximized (see the second sketch after this list).
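To make the dashboard’s metrics concrete, here is a minimal assessment sketch using Fairlearn’s MetricFrame and demographic_parity_difference; the synthetic data and the “sex” attribute are illustrative placeholders, not an example from the tool’s documentation:

import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)           # ground-truth labels
y_pred = rng.integers(0, 2, size=200)           # model predictions
sex = rng.choice(["female", "male"], size=200)  # sensitive attribute

# Accuracy computed overall and per group, plus the worst-case gap.
mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.overall)       # overall accuracy
print(mf.by_group)      # accuracy for each group
print(mf.difference())  # largest gap between groups

# Demographic parity: difference in selection rates across groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=sex))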
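And here is a minimal sketch of the two mitigation routes on the same kind of placeholder data, using ThresholdOptimizer as the postprocessing algorithm and ExponentiatedGradient with a DemographicParity constraint as one of the reduction algorithms:

import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # features
y = rng.integers(0, 2, size=200)                # labels
sex = rng.choice(["female", "male"], size=200)  # sensitive attribute

# Postprocessing: adjust a trained model's predictions to satisfy
# the demographic parity constraint.
base = LogisticRegression().fit(X, y)
post = ThresholdOptimizer(estimator=base,
                          constraints="demographic_parity",
                          prefit=True,
                          predict_method="predict_proba")
post.fit(X, y, sensitive_features=sex)
y_post = post.predict(X, sensitive_features=sex)

# Reduction: treat the estimator as a black box and retrain it
# iteratively under the same constraint.
red = ExponentiatedGradient(LogisticRegression(),
                            constraints=DemographicParity())
red.fit(X, y, sensitive_features=sex)
y_red = red.predict(X)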
