Towards Trustworthy Artificial Intelligent Systems


4 Exclusion by Design and Discriminatory Use

Recruitment AI risks inadvertently but adversely impacting job seekers with disabilities via two major routes: biased systems and discriminatory processes.

4.1 Biased Systems

The design of an AI system involves first specifying an objective and then specifying how the system achieves and optimises that objective. When the objective is specified inadequately, the system's assessments may produce unintended outcomes.

Unwanted biases, that is, biases within an AI system that treat people negatively or adversely on the basis of protected characteristics or other features of their identity, raise serious risks of discrimination. It is therefore critical to identify and mitigate these potentially harmful biases.

Unwanted biases relevant to marginalised people, including people with disabilities, are primarily introduced through historical hiring decisions. Since people with disabilities are already twice as likely to be unemployed, they are consequently less likely to be represented in data on past successful employees. These biases may enter systems through two channels: the algorithmic model and the training data.

The algorithmic model is the mathematical process by which an AI system performs a given function. Designing this model involves (1) defining the objective or problem the developer wants the system to solve and (2) selecting the parameters that allow the system to operate at an optimal level [18].
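
To make these two design steps concrete, the sketch below shows them in miniature: a hypothetical objective (a single "hireability" score the system optimises for) and a set of developer-chosen parameters (feature weights). All feature names and weights here are illustrative assumptions, not drawn from any real screening system.

    # A minimal, hypothetical sketch of an algorithmic model:
    # (1) the objective is a single score of how "hireable" a candidate is;
    # (2) the parameters are the weights the developer attaches to features.
    # Feature names and weights are illustrative assumptions only.

    WEIGHTS = {                  # (2) parameters chosen by the developer
        "years_experience": 0.5,
        "relevant_degree":  1.0,
        "referral":         0.8,
    }

    def hireability_score(candidate: dict) -> float:
        """(1) The objective: the quantity the system optimises."""
        return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

    candidate = {"years_experience": 4, "relevant_degree": 1, "referral": 0}
    print(hireability_score(candidate))  # 3.0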

A scenario in which this may occur is an automated CV screener programmed to predict the best-qualified candidate based on an exclusionary parameter, such as having attended a top-tier university. The prestige of an institution may be one factor in an employee's success, but such a parameter also disadvantages historically marginalised people, including people with disabilities, who already face systemic barriers to equal representation in prestigious institutions.
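
As a purely hypothetical illustration of how a single parameter can encode exclusion, the sketch below adds a heavily weighted "top_tier_university" feature to a scoring model of the kind above; two otherwise identical candidates then receive different scores solely because of that one feature. The names and weights are assumptions for illustration only.

    # Hypothetical illustration: one exclusionary parameter is enough
    # to systematically downrank otherwise identical candidates.
    WEIGHTS = {
        "years_experience":    0.5,
        "top_tier_university": 2.0,   # the exclusionary parameter
    }

    def score(candidate: dict) -> float:
        return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

    a = {"years_experience": 5, "top_tier_university": 1}
    b = {"years_experience": 5, "top_tier_university": 0}  # equally experienced
    print(score(a), score(b))  # 4.5 2.5 -- a 2-point gap from one feature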

The training data is the preliminary data from which a system learns how to apply the model and produce results in application [18]. The model can only perform as well as the training data supplied to it. Bias may be introduced at multiple points before learning begins, from decisions made at data collection, to data cleaning, to the selection of data for training.
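
The sketch below is a toy illustration, under assumed field names, of one such selection decision: training only on records of past hires means that whole groups of applicants, including those rejected under possibly biased past practices, never appear in the data the model learns from.

    # Toy illustration (assumed field names): selecting training data
    # from past hiring outcomes reproduces the biases of those outcomes.
    applicants = [
        {"cv": "A", "hired_historically": True},
        {"cv": "B", "hired_historically": True},
        {"cv": "C", "hired_historically": False},  # e.g. rejected under past,
        {"cv": "D", "hired_historically": False},  # possibly biased, practices
    ]

    # Data selection step: only past hires become training examples,
    # so profiles like C and D are invisible to the learned model.
    training_set = [a for a in applicants if a["hired_historically"]]
    print([a["cv"] for a in training_set])  # ['A', 'B']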

Building on the previous scenario, problems may arise with an automated CV screener trained on data that did not include the profiles of successful yet historically marginalised employees. When the system encounters information in a CV that it has not previously seen, it may be more likely to reject the candidate, because these novel or “unusual” features do not fit the prescribed collection of features modelled to represent the ‘ideal’ employee. These novel features may be innocuous, but they may also be indirectly related to the experience of being disadvantaged in the job market.
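
The final sketch, again purely illustrative with assumed feature names, shows one mechanism behind this behaviour: a model that scores CVs by their similarity to a learned ‘ideal’ feature profile assigns low scores to candidates whose features it has never seen, regardless of their merit.

    # Purely illustrative: scoring by similarity to a learned 'ideal'
    # profile penalises any feature the training data never contained.
    IDEAL = {"top_tier_university", "unbroken_work_history", "referral"}

    def similarity(cv_features: set) -> float:
        """Fraction of the learned 'ideal' profile a CV matches."""
        return len(cv_features & IDEAL) / len(IDEAL)

    typical = {"top_tier_university", "unbroken_work_history", "referral"}
    novel   = {"community_college", "career_gap_for_health", "referral"}
    print(similarity(typical))  # 1.0
    print(similarity(novel))    # ~0.33 -- novel features simply don't count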


