A Citizen's Guide to Artificial Intelligence by John Zerilli
Author: John Zerilli
Language: eng
Format: epub, pdf
Tags: artificial intelligence; machine learning; transparency; bias; responsibility; liability; meaningful human control; privacy; autonomy; algorithms in government; employment; oversight and regulation; regression; classification; decision trees; neural networks; Bayesian models; supervised learning; transparency; explainability; accessibility; accountability; black box; intentional stance; practical reason; bias; discrimination; prejudice; heuristics; predictive policing; fairness; compatibility; accuracy; strict liability; fault-based liability; insurance; moral agency; automation bias; automation complacency; human in the loop; meaningful human control; human factors; privacy; GDPR; right to be forgotten; inferential analytics; purpose limitation; autonomy; Facebook; political advertising; manipulation; democracy; administrative; soft law; hard law; self-regulation; oversight body; auditing law; public sector; procurement
Publisher: MIT Press
Automation has a significant impact on situation awareness.17 For example, we know that drivers of autonomous vehicles are less able to anticipate takeover requests and are often ill prepared to resume control in an emergency.18
The third difficulty relates to the currency of human skills (the "currency problem"). Here is Bainbridge again: "Unfortunately, physical skills deteriorate when they are not used. … This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one."19
The fourth and final difficulty, and the one we've chosen to focus on in this chapter, relates to the attitudes of human operators in the face of sophisticated technology (the "attitudinal problem"). Except for a few brief remarks,20 this problem wasn't really addressed in Bainbridge's paper.21 It has, however, been the subject of active research in the years since.22 Here the problem is that as the quality of automation improves and the human operator's role becomes progressively less demanding, the operator "starts to assume that the system is infallible, and so will no longer actively monitor what is happening, meaning they have become complacent."23 Automation complacency often co-occurs with automation bias, which arises when human operators "trust the automated system so much that they ignore other sources of information, including their own senses."24 Both complacency and bias stem from overtrust in automation.25
What makes each of these problems especially intriguing is that each gets worse as automation improves. The better a system gets, and the more adept it becomes at handling complex information at ever greater speeds, the more difficult it will be for a human supervisor to maintain an adequate level of engagement with the technology to ensure safe resumption of manual control should the system fail. When it comes to the current ("SAE Level 2") fleet of autonomous vehicles* that allow the driver to be hands- and feet-free (but not mind-free, because the driver still has to watch the road), legendary automotive human factors expert Neville Stanton expressed the conundrum wryly: "Even the most observant human driver's attention will begin to wane; it will be akin to watching paint dry."26 And as far as complacency and bias go, there is evidence that operator trust is directly related to the scale and complexity of an autonomous system. For instance, in low-level partially automated systems, such as SAE Level 1 autonomous vehicles, there is "a clear partition in task allocation between the driver and vehicle subsystems."27 But as the level of automation increases, this allocation gets blurred to the point that drivers find it difficult to form accurate assessments of the vehicle's capabilities, and on the whole are inclined to overestimate them.28
These results hold in the opposite direction too. Decreases in automation reliability generally seem to increase the detection rate of system failures.29 Starkly put, automation is "most dangerous when it behaves in a consistent and reliable manner for most of the time."30 Carried all the way, then, it seems the only safe bet is either to use dud systems that don't inspire overtrust or, on the contrary, to use systems that are provably better-than-human at particular tasks.
