A Citizen's Guide to Artificial Intelligence by John Zerilli

Author: John Zerilli
Language: eng
Format: epub, pdf
Tags: artificial intelligence; machine learning; transparency; bias; responsibility; liability; meaningful human control; privacy; autonomy; algorithms in government; employment; oversight and regulation; regression; classification; decision trees; neural networks; Bayesian models; supervised learning; transparency; explainability; accessibility; accountability; black box; intentional stance; practical reason; bias; discrimination; prejudice; heuristics; predictive policing; fairness; compatibility; accuracy; strict liability; fault-based liability; insurance; moral agency; automation bias; automation complacency; human in the loop; meaningful human control; human factors; privacy; GDPR; right to be forgotten; inferential analytics; purpose limitation; autonomy; Facebook; political advertising; manipulation; democracy; administrative; soft law; hard law; self-regulation; oversight body; auditing law; public sector; procurement
Publisher: MIT Press


Automation has a significant impact on situation awareness.17 For example, we know that drivers of autonomous vehicles are less able to anticipate takeover requests and are often ill prepared to resume control in an emergency.18

The third difficulty relates to the currency of human skills (the “currency problem”). Here is Bainbridge, again: “Unfortunately, physical skills deteriorate when they are not used. … This means that a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one.”19

The fourth and final difficulty, and the one we’ve chosen to focus on in this chapter, relates to the attitudes of human operators in the face of sophisticated technology (the “attitudinal problem”). Except for a few brief remarks,20 this problem wasn’t really addressed in Bainbridge’s paper.21 It has, however, been the subject of active research in the years since.22 Here the problem is that as the quality of automation improves and the human operator’s role becomes progressively less demanding, the operator “starts to assume that the system is infallible, and so will no longer actively monitor what is happening, meaning they have become complacent.”23 Automation complacency often co-occurs with automation bias, which arises when human operators “trust the automated system so much that they ignore other sources of information, including their own senses.”24 Both complacency and bias stem from overtrust in automation.25

What makes each of these problems especially intriguing is that each gets worse as automation improves. The better a system gets, the more adept at handling complex information and at ever greater speeds, the more difficult it will be for a human supervisor to maintain an adequate level of engagement with the technology to ensure safe resumption of manual control should the system fail. When it comes to the current (“SAE Level 2”) fleet of autonomous vehicles* that allow the driver to be hands- and feet-free (but not mind-free, because the driver still has to watch the road), legendary automotive human factors expert Neville Stanton expressed the conundrum wryly: “Even the most observant human driver’s attention will begin to wane; it will be akin to watching paint dry.”26 And as far as complacency and bias go, there is evidence that operator trust is directly related to the scale and complexity of an autonomous system. For instance, in low-level partially automated systems, such as SAE Level 1 autonomous vehicles, there is “a clear partition in task allocation between the driver and vehicle subsystems.”27 But as the level of automation increases, this allocation gets blurred to the point that drivers find it difficult to form accurate assessments of the vehicle’s capabilities, and on the whole are inclined to overestimate them.28

These results hold in the opposite direction too. Decreases in automation reliability generally seem to increase the detection rate of system failures.29 Starkly put, automation is “most dangerous when it behaves in a consistent and reliable manner for most of the time.”30 Carried all the way, then, it seems the only safe bet is to use dud systems that don’t inspire overtrust, or, on the contrary, to use systems that are provably better-than-human at particular tasks.
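The trade-off in that last sentence can be made concrete with a toy simulation. The sketch below is illustrative only, not a model from the book or the studies cited above: it assumes, purely for the sake of argument, that an operator's vigilance decays geometrically with every consecutive step the automation handles cleanly (a stand-in for complacency) and resets whenever the operator catches a failure. The decay rate, vigilance floor, and reliability values are made-up parameters.

```python
import random

def miss_rate(reliability, n_steps=1_000_000, decay=0.995,
              base_vigilance=1.0, floor=0.05, seed=0):
    """Share of automation failures the operator fails to catch, under an
    assumed complacency model (illustrative parameters, not empirical):
    vigilance decays with each consecutive automation success and resets
    to its base level after a caught failure."""
    rng = random.Random(seed)
    vigilance = base_vigilance
    failures = misses = 0
    for _ in range(n_steps):
        if rng.random() < reliability:
            # Automation handles the step; the operator relaxes a little.
            vigilance = max(floor, vigilance * decay)
        else:
            failures += 1
            if rng.random() < vigilance:
                # Failure caught: the scare restores full attention.
                vigilance = base_vigilance
            else:
                misses += 1
    return misses / failures if failures else 0.0

for r in (0.90, 0.99, 0.999, 0.9999):
    print(f"reliability={r} -> share of failures missed: {miss_rate(r):.2f}")
```

Under these assumptions, the more reliable system misses a far larger share of its (much rarer) failures: detection degrades precisely because failures become too infrequent to keep the monitor engaged, which is the pattern the passage describes.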





