I Got 99 Problems But a Breach Ain't One! by James Scott

Author: James Scott
Language: eng
Format: azw3
Publisher: Institute for Critical Infrastructure Technology
Published: 2017-07-19T04:00:00+00:00


Dragnet Surveillance Cannot Stymie Terrorism, But A.I. Can

In August 2016, the UK Home Affairs Select Committee said that Facebook, Twitter, and Google were “failing to tackle extremism,” that the social networks needed to show a “greater sense of responsibility,” and that they should use their earnings to help solve problems in the online world. In early 2017, Google lost millions in advertising revenue on its YouTube platform when brands boycotted in reaction to their ads appearing before or next to extremist videos. In response, Google adopted a machine learning and artificial intelligence system whose video analysis models rely on content classifiers; that system discovered more than half of the terrorism-related content removed from YouTube in the past six months. Obviously, artificial intelligence and machine learning alone cannot detect all adversary activity, nor can they perfectly prevent false positives that unintentionally remove legal user content. But these solutions safeguard security and privacy better than censorship or dragnet surveillance do. Artificial intelligence and machine learning systems are trained by humans and gradually increase in accuracy and efficiency.
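To make the classifier-plus-human-review approach concrete, here is a minimal sketch in Python. It is emphatically not Google's system: the tiny training set, the text-only features, and the review threshold are all invented for illustration, and production classifiers analyze video frames, audio, and far richer signals than metadata text.

```
# Illustrative sketch only: a minimal text classifier that prioritizes
# video metadata for human review. All examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled training examples (1 = flag for review, 0 = benign).
titles = [
    "join the fight, martyrdom awaits",       # 1
    "how to bake sourdough bread at home",    # 0
    "graphic battlefield execution footage",  # 1
    "lecture: history of the roman empire",   # 0
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(titles)

classifier = LogisticRegression()
classifier.fit(features, labels)

def review_priority(title: str) -> float:
    """Return the model's probability that a title needs human review."""
    return classifier.predict_proba(vectorizer.transform([title]))[0, 1]

# Content scoring above some threshold is routed to trained human
# reviewers, mirroring the human-in-the-loop design described above.
print(review_priority("new martyrdom video from the front"))
```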

The system is trained by operators, while independent experts still review flagged content. YouTube was accused of hosting extremist content in the immediate backlash following the London attacks, and it has since expanded the efforts of its Jigsaw group, which points those seeking radical videos to anti-terrorist content instead. Similarly, Facebook is leveraging machine learning algorithms to identify and remove extremist content using indicators such as friend count, connections to accounts disabled for terrorist activity, and similarities to said accounts [100]. The algorithms also mine words, images, and videos to root out propaganda and messaging. Hashes, or digital video fingerprints, are used to flag and intercept extremist videos before they are posted. Artificial intelligence is also being used to analyze text that has been removed for supporting or praising terrorist organizations, to identify other propaganda, and to ferret out private groups that support terrorism [101].
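The hash-based interception mentioned above can be sketched in a few lines. This is a hedged illustration rather than any platform's actual implementation: it uses an exact SHA-256 match against a hypothetical shared blocklist, whereas real video fingerprints are perceptual hashes designed to survive re-encoding, cropping, and other edits.

```
# Minimal sketch of intercepting a known extremist video at upload time,
# assuming a hypothetical shared database of fingerprints. Plain SHA-256
# only matches byte-identical copies; production systems use robust
# perceptual hashes instead.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint of the raw video bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

# Stand-in for a fingerprint set distributed by a hash-sharing consortium.
blocked_sample = b"bytes of a known extremist video"
known_extremist_hashes = {fingerprint(blocked_sample)}

def intercept_upload(video_bytes: bytes) -> bool:
    """Return True if the upload should be blocked before it is posted."""
    return fingerprint(video_bytes) in known_extremist_hashes

print(intercept_upload(blocked_sample))          # True: blocked pre-post
print(intercept_upload(b"unrelated home video")) # False: allowed through
```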

Rather than censor the entire Internet in an attempt to sift through a dynamically increasing pool of user data for the few extremists, state entities could leverage artificial intelligence and machine learning systems to identify potential lone wolves prior to radicalization or to detect shifts in propaganda delivery channels. After all, if Facebook can implement an algorithm that identifies whether users are depressed and, if so, alters their content to improve their mood, is it out of the realm of possibility for intelligence agencies to discover developing lone-wolf threat actors based on their distinct profiles and redirect them to accepting communities that provide a sense of purpose and meaning without the extremism [102]?
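To make the redirect idea concrete, here is a hypothetical sketch of scoring a profile and serving counter-narrative content when the score crosses a threshold. Every signal name, weight, and threshold below is an assumption invented for illustration; an actual system would learn such parameters from labeled data rather than hard-code them.

```
# Hedged sketch of profile scoring and redirection. The signals, weights,
# and threshold are all invented; a deployed system would use a trained
# model over far richer behavioral data.
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    searched_extremist_terms: int       # recent searches matching watchlists
    follows_disabled_accounts: int      # ties to accounts removed for terrorism
    engagement_with_flagged_posts: int  # interactions with flagged content

# Invented weights standing in for a trained model's coefficients.
WEIGHTS = {
    "searched_extremist_terms": 0.5,
    "follows_disabled_accounts": 0.3,
    "engagement_with_flagged_posts": 0.2,
}
THRESHOLD = 1.0

def risk_score(p: ProfileSignals) -> float:
    """Weighted sum of the assumed risk signals."""
    return (WEIGHTS["searched_extremist_terms"] * p.searched_extremist_terms
            + WEIGHTS["follows_disabled_accounts"] * p.follows_disabled_accounts
            + WEIGHTS["engagement_with_flagged_posts"] * p.engagement_with_flagged_posts)

def choose_content(p: ProfileSignals, requested: str) -> str:
    """Redirect high-risk profiles to counter-narrative content."""
    if risk_score(p) >= THRESHOLD:
        return "counter_narrative_playlist"
    return requested

print(choose_content(ProfileSignals(3, 1, 2), "requested_video"))
```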


