I began this week’s batch of readings with the shortest one, but I found The Society of Intelligent Veillance (Minsky et al.) to be a fascinating article. Even in the first section, where the authors discuss the “society of mind,” I caught myself asking questions about the implications of their ideas. They write, “Natural intelligence arises from the interactions of numerous simple agents, each of which, taken individually, is ‘mindless,’ but, collectively, give rise to intelligence” (Minsky et al., 2013, p. 13). This makes sense, and in many cases, more minds on a task can lead to better, more diverse, and sometimes even unpredictable ideas and solutions. But how do the notions of groupthink and hive mind (with their generally negative connotations) factor into this quote? Oftentimes, additional people on a task can lead to mindless agreement and blind following as a way to finish the task quickly by the path of least resistance. The authors apply their concept of the “society of mind” to modern computing and the rise of distributed, cloud-based computing across the internet, going so far as to quote the slogan of Sun Microsystems: “The network is the computer” (p. 13). Since computers often reflect natural human bias in their programming, are computers subject to the same negative aspects of groupthink?
The section of the article on the “Cyborg Age” also caught my attention, where the authors write, “Humanistic intelligence is intelligence that arises because of a human being in the feedback loop of a computational process, where human and computer are inextricably intertwined” (Minsky et al., 2013, p. 15). Machines have acted as extensions of our bodies and senses ever since their inception, but now we’ve become reliant on them to the point of wanting to incorporate them as wearable devices, and even possibly implant them into our biological makeup. This idea brought to mind the many depictions of cyborgian technology in popular media, such as (in order of realism) Will Smith’s prosthetic machine arm in I, Robot (a replaced, enhanced limb), Doc Ock’s mechanical tentacle arms in Spider-Man (added, enhanced limbs), and Wolverine getting his bones fortified with adamantium in X-Men. There are countless other examples of this, and obviously our imaginations can sometimes take us further than science. But while these types of cyborgian innovations hold tremendous potential for the human race, when will this kind of technological advancement end? Maybe it’s just the sci-fi/comic book nerd in me, but I hope it doesn’t take a destructive cyborg supervillain in 20+ years to make us realize we need to pump the brakes on these technological extensions and enhancements to our human bodies.
I also enjoyed contemplating the Society of Intelligent Veillance, and how we are now subject to “both surveillance (cameras operated by authorities) and sousveillance (cameras operated by ordinary people)” (Minsky et al., 2013, p. 14) in our everyday public activities. We are in a modern, living panopticon, enforced and perpetuated through our own insatiable internet use. So many of the videos we see on the news and social media now come from citizen journalism: people with smartphones catching an ugly encounter in a brand-name restaurant, or a racist incident on a train platform. An especially chilling line from this section is, “If and when machines become truly intelligent, they will not necessarily be subservient to (under) human intelligence, and may therefore not necessarily be under the control of governments, police…or any other centralized control.” Who will these super-intelligent and capable machines answer to?
The answer, according to the level-headed Johnson and Verdicchio (2017), is computer programmers and engineers. They write, “To get from current AI to futuristic AI, a variety of human actors will have to make a myriad of decisions” (Johnson & Verdicchio, 2017, p. 587). The authors discuss how AI is often misrepresented in popular media, news coverage, and even academic writing, because of (1) confusion over the term “autonomy” (machine autonomy vs. human autonomy), and (2) a “sociotechnical blindness” that neglects to include human actors “at every stage of the design and deployment of an AI system” (p. 575). This is useful reasoning to keep in mind when becoming fearful about artificially intelligent cyborgian supervillains. It’s the type of reassuring logic we need to maintain faith in the positive development and incorporation of AI in our digital age.
Johnson, D. G., & Verdicchio, M. (2017). Reframing AI Discourse. Minds and Machines, 27(4), 575–590. https://doi.org/10.1007/s11023-017-9417-6