Over the course of the semester, the main theme that surfaced in our discussions of artificial intelligence was hype, and the harm that hype does to the broader conversation. No one agrees on what artificial intelligence actually is, or on what standard of intelligence should be used to judge a machine's achievement of it, and this confusion is reflected in media articles that obscure the sociotechnical systems of which artificial intelligence is a part.
Cycles of Hype and Fear, and calls to regulate what no one fully understands
Even before this course, I noticed that many different computational methods were lumped together under the mantle of "artificial intelligence." Companies of every kind began to advertise apparent "artificial intelligence" in their design or services. People responded to the hype train with excitement and capital, so much so that 40 percent of European startups claiming to use artificial intelligence employed no such methods. Merely claiming an association with artificial intelligence was enough to convert hype into capital investment.
Genuinely exciting areas of application for artificial intelligence: content moderation
While the massive hype train might tempt a cynic to dismiss all recent progress in artificial intelligence, that would be irresponsible: there are several reasons artificial intelligence is becoming an increasingly important societal conversation. Because so much digital content is created and acted upon online every second, companies that build AI systems have a wealth of training data for techniques such as machine learning. With this newfound accessibility of "big data," artificial intelligence has improved markedly over the last decade. This improvement, especially in machine vision, has exciting, and ethically difficult, implications for automating content moderation online.
Sociotechnical blindness as a result of misunderstanding and hype
To me, the biggest takeaway from learning more about artificial intelligence and the coverage surrounding it is how deep and widespread sociotechnical blindness is around systems that use artificial intelligence. This concept, introduced by Deborah G. Johnson and Mario Verdicchio, describes the way artificially intelligent systems are treated as agents separate from their creators. Most people are unaware of the human design decisions that go into artificial intelligence and the larger systems in which it operates. That blindness is why we get simplistic headlines like "AI is racist" or "AI caused a fatal accident." Such simplification of the sociotechnical systems involving AI obscures the human action and agency within them, making users feel powerless and allowing creators to eschew responsibility for real-world consequences.