Duality in the Development of Technology

Arguably the most important reason we develop technology (AI, machine learning, and so on) is for human benefit – physical, emotional, and mental. The social reception of technologies, therefore, forms the basis of what determines useful and useless advancements in tech. It can be argued that we as a society have held the view (explicitly and implicitly) that technology is an independent force that can cause social, political, or economic effects. This utopian/dystopian framing creates hype, hope that developments in technology will bring about a better world, and hysteria that casts technology as independent, uncontrollable, and all-influencing. On the other hand, I would argue that people who are even somewhat knowledgeable about developments in technology can recognize the human power, both negative and positive, behind the technology. We can see in documentaries like The Facebook Dilemma the power a human-created algorithm has on politics around the world, or in facial recognition systems programmed to work better on specific races. While it is easy to live in the bliss of having tasks become easier and more automated, technology overall is no longer as dreamlike or inscrutable as it once seemed. It may be difficult to place blame on specific people, but we can nonetheless see a clear human-introduced bias in many of our technologies.

In addition to this idea of hype and hysteria, while I do understand the almost ridiculous technology and automation we see in movies (especially for the time in which those movies were produced), I believe we are unaware of the revolutionary technologies currently in the works. While talk of self-driving cars has become increasingly popular, we fail to recognize the long history of automated vehicles. On a tour of the Google office in Chelsea, New York, a Googler told us that technologies like the Google Home had been in development for over ten years, and that plenty more technologies are in the works right now that no one knows about but that will become all the rage in ten years' time.

The two frameworks for producing human-level intelligent behavior in a computer seem to be locked in a popularity contest. The first is the mind model, or symbolic AI, which uses a series of binary yes/no, true/false, 0/1 operations to arrive at a conclusion or action; these symbols explicitly represent what the system is reasoning about. Symbolic AI was the most widely adopted approach to building AI systems from the mid-1950s until the late 1980s, and it is beneficial because we can explicitly understand the goal of our technology and the reasoning behind the AI's decision. The alternative to the mind model is the brain model, which aims to simulate the human nervous system. Since the brain is extremely complex, it is not yet possible to replicate human-level intelligent behavior this way, but developers have created technology inspired by the human brain. For example, neural networks are built from a collection of connected units, or nodes, modeled after the neurons in a biological brain.
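The contrast between the two frameworks can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the rule names, weights, and decision task are my own invention, not drawn from any real system): the symbolic version reasons with explicit, inspectable yes/no operations, while the connectionist version hides its "knowledge" in numeric weights.

```python
# Mind model (symbolic AI): explicit true/false rules over named symbols.
# Every step of the decision can be read and audited directly.
def symbolic_decision(has_income, has_debt):
    if has_income and not has_debt:
        return "approve"
    return "deny"

# Brain model (connectionist): a single artificial neuron.
# The "reasoning" lives in numeric weights and a bias, not readable rules.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

print(symbolic_decision(True, False))      # the rule explains itself
print(neuron([1, 1], [0.5, -1.0], 0.2))    # the weights do not
```

The sketch also shows why symbolic AI is prized for interpretability: reading `symbolic_decision` tells you exactly why it answered as it did, whereas the neuron's output can only be traced back to arbitrary-looking numbers.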

What I am interested in learning more about is the list of tasks that Michael Wooldridge describes in A Brief History of Artificial Intelligence. At the 'nowhere near solved' level, he lists interpreting what is going on in a photo as well as writing interesting stories. Notably, a scandal broke out when women who searched the word 'bra' in the Photos app were returned photos of themselves in a bra or bathing suit. We continue to see information read from photos in this way: I can type 'dog' and retrieve the many dog photos in my camera roll. And while I have not been able to do so myself from scratch, in an Intro NLP course last semester we trained a system on a large dataset and generated extremely simple sentences using bigrams or trigrams. Technology cannot create an interesting story out of nothing, but it cannot do anything without data, and storytelling is no different. That same data was also used to predict the order of a sentence and the part of speech of each word, which makes Wooldridge's examples of "Who [feared/advocated] violence?" or "What is too [small/large]?" seem like questions a more experienced developer would be able to program. So I suppose my question is: are there truly limits to what we can automate and create? It seems that as time progresses we continue to do things we once thought impossible.
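The bigram approach from that NLP course can be sketched very compactly. This is an illustrative toy, not the course's actual code: train a table of which word follows which, then chain random choices from that table. The tiny corpus and start word here are invented for demonstration.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    # Map each word to the list of words observed to follow it.
    following = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        following[a].append(b)
    return following

def generate(following, start, length=5, seed=0):
    random.seed(seed)  # fixed seed so the demo is repeatable
    out = [start]
    for _ in range(length - 1):
        candidates = following.get(out[-1])
        if not candidates:
            break  # dead end: no word ever followed this one
        out.append(random.choice(candidates))
    return " ".join(out)

model = train_bigrams("the dog ran and the cat ran and the dog slept")
print(generate(model, "the", length=4))
```

Because the model only ever reproduces word pairs seen in its training data, it makes concrete the point above: the system can produce grammatical-looking fragments, but nothing it "says" originates anywhere but the data.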

“A Brief History of Autonomous Vehicle Technology.” Wired, March 31, 2016. https://www.wired.com/brandlab/2016/03/a-brief-history-of-autonomous-vehicle-technology/.

“A US Government Study Confirms Most Face Recognition Systems Are Racist.” MIT Technology Review. Accessed February 1, 2021. https://www.technologyreview.com/2019/12/20/79/ai-face-recognition-racist-us-government-nist-study/.

“Apple Can See All Your Pictures of Bras (but It’s Not as Bad as It Sounds).” The Guardian, October 31, 2017. http://www.theguardian.com/technology/shortcuts/2017/oct/31/apple-can-see-bra-photos-app-recognises-brassiere.

FRONTLINE PBS. The Facebook Dilemma, Part One (Full Film). 2018. https://www.youtube.com/watch?v=T48KFiHwexM.