My understanding of computing through the ages (semester)

As the semester starts winding down, it’s interesting to go back through my previous posts for the course and see how my thoughts have evolved over time. We sampled an extraordinary range of complex, high-level topics in a relatively short time, and dabbling in each of them only scratches the surface of these rich intellectual traditions. In revisiting my old posts, I can see how I worked through de Saussure’s and Peirce’s semiotic models, at first perhaps misunderstanding the true distinction between the two, but eventually grasping how important these differences are to understanding symbolic meaning-making.

I also developed an appreciation for the ambiguity and self-reflexivity of language itself, and for how the human brain is uniquely equipped to create and understand meaning from arbitrary occurrences (be they sounds, letters, or even events). In this sense, the brain is a computer: our OS Alpha. Computers are not distinct, separate pieces of hardware, and computing is not some magical thing that happens inside them. Rather, their logic evolved out of a natural progression of symbolic and representational technologies based on particular elements of the human mind. When we joke that somebody “thinks like a computer” (meaning either that they are gifted at computational thinking or that they lack emotion), what we really mean is that they think in a specific way that has been isolated and applied to how computers function.

As they have advanced, computers have adopted more and more of the characteristics and functions of humans, coming to resemble the human brain more closely (just multiplied, of course). With AI, computers attempt to replicate emotional, conversational, and interactional functions that were previously unavailable to them. With predictive technologies, such as Google’s search suggestions or Amazon’s “you may be interested in…”, they have adopted our forms of associative thinking. This is no accident; it is intentionally directed by humans. We used to make mix tapes on cassettes and give them to people we had crushes on; now we have Spotify make playlists for us based on our listening history. This was not an accidental progression: the technology was built on how humans already thought. The same can be said for the technologies we use for cognitive offloading. Instead of “filing” pieces of information away in our minds, we let these tools keep track of them for us while we juggle overwhelming amounts of information.

Samantha from “Her”

Needless to say, my understanding of meaning-making, symbols, representation, and computing has changed throughout this course. I now understand computing as an extension of human thought based on human rules, not as a mysterious black box in opposition to it. But one thing still bugs me: I can’t quite figure out where the distinction between humans and computers should lie (“should” because I’m operating under a normative assumption). Computers are bundles of calculations and processes that are necessarily derived from human thinking. The more of them you bundle together, and at higher levels of abstraction and analysis, the more “complex” a computer or technology you have. The “highest” form of this would be an AI that functions precisely as a human does, one that contains all the analytical, judgmental, sensory, emotional, and other capabilities that we have. But is this possible? Is it only a matter of technological capability, or is there a necessary divide between us? By divide I don’t mean the “humans vs. computers” sense I described before, but the mere fact of how our reality functions. Anyway, computers are really cool, and they provide an infinitely fascinating mirror with which we can examine what it means to be human.