A big part of “deblackboxing” the mystery behind computers is realizing that there isn’t necessarily anything to de-blackbox. The gap between a true understanding of computing and what goes on behind ‘closed screens’ consists of the complex, arbitrary notions we ourselves have attached to something we actually created. The truth is that computing is not as unfamiliar as we think. All of these designs, systems, and pieces of software did not create themselves; someone had to build them based on human knowledge, needs, and desires. In reality, they are a reflection of our day-to-day human functions, encoded to make our lives easier and faster, or at least that is the goal, setting aside the ethical, security, and privacy issues that have arisen over the years. “The action of computing comes from people, not principles” (Denning & Martell, 2015, 19). Breaking down and highlighting these subparts of computing and systems helps us understand the information processes and algorithms that guide them toward executing specific commands and demands. We use design structures and principles of computing to transform information, discover it, classify it, store it, and communicate it; these “structures are not just descriptive, they are generative” (Denning & Martell, 2015, 15). The countless masses of information, whether physical, digital, or even conceptual, have grown overwhelmingly throughout the years, and scientists, coders, and others have needed to find different and more efficient ways to manage them, but also to “build systems that could take over human cognitive work” (Denning & Martell, 2015, 27) and, as Morse had suggested, to “construct a system of signs [by] which intelligence could be instantaneously transmitted” (Irvine, 2020, video).
Digging into these main concepts helps us realize that computing, and this black box, is not so dark and mysterious after all. A simple pair of digits, 1 and 0, has enabled such a vast system of knowledge, storage, and processing of information that it has ultimately changed life as we know it forever. Just as human memory is crucial to conducting virtually any daily task, however important or unimportant, computer memory is a crucial design principle for the functionality and existence of computers as we know them today, and “the most sophisticated mechanism” (Denning & Martell, 2015, 23). To keep that memory and all of its functionalities safe, the concept of security came to play a major role in computer system design principles. As life slowly started taking a turn “online,” we had to find ways to secure privacy and individuality virtually, the same way we do in real life. Starting with the time-sharing systems of the 1960s, designers had to create information protection, ways to control access to confidential and private information, hierarchical file systems that allowed user customization, and policies for computer operators (Denning & Martell, 2015, 23), so that people could share online the same familiarity and feeling of safety they have in real life.
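To make this concrete, here is a small illustrative sketch (not from the readings) of how those two symbols, 1 and 0, can encode something as familiar as text: each character maps to a number, and each number maps to a pattern of bits.

```python
# Illustrative sketch: encoding the text "Hi" as bits.
# Each character has a standard code point (a number), and each
# number can be written as a pattern of 0s and 1s.

text = "Hi"

for ch in text:
    code_point = ord(ch)               # e.g. 'H' -> 72
    bits = format(code_point, "08b")   # 72 -> '01001000' (8 bits)
    print(ch, code_point, bits)
```

Everything a computer stores, whether text, images, or instructions, ultimately reduces to patterns like these; the "meaning" comes from the human conventions layered on top.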
Saving memory also had its costs: the “Y2K” problem arose because years were stored with only two digits, highlighting how vulnerable information can be. With network sharing, the World Wide Web, and more, database records, passwords, and personal information can be accessed and uncovered by anyone determined to do so (Denning & Martell, 2015, 23-25). Machine learning and artificial intelligence have made it possible to create factors of authentication and identification for security purposes. Biometrics, for example, is the “recognition or authentication of people using their physiological and/or behavioral characteristics”; these can include “the face, […], fingerprints, iris, and palm [as well as] dynamics of signature, voice, gait and keystrokes” (Alpaydin, 2016, 66). Technology has developed to the point where we can literally unlock our phones with our faces, make purchases and be tracked from room to room through facial recognition, and unlock high-risk information and private matters with an eyeball or a fingerprint. We can extensively discuss the social and ethical issues that arise from such capabilities, and they show exactly that these “ultimate-crypto-computer-sciency-too-hard-for-anyone-else-to-understand” myths are truly just a reflection of our very human selves onto something technological that we have created, so extensively that even our human biases, debates, and prejudices have been unconsciously (or consciously) applied to it.
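A minimal sketch of the Y2K ambiguity can show why the shortcut mattered. The function name below is hypothetical, but the logic mirrors what many legacy systems did: store only the last two digits of the year, and subtraction breaks the moment the century rolls over.

```python
# Hypothetical sketch of the Y2K problem: two-digit years save memory
# but become ambiguous across the century boundary.

def two_digit_age(birth_yy, current_yy):
    """Naive age calculation using only two-digit years,
    as many legacy systems did. Illustrative only."""
    return current_yy - birth_yy

# Someone born in 1985 ('85'), checked in 1999 ('99'): works fine.
print(two_digit_age(85, 99))   # 14

# The same person checked in 2000 ('00'): the age goes negative.
print(two_digit_age(85, 0))    # -85
```

The fix was mundane (store four-digit years), but the episode showed how a design decision made by people, for very human reasons of cost and scarcity, rippled through systems worldwide.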
Denning & Martell, 2015, pp. 27-28
“Automated cognition systems need not work the same way the human mind works; they do not even need to mimic a human solving a problem. […] The methods used in these programs were highly effective but did not resemble human thought or brain processes. Moreover, the methods were specialized to the single purpose and did not generalize.”
So why do we alienate ourselves and become so concerned and scared about the development of technology, AI, and computers, when they can basically never be as intelligent and as advanced as the human cognitive brain and mind?
Alpaydin, E. (2016). Machine learning: The new AI. MIT Press.
Denning, P. J., & Martell, C. H. (2015). Great principles of computing. The MIT Press.
Irvine, M. (2020). Introduction to Computer System Design.