Computers are designed for various limited purposes, but almost all of them serve the underpinning goal to “extend and distribute human collective cognition and agency”. To the best of my current knowledge, to do something computationally (thinking, arithmetic calculation, etc.) means to sequentially re-represent information multiple times until a specific goal is achieved. Seeing the big picture requires one to analyze the computational process against Peirce’s triadic semiotic theory of signs, to uncover how any intelligence that is acquired, exhibited, or displayed depends on the symbol systems it is observably associated with, be it a human being whose biological body works in synergy or a computer whose material components are organized to carry out operations in tandem.
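This view of computation as repeated re-representation can be made concrete with a small sketch of my own (an illustration, not something from the readings): evaluating an arithmetic expression by rewriting one innermost sign at a time until the goal representation, a single numeral, is reached.

```python
import re

def step(expr: str) -> str:
    # Rewrite one innermost "(a+b)" into its sum: a single re-representation.
    return re.sub(
        r"\((\d+)\+(\d+)\)",
        lambda m: str(int(m.group(1)) + int(m.group(2))),
        expr,
        count=1,
    )

def evaluate(expr: str) -> list[str]:
    # Repeat the rewriting until the goal (a bare numeral) is reached,
    # recording every intermediate representation along the way.
    history = [expr]
    while not expr.isdigit():
        expr = step(expr)
        history.append(expr)
    return history

print(evaluate("((1+2)+3)"))  # ['((1+2)+3)', '(3+3)', '6']
```

Each element of the returned history is a different sign standing for the same object, which is the sense in which computation re-represents information until the goal is achieved.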
The analogy drawn between information and light in the introduction of Sarbo’s book reveals a profound insight into the nature of information. Computationally speaking, any input (Peirce’s representamen) as a set of signs can be broken down, re-represented as more signs, and brought back to its original condition (Peirce’s object) so that its intended meaning is retained for interpretation or other use. However, this process also depends on the state (Peirce’s interpretant) of the observer, which in this case may be an internal state of the computer itself. Discussing knowledge representation in human meaning systems at length later in the introduction, Sarbo writes, “through interactions, we experience, via learning we know reality”. At first this did not seem too different from the way computers learn under the machine-learning model. However, a slight modification is needed: “through data the machine experiences, via learning it knows reality”. Just as Sarbo concludes that knowledge should come from the interactions that are “forced upon us”, machines acquire knowledge through the data that is forced into them. It may again be only a matter of time until technology is developed to afford the human-like interactions from which a physical machine can store data. This fundamental comparison of knowledge representation between humans and computers reinforces the conception of the computer as designed to extend human cognition.
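Peirce’s triad, as applied above, can also be illustrated computationally (a hypothetical sketch of my own; the text encodings merely stand in for sign systems): the same object survives a chain of re-representations, yet what is recovered depends on the interpreting state of the observer.

```python
import base64

message = "Peirce’s sign"                 # the initial sign (representamen)

# Break the sign down and re-represent it as other signs.
as_bytes = message.encode("utf-8")        # characters -> bytes
as_b64 = base64.b64encode(as_bytes)       # bytes -> base64 text

# Bring it back to its original form: the object is retained
# across the whole chain of re-representations.
recovered = base64.b64decode(as_b64).decode("utf-8")
assert recovered == message

# But interpretation depends on the observer's state (the interpretant):
# the same bytes, decoded under a different assumed encoding,
# yield a different reading of the "same" sign.
misread = base64.b64decode(as_b64).decode("latin-1")
assert misread != message
```

The round trip succeeds only when encoder and decoder share the same state, which is the computational analogue of the interpretant’s role in fixing what a sign means.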
Exposing myself to such an integrated way of thinking about computation through its semiotic-cognitive structures better informs my research interests. First and foremost, although I have been impatient in my pursuit to learn more about quantum computing, it no longer intimidates or repels me, as I feel better equipped to begin by asking the right (“big picture”) questions. For instance, I think it useful to view quantum computing primarily as an information process and to synthesize which functions it has already proved able to perform in the context of extending human collective cognition and agency.
Peter Wegner. “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5 (May 1997): 80–91.
Herbert A. Simon. The Sciences of the Artificial. Cambridge, MA: MIT Press, 1996. Excerpt (11 pp.).
Janos J. Sarbo, Józef I. Farkas, and Auke J. J. van Breemen. Knowledge in Formation: A Computational Theory of Interpretation. Heidelberg; New York: Springer, 2011. Selections.