Author Archives: Madhumitha Kumar

“You can’t just make up a word”: The Semiotic-Pragmatic Approach to Philosophizing Strong AI

“All human thinking is in symbols” – C.S. Peirce


What is the most useful conception of the affordances of strong artificial intelligence, or artificial general intelligence (henceforth AGI), as it is realized, from which to draw practical assumptions about the intellectual futures of humanity? Contrary to dangerously simplistic popular notions, the semiotic method tells us that human-like AGI will never be able to internalize meaning the way humans do and, as a result, will not have the capacity to exhibit enough independent agency to warrant a “moral status” or “personhood”. This essay attempts to synthesize research from the philosophy of AI, cognitive science, the philosophy of mind and semiotics to address the consequences of automating the full range of symbolic-cognitive capabilities of the human species.

Key concepts: symbolic system; philosophy of mind; cognitive science; philosophy of AI; semiotics; intelligence. 


Contemporary AI philosophy and popular science have propagated deterministic predictions about the nature of computing technological development, such as the “intelligence explosion”, the “singularity” and “superintelligence”. This speculative narrative assumes that once machines reach human-level intelligence (as a consequence of human scientific development), they will acquire the ability to “self-actualize” and therefore express their own desires, needs and intentions. My goal in this paper is to challenge this notion, to show that such radical foresight may be fatally mistaken, and to explain why such a hypothetical event is neither plausible nor even remotely useful. The motivation driving AGI research is not so much to engender a new species to keep humanity company, or even to replace humanity, but to automate complex human problem-solving in a human-machine symbiotic fashion. While the field remains largely conjectural, active research has been carried out since interest in AI sprang up in the 1950s. Whether AGI technology is realized or not, we are presented with profound scientific questions about the characteristic uniqueness of the human mind and its future.

My inquiry into AGI “machines” is rooted in curiosity about their imminent ability to communicate in natural language (commonly agreed to be an important criterion for human-level intelligence). I suspect that applying the symbolic-cognitive hypothesis reveals important consequences for this long-term goal of AI research: to mechanize general intelligence. What motivates my inquiry is an intellectual dispute with the impression that human-level artefacts (artificial minds) will soon become self-conscious and develop “minds of their own”. My contention is that AGI technology, from a phenomenological standpoint, will primarily be a tool, albeit a powerful one, augmenting human cognitive agency and perhaps exponentially accelerating human intellectual progress. For the sake of argument, I use the term AGI as John Searle defines it: “a physical symbol-system capable of having a mind and mental states”.

The computer is and always has been an interface to the symbolic world represented inside it (which is itself an artefact of human symbolic cognition). An artificial intelligence machine will therefore primarily remain a human-created symbol system. There are compelling arguments for the implausibility of manufacturing intelligence based on the simulation argument: just because a computer simulates human thought does not mean it is the real thing. But this misses a more practical question: what will the utility functions of a machine that simulates human thought be? AGI will be an extension of the “continuum” of human symbolic cognition. “Artefactual Intelligence, not Artificial Intelligence”: we will not replicate human intelligence, but we may achieve just enough to enhance human cognitive capacities.

The consequence of studying the philosophy of AGI is that it re-informs the way we think about the nature of a speculative technology that purports to simulate all human intelligent behaviour. The symbolic-cognition hypothesis urges us to take full advantage of the fact that artificial intelligence is an exponentiation of the kind of cognitive offloading humans have been doing for millennia (50,000 years or so). Computation, as an artefact of human symbolic cognition itself, is an automation of semiosis. As such, my inquiry is not particularly concerned with the technological feasibility of such a machine, but rather aims to study its implications by means of insight borrowed from Peircean semiotics, cognitive science and the philosophy of mind.

“What deserves emphasis is not these mundane observations themselves, but their powerful consequences.” ~ John Haugeland


1. Current AGI Research

Computing technological progress has not been deterministic in the past, and there is no reason to assume it will be for AGI. It would be disastrous to conclude precisely what kind of AGI designs we will interact with in the future. However, based on what AI researchers are actively working on, we can perhaps map out what is to come for the sake of philosophical examination.

“the scientific goal of mechanizing human-level intelligence implies at least the potential of complete automation of economically important jobs (even if for one reason or another the potential is never activated)”. ~ Nils Nilsson

While narrow AI research pursues multiple different routes or “narratives”, the prevalent goal of all AGI or “strong AI” research seems to be similar. The computing technologies we have had so far have been built, explicitly or implicitly, with a goal to “augment human intelligence”. AGI research aims to expand this cognitive need: “to automate rational thinking to make the best decisions with limited computational resources” (Ben Goertzel).

2. General Intelligence

AGI research is an inherently interdisciplinary field, converging on the same problem of defining general intelligence with insight from computer science, cognitive science, neuroscience, psychology and linguistics. The standard usage of the term “intelligence” in modern human communities seems to refer to an ability to solve problems. This matters because the only way a social community can unanimously accept that a machine exhibits human-like intelligent behaviour, albeit externally, is if everyone can agree on what that behaviour is. Minsky offers a flexible definition:

“Our minds contain processes that enable us to solve problems we consider difficult. ‘Intelligence’ is our name for whichever of those processes we don’t yet understand.”

However, it is perhaps even more critical to outline how AGI researchers define the same term in the context of scientific development. Lo and behold, the field is ripe with a growing variety of definitions. For instance, the physical-symbol system hypothesis claims that a symbol system, with an ability to store and manipulate symbols, is necessary for intelligence. Another common view is that the ability to process and communicate in natural language is a testament to intelligent behaviour. Levesque offers a functional approach, essentially arguing that we ought to shift the discussion from what counts as intelligence to what counts as “intelligent behaviour”, an agent making intelligent choices, for a more productive understanding. Levesque also draws a crucial distinction within human intelligence, between the “mechanical” and “ingenuity”, which is worth noting: the mechanical is what symbolic-cognitive artefactual activities like algebra demand of us, the strict procedurality of following rules and syntax, while ingenuity requires a meaningful absorption.

A quick retrospective survey of AGI ventures and designs reveals that much of AGI research has been directed at natural language processing. This makes sense: to mechanize human-level intelligence, automating the species’ primary meaning-making system sounds like a good start. Notable milestones include ELIZA, built in the 1960s to model human-like interaction, and IBM’s Watson, which beat the best human players at Jeopardy! in 2011. The Turing test has been commonly accepted as a close approximation for confirming the general intelligence of an artefact. Perhaps a more accurate test, such as garnering an insightful response from an intelligent machine when given a piece of artwork, a song or a movie, would probe its understanding of the relational function of the “interpretant”.

“[…] until the last have no intelligence at all. But that does not yet resolve the more basic paradox of mechanical reason: if the manipulators pay attention to what the symbols mean, then they can’t be entirely mechanical because meanings exert no mechanical forces; but if they ignore the meanings, then the manipulations can’t be instances of reasoning because what’s reasonable depends on what the symbols mean. Even granting that intelligence comes in degrees, meaningfulness still does not; that is, the paradox cannot be escaped by suggesting that tokens become less and less symbolic, stage by stage. Either they’re interpreted, or they aren’t.” (Haugeland)

[Image: IBM’s ‘Watson’: what we already have]

[Image: What we expect of an AGI design]

3. Semiotic Theory

The human invention of the stored-program computer ushered in a revolutionary distinction in the functions of symbols (data): those that can do things and those that mean things. Here I lay out two important semiotic principles that guide my discussion.

PRINCIPLE 1: The first signs have a material-physical-perceptible form

“So we normally take symbols to be objects in one domain that, by virtue of similarity or convention, stand for something, objects in another domain. The first domain is usually concrete and easily seen or heard, while the second one may be less concrete, less accessible, and sometimes purely abstract.” (Levesque)

“A sign is something by which we know something more” (C.S. Peirce). Here the sign is the perceptible-physical-material substrate, and the “something more” is the abstraction, which is another set of signs and symbols.

PRINCIPLE 2: Dialogism

Hobbes said that human thought is inner discourse, which makes sense: according to semiotic theory, meaning is dialogic, activated by symbols, and we have already established that, on the Peircean view of semiotics, all human thinking is in symbols. The most pertinent role in semiosis is that of the interpretant, which performs the relational or mediating function of producing another set of symbols for the first sign. This is the key activation that anchors an entity’s cognition to shared ideas and the outside world.

4. Symbolic Cognition

As a hypothesis about the core human operating system, which remains the source of computation and all other artefactual cognitive agencies, this concept holds that cognitive faculties such as reasoning and decision-making are the result of “automatic” symbol manipulation. This unique human symbolic capacity to make meaning, however, depends on shared symbols within a community rather than occurring within an individual’s mind. Human symbolic cognition engages with symbols for two purposes: communication (sending information) and problem-solving (computation). In the realm of cognitive technologies, however, the two activities are interdependent: information processing is required for communication across time and distance, and communication (control of information flow) is required for pulling the right data during a computation. Over evolutionary time, humans have become smarter because each generation has been able to build higher levels of abstraction on what the previous generation left behind in extended memories, in a continuum of symbolic-cognitive advancements (the cumulative or ratchet effect).

“Symbolic cognition enables us to “go beyond the information given”  (Bruner, Peirce) in any concrete instance of an expression or representation. We go beyond physical perceptions to activate meanings, ideas, higher levels of abstraction, values, and responses that are not physical properties of symbolic forms but mental and physical associations that we learn how to correlate with them — and “output” our meanings in further symbols, actions, or emotions.” (Martin Irvine)

The crux of my argument is that it is highly unlikely that we can build artificial “minds” with the capacity to assimilate, as humans do, the meaning of the signs and symbols they manipulate, because it takes a community with shared concepts and ideas to make an exchange, not a single superintelligent entity in isolation. The process of meaning-making, in other words semiosis, transpires not in the individual human mind but in a community of minds. AGI technology could mediate, communicate, transmit and store meanings, but according to semiotics it is impossible for an artefact to realize meaning itself. With the advancement of NLP, computers could begin to process the vast array of information on the web, but the results will only point to more “signs and symbols” by way of computational routes.

Perhaps no other evidence underscores my impression better than John Searle’s Chinese Room. In this thought experiment, a person with no knowledge of Chinese sits alone in a room, while native Chinese speakers outside slip questions written in Chinese under the door. The person inside is equipped with a rule-book specifying exactly which Chinese characters to send back in response to which incoming symbols. As a result, the two sides are able to carry out what looks like a meaningful conversation in Chinese, even though the person inside has no sense of the meanings transmitted through the symbols. The thought experiment demonstrates that since the symbols themselves contain no semantic content, the machine computing them will never understand the meanings the symbols elicit in use. This marks another fundamental question for AGI: does general intelligence require “meaning-making”, or is “symbol-processing” sufficient? After all, the need to develop AGI is to help us (humans) solve problems, not, to put it dramatically, to relinquish the responsibility of human civilization to a colony of robots. Information theory, likewise, was about preserving structures across time and distance so that meaning-making, semiosis, can occur at the receiving end; that does not mean the electricity itself understands the meaning of the information humans exchange with its help.
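
A toy sketch makes the point concrete. The program below (my own illustration, not Searle’s; the phrases and replies are hypothetical) implements the rule-book as a lookup table: it produces fluent-looking Chinese output while representing nothing about what any of the symbols mean.

```python
# A toy sketch of the Chinese Room: the rule-book as a lookup table.
# The mapping is from symbol-shapes to symbol-shapes; nothing in the
# program has access to what either side of the exchange means.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我没有名字.",  # "What is your name?" -> "I have no name."
}

def room_reply(symbols: str) -> str:
    """Return whatever response the rule-book designates for the input."""
    return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(room_reply("你好吗?"))  # fluent output; zero understanding inside
```

The replies are syntactically appropriate, yet swapping every string for an arbitrary token would leave the program’s behaviour, from its own “point of view”, unchanged.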

“The Turing machine itself has no way of guessing what the symbols are intended to stand for, or indeed if they stand for anything.” (Levesque)

“A sign, or representamen [the physical perceptible component], is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign, or perhaps a more developed sign. That sign which it creates I call the interpretant of the first sign.” – C.S. Peirce

The only kind of “interpretation” such a system engages in is the actions it performs when designated symbols “do things” rather than “mean things”. What humans naturally do with symbols, thanks to the species’ innate semiotic competence, is make meaning, and only secondarily compute; what computers naturally do with symbols is merely compute.

“Interpretation implies a special form of dependent action: given an expression, the system can perform the indicated process, which is to say, it can evoke and execute its own processes from expressions that designate them.” (Newell and Simon)
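
Newell and Simon’s sense of “interpretation”, symbols that do things rather than mean things, can be sketched in a few lines. This is my own minimal illustration, not code from their work: an expression “designates” a process precisely when the system can evoke and execute that process from it.

```python
# Minimal sketch of Newell & Simon's "interpretation": the system can
# evoke and execute the process an expression designates.
# The operation names are illustrative only.
PROCESSES = {
    "ADD": lambda a, b: a + b,
    "MUL": lambda a, b: a * b,
}

def interpret(expression):
    """Execute the process the expression designates."""
    op, a, b = expression
    return PROCESSES[op](a, b)

print(interpret(("ADD", 2, 3)))  # -> 5
# The symbol "ADD" did something, but nothing here knows what addition means.
```

The symbol is causally effective inside the system without being semantically interpreted, which is exactly the distinction the quote above turns on.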

CONSEQUENCE 1: Why self-replication is unlikely upon the “intelligence explosion”

The “display” of intelligence is not the real thing. A machine may be able to accurately manipulate symbols based on other human symbol systems. However, since meanings are not properties of the signs or symbols themselves but of the process of interpreting them, which renders the event unobservable, we will never be able to discern the machine as engaging in meaningful communication. Such a machine can only “represent” meaning, not internalize it, because symbolic cognition is an exclusively human (species-specific) capacity. Hence, a machine as intelligent as humans will never be able to “express” meaning arising from its own desires, values or goals. It can only represent meaning that is being communicated, transferred and exchanged among humans across space and time.

“What an automatic formal system does, literally, is “take care of the syntax” (i.e., the formal moves).” (Haugeland)

CONSEQUENCE 2: Intersubjectivity

For meaning to be “meaningful”, it requires two or more agents. Semiotic theory therefore shows that intelligence, as the term is coined in the technology community today, does not require an understanding of semantic content.

“Meanings are public, not private.” 

“Entering language — and crossing the symbolic threshold — means entering a world of meanings that are always collective, intersubjective, public, and interpersonally shared and communicable.” ~ Martin Irvine

All the talk about “machine bias” produced so far by narrow AI systems could have been predicted had it been theorized from a semiotic perspective. Symbols may or may not mean anything; they only stand for other symbols, which stand for something else (surprise: more symbols). Even if a computer can “process” natural language, in actuality it converts the signs of the language, words, sentences, verbs, into numerical representations that lose the intended meaning of the initial signs.
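
A toy encoding makes the loss concrete. In the sketch below (my own illustration, with an arbitrary three-word vocabulary), words become indices whose values are pure convention: nothing distinguishes “loyalty” from “betrayal” except which integer each happens to receive.

```python
# Toy "numerical representation" of word-signs. The index assignments
# are arbitrary conventions; the machine operates on the numbers alone.
VOCABULARY = ["dog", "loyalty", "betrayal"]
WORD_TO_INDEX = {word: i for i, word in enumerate(VOCABULARY)}

def encode(sentence: str) -> list:
    """Re-represent a sentence as a list of vocabulary indices."""
    return [WORD_TO_INDEX[word] for word in sentence.split()]

print(encode("dog loyalty"))  # [0, 1]
# To the machine, "loyalty" differs from "betrayal" only as 1 differs from 2.
```

Real NLP systems use far richer representations than indices, but the semiotic point stands: the numbers stand for the signs, not for what the signs mean to us.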

“Meanings and values learned and discovered in our sign systems are not “located” “in” anyone’s head or anywhere, but are cognitive-social events that we activate and instantiate by using our “cognitive-symbolic resources.” (Martin Irvine) 

Imagine machines could finally process natural language as efficiently as humans do. It is worth thinking about what it would look like when two or more such machines communicate with one another. Will it be meaningful? Or will it just be a set of logical responses to a set of questions? According to semiotics, such a machine will never be in a symbolic-cognitive position to say “I understand you”, “You know what I mean?” or “Was that clear?”, except of course if it was programmed to use those expressions as conversational fillers to display intelligent behaviour, a sort of faking it.

CONSEQUENCE 3: Externalized, Distributed and Collective Cognition

If we ever reach the point of automating the complete linguistic competencies of the human species, AGI will still only be an interface to a more powerful distributed human cognition, according to the concept of “delegated agency or cognition” first provided by Latour. Consequently, human-level intelligent machines will remain an “external symbol storage”, an extension of human symbolic-cognitive capabilities rather than a replacement.

Cognitive semiotics (embodied cognition) also offers a different view of the mind-body problem: essentially, all the physical stuff points to, or “stands for”, all the mental stuff such as intentions, beliefs, desires, thoughts and values. These sign relations or functions are the fundamental causation of human meaning-making. Hence, our intellectual capacities do not necessarily reside in our minds but exist pervasively across networks of meaning-making in human communities.


Understanding AGI through the semiotic method reveals implications for the philosophy of AI, including the premature drive to anthropomorphize potential human-level intelligent artefacts. It helps us see clearly that cognitive artefacts, as products of human symbolic cognition, cannot even remotely pose an existential risk, essentially nullifying the false speculations fear-mongered by popular scientific experts. What, then, do we get as a consequence of automating human symbolic cognition?

Much of the supposed danger of the hypothetical technological singularity emerges from an overextended extrapolation of current approaches to narrow AI, which are fundamentally different from those needed to achieve artificial general intelligence. We can put on a show to make a machine look as if it truly understands the world through its interactions, but we have no valid test to prove it. Since signs can be literally anything in the physical universe, automating the “core human operating system” can strengthen communication and computation, perhaps naturally leading to better connectivity and more efficient complex problem-solving.

Okay, sure, once we have formalized the whole of symbolic cognition, machines might be able to form thoughts based on the rules we teach them; that is, after all, how humans “make” meaning (it is rule-governed). But what purpose would that serve? What would the utility functions be? My aim is not merely to pose rhetorical questions, but to urge us to think deeply about the effects of developing ever more symbolic-cognitively powerful computing technologies. More relevant questions would be: what kinds of cognitive activities will we most likely off-load? And what will that mean for human experience in a much more technologically advanced world?

While we may succeed in automating human-level intelligence (perhaps even human-level linguistic competencies) in a material artefact, semiotics helps us accept that AGI machines will never be as semiotically competent as human beings have been since the emergence of Homo sapiens.


Jiajie Zhang and Vimla L. Patel, “Distributed Cognition, Representation, and Affordance,” Pragmatics & Cognition 14, no. 2 (July 2006): 333–341.

Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach,” 3rd ed., 2009.

Martin Irvine, “The Grammar of Meaning-Making: Signs, Symbolic Cognition, and Semiotics.”

Martin Irvine, “Introduction to Linguistics and Symbolic Systems: Key Concepts.”

Martin Irvine, “Semiotics 1.0: Basic Introduction.”

Martin Irvine, “Introduction to the Theory of Extended Mind and Distributed Cognition.”

Ben Goertzel and Cassio Pennachin, eds., “Artificial General Intelligence,” Springer, 2007.

Nils Nilsson, “Human-Level Artificial Intelligence? Be Serious!” AI Magazine 26, no. 4 (2005).

Allen Newell and Herbert Simon, “Computer Science as Empirical Inquiry: Symbols and Search,” Communications of the ACM 19, no. 3 (March 1976).

Allen Newell, “Intellectual Issues in the History of AI,” Carnegie-Mellon University, 10 November 1982.

Marvin Minsky, “The Society of Mind,” Simon & Schuster, 1988.

John Haugeland, “Artificial Intelligence: The Very Idea,” Cambridge, MA: The MIT Press, 1985.

Ray Kurzweil, “The Singularity Is Near: When Humans Transcend Biology,” 2005.

Ray Kurzweil, “How to Create a Mind,” 2012.

Patrick Tucker, “The Singularity and Human Destiny,” The Futurist, 2007.

The Big Picture: Cognitive-Semiotic Theory of Computation

Computers are designed for various limited purposes, but almost always with the underpinning goal to “extend and distribute human collective cognition and agency”. To my best knowledge, to do something computationally (thinking, arithmetic calculation, etc.) means sequentially re-representing information, multiple times, until a specific goal is achieved. Seeing the big picture requires analyzing the computational process against Peirce’s triadic theory of signs, to uncover just how any intelligence that is acquired, exhibited or displayed depends on the symbol systems with which it is observably associated, be it a human being with a physical biological body in synergy, or a computer with physical material components organized to carry out operations in tandem.

The analogy drawn between information and light in the introduction of Sarbo’s book reveals a profound insight into the nature of information. Computationally speaking, any input (Peirce’s representamen) as a set of signs can be broken down, re-represented as more signs, and brought back to its original condition (Peirce’s object) so as to retain its intended meaning for interpretation or other use. This process, however, also depends on the state (Peirce’s interpretant) of the observer, which in this case may be an internal state of the computer itself. As Sarbo discusses at length in the later part of the introduction, on knowledge representation in human meaning systems, “through interactions, we experience, via learning we know reality”. At first this did not seem too different from the way computers learn through the model of machine learning, with a slight modification: through data the machine experiences; via learning it knows reality. Just as Sarbo concludes that knowledge should come from the interactions that are “forced upon us”, machines acquire knowledge through the data that is forced into them. It may, again, be just a matter of time until technology is developed that affords human-like interactions from which the physical machine can store data. This fundamental comparison of knowledge representation between humans and computers again supports the concept of the computer as designed to extend human cognition.
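
The round trip the light analogy describes, a sign broken down, re-represented, and restored for interpretation, can be sketched with an ordinary encoding chain (my own example, using base64 simply as one convenient re-representation):

```python
import base64

# A sign re-represented twice and recovered: the structure survives each
# transformation, which is what makes interpretation at the far end possible.
original = "a sign is something by which we know something more"

# text -> bytes -> base64 text: more signs standing for the first signs
encoded = base64.b64encode(original.encode("utf-8")).decode("ascii")

# bring the sign back to its original condition
decoded = base64.b64decode(encoded).decode("utf-8")

assert decoded == original
print(decoded)
```

Each intermediate form is unreadable to us yet fully sufficient for the machine, underlining that what computation preserves is structure, not meaning.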

Exposure to such an integrated method of thinking about computation through its semiotic-cognitive structures better informs my research interests. First and foremost, although I have been impatient in my pursuit of quantum computing, it no longer intimidates me, as I feel better equipped to begin by asking the right (“big picture”) questions. For instance, I find it useful to view quantum computing primarily as an information process and to synthesize the functions it has so far proved able to perform in the context of an extension of human collective cognition and agency.


Peter Wegner, “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5 (May 1, 1997): 80–91.

Martin Irvine, “Introduction: Toward a Synthesis of Our Studies on Semiotics, Artefacts, and Computing.”

Herbert A. Simon,  The Sciences of the Artificial. Cambridge, MA: MIT Press, 1996. Excerpt (11 pp.).

Janos J. Sarbo, Józef I. Farkas, and Auke J. J. van Breeman. Knowledge in Formation: A Computational Theory of Interpretation. Heidelberg; New York: Springer, 2011. Selections. 



How Personal is the Personal Computer?

The emergence and development of the graphical interface was a foundational concept that enabled computing to serve as a mediating platform. Numerical (formal) representation played a key role in translating any “media” into data that can be manipulated, changed, deleted, meddled with and copied by means of computation. Much of Alan Kay’s ideas stemmed from his deep insight into learning in both children and adults. For children, he aptly referred to the potential application of the Dynabook (personal computer) as a “dynamic book”, as opposed to an inactive one. Kay cites Papert’s argument that children should be taught to program the computer instead of the other way round: “Should the computer program the kid, or should the kid program the computer?” (S. Papert).

When someone buys a personal computer today, the act of “ownership” supposedly grants certain freedoms to utilize or manage one’s “property”. Yet rarely do we enjoy those freedoms with the personal computing gadgets we own, except perhaps to change the desktop wallpaper or the time. There are two things I would like to see developed in industry. First, personal computers that allow users to build their own programs; this idea did not sound feasible to me until I read Alan Kay’s written work at length on the computer as a tool that “manipulates information” for humans. Second, I look forward to a keyboard- and mouse-free interface future, although much later than the first feature, if it is afforded by the PCs on the market. That would, I can only imagine, mean directly talking to a computer as the primary mode of interaction (or communication), driven mainly by the development of natural language processing. I have difficulty, however, pointing out the trumping benefits of such a user interface over what we are used to today. But then again, according to Alan Kay, “interface is a conspiracy”.
I guess the question we need to ask is: what conspiracy do we want people embroiled in?


Interfaces: Bringing Man and Machine Closer

Many of the computer designs we take for granted today have a vivid and imaginative intellectual history. The potential for a computer to do more than just big calculations was noticeable to a significant cohort. Engelbart clearly understood that the computer is not just an arithmetic calculator but a machine that can work on a symbolic level generally. In fact, in his proposal for a hypothetical machine capable of augmenting human intellect, he claimed, “Every person who does his thinking with symbolized concepts (whether in the form of the English language, pictographs, formal logic, or mathematics) should be able to benefit significantly.” Engelbart acutely drew out the limitations of both machine and man working individually, and demonstrated that combining their strengths allows man to work at a new level of efficiency and productivity. Moreover, he took a closer, investigative look at the nature of human cognition to explain the benefits of a computer more suited to universal human needs. First, he clearly understood the limitation of humans in managing large amounts of information efficiently, a prerequisite for solving complex problems. I think it is not that humans cannot handle complexity, but rather that memorizing large data sets is unnatural, and therefore hard. Second, he identified a flaw in the way the conventional pen-and-paper style of expressing and accumulating human thought constrains the natural symbol-structuring of the human thought process, which is not strictly serial but associative.

If a “symbiotic association” between man and machine were to emerge, an interface needed to be developed between the two different systems so that man could essentially “communicate” or interact with the artifact. This was a significant step forward in computational thinking, moving away from the conventional model in which the man has a clear understanding of his work and instructs the computer what to do beforehand. Sketchpad was a radically new way of computing, allowing the user to “talk to the computer graphically”. One key feature behind the “magic” of Sketchpad was a comprehensive memory storage system: the computer translated graphics into distinct objects so as to store them and their properties in designated locations. Another was the duplication feature, which made copies (“subpictures”) of a “masterpicture” and in turn allowed much greater flexibility in problem solving (mistakes, changes). Since symbol-structuring in the human mind was commonly understood to work in conceptual structures, Engelbart believed that visualization of concepts was a great start in building common ground between artifact and man. Difference equations built into the program also facilitated dynamic interaction between user and computer, because both man and machine could cognitively operate (draw inferences) within the symbolic paradigm of mathematics. “A mathematician is not a man who can readily manipulate figures; often he cannot. He is not even a man who can readily perform the transformations of equations by the use of calculus. He is primarily an individual who is skilled in the use of symbolic logic on a high plane” (Bush).


  1. Vannevar Bush, “As We May Think,” The Atlantic, July 1945.
  2. Douglas Engelbart, “Augmenting Human Intellect: A Conceptual Framework” (1962). As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.
  3. Ivan Sutherland, “Sketchpad: A Man-Machine Graphical Communication System” (1963).
  4. J. C. R. Licklider, “Man-Computer Symbiosis” (1960); “The Computer as a Communication Device” (1968).

Computation is a State of Mind

Computation does not equal computer science. While computer science is the study of computation for science and engineering, computational thinking can be applied to any discipline. A basic example is Shannon’s “mathematical theory of communication”, which opened a new way of thinking about information processing. It is thus perhaps more useful to think of computation as a particular way of “reckoning” and “calculating” to solve problems.

The following are some of the (non-key) computing concepts I gleaned from the readings and my coding journey:

  • Recursion
  • Interpretation
  • Parallel-processing
  • Abstraction (conceptualizing)
  • Compartmentalizing
  • Complexity
  • Redundancy
  • Error-correction
  • Representation
  • Iteration
  • Search
  • Time and space
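Two of the concepts above can be shown side by side in a few lines of Python: the same problem (a factorial) solved first by recursion, then by iteration.

```python
# Recursion vs. iteration on the same problem.

def factorial_recursive(n):
    # Recursion: the function restates the problem in terms of itself.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iteration: the same result, built up step by step in a loop.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

assert factorial_recursive(5) == factorial_iterative(5) == 120
```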

The key concepts, however, certainly make more sense to me now as a non-CS student, especially after navigating the Python tutorial on Codecademy. I recognized fundamental shifts in the way programming was allowing me to think. For example, I quickly realized how iterative the process of coding really is: you could have communicated the same instructions to another human being in a fraction of the time. It is now clear that while computation began as the process of doing mathematical calculations, over the years computer scientists have found far broader things to compute than arithmetic. The modern computer, accordingly, works by using automatic calculations to alter its own “operating instructions”. This also led me to consider conceptual differences between human cognition and automatic computers. One of them is storage, or memory: computers, unlike humans, have little trouble remembering something forever (until it is erased, of course).

The structural classification of the computer as a computational artifact, namely hardware (material artifact), software (abstract artifact) and architecture (liminal artifact), greatly helps in comprehending the complexity of these systems. It also correlates with Charles Babbage’s ardent dream to replace both “muscle” and “minds”. In popular modern culture, computer science is often seen as a mechanical, dehumanizing, “thinking like robots” pursuit, when it is really rooted in the humanities. Computational thinking is a way to study how humans can solve problems, not how a computer can (although computers used to be people). As Dasgupta says, “it is both a concept and an activity historically associated with human thinking of a certain kind.”

While computation is rooted in mathematics, we no longer use computers merely to solve arithmetic problems; we use them for much more. Computation thus does not deal with numbers as such, but with symbols that stand for something else. Most importantly, the science in computer science differs from the normative conception of science: computer science is a science of the artificial, building material artifacts that perform computations more efficiently than human beings along myriad vectors.


Texting as a Cognitive Technology

It is not difficult to recognize that the smartphone has already symbolically become an extension of the mind; even Chalmers brings this to light in his foreword. So much so that people today judge your character not by your kindness or jokes but by how fast you type or how quickly you reply. We don’t always “text as we think”, just as we don’t always “speak as we think”. However, the fundamental cognitive advantage texting presents is that we have much more room to exercise our thoughts before hitting send. We draft, type, edit, insert emojis, choose a meme, pick a sticker, add hashtags and so on before even deciding to hit send.

In the context of texting, the other person’s texts are (sort of) permanently etched digitally within the chat box, giving you access to what they said verbatim, forever. You don’t have to consciously make an effort to remember everything the other person says, as you would in a face-to-face conversation; because you can always go back to the chat box and read what they said again, much of our cognitive load is offloaded and, as a result, our performance increases. We think quicker, text multiple people at the same time (multi-communicating) and reply faster, not just with words but with emojis and memes that make our responses more meaningful.

One important way texting allows us to express “intersubjectively” accessible meaning is through the auto-correction built into the technology. When we type a wrong word (usually a misspelling), or sometimes a word from an inside joke, the bold red underlining of that word instantly signals to the mind that it is not a recognized word and therefore should not be used. For example, if you wanted to include a word from your native language, spelled in the English alphabet, because you want to communicate with someone (who also knows that language) in a different way by making a joke, the word would still be underlined, implying that it is not understood by the wider community (symbolically, since the type is in English).

The sign and symbol repertoires that we have today, and are constantly building on, do indeed contribute to our cognitive scaffolding, not by opening new areas of meaning but by releasing a force of agency to act on a new level of abstraction with each new label. At the same time, the signs and symbols we have are available to be utilized by any human being, waiting to be “anchored”, owing to their distributive capacity. Finally, I would say that Peirce was indeed on the right track to understanding how signs and symbols work, and I believe that many of the ideas in this week’s readings had already been expressed by Peirce, only without the labels later authors attached to them, which can make those later formulations seem more original than they are.



Martin Irvine, “Introduction to the Theory of Extended Mind and Distributed Cognition.”

Andy Clark and David Chalmers, “The Extended Mind.” Analysis 58, no. 1 (January 1, 1998): 7–19.

Jiajie Zhang and Vimla L. Patel, “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333–341.

Could Quantum Information Theory Solve the Symbol Grounding Problem?

Shannon’s foundational information theory (MTC) is simply not bothered with the meaning of the content, and doesn’t need to be. Since meaning is not a property of the signal but an event, the entire information transmission process, signal, system and processing together, constitutes the meaning of the message, medium or artifact as we understand it. This is why we know that a hand-written note means something different from a text message. “Hartley had to admit that some symbols might convey more information, as the word was commonly understood, than others. For example, the single word ‘yes’ or ‘no,’ when coming at the end of a protracted discussion, may have an extraordinarily great significance.” (Floridi) Therefore, if meaning relies on the currency of symbols within a meaning-making community, could probability theory be applied to explore the transmission of meaning? Although Shannon’s theory cheerfully neglected the meaning of information, he concluded “that apples and oranges are after all equivalent, or if not equivalent, then fungible.” Isn’t this the entire point of semiotics, of human symbolic cognition?
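Shannon’s measure can be made concrete in a few lines of Python: the information content of a symbol depends on its probability alone, which is precisely why an unlikely “no” at the end of a protracted discussion carries more bits than an expected “yes”. The probabilities below are illustrative assumptions of mine, not values from Shannon or Hartley.

```python
import math

# Information content (surprisal) of a symbol with probability p, in bits.
# Rare symbols carry more bits; meaning never enters the calculation.
def surprisal_bits(p):
    return -math.log2(p)

assert surprisal_bits(0.5) == 1.0    # a coin-flip answer: 1 bit
assert surprisal_bits(0.125) == 3.0  # an unlikely answer: 3 bits
```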

If data + meaning = information, where is the meaning? Is it implicit in the data until the data is transmitted and interpreted as information that is valuable? Or does the presence of meaning in the transmitted data prompt its interpretation as information that can be used? “How data can come to have an assigned meaning and function in a semiotic system like a natural language is one of the hardest questions in semantics, known as the symbol grounding problem.” (Floridi) In a quantum state, information is transmitted through the entanglement phenomenon: information is encoded only in the correlation, not in the entangled states themselves, eliminating the binary constraint of the classical case. Is this a relevant step toward a semiotic model of information?

From a strictly symbolic perspective, the text (sign vehicle) is translated by transistors into electrical signals that correspond to other signs, by way of “electromagnetic actions” at the arithmetic logic unit (ALU) of the computer, which in turn render a manifestation (by way of computation) of the original sign vehicle (the text). An image is converted into pixels, areas of lightness and darkness; sound pressure is converted into electrical current (charged free electrons in the semiconductor); numbers are converted into binary states by base-two notation; and letters are converted into numbers in the ASCII code (developed in 1963). Shannon realized early that the more discrete the signals, the more efficient the transmission of a message. This is primarily why binary won out over quaternary (4 states) and even quinary (5 states). A quantum transmission, however, involves superposition, where data can be encoded in multiple states at once owing to the nature of quantum particles. Would this involve a trade-off between efficiency and communication?
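The letter-to-number-to-binary chain described above can be shown concretely in a short Python sketch (the helper name is my own):

```python
# Each character maps to an ASCII code number, and each number to a
# base-two (binary) bit pattern -- signs translated into other signs.
def to_ascii_binary(text):
    return [(ch, ord(ch), format(ord(ch), '08b')) for ch in text]

for ch, code, bits in to_ascii_binary("Hi"):
    print(ch, code, bits)
# 'H' is ASCII 72 -> 01001000; 'i' is ASCII 105 -> 01101001
```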

How do we know what a text message means? First, a digital message today usually arrives tagged with a specific sender, which generally implies that it is human-motivated. Second, the message carries a degree of syntactic familiarity. “According to Shannon, a message can behave like a dynamical system, its future course conditioned by its past history.” The design of the electronic/digital system carries the electromagnetic current that drives the signals to be decoded back into their original form, according to the code used to encode the message in the first instance, and the system is designed around the amount of information that can be transmitted. I think that all communication acts can be understood merely by supplying signals at the signal level, provided the signals are commonly known to the communicators. Symbolic cognition occurs with the supply of these signals and the presence of “more developed” ones from which to draw inferences (through the interpretant).

Can the conduit metaphor be rendered obsolete by quantum information theory? Can quantum information theory provide better metaphors than even the “network”? Moreover, in a quantum state there also lies the potential for technology to shift its mode of communication from a transmission view to a higher-dimensional one. These are questions I am beginning to take a strong interest in.


  1. Luciano Floridi, Information, Chapters 1–4.
  2. James Gleick, The Information: A History, a Theory, a Flood.
  3. Ronald E. Day, “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.”
  4. Crash Course Computer Science, YouTube.
  5. John Preskill, “Making Weirdness Work: Quantum Information and Computation.”

Object-Oriented Programming as a Symbolic Expression

Object-oriented programming is a programming paradigm that works in terms of “objects” (not exactly in Peirce’s sense of the word). Adopting Peirce’s semiotic model, the source code, the instructions the programmer gives the computer, would be Peirce’s “sign” or “representamen”. The “interpretant” is the compiler converting the signs written in a particular programming language such as Python or Java into “executable” binary code, other signs that the computer comprehends. The “objects” these other signs allude to are physical memory locations on the circuit board, or data sets. And the “interface” corresponds to the functions and procedures through which the programmer chooses to interact with a particular data set.
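This mapping can be illustrated with a deliberately simple, entirely hypothetical Python class: the methods are the “interface”, and the data they act on is the shared object that both the programmer’s signs and the machine’s memory locations ultimately refer to.

```python
# A minimal, hypothetical example of an object with an interface.
class BankAccount:
    def __init__(self, balance=0):
        self._balance = balance  # the data the signs ultimately refer to

    # The interface: the chosen ways of interacting with that data.
    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

acct = BankAccount()
acct.deposit(50)
assert acct.balance() == 50
```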

Take the simple linear regression equation:

y = mx + c

Here x, y and m are variables, so we cognitively create a provision for the arbitrariness of the objects they denote. For c (the constant), however, we are capable of imagining a symbolic boundary that we know with certainty is unchangeable, even if we don’t yet know its actual value. A semiotically noteworthy feature of object-oriented programming is that only the programmer and the machine know the Peircean objects they are talking about, and they build on that shared knowledge.
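As a sketch of how a program treats m and c, here is a minimal least-squares fit in plain Python (my own illustration, not a library implementation): from (x, y) pairs, m and c are estimated as unknowns, and c is fixed for a given fit even before we know its value.

```python
# Ordinary least-squares fit of y = mx + c from data points.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = mean_y - m * mean_x  # the constant, recovered from the data
    return m, c

m, c = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # points on y = 2x + 1
assert (m, c) == (2.0, 1.0)
```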


  1. Primary Texts on Signs and Symbolic Thought, with Transcriptions of Unpublished Papers from Peirce’s Manuscripts, edited by Martin Irvine.

Is Python “like” or “a” language?

In this context, let us treat Python as an instance of a genre of interactive language: a type of programming language. Python is also an interpreted language (as opposed to a compiled one), which means your instructions are executed by a third-party program, the interpreter, that interacts with the computer on your behalf; this layer of mediation renders the computational process relatively slower. Even though it is already a “programming language”, I tried to think about how basic Python works “like a natural language”, partially inspired by the computational theory of mind.


So far we have the sign system, the code, consisting of sequences of characters known as strings, and whole numbers called integers. The signs on their own mean nothing to the computer, but are of value to the human interpreter communicating with the computer through the programming language. The symbols would be the various operations represented in forms such as <, >, ==, !, #, “ ”, and, or, not, and ( ). These figures “stand for something else”: an algorithmic function, an operation, or a comment. However, do these signs and symbols translate to Denning’s definition of “stuff”, an important component of a representational system?


It is very clear to programmers and non-programmers alike that software will not run if your syntax is not correct. This means following the designated rule system of structures allowed in the programming language. For example, to print a sentence, you simply cannot say:

Print hello world

The computer will explicitly tell you that you have made a syntax error.

You have to express it in the proper syntax, with parentheses and quotation marks, like this:

print("Hello World")

Python works very well with Boolean logic, namely the and, or and not operators, with which you can generate multiple translations or representations of the world within the program. “The symbols and their combinations express representations of the world, which have meaning to us, not to the computer. It is a matter of representations in and representations out.” (Mahoney) Isn’t it possible to think of the computer as a representational artifact, and not merely as a symbolic one, truly processing only symbols, with signs meaning nothing to it? Or do those two distinctions essentially mean the same thing? How does this contradict Simon’s grouping of both the computer and the human mind as physical symbol systems?
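A few lines of Python make the point concrete: and, or and not combine true/false representations of the world into new ones, generatively.

```python
# Boolean operators building new representations from old ones.
raining = True
have_umbrella = False

stay_dry = (not raining) or have_umbrella
assert stay_dry is False

# The full truth table for `and`, generated from the two base symbols.
table = [(a, b, a and b) for a in (True, False) for b in (True, False)]
assert table[0] == (True, True, True)
```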

I was glad that Denning mentioned that early pioneers in computing actively sought to distinguish information as “the meaning assigned to data”. This cleared up my confusion. However, why were so many people left unsatisfied? Of course the same dataset will give way to multiple dimensions of meaning, not just one linear way of making sense of something, marking a crucial precedent for a new way of conceptualizing the “semantics of data”.

Observable features of Python for how meaning is expressed:

  • Encyclopaedic levels: abstraction (from modules)
  • Generativity: “The symbols and the strings may have several levels of structure, from bits to bytes to groups of bytes to groups of groups of bytes, and one may think of the transformations as acting on particular levels.” (Mahoney)
  • Lexicon: a finite set of strings (a–z), integers (0–9), and statements
  • Externalized, material sign vehicles
  • Recursion: “But in the end, computation is about rewriting strings of symbols.” (Mahoney)

Can Jackendoff’s Parallel Architecture model be applied to a programming language like Python? Why not? “Almost all have exceedingly limited capacity for simultaneous, parallel activity; they are basically one-thing-at-a-time systems.” (Simon) However, why does Simon call the human mind/brain a physical symbol “artifact”? Is it because we constantly use the mind as a tool to strengthen our symbolic capacities through activities such as reading, writing or even performing?

One limitation of this exercise, I recognize, is that since all of our attempts at understanding anything, say a symbolic system, originate in thought, how is it possible to accurately compare other symbolic systems to language when we don’t have a consensus theory of the process of thought itself? So far we are not even sure whether the human mind originally thinks in terms of language, or whether we think in different ways at different interfaces. My final thought is that I agree more with the concept of a universal symbolic system than with that of a universal grammar.



  1. Selections from Semiotics, Symbolic Cognition, and Technology: A Reader of Key Texts (PDF).
  2. Daniel Chandler, Semiotics: The Basics, 2nd ed. New York, NY: Routledge, 2007. Excerpts.
  3. Martin Irvine, “Introduction to Meaning Systems and Cognitive Semiotics.”

Getting Real about Language, Film as a Language and Syntax of Film

Before I attempt to define language, it is worth pointing out that a language can be natural, programmed or even visual. While the differences among these types are plenty, there does seem to be an underlying set of common features that make them function the way they do, more often than not for the same purpose of expressing or communicating complex thought. From a cognitive perspective, a language is a symbolic system shared by two agents that is typically used to transmit information. For a meaning system to be a language it must contain certain innate properties. First, it should house a lexicon that determines language competence to some degree. Second, it must necessitate the use of a formal syntax. Third, the structures within the system must be capable of generating an infinite variety of meanings. Last but not least, the system must be familiar to more than one individual to properly function as a “language”. One immediate implication of employing this combinatorial framework to understand other symbolic systems, such as a programming language, lies in the fact that we simply cannot explicitly tell computers to do anything we want; this is common sense today. If that continues to hold true for a long time ahead (hypothetically), doesn’t this unique feature of language belong solely to human cognition?

It is commonly understood in film theory that although viewers are typically not literate in the grammatical conventions of film, they can understand the intentions behind cinematographic choices in a symbolic sense. Assuming that a movie (a visual symbolic system) works like a language, what would its syntactic structure look like? We can arrive at a close understanding of the conventions governing film from a structuralist point of view. According to structuralism, we can analyze how films convey meaning through the arrangement of shots, scenes and sequences. In this way, movies have a temporal component to how they express thought, similar to how natural human language has a phonological component. Shots in a film can be juxtaposed in particular ways, through pace, transition or direction, so as to convey a supplementary meaning of their own, independent of the content (formal elements) within them. This is best exemplified by the Kuleshov Effect, named after the Soviet filmmaker Lev Kuleshov and his famous experiment in which he assembled three distinct shots, one of a man with a blank facial expression, one of a bowl of hot soup, and one of an attractive woman, in different orders, only to find that each sequence told a different story even though the shots were the same.

The above example thus reiterates a common hypothesis in linguistics that the generative capacity of language comes from syntactic derivation. At this point, two major schools of thought on the conception of grammar dominate the field of linguistics, namely generative grammar and constraint-based grammar. I am also satisfied with Jackendoff’s description of the linguistic approach to studying language, which was hard to grapple with in the beginning: “The most rewarding way to investigate language is still from the mentalist point of view—focusing especially on what it takes for a child to acquire facility in a language.” Many of the challenges in the scientific study of language seem to stem from the largely under-charted territory of interfaces. Finally, it also strongly appears to me that language is just one means the human brain is accustomed to using in interacting with knowledge of the world, although I am uncertain of the veracity of such a bold claim.


  1. Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, 2003.
  2. Martin Irvine, “Introduction to Linguistics and Symbolic Systems: Key Concepts.”