“You can’t just make up a word”: The Semiotic-Pragmatic Approach to Philosophizing Strong AI

“All human thinking is in symbols” – C.S. Peirce

ABSTRACT

What is the most useful conception of the affordances of strong artificial intelligence, or artificial general intelligence (henceforth AGI), as it is realized, from which to draw practical assumptions about the intellectual future of humanity? Contrary to dangerously simplistic popular notions, the semiotic method tells us that human-like AGI will never be able to internalize meaning the way humans do, and as a result will not have the capacity to exhibit enough independent agency to warrant a “moral status” or “personhood”. This essay attempts to synthesize research from the fields of philosophy of AI, cognitive science, philosophy of mind and semiotics to address the consequences of automating the full range of symbolic-cognitive capabilities of the human species.

Key concepts: symbolic system; philosophy of mind; cognitive science; philosophy of AI; semiotics; intelligence. 


INTRODUCTION

Contemporary AI philosophy and popular science have advanced deterministic predictions about the nature of computing technological development, such as the “intelligence explosion”, the “singularity” and “superintelligence”. This speculative narrative assumes that once machines reach human-level intelligence (as a consequence of human scientific development), they will acquire the ability to “self-actualize”, and therefore express their own desires, needs and intentions. My goal in this paper is to challenge this notion, to show that such radical foresight may be fatally mistaken, and to explain why such a hypothetical event is neither plausible nor even remotely useful. The motivation driving AGI research is not so much to engender a new species that might keep humanity company or even replace it, but to automate complex human problem-solving in a human-machine symbiotic fashion. However, while this field of study remains largely conjectural, active research has been carried out since interest in AI sprang up in the 1950s. Whether AGI technology is realized or not, we are presented with profound scientific questions about the characteristic uniqueness of the human mind and its future.

My inquiry is rooted in a curiosity about AGI “machines” based solely on their prospective ability to communicate in natural language (since this is commonly agreed to be an important criterion for human-level intelligence). I suspect that applying the symbolic-cognitive hypothesis reveals important consequences for this long-term goal of AI research (to mechanize general intelligence). What motivates my inquiry is an intellectual dispute with the impression that human-level artefacts (artificial minds) will soon become self-conscious and develop a “mind of their own”. My contention is that AGI technology, from a phenomenological standpoint, will primarily be a tool, albeit a powerful one for augmenting human cognitive agency, one that may perhaps exponentially accelerate human intellectual progress. For the sake of argument, I use the term AGI as John Searle defines it: “a physical symbol-system capable of having a mind and mental states”.

The computer is, and always has been, an interface to the symbolic world represented inside it (which is itself an artefact of human symbolic cognition). Therefore an artificial intelligence machine will primarily remain a human-created symbol system. While there are compelling arguments against the plausibility of manufacturing intelligence based on the simulation objection (just because a computer simulates human thought does not mean it is the real thing), that framing misses a more basic point that is in plain sight. What will the utility functions of a machine that simulates human thought be? AGI will, therefore, be an extension of the “continuum” of human symbolic cognition. “Artefactual Intelligence, not Artificial Intelligence”: we will not be able to replicate human intelligence, but we may achieve just enough to enhance human cognitive capacities.

The consequence of studying the philosophy of AGI is that it re-informs the way we think about the nature of a speculative technology that purports to simulate all intelligent human behaviour. The symbolic-cognition hypothesis shines light on, and urges us to take the utmost advantage of, the fact that artificial intelligence is an amplification of the kind of cognitive offloading humans have been doing for millennia (roughly 50,000 years). Computation, itself an artefact of human symbolic cognition, is an automation of semiosis. As such, my inquiry does not particularly concern itself with the technological feasibility of such a machine, but rather aims to study the implications of one by means of insight borrowed from Peircean semiotics, cognitive science and the philosophy of mind.

“What deserves emphasis is not these mundane observations themselves, but their powerful consequences.” ~ John Haugeland

LITERATURE REVIEW

1. Current AGI Research

Computing technological progress has not been deterministic in the past, and there is no reason to assume that it will be for AGI. It would be rash to pronounce precisely on the nature of the AGI designs we will interact with in the future. However, based on what AI researchers are actively working on, we can perhaps map out what is to come for the sake of philosophical examination.

“the scientific goal of mechanizing human-level intelligence implies at least the potential of complete automation of economically important jobs (even if for one reason or another the potential is never activated)”. ~ Nils Nilsson

While narrow AI research pursues multiple different routes or “narratives”, the prevailing goal of all AGI or “strong AI” research seems to be similar. The computing technologies we have had so far have been built, explicitly or implicitly, with the goal of “augmenting human intelligence”. AGI research aims to extend this cognitive project: “to automate rational thinking to make the best decisions with limited computational resources.” (Ben Goertzel)

2. General Intelligence

AGI research is inherently an interdisciplinary field, converging insights from computer science, cognitive science, neuroscience, psychology and linguistics on the same problem of defining general intelligence. The standard usage of the term “intelligence” among modern human communities seems to refer to an ability to solve problems. This is important because the only way a social community can unanimously accept that a machine exhibits human-like intelligent behaviour, albeit externally, is if everyone can agree that it actually does. Minsky offers a flexible definition:

“Our minds contain processes that enable us to solve problems we consider difficult. “Intelligence” is our name for whichever of those processes we don’t yet understand.”

However, it is perhaps even more critical to outline how AGI researchers define the same term in the context of scientific development. Lo and behold, the field is rife with a growing variety of definitions. For instance, the physical-symbol-system hypothesis claims that a symbol system, with the ability to store and manipulate symbols, is necessary for intelligence. Another common view is that the ability to process and communicate in natural language is a testament to intelligent behaviour. Levesque brings up a functional approach, essentially saying that, for a more productive understanding, we ought to shift the discussion from defining what counts as intelligence to what counts as intelligent behaviour (an agent making intelligent choices). Levesque also draws a crucial distinction within human intelligence, between the “mechanical” and “ingenuity”, which I think is worth noting. The mechanical is what symbolic-cognitive artefactual activities like algebra demand from us, the strict procedurality of following rules and syntax (sketched below), whereas ingenuity requires a meaningful absorption.
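To make the “mechanical” half of this distinction concrete, here is a minimal sketch of blind, rule-governed symbol rewriting, the kind of strict procedurality algebra demands. The nested-tuple representation and the three rewrite rules are my own illustrative assumptions, not Levesque’s notation.

```python
# Toy illustration of "mechanical" symbol manipulation: the procedure follows
# the form of the expression alone and never consults what "+" or "*" mean.

def simplify(expr):
    """Recursively apply purely syntactic rewrite rules to a nested tuple
    expression such as ("+", ("*", "y", 1), 0)."""
    if not isinstance(expr, tuple):
        return expr                       # a bare symbol or number: nothing to rewrite
    op, left, right = expr
    left, right = simplify(left), simplify(right)
    if op == "+" and right == 0:
        return left                       # x + 0  ->  x
    if op == "*" and right == 1:
        return left                       # x * 1  ->  x
    if op == "*" and right == 0:
        return 0                          # x * 0  ->  0
    return (op, left, right)

print(simplify(("+", ("*", "y", 1), 0)))  # prints: y
```

Nothing in this procedure requires, or even permits, a grasp of what the expressions are about; ingenuity, on Levesque’s distinction, is precisely what such rule-following leaves out.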

A quick retrospective survey of the history of AGI ventures and designs reveals that much of AGI research has been directed at natural language processing. This makes sense: to mechanize human-level intelligence, automating the species’ primary meaning-making system sounds like a good start. Notable milestones include ELIZA, built in the 1960s to model human-like interaction, and IBM’s Watson, which in 2011 beat the best human players at Jeopardy!; the Turing test, meanwhile, has been commonly accepted as a close approximation of a confirmation of general intelligence in an artefact. Perhaps a more accurate test, such as eliciting an insightful response from an intelligent machine when it is given a piece of artwork, a song or a film, would probe its grasp of the relational function of the “interpretant”.

“…until the last have no intelligence at all. But that does not yet resolve the more basic paradox of mechanical reason: if the manipulators pay attention to what the symbols mean, then they can’t be entirely mechanical because meanings exert no mechanical forces; but if they ignore the meanings, then the manipulations can’t be instances of reasoning because what’s reasonable depends on what the symbols mean. Even granting that intelligence comes in degrees, meaningfulness still does not; that is, the paradox cannot be escaped by suggesting that tokens become less and less symbolic, stage by stage. Either they’re interpreted, or they aren’t.” (Haugeland)

[Figure captions: IBM’s ‘Watson’: what we already have; what we expect of an AGI design]

3. Semiotic Theory

The human invention of the stored-program computer ushered in a revolutionary distinction in the functions of symbols (data): between those that can do things and those that mean things. Here, I lay out two important semiotic principles that guide my discussion.

PRINCIPLE 1: The first signs have a material-physical-perceptible form

“So we normally take symbols to be objects in one domain that, by virtue of similarity or convention, stand for something, objects in another domain. The first domain is usually concrete and easily seen or heard, while the second one may be less concrete, less accessible, and sometimes purely abstract.” (Levesque)

“A sign is something by which we know something more” (C.S. Peirce). Here the sign is the perceptible-physical-material substrate, and the “something more” is the abstraction, which is another set of signs and symbols.
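Purely as an aid to intuition, this principle can be pictured as a small data structure: a perceptible form that stands for “something more”, where interpreting it yields yet another sign. The field names loosely follow Peirce’s terms, but the structure and the example are my own invented shorthand, not a faithful formalization of his theory.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sign:
    representamen: str                     # the perceptible-physical-material form
    stands_for: str                        # the "something more" the form points to
    interpretant: Optional["Sign"] = None  # a further sign produced by interpretation

# The written word "rose" (a perceptible form) stands for a kind of flower;
# interpreting it can produce a further sign ("romance"), which can itself be
# interpreted, and so on: semiosis never bottoms out in a sign-free meaning.
rose = Sign("rose", "a flower", Sign("romance", "a cultural association"))
print(rose.interpretant.representamen)     # prints: romance
```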

PRINCIPLE 2: Dialogism

Hobbes held that human thought is inner discourse. This makes sense because, according to semiotic theory, meaning is dialogic: it is activated by symbols, and we have already established that, on the Peircean view, all human thinking is in symbols. The most pertinent role in semiosis is that of the interpretant, which performs the relational or mediating function of producing another set of symbols for the first sign. This is the key activation that anchors an entity’s cognition in shared ideas and the outside world.

4. Symbolic Cognition

This concept treats symbolic cognition as the core human “operating system”, the source of computation and of all other artefactual cognitive agencies, and hypothesizes that cognitive faculties such as reasoning and decision-making result from “automatic” symbol manipulation. This unique human symbolic capacity to make meaning, however, depends on shared symbols within a community rather than occurring within an individual’s mind. Human symbolic cognition engages with symbols for two purposes: communication (sending information) and problem-solving (computation). In the realm of cognitive technologies, however, the two activities are interdependent: information processing is required for communication across time and distance, and communication (the control of information flow) is required for pulling the right data into a computation. Over evolutionary time, humans have become smarter because they have been able to build higher levels of abstraction on top of what previous generations left behind in extended memories, in a continuum of symbolic-cognitive advancements (the cumulative or ratchet effect).

“Symbolic cognition enables us to “go beyond the information given”  (Bruner, Peirce) in any concrete instance of an expression or representation. We go beyond physical perceptions to activate meanings, ideas, higher levels of abstraction, values, and responses that are not physical properties of symbolic forms but mental and physical associations that we learn how to correlate with them — and “output” our meanings in further symbols, actions, or emotions.” (Martin Irvine)

The crux of my argument is that it is highly unlikely that we can build artificial “minds” capable of assimilating, as humans do, the meanings of the signs and symbols they manipulate, because it takes a community with shared concepts and ideas to make an exchange, not a single superintelligent entity in isolation. The process of meaning-making, in other words semiosis, transpires not in an individual human mind but in a community of minds. AGI technology could mediate, communicate, transmit and store meanings, but, according to semiotics, it is impossible for an artefact to realize meaning itself. With the advancement of NLP, computers could begin to process the vast array of information on the web, but the results will only point to more “signs and symbols” arrived at by computational routes.

Perhaps no other evidence underscores my impression better than John Searle’s Chinese Room. This thought experiment places a person with no knowledge of Chinese inside a room, with native Chinese speakers outside. The non-Chinese speaker is provided with a rule book specifying exactly which Chinese characters to return in response to those received. As a result, the two parties are able to carry on what looks like a meaningful conversation in Chinese (by slipping papers under the door), even though the non-Chinese speaker has no sense of the meanings carried by the symbols. The thought experiment demonstrates that since the symbols themselves contain no semantic content, the machine computing them will never understand the meanings those symbols elicit in use. However, this raises another fundamental question for AGI: does general intelligence require “meaning-making”, or is “symbol-processing” sufficient? After all, the point of developing AGI is to help us (humans) solve problems, not, to put it dramatically, to relinquish the responsibility for human civilization to a colony of robots. Information theory was about preserving structures across time and distance; those preserved structures serve the process of meaning-making, aka semiosis, but that does not mean the electricity on its own understands the meaning of the information humans exchange with its help.
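The rule book at the heart of the thought experiment can be pictured as nothing more than a lookup from input shapes to output shapes. The sketch below is only an illustration of that point; the phrases and canned replies are placeholders I have invented, not anything from Searle’s paper.

```python
# The "rule book" as a bare lookup table: input shapes are matched and the
# prescribed output shapes are copied out. No step requires knowing what
# either string means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "The weather is fine."
}

def person_in_the_room(slip_of_paper: str) -> str:
    """Follow the rule book; fall back to a stock apology for unknown shapes."""
    return RULE_BOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_the_room("你好吗？"))   # a fluent-looking reply, with no understanding anywhere
```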

“The Turing machine itself has no way of guessing what the symbols are intended to stand for, or indeed if they stand for anything.” (Levesque)

“A sign, or representamen [the physical perceptible component], is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign, or perhaps a more developed sign. That sign which it creates I call the interpretant of the first sign.” – C.S. Peirce

The only kind of “interpretation” the system engages in is the actions it performs when designated symbols “do things”, rather than “mean things”. Hence, what humans naturally do with symbols, as a result of the species’ innate semiotic competence, is make meaning (and compute only as a secondary function), whereas what computers naturally do with symbols is merely compute.

“Interpretation implies a special form of dependent action: given an expression, the system can perform the indicated process, which is to say, it can evoke and execute its own processes from expressions that designate them.” (Newell and Simon)
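Newell and Simon’s sense of “interpretation” can be pictured with a toy expression evaluator: an expression designates a process, and the system evokes and executes it. The little PRINT/ADD mini-language below is invented purely for illustration; it is not their notation.

```python
# Symbols that "do things": the first element of each expression designates a
# process, which the system evokes and executes. Nothing in this machinery
# requires that the symbols "mean things" to the system running it.

def execute(expression):
    if not isinstance(expression, tuple):
        return expression                      # a literal value
    op, *args = expression
    values = [execute(a) for a in args]        # evaluate sub-expressions first
    if op == "ADD":
        return sum(values)
    if op == "PRINT":
        print(values[0])
        return values[0]
    raise ValueError(f"no process designated by {op!r}")

execute(("PRINT", ("ADD", 2, 3)))              # the expression evokes processes; prints 5
```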

CONSEQUENCE 1: Why self-actualization is unlikely upon the “intelligence explosion”

The “display” of intelligence is not the real thing. A machine may be able to manipulate symbols accurately on the basis of other human symbol systems. But since meanings are not properties of signs or symbols themselves, but of the process of interpreting them, a process that is not directly observable, we will never be able to establish that the machine is engaging in meaningful communication. Such a machine can only “represent” meaning, not internalize it, because symbolic cognition is an exclusively human (species-specific) capacity. Hence, a machine as intelligent as humans will never be able to “express” meaning that arises from its own desires, values or goals. It can only represent meaning that is being communicated, transferred and exchanged among humans across space and time.

“What an automatic formal system does, literally, is “take care of the syntax” (i.e., the formal moves).” (Haugeland)

CONSEQUENCE 2: Intersubjectivity

For meaning to be “meaningful”, it requires two or more agents. It follows, on semiotic theory, that intelligence, as the term is coined in the technology community today, does not require an understanding of semantic content.

“Meanings are public, not private.” 

“Entering language — and crossing the symbolic threshold — means entering a world of meanings that are always collective, intersubjective, public, and interpersonally shared and communicable.” ~ (Martin Irvine)

The “machine bias” produced so far by narrow AI systems could have been predicted had it been theorized in advance from a semiotic perspective. Symbols may or may not mean anything. They only stand for other symbols, which stand for something else (surprise: more symbols). Even if a computer can “process” natural language, in actuality it is converting the signs of the language, its words, phrases and sentences, into numerical representations that do not preserve the intended meaning of the initial signs.
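What such “processing” typically amounts to can be seen in miniature below: words become integer indices (and, in real systems, vectors of numbers), that is, symbols standing for yet other symbols. The tiny vocabulary is an invented example, not the encoding scheme of any particular NLP system.

```python
# Words are mapped to arbitrary integer indices; the machine manipulates the
# numbers, and the original signs can be recovered, but the meaning was never
# "in" the numbers at any point.
sentence = "the rose stands for love"
vocabulary = {word: index for index, word in enumerate(sorted(set(sentence.split())))}
reverse = {index: word for word, index in vocabulary.items()}

encoded = [vocabulary[word] for word in sentence.split()]
print(encoded)                                  # prints: [4, 2, 3, 0, 1]
print(" ".join(reverse[i] for i in encoded))    # prints: the rose stands for love
```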

“Meanings and values learned and discovered in our sign systems are not “located” “in” anyone’s head or anywhere, but are cognitive-social events that we activate and instantiate by using our “cognitive-symbolic resources.” (Martin Irvine) 

Imagine machines can finally process natural language as efficiently as humans do. It is worth thinking about what it would look like when two or more such machines communicate with one another. Will it be meaningful? Or will it just be a set of logical responses to a set of questions? According to semiotics, such a machine will never be in a symbolic-cognitive position to say “I understand you”, “You know what I mean?” or “Was that clear?”, except of course if it was programmed to use those expressions as conversation fillers to display intelligent behaviour, a kind of faking it.

CONSEQUENCE 3: Externalized, Distributed and Collective Cognition

If we ever reach the point of automating the complete linguistic competencies of the human species, AGI will still only be an interface to a more powerful distributed human cognition, according to the concept of “delegated agency or cognition” first proposed by Latour. Consequently, human-level intelligent machines will remain an “external symbol storage”, lending themselves to an extension of human symbolic-cognitive capabilities rather than a replacement.

Cognitive semiotics (embodied cognition) also offers a different view of the mind-body problem: essentially, all the physical stuff points to, or “stands for”, the mental stuff such as intentions, beliefs, desires, thoughts and values. These sign relations, or sign functions, are the fundamental mechanism of human meaning-making. Hence, our intellectual capacities do not reside solely in our minds but exist pervasively across networks of meaning-making in human communities.

CONCLUSION

Understanding AGI through the semiotic method reveals implications for the philosophy of AI, including the premature drive to anthropomorphize potential human-level intelligent artefacts. It helps us see clearly that cognitive artefacts, as products of human symbolic cognition, cannot even remotely pose an existential risk, essentially nullifying the false speculations fear-mongered by popular scientific experts. What do we get as a consequence of automating human symbolic cognition?

Many of the dangers supposed to accompany the hypothetical event of a technological singularity emerge from an overconfident extrapolation of current approaches to narrow AI, which are fundamentally different from those needed to achieve artificial general intelligence. We can therefore put on a show to make a machine look as if it truly understands the world through its interactions, but we have no valid test to prove it. Since signs are literally anything in the physical universe, automating the “core human operating system” can strengthen communication and computation, perhaps naturally leading to better connectivity and more efficient complex problem-solving.

Granted, once we have formalized the whole of symbolic cognition, machines might be able to form thoughts based on the rules we teach them; that is, after all, how humans “make” meaning (it is rule-governed). However, what purpose will that serve? What will the utility functions be? My aim is not merely to pose rhetorical questions, but to urge us to think deeply about the effects of developing ever more symbolic-cognitively powerful computing technologies. More relevant questions would be: what kinds of cognitive activities are we most likely to offload? What will that mean for human experience in a much more technologically advanced world?

While we may succeed in automating human-level intelligence (perhaps even human-level linguistic competencies) in a material artefact, semiotics helps us accept that AGI machines will never be as semiotically competent as human beings are and have been since the emergence of Homo sapiens.


REFERENCES

Jiajie Zhang and Vimla L. Patel. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.

Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach”, 3rd Ed., 2009.

Martin Irvine, “The Grammar of Meaning-Making: Signs, Symbolic Cognition, and Semiotics.”

Martin Irvine, “Introduction to Linguistics and Symbolic Systems: Key Concepts”

Martin Irvine, “Semiotics 1.0: Basic Introduction.”

Martin Irvine, “Introduction to the Theory of Extended Mind and Distributed Cognition”.

Ben Goertzel and Cassio Pennachin, eds., “Artificial General Intelligence”, Springer, 2007.

Nils Nilsson, “Human-Level Artificial Intelligence? Be Serious!”, AI Magazine, Volume 26, Number 4 (2005).

Newell, Allen and Simon, Herbert, “Computer Science as Empirical Inquiry: Symbols and Search”, Communications of the ACM, Volume 19, Number 3, March 1976.

Allen Newell, “Intellectual issues in the history of AI”, Carnegie-Mellon University, 10 November, 1982.

Marvin Minsky, “The Society of Mind”, Simon & Schuster, 1988.

John Haugeland, “Artificial Intelligence: The Very Idea”, The MIT Press, Cambridge, Massachusetts, 1985.

Ray Kurzweil, “The Singularity is Near: When Humans Transcend Biology”, 2005.

Ray Kurzweil, “How to Create a Mind”, 2012.

Patrick Tucker, “The Singularity and Human Destiny”, The Futurist, 2007.