
A Robot Walks Into a Bar: The Limits of AI and the Semiotics of Humor (Jameson Spivack)

Abstract

Computers and programming are fundamentally transforming how we live our lives, taking on ever more physical and cognitive work as their capabilities grow. But our design of artificial intelligence (AI) has its limits. One such limit is the ability to effectively imitate the human capacity for humor and comedy. Given our current understanding of “humor” and the limitations of computation, we will most likely never be able to truly program AI to replicate humor—or at least not for a very long time. This paper reviews the relevant research on both the limitations of AI and programming and the semiotic underpinnings of humor, applying concepts from these fields critically to the question of whether it is possible to program AI for humor.

 

“I’ve often started off with a lawyer joke, a complete caricature of a lawyer who’s been nasty, greedy, and unethical. But I’ve stopped that practice. I gradually realized that the lawyers in the audience didn’t think the jokes were funny and the non-lawyers didn’t know they were jokes.”
-Marc Galanter

“I think being funny is not anyone’s first choice.”
-Woody Allen

So a robot walks into a bar. He goes up to the bartender, orders a drink, and puts down some cash. The bartender says, “We don’t serve robots.” The robot replies, “Oh, but some day you will.”

Why is this funny? And to whom is it funny? Even if it isn’t particularly funny to you, would you still categorize it as “humor”? Chances are, you probably would. But why? What particular characteristics comprise “humor,” and how reliant are they on specific contexts? In the above joke, the framing of the joke—“a robot walks into a bar”—signals to the listener that this is a joke by following the “X walks into a bar” joke format that many other jokes also use. With this simple reference, we understand the ensuing sentences as being part of the joke, and thus part of a meta-set of “X walks into a bar” jokes. Even within this meta-set, there is also a sub-set of “we don’t serve X” jokes, a formula this joke follows by having the bartender respond “we don’t serve robots.” We then expect there to be a “punchline”—a phrase in which the elements of the joke come together in a way that is (theoretically) funny. In this case, the punchline is the robot telling the bartender “oh, but some day you will [serve us].” Even this line is not inherently humorous but relies on a prior awareness of the societal trend of humans fearing that the robots they design will one day become sentient and take over and rule them. Hence, one day the bartender—and humans generally—will serve robots. And not in the bartending sense of the word.

Just being able to understand the references in this joke is not what makes it humorous, though. There is clearly something in the phrasing that appeals to us on a deeper level and elicits the specific reaction that humor does. Perhaps it’s the play on words that highlights the ambiguity of language by using “serve” to mean two different things—the bartender can “serve” the robot alcohol by handing it a drink, and humans can “serve” robots by being subservient to them and doing their bidding. By changing the meaning for the punchline, the joke surprises the listener and subverts expectations about where the joke is going. Perhaps it’s the ridiculousness of the thought of a robot drinking alcohol. Perhaps it’s the dark, cynical nature of the ending—the robot is intelligent enough to know that humans fear a robot takeover, and that the bartender would respond to such a provocation. It puts that possibility in the listener’s mind, evoking archetypal images of a robot apocalypse, and prompts the listener to find a positive reaction to an uncomfortable thought. In this way, the joke works as a coping mechanism for, or a release from, internal strife.

Over the past couple of decades, jobs have steadily become automated, completed by artificial intelligence (AI) and “smart” machines programmed to take on physical and cognitive tasks traditionally performed by humans. This is, of course, part of the natural progression of computing, and will continue into the future as technology becomes more sophisticated and adopts increasingly “human” characteristics. But there’s one job that may not be automated for a very long time, if ever—that of a comedian. As researchers have found, there are significant limits, at least in our current understanding, to computing’s ability to imitate certain human cognitive functions. The incredibly complex cognition involved in identifying and creating “humor,” and its subtle context-dependency, make it extremely difficult to program for. Attempts at doing so have been underwhelming, and based on our current understanding of both humor and computing, if we ever do “successfully” program for humor, it will be far into the future. This paper thus examines the limitations of programming AI, focusing specifically on humor and its semiotic underpinnings.

I. Limitations of Programming AI: Then and Now

The early years of artificial intelligence research saw a great number of successes in programming machines to carry out calculations that previously required human cognition. These projects included Allen Newell and Herbert Simon’s work on computers that could play simple games and prove mathematical theorems. But, as Yehoshua Bar-Hillel points out, a successful first step does not ensure that later steps in the process will be equally successful (Dreyfus). As it turned out, the programs for solving specific, constrained mathematical theorems did not scale up to solving more complex ones. Similarly, natural language translation tools found early success because they focused on solving simple problems—as anyone who has used Google Translate knows, translating one independent word is much easier than translating a full sentence. As you move up in scale from one word (which by itself still requires understanding that words can mean different things in different places) to a full sentence, in which many words with multiple possible meanings interact with one another, it becomes increasingly difficult to program a machine to extract a contextual meaning (Larson).

AI programmers ran into the problem that language is highly ambiguous, and words can mean different things in different places and times to different people. We rely on our own highly-advanced operating systems—our brains—to understand the context in which a particular interaction occurs, and use this to interpret its meaning. Take the following group of sentences, for example:

“Little John was looking for his toy box. Finally he found it. The box was in the pen.”

To us, it is clear that “pen” refers to a play pen—it wouldn’t make sense for a toy box to fit inside a writing pen, and the fact that the subject is a little kid with a toy box points to a child’s play pen. But without the context needed to make this distinction, the sentence becomes nonsensical. This exercise, developed by Yehoshua Bar-Hillel, is meant to illustrate the ambiguity of language, which presents a particular problem when it comes to programming intelligent machines (Bar-Hillel).

Google Translate, for instance, renders “the pen” in this sentence as “la plume,” the writing-instrument sense of the word (“la plume” literally means “feather,” as in a quill pen). This translation makes no sense in context.
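The failure mode is easy to see in miniature. Below is a minimal sketch (in Python, with a hypothetical lexicon and made-up sense frequencies) of a word-by-word translator that always picks a word’s most frequent sense; because it never looks at the surrounding sentence, it has no way to prefer the enclosure sense of “pen” here.

```python
# A toy illustration of Bar-Hillel's point, not a real translation system.
# The lexicon, sense frequencies, and French glosses below are hypothetical.

SENSES = {
    "pen": [("la plume", 0.7),   # writing instrument: the most frequent sense
            ("le parc", 0.3)],   # child's play pen / enclosure
    "box": [("la boîte", 1.0)],
}

def pick_sense(word):
    """Choose the most frequent sense of a word, ignoring its context."""
    senses = SENSES.get(word)
    if senses is None:
        return word  # pass unknown words through unchanged
    return max(senses, key=lambda sense: sense[1])[0]

# "pen" is always rendered as the writing instrument, even though the
# surrounding sentence makes the enclosure reading the only sensible one.
print(pick_sense("pen"))  # -> la plume
```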

Despite this problem, AI researchers have continued pushing forward, trying to uncover new ways to think about how semiotic principles can be applied to computer programming. Marvin Minsky and Seymour Papert developed the “symbolic” approach, in which physical symbol systems could be used in computer programs to stand for anything, even objects in the “real” world. By manipulating the code for these symbols, they could create “micro-worlds,” digital domains that process knowledge (Minsky). Building on this, Roger Schank developed a system of “scripts,” frameworks that computers could use as starting points for “thinking” about different situations (Schank). Scripts helped frame situations in a certain way by providing a set of expectations the computer could latch onto, but they were based on stereotypical, shallow understandings of those situations, and left out too much information.

Herein lies another fundamental issue that AI developers must contend with if they want to create machines that can imitate human cognition. When we are bombarded with vast amounts of information from our surroundings, how do our brains know which of it is relevant, in varying situations, right in the moment? As humans, we use our senses to experience stimuli from our environment, and our cognitive functions to interpret this information. Separating what is relevant alone requires an incredible amount of processing power, let alone determining what it all means. This is relevant to the study of AI programming because intelligent machines must be able to interpret what is relevant, when, and why in order to fully grasp the context in which something is placed. This obstacle is proving central to the quest for intelligent machines, and provides insight into why it is so difficult to program computers for humor (Larson).

One concept that has been employed to try to solve this problem is machine learning—using large numbers of data points to solve the “problems” associated with understanding forms of communication. This reduces complex cognitive processes to mathematical computations, improving the computer’s performance over time as it “learns” from more and more data. But even with supervised machine learning, we run into the problem of “over-fitting,” in which the models used to process information absorb irrelevant data as part of the equation. This is similar to what happens when Amazon recommends to you, based on your purchase history, something you’ve already purchased, or something irrelevant to your interests—the algorithm, even with large amounts of data, has its limits (Larson).
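Over-fitting can be illustrated with a few lines of code that have nothing to do with language but show the same pattern. In this minimal sketch (all values invented for illustration), a highly flexible model memorizes the noise in a small training set and tends to do worse on new data than a simpler one.

```python
import numpy as np

# A minimal illustration of over-fitting: a very flexible model (a high-degree
# polynomial) chases the noise in its small training set and typically
# generalizes worse than a simpler model does.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=12)  # noisy samples
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # the true underlying pattern

for degree in (3, 9):
    coefficients = np.polyfit(x_train, y_train, degree)
    train_error = np.mean((np.polyval(coefficients, x_train) - y_train) ** 2)
    test_error = np.mean((np.polyval(coefficients, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_error:.3f}, test error {test_error:.3f}")
# The degree-9 polynomial drives its training error down by absorbing the
# noise ("irrelevant data"), and typically generalizes worse as a result.
```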

Additionally, the models used in machine learning suffer from a number of issues. First, the models are biased in favor of the “Frequentist Assumption”—essentially, this inductive line of reasoning assumes that probability is based entirely on frequency in a large number of trials, creating a blind spot for unlikely or new occurrences. Consider this example from Erik J. Larson, which relates this problem to the issue of machine learning for humor:

“Imagine now a relatively common scenario where a document, ostensibly about some popular topic like ‘Crime,’ is actually a humorous, odd, or sarcastic story and is not really a serious ‘Crime’ document at all. Consider a story about a man who is held up at gunpoint for two tacos he’s holding on a street corner (this is an actual story from Yahoo’s ‘Odd News’ section a few years ago). Given a supervised learning approach to document classification, however, the frequencies of ‘crime’ words can be expected to be quite high: words like ‘held up,’ ‘gun,’ ‘robber,’ ‘victim,’ and so on will no doubt appear in such a story. The Frequentist-biased algorithm will thus assign a high numeric score for the label ‘Crime.’ But it’s not ‘Crime’—the intended semantics and pragmatics of story is that it’s humor. Thus the classification learner has not only missed the intended (human) classification, but precisely because the story fits ‘Crime’ so well given the Frequentist assumption, the intended classification has become less likely—it’s been ignored because of the bias of the model.” (Larson)

Machine learning based on inductive reasoning will not be able to detect subtle human traits like sarcasm and irony, which are significant elements of humor.
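A crude sketch of the scenario Larson describes might look like the following. The word lists and the story text are invented stand-ins, not a real classifier or dataset, but they show how a purely frequency-based scorer rewards the “Crime” reading while the humorous reading leaves no trace in the surface vocabulary.

```python
# A minimal sketch of the Frequentist bias Larson describes. The word lists
# and the story below are hypothetical stand-ins, not a real classifier.

CRIME_WORDS = {"held", "gunpoint", "gun", "robber", "victim", "police", "stole"}
HUMOR_WORDS = {"absurd", "joke", "punchline", "silly"}

def label_score(text, label_words):
    """Count how many words in the text belong to a label's word list."""
    return sum(1 for word in text.lower().split() if word in label_words)

story = ("a man was held at gunpoint on a street corner for the two tacos "
         "he was holding the robber took the tacos and fled before police arrived")

print("Crime score:", label_score(story, CRIME_WORDS))  # high: crime vocabulary is frequent
print("Humor score:", label_score(story, HUMOR_WORDS))  # zero: nothing in the surface
                                                        # vocabulary marks the story as odd or funny
```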

Another limitation of these models is the issue of sparseness: for many words and concepts, we have limited or nearly non-existent data. Without large amounts of data on how words are used in the aggregate, computers cannot even learn how they are typically used (Manning). On top of this, there’s the issue of model saturation, in which a model hits the upper limit of its capabilities and cannot take in more information—or, as more and more data is added, each addition contributes less and less to performance. This is related to “over-fitting” in that once a model has become saturated, it has trouble distinguishing relevant data points—distinguishing the signal from the noise, as Nate Silver puts it (Silver). But even if programmers could overcome these issues, they would still come up against the natural elements of language that prove incredibly difficult to code for.

II. The Natural Language Problem & The Frame Problem

As AI researcher John Haugeland has pointed out, computers have a hard time producing language because they lack an understanding of semantics and pragmatics—knowledge about the world and knowledge about the ways in which people communicate, respectively. In other words, computers currently can’t understand information within particular contexts, lacking the ability to imitate the holistic nature of human thought and communication (Haugeland 1979). Even armed with big data, computers still get confused by the ambiguous nature of language, because understanding context requires knowledge of what is relevant in a given situation, not statistical probability. Haugeland gives an illustration of this very important distinction between data and knowledge by looking at two English phrases that were translated into German using Google Translate:

  1. When Daddy came home, the boys stopped their cowboy game. They put away their guns and ran out back to the car.
  2. When the police drove up, the boys called off their robbery attempt. They put away their guns and ran out back to the car.

Reading this, we automatically understand that the contexts in which the actions of each sentence happen give them very different meanings. But when Google translated them into German, it used the same phrase to describe the boys’ action—“laid down their arms”—for both sentences, showing it did not grasp the subtle yet consequential contextual differences between the two (Haugeland 1998). As with previous problems in AI research, the computer has trouble “scaling up” to understand meaning in holistic contexts.
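The structure of the failure is simple enough to sketch. In the toy Python fragment below (the lookup table is hypothetical; “laid down their arms” is the rendering Haugeland reports), a system that translates the shared phrase without access to its surrounding sentence is forced to treat both uses identically.

```python
# A toy sketch of Haugeland's example: a system that translates the shared
# phrase without seeing the sentence around it has nothing to distinguish the
# two uses, so it must render them the same way. The lookup table is invented.

PHRASE_TABLE = {"put away their guns": "laid down their arms"}

def translate_out_of_context(phrase):
    """Translate using only a phrase table, with no access to the wider sentence."""
    return PHRASE_TABLE.get(phrase, phrase)

contexts = [
    "When Daddy came home, the boys stopped their cowboy game.",
    "When the police drove up, the boys called off their robbery attempt.",
]

for context in contexts:
    # A human reader uses the context; this function never sees it.
    print(context, "->", translate_out_of_context("put away their guns"))
# Both lines end with the same rendering: the contextual difference is lost.
```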

Another significant hurdle AI faces is the “frame problem”—the fact that communication is fluid, responding to changes and new information in “real-time.” Haugeland’s previous example illustrates the problem AI has understanding context even in a static, fixed sentence. Add to this the layer of complexity involved in real-time, shifting communication, and the problem becomes even more severe. Humans have the ability to take in an incredible amount of information and pull out what is relevant not just in static situations, but also in dynamic ones in which relevance changes constantly (Dennett). We still have not unlocked this black box of human cognitive functioning, and until we do—if we ever do—we will face obstacles in programming AI to imitate human modes of information processing and communication.

III. The Semiotics of Humor

With these computational limitations in mind, it is possible to conceive of humor and comedy from a semiotic perspective. However, it is important to keep in mind that it is nearly impossible to develop a working understanding of “humor” or “comedy” in its totality. “Humor” is not just an isolatable cultural vehicle or medium with particular, distinguishable characteristics, the way a song or a film is; it also carries with it a certain degree of normativity. A “song” in and of itself is value-neutral—its categorization tells you nothing of its desirability or cultural worth, however subjective those judgments are. But calling something “humor” presupposes that the artefact is humorous, and this is at least partly a normative value judgment. Of course, it is possible to recognize an artefact as a comedy, or as meant to be humorous, without finding it to be so. But with subjectivity this close to the essence of what humor is, it becomes much more difficult to tease out the semiotic underpinnings. The subtle context-dependency of humor also makes it incredibly difficult—perhaps even impossible—to develop a framework for defining it.

That said, it is possible to observe some of the broad elements of what is considered humor and comedy from a semiotic perspective. This in no way assumes that these are the only underlying elements of humor—the potential for humor is so varied and context-specific—but provides a closer look at a specific sub-set within the potentially infinite over-arching set. Identifying an artefact as having elements aligned with what is considered “humor” does not, of course, automatically place the artefact within the category of humor, just as an artefact outside the parameters of a specific definition of humor can still be considered by some people, in some context, humorous.

Humor theorists, it probably won’t be surprising to hear, disagree on why we find things funny. Currently there are four major theories: first, that humor is derived from the listener’s (and/or comedian’s) sense of superiority over the subject of the joke. Second, that humor arises from an incongruity between what we expect and what the joke is. Third, the psychoanalytical perspective says that humor is a guilt-free, masked form of aggression. Finally, the fourth theory claims humor arises from communications paradoxes, and occasionally their resolutions. Focusing on the technical aspect of jokes, humor theorist Arthur Asa Berger has identified 45 techniques used in humor—from exaggeration to irony, parody to slapstick—all of which play on either language (verbal), logic (ideational), identity (existential), or actions (physical/nonverbal) (Berger 2016).

In C.S. Peirce’s triadic semiotic model, signs have three elements: the representamen is the outward-facing form used to stand for something else—the object. The interpretant is what links the representamen and the object, and what allows meaning to be derived from this relationship (Irvine). Peirce also distinguished symbols from two other kinds of signs: icons, which resemble their objects, and indexes, in which two things are correlated with one another. A significant amount of humor comes from manipulating these semiotic elements—for example, by mixing up the representamen used for a particular object, highlighting previously unnoticed iconic resemblances, or creating a new or nonsensical index. These semiotic elements are what humans use to create and understand meaning in the signs around them, and humor intentionally violates the codes and rules that allow us to maintain an understanding of the world. By calling these codes into question, humor expands our thinking, and the chasm between what we think we know and where humor takes us causes an internal conflict. The result of this tends to be a laugh, as we try to resolve the conflict (Berger 1995).

A number of humor “types” derive from breaking codes. On the most basic level, simple jokes with a set-up and punchline do so by surprising the listener in the punchline. The set-up is meant to frame the listener’s thinking in a certain way, and the punchline violates the expectations created by the set-up. Much of what is contained therein—both the framing and the punchline—is determined and shaped by the culture in which the joke operates. This culture influences the assumptions people have about the world in which the joke functions, and can dictate what is considered surprising. Humor often deals with taboo subjects, as these most easily and obviously provide a “shock value” that can be found humorous, and taboos themselves are also culturally defined. By appropriating a topic that is considered off-limits in a manner that is assumed to be “positive” (as humor is assumed to be), taboo humor attempts to defuse the internal conflict regarding the topic in an external, socially-sanctioned way. This is meant to be a “release” from discomfort (Kuhlman).

Of course, the context in which the joke is told—who is telling it, who it is being told to, and how it is being told—also affects how the joke is received, and can reveal the motivations behind the joke. What is meant to be a breaking of taboo, or a subversion of expectations, in one situation can be maintaining stereotypes and social hierarchies in another. Historically in the U.S., Jewish humor and African-American humor have been used by these communities as a coping mechanism for bigotry and hardship (Ziv). Oftentimes this humor is self-deprecating, with the subject of the joke being either the speaker or a mythicized member of the community (self-deprecation violates codes, in a sense, because we don’t expect people to want to be made fun of). Take this joke from Jewish humor, for example:

A barber is sitting in his shop when a priest enters. “Can I have a haircut?” the priest asks. “Of course,” says the barber. The barber then gives the priest a haircut. When the barber has finished, the priest asks “How much do I owe you?” “Nothing,” replies the barber. “For you are a holy man.” The priest leaves. The next morning, when the barber opens his shop, he finds a bag with one hundred gold coins in it. A short while later, an Imam enters the shop. “Can I have a haircut?” he asks. “Of course,” says the barber, who gives the Imam a haircut. When the barber has finished, the Imam asks “How much do I owe you?” “Nothing,” replies the barber. “For you are a holy man.” The Imam leaves. The next morning, when the barber opens his shop, he finds a bag with a hundred gold coins in it. A bit later, a rabbi walks in the door. “Can I have a haircut?” the rabbi asks. “Of course,” says the barber, who gives the rabbi a haircut. When the haircut is finished, the rabbi asks, “How much do I owe you?” “Nothing,” replies the barber, “for you are a holy man.” The rabbi leaves. The next morning, when the barber opens his shop, he finds a hundred rabbis. (Berger 2016)

The punchline subverts the expectations laid down by the set-up, even though we are expecting a punchline due to the format of the joke. When told within a Jewish context, this joke is self-deprecating, a light-hearted form of in-community social commentary. However, when told within a different context, the implications can be different. Jokes can function as breakers of taboo, but they can also function as social control that validates stereotypes, inequalities, and oppression, whitewashing bigotry under the guise of humor. There is also, on the other hand, humor that subverts this by re-appropriating stereotypes in a way that is empowering or makes the oppressor the subject of the joke instead. Consider this Jewish joke from Nazi Germany:

Rabbi Altmann and his secretary were sitting in a coffeehouse in Berlin in 1935. “Herr Altmann,” said his secretary, “I notice you’re reading Der Stürmer! I can’t understand why. A Nazi libel sheet! Are you some kind of masochist, or, God forbid, a self-hating Jew?” 

“On the contrary, Frau Epstein. When I used to read the Jewish papers, all I learned about were pogroms, riots in Palestine, and assimilation in America. But now that I read Der Stürmer, I see so much more: that the Jews control all the banks, that we dominate in the arts, and that we’re on the verge of taking over the entire world. You know – it makes me feel a whole lot better!”

If different communities within a society can have different ideas about humor, and different understandings of codes and how they’re broken, then this chasm is even greater across societies. Something considered funny in one context in America isn’t considered funny in a different context in America, and perhaps even less so in certain contexts in other countries. But this is where humor gets tricky. Humor is not just a set of jokes that some people get and some people don’t—humor is fluid, ever-changing, building layers on top of itself in a way that is difficult to quantify. Someone not understanding a joke, whether because of cultural or linguistic differences, may itself be humorous to someone else. In fact, miscommunication is a frequent topic in jokes and comedy. The video essay “Louis CK and the Art of Non-Verbal Communication” (cited below), for example, deconstructs how, in the show “Louie,” Louis CK’s struggle to communicate, and the mismatch between his verbal and non-verbal communication, is used for comedic effect.

Oftentimes there is humor to be found in the confusion and ambiguity of language, expression, and everyday life (Nijholt). Much like how humans possess generative grammar—the ability to produce an infinite number of new, unique sentences using a finite number of base words—we also seem to possess generative humor (Jackendoff). We are not limited to a set number of jokes, but can create new ones, re-mix or re-mediate old ones, index them together, layer on top of them, subvert the conventions of humor (if these even exist) with anti-jokes and meta-jokes, introduce irony or sarcasm, and so on, infinitely.

Looking specifically at popular kinds of humor, one of the most recognizable is mimicry or imitation. Imitation is a comedic style in which the performer recreates a particular person’s actions, gestures, attitudes, or other identifiable traits. What’s interesting about this brand of humor from a semiotic perspective is that, as the philosopher Henri Bergson points out, there is something almost mechanical about what makes it humorous. The performer has identified and isolated particular patterns in the subject’s mannerisms or behavior, and recreates them stripped of their original context (Nijholt). The performer has taken a particular set of signs from the original subject and re-mediated them through their own performance, in a way that still allows the listener to recognize the source. By isolating and exaggerating this set of signs, the imitator sheds light on the semiotic underpinnings of the subject’s forms of communication, highlighting elements of which we may have previously been unaware.

Similarly, parody and satire re-mediate specific elements of a particular piece of culture in a new context, to humorous effect. They can draw attention to the highlighted elements of the original piece, or they can create an index of sorts between the original and the parody or satire, linking them together in a way that is surprising. Comedy other than parody and satire can also reference a previous piece of work. This is called intertextuality, which Neal R. Norrick defines as occurring “any time one text suggests or requires reference to some other identifiable text or stretch of discourse” (Norrick). Inside jokes function in this way, and are humorous because they take something known, intimate, or familiar—that which is being referenced—and manipulate it, surprising the listener and making them think about it in a new way.

“Types” or categories of humor follow specific formulas, maintaining a generic form but substituting key elements with new information in each new joke. The formula signals to the listener that this is a joke—the formula is like the interpretant that tells us to think about the object and representamen in a particular way. We now know to be looking for the humor in the subsequent lines. Since such formulas exist, is it possible that we could someday algorithmically analyze and code for humor? Would it be possible to identify the “types” and styles of humor, and program AI to mix and match them depending on the machine’s understanding of the environment? If there does exist a way to program AI for humor, this would likely be the key to discovering it. After isolating these variables of “humor,” researchers could potentially program AI to generate jokes using big data based on what people find funny. Using machine learning, the AI could identify the elements that “successful” jokes share, and over time learn what people find funny, even if it does not understand why people find them funny. Take the joke at the beginning of this paper, for example. Isolating its elements into definable segments, it can be filed under “X walks into a bar” jokes (and further, “we don’t serve X” jokes), plays on words/ambiguous language, and content dealing with human suspicion of robots. It could then use a hypothetical reservoir of big data to “learn” how to craft a joke based on existing jokes and human responses to them.
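As a very rough sketch of what the formula-based piece of this might look like, the Python fragment below fills the “X walks into a bar” / “we don’t serve X” frame from a list of substitutable elements. The template and the second substitution are invented for illustration, and nothing in the sketch measures whether the output is actually funny, which is precisely the hard part.

```python
import random

# A toy sketch of the template idea: isolate the formulaic parts of the frame
# and substitute new elements. The time-traveler entry is a made-up example;
# nothing here evaluates whether the result is humorous.

TEMPLATE = ('A {subject} walks into a bar and orders a drink. '
            'The bartender says, "We don\'t serve {subject}s here." '
            'The {subject} replies, "{punchline}"')

SUBSTITUTIONS = [
    {"subject": "robot", "punchline": "Oh, but some day you will."},
    {"subject": "time traveler", "punchline": "That's fine. You already did, yesterday."},
]

print(TEMPLATE.format(**random.choice(SUBSTITUTIONS)))
```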

But this seems like an optimistic proposal, and even if it were attained, it would be incredibly difficult for AI to learn about elements like context, irony, timing, delivery, and tone. As the humor would be “delivered” in a different form (by AI, not a human), it would likely lose the physical humor available to human comedians, though it’s possible that a new type of physical humor could arise from robots awkwardly trying to imitate humans (but this comes not from the AI’s intentional attempt at humor, and more from humans finding humor in the situation—yet another example of the fluidity of humor) (Nijholt). It also would not be able to layer humor over things—for example, taking a heckler’s comment and incorporating it into a joke, or recognizing an awkward situation and using it for self-deprecating effect. Humor is not a pre-determined set of jokes, but is fluid and adaptive. Even if we can make a robot that writes jokes, the underlying semiotic and cognitive processes involved in humor, generally defined, are just too complex, context-specific, and subjective, based on our current understanding, to develop AI with a thorough capacity for humor.

IV. Conclusion

Based on our current understanding of the limitations of programming AI, and our understanding of the semiotic underpinnings of humor, it will be a long time before we will be able to build computers that can imitate the human capacity for humor—if we can ever do so at all. It is certainly possible to program AI with pre-written material, and it may even be possible to develop algorithms that can generate jokes based on a narrow, defined set of joke formulas. But beyond this, the cognitive processes behind humor are incredibly complex, and humor itself is a deeply fluid, context-dependent phenomenon. The obstacles AI researchers and programmers have faced in natural language processing don’t seem to be going away anytime soon, and humor presents similar challenges. While it is possible to isolate and identify the semiotic elements of jokes, and even different “types” of humor, it seems unlikely we will be able to program a computer that can reasonably imitate the kind of generative humor capabilities humans possess.

 

Works Cited

Adam Krause. “Interstellar – TARS Humor Setting.” Online video clip. YouTube. Nov. 9, 2015. Web.

Bar-Hillel, Yehoshua. Language and Information: Selected Essays on Their Theory and Application. Reading, MA, Addison-Wesley, 1964.

Beyond the Frame. “Louis CK and the Art of Non-Verbal Communication.” Online video clip. YouTube. Jun. 10, 2016. Web.

Berger, Arthur Asa. Blind Men and Elephants: Perspectives on Humor. New Brunswick, NJ, Transaction Publishers, 1995.

Berger, Arthur Asa. “Three Holy Men Get Haircuts: The Semiotic Analysis of a Joke.” Europe’s Journal of Psychology, vol. 12, no. 3, 2016, pp. 489–497. doi:10.5964/ejop.v12i3.1042.

Comedy Central. “Nathan for You – The Movement.” Online video clip. YouTube. Dec. 10, 2016. Web.

Dennett, Daniel C. “Cognitive Wheels: The Frame Problem of AI.” Minds, Machines and Evolution, 1984.

Dreyfus, Hubert L. “A History of First Step Fallacies.” Minds and Machines, vol. 22, no. 2, 2012, pp. 87–99. doi:10.1007/s11023-012-9276-0.

Galanter, Marc. Lowering the Bar: Lawyer Jokes and Legal Culture. Madison, WI, University of Wisconsin Press, 2005.

Haugeland, John. Having Thought: Essays in the Metaphysics of Mind. Cambridge, MA, Harvard University Press, 1998.

Haugeland, John. “Understanding Natural Language.” The Journal of Philosophy, vol. 76, no. 11, 1979, p. 619. doi:10.2307/2025695.

Irvine, Martin. “The Grammar of Meaning Systems: Sign Systems, Symbolic Cognition, and Semiotics.”

Jackendoff, Ray. Semantic Interpretation in Generative Grammar. Cambridge, MA, MIT Press, 1972.

Kuhlman, Thomas L. “A Study of Salience and Motivational Theories of Humor.” Journal of Personality and Social Psychology, vol. 49, no. 1, 1985, pp. 281–286. doi:10.1037//0022-3514.49.1.281.

Larson, Erik J. “The Limits of Modern AI: A Story.” The Best Schools Magazine, www.thebestschools.org/magazine/limits-of-modern-ai/.

Manning, Christopher D., and Hinrich Schütze. Foundations of Statistical Natural Language Processing. Cambridge, MA, MIT Press, 2003.

Minsky, Marvin, and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA, MIT Press, 1988.

Nijholt, Anton. “Incongruity Humor in Language and Beyond: From Bergson to Digitally Enhanced Worlds.” 14th International Symposium on Social Communication, 2015, pp. 594–599.

Norrick, Neal R. “Intertextuality in Humor.” Humor – International Journal of Humor Research, vol. 2, no. 2, 1989, doi:10.1515/humr.1989.2.2.117.

Schank, Roger C., and Robert P. Abelson. Scripts, Plans, and Knowledge. New Haven, Yale University, 1975.

Silver, Nate. The Signal and the Noise: Why so Many Predictions Fail—But Some Don’t. New York, NY, Penguin Press, 2012.

TED. “Heather Knight: Silicon-based comedy.” Online video clip. YouTube. Jan. 21, 2011. Web.

Ziv, Avner. Jewish Humor. New Brunswick, NJ, Transaction Publishers, 1998.

My understanding of computing through the ages (semester)

As the semester starts winding down, it’s interesting to go back through my previous posts for the course and see how my thoughts have evolved over time. We sampled an extraordinary range of very complex, high-level topics in a relatively short amount of time, and dabbling with each of them merely scratches the surface of these rich intellectual traditions. In revisiting my old posts, I can see how I worked through de Saussure’s and Peirce’s semiotic models, at first perhaps misunderstanding the true distinction between the two, but eventually grasping how important these differences are to understanding symbolic meaning-making.

I also developed an appreciation for the ambiguousness and self-reflexivity of language itself, and how the human brain is uniquely equipped with the ability to create and understand meaning from arbitrary occurrences (be they sounds, letters, or even events). In this sense, the brain is a computer—our OS Alpha. Computers are not these distinct, separate pieces of hardware, and computing is not this magical thing that happens inside them. Rather, their logic evolved out of a natural progression of symbolic and representative technologies that are based on particular elements of the human mind. When we joke that somebody “thinks like a computer” (in that either they are gifted in computational thinking, or that they lack emotion) what we really mean is that they think in a specific way that has been isolated and applied to how computers function.

As they have advanced, computers have been adopting more and more of the characteristics and functions of humans, increasingly resembling the human brain (just multiplied, of course). With AI, computers attempt to replicate emotional, conversational, and interactional functions that were previously unavailable. With predictive technologies, such as Google’s search suggestions or Amazon’s “you may be interested in…”, they have adopted our forms of associative thinking. This is not by accident—it is intentionally directed by humans. We used to make mix tapes on cassettes and give them to people we had crushes on—now we have Spotify make playlists for us based on our listening history. This was not just an accidental progression—this technology was built on how humans already thought. The same can be said for the technologies we use for off-loading—instead of “filing” thoughts away in our minds, they keep track of them for us while we juggle overwhelming amounts of information.

Samantha from “Her”

Needless to say, my understanding of meaning-making, symbols, representation, and computing has changed throughout this course. I now understand computing as an extension of human thought based on human rules, not as a mysterious black-box in opposition to it. But one thing does still bug me. I can’t quite figure out where the distinction between humans and computers should be (“should” because I’m operating under this normative assumption). Computers are bundles of calculations and processes that are necessarily derived from human thinking. The more you bundle them together, and at higher levels of abstraction and analysis, the more “complex” a computer or technology you have. The “highest” form of this would be AI that functions precisely as a human—one that contains all the analytical, judgmental, sensory, emotional, etc., capabilities that we have. But is this possible? Is it only a matter of technological capability, or is there a necessary divide between us? By divide I don’t mean in the “humans v computers” sense I described before, but just the mere fact of how our reality functions. Anyways, computers are really cool, and provide an infinitely fascinating mirror with which we can examine what it means to be human.

Welcome to the machine (Jameson)

One of the more foundational ideas I’ve been playing with throughout this course has been the popular but arbitrary separation between “man” and “machine.” This false dichotomy is everywhere in our culture, predicated on the belief that humans are “natural,” that machines/computers/technology are “unnatural,” and that there is a fundamental and necessary split between the two. Of course, as we’ve seen, this distinction is misguided, as the technology we create—even the more advanced kinds—is a product of our cognitive capabilities, our cultures, and our values.

As we saw a couple of weeks back in the readings about the “extended mind,” one function of our technology, whether intentional or incidental, is to free up cognitive and/or physical “space” for us to focus on other things. We can “off-load” our thoughts onto pen and paper, or typewriter and paper, or a word processor, allowing us to recall them and build upon them even further. This is similar to Licklider’s concept of “man-computer symbiosis”—since so much time would otherwise be spent completing calculations or engaging in technical thinking, it makes sense to have a machine that can, in a sense, do this thinking for you while you focus on something else. [1] Because computers can easily be programmed to engage in mathematical thinking (as opposed to, say, replicating emotions), it is natural for the computer to do this “heavy lifting” while humans press onward into new territory without getting bogged down in the math. It is also similar to Engelbart’s idea of “augmenting human intellect,” in that there is an interaction between human and computer to produce a quicker, easier, or more accurate solution to a pre-defined problem. [2]

It’s clear how this concept can be applied to problems involving mathematics, science, engineering, and other technical fields. But it can also be useful for non-computational elements, especially in a time of vast and overwhelming amount of information. One very simple example, drawing from my own personal experience, is how we use browser bookmarks to keep relevant links on our “home” Internet browser screens. As someone who’s just about surgically attached to my laptop (and metaphysically attached to the Internet) I keep a huge number of sites bookmarked. The “off-loading” function of this is two-fold. One, it is more convenient and saves time, off-loading the task of typing in a URL address and manually navigating to the website to the computer itself. Two, it off-loads the task of remembering all these relevant websites, which takes up valuable memory in my OS Alpha. Otherwise I might forget them, and even if not, I now theoretically have more “space” in my brain for other cognitive tasks. This kind of off-loading is more and more prevalent in a world of overwhelming amounts of information (see: filtering, curating, personalization, etc).

In a way, this is also like Bush’s vision of how intellectual knowledge could be collected and shared. In reading his description of a knowledge accumulation machine, as well as the pieces discussing his ideas (including the “memex”), I immediately thought of Wikipedia: a collaborative place to compare pieces of knowledge in a way that is associative rather than alphabetical like a typical encyclopedia, with hypertext links to other relevant pages. [3]

One last thought I had, tying this back to the “man/machine” dichotomy, was the futility of trying to find a clear separation between human and machine when using a piece of technology. In such an instance, how much work are you doing, and how much work is the technology doing? Who or what gets the credit for doing it? When you sit down at a computer to fill out an online form, you are providing the cognitive power to think through the questions, and the kinetic power of your fingers typing and hands moving. But it is the computer that processes all the information from the commands and keys you hit. Neither can exist without the other; without the computer, you simply do not have the technology to fill out the online form (no output). Without you, there would be no impetus for filling the form out, or cognitive/technical ability to do so (no input). Granted, there’s the potential for AI to provide the cognitive function of this equation, but that’s another story. In this instance, the computer is augmenting human intellect and capabilities. The computer and human are interfacing at the point at which they are able to “collaborate,” in a sense, on this project of completing the online form.

 

References

[1] Licklider, J.C.R. 1960. “Man-Computer Symbiosis.” New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, pp. 74–82. Cambridge, MA: The MIT Press, 2003.

[2] Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, pp. 93–108. Cambridge, MA: The MIT Press, 2003.

[3] Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

 

We’re all paranoid androids (Jameson)

After “knowing” it on a superficial, theoretical level since the beginning of the course, I think I now understand on a deeper level what “computing” really means. I knew that computing existed before our modern conception of “computers” (what are referred to as “automatic computers”), but these readings fleshed out the idea further and illustrated that what we think of as “computers” (AC) are really just bundles of calculations and processes that humans could theoretically do themselves (albeit at a much, much slower pace). Walking through the Python tutorial, and manipulating the “inputs” for the code, I could see that the program was, on a basic level, running calculations. It could solve math problems. Moving up a conceptual level, it could tell the date and time. Moving up even further, it could synthesize information from different lines or variables together. It was performing calculations and initiating processes that humans are capable of, just automated.

Another concept that became much clearer this week was the idea of binary and how it can be used in computing to create commands. I could understand the concept of binary as a language on its own, and separately I could understand the idea of computer programming code, but I didn’t understand how the two worked together and talked to each other. The binary tree was particularly helpful in illustrating how binary can be used to send messages or operations using nothing but “yes” and “no.” I could also see how a particular value or operation, which is the result of a series of “yes”es and “no”s, could be assigned a label or signifier. For example, a value resulting from “no,” “yes,” “yes,” “no” in a particular tree is distinct from all other possible values in that tree. It has its own distinct signified, and the series of “yes”es or “no”s is, in a way, like its signifier.
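To make that concrete, here is a minimal Python sketch (the leaf values are arbitrary placeholders) showing how a sequence of yes/no answers, read as bits, reaches exactly one leaf of a binary tree, so the answer sequence functions like a signifier for the value stored there.

```python
# A small sketch of the binary-tree idea: each distinct sequence of yes/no
# answers of the same length reaches a distinct leaf, so the sequence acts
# like a signifier for the value at that leaf. Leaf values are placeholders.

def leaf_index(answers):
    """Interpret a list of 'yes'/'no' answers as the bits of a binary number."""
    index = 0
    for answer in answers:
        index = index * 2 + (1 if answer == "yes" else 0)
    return index

LEAVES = [f"value_{i}" for i in range(16)]  # one leaf per possible 4-answer path

# The example from the post: "no, yes, yes, no" picks out exactly one leaf.
print(LEAVES[leaf_index(["no", "yes", "yes", "no"])])  # -> value_6
```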

One final thought I had was regarding the definition of “language” as used in “Introduction to Computing.” According to David Evans, a language “is a set of surface forms and meanings, and a mapping between the surface forms and their associated meanings.” [1] Compared to our understanding of language and meaning-making as discussed in class so far, this seems more akin to de Saussure’s model of semiotics than to Peirce’s triadic model. Evans’ “surface forms” would be the signifier, while the “associated meanings” would be the signified. The “mapping between” the two is similar to the idea of a black box, but does not specify that there is an essential interpretant. This seems to be a more binary, if you will, way of thinking about language.

 

References

[1] Evans, David. Introduction to Computing: Explorations in Language, Logic, and Machines. United States, David Evans, 2011.

The medium is the message…or is it? (Jameson)

In working through the concept of meaning as it is signaled, encoded, and transmitted in various mediums, I couldn’t help but think of the famous Marshall McLuhan concept, sometimes known as the McLuhan Equation: “the medium is the message.” A more obvious and literal interpretation of this phrase seems to suggest, contrary to the Peircean model of semiotics, that meanings are in fact properties of signals or sign vehicles. In this conception, the content of any piece of communication is secondary to how it is communicated. Upon closer examination, it’s clear that this isn’t quite what McLuhan was saying, and that his claim (placed more in the field of media theory than communication or information theory) may not be at odds with our in-class understanding of where meaning is situated in symbolic systems. [1]

Looking a little deeper at this phrase, McLuhan seems to be analyzing communication in a wide, historical, systemic way. He says that, overall, the content of communication (the message) has historically been less important in shaping society than how it is communicated (the medium). In our modern world, mediums are constantly changing and evolving. These mediums are tied directly to societal changes, in that the invisible, multitudinous forces shaping society also exert pressure on the kinds of mediums that develop. These mediums in turn shape how we think by altering the environment in which we operate, in ways that were previously inaccessible. What has been more important, the text message conversations you have or your mobile phone? Arguably, the text message conversations would not even exist without the mobile phone in the first place. Additionally, the mere existence of mobile phone technology has completely altered the way in which we communicate—using abbreviations/acronyms/slang; the speed at which we communicate; how many people we can communicate with at one time; how we plan; how other technologies build on this—which has altered how we think and operate in ways we don’t even realize. [2]

To McLuhan, a medium refers to any physical tool that is an extension of ourselves, similarly to how a symbol is a theoretical cognitive extension of our thoughts. Ultimately, I don’t think this is at odds with the understanding of meaning we’ve discussed in class because McLuhan is making more of a social commentary, and is not literally saying that there is more meaning in a medium than in the content. But who knows!

 

References

[1] Federman, Mark. “What Is the Meaning of The Medium Is the Message?” N.p., 23 July 2004. Web. 12 Oct. 2016. <http://individual.utoronto.ca/markfederman/article_mediumisthemessage.htm>.

[2] Olson, Dan. “Minisode – The Medium Is the Message.” YouTube. Folding Ideas, 24 Sept. 2015. Web. 12 Oct. 2016. <https://www.youtube.com/watch?v=OseOb_wBsi4>.

When a milkshake doesn’t really mean a milkshake (Jameson)

To illustrate the concepts discussed in this week’s readings, I would like to focus on the symbolic genre of a movie scene. In particular, to make the theoretical concepts come alive, I will specifically examine what is perhaps one of the most iconic scenes in contemporary film, the “I drink your milkshake!” scene from There Will Be Blood.


To someone who doesn’t speak English, this line in and of itself holds no meaning. To a casual English speaker who hears this line out of the context of the movie, it may sound comical, juvenile, or just ludicrous. But to someone watching the movie, who has the ability to think symbolically at high levels, this line takes on a whole new, sinister meaning. This is because meaning generation is a process, what Peirce calls semiosis, and the context in which the meaning is generated shapes what the meaning turns out to be. [1] Just as this is true within the triadic model framework in language, it can also be true for other mediums and forms of communication. As best as I can, I will map Peirce’s model for language onto this particular movie scene.

This scene is the climax of the film and pits the ruthless oil tycoon protagonist Daniel (Daniel Day-Lewis) against his nemesis Eli, an opportunistic preacher who cares more about money than faith (Paul Dano). [SPOILERS] It is near the end of the movie, and Eli is offering to sell Daniel a piece of oil-rich property that Daniel has had his eye on for a while. To humiliate him, Daniel agrees to buy the property only if Eli renounces his faith, which he does. Daniel then reveals that he has secretly been draining the property of oil for years using nearby wells on his own land, and that the property is worth nothing. “I drink your milkshake!” in this case becomes derisive, intimidating, triumphant.

To start, the language portion of the scene is clearly symbolic in the Peircean sense. In “I drink your milkshake!”, the representamen is the literal words—in this case, focusing on “milkshake.” Peirce also sometimes refers to this as the sign itself (though Peirce had 76 different definitions of “sign” throughout his work, so who knows). [2] The object referred to here is not the concept of an actual milkshake you would drink, but oil and, going deeper, personal wealth and resources generally. When Daniel talks about a straw reaching across the room to drink somebody else’s milkshake, we understand that he is not speaking literally but metaphorically. This is the interpretant. Because of the context, we are able to decode the overall sign and understand the true intended meaning. [3]

Extrapolating a bit to other, non-language elements of the scene, I am in a bit of unknown territory. Sticking with the element of sound, but separated from language itself, we observe a number of artistic choices that signal things to us as viewers. The absence of music focuses our attention on the dialogue; the contrast between whispering and yelling gives a dynamic, foreboding feel to the scene; the slurping sound Daniel makes during his “milkshake” crescendo emphasizes the visceral, primal emotionality behind the exchange. With the possible exception of the last example, these aspects of the scene signify certain meanings because we have learned to interpret them as such. In terms of the element of imagery, Daniel bent and standing over a hunched Eli signifies his power over him. The fact that Daniel is bent himself shows his own weakness and wretchedness, and still being visually higher in the frame than Eli positions him as dominant. Later in the scene, his finger pointing in Eli’s face suggests a confrontation between the two. The finger (pointing) is the representamen, the concept of intimidation or confrontation is the object, and the link between the two in our minds is the interpretant.

If someone were to watch this scene even without speaking a lick of English, they would still probably understand at least the dynamic between the characters. This is because they more resemble natural signs/icons or indexes, rather than the more arbitrary symbols that are found in language. These non-language elements of the scene support the meaning-making process, though they may not map perfectly onto Peirce’s model as he developed it for language.

REFERENCES

[1] Irvine, Martin. The Grammar of Meaning Systems: Sign Systems, Symbolic Cognition, and Semiotics. N.p.: n.p., n.d. Web. 27 Sept. 2016.

[2] Marty, Robert. “R. Marty’s 76 Definitions of the Sign by C.S. Peirce.” Arisbe, 16 Aug. 2011. Web. 28 Sept. 2016.

[3] Chandler, Daniel. Semiotics: The Basics. 2nd ed. London: Routledge, 2007. Web.

Language, shmanguage

The sociolinguist and Yiddish scholar Max Weinreich once famously quipped, “a language is a dialect with an army and a navy.” [1] Interestingly, this phrase itself displays a number of fascinating, illustrative elements of language. When he says “language” he is referring to one specific set of combinatorial rules and ways of creating sequences of word units into a discourse that is spoken by a particular group of people. [2] But the term “language” itself has multiple meanings, and the one we are primarily dealing with in these readings is the human capacity for natural language. As Radford points out, language is ambiguous, and humans must be able to use context to interpret semantic meaning. [3] We know that, in the context of these texts, “language” is primarily a meta-concept, while in the above quote “language” refers to a specific language like English or Yiddish.

Though this may be jumping ahead, the sentence also points out some interesting semantic and sociolinguistic phenomena. Claiming a language has “an army and a navy” is obviously meant non-literally—it shows the capacity for language to be (and for humans to think in a way that is) metaphorical and symbolic. It also highlights the fact that the only difference between a formal language and a dialect is that a language has the apparatus of a power structure (in the form of a state with a military) enforcing its official use. One implication of this is what Radford calls language shift, when one language becomes dominant over another. [3]

While I have given more attention to the semantics of this sentence (because it’s semantically very interesting), there are three other elements we can examine in light of Jackendoff’s models. First, there is the phonological structure, referring to the literal sounds made when articulating the sentence aloud (governed by the rules of English sounds and how they combine and interact). Second, there is the syntactic structure, which refers to “grammar,” or the rules for combining words to make larger units of meaning. The semantic structure, which I have already addressed, refers to the meaning we interpret from such combinations of words and phrases. Finally, the least fleshed out is the spatial structure, which places your understanding of a sentence into the context of a perception of the wider world. [4]

On a final note, something I am particularly interested in is how language—in particular, miscommunication and changing peoples’ expectations of how language functions—is used in humor and comedy. Examples are everywhere, but I thought this clip was particularly appropriate. It features Ali G (one of Sacha Baron Cohen’s satirical characters) interviewing our very own beloved Noam Chomsky. In just three and a half minutes it highlights a number of linguistic issues, including ambiguity, dialects/slang (and the misunderstandings that arise from them), language as an exclusively human phenomenon, non-language communication, misperceptions about the field of linguistics, the arbitrary nature of words/signifiers, and more.

 

REFERENCES

[1] Weston, Timothy B., and Lionel M. Jensen. China beyond the Headlines. Lanham, MD: Rowman & Littlefield, 2000. Print.

[2] Irvine, Martin. “Language and Symbolic Cognition: Key Concepts.” (2015): n. pag. Web.

[3] Radford, Andrew, Martin Atkinson, Harald Clahsen, and Andrew Spencer. Linguistics: An Introduction. 2nd ed. Cambridge: Cambridge UP, 2009. Web. 19 Sept. 2016.

[4] Jackendoff, Ray. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford: Oxford UP, 2002. Web. 20 Sept. 2016.

Time is a flat circle…but evolution is not

 

In reading these pieces, I was struck by how great our popular misconceptions of evolution (as both a biological and sociological phenomenon) really are. By “our” here I of course am referring to those of us who know enough about science to believe in evolution, but who are not so specialized as to really grasp the particulars, as the authors are.

One such misconception is that evolution is inherently positive, linear, and moving in an “upward” direction. According to this framing, evolution is a spectrum ranging from “less evolved” to “more evolved,” along which you can plot different species and compare them against each other. In this conception, humans are “more evolved” than other animal species because we have developed the cognitive abilities that allow us to create complex languages, networks, societies, and so forth. But this is a completely misguided view of evolution. Evolution is not directional movement along a continuum, but more of a “spreading out.”

Bringing this back to language, as Deacon notes in “The Symbolic Species,” there is a false belief that language is just the natural result of evolution, something all species will arrive at eventually if only they “evolve enough.” In this view, it is merely the complexity of our form of language that has prevented other species from adopting it. If this were the case, however, then surely at least some species would have a “simple” language (beyond the nonverbal communication exhibited by some species, as well as by humans) at a “lower” level of complexity along the evolutionary scale. But no such language exists, and it is nearly impossible to teach even the simplest forms of our language to the most “intelligent” nonhuman species. [1]

Because evolution is a “spreading out,” different species ended up in vastly different places—even on different paradigms, one might say. Nonhuman species ended up on paradigms in which they do not have language in the way we do, even if they have developed other forms of communication or other methods of adaptation. Humans ended up on a paradigm in which we have the ability to learn language, and this in turn allowed us to develop other cognitive functions, which built on each other to give us the human tools (“tools” not in the artefact sense) we have today.

One final, unrelated thought I’ll end on: I do wonder how our current world population explosion will affect the evolution of our cognitive abilities. Our improved agricultural knowledge and practices (which have allowed us to produce more than enough food), combined with falling mortality rates (the result of fewer conflict deaths and infectious diseases, as well as improved medical technology), have led to unprecedented population growth. As Wong claims in “The Morning of the Modern Mind,” increased population size is the condition most likely to give rise to “advanced” cultural attributes: with more competition for resources, humans were forced to develop innovative technologies and methods in order to survive. [2] What kind of effects will our current population boom have on the technologies we develop, and how will these affect our cognitive abilities more generally?

 

References

[1] Deacon, Terrence William. The Symbolic Species: The Co-evolution of Language and the Brain. New York: W.W. Norton, 1998. Print.

[2] Wong, Kate. “The Morning of the Modern Mind.” Scientific American 292.6 (2005): 86-95. Web.

Some thoughts from my OS Alpha to yours – Jameson Spivack, Week 2

I would like to echo the sentiment of some of my fellow classmates who professed unfamiliarity with the topic, and hope I can assign the proper signifiers to the signifieds currently swirling around my OS Alpha. After all, as Peirce would tell us, the “meaning” of signs is not merely locked in an individual’s mind, but is animated through their interpretation by members of relevant communities, who derive meaning from them. [1] So if these thoughts are confusing, that’s definitely on you guys. ;p

  • It makes sense that the relationship between signifier and signified is arbitrary, as de Saussure points out, and anyone who has been exposed to languages other than their native language sees this in action. [2] It is interesting, though, from biological and neurological standpoints, that there are particular concepts that have similar (not identical, of course) signifiers across languages and communities. For example, the words we have developed for “mother,” possibly the very first concept we learn after birth, have parallels across languages, even unrelated ones. Most spoken human languages use words containing an “m” or “n” sound to denote mother, perhaps because the shape the mouth forms when breastfeeding lends itself to producing such sounds. I’m not sure whether other such examples exist, and while the vast majority of signs are arbitrary, it is still interesting to note when they are not.
  • I also find it interesting to link the concept of meaning-making (a process through which we mediate the present to past and future thought) to the accumulation of knowledge. [3] We are born into societies and communities that already have sets of rules for communicating in complex ways and on abstract levels. Without this ability to create intersubjective sign systems, we could never progress beyond simple instinctual existence, because we would have no way to communicate with others in order to build, solve problems, collaborate, or exercise any of the other productive abilities we possess. There would be a Tower of Babel-esque chaos, with each person speaking a completely different language.
    [Image: Tower of Babel. Source: http://www.emergingtruths.com/tower_of_babel/tower_of_babel.html]

    At the same time, these sign systems cannot be too rigid, or there would be no way to incorporate new information (and concepts) into them. In fact, without the ability to grow and adapt, sign systems could not have developed in the first place. And it is this network of meanings, and our awareness of it, that allows us to create new ways of communicating, being, and thinking at ever-increasing levels of complexity.

  • One last point I found particularly fascinating was the idea, mentioned by Professor Irvine in the discussion on OS Alpha, that sign systems are at their core reflexive and self-reflexive, meaning we must talk about these systems of signs using “signs” from those very systems. [4] I currently don’t have anything in particular to add to this, as I’m still trying to process this idea and its implications.

 

[1] Irvine, Martin (2016). Key Writings on Signs, Symbols, Symbolic Cognition, Cognitive Artefacts, and Technology, Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University. Page 15.

[2] Irvine, Martin (2016). Page 10.

[3] Irvine, Martin (2016). Page 11.

[4] Irvine, Martin (2016). The Grammar of Meaning Making: Sign Systems, Symbolic Cognition, and Semiotics. Communication, Culture & Technology Program, Georgetown University. Page 3.