# From Bits to Meaning – Ruizhong Li

Information Is a Physical Order

This week’s readings on information theory remind me of a book I read last semester: Why Information Grows: The Evolution of Order, from Atoms to Economies. The author, Cesar Hidalgo, maintains that information is not a thing; rather, it is the arrangement of physical things. It is a physical order. The opposite of order is randomness; therefore, information grows by overcoming randomness. This point of view aligns with Claude Shannon’s. Borrowing the idea from thermodynamics, Shannon called this randomness, or uncertainty, “entropy.” Most importantly, Shannon proved that entropy can be measured and controlled, which makes it possible to reduce uncertainty in order to generate and communicate information. Since entropy can be measured, by the same logic, information can be measured as well. The basic measurement quantifies how much information will potentially be needed to encode, transmit, and decode electronic signals. This measurement introduces the concept of the bit (binary digit), which comes from Boolean logic.
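
Shannon’s measurement can be made concrete in a few lines. This is a minimal sketch (the probabilities below are invented for illustration): entropy in bits is the average number of yes/no questions needed to identify a symbol from a source.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain: exactly 1 bit per toss.
print(entropy_bits([0.5, 0.5]))

# A biased source is more predictable, so each symbol carries less information.
print(entropy_bits([0.9, 0.1]))
```

The second value comes out near 0.47 bits: less uncertainty in the source means less information to transmit.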

Why can’t we extrapolate from the “information theory” model to explain transmission of meanings?

Because the information theory model excludes meaning.

The signal transmission model is based on binary mathematics combined with the logic of probability theory. It is a method for signal transmission, not meaning transmission. People often blur the boundary between signal transmission and meaning transmission because we have the ability to interpret messages and infuse them with meaning. But what travels through the physical wires is not the meaning but the signal. The information itself does not contain any meaning.

Where are the meanings in our understanding of messages, media, and artefacts?

Meaning is created during semiosis.

Meaning exists nowhere but in the process by which we perceive the signals. As socially symbolic beings, we live in technically mediated symbol systems and use information to exchange meanings. Since humans are social, collective animals, meaning-making happens within a community whose members share common ground for interpreting specific signals. The way people interpret signals is embedded in the social structure and context. This does not mean that the meaning exists somewhere as a thing; meaning-making is a dynamic process, although to some extent it is subject to the social context.

What is needed to complete the information-communication-meaning model to account for the contexts, uses, and human environments of presupposed meaning not explicitly stated in any specific string of symbols used to represent the “information” of a “transmitted message”?

Information theory + semiotics = the whole story.

Shannon’s signal transmission model perfectly explains how information is encoded and decoded at the physical level. What it leaves out is the meaning system. Semiotics bridges this gap between bit and meaning. Semiotics addresses the process from symbols to meanings, which helps complete the process from the bit, a kind of symbol, to meaning. Information theory talks only about the input and output on either side of the black box, but semiotics clarifies what is inside the box. For example, the triadic structure and the parallel architecture help us understand the process of interpreting a symbol through object, representamen, and interpretant, in the phonological, syntactic, and semantic aspects simultaneously.

[1] Irvine, Martin. “Introduction to the Technical Theory of Information.”

[2] Hidalgo, Cesar. Why Information Grows: The Evolution of Order, from Atoms to Economies.

So the meaning context of a text message is a desire to communicate without the limitation of space; there is an immediacy to it. When someone encodes a message into their phone, they tap the screen, and the device registers the tapped locations as electronic signals that are represented as typographic characters on the screen. After the message has been encoded into these characters and the sender sends it, the entire message, represented as electronic signals, travels via networks of radio waves to the recipient’s device, which is always idly working to receive signals encoded specifically for that device (through a phone number). The receipt of the message is often announced by an audible signal (sometimes a popular Beach Boys chorus). The recipient then uses the phone’s software to locate the message, which the device has already decoded into typographic characters. However, it won’t necessarily look exactly the same as the encoded message: on a different phone, the font, the software’s interface, and the colors can be completely different from what the sender saw. Because the meaning process is independent of the encoding and decoding of the actual signals, this does not necessarily affect the transmission of meaning.

While there is no meaning embedded in the actual electrical signals or radio waves that carry our messages, they are still part of the meaning-making process. This is what it means to be in the digital age of information. There is information, in the technical sense, always traveling around us along information highways of radio waves, copper wires, fiber optic cables, and so on. Take even this blog post: its purpose is to respond to the readings and to be used in class, and it will be on the website well before I make it to the classroom. When the website is accessed in class, the text I entered into this text box will be viewable as characters in a string of posts by everyone else in the class. Thus the encoding of the information is complete; the decoding will happen in the classroom, later in the semester when I want to track what I’ve learned and review my blog posts, and even next semester when I re-access the website to review what I learned in this course. The information stored on a server somewhere will travel those information highways to reach my computer, my phone, and my iPad, and the text and my interpretation of it will function as a sign: a different sign in every context.

# Coding and Decoding a Text Message – Roxy

“Sign,” according to C. S. Peirce, is a static unit or individual representational form that can be interpreted as part of a collective sign system. Signs can be signs because they magnify a particular feature of a particular object. For instance, in French, a mother-in-law is called “la belle-mère”: “belle” means beautiful and pretty, and “mère” means mother. The same is true in Chinese: we call a mother-in-law “qin jia mu,” where “qin” means close and related by blood. We can see that we have to emphasize that we are so very close precisely because we are not that close in reality. Language, here as a symbol system, functions this way in daily life.

Thanks to the internet, people in this world are more closely connected to each other than ever before. Online communication does away with facial expressions, gestures, and tones, so it is easier to interpret. So how does a text message, an email message, or a social media message work? What kinds of communication acts, understood by the communicators, are involved?
In The Information Paradox, Shannon’s words are cited to explain the first theoretical model of a mathematical theory of communication: “A source sends a message. An encoder generates a distinct signal for the message, as prescribed in a code book. The channel is the medium that carries signals from the source to the receiver. A decoder on the receiver end converts the signals back to their original form, using the same code book, and the message has arrived.”

In the first step, the encoder generates a distinct signal for the message. Senders play a decisive role in a communication act. A sign can only be a sign when it is intended by a sender in a particular way, so if the sender’s intention is unknown, the sign cannot be interpreted. A German man taught his dog Adolf to give a Nazi salute on hearing “Adolf, sit, give me the salute.” In this case, the situation is much more complicated: the sender of this sign is only a trained dog who cannot understand the meaning of the gesture, so the dog is not violating Germany’s anti-Nazi laws. The sender also decides the way of encoding. A signal can be encoded in various ways, but some encodings of an idea are better than others, and many outside factors matter. For example, if a signal is strongly called for by an environment, it can be sent in many ways: when I want to answer a yes-no question, I can say “yes,” “sure,” “correct,” “of course,” etc. But if I want to mention a term, I can only use “algebra” or “physics.”

In the last step, a decoder converts the signals back to their original form. The process of decoding is answering a series of yes-no questions. Although the answer to each question can only be 0 or 1, the probabilities of 1 and 0 are different. I have to distinguish “me” from “not me,” then answer the “is a person”/“is not a person” question, then the “is a tiger”/“is not a tiger” question. We can see that we have to cut a sign into several questions and then decode them respectively. A receiver has to use the same code book to interpret the codes. Although we do have some conventions, they cannot cover every aspect, detail, and triviality of life. That could be the first reason the receiver may not get the sender’s meaning.
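
The code-book idea can be sketched in a few lines. In this toy example (both the code book and the words are invented for illustration), the decoder reads the bit stream one yes/no answer at a time and consults the same code book the encoder used; because no codeword is a prefix of another, each word is recovered unambiguously.

```python
# A shared, hypothetical prefix code book: no codeword is a prefix of another.
CODE_BOOK = {"0": "me", "10": "person", "110": "tiger", "111": "tree"}

def decode(bits):
    """Read bits left to right, emitting a word whenever a codeword completes."""
    words, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in CODE_BOOK:
            words.append(CODE_BOOK[buffer])
            buffer = ""
    return words

print(decode("0110010"))  # ['me', 'tiger', 'me', 'person']
```

Note that the shortest codeword goes to the most probable answer (“me”), which is exactly how the unequal probabilities of 0 and 1 shape the sequence of questions.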

Another thing that may distract the receiver is the noise that exists in the medium carrying signals from the source to the receiver. Sometimes we can abstract the thing we talk about from the real world. When we sit in a theater, we can tell the difference between what is on stage and what is in reality, because there is an intangible wall between us and the play. But usually we perceive the world by taking it as an entity.

I can never be sure that the text message I send will be fully understood by you.

Nazi salute. (2016, October 8). In Wikipedia, The Free Encyclopedia. Retrieved 11:29, October 8, 2016, from https://en.wikipedia.org/w/index.php?title=Nazi_salute&oldid=743191847
Irvine, Martin. “Introduction to the Technical Theory of Information.”
Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.
Hall, Stuart. “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During, 507-17. London; New York: Routledge, 1993.
I have two questions:

1. Are there expressions in particular languages that cannot be translated or deciphered into other languages? I mean, if you can explain a 15-letter English word with 1,000 Chinese words, you can still translate it.

2. Why are different languages so unbalanced? In Chinese, we use different words to indicate elder cousins, younger cousins, male cousins, female cousins, maternal cousins, and paternal cousins, but in English there is only one word: cousin. And in French, they use “quatre-vingt-dix-neuf” to indicate 99: “quatre” means 4, “vingt” means 20, “dix” means 10, and “neuf” means 9. It is really like a formula: 99 = 4×20 + 10 + 9. Why don’t they have a word like “ninety”?

# Meaning Preserving in Communication System – Jieshu Wang

### Why can’t we extrapolate from the “information theory” model to explain transmission of meanings?

As Professor Irvine mentioned in yesterday’s Leading by Design class, Samuel Morse was the first person to give meanings to electric current pulses. But it was not until Claude Shannon founded information theory that this signal-code-transmission model was formally established as a discipline.

However, Shannon ignored meaning, so it is ambiguous where new information comes from[i]. The information theory he established and its predecessor, the mathematical theory of communication (MTC), are not interested in the meaning, reference, or interpretation of information. Instead, they mainly “deal with messages comprising uninterpreted symbols”[ii], which sit at the syntactic level, not at the level of semantic information.

Let’s look at Shannon’s illustration of a communication system, the simplest information system.

Claude Shannon’s original diagram for the transmission model, 1948-49. Source: Irvine, Martin. “Introduction to the Technical Theory of Information.”[iii]

All information, whether it is an email, a phone call, or a song, is transformed and transmitted from its sender to the receiver through the pattern shown in the image above. For example, the pattern of Morse code consists of dots, dashes, and spaces that are meaningless before they are decoded. That is why we can’t extrapolate from the information theory model to explain the transmission of meanings.
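
A fragment of that Morse pattern can be sketched directly (only three letters of International Morse code are included here): the dots and dashes carry no meaning of their own; they become letters only when the receiver applies the shared code book.

```python
# Sender's partial code book for International Morse code.
MORSE = {"S": "...", "O": "---", "E": "."}
# The receiver inverts the same code book to decode.
DECODE = {v: k for k, v in MORSE.items()}

def encode(text):
    return " ".join(MORSE[ch] for ch in text)

def decode(signal):
    return "".join(DECODE[sym] for sym in signal.split(" "))

signal = encode("SOS")
print(signal)          # ... --- ...
print(decode(signal))  # SOS
```

Between `encode` and `decode`, the signal is just a pattern of marks; the “SOS” only reappears because both ends hold the same code book.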

### Where are the meanings?

During the process of information transmission in communication systems, the meaning is not lost; it exists in the sign-referent model proposed by Paolo Rocchi[i]. According to Rocchi, information has two parts: sign and referent. “Meaning is the association between the two.” It is learned and stored in our brains and can be transformed and transmitted by machines. Once the information is decoded into recognizable signs, the association, that is, the meaning, is ready for us to discover.

Those associations can also be made by computers, and this is becoming more and more important for scientific discovery because the association capacity of the human brain is biologically limited. Computers can serve as good cognitive artifacts onto which we offload this cognitive effort.

Using simple observation and intuitive inductive reasoning conducted while bathing, Archimedes associated the behavior of water with physical forces and ultimately discovered the law of buoyancy. But modern physics does not work that way. For example, the discovery of gravitational waves earlier this year was largely attributable to many sophisticated machine learning algorithms whose job, in a nutshell, was to filter out all kinds of noise and screen for the most promising signals picked up by the supersensitive sensors. Basically, we offload the effort of associating signal patterns (sign) with astronomical events (referent) to computers. Computers are making, storing, and looking for meanings on our behalf.

### What is needed to complete the information-communication-meaning model to account for the contexts, uses, and human environments of presupposed meaning not explicitly stated in any specific string of symbols used to represent the “information” of a “transmitted message”?

In order to complete the information-communication-meaning model, first of all, we need a sign system shared by the members of the community. According to C. S. Peirce, the sign system consists of an object, an interpretant, and a representamen[iv]. The object is what the sign refers to, i.e., the referent. The meaning-making process is hidden in the relationships among the three parts.

Second, during the design process of a communication machine, our idea of the meaning is built into the machine, so that when the machine is used to transform and transmit information, the meaning of the information is preserved in the association of sign and referent implanted in the machine[i]. For example, when people design a computer language, they also design a dictionary in which every code corresponds to a specific logical action.

After the information is decoded, the receiver uses the sign system that he or she shares with other community members to interpret the message and finally arrive at the meaning.

#### References

[i] Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.

[ii] Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford University Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10485527.

[iii] Irvine, Martin. “Introduction to the Technical Theory of Information.”

[iv] Chandler, Daniel. 2007. Semiotics: The Basics. 2nd ed. London; New York: Routledge.

# Coding and Decoding at the ZOOHackathon – Lauren Neville

This week’s readings paralleled my experiences this weekend and helped me better understand the layers of abstraction and message coding involved in the making process. I spent the whole weekend at the National Zoo for the first annual ZooHackathon, intended to bring designers and coders together to build solutions to illegal wildlife trafficking. On the first night of the hackathon, we were given statements of how leaders in the wildlife field would like to combat illegal wildlife trade.

We were then sent off to form teams of engineers and designers to begin finding solutions to this problem. After discussing the problems, my team chose to create a phone app, connected to email and WhatsApp, for law enforcement agents working in Uganda. They had expressed that their work is often very dangerous and that they wanted a swift way to tell each other they were in danger. We ended up designing an app that, at the touch of a button, would send a prescripted panic message and your geolocation to everyone else connected to your team.

This app simplified the tedious process of writing a string of letters and sending individual messages with your location to different members of your team when you are in immediate danger. It revolutionized the law enforcement messaging system just by having a preprogrammed system ready, constantly updating its own information from mapping technology.

This is the great affordance of the bit and digital technology. As the reading states, “Information theory [and digital encoding] works because we can reliably represent and reconstitute the material components of shared symbols.” I spent the weekend not writing their messages, of course, but actually writing the message of that system into the computer to then display to them.

Using JavaScript and Boolean logic, I was able to write text-input components that asked the user to type in the phone numbers the messages would eventually go to. Then I defined that component by stating “multiline: true” and “editable: true.” JavaScript is a layer of language between the human and the computer. I was writing Boolean logic statements into the software program, but of course these readings helped me remember that all of the components under the layers of JavaScript were also written, at a different time, as bit-level Boolean logic statements.

The complex layering of messaging that went into building yet another communication messaging system during this hackathon was truly amazing. Of course, I understand that none of the meaning within this project was an actual property of the data I was using to relay instructions. “Meaning is not ‘in’ the system; it is the system” was a very relevant point that I took from Introducing Information Theory. I came to understand that the meaning was our relationship to the instructions, our intentions, and the designs we conceptualized.

Stuart Hall, “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During, 507-17. London; New York: Routledge, 1993.

Martin Irvine, “Introduction to the Technical Theory of Information.”

James Gleick, The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011).

# Mad Max 3: Beyond the Infosphere (Two Messages Enter, One Message Leaves!) – Alexander

When first engaging with this course’s subject material, I often found myself getting lost in the thickets, so to speak. Perhaps it’s due to the unfinished, unpolished nature of Peirce’s writing, but I had a hard time wrapping my mind around some of the connections we were trying to establish between semiotic theory and its practical deployment. But this week’s readings made a lot of things click into place for me.

One of the concepts I’ve found most interesting is the dialogic and communal nature of the meaning-making process. As Floridi says, “In many respects, we are not standalone entities, but rather interconnected informational organisms or inforgs, sharing with biological agents and engineered artefacts a global environment ultimately made of information, the infosphere.” Interconnected is a term we’ve encountered many times throughout this course, but this was the first I’d heard of the infosphere. Analyzing now-familiar semiotic concepts through Floridi’s epochal lens was fascinating. This fourth revolution, induced by computer science and ICT, has had very significant ramifications for our self-conception as semiotic beings. I’m personally interested in the Internet of Things, so the idea of rendering inanimate objects animate struck a chord with me. What effect does this elevation of information and data, which are decidedly non-alive in the conventional sense, have on us as a society? If information and data shift to the centre of our ideological framework, supplanting the outmoded anthropocentric (for lack of a better term) model, then anything that can merge or interact with data and information deserves a seat at the table. This would include traditionally “dead” objects, like cars, clothes, computers, and even cities. These objects are now “speaking” to us. This relates back to semiotics because we’re dealing with an inflection point of meaning. Our preexisting conception of the subjects and objects of communication, how messages are transmitted, the “language” of transmission, and the very framework with which we undertake communicative acts are all under review in this new informational age.

Allowing my mind to run free, I started to think of what the next step of this information revolution would look like. Perhaps the next generation will adhere to a neo-animist philosophy based on RFID style implantation. A sort of technological paganism. The pendulum swings back. We can already see this generational rift taking place. A smartphone means different things to a digitally native child than to their digital immigrant parent. Now imagine growing up communicating with Siri, or Alexa, or Cortana. As a tool, or a relation to the world, your way of viewing traditionally inanimate objects is going to be radically different. I was (pleasantly) surprised to see Floridi greet me at the precipice of this cliff when he said “This animation of the world will then, paradoxically, make our outlook closer to that of pre-technological cultures, which interpreted all aspects of nature as inhabited by teleological forces.”

To tie this back to our prompt, cultural context and linguistic literacy are required to effectively derive meanings from messages. I found the reading’s example of the Rosetta Stone to be particularly instructive. Even before the discovery of the Rosetta Stone, Egyptian hieroglyphics were always information. The Rosetta Stone was simply an interface through which their meanings could be rendered accessible to the hieroglyphically illiterate symbolic agent. This is also a useful illustration of meaning not being contained within any specific individual’s mind, as the last person to know Egyptian hieroglyphics was long dead by the time the Rosetta Stone was discovered. Meaning was created within the symbolic system once the requisite elements were integrated. The actual shapes and forms that made up hieroglyphics were more or less inconsequential: they didn’t change pre-to-post discovery, yet the ability to derive meaning from them did. Additionally, it was necessary to situate the transmitted symbols in the correct meaning register, which the Rosetta Stone did by placing the hieroglyphs next to Ancient Greek and Demotic text. So if I were to text someone a fully functional code to which they did not have the decoder, meaning could not be derived, because they would lack the necessary contextual or linguistic literacy.

Lastly, something that caught my attention from the readings was the question of identity in a world of mass production. When Floridi talked of “the metaphysical drift caused by the information revolution”, I was reminded of this passage from “The Conscience of the Eye” by Richard Sennett, in which he discusses the symbolism of our post-modern architectural landscape and urban planning:

“The ancient Greek could use his or her eyes to see the complexities of life. The temples, markets, playing fields, meeting places, walls, public statuary, and paintings of the ancient city represented the culture’s values in religion, politics, and family life. It would be difficult to know where in particular to go in modern London or New York to experience, say, remorse. Or were modern architects asked to design spaces that better promote democracy, they would lay down their pens; there is no modern design equivalent to the ancient assembly. Nor is it easy to conceive of places that teach the moral dimensions of sexual desire, as the Greeks learned in their gymnasiums—modern places, that is, filled with other people, a crowd of other people, rather than the near silence of the bedroom or the solitude of the psychiatrist’s couch. As materials for culture, the stones of the modern city seem badly laid by planners and architects, in that the shopping mall, the parking lot, the apartment house elevator do not suggest in their form the complexities of how people might live. What once were the experiences of places appear now as floating mental operations.” – Richard Sennett

Apart from the term “floating mental operations” reminding me of the location of meaning, the concept of identity is one I believe may hold some semiotic importance. Philosophers from Nietzsche to Georg Simmel to Louis C.K. have all discussed the effects of modern (urban) living on an individual’s relation not only to the world around them, but to themselves. It seems to me that a necessary part of semiotic communication and meaning-making is identity, as a person sending a message must have some conception of themselves and of the identity of the intended recipient. So I believe it is worthwhile to examine how these large-scale revolutionary changes to the very foundation of our society are affecting our self-perception, and how that might alter the ways we utilize language and conceive of its meaning.

References

1. Floridi, Luciano. Information: A Very Short Introduction. New York: Oxford University Press, 2010.
2. Irvine, Martin. Introducing Information and Communication Theory: The Context of Electrical Signals Engineering and Digital Encoding. Google Doc.
3. Sennett, Richard. The Conscience of the Eye: The Design and Social Life of Cities. New York: Knopf, 1990. Print.

# p5.js I Guess… -Carson

Even with the introduction video and the introduction Dr. Irvine wrote for us, I found the concept of Information Theory difficult to grasp. Until… I got to the Denning and Bell essay, The Information Paradox, and everything started to make sense.

In Denning and Bell’s essay, they say, “Computing without reference to meaning works for communication channels but not for computation in general” (p. 476). Then they go on to talk about how people who pay to play World of Warcraft are not paying for the physical computing but for the story that the computer generates. So is the gamer only on the meaning side?

This made me think about the artwork I make in p5.js. I start by creating a program. I take bits that don’t mean anything on their own and put them together in code like this:

Then, when I run this program I get an image like this:

In this case, am I on both sides of the spectrum? Or does this not count because I already have knowledge that when I put these bits together they are going to command the computer to do something specific?

My high school librarian once bet the senior class that she would be able to guess a word of our choice out of the whole Oxford English Dictionary in fewer than 20 guesses. The catch was that for every word she guessed wrong, we would have to say “before” or “after,” depending on whether the word we chose came before or after the word she guessed. With every guess her odds grew while the selection of words shrank. This reminded me of Shannon’s use of bits: asking simple yes/no questions until you arrive at the correct answer. The librarian won the bet; it only took her about 11 tries before she got the word. It was “graduation,” so not very clever on our part.
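
Her strategy is a binary search, and Shannon’s measure predicts its cost: for N candidate words, on the order of log2(N) before/after answers suffice. A small sketch with an invented word list:

```python
import math

def guesses_needed(words, target):
    """Count the before/after questions a halving strategy needs to find target."""
    lo, hi, guesses = 0, len(words) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2   # guess the middle word
        guesses += 1
        if words[mid] == target:
            return guesses
        elif target < words[mid]:
            hi = mid - 1       # "before"
        else:
            lo = mid + 1       # "after"
    raise ValueError("target not in list")

words = sorted(["apple", "banana", "cat", "graduation",
                "library", "senior", "word", "zebra"])
print(guesses_needed(words, "zebra"))         # worst case for 8 words: 4
print(math.floor(math.log2(len(words))) + 1)  # the log2 bound, also 4
```

A dictionary of about a million words needs only around 20 such questions, since 2^20 ≈ 1,000,000: exactly the scale of the librarian’s bet.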

References:

James Gleick, The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011).

Martin Irvine, “Introduction to the Technical Theory of Information.”

Peter Denning and Tim Bell, “The Information Paradox.” From American Scientist, 100, Nov-Dec. 2012.

Ronald E. Day, “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.

# Currents and Cryptography (Becky)

I tried to modify the information theory diagram to account for meaning-making in a nice, concise way. But it quickly became very crowded. Narrative prose will have to do, and I’ll start with this essay as an example.

The first word of this sentence was “The,” and it has meaning to me. I understand the meaning of the capital T, for instance; in this case it means the beginning of a sentence, a new thought, given the period and space before it. I know all this and more because I’m part of a community of symbolic beings that understands English-language conventions.

So I tell the computer to make a capital T. Thanks to a helpful keyboard a very smart person invented, plus software and memory and Boolean algebra and electricity and more, I can do that without speaking computer-speak (though ASCII tables are nice windows). My key press is translated into my computer’s “language” (to steal a particular linguistic term) of 1s and 0s. The representation that I understand as the capital letter T appears on the screen.
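
That round trip can be illustrated in a couple of lines (using Python’s built-ins as a stand-in for what the keyboard driver and display actually do): ASCII assigns the capital T the number 84, which the machine handles as the bit pattern 01010100.

```python
# From the character 'T' to the computer's "language" of 1s and 0s...
code_point = ord("T")             # the ASCII/Unicode number for 'T'
bits = format(code_point, "08b")  # the same number as eight binary digits
print(code_point, bits)           # 84 01010100

# ...and back again to the representation I understand.
print(chr(int(bits, 2)))          # T
```

The bits in the middle carry no meaning on their own; both directions of the translation depend on the shared ASCII convention.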

An English speaker could look over my shoulder and understand what that letter and all these letters mean, a process which Peirce explains in more depth.

And I could send these words or images in an email to someone else. Thanks to machinations I do not yet understand (but hope to!), what I do would be translated into electrical pulses that correspond to binary values that are transmitted through ethernet cables to another mechanical device some distance away that can decode them. Or they’re sent over radio waves. Or in light pulses. Or something else.

In any of these cases, the goal in terms of the information theory model is to replicate a “message,” as Shannon put it, as completely as possible. The model does not describe the transmission of meaning in the Peircean sense but rather the transmission of information in the form of bits. (Of course, someone capable of making meaning out of abstract ideas had to create that model in the first place.)

For my email message to have meaning at its destination, some member of a symbolic species on that end must be capable of making that meaning. A sign doesn’t exist until it is interpreted as such. This means the actor at the destination must be operating in the same context as the source. He or she must understand English or have a good translator. The medium matters for meaning as well. Most users of email know that all caps mean SHOUTING and should be used sparingly. Terseness is OK in texts, but could be rude in email. And so on.

I don’t want to venture too far into book-report territory, but I found the readings helpful illustrations of the meaning-making process; the stories about cryptography that Gleick retells, for example. There are also a few scenes in The Imitation Game, the movie about Turing and the Enigma, that might help and are conveniently insertable into this essay. Take this one (a dramatized version of what actually happened, of course).

Based on the meaning of already-decoded messages and their knowledge of language conventions, code breakers understood that certain words—greetings, the weather—always came up in German messages. They built a machine that could focus on words that they already understood would be there. (Has technology advanced to a point where computers using algorithms can identify these seed words?) Floridi hints at something similar with his Normandy discussion (44).

On the source rather than destination end, it seems that Day is illustrating how the development of information theory’s conduit metaphor and its application to nontechnical areas were influenced by a specific meaning community—a Cold War environment. He says information studies should be rethought for today’s context.

This week—even more than others—I’m thinking about AI. When all of these factors and more are considered, it is no wonder the task of building a new kind of human machine is so difficult.

# Zentropy and Inforgs

I didn’t come up with that title; it’s from an album, and it has nothing to do with this post (but it is cool, and random, and I’m writing about entropy).

Entropy may have been Von Neumann’s joking way of connecting concepts in thermodynamics to information, but when extrapolated to semiotics its system of informer – informee – informant creates a model that is Peircean in its application. If I understand this concept as it relates to information, the more structured your communication is, the smaller the chance of its misinterpretation, whereas a message with more entropy (randomness) has a larger chance of being misinterpreted. The random information adds noise to our message, and we somehow need to parse out what is accurate.
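That contrast between structured and random messages can be made concrete with Shannon’s entropy formula, H = −Σ p·log₂(p). The sketch below is my own illustration (not from the readings): a repetitive message carries few bits per symbol, while a message whose symbols are all equally likely carries the maximum.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average information per symbol in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A highly structured (repetitive) message has low entropy per symbol...
print(round(shannon_entropy("aaaaaaab"), 3))  # 0.544

# ...while one whose eight symbols are all equally likely hits the
# maximum of log2(8) = 3 bits per symbol.
print(round(shannon_entropy("abcdefgh"), 3))  # 3.0
```

None of this measures what the message means; it only measures how surprising each symbol is, which is exactly why the model stops short of semiotics.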

I want to move on to Inforgs, a Philip K. Dick-esque term for how individuals are becoming interconnected informational organisms. I would argue that we have always been this way, but it wasn’t until discourse moved online that we could comment in real time and with such rapidity. With our profiles on major social networking sites, we have gained the ability to maintain two separate personas: online and offline. Although code-switching in Hall’s use is usually applied to linguistics (bilingual speakers mixing the high language and low language), it also helps explain online identities.

When we upload new pictures or share content, we deem what information is suitable as an avatar for our natural selves. However, something that is not touched on is our ability to create a completely fake persona: enter Weird Twitter, where personas are built entirely from cultural references. There is also an identity that we do not get to build for ourselves: the ITentity. I once studied the use of RFIDs in school ID cards. Schools in large districts stopped tracking in-class attendance and instead relied on the derivative data from the chip to see if the students were really on school grounds. Through our online interactions, we leave a constant trail of metadata and derivative data behind us; I will be interested in studying the models for how this information is used (and its limits).

Hall, Stuart. “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During, 507-17. London; New York: Routledge, 1993.

Martin Irvine, “Introduction to the Technical Theory of Information.”

Luciano Floridi, Information, Chapters 1-4. PDF of excerpts.

# Decoding sarcastic texts – Amanda

Prompt: Following on with a specific case: how do we know what a text message, an email message, or social media message means? What kinds of communication acts understood by communicators are involved? What do senders and receivers know that aren’t represented in the individual texts? Our technologies are designed to send and receive strings of symbols correctly, but how do we know what they mean?

I was recently listening to my younger, tech-savvy brother talk about the online dating app, Tinder. On Tinder, you have the opportunity to message complete strangers and get to know them through what is essentially text or instant messaging. I’m fascinated by how people can create meaning over text, and actually get to know each other’s speaking style (in a sense), without ever having to meet one another.

I’ve found myself comparing the idea of a “first message” on Tinder to text message conversations I would have with a close friend or family member. This week’s homework, particularly Dr. Irvine’s video, made me reflect on the meanings that I take away from each type of conversation.

Take, for example, a text message I receive from a close friend I haven’t heard from in a week (in other words, this is the first text in a new conversation). “I am so done, I just want to jump off a building,” my friend writes. Because this is a close friend, I am able to recall past conversations and interactions together, and I am confident that my friend is not suicidal; instead, she is just being her typical sarcastic self. I might reply with more sarcasm and a note of encouragement because she has sent me a cue that she’s not having a good day, and I am responding in a way that reflects our relationship. The conversation continues, and I decode her words as I would if I were listening to her talk in person. There is enough that we understand about each other as senders and receivers that, if someone who didn’t know either of us looked at the conversation, they might not be able to make meaning out of what we say.

However, if a stranger on Tinder sent me a message for the first time and said the same line as above, I would be very concerned and would most likely feel conflicted. Is this person sarcastic? Or is this person going through a crisis, and if so, how do I help? How would I reply? I don’t know this person, and I don’t know how he or she will interpret my message. There would be a lot of confusion, and I don’t think I would get the meaning of the text that was sent. But I wonder: would another visual cue, such as a particular emoji, help me better understand the context in which the person sent the message and better decode what the sender is trying to say?

This brings me back to Dr. Irvine’s video, where he says, “The meaning of our messages comes from the human symbolic systems that surround them – social uses of technically mediated expression” (5:40). Furthermore, Dr. Irvine explains how we make meaning symbolically “on the fly” – we create the meaning when we perceive the signals, or, to put it in other terms, when we decode the data (Irvine, 5). These ideas remind me of some of the basic definitions from the Peircean semiotic model because of the meaning that is typically “embedded” in the text that we read – but it seems as though the meaning changes depending on who you’re talking to and, more importantly, how well you know the person. It seems as though some ways of speaking through text message, such as sarcasm, are better understood between two people who know each other well and can take meaning from a set of words organized in a specific way. And, thinking back to last week’s reading, at first I wondered: if a sign is not a symbol unless it has meaning, would a text message from a stranger symbolize anything? I think that perhaps it would. I’m also interested in further discussing Stuart Hall’s studies on encoding and decoding as presented in our reading (particularly the television communicative process discussed on page 509).

References: