Category Archives: Week 7

Information transmission and generation

Why is the information theory model essential for everything electronic and digital, but insufficient for extending to models for meaning systems (our sign and symbol systems)?

“Shannon’s classical information theory demonstrates that information can be transmitted and received accurately by processes that do not depend on the information’s meaning. Computers extend the theory, not only transmitting but transforming information without reference to meaning. How can machines that work independently of meaning generate meaning for observers? Where does the new information come from?”

Denning and Bell pose this question in their piece on the information paradox: the conflict between the classical view, in which information can be processed independently of its meaning, and the empirical fact that meaning, and thus new information, is generated in such processing, as it applies to computers today.

Shannon’s classical information theory posed that information could be coded and transmitted by a sender in a way that was redundant enough to avoid noise and equivocality, thus allowing a receiver to decode it and make sense of the message. The main concern was to “efficiently encipher data into recordable and transmittable signals” (Floridi, 2010, p. 42). In his Very Short Introduction to Information, Floridi explains that MTC (the Mathematical Theory of Communication proposed by Shannon) applies so well to information and communication technologies, like the computer, because these are syntactic technologies, ready to process data on a syntactic level (2010, p. 45). As explained by Floridi, for there to exist information, according to the General Definition of Information (GDI), there must be data that is ‘well formed’ and has meaning. ‘Well formed’ refers to it being “rightly put together according to the rules (syntax) that govern the chosen system, code, or language being used” (2010, p. 21). Shannon’s theory deals with information at this level in order to find a way to encode and transmit it.
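Shannon’s idea of using redundancy to defeat noise can be sketched in a few lines. The repetition code below is an illustrative toy, not Shannon’s actual construction: each bit is sent three times, and the receiver decodes by majority vote.

```python
def encode(bits, r=3):
    """Repeat each bit r times; the redundancy guards against noise."""
    return [b for b in bits for _ in range(r)]

def decode(signal, r=3):
    """Majority vote over each group of r repeated bits."""
    return [int(sum(signal[i:i + r]) > r // 2) for i in range(0, len(signal), r)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
signal = encode(message)
signal[0] ^= 1   # the channel flips one transmitted bit...
signal[7] ^= 1   # ...and another, in a different group of three
print(decode(signal) == message)  # True: the message survives the noise
```

As long as noise corrupts at most one bit per group of three, the receiver recovers the message exactly, without ever knowing what it means.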

The question posed by Denning and Bell arises because we see people today communicating and creating through interactive computer programs; how is meaning emerging from the transmission of this level of data? They resolve the paradox by relying on a more comprehensive theory of information, proposed by Rocchi, which posits that information has two parts, sign and referent, and that meaning emerges in the link between the two (p. 477). Moreover, they explain that the interactive features of today’s computers allow for the creation of meaning: every time a user’s interaction with a computer produces a new output, meaning (new information) emerges because the user is putting together sign and referent, making sense of the transmitted data.

The authors paraphrase Tim Berners-Lee’s interpretation of this process on the web: “someone who creates a new hyperlink creates new information and new meaning” (p. 477). In doing so, they help illustrate how the sociotechnical system that is the Internet, which can be seen from a systems perspective that takes into account its different modular components and the way they interact, can also be seen from the perspective of information transmission. In both accounts, the system only makes sense once all components are considered: not only the sender, receiver, channel, and message, but also the processes by which the message is linked to a referent within the broader system. The classical information theory model can then be complemented by this understanding of information and its two parts, and still remain an essential part of our electronic information and communication technologies.

Peter Denning and Tim Bell, “The Information Paradox.” From American Scientist, 100, Nov-Dec. 2012.

Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.

Communication theory and its social consequences

I found two different perspectives on what “information meaning” means in this week’s readings. In Floridi (2010), “’[m]eaningful’ [information] means that the data must comply with the meanings (semantics) of the chosen system, code, or language in question” (p. 23). In this sense, inside a system, machines interpret meanings previously programmed and set up. Such meanings only make sense in the context of communication among nonhuman actors, data and machine, and are what enact the system’s operability.

Another use of the word “meaning” appears in the explanation of Shannon’s theory of information given by Denning and Bell (2012). Here, an emitter sends a message, which is encoded according to a codebook and converted into signals. These signals travel until they reach the decoder on the receiver’s side, where they are decoded based on the same codebook, recovering their original format as a readable message. In this model, the medium used to transport the message does not understand the “meaning” of the information being carried. The meaning is given by connecting the signals to their referents. These referents are collectively shared, and thus people assign meaning to them.
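This codebook model can be sketched as a toy program. The symbols and two-bit codes below are made up for illustration; the point is that the channel carries only the signal, while the shared codebook stays on both ends:

```python
# A toy codebook shared by sender and receiver (symbols are illustrative).
codebook = {"A": "00", "B": "01", "C": "10", "D": "11"}
decodebook = {v: k for k, v in codebook.items()}

def transmit(message):
    # The channel carries only this signal string; it never consults the codebook.
    return "".join(codebook[ch] for ch in message)

def receive(signal):
    # The receiver restores the message using the same codebook.
    return "".join(decodebook[signal[i:i + 2]] for i in range(0, len(signal), 2))

print(receive(transmit("BADCAB")))  # BADCAB
```

The medium between `transmit` and `receive` is just a string of bits; any “meaning” lives entirely in the codebook the two sides agreed on beforehand.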

From the point of view of the Internet-based communication system we have today, this is a powerful principle: the machines don’t need to know the information being transported. Vint Cerf, one of the TCP/IP protocol designers, in the foreword to the book Great Principles of Computing (Denning and Martell, 2015), explains the wide diffusion of TCP/IP, the main Internet communication protocol, based on that principle. For him, the fact that application designers on the web do not have to understand how IP packets are transported, while at the same time the protocol does not depend on the type of information in transit, has contributed to what the Internet is today and to its impressive stability, despite the fact that the network receives a plurality of new applications every day that were not initially foreseen.

Despite all these characteristics of the theory of communication and its consequences for the way we communicate online, the fact that data have become valuable to the stakeholders who run the Internet requires us to reflect on a new set of questions. Nowadays, Vint Cerf works for Google, which has developed software to read our Gmail messages and offer products and services through its ads. Are these machines interpreting meaning or not? Internet service providers, using the discourse of network optimization, demand the right to prioritize content when transporting data on the Internet, which has generated the need for new regulations, such as net neutrality provisions. Are they demanding the right to distinguish packets’ meaning or not?

The political aspects and the social consequences of this discussion are truly interesting.

Syntheses of information and meaning; and how to see the invisible world through communication tools



It may sound philosophical and debatable, but in every side and aspect of life and environmental patterns we see the unification of the content and form paradigms. It may also be a “metaphorical description of the processes of the information technology[i]”. While one of the pair is always intangible (the content, or idea), the other must be made of the opposite, tangible nature to reflect the ideas and values of the object (or subject). A book, a tangible device made from paper and ink, represents and delivers ideas hidden behind commonly accepted symbols: letters and words. We see the same approach when we use electronic devices to transmit information through space and time. Communication signals, like dots and dashes (an electric impulse or its absence), cannot themselves own the meaning or values of the information they represent; however, without them, cognitive subjects (human beings) cannot transfer their intangible ideas and values to other subjects in the surrounding environment. “Because information is always represented by physical means[ii]”, it is through signs and symbols that our invisible (and intangible) world becomes visible for others, allowing us to learn and exchange thoughts and emotions. (In the case of human beings, however, words and deeds may also be used not to reflect but to hide real intentions or thoughts.) Once we encode our ideas and values in letters/words or sounds/waves (as a source of information) to pass them to others, they will be decoded and understood properly (at the final destination) only if we accept the same standards of communication tools (languages, signs, signals, etc.).


“Because every number corresponds to an encoded proposition of mathematics[iii]”, to maintain a working communication system between electronic devices we need to preserve the integrity of the transmitted information throughout the whole line of the communication system.

Modern electronic devices evidently help us not only to pass along intangible ideas and values; they also serve to discover more of the hidden tangible world around us, especially at the two extremes: the world of elementary particles (atoms and molecules) and the cosmic spaces of unseen remote objects. Specifically designed sound and light signals, sent toward and reflected from the studied object, bring back substantial information about it, even when it is hidden by its extreme distance from the researcher. Purposely designed signals (and their encoding and decoding systems and devices) also help us control man-made devices over huge distances.


Through communication systems and tools we can interact with different types of apparatus, receive the intended information, and manage their activities, whether it is an exploration machine located on Mars or a tiny device implanted in the human body. The technical progress of informational and electronic systems gives us the opportunity to provide man-made machines with more capacity and to load them with information so they can elaborate and analyze inputs and present the sought results for further use. Hence, the information technology era that started with the Morse alphabet brings us to the edge of the creation of Artificial Intelligence, which may itself become a source of ideas and values (intangible phenomena) and use people as particles in the information system. “Such a world will first gently invite us to understand it as something ‘a-live’ (artificially live)[iv]”.


[i] Ronald E. Day (University of Oklahoma), “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies,” p. 807

[ii] Peter J. Denning and Tim Bell, “The Information Paradox,” p. 471

[iii] James Gleick, The Information: A History, a Theory, a Flood, p. 19

[iv] Luciano Floridi, Information: A Very Short Introduction, p. 19

Information, human brain and machine

This week’s readings explain the process of information spreading from the view of information theory, a more systematic way to help us understand the basic concepts of information and transmission.

Shannon, in his marvelous paper “A Mathematical Theory of Communication,” introduces a simple model to illustrate the communication system: an information source, a transmitter, the channel, the receiver, and the destination work together to make information travel from one side to the other.


Basic model of information theory

1. Information
First I want to examine the whole system from a rather static perspective. In this system, the core concept is undoubtedly information. Floridi, in his book Information: A Very Short Introduction, discusses the hierarchical structure of information-related concepts. An interesting example concerns silence, which can carry more information than the tautology of his unary-bird device. In reality, the most famous application may be the Miranda warning:

You have the right to remain silent. Anything you say can and will be used against you in a court of law. You have the right to talk to a lawyer and have him present while you are questioned. If you cannot afford to hire a lawyer, one will be appointed to represent you before questioning, if you wish one.

The silence of suspects in fact serves as a strong legal signal and can have a huge influence on the attitude of judges.

Another interesting thing is the existence of the noise source. A good example is the compression algorithm for images, videos, and audio. As a noise source, a compression algorithm adds extra information to the original, making it look better or worse.


A bad compression algorithm makes Lena green and fuzzy

2. Transmission
Concerning the dynamic mechanism of information transmission, I’d say it is an amazing idea to introduce the physics term entropy into this area. From the perspective of entropy, the process of information transmission is usually a process of reducing probability and increasing entropy, thus bringing people more information.

Shannon uses entropy to define the threshold between decipherable and indecipherable codes, as well as the boundary between reliable and unreliable channels, according to The Information Paradox. After all, it is always harder to find the truth hidden beneath rumors by deducing useful information from public media than by experiencing things personally. Just consider the formula for the average length of the optimal code: L = −∑ p_i log p_i. Suppose we get one set of codes by experiencing personally, and two sets of codes after hearing rumors whose probabilities are respectively x and 1 − x. The optimal code length of the former is 0, similar to the unary-bird example, while that of the latter is positive, meaning the latter carries more information. However, the scandal of deduction and the existence of misinformation and disinformation can finally reverse the result, so that the former can contain more information.
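As a quick numerical sketch of that formula, the optimal average code length for a single certain outcome versus two equally likely rumors can be computed directly:

```python
import math

def avg_code_length(probs):
    """Shannon's optimal average code length: L = -sum(p_i * log2 p_i)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Direct experience: one certain outcome, zero bits needed.
print(avg_code_length([1.0]))        # 0.0

# Two rumors with probabilities x and 1 - x: always positive,
# peaking at one bit when x = 0.5.
x = 0.5
print(avg_code_length([x, 1 - x]))   # 1.0
```

The certain outcome needs no code at all, while any genuine uncertainty forces a positive average code length.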

3.Does meaning matter?
Actually, it is not hard to understand that traditional information theory focuses on the mechanism of information transmission itself. Regarding the transmission itself, there is no difference between speaking English and speaking some alien language. We just say whatever we like, and our audience tries its best to determine what we are talking about. The only difference is the presupposed mapping relationship in their minds. According to the basic model of the transmission system given by Shannon, the process of understanding and interpreting received information does not fall within the scope of traditional information theory, which simply treats the information receiver, or destination, as a black-box system.

That’s exactly the biggest problem. Classical information theory is not wrong, but limited. I can understand that in Shannon’s age, the whole conception of Human-Computer Interaction was based on the Von Neumann Architecture, a rather hardware-based theory.


Von Neumann Architecture

However, when it comes to information transmission in reality, we mostly research human beings’ behaviors instead of machines’. So can the human brain be regarded as a Von Neumann Architecture? My answer is no, because the Von Neumann Architecture does not match the structure of the brain, which consists of innumerable neurons and synapses. To study the behavior of human beings, we obviously need a more complex model, and that is what neural network scientists are working on now. Compared to the Von Neumann Architecture, a neural network involves more interactions within the whole system.


Neural Network

Back to human behavior. This week Professor Barba gave us some good reading materials about NASA and the Human-Computer Interaction revolution. In the Apollo control room, people worked much like a Von Neumann Architecture, receiving instructions from the Flight Director and executing every order. The new interactive media technology, nevertheless, changes the way we interact with computers, asking interactors to be sources of meaning themselves. Regarding participants in the information transmission process as black-box systems without meaning is an outdated view; instead, we need to open the black box and find a new theory that can improve on traditional information theory.

[1] Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.
[2] Peter Denning and Tim Bell, “The Information Paradox.” American Scientist, 100, Nov-Dec. 2012.
[3] James Gleick, Excerpts from The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011.
[4] Claude Elwood Shannon, “A Mathematical Theory of Communication.” ACM SIGMOBILE Mobile Computing and Communications Review 5.1 (2001): 3-55.
[5] Susanne Bødker, “When Second Wave HCI Meets Third Wave Challenges.” Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles. ACM, 2006.

Readings on Information Theory: a joke, a metaphor, and an ouroboros

Chen Shen

This week’s readings unfold from Shannon’s information theory and the information paradox, and they remind me of three things, so I write this post in three parts.

A Joke

Shannon used entropy to define the minimum length of a code: “any shorter code would be ambiguous and could not be uniquely decoded”. It reminds me of a joke I read years ago:

In a bar, three programmers were drinking and chatting. A said, “B7F340Q.” B chuckled and replied, “TTX4352,” and A yukked. “What are you guys talking about?” asked C. “We developed a system that assigns a code to every possible joke. The one A said was about a clumsy thief,” replied B, “and the one I said is about a drinking pope.”

“Interesting!” C exclaimed, “I will give it a try: MT9293CK”

A and B laughed so hard and fell on the floor.

“What’s the joke I told?”

“You fool,” A puffed, “no joke is designated to that code!”

A joke is a joke. According to Shannon, it’s clearly impossible to name every possible joke with such a string. The codes they exchanged are 7-character strings mixing Roman letters and Arabic numerals, giving a maximum of 36^7 (about 8×10^10) possible permutations. But the interesting part of the joke is the possibility of representing a joke with a much shorter code that has the same hilarious effect on those who can decode it. If we are not going to map all possible jokes, say only a selected 1,000, the conversation between A and B could totally make sense. To cite Shannon, “the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point”. Though a code has much lower H than the joke it represents, which seems to violate Shannon’s law, telling the joke-code is in fact a collective action. The sender and receiver have to spend quality time encoding the jokes before they can establish this kind of connection. The code itself has no relation whatsoever to the joke before encoding, which means the meaningful part of this communication lies not in the transmission of the code but in the encoding and decoding processes that happen in the sender’s and receiver’s minds. Here the code plays the same role as the animal language we learned about weeks ago, where a certain signifier represents certain concepts or things, while the signifier has no syntactic structure.
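A quick back-of-the-envelope check of that code space, plus a toy version of the shared codebook (using the two joke descriptions from the story):

```python
# Size of the code space: 7 characters over 26 letters + 10 digits.
n_codes = 36 ** 7
print(n_codes)   # 78364164096, about 8 * 10**10

# The shared codebook is what makes the short code funny; the mapping is
# arbitrary and must be agreed on beforehand.
jokebook = {"B7F340Q": "a clumsy thief", "TTX4352": "a drinking pope"}

def tell(code):
    return jokebook.get(code, "no joke is designated to that code!")

print(tell("TTX4352"))    # a drinking pope
print(tell("MT9293CK"))   # no joke is designated to that code!
```

C’s punchline is just a failed dictionary lookup: the code he invents maps to nothing, so the “information” never leaves the transmission layer.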

This kind of correlation can be established via any kind of signifier, not restricted to one’s native language. And the same signifier can play different roles when interpreted in minds from different cultures. In fact, I can think of an expression made of both an Arabic numeral and an English letter that neither Indians nor Britons understand, but Chinese speakers do.

The expression is 3Q. It means nothing to an American ear (except for those who interpret it as a collection of IQ, EQ, and AQ), but every young Chinese person understands it even on first encounter, because in Chinese the Arabic numeral 3 is pronounced /san/, which, followed by /kju/, makes the expression a homophone of “thank you”. The semiosis behind this expression intrigues me: a Chinese speaker doesn’t have to know it ex ante to successfully deduce the meaning, which makes it different from the one-to-one mapping of animal language. Is this deduction, then, a kind of syntactic language behavior?

A Metaphor

The other thing that got me thinking about the readings is a comparison between reading The Information: A History, a Theory, a Flood in the English and the Chinese editions. Truth be told, I have always preferred original works, assuming such reading establishes a direct link between me and the author and grants me more. For The Information, I spent three hours on the English edition and did not completely understand it. Then I read the Chinese edition; it cost me only 12 minutes and cleared up my former confusion. It is not a totally fair comparison, because a second reading is probably easier regardless of the language. But the 15:1 time ratio cannot simply be explained away. So I think about the two different readings through Shannon’s information model. For the English edition, the cognitive process is like this:


Reading, by nature, is meant to stimulate my mind to form the thoughts mirrored in the author’s mind. The writing/reading model is not the only way to this end; many different signals can lead to a similar feeling. A beautiful piece of prose, a faded picture, the melody of one’s childhood lullaby, the flavor of homemade cuisine: all lead to a feeling of nostalgia. But language is, without doubt, the most delicate and nuanced medium. In this example, my final “gain” from reading this book is

Gain_my_English = Thought_author × Encoding_author_English × (Signal / (Signal + Noise)) × Decoding_my_English

All factors here are smaller than 1, making the conversion rate Gain_my_English / Thought_author definitely smaller than 100%.

For this instance, we can safely suppose English is the author’s native language, so Encoding_author_English is almost 100%.

And since the text I got is nearly identical to what he wrote (in the sense of text), noise plays an insignificant part and Signal / (Signal + Noise) is also close to 100%.

Then the conversion rate is simply Gain_my_English / Thought_author ≈ Decoding_my_English.

For the Chinese edition, my cognitive process is like this:


My final gain can be represented as:

Gain_my_Chinese = Thought_author × Encoding_author_English × (Signal / (Signal + Noise)) × Decoding_translator_English × Encoding_translator_Chinese × (Signal′ / (Signal′ + Noise′)) × Decoding_my_Chinese

Signal′ / (Signal′ + Noise′) for printed texts approximates 100%. As a result, the conversion rate is Gain_my_Chinese / Thought_author ≈ Decoding_translator_English × Encoding_translator_Chinese × Decoding_my_Chinese.

To compare my results from the two different editions, we can simply divide them:

Gain_my_English / Gain_my_Chinese = Decoding_my_English / (Decoding_translator_English × Encoding_translator_Chinese × Decoding_my_Chinese)

From this week’s experience, I read at least 10 times faster in Chinese than in English, with no less comprehension, so

Gain_my_English / Gain_my_Chinese < 1 / (10 × Decoding_translator_English × Encoding_translator_Chinese)

Decoding_translator_English × Encoding_translator_Chinese is the translator’s translation rate, meaning that so long as the translator achieves a translation rate greater than 10%, which is really a low threshold, I get more from reading in Chinese.
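Plugging illustrative numbers into this chain of factors shows how low the translator’s bar really is. Every value below is an assumption for the sake of the sketch, not a measurement:

```python
# Illustrative numbers only -- every factor below is an assumption.
decoding_my_english = 0.6    # my comprehension rate reading English
decoding_my_chinese = 0.95   # my comprehension rate reading Chinese
reading_speed_ratio = 10     # Chinese reading is ~10x faster for me

# The translator's overall rate:
# Decoding_translator_English * Encoding_translator_Chinese
translator_rate = 0.8

# Per unit of reading time, the Chinese edition wins whenever the
# translator's rate exceeds this threshold.
threshold = decoding_my_english / (reading_speed_ratio * decoding_my_chinese)
print(threshold)   # about 0.063: even a mediocre translator clears this bar
print(translator_rate > threshold)
```

Under these assumed numbers, any translator retaining more than roughly 6% of the original would already make the translated edition the better use of my time.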

So my conclusion here is: if the translator is proficient both in English and in the field the text belongs to, I had better read the translated edition. But this conclusion relies on the hypothesis that noise means little in the transcription of books. For other media, this may not always be the case. Take conversations, for example: if I choose to listen to a translator’s version, then I suffer double noise, which might impair my relative gain from listening in Chinese.

So for the time being, maybe the best choice for me is to find the corresponding Chinese edition if possible and read both editions. That makes the process diagram much like a parallel circuit; to put it metaphorically, the combined resistance will be smaller than either branch of the parallel circuit.

In fact, I wrote this part in response to Ronald E. Day’s article “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies,” which is no doubt the hardest reading for me this week. I was totally confused, especially about the Cold War context. I roughly sense that the author is against Wiener’s claims built on the conduit metaphor. In the paper, Day argues that

“The irony of this formulation is, of course, that both The Republic and Wiener’s The Human Use of Human Beings make their arguments using a rhetoric that is rich in metaphors and other literary tropes. Thus, both the epistemological and the social claims of Wiener (and as we have seen, Weaver’s) texts are simultaneously established and made problematic by the very rhetorical devices that operate in their texts. “

But both Plato’s cave and Wiener’s metaphor serve as ways to express a concept, rather than the base on which the concept is built, just like my “computation” above that ends with a metaphor. Both the computation and the circuit metaphor point (though with different levels of clarity) to the same fact: I can get relatively more from reading if I can find both editions. It demonstrates that we can approach the same fact or concept in different ways, be it rational deduction or rhetorical metaphor. There is a very profound tale in the Chinese Buddhist sutras I would like to share; for convenience I skip the detailed names.

A nun went to a master seeking his interpretations on the Canon. The master said, “I can’t read Sanskrit, you read to me”. “If you can’t even read”, laughed the nun, “how can you claim to understand the Canon”.  The master pointed to the moon in the sky, explained, “The truth has nothing to do with text. The truth is like the moon above, and text is like my finger. My finger can point to where the truth is, but it doesn’t mean fingers are the truth. And it doesn’t mean one has to use fingers to see the moon”.

As a result, to me, the article “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies” argues that Wiener was using the wrong finger, one that has little to do with the moon. I am not defending the conduit metaphor here; in fact, I don’t quite understand the metaphor. The things above are just my thoughts on the reading.

An Ouroboros

The first reading of this week is P. Denning’s The Information Paradox. I happened to be reading a book about famous paradoxes in history these days, and it occurred to me that a great number of paradoxes were caused by self-reference: the liar’s paradox, the Socratic paradox, Russell’s paradox. I used to think this ouroboros was philosophers’ and linguists’ problem, but now I know that one of the most factual disciplines, mathematics, also suffered from this eternal ghost. This week’s reading pointed to another interesting thought experiment in history: Turing and his universal Turing machine. The Gleick book does not explain in detail how Turing settled the halting problem, so I looked it up in some other books and articles, e.g., Engines of Logic by Martin Davis and Cantor, Gödel, and Turing: An Eternal Golden Diagonal by Weipeng Liu. The simplicity and universality of the Turing machine amazed me, and the way he settled the halting problem was self-reference once again. By demonstrating that, Turing showed us that a program is just another kind of data, with no clear-cut demarcation. How heroic was Hilbert’s manifesto “Wir müssen wissen. Wir werden wissen,” but it seems that we may not know.



Since the coding in Morse code is based on frequency, why doesn’t it also assign codes to certain letter combinations, e.g., th, er, in, con, tion, which have higher frequencies in the English corpus than the least used single letters?
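Huffman coding, which Morse’s frequency principle anticipates, can do exactly that: treat a frequent digraph as a symbol of its own and give it a short code. The sketch below builds a Huffman code over a made-up mix of single letters and digraphs (the frequencies are illustrative, loosely inspired by English letter statistics):

```python
import heapq

def huffman_lengths(freqs):
    """Build a Huffman code and return each symbol's code length.
    Frequent symbols get shorter codes -- the principle Morse approximates."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical frequencies mixing single letters and digraphs like "th":
freqs = {"e": 12.7, "t": 9.1, "th": 3.6, "er": 2.0, "z": 0.07, "q": 0.1}
lengths = huffman_lengths(freqs)
print(lengths)
assert lengths["e"] < lengths["z"]   # frequent letters get shorter codes
assert lengths["th"] < lengths["q"]  # a frequent digraph beats a rare letter
```

Once digraphs are admitted as symbols, the algorithm automatically gives “th” a shorter code than “q” or “z”, which is precisely what the question asks for.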


Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.

James Gleick, Excerpts from The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011).

Peter Denning and Tim Bell, “The Information Paradox.” American Scientist, 100, Nov-Dec. 2012.

Ronald E. Day, “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.

Davis, Martin. Engines of Logic: Mathematicians and the Origin of the Computer. Reprint edition. New York: W. W. Norton & Company, 2001.

Weipeng Liu “Cantor, Gödel, and Turing –  An Eternal Golden Diagonal” Accessed October 19, 2016.

Lenna’s Meaning: A Discussion of a Digital Image – Jieshu

The image of Lenna is probably the most transmitted and analyzed digital image in the world, so I think it is a perfect example for discussing the relation between digital information and symbolic meaning.


Lenna’s Image

In 1973, in order to complete a research paper in image processing, an assistant professor at USC named Alexander Sawchuk scanned a 5.12×5.12-inch square of a centerfold from Playboy with three analog-to-digital converters[i]. It became the most widely used standard test image thereafter[ii].

1. The Processing and Transmitting of Information Are Irrelevant to the Meaning

Lenna’s image was selected without a specific purpose, demonstrating that digital processing, or at least research on digital processing, is irrelevant to meaning. The reason Sawchuk chose this image was that he was tired of the boring pictures already in his system. Just in time, a colleague came by with an issue of Playboy. Attracted, of course, he decided to use Lenna’s image from Playboy in his paper. The arbitrariness of the selection shows that the meaning of the image had nothing to do with his research. Even though in hindsight it is evident that the image was perfect for testing image processing algorithms, because it mixes different properties very well, such as “light and dark, fuzzy and sharp, detailed and flat[i]”, those properties are merely physical attributes of pixels on the screen when the image is displayed, also irrelevant to its meaning.


Some examples of image processing tests using Lenna’s image. Clockwise from top left: standard Lena; Lena with a Gaussian blur; Lena converted to polar coordinates; Lena’s edges; Lena spherized, concave; Lena spherized, convex.

The digital representation of the image is also irrelevant to the meaning. Sawchuk used three analog-to-digital converters in his scanner, responsible for red, green, and blue respectively. That is to say, each pixel in the scanned image is digitally represented by, and only by, three numbers[iii].


Each pixel has three numbers representing red, green, and blue respectively.

As we can see from the website of the USC Signal and Image Processing Institute (SIPI), where Sawchuk used to work, the original image consists of 512×512 pixels. Each pixel has three numbers representing the three colors. Each number is 8 bits (1 byte), so each pixel is 3 bytes. In turn, the whole image is 3×512×512 = 786,432 bytes, that is, 768 KB, as shown in the screenshot below.


A screenshot of SIPI website including Lenna’s picture.

So, basically, the image that we see as a naked girl with a blue feather hat is merely made of 786,432 numbers. In other words, in the digital representation of the image there are only numbers: no naked girl, no hat, no feather, and no symbolic meanings.
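The arithmetic above can be checked in a few lines:

```python
width = height = 512     # pixels per side of the scanned image
channels = 3             # red, green, blue
bytes_per_channel = 1    # each number is 8 bits

size_bytes = width * height * channels * bytes_per_channel
print(size_bytes)          # 786432
print(size_bytes / 1024)   # 768.0 (kilobytes)
```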

2. However, the Meaning Is Preserved, and Extended

If there are only numbers, why do people enjoy talking about the story of Lenna? Actually, Lenna came to be seen as a symbol of the field of image processing, so important in computer science that she was invited to many academic conferences and immediately surrounded by enthusiastic fans. The sale of Lenna’s issue (Nov. 1972) exceeded seven million copies, making it Playboy’s best-selling issue ever.

I think the answer lies in three levels.

First, the meaning of the image is preserved “in the physically observable patterns[iv]” of the numbers. According to Paolo Rocchi, “information always has two parts—sign and referent. Meaning is the association between the two[iv].” The associations are stored in our brains. For example, a great contrast in brightness is perceived by humans as an edge. When edges form a specific pattern, it is associated with a face.


The red lines show edges that are associated with a human face.
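The idea that “great contrast is perceived as an edge” can be sketched as a simple threshold on brightness differences; the pixel values and threshold below are purely illustrative:

```python
# One row of pixel brightness values (illustrative), with a bright band in
# the middle; an "edge" is wherever neighboring values differ sharply.
row = [10, 12, 11, 200, 205, 198, 15, 14]
threshold = 50

edges = [i for i in range(len(row) - 1) if abs(row[i + 1] - row[i]) > threshold]
print(edges)  # [2, 5]: the jumps into and out of the bright region
```

The numbers themselves carry no “edge”; the edge appears only when an observer (or an algorithm standing in for one) applies a rule that associates sharp contrast with a boundary.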

Sometimes, we don’t need a high resemblance to perceive a pattern as a face.


A hill on Mars, misperceived as a human face due to the pattern caused by a great contrast in brightness.

After we recognize a human face in Lenna’s image, a higher abstraction, the facial expression, body gesture, and accessories, indicates gender. In this way, we receive the meaning of Lenna’s image: a naked girl wearing a feather hat.

Second, what is the meaning of “a naked girl wearing a feather hat”? It signals sexual attraction to males. And what does it mean to use a sexually attractive image in a highly academic context? It might indicate a male chauvinistic tendency in the academic community. That is why the usage of Lenna’s image caused controversy. A high school girl even published an article in the Washington Post discussing the negative impact of Lenna’s image on female students, some of whom decided to keep away from computer science[v].

Third, as a standard test image, Lenna’s image was frequently associated with computer science. Over time, Lenna became a symbol in computer science. In November 1972, people who saw this image would say, “Oh, she is a Playboy Playmate.” But in October 2016, people who see this image say, “Oh, this is the famous Lenna. She is somehow important in the history of computer science.”

3. Discussion

Although information transmission is irrelevant to human meaning, the design of information transmission is relevant to the human meaning-making process. The reason only three converters were used by Sawchuk is based on human perception of color, which is in turn based on the three types of cone cells in the retina. Each type of cone cell senses a part of the electromagnetic spectrum that is perceived as red, green, or blue. I am sure the original image on paper reflects infrared rays, too. But infrared is outside the human visible spectrum, so the information in the infrared spectrum is useless, at least in this context. That is why the RGB system is enough to represent and transmit most meanings in images.

Finally, I was wondering: computers are able to recognize patterns such as faces and houses, too, with the associations stored in algorithms and memory. Does that mean computers are capable of meaning making as well?


[i] Hutchinson, Jamie. 2001. “Culture, Communication, and an Information Age Madonna.” IEEE Professional Communication Society Newsletter 45 (3).

[ii] “Lenna.” 2016. Wikipedia.

[iii] Prasad, Aditya. 2015. “Ideas: Discrete Images and Image Transforms.” Ideas. September 19.

[iv] Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.

[v] Zug, Maddie. 2015. “A Centerfold Does Not Belong in the Classroom.” The Washington Post, April 24.