Author Archives: Weiqiang Yao

Paper as a Lasting Medium

Abstract

In our contemporary age, when digital remediation of sign systems is common, paper is still widely used as a perceptible material substrate for static, two-dimensional visual sign systems. This is because paper has affordances that new technologies lack, while new technologies bring constraints of their own. In the future, paper will probably be replaced entirely as technologies advance further and human expectations shift.

Introduction

Since its invention in China in the 2nd century AD, paper has been one of the major media for human communication and information storage. In the contemporary digital age, paper is still used widely and intensively. In 2014, the U.S. consumed 73,093 tons of paper products and China consumed 108,750 tons (Green America, 2017). The U.S. uses about 68 million trees every year to produce paper and paper products (The Paperless Project, 2014).

Paper is a versatile material used in many aspects of human life. For example, paper can serve as a mere wrap or container that protects or preserves other substances. Paper can also form human symbolic artifacts on its own, without the participation of other sign systems; the art of paper folding and the Chinese art of paper cutting are good examples. These paper arts are themselves symbolic and iconic sign systems that people use to convey many layers of meaning. Chinese paper cutting, for example, does not only represent the image it depicts; it also carries the meaning of good wishes, a layer of meaning given by convention, by the cultural encyclopedia.

To be clear, what I mainly want to discuss in this article is paper as a medium, that is, as a perceptible material substrate for tokens of static, two-dimensional visual sign systems such as written languages and images.

A little history

Sign systems like written languages and paintings have been invented and employed by human beings as media of communication since the dawn of civilization, but paper did not become a universal medium until the Chinese invented it in 105 AD (Stavrianos, 1998). Before settling on paper, those two-dimensional sign systems wandered through various other material substrates. Around 3000 BC, people in the river plains of Mesopotamia inscribed written language on clay, and Egyptians discovered papyrus as a portable but unstable writing surface. Around 1500 BC, the Chinese used bamboo strips linked with thread to create bulky scrolls that stored information in an ancient form of Chinese characters.

People around the world were adept at making use of local materials. Wax, leaves, and wood were all once considered handy materials for storing written information, from the 5th century BC onward. People began to write and draw on parchment after it was invented in the Mediterranean region during the 2nd century BC.

Then paper stepped onto the stage of history. It made its way from Asia through the Arab and Muslim world and eventually to Europe. By the 15th century, paper had become common in Europe. In the 19th century, paper went through a major revolution: its main ingredient switched from rags to wood pulp to cope with the greater demand brought by economic prosperity (historyworld.net, n.d.).

Since then, paper has been used by people all around the world, across cultures that employ various sign systems. Different languages, different vocations, and different walks of life all mean different sign systems, yet all of these static, two-dimensional visual sign systems conform to paper, their common perceptible material substrate. Paper's dominance held until new technologies began to shake it.

Affordances of paper

Affordance is a concept adopted from cognitive psychology; it refers to the actions and interpretations that something makes possible (Norman, 1999). Compared with earlier materials that bore the same types of sign systems, paper has many advantages, and so it offers more affordances.

Firstly, paper is light compared with clay and bamboo strips. There is a Chinese idiom, 学富五车, that translates literally as "one's mind bears more knowledge than could be contained in five cartloads of books." The expression dates back to a time when books in the form of bamboo strips were so heavy that they could only be carried around on horse-drawn carts. Secondly, paper is thinner and more flexible than papyrus or parchment. Paper is also far less expensive than parchment made from animal skin, so it could be mass-produced and became available to people throughout society.

These physical advantages give paper more affordances than earlier materials as a communication medium meant to spread sign systems through time and space. Its lightness and thinness make it easy to carry around: mail carriers, for example, deliver letters to their intended destinations, a practice still in use today, though in more efficient ways than in ancient times. Paper's flexibility reduces the risk that the structures of the sign systems residing on it will be compromised.

In terms of practical functions, paper serves mainly as a display interface connecting two systems: in most cases, a two-dimensional sign system and the human cognitive system that recognizes signs. Books are a good example. Along the path of technological development, the sign systems in books were first handwritten (or drawn) and later printed, but this progress has not changed the properties of the interface.

Paper also serves as a handy tool for human beings to offload some of their cognitive processes (Dror & Harnad, 2008). A journalist takes notes on a notepad during interviews to keep a record of important facts and quotes. Someone heading to the supermarket writes a shopping list on a scrap of paper before setting out, so that he does not need to keep reciting to himself all the things he wants to buy. A student learning mathematics is used to listing equations or drawing diagrams on scratch paper. A delicate painting always starts with a rough sketch. In fact, this concept of quick cognitive offloading has been adopted by new technologies: in Processing, a Java-based programming language, programs are referred to as sketches, in the spirit of quick graphics prototyping (Shiffman, 2015).

Moreover, because of paper's physical properties, like lightness and flexibility, it can easily be attached to other objects, so it is frequently used to label things. Tags attached to clothes on a shopping mall's racks indicate prices and other information buyers need to know. Paper boxes are used not only to contain objects but often also to describe their contents. The concept of labeling has likewise been adopted by contemporary technologies, e.g., the hashtag function in social media and various other forms of metadata.

New technologies

Nowadays, paper's position as a major medium of human communication is being challenged by devices that can digitally remediate the types of sign systems paper affords.

The computer is the most important example. The computer was designed under the concept of the metamedium, a medium for other media (Manovich, 2013). It can afford not only the static, two-dimensional sign systems that paper can bear but also more complex time-based media like audio and video. Digital remediation of tokens from existing sign systems also allows users to search and compare patterns more easily, which is certainly one of paper's constraints. More importantly, the computer lets users manipulate sign systems, editing and rearranging rather than merely composing and receiving, by enabling symbols for action, that is, symbols for controlling other symbols (Irvine, 2014). Users are also free to offload some of their cognitive processes onto digital applications like the calculator and the notepad, and the automation of repetitive or standardized processes makes things much easier. Data can likewise be attached to other data, like tags to clothes, for explanation or description. Links can bring together groups of data from different locations, as Vannevar Bush envisioned for the Memex, an imaginary precursor of the modern computer (Bush, 1945).
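
To make "symbols for action" concrete, here is a minimal sketch in Python; it is my own toy illustration, not drawn from any of the cited authors. The string stands for tokens of a familiar sign system, and the functions are symbols that act on those other symbols, enabling the searching and rearranging that paper cannot afford.

```python
# A toy illustration (mine, not from the cited readings) of "symbols for
# action": the functions below are symbols that operate on other symbols,
# which is precisely what marks on paper cannot do.

text = "paper is a lasting medium"
tokens = text.split()  # re-tokenize the sentence into word symbols

def find(word, tokens):
    """Search: return the positions where a token matches."""
    return [i for i, t in enumerate(tokens) if t == word]

def rearrange(tokens, order):
    """Edit: produce a new arrangement of the same tokens."""
    return [tokens[i] for i in order]

print(find("medium", tokens))                  # [4]
print(" ".join(rearrange(tokens, [0, 1, 4])))  # "paper is medium"
```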

E-readers like the Amazon Kindle are another significant example, for they are designed to imitate the look and feel of physical books, magazines, and newspapers. The electronic ink and the one-handed design are employed by the Kindle to conform to the human form factor; the product tries hard to fulfill human beings' habituated expectations, in this case the expectations of reading printed books. But it can also go beyond printed books. While reading an e-book on a Kindle, users can create and share notes, look up words in dictionaries, adjust the font and size of the text, and search for particular content (Amazon, 2017). Beyond merely interpreting given signs, the e-book allows users to do some manipulation of their own. To sum up, users can quickly get familiar with the product by drawing obvious inferences from its detectable features, using their cultural encyclopedia, and they can also make use of its additional affordances to make the experience more enjoyable and efficient.

Why haven’t they totally replaced paper yet?

New technologies like the computer and the e-reader have introduced many new affordances to the perceptible material substrates that display two-dimensional sign systems, but paper is still widely used because new technologies have also brought about new constraints.

One of the most significant is that electric power is indispensable for the digital remediation of the sign systems that appear on paper. If users want to access data stored online, an Internet connection is also needed. Robinson Crusoe on an uninhabited island would not choose a computer over a book, even though the computer could potentially give him access to all existing human knowledge. This is not the case for paper: once a book is produced, it can keep displaying its sign systems for a very long time, until its material fails, without any additional power input, though the content and form of those sign systems are fixed. The constraints of new technologies mean that paper as a display interface cannot be replaced under certain circumstances, such as limited power or no Internet access. We are still used to carrying a book with us on the metro or a long flight.

Besides the constant need for power, paper has many other advantages over the computer, smartphone, or tablet as a tool for offloading cognitive processes. When we want to take quick notes, we can just grab a piece of paper and a pen and start writing; we do not have to pull out a device, turn it on, and launch a particular application first. When the writing is done, we do not have to go out of our way to save it. (Everyone who uses electronic devices for sign-system manipulation has felt the pain of losing hours of hard work simply because they forgot to save.) If we want to take our notes with us, we just tear the page from the notepad, fold it, and put it in a pocket. With electronic devices, by contrast, we have to carry the whole device, which is certainly bulkier than a piece of paper, because displaying and interpreting the digitally remediated tokens requires specific hardware and software. And we certainly cannot fold the devices to fit them into a given amount of space, given the current limitations of hardware.

When we are scrawling on paper, we can write or draw any diagrams, shapes, or charts we wish, with few restrictions. We can even use shorthand recognizable only to ourselves, as long as it remains interpretable for future reference. We cannot do this with computers or other current electronic devices. In Microsoft Word, for example, we can insert various kinds of charts into an article to illustrate our points, but they are all predefined and fixed by the software; using this function, we can only follow existing patterns, not readily create our own. Along the way of the computer's development, scholars have proposed that the computer serve as a platform on which users develop their own tools for manipulating sign systems (Kay & Goldberg, 1977), but this vision is yet to be accomplished.

Different perceptible material substrates for the same type of sign system also have unintended consequences. Firstly, the medium is the message: the "message" of any medium or technology is the change of scale, pace, or pattern that it introduces into human affairs (McLuhan & Lapham, 1994). The medium into which a type of sign system is remediated is therefore not neutral; digital media, for example, have brought about information overload and fragmented reading patterns (Liu, 2005). Secondly, a sign remediated by pen on paper carries the user's idiosyncrasies, which are recognizable but do not compromise the original communicative intention. Handwriting, for example, is one's own way of re-tokenizing the types of a written language, so it always enables one more layer of interpretation than the same types of signs digitally remediated.

The Amazon Kindle and other e-readers have done a good job of imitating printed books, but not a good enough one: there are still aspects that do not conform to the human form factor. For example, the refresh rate, the time the device takes to turn to the next page, prevents users from flipping through the pages, which they are inclined to do while reading a printed book.

Conclusion

Contemporary technologies still have a long way to go before totally replacing paper, but I do think they stand a fair chance, for some of the technical problems are on their way to being solved. Further development of solar energy technologies might make electronic devices as self-sufficient as paper. Foldable electronic screens are being developed (Gibbs, 2017). With the remaining constraints resolved, I see no reason why future computers would not cover all the affordances of paper.

But the more important question lies in human beings' habituated expectation of paper as the perceptible material substrate for static, two-dimensional sign systems. As people born into the digital age gradually replace those who immigrated to it, human beings will become more and more used to digital remediations of the familiar types of sign systems, remediations that bear more affordances, just as people once shifted from earlier writing materials to paper.

Works cited

Green America. (n.d.). Retrieved December 15, 2017, from https://www.greenamerica.org/

Facts about paper: The impact of consumption. (n.d.). The Paperless Project. Retrieved December 15, 2017, from http://www.thepaperlessproject.com/facts-about-paper-the-impact-of-consumption/

Stavrianos, L. (1998). A Global History: From Prehistory to the 21st Century (7th ed.). Upper Saddle River, NJ: Pearson.

History of writing materials. (n.d.). HistoryWorld. Retrieved December 14, 2017, from http://www.historyworld.net/wrldhis/PlainTextHistories.asp?historyid=aa92

Norman, D. A. (1999). Affordance, conventions, and design. interactions, 6(3), 38-43.

Affordance-Interface-Semiotic-Intro.pdf. (n.d.). Retrieved October 30, 2017, from https://drive.google.com/file/d/0Bz_pbxFcpfxRLWlnWkRoVGpZY2s/view?usp=sharing&usp=embed_facebook

Dror, I., & Harnad, S. (2008). Offloading cognition onto cognitive technology. John Benjamins Publishing.

Shiffman, D. (2015). Learning Processing: A Beginner's Guide to Programming Images, Animation, and Interaction (2nd ed.). Amsterdam: Morgan Kaufmann.

Manovich, L. (2013). Software Takes Command (INT ed.). New York; London: Bloomsbury Academic.

Irvine, M. (n.d.). Key Concepts in Technology: Week 7: Computational Thinking & Software [Video]. Retrieved from https://www.youtube.com/watch?v=CawtLHSC0Zw&feature=youtu.be

Bush, V. (1945, July). As We May Think. The Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

Kindle Paperwhite E-reader – Amazon Official Site. (n.d.). Retrieved December 16, 2017, from https://www.amazon.com/Amazon-Kindle-Paperwhite-6-Inch-4GB-eReader/dp/B00OQVZDJM/ref=sr_1_1?ie=UTF8&qid=1513431152&sr=8-1&keywords=kindle

Kay, A., & Goldberg, A. (1977). Personal dynamic media. Computer, 10(3), 31-41.

McLuhan, M., & Lapham, L. H. (1994). Understanding Media: The Extensions of Man (Reprint ed.). Cambridge, MA: The MIT Press.

Liu, Z. (2005). Reading behavior in the digital environment: Changes in reading behavior over the past ten years. Journal of Documentation, 61(6), 700-712.

Gibbs, S. (2017, September 12). Samsung plans to sell a Galaxy Note with a foldable screen in 2018. The Guardian. Retrieved from http://www.theguardian.com/technology/2017/sep/12/samsung-galaxy-note-foldable-screen-2018-smartphones

Google Art Project–Continuation of “The Museum Idea”

In The Voices of Silence, Malraux used photographic reproductions of artifacts to form an interface that gives the general public access to fine art. Intentional or not, the Google Art Project is an attempt to continue Malraux's "museum idea," an idea that can be traced back to Morse's meta-painting Gallery of the Louvre.

Like The Voices of Silence, the Google Art Project represents artifacts through a two-dimensional interface. The difference is that contemporary technologies make this project far more powerful than what Malraux achieved, bringing "the museum idea" to another level.

Technologies bring new affordances and eliminate old constraints. What Malraux was trying to do was re-conceptualize artifacts within a normative "art history" frame. The photographs of artifacts were therefore selected by the author, and once they were bound together into a book, they were fixed. Readers have to follow the author's lead and take in whatever the author considers the best "prototype" of a genre.

That is not the case with the Google Art Project. The project's website features a "favorite" function that enables users to create their own collections. And because of the massive storage capacity of its servers, the people who put the project together do not have to go through a painstaking selection of prototypes; they can simply add new artifacts to the existing collection.

Moreover, thanks to technological advances, the pictures in the project can do far more than pictures in a book. Users can click on a painting and zoom in to the point where the texture of the canvas is clearly visible. Google Arts & Culture experiments let users view artifacts in ways they might never have thought of before. In addition, an actual museum visit can be simulated on the website.

However, some of the constraints the project inherits from its predecessor cannot be eliminated. However hard it tries, the nature of the project cannot be changed: it re-tokenizes artifacts on a two-dimensional interface. Some forms of art, like sculpture, therefore cannot be fully appreciated. Other constraints include dissociating cultural objects from their material origins and estranging works from their original functions, which can cause misinterpretation.

From the private collections of the upper class, to the museum, to "the museum idea," and now to recent attempts like the Google Art Project, art has become more and more democratic, as Malraux and Morse intended. Minimizing cost and giving more people access to fine art is another advantage of the project over its predecessors, since the book The Voices of Silence sells for more than 40 dollars on Amazon.

The paradox remains: the democratization of art weaves it more tightly into the social and cultural encyclopedia, while the different forms of art are further homogenized and dislocated from their original functions. As art becomes more and more symbolic, it may be hard for people to remember that there was once a time when the word "art" did not mean anything.

Semiotics: a way of inquiry into CCT

From what we have learned thus far this semester, I think it is fair to say that semiotics is a way of inquiry into all three components of our program: communication, culture, and technology.

In the area of communication, we find that people trade ideas and thoughts by employing sign systems shared within a community. Along the course of the development of human communication, we have made major inventions that effectively enhance its efficiency: languages, paper, and printing, then radio, telephone, television, and the computer. But the truth is that the vibration of particles in the air does not itself convey any meaning, nor does ink on paper or electricity in a circuit. It is the shared sign systems, e.g., languages and gestures, established by convention among communities of people, that convey the meanings enveloping people's ideas and thoughts. That is what the signal-code-transmission model of information introduced by Shannon leaves out about meaning transmission. Since we are a symbolic species, sign systems can never be left out of the process of communication.

Sign systems can only function when a group or community of people has a shared, collective, intersubjective cognitive encyclopedia. A distinct culture emerges when enough shared sign systems are employed: a distinct language, a certain way of living, a style of painting or architecture, and so on. If we look around, we find many signs that embody the culture we are in. If one does not possess the collective knowledge necessary for "decoding" the signs, which is the case when a foreigner first comes to a country, one encounters what is called "culture shock."

It seems to me that technologies, especially information and communication technologies, are usually intended to augment the symbolic capacities of human beings. The most significant example is undoubtedly the computer. Computers allow people to offload some of their symbolic processes, and they have become part of their users' extended cognition. For example, people do not have to memorize everything; they can simply store it in computers, just as they used to write things down on paper or carve them onto cave walls. This shows that computers also belong to the symbolic continuum of technologies, which can be inferred from the revised system of cognitive phases proposed by Renfrew. The way we design technological artifacts has everything to do with our symbolic capacity, as can be seen in the four key affordances of digital interactive interfaces explained by Murray and in the concept of the "metamedium" brought up by Kay. Moreover, the ways we wield technologies are also symbolic and conventional, e.g., programming languages.

From the semiotic point of view, we are surrounded by signs. Using this way of inquiry may help us find things that cannot be found from other points of view.

Computer as Metamedium

Nowadays I cannot imagine a single day without using computers (including my laptop and smartphone). Every day, I encounter different kinds of media through those two machines. I deal with text when I do my course readings. Various images keep popping up while I browse the Internet. Video and audio reach my cognitive system through computer interfaces when I watch basketball games or Rick and Morty. Moreover, text, image, audio, and video are all employed when I use instant messaging software on my smartphone to contact my friends in China.

These symbolic activities that we take for granted in daily life are possible because of a set of concepts established by Alan Kay and other pioneers of computer science in his generation. Kay thought of the computer as a metamedium: a platform on which all possible media for human expression and communication can be implemented, remediated, and manipulated, in contrast to the earlier idea of the computer as a machine designed to serve one fixed set of purposes. And here we are: I can manipulate all kinds of media through my laptop and smartphone, and in most cases I do not need to buy a new computer to deal with a new kind of media; I just need to install some software.

Kay also developed the graphical user interface to facilitate users' learning, discovery, and creativity. His design was influenced by cognitive psychology and aimed to engage three human mentalities, the enactive, the iconic, and the symbolic, which corresponded to the mouse, to icons and windows, and to programming languages, respectively. These three types roughly coincide with Peirce's typology of signs, though Kay used "enactive" instead of "indexical." The computers we use today basically inherit and further develop those features.

However, some visions have not been achieved. Both Kay and Engelbart envisioned a scenario in which users create, share, and alter their own tools for media manipulation, but today people tend to use computers to consume media created by others. Programming literacy is still possessed by only a small fraction of people. The reasons for this setback may lie beyond technology, in economics or society. Kay envisioned his Dynabook as a medium for learning, experimentation, and artistic expression not just by adults but also by children of all ages. But from what I have observed of children's learning today, the computer is more a distraction than a useful instrument. One reason, I think, is that, at least in China, the average student is taught to use certain pieces of software rather than the systematic computational thinking that Jeannette Wing would suggest. A second reason is that students consume media created by others more than they create their own.

As a student who is learning and trying to do research, the interface I want to propose concerns note-taking, since reading is one of my major tasks. For the time being, I read paper books or read on the computer while taking notes in my notebook with a pen. I wonder whether there could be more efficient ways: for instance, note-taking software that stores my excerpts combined with my own thinking as a whole and, at the same time, links every point in my notes back to the place in the original material where I generated it, so that I can refer to it repeatedly later. This idea resembles the "hypertext" proposed by Ted Nelson, of which we usually use only one type, the "chunk style" hyperlink from page to page.
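
To make the proposal concrete, here is a minimal sketch in Python of the data structure such a tool might keep. No real application is being described; every name below is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the proposed note-taking interface: each note
# stores my own thought together with a link back to the exact place in
# the source where it arose, a finer grain than a page-level hyperlink.

@dataclass
class SourceLocation:
    document: str  # e.g., a file name or URL
    offset: int    # character position of the excerpt in the source

@dataclass
class Note:
    excerpt: str            # passage copied from the source
    thought: str            # the reader's own commentary
    origin: SourceLocation  # the link back into the original material

note = Note(
    excerpt="programs are referred to as sketches",
    thought="Offloading also shapes how programmers prototype.",
    origin=SourceLocation(document="learning_processing.txt", offset=1042),
)

# Later, the note can take me back to its exact context, not just its page.
print(f"Revisit {note.origin.document} at character {note.origin.offset}")
```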

What has been achieved and what hasn't

The readings for this week take us back into the history of the development of computation and show us where the original ideas behind the computational technologies we now take for granted came from. In "As We May Think," a milestone along this way, Vannevar Bush proposed many visions, most of which have been achieved and become common in our daily life.

Bush suggested that the size of the substrate holding information would shrink until all the information in the Encyclopaedia Britannica could be held in a matchbox-sized object. Sure enough: my portable hard drive, which weighs a little more than 100 grams and is about the size of a soap case, can hold 2 terabytes of information, roughly equivalent to all the information held by the books in Georgetown University's libraries.

Bush also envisioned a researcher freeing his hands by talking to a machine that does the recording. This too has come true, now that there are programs that recognize human voices and enable speech input, e.g., Siri on the iPhone, though they are far from perfect.

Perspicaciously, Bush predicted that computation would be a manipulation of symbols. The notion was elaborated more than half a century later in "Computation Is Symbol Manipulation" by John Conery. In that article, Conery describes computation as a sequence of state transitions, and he defines a state as a set of symbols. Bush also asserted that a new symbolism must precede the process of problem solving, and he happened to be right again: programming languages are the prerequisite for the development of any program.
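
Conery's description is easy to demonstrate. The sketch below is my own toy example in Python, not Conery's: it treats a string of "0"s and "1"s as a state and increments the binary number the string encodes, so the whole computation is nothing but rewriting symbols from one configuration to the next.

```python
# Computation as state transitions: each state is a configuration of
# symbols, and one step rewrites it into the next. (My toy example.)

def increment_binary(state: str) -> str:
    """One transition: rewrite the symbol string to encode n + 1."""
    bits = list(state)
    i = len(bits) - 1
    while i >= 0 and bits[i] == "1":  # the carry propagates leftward
        bits[i] = "0"
        i -= 1
    if i >= 0:
        bits[i] = "1"
    else:
        bits.insert(0, "1")           # overflow: the symbol string grows
    return "".join(bits)

state = "1011"                 # these symbols mean eleven only by convention
state = increment_binary(state)
print(state)                   # "1100", i.e., twelve
```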

Bush thought that selecting the information people need is difficult because the human mind operates by association, whereas information is usually stored in alphabetical order. To address this problem, he offered the solution of "building a trail" between two pieces of information, which is part of what hyperlinks do nowadays. We go even further: beyond the alphabet, we tag data with various kinds of metadata so that we can associate them in different patterns and consult them conveniently.
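
As a small, hypothetical illustration of consulting by association rather than by alphabetical order, the Python sketch below tags items with metadata and follows shared tags the way Bush's "trail" links related records. The item names and tags are invented.

```python
# Items carry metadata tags; a query walks from one item to its
# associates through shared tags instead of through an A-to-Z index.

notes = {
    "memex": {"tags": {"bush", "hypertext", "association"}},
    "xanadu": {"tags": {"nelson", "hypertext"}},
    "card catalog": {"tags": {"alphabetical", "library"}},
}

def related(item):
    """Return the other items sharing at least one tag with this one."""
    tags = notes[item]["tags"]
    return [name for name, record in notes.items()
            if name != item and tags & record["tags"]]

print(related("memex"))  # ['xanadu'], reached through the 'hypertext' tag
```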

However, not everything Bush envisioned has been achieved. For example, he imagined a direct path between the electric currents in computer circuits and the biochemical electric currents in the human brain. Achieving this vision would mean a great deal: it would totally change the human-computer interface, and even our cognitive capacities, and it would be a huge step toward artificial intelligence. Moreover, even though we have now developed machines far more powerful than the Memex, we still cannot say we have fulfilled Bush's intention for such machines, namely to transmit and review the results of research efficiently, because there are always too many results. Scholars make painstaking efforts just to stay up to date in their own fields. It is quite a paradox that the problem upgrades whenever the solution to it improves.

Programming Languages and Human Mind

Programming languages may have a rather "geek" tone to outsiders. Before I learned something about them, I tended to think of programming languages as languages from a very different world, where weird people huddle up and create something that is the opposite of human. But now I find, from Mr. LeMasters's Expressive Computation course this semester and from this week's readings, that programming languages have everything to do with our ingenious minds.

There is no doubt that programming languages are highly symbolic meaning systems, for the state of an electric circuit would mean nothing without human interpretation using a shared, collective "dictionary." Symbols in these systems can be divided into three categories. The first includes symbols for meanings: numbers, strings, or other values that the programming language can recognize. The second includes symbols that refer to or represent other symbols, like the various kinds of variables that can be defined to store values. We can find equivalents of these two categories in natural languages, which is a good indication of the symbolic processes that happen in the human mind: we can use languages not only to mean things but also to describe languages themselves, e.g., in linguistics.

But there is a third kind of symbol in programming languages that has no counterpart in natural languages: symbols intended to perform actions on other symbols. Why is this kind exclusive to programming languages? Because we do not need language to tell us how to perform a cognitive process. When we pay the check in a restaurant, we do not need to say aloud, "The bill is 10 dollars, the tax is one dollar, and the tip should be 2.2 dollars," in order to put the right amount of money on the table before we leave. The reason programming languages have this third kind of symbol, I think, is that software does not think on its own; every decision it makes is prearranged, and it collapses when it runs into something new. In contrast, the cognitive processes of the human mind take place automatically and naturally, and new things can be absorbed into existing symbolic systems.
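
The three kinds of symbols can be seen side by side in a few lines of Python, using the restaurant bill from the paragraph above. The code is my own illustration; only the figures come from the text, with the tip read as 20 percent of the after-tax total, which reproduces the 2.2 dollars.

```python
# The three categories of symbols described above, in one small program.

bill = 10.0      # 1) symbols for values: literals such as 10.0 and 0.20
tax_rate = 0.10  # 2) symbols for other symbols: the names 'bill' and
tip_rate = 0.20  #    'tax_rate' stand in for the values they hold

def total(amount, tax, tip):
    # 3) symbols for action: 'def', '*', '+' and the call below tell the
    # machine how to operate on other symbols, a step the human payer
    # performs without spelling it out in language.
    return amount + amount * tax + amount * (1 + tax) * tip

print(round(total(bill, tax_rate, tip_rate), 2))  # 13.2 = 10 + 1 + 2.2
```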

However, since modern computation was developed by many ingenious minds and is the outcome of centuries of cumulative human symbolic thought, we can learn a thing or two from so-called computational thinking. As Jeannette Wing states in "Computational Thinking," computational thinking means reformulating a seemingly difficult problem into one we know how to solve. It is a way of thinking, a philosophy, rather than an unreachable cutting-edge technology. This definition reminds me of the method Mr. LeMasters brought up in his class: he suggested that before we tried to write any code, we should write some "pseudo code," a sketchy outline in natural language of the symbolic processes of the program we were trying to develop. From this perspective, computational thinking is rather "human" and fundamental. No wonder Jeannette Wing would suggest early and popular education in computation.
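
Here is a hypothetical instance of that method: first the natural-language outline, then the Python that grows out of it. The problem, finding the most frequent word in a sentence, is my own choice, not Wing's or LeMasters's.

```python
# Pseudo code first, as suggested in class:
#   for each word in the sentence
#       count how often it appears
#   report the most frequent word

from collections import Counter

def most_frequent_word(sentence):
    counts = Counter(sentence.lower().split())  # count each word
    return counts.most_common(1)[0][0]          # take the top entry

print(most_frequent_word("the map is not the territory"))  # "the"
```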

Notes Application on iPhone

After countless updates, the Notes application on the iPhone is now polished and capable of many different tasks. It is one of my favorite apps.

Suppose I want to remind myself that I have a semiotics class on Thursday afternoon, so I create a note. First, I type the full name of the class on the first line, and then the time on the second, all with the phone's virtual keyboard. What I am doing is offloading my memory onto an external medium that exists beyond my brain. If I have the habit of using the Notes app to relieve the burden of my memory and of checking it regularly, the app becomes part of my extended mind, through which my cognition reaches into the environment around me and creates an affordance not achievable by my mind alone. My mind can forget things, more than occasionally, but as long as I check my notes regularly, things will get done in time. The fact that I create notes to remind my future self also shows that cognition can be distributed over time.

If you look closely at the picture, the background of the note shows the grain of actual notebook paper, which is the object of this app if the app is seen as a representamen in Peirce's terminology. The paintbrush function also tells us that the designers clearly wanted to borrow some properties of an actual notebook. Why imitate the properties of an actual notebook? Perhaps the designer has learned some semiotics and knows that people sometimes shift back and forth between the properties of representations and the properties of the things represented: we use the paintbrush in Notes as well as the virtual keyboard.

After typing some notes, I suddenly decided to calculate the amount of time we will have spent in this class over the whole semester. The multiplication is too complex to carry out in my head alone, so I used the paintbrush function to help my mind with the calculation (though, of course, the calculator app would be a better option). In contrast to the typed notes, which are only records of my thoughts for future reference, the written-out equation is a crucial part of how I did the calculation. It is another example of extended cognition.
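
To make the arithmetic concrete (the note itself is not reproduced here, so these numbers are hypothetical): a class that meets once a week for 2.5 hours across a 14-week semester adds up to 2.5 × 14 = 35 hours, an easy product to work out on paper but an awkward one to hold entirely in the head.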

If I want to share my note with my classmates, to remind them of the class and tell them to cherish the happy time we have in it, I can touch the button in the upper right corner and send the note in any of several ways. When my classmates receive it, I will have offloaded cognition onto other individuals by way of technology, which shows that cognition can also be distributed among the members of a social group. Though they use different brands of cell phones, and my handwriting is extremely ugly, they will perceive the information conveyed by the note because of our shared cognitive scaffolding: the English language and numerals.

A Reflection on Daily Routines

As inhabitants of the infosphere (to borrow Floridi's term), we are so used to employing text messages, e-mails, and other modern technologies to enhance our everyday communication that we are almost unaware of the informational and symbolic processes involved in these activities. A reflection on our daily routines, along with the concepts from this week's readings, might help us probe into and better understand what we do every day but hardly bother to ask how.

Take text messages as an example. We take out our mobile phone and compose the message from letters, punctuation marks, and spaces, all of which, from the phone's perspective, are sets of binary bits. When we finish composing and press "send," the phone transmits signals containing those sets of bits to another mobile phone by way of the cell service providers. The phone on the other end then receives and "decodes" those bits back into letters, punctuation marks, and spaces, so that the recipient can read the message. This process conforms to the communication model elaborated by Shannon and Weaver, which features an information source, a transmitter, noise, a receiver, and a destination in a conduit form.
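
The encode-transmit-decode path can be sketched in a few lines of Python. The radio link itself is omitted and the message is invented; UTF-8 is one real convention for mapping characters to bits.

```python
# Sender's side: characters become bytes, bytes become a string of bits.
message = "See you Thursday"
encoded = message.encode("utf-8")
bits = "".join(f"{byte:08b}" for byte in encoded)
print(bits[:16])  # '0101001101100101': meaningless without the shared code

# Receiver's side: the same convention applied in reverse re-tokenizes
# the bits into the letters the recipient reads.
decoded = bytes(int(bits[i:i + 8], 2)
                for i in range(0, len(bits), 8)).decode("utf-8")
print(decoded)    # 'See you Thursday'
```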

The reason we can understand the meaning of a text message is, firstly, that the signal system mobile phones use enables the text on one end to be reproduced, or "re-tokenized," exactly or with little loss on the other, so that we can convey what is intended to another person. The second reason, which I think is more important, is that text messages rely on written language, a presupposed, shared, and collective symbolic system, to convey meaning.

As many articles state, meaning is not a property of electronic signals. Strings of "1" and "0" do not mean anything concrete to us human beings. We can extract meaning from the text on the screen because we recognize it as tokens of types we are already familiar with. At the end of the day, meaning lies in conventions: in languages, and in the ways we translate languages into binary bits recognizable by computers and other devices.

Email messages are similar to text messages, encoding text into binary bits and decoding them on the other end. Social media messages combining text, images, and emoji are a little more complicated, but the core processes that ensure transmission and understanding are still the same: binary bits and presupposed symbolic systems. Images, which belong to another system we are familiar with, can also be translated into binary bits and reproduced pixel by pixel. The meaning system of emoji is a little trickier, but the conventions for using them are gradually forming.

Therefore, the shared symbolic system is what was left out of the communication model, and it is what completes the model.

Power of Idioms

Idioms are frequently used in both of the languages I know a little of: Chinese and English. In English, people say "break a leg" to an actor who is about to step on stage, and they describe the weather with the phrase "it's raining cats and dogs," two examples that would be inexplicable to a foreigner at first sight. In Chinese, idioms fall into different types, of which the most common consists of phrases of four characters. In both languages, idioms have deep roots in the cultural environment from which they were born. Foreigners cannot understand idioms, or use the local language "idiomatically," because they are not familiar with the culture.

Idioms are powerful because they go beyond the literal or original meaning and introduce another layer of figurative connotation, which is generally agreed upon, like a language itself. People are so used to idioms that one even has to point it out when using an idiom literally. For example, if I were rowing a boat with a friend on the Potomac River, I would say to him, "Now we are literally in the same boat."

Peirce would probably categorize idioms under the term "argument," for the ground apprehended by the interpretant is symbolic, which means, in a semiotic sense, that the interpretation of an idiom (like "in the same boat") as a certain meaning (being in the same situation) is conventional, shared, and learned. Some might argue that the literal meaning of "in the same boat" bears some resemblance to its figurative meaning, and that the idiom is therefore iconic. By that reasoning, however, thousands of other phrases, whether ever uttered by a human or not, could replace it, say, "on the same tree." So it is convention and the shared knowledge of a phrase's figurative meaning that make it an idiom. That does not mean idioms contain no iconic or indexical elements; I deem the figurative meaning (the interpretant) of an idiom symbolic because convention plays the more important role in its formation, use, and spread.

Another interesting thing I found is that there is a Chinese idiom with the same meaning as "in the same boat," which translates into English as "two grasshoppers tied to the same string." As we can see, two idioms with the same meaning in two languages use entirely different expressions, which is further proof that the pairing between an idiom and its figurative meaning is arbitrary and conventional. Moreover, the shared meaning reveals that language is not essential to a thought or an idea, which is probably another point Peirce would agree with.

Try Out Terms and Concepts on Film/Video

Films and videos form a symbolic system in which directors and producers try to convey meanings through the "language" of shots, with or without the help of actual language, which is another universally accepted and well-studied symbolic system. Often, shots without language move people greatly. In the film The Revenant, there are shots of the hero, played by Leonardo DiCaprio, walking through a vast icy wilderness, ragged and tired. From the context, viewers can perceive in those shots his resolve to seek revenge (or the Oscar, in real life).

In the symbolic system of film and video, the display of different kinds of light on the screen is the perceptible component, the material form that human beings can interact with, which all signs and symbols require. According to Peirce's typology of signs, films and videos are predominantly iconic, for they bear more resemblance to our real life than any other medium.

To break it down, I think the shots in films and videos are the minimal meaning elements of this symbolic system, somewhat like the lexicon of a language. They are the minimal units in the system that push the story forward.

Meanwhile, the arrangement of shots can be compared to syntax. Though the rules for arranging shots are far less rigorous and less generally agreed upon than syntax, and more subject to individual users, there are certain patterns that are widely used. I came across some of them while working as an intern editor at a news organization, such as long shots followed by mid-shots and close-ups.

The meaning that directors or producers want to deliver to viewers through a set of shots can be viewed as the counterpart of semantics in language.

Thus, Jackendoff's "parallel architecture" can be applied here: the shots, the arrangement of shots, and the meanings fit into the tripartite model, with the three modules interfacing with one another and outputting the product we eventually see (and hear).

However, there are certain limitations to applying the model of language to film and video, because things smaller in scale than a shot that certainly carry meanings are not accounted for. For example, the costumes in Game of Thrones are delicately designed with the sigils of the different houses. They are undoubtedly symbols, but there is no place for them in the model. Other elements, including the music and the natural languages used in films and videos, should also be considered.