
What's New About New Media? — The Smart Watch as an Example

Xiaoyi Yuan

Abstract: "New media" has been a buzzword for decades. However, we rarely ask ourselves what is new about it. Traditional approaches in communication studies have treated new media as revolutionary technologies that no older media can match. Through the example of the smart watch as new media, this paper takes an approach that merges metamedia theory and distributed cognition to explain the newness of new media beyond simplified generalizations of new media features. Moreover, this approach opens up possibilities for interdisciplinary research on interesting issues of new media and human cognition.

 I. Introduction

What strikes me most throughout the semester is the question: what's new about new media? A significant body of theory has been devoted to explaining new media and how it has revolutionized our lives. Traditional communication and media disciplines define new media as digital media facilitated by computer technology, the Internet, and digitalization. In that traditional sense, computers are new media because "the key to the immense power of the computer as a communication machine lies in the process of digitalization that allows information of all kinds in all formats to be carried with the same efficiency and also intermingled" (McQuail, 2010).

Traditional approaches treat new media as the latest in a succession of older media: newspapers, books, film, and broadcasting. This approach studies media as individual artifacts, focuses on how new media differs from old media, and emphasizes how revolutionary new media is. What has been taken for granted, however, is the interdependency between "old media" and "new media": in other words, how new media is always a simulation and modification of old media. We also rarely ask ourselves what a medium is and what is fundamentally new about new media.

This article uses a systems approach and explores the definition of "new media" from the perspective of metamedia theory and distributed cognition. Before we launch into this approach, however, we have to recognize that "media" has historically referred to a broad range of concepts: "the term media has become a master metaphor-concept, a reified abstraction, a term used for so many objects, systems, and technologies that its descriptive value seems to work only in marketing and productive development. We talk about 'the media' and generally mean the older idea of 'mass media' or 'mass communications'—radio and TV (broadcast and cable), advertising, 'the press,' or 'news media'—though the media categories continue to morph in the post-Internet, pan-digital 'media environment.' 'The media' (as used in politics and PR) can reflect the older idea of 'the press'…" (Irvine, 2012). In this short paper, we cannot discuss all aspects of new media, so I will mainly focus on what's new about new media technologies and how they create new interfaces for humans to interact with those technologies and with other human beings.

Abstract concepts and theories are better explained and understood through a concrete example. Here I introduce the example of the smart watch and give a brief systematic review of its evolution. The smart watch demonstrates what's new about new media: compared with older models of smart watches, the new models show the rapid development of integrated and interactive technologies.

 

II. The Evolution of Smart Watches

There is no strict definition of what a smart watch is. Usually a smart watch refers to a computerized wristwatch with functions beyond timekeeping. One of the most notable smart watches is the Apple Watch. However, long before the Apple Watch, there were smart watch models developed by other companies. Moreover, the idea of the smart watch has been discussed for decades. In this section, I will give a systematic overview of the evolution of smart watches.

1. The Dick Tracy Watch: An Early Smart Watch Idea Transcending the Era and Medium That Gave It Birth


Figure 1: The 2-Way Wrist Radio in the comic strip Dick Tracy

Dick Tracy is a comic strip written by Chester Gould that debuted in 1931, about an intelligent and highly successful police detective named Dick Tracy. The 2-Way Wrist Radio (Figure 1) worn by Tracy and other members of the police force became one of the most recognizable icons of the comic strip. Detectives wearing this wristwatch could communicate directly with police headquarters by radio.

Even though fictional gizmos should not be included in the evolutionary path of real-life technological development, fictional technologies are always allegories of our longing to fulfill social needs through technology; in this case, the need is met by wearable technology. The smart watch idea in Dick Tracy transcends its contemporary technologies, which reflects the notion that our technologies are based on both ideas and technological development; they are not invented by magical forces.

2. Early Models of Smart Watches: Calculator Watch and Game Watch

Forty years after the cartoon depiction of a fictional smart watch in Dick Tracy, real technologies implemented the idea of integrating functions beyond timekeeping into watches. One of them is the calculator watch (Figure 2), which first appeared in the mid-1970s and was later popularized by Casio. Figure 3 shows the inner structure of a calculator watch in a patent drawing made by the inventor Nunzio A. Luce in 1976. De-blackboxing its material structure reveals how the two functions, calculation and timekeeping, work together (a watch CMOS chip, a calculator PMOS chip, an encoder, and a decoder). In addition to this watch, there were other types of calculator watches in the American patent record, but their basic functions and mechanisms were similar. The introduction of these early smart watches showed that people wanted two media functions merged into a single watch.


Figure 2: A calculator watch developed by Casio


Figure 3: Detailed block diagram of a calculator watch assembly, from Luce's 1976 patent

Another model similar to the calculator watch was the game watch, developed by Nelsonic Industries (Figure 4). Although it appears more advanced than the calculator watch, the inner mechanics of the game watch were at the same technological level and were just a variation on the theme of the calculator watch.


Figure 4: Game watch developed by Nelsonic Industries

3. Current Smart Watches: Apple/Android Smart Watches and Mechanical Hybrid Smart Watches

What's more familiar to us are the smart watches developed by high-tech companies that do not specialize in watch production: Sony, Samsung, Motorola, LG, and Apple. Interestingly, there are also mechanical hybrid watches (Figure 5) that keep the traditional components of a mechanical watch but add a digital display on the outer glass layer. Whether the watch is a hybrid or a purely digital display (such as the Apple Watch), however, its computerized display and Internet/Bluetooth connections require a much more advanced motherboard than its predecessors, the calculator watch and the game watch. The functions of current smart watches are similar to those of our smartphones (one might call them watch versions of smartphones), except that the phone's vibrating alerts are enhanced in the smart watch by haptic feedback (as in the Apple Watch). Since the watch is in contact with your skin all the time, there is one more way for you to receive notifications.

Including this brief history of the "evolution" of smart watches is not meant to demonstrate that smart watch technology develops in a linear way. Quite the opposite: it shows that the old models of smart watches (the calculator watch and the game watch) are not enough to explain how we arrived at the current ones. In the same way, traditional communication education, which introduces old and new media (newspapers, books, broadcasting, film, and Internet/digital media) as separate technological artifacts, is not sufficient to account for the real evolutionary process of new media. Technological and, more importantly, social, cultural, and political dynamics unfold between any old and new medium, just as significant technological advancements happened between the old and current types of smart watches. Figure 6 shows the inner components of the LG G Watch and its motherboard. Compared to the block diagram of the calculator watch (Figure 3), the G Watch's functions are backed by technologies such as integrated circuits (microchips), Bluetooth, long-lasting batteries, an accelerometer, and large memory (512 MB of RAM and 4 GB of long-term storage).


Figure 5: Hybrid watch developed by Kairos


Figure 6: De-blackboxing the LG Smart Watch: the Motherboard

III. What's New About New Media? Merging Metamedia Theory and Distributed Cognition Theory

Are the calculator watch or the game watch "new media"? We might not reach a consensus answer to this question. However, this paper does not aim to address questions at this specific level; rather, the example poses crucial questions at a higher level: how should we define "new media," and what is fundamentally new about it? What are the implications of the new technologies integrated into each smart device? How can we further explain the newness of new media?

The reason I raise this seemingly counterintuitive question is to avoid falling prey to labeling every newly developed medium as "revolutionary." Consider the example of print technology, which some scholars have called the "Renaissance computer": "By 1500, over 280 European towns had some form of printing press. From these presses, books were distributed in unprecedented numbers… The new, capital-intensive print technology of the early sixteenth century was able to produce almost flawless replicas of a given text over and over again. At once, the symbolic power of the book is redefined" (Rhodes & Sawday, 2000). The way we describe the revolutionary features of book media is just like how we describe our current "new" (digital) media: it is unprecedented; it fundamentally changes how we interact; it changes everything.

However, rather than simply recognizing its newness, what is fundamentally "new" about new media? I argue that new media allows digital devices to be "metamedia" that can represent other media and thereby distribute human cognition in a revolutionarily new way. In the rest of this paper, I will review the literature on how other scholars define and describe "new media." Then I will introduce my analytical approach, merging Manovich's metamedia theory and distributed cognition theory to discuss the question: what is fundamentally new about new media?

1. Literature Review: Definitions of New Media in the Late 1990s

New Media & Society, a highly ranked international journal specializing in issues of new media and society, featured several articles on the definition of "new media" in its first issue. That issue was released in 1999, when Internet technology and personal computers were becoming pervasive on a global scale. It included articles by scholars from both the U.K. and the U.S. that delineated approaches to and perspectives on how to define "new media." Those perspectives, interestingly, still constitute the mainstream understanding of new media today.

Roger Silverstone stated that the definition of new media had been ambiguous and assumed (Silverstone, 1999). Scholars such as Flichy and Poster identified several important issues and perspectives worth further exploration as they pertain to the question "what's new about new media": the evolution of the Internet and digital technologies that facilitated the rise of new media (Flichy, 1999) and new media's impact on social interactions (Poster, 1999). Other scholars took a more sociotechnical approach: Ronald Rice argued that we need to focus on the underlying dimensions of attributes available in all communication forms instead of on particular media (Rice, 1999). Sonia Livingstone offered a similar perspective: "What's new for society about the new media? It must locate technological developments within the cultural processes and associated timescale of domestic diffusion and appropriation" (Livingstone, 1999). Many other scholars proposed issues associated with specific social, cultural, political, and economic factors: new media and the information- and knowledge-based economy (Melody, 1999); how new media creates network capitalism (Robins, 1999); new media and democratic politics (Coleman, 1999); new media and public participation (Rakow, 1999); new media and its implications for the future of journalism (Pavlik, 1999); and new media and globalization, i.e., Westernization and the use of English on the World Wide Web (Kramarae, 1999).

During the rise of the Internet and the spread of personal computers from the mid-1980s through the 1990s, studies such as those above set the general agenda for later research and perspectives on new media. This is not to say that such research is not valuable. However, media technologies keep changing with the development of cognitive technologies (artificial intelligence) and wearable technologies (Google Glass, the smart watch) and, more importantly, with the reconfiguration of interrelated social, cultural, political, and technological networks. Confronting these more complicated situations, we should rethink the assumptions taken for granted in mainstream media studies: studying media as individual artifacts and reducing the newness of new media to a set of discrete features. Next, I will merge two conceptual models, metamedia theory and distributed cognition, and argue that new media allows digital devices to be "metamedia" that can represent other media and thereby distribute human cognition in a revolutionarily new way.

2. Offloading Human Cognition in a Better Way: “Metamedia”

If we want to know what is fundamentally new about new media, we should first ask what is not new about it. The question traces back to the interaction between humans and artifacts.

Imagine you need to buy various kinds of groceries at a grocery store. What do you do if you cannot remember everything you want to get? Of course you write down a list! It doesn't matter whether you use a more traditional way (pen and paper) or a newer way (your phone); the point is that external artifacts help you think and remember. That is the theory Andy Clark proposed under the name "extended mind," as opposed to the "brainbound" view.

The “brainbound theory” insists that human cognition depends directly on neural activity alone. “According to BRAINBOUND, the (nonneural) body is just the sensor and effector system of the brain, and the rest of the world is just the arena in which adaptive problems get posed and in which the brain-body system must sense and act” (Clark, 2008). However, Clark believes that media artifacts are in the loop of the human cognition process as extensions of the human mind, “Maximally opposed to BRAINBOUND is a view according to which thinking and cognizing may (at times) depend directly and noninstrumentally upon the ongoing work of the body and/or the extraorganismic environment. Call this model EXTENDED. According to EXTENDED, the actual local operations that realize certain forms of human cognizing include inextricable tangles of feedback, feed-forward, and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body, and world. The local mechanisms of mind, if this is correct, are not all in the head. Cognition leaks out into body and world” (Clark, 2008).

Therefore the shopping list you write is, according to Clark, in the "thinking loop" with our inner cognition. We offload our cognitive processes onto external media artifacts. The discipline called "distributed cognition" makes a similar argument: "Distributed cognition is a scientific discipline that is concerned with how cognitive activity is distributed across internal human minds, external cognitive artifacts, and groups of people, and how it is distributed across space and time" (Zhang & Patel, 2006). Its proponents hold that information-processing tasks require processing information distributed across internal minds and external artifacts. Moreover, external representations are more than just inputs and stimuli to the internal mind: while we interact with machines, we offload and distribute our cognitive processes onto those machines.

Human beings have been distributing cognition onto external artifacts since the invention of language; we could only keep our thinking internal until we could communicate with others through language (Clark, 1998). Even before the invention of writing, human beings kept records by tying knots. Early "media technologies" such as manuscripts and books helped distribute human cognition on a larger scale, and printing technology, for the first time in human history, enabled the mass production of intellectual works. However, whether it is the early book or the more recent film, music recording, or broadcast, human beings interact with these media in a passive way. In other words, we offload and distribute our cognition onto older media technologies passively.

Then what about distributing human cognition onto more advanced artifacts, cognitive technologies, instead of passive media interfaces such as pen and paper? How do newer media artifacts (the smartphone, the computer, the smart watch, Google Glass, virtual reality, or media technologies based on artificial intelligence) revolutionize the way we distribute our cognition?

Here I introduce another important concept: "metamedia," proposed by Lev Manovich. Manovich believes that "new media" is "new" because new properties (i.e., new software techniques) can always be easily added to it. The "metafunction" that new media possesses is revolutionary compared to old media. Manovich also believes that the "meta" feature of new media can help people distribute human cognition: "The prefixes 'meta' and 'hyper-' used by Kay and Nelson were the appropriate characterizations for a system which was more than another new medium that could remediate other media in its particular ways. Instead, the new system would be capable of simulating all these media with all their remediation strategies… Equally important was the role of interactivity. The new meta-systems proposed by Nelson, Kay and others were to be used interactively to support the processes of thinking, discovery, decision making, and creative expression" (Manovich, 2013).

To translate the concepts of metamedia and distributed cognition into a simple example that we experience daily: a huge amount of research has been devoted to proving that we spend and waste too much time on smartphones, the Internet, and computers. However, such research may neglect the fact that smartphones and computers (metamedia technologies) help humans offload and extend their cognition in a "meta" (unlimited) way. As Manovich writes in his book Software Takes Command, "A computer can simulate a typewriter—getting input from the keyboard and arranging pixels on the screen to shape the corresponding letters—but it can also go far beyond a typewriter, offering many fonts, automatic spelling correction, painless movement of manuscript sections…" (Manovich, 2013). A metamedium is open-ended because it can always be used to present other media. A computer or a smartphone does not only have note-taking functions; rather, it is a platform that opens up to more functions and creations. For example, the online Apple App Store, where different software applications can be uploaded and downloaded, significantly extends the materiality and physicality of smartphones, computers, and other "smart devices" in a way that no old media technology can match. "They call a computer 'a metamedium' whose content is 'a wide range of already-existing and not-yet-invented media'" (Manovich, 2013).

Therefore the newness of new media is its metamedium function: it can simulate "old media" functions without limit. New media is the integration of old and new: a "metamedium contains two different types of media. The first type is simulations of prior physical media extended with new properties, such as 'electronic paper.' The second type is a number of new computational media that have no physical precedents… (such as) hypertext and hypermedia (Ted Nelson); interactive navigable 3D spaces (Ivan Sutherland), interactive multimedia (Architecture Machine Group's 'Aspen Movie Map')" (Manovich, 2013).

Thinking about Manovich's definition of new media through distributed cognition and the extended mind: human beings are able to distribute cognitive processes onto new media technologies that (1) simulate the forms of old media; (2) are automated; (3) have new logics (e.g., hyperlinks); and (4) can represent other media without limit. New media by no means lacks history. Quite the opposite: new media is the convergence of old media, and it thus distributes human cognition with new logics and with automation. There is no scientific conclusion on whether this new way of distributing human cognition will change human brains, since compared to the time span of human evolution, new media is just a split second. This new perspective on defining new media, however, opens up possibilities for interdisciplinary research across media studies and cognitive science.

 

IV. Conclusion

This paper analyzes what is new about new media technologies and how the metamedium feature of new media enables humans to offload and distribute cognition in a new and better way. Compared to old models of the smart watch, current models integrate wearable technologies from a more mature market: smartphones. This paper aims to go beyond simplified generalizations of the newness of new media and open up a fresh perspective. A smart watch carries almost all the functions of a smartphone and allows human beings to distribute cognitive processes in a more seamless way. Similarly, we can analyze any other emerging media technology with this merged metamedia and distributed cognition approach. This approach, however, is just a start. It opens up additional research questions, such as: How do new media technologies better distribute human cognition? How do our addictive and repetitive patterns of smart device use relate to human cognition? How do smart devices help young children in their cognitive development? Further research is needed to explore these arenas.

Bibliography:

Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press, 2008. Print.

Coleman, Stephen. “The New Media and Democratic Politics.” New Media & Society 1.1 (1999): 67-74. Print.

Flichy, Patrice. “The Construction of New Digital Media.” New Media & Society 1.1 (1999): 33-39. Print.

Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7.2 (2000): 174–196. Print.

Kenney, Briley. "The Top Hybrid Smart Watches That Merge Classic Analog and 'Smart' Timekeeping Features." Smartwatch.org. N.p., 5 Nov. 2014. Web.

Kramarae, Cheris. “The Language and Nature of the Internet: The Meaning of Global.” New Media & Society 1.1 (1999): 47-53. Print.

Livingstone, Sonia. “New Media, New Audiences?” New Media & Society 1.1 (1999): 59-66. Print.

Luce, Nunzio A. “Electronic calculator watch structures.” U.S. Patent No. 3,955,355. 11 May 1976.

Manovich, Lev. Software Takes Command: Extending the Language of New Media. Bloomsbury Academic, 2013. Print.

McQuail, Denis. Mass Communication Theory. 6th ed. Sage, 2010. Print.

Melody, William H. "Human Capital in Information Economies." New Media & Society 1.1 (1999): 39-46. Print.

Pavlik, John V. "New Media and News: Implications for the Future of Journalism." New Media & Society 1.1 (1999): 54-59. Print.

Poster, Mark. “Underdetermination.” New Media & Society 1.1 (1999): 12-17. Print.

Rakow, Lana F. "The Public at the Table: From Public Access to Public Participation." New Media & Society 1.1 (1999): 74-82. Print.

Rhodes, Neil, and Jonathan Sawday. The Renaissance Computer: Knowledge Technology in the First Age of Print. London: Routledge, 2000. eBook Academic Collection (EBSCOhost). Web. 28 Apr. 2015.

Rice, Ronald E. "Artifacts and Paradoxes in New Media." New Media & Society 1.1 (1999): 24-32. Print.

Robins, Kevin. “New Media and Knowledge.” New Media & Society 1.1 (1999): 18-24. Print.

Silverstone, Roger. “What’s New About New Media.” New Media & Society 1.1 (1999): 10–12. Print.

Zhang, Jiajie, and Vimla L. Patel. "Distributed Cognition, Representation, and Affordance." Pragmatics & Cognition 14.2 (2006): 333–341. Print.

“Take A Peek Inside A smart watch. Is LG’s G Watch Worth $229?” Finance Twitter. N.p., 10 July 2014. Web.

Thoughts for Final Project: Web Interface Design (Cross-Cultural?)

I had a hard time synthesizing all the concepts and theories we came across throughout the semester, and I am still trying to figure out the exact topic I want to explore for my final project. But I think the problem-solving way of thinking is very useful. I asked myself: what do I really want to know? Many ideas came to me, and one of them is something I have been thinking about for a long time: cross-cultural web interface design. New questions and new perspectives emerged when I thought about it through our class theories. I don't have an answer or an argument for this topic yet, but I think it is good to start with questions and observations.

In last semester's class (Semiotics and Cognitive Technology), we had a session on interfaces and design principles. The article written by Murray impressed me. It discusses the cultural element of the digital medium: "I argue for the advantage of thinking of digital artifices as parts of a single new medium, which is best understood specifically as the digital medium, the medium that is created by exploiting the representational power of the computer" (Murray, 2012). Although web interface design is relatively new compared to the design of other media (books, videos, or even a pencil), it does not only serve technical and practical functions; it is also culturally, socially, and commercially oriented. Here are some bullet points I have for thinking through web design:

  • It is an INTERACTION between machines and humans.
  • It is NOT just for serving certain practical functions (providing information); a site should also:
    better show the business's orientation;
    be inviting, making people feel comfortable.

Questions:

  1. Why should designers be aware of cultural differences when designing web interfaces? (Users form an impression of the business when they see the website, and every one of us is a cultural, symbolic species.)
  2. How can a site be inviting and show its business orientation across cultures?
  3. To what degree, in reality, are designers aware of cultural differences? (Are the designers locals?)
  4. How do users/interactors make sense of the interface? We are a symbolic species, and we make sense of "symbols" (images, language, icons, or videos) intuitively and simultaneously.
  5. Can other cross-cultural research contribute to cross-cultural web interface design? Cross-cultural communication has been studied for many years (longer than cross-cultural interface design).

I have done some literature review of mainstream cross-cultural web interface design research. Before laying out the research, I'd like to show some examples:

1. U.S. version of Pizza Hut Website vs. Chinese version

[Screenshots: the U.S. and Chinese versions of the Pizza Hut website]

I found it very interesting that the design of the website corresponds with the brand's different business orientations in the two countries. Pizza Hut in China is framed as a place with a nice environment, a good place for family or friends to gather, and it is not just about pizza, whereas in the U.S. Pizza Hut is positioned more like fast food.

2. McDonald's

[Screenshots: the U.S. and Chinese versions of the McDonald's website]

Yes, Chinese people stop thinking about being healthy the minute they step into a McDonald's.

3. KFC

[Screenshots: the U.S. and Chinese versions of the KFC website]

The difference in KFC's sites is almost striking to me. The navigation bar at the top of the Chinese version has: Balanced Diet; Ordering Online; App Download; Company Responsibility; Join Us; News Center; Everyday Exercising; Children's Land; Franchise; Contact Us. The bar at the top of the U.S. site has items only about food.

There is a lot of research on web interface design. Beyond fundamental principles of usability, several characteristics matter for cross-cultural design: language translation; layout (banners, menu items, orientation…); symbols (icons, navigation elements); content; multimedia; and color (color semiotics varies across cultures and shapes consumer expectations).

Geert Hofstede has long conducted cross-cultural communication research, and he identified five cultural dimensions that have been applied to cross-cultural web interface design:

1. Power distance (more autocratic societies vs. more democratic ones)

2. Collectivism vs. individualism

3. Femininity vs. masculinity (masculine cultures are competitive, assertive, and materialistic; feminine cultures place more value on relationships and quality of life)

4. Uncertainty avoidance (high uncertainty-avoidance cultures are more emotional and control change with rules, laws, and regulations; low uncertainty-avoidance cultures are more pragmatic and have as few rules as possible)

5. Long-term vs. short-term orientation (long-term-oriented societies are oriented to the future and are pragmatic, rewarding persistence and saving; short-term-oriented societies are oriented to the present and the past, rewarding reciprocity in social relations and the fulfillment of social obligations)

I don't know whether this localization happens only in the food business. For example, the Apple Inc. website in the U.S. looks exactly the same as the Apple websites in other countries (with only language differences).

Democratization of High Arts: Online and Offline

I am not a "high art" person, and the only art knowledge I have was gained from accompanying many of my friends to various museums, places nowadays better known as "tourist resorts."

Both Benjamin and Malraux cared about high art and the public. In other words, they asked the question: what happens when "new" technologies allow high art to reach ordinary people? Benjamin uses the term "exhibition value": "with the emancipation of specific artistic practices from the service of ritual, the opportunities for exhibiting their products increase." Benjamin's intention was not to exclude film and photography from the category of art; rather, by recognizing the possibilities that new technologies bring to painting, he raises a question about the relation of the masses to art: "while efforts have been made to present paintings to the masses in galleries and salons, this mode of reception gives the masses no means of organizing and regulating their response."

Now the question is not so much "Can paintings be appreciated in crowded museums and galleries?" as "Are museums themselves mediations of paintings (of art)?" So when we talk about digital museums, we are actually talking about mediations (digital museums) of mediations (museums). The configuration of a museum is set up to serve certain purposes. When you stand in front of a painting in a museum, there are people around you and other paintings around you. The environment of the museum (sound, temperature, light, etc.) is something Benjamin might call "aura." The painting you see might have been moved from another museum or another place, but the real-time experience you gain when staring at it is unique. Now the question is: do ordinary people really care about such uniqueness? Famous museums are crowded with people from all over the world. Museums appear in travel guidebooks, depicted as places where you can get to know the local culture and see masterpieces. Everybody who visits the Louvre takes pictures of the Mona Lisa. People feel excited and mark their museum experience as meaningful.

The Google Art Project tries to mediate the "offline" museum experience online. It has 360-degree presentations, HD photos of artworks from hundreds of museums, functions that enable people to present artworks collectively, and many other features. I think it is a great learning tool, especially for laypeople who have no formal art education or prior knowledge of the artworks. The discussion of the realness of artworks in Google Art is useful, but the ultimate intention of this project is not to restore realness (aura) but to provide the works as resources to the public. I understand the concern about whether the Google Art Project will distort or devalue artworks. For the purpose of democratizing art education, however, its "fictional exhibition function" outweighs any potential distortion or devaluation of artworks, just as real museums give access to the general public.

Democratizing "high art" is a joint effort of both online and offline museums. "High art" does not have to be mysterious and "hard to reach." What Google Art does is start a new way of learning. It is fun to have the Pinterest-style interface, and it is fun to use the 360-degree view to see a real museum online. More importantly, "being fun" ultimately matters to the public.

Interfaces: What Makes Interactions Possible

It is surprising to see that "new media" functions and concepts, such as interactive interfaces and hyperlinks, were proposed long before we might assume.

Manovich's definition of new media is that new media can not only simulate old media but also remains open to future design possibilities created by both technical and non-technical people. So new media can be understood as a platform of platforms: dynamic and creative. Manovich discusses several historical conceptual models to explain what makes "new media" possible, such as the Dynabook and hypertext.

The overlap between Manovich's and Alan Kay's ideas lies in their use of the term "interface." Most of the time when we say "interface," we refer to the surface, the visible design principles. Manovich and Kay, however, use the term to mean the underlying principles that make technological designs possible. As Alan Kay says in the video, "If you want people to go along with you, you have to involve with the same conspiracy. And user interface is a conspiracy that I hope people get interested in." Kay's words "the music is not the piano" also reveal his idea that looking only at hardware and software cannot fully explain technology's ability to augment human intelligence; what matters is how hardware and software open possibilities for people to use and create new functions. There are many examples in our current age of how technology opens up possibilities for new functions: many apps in the iTunes Store are developed by individuals instead of institutions, and there is the environment mentioned by Kay, Etoys, where children can program on their own.

  • Describe the conceptual, technical, and design steps that enabled computers and computation to be used for information access and processing with any kind of medium by ordinary, nontechnical users.

This topic in the syllabus and the concept of hypertext remind me of an experience I had in a used bookstore. We use the concept of hypertext every day: looking for the information we need through Google search or simply with "Ctrl+F." The conceptual model of hypertext is revolutionary: it allows us to access information in a non-linear order and augments our ability to process information. My experience in that bookstore was this: because the store trades used books with customers, a customer can exchange books for store credit. The way they keep records of customers' credit is by writing on paper cards. On the day I went to the store with my friend, they could not find the card that recorded my friend's credit. The woman spread all the cards on the table and tried to find it by my friend's last name. Unfortunately, the cards were out of order, not alphabetical. Watching their way of searching for information, I saw the importance of using a digital spreadsheet. The conceptual model of Excel (or any other spreadsheet) is to enable people to record correlated information; in this case, the credit information correlates with each customer. There are also not many technical barriers for ordinary, nontechnical people to use Excel (although some functions require more programming). A small sketch of this contrast appears below.
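To make that contrast concrete, here is a minimal Python sketch; the customer names and credit amounts are hypothetical, invented purely for illustration. It contrasts flipping through an unsorted pile of paper cards (a linear search) with the spreadsheet-like model in which each customer's credit is keyed to a name.

    # A minimal sketch (hypothetical names and credit amounts) contrasting the
    # bookstore's unsorted pile of paper cards with a keyed digital record.
    import random

    # The paper cards: an unsorted pile of (last_name, credit) records.
    paper_cards = [("Garcia", 12.50), ("Chen", 4.00), ("Smith", 21.75), ("Okafor", 8.25)]
    random.shuffle(paper_cards)  # the pile is in no particular order

    def find_credit_on_cards(cards, last_name):
        """Linear search: flip through every card until the name matches."""
        for name, credit in cards:
            if name == last_name:
                return credit
        return None  # the card may simply be lost

    # The spreadsheet model: each customer keyed to the correlated credit record.
    credit_sheet = {name: credit for name, credit in paper_cards}

    print(find_credit_on_cards(paper_cards, "Chen"))  # scans the whole pile
    print(credit_sheet.get("Chen"))                   # direct keyed lookup

The point is not the few lines of code but the conceptual model: once the records are keyed, "finding the card" no longer depends on the physical order of the pile.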

Taking one step back: our computers, phones, tablets, and other devices carry a great deal of software like this. The users of those technologies are mostly nontechnical people, and accessible interfaces make it possible for them to interact with machines and with other users. However, problems still exist. Alan Kay said that for all media the original intent was symmetric authoring and consuming: "Apple with the iPad and iPhone goes even further and does not allow children to download an Etoy made by another child somewhere in the world. This could not be farther from the original intentions of the entire ARPA-IPTO/PARC community in the '60s and '70s." From my perspective, copyright problems are smothering creativity. One of the most prominent examples I can think of is "patent trolls," companies that make money by suing technology start-ups; they can destroy a small start-up overnight. When I read this week's articles, I could not help contrasting the ideal image of "new media" and its great potential (this optimistic feeling mostly comes from Manovich) with the reality (I hope I am not being too pessimistic in thinking this way). Optimistic or not, re-centering the idea of the interface as what makes interaction possible is useful. Hypertext and the Dynabook are conceptual models proposed long before the invention of the physical technologies that implement them. And remember, those concepts were not new but highly intuitive.

Computational Thinking Versus Doing Research

This week's materials focus on computational thinking and why it benefits us, so I kept asking myself: why do we need computational thinking? As for me, I have had a hard time figuring out how to ask research questions and how to write a high-quality research paper. I think there is something about computational thinking that parallels the research process.

Both Daniel Hillis and Jeannette Wing mention that computational thinking is about hierarchy, thinking at multiple levels of abstraction. I think the same is true for boiling down a research question. Research questions are not answered in one step; usually we need to boil the major question down into smaller, answerable questions and link them logically to answer the major one. Today Professor Tinkcom talked about how to do academic writing for different audiences (as a guest lecture, new content this year; I was there as a TA). He said that academic writing is about problem solving and that it always starts with questions you genuinely want to know the answers to. He also discussed how to work on each part of your writing and how to finally put the parts together. I think this process of breaking down and then synthesizing is just like working at multiple levels of abstraction in computational thinking. For example, when programming with Python, suppose you want to find out how many positive, neutral, and negative words there are in an article. As Professor Irvine said, computation always starts with functions, not with physical technologies. So now the task is "find the words"; it is something conceptual we want to do. Next we ask ourselves: what counts as positive, neutral, or negative? There are many resources we can refer to: online sentiment dictionaries built by computer scientists that categorize English words as positive, neutral, or negative. Then we might think that judging a word as positive, neutral, or negative is subjective and varies with context, so how can we use a static sentiment dictionary? Well, we can adapt the dictionary. Then we run a Python program to compare each word in the article with the dictionary and get the percentage of positive, neutral, and negative words (a minimal sketch of this step follows below). Also, when we write Python scripts, there are many symbols we need to define as well.
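Here is a minimal Python sketch of that word-counting step, written only as an illustration: the tiny sentiment dictionary below is hypothetical, standing in for a published lexicon that a real project would load and adapt to its context.

    # A minimal sketch of the sentiment word-counting step described above.
    # The tiny dictionary is hypothetical; a real project would load a published
    # lexicon and adapt it to its context.
    import re
    from collections import Counter

    sentiment_dictionary = {
        "good": "positive", "great": "positive", "love": "positive",
        "bad": "negative", "terrible": "negative", "hate": "negative",
        # any word not listed is treated as neutral
    }

    def sentiment_percentages(text):
        """Compare each word in the text with the dictionary and return
        the percentage of positive, neutral, and negative words."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(sentiment_dictionary.get(w, "neutral") for w in words)
        total = sum(counts.values()) or 1
        return {label: 100 * counts[label] / total
                for label in ("positive", "neutral", "negative")}

    article = "I love this great watch, but the battery life is terrible."
    print(sentiment_percentages(article))

Each piece (splitting the text into words, looking each word up, computing percentages) is one of the small, answerable sub-questions that together answer the larger one.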

The thinking process I described above is actually not specific to computation; this kind of logical problem solving exists in many areas of our lives. What I learned most from reflecting on computational thinking is how to logically connect each part of my research to better answer the major question I raise. Sometimes I read research articles that jump to conclusions or whose conclusions do not answer the major question (I think I make the same mistakes too).

Another thing I find interesting about computational thinking is that not all programs and algorithms deal with grand questions; most of the time they solve questions that sound minor and unimportant. For example, we want to use a program to do arithmetic. We can all do arithmetic, but computers can do it faster, over and over again. Computers are not doing anything of very high complexity on their own; it is humans who boil a question down into many sub-questions and use the computer's automaticity to tackle the problem.

Playing with Thoughts and Theories (Technologies, Functions of Technologies, and Humans In Terms of Books and Reading)

While reading this week's materials about distributed technology, I happened to see an article on Slate arguing that "e-reading isn't reading." Ironically, the article itself required me to read it digitally. I am very curious about how the ideas and arguments in this article connect to what we read for class on distributed agency and distributed cognition.

What intrigues me about that article is its title, "e-reading isn't reading." Instead of denying a technology ("e-books aren't books"), it denies a function. It touches the core question we are working on in this class. We agree that technologies evolve but that the general functions they carry out remain the same. Then why does this article argue that the function of reading is not implemented by e-reading? Interestingly, it approaches the question from the perspective of human-technology interaction, which is also the subject we are addressing this week. I also think the views in this article represent a typical way of thinking about human-technology interaction. Here are its primary arguments:

  • Reading while touching real books is real reading, because touch fulfills part of a book's function; a book's function lies not only in its text but also in real material interaction.
  • Real mechanical pressure (presses or squeezes), as opposed to touching a screen: "Swiping has the effect of making everything on the page cognitively lighter, less resistant. After all, the rhythmic swiping of the hand has been one of the most common methods of facilitating 'speed-reading.'"
  • Many other examples the author uses to argue that physical interaction with books leads to deep reading.

So I want to compare the way of thinking in this article with Actor-Network Theory (ANT). From ANT's perspective, I could argue:

  • It is hard to say whether the function of books (i.e., reading) is diminished or augmented; it is both. It is true that interacting with physical paper feels different from interacting with digital screens, but we also cannot ignore the ubiquitous reading that digitization makes possible. The function of reading still exists in whatever form we implement it. Amazon's new way of publishing (self-publishing) creates more efficient ways of communicating: it makes publishing easy, keeps costs low, and distributes work globally.
  • The delegation of human symbolic interaction (reading and writing) to e-books is more complicated than its delegation to paperback books. "Actions emerge out of complicated constellations that are made of a hybrid mix of agencies like people, machines, and programs and that are embedded in coherent frames of action. The analysis of these hybrid constellations is better done with a gradual concept of distributed agency than with the dual concept of human action and machine's operation" (Werner Rammert, 2008). So when analyzing interaction with either e-books or paperback books, we should see the complexity of the invisible forces behind visible technologies. Humans' wish for portable books and ubiquitous reading has existed for hundreds of years; it is current digital technology that physically carries out this long-standing wish.
  • The e-book industry motivates its own development, especially Amazon's Kindle. By keeping down the price of the physical reading device, the Kindle, Amazon profits greatly every year from selling virtual copies of books. The production and distribution costs of virtual copies are very low compared to hardcopy books. I used to work in the book publishing industry in China, where physical paper is the primary cost of book production (I think in the U.S. rights-related fees are much higher). Amazon's significant profits grow the e-book market.
  • Policies: it is counterintuitive to buy an 11-dollar e-book without really owning it. Amazon e-books are prohibited from being shared, and this is enforced by technical means. You simply cannot share them!
  • Culturally, in both the business world and education, we have been digitizing the act of reading (we use websites as course syllabi, and professors distribute readings as PDFs). We tend to think about e-reading only in the form of reading "books," but we also devote much time to reading Twitter, Facebook, blogs, and so on. The habit, preference, and tendency toward e-reading is built not only by book forms (book technologies) but also, slowly, by other reading and writing devices.

It is very interesting to see how much we can unpack from the single concept of e-reading. Each of the arguments above is really just a beginning; there is much more to explore in each of them, because multiple invisible forces contribute to the current visible models. We (by "we" I mean scholars) cannot narrow ourselves to comparing or analyzing visible forms, as the article I mentioned at the beginning does.

 

What's "New" about New Media? (Systems Thinking/Approaches)

What's "new" about new media? This question has haunted me since I started my undergraduate study of journalism in China. Yes, we read textbooks like the one written by McQuail, and I still clearly remember that the first class of my freshman year explained each medium in turn (telegraph, book, magazine, newspaper, film, and computer). We are also so used to language and metaphors such as the impact or influence of "new" media. It is easy to take for granted that new media (the computer and the Internet) is fast and pervasive and changes everything. But the problem is that too many assumptions are embedded when we think in such a linear way. I think this week's readings did a good job of challenging the assumptions we take for granted in media studies: What is the definition of new media? What new questions can we ask once we get rid of technological determinism?

Both McLuhan and Debray proposed ideas that point toward systems thinking. As McLuhan says, "the medium is the message because it is the medium that shapes and controls the scale and forms of human association and action." The "message," in McLuhan's words, does not equal "content" in the everyday sense. McLuhan thinks that the form (materiality) of a technology's development ties closely to its social and political preconditions and consequences. What is communicated in a medium is not just the content but everything that the technology makes possible. This echoes the idea of transmission that Debray later proposed in his article: culture is transmitted through media from the past to the present, and that transmission is what makes human civilization possible.

Systems thinking (or maybe we can call it "complexity theory") enables us to question the cliché that the Internet and the computer are the most radical progress we have made in human history. This is the distinction Professor Irvine makes between medium and mediation. Medium/media refers to the socio-technical implementations and interfaces of sign systems for communication, whereas mediation refers to the functions of media (e.g., the same functions served by paperback books and by books on digital screens). So what is new about new media is not the technology itself but how it converges different functions of human symbolic interaction.

I like how Manovich summarizes the difference between old and new media. He notes that, in the popular understanding, new media affects distribution, presentation, and production, while an old medium could affect only one dimension (printing, for example, affected only how information is distributed). In his book Software Takes Command, Manovich also discusses how software works together to converge all media functions. His idea reminds me that, in the past, news media and some social media used to have only websites instead of apps, and in the past few years the tendency has become: every medium has to have its own app! It is very interesting to think about the consequences, and maybe the gaps, in the way multiple apps converge media functions. Some new mobile apps try to build platforms to manage other apps. In the "old" media form, we read books and take notes at the same time, but on a computer or mobile device, reading and writing at the same time becomes "harder" for us to do. By "harder" I mean we need to jump between different "interfaces." Of course, we can tile multiple pages on our computer screen and look at them at the same time. But I keep wondering: is there any other way to make this more "convenient" and "natural" for us? I don't know; maybe make the computer screen NOT work in a tiled fashion?

 

"Semiotic School" of Communication and How It Leads Me to Rethink the Google Gallery

While reading through each of the readings for this week, several questions stayed in my mind and kept me from losing focus: What is in the communication model that Shannon and Weaver proposed? Many scholars afterwards argued against that model for being too linear and for leaving out semantics. So what does it mean to bring semantics into communication models and information theories? How can we apply this realization to the media and information technologies around us? What are the implications?

I know that is a lot of questions, but they indeed helped me keep a clear mind while going through so many different theories. So first in this blog post, I want to summarize and clarify several concepts that are important to me and show how those crucial concepts led me to re-examine one technology: the Google Gallery.

Before going into the theoretical part, I want to share my conversation with Siri, which I think is quite relevant to this week's topic.

 

[Screenshots: a conversation with Siri]

 

According to John Fiske (cited in Steven Maras's article), there are two main schools of approach to communication:

(1) process school

(2) semiotic school

People in the process school care more about how senders and receivers encode and decode, and about how to use channels and media to convey accurate information, whereas the semiotic school treats communication as the production and exchange of meaning. This is a good starting classification for the ideas we encountered this week. However, the boundary between the two schools is not clear-cut: the process school also cares about meaning production in the communication process, just in a different way from the semiotic school.

 

1. Process School: Floridi and Gleick (and of course the early information theorists)

Even though the "semantics" part of Luciano Floridi's book Information: A Very Short Introduction is not convincing (p. 34), the way he talks about the fourth revolution, the information age, is very interesting and helpful as background knowledge. He describes the information age as an environment that is friendlier to informational creatures: "We are witnessing an epochal, unprecedented migration of humanity from its ordinary habitat to the infosphere itself, not least because the latter is absorbing the former. As a result, humans will be inforgs among other (possibly artificial) inforgs and agents operating in an environment that is friendlier to informational creatures" (Floridi, 2010). I think his systems view, as opposed to technological determinism, echoes what Professor Irvine said in the video: "When we study technologies, we are not studying properties of machines. We are studying extensions and implementations of core human capabilities and collective symbolic thought. Our ability to exploit the material dimensions of symbols and multiple techniques of encoding and necessity of technical mediation, meanings, and intentions in symbolic form."

What is interesting is that scholars in this school do not deny the existence and importance of "meaning" in communication; rather, in my opinion, they understand "meaning" as something fixed. Meaning, however, is actually a prerequisite of communication. It underlies communication as a system, a network, and the communication artifacts we have are interfaces to that meaning system. The "interfaces" we talk about so frequently in class are similar to Stuart Hall's concept of the "phenomenal form." So next I want to talk about the semiotic school of communication.

 

2. Semiotic School: Carey, Schramm, and Hall

Hall proposes the concept of the "phenomenal form." He considers meaning the power of communication: "If no 'meaning' is taken, there can be no 'consumption'. If the meaning is not articulated in practice, it has no effect." Communication is always in practice, but meanings are not. Encoding and decoding are not fixed stages in the communication process but "determinate moments": "the apparatus and structures of production issue at a certain point, in the form of a symbolic vehicle constituted within the rules of 'language'." Moreover, Hall does not treat "distortions" within the communication process as mere risks, in contrast to Schramm's idea of building up the "same pictures in our heads." Receivers, according to Hall, are not obliged to accept what is encoded.

For me, Stuart Hall's theory is more convincing, or I could say it helped me think through real-world examples better. With his ideas in mind, I think about the Google Gallery project again. What is communicated in the gallery? Is conveying the most "accurate message" the Google Gallery's ultimate goal? Obviously not. The gallery collects artworks from physical galleries all around the world, with the help of the Internet, good interfaces (maybe you think they are not good enough), the ability to zoom in and out, and so on. It would be absurd to think that the Google Gallery is just trying to convey the most accurate message. What is the message here? The painting? The environment around the painting in the gallery? I think that is a minor question. The major question is: what meaning is transmitted through the Google Gallery that would be impossible without it? How does it augment our ability to make meaning?

I don't know all the answers to those questions, but the exercise convinced me of one thing about communication: the accuracy of communication is not as important as how communication reconfigures and reconstructs our network of meaning.

“Mona Lego”

Lots of important concepts came up this week. It is interesting to see how we extend terms from literature, linguistics, and semiotics to a broader sense: intertextuality can be extended to the intermedial, and dialogism is not only for meaning making in one specific area.

I took the CCT Remix class before, and we spent the first few weeks struggling with "what is remix and what is not?" I was surprised that Professor Irvine's article is in Navas's book, because the main theories we drew on in that Remix class were Navas's. Navas tries to draw a line between "surface remix" and "deep remix" (which is what Professor Irvine calls Remix+). Since Navas's specialty is music remix and mashups, he writes articles on the classification of music remixes, mostly based on surface remix. Therefore, what we emphasized in that class was surface remix, that is, borrowing materials and remixing them. It was indeed an interesting process, since we can use remix as a tool, a form, to benefit us (for instance, for educational purposes). However, remix is not only a form. Surface remix is a way to help us think, and using it as a tool is not the ultimate purpose. I think one quote from the Remix+ article is very thought-provoking: "Turning to interpretive remix genres based on sampling, quotation, and encyclopedic cross-referencing in contemporary musical forms, we find that the technical means for combinatoriality can be used to disclose the underlying recursive, generative, dialogic processes of the expressive forms."

Also, I found one page of the PPT very useful (The dialogic principle and the cultural encyclopedia: interpreting pop and appropriation art), page 22. It gives me a clear, step-by-step direction for analyzing how an artwork "makes sense" to us. Any artwork is composed of segments, so we can start analyzing it from its components. But at the same time, the work as a whole conveys meaning that is more than the mathematical sum of its segments. In this post the artwork I chose is the LEGO Mona Lisa. I will first analyze its surface and composition, and then I will focus on how it relates to other artworks and how it is embedded in the whole cultural encyclopedia.

The independent "brick builder" Eric Harshbarger has been building professional LEGO creations since 1999. He used to focus only on 2D mosaics and murals built out of LEGO bricks, and recently he has created a few 3D sculptures. I think the LEGO Mona Lisa, or "Mona Lego" as he calls it, is quite interesting (also because I don't know much about art and gallery paintings, and the Mona Lisa is among the few things I do know).

First he did two mosaics in "studs-out" fashion (Figures 1 and 2). "Studs-out" means the bumps on the bricks face out toward the viewer. The bigger one (Figure 1) is about 6 feet wide by 8 feet tall and required over 30,000 pieces. However, it uses only the six basic LEGO colors: black, blue, green, red, white, and yellow. According to Eric, very little glue was used: only the "hanger" pieces along the top were glued, as well as any pieces that spanned the "seams" of the 33 baseplates underneath.
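As a rough sanity check on that piece count (my own back-of-the-envelope arithmetic, not Harshbarger's figures, assuming the standard 8 mm LEGO stud pitch and an average brick covering about two stud positions), the numbers are plausible:

    # Back-of-the-envelope check of the "over 30,000 pieces" figure.
    # Assumptions (mine, not Harshbarger's): 8 mm stud pitch, and an
    # average brick covering roughly two stud positions in the mosaic.
    MM_PER_FOOT = 304.8
    STUD_PITCH_MM = 8.0

    width_studs = 6 * MM_PER_FOOT / STUD_PITCH_MM    # about 229 studs wide
    height_studs = 8 * MM_PER_FOOT / STUD_PITCH_MM   # about 305 studs tall
    total_studs = width_studs * height_studs         # about 69,700 stud positions

    avg_studs_per_brick = 2.0
    estimated_pieces = total_studs / avg_studs_per_brick

    print(f"{total_studs:,.0f} studs, roughly {estimated_pieces:,.0f} bricks")
    # Prints about 69,677 studs and 34,839 bricks, consistent with "over 30,000 pieces".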

[Image: the larger studs-out Mona Lego mosaic]

Figure 1

[Image: the smaller studs-out Mona Lego mosaic]

Figure 2

Now he is working on a "studs-up" Mona Lego (Figures 3 and 4): the bricks are stacked atop one another normally, and the viewer looks at the sides of the bricks.

[Image: the studs-up Mona Lego in progress]

Figure 3

[Image: another view of the studs-up Mona Lego]

Figure 4

Then it is interesting to think about how we interpret an artwork like this. There are many parody versions of the Mona Lisa on the web: Mona Lisa with makeup, zombie Mona Lisa, Mona Lisa in the galleries, and so on. However, I think this type of art is "higher" (maybe there's a better word) than mere parody: it requires creativity, aesthetic value, and special techniques.

I want to relate Mona Lego to Andy Warhol's work. (I forget where this argument comes from, but probably Professor Irvine.) Warhol's work de-aestheticizes and re-aestheticizes previous art, and I think the same story applies here: Harshbarger re-aestheticizes the original Mona Lisa through the aesthetics of LEGO (LEGO colors and LEGO materials). When we look at the Mona Legos, they definitely remind us of the original painting: how beautiful it is, how famous it is, how important it is to the art world, or maybe how expensive and mysterious it is, too. All those ideas are called up by looking at Mona Lego, even though it is not the original! But this creativity also sparks new ideas: how did the author do this? LEGO is not only for kids! It opens up a new "genre" of art. He does not just reuse materials from the past (Eric doesn't go to the Louvre and cut up the original painting, or print out a photograph of it, which would be closer to "surface remix"). He uses new materials to link past ideas to the future. He constructs a link that demonstrates that meaning is dialogic, carried forward into the future, toward more re-interpretations.

Of course, there is "surface remix" going on in his Mona Lego works as well. But I think this work can help us think through how "surface remix" and "Remix+" work together to make art generative and dialogic.

Jackendoff and His People

This week I re-read Jackendoff's parallel architecture piece. I have to admit that there are still parts of it I cannot understand, but reading it one more time resolved questions I had from last semester. Jackendoff and his students also work outside the linguistic field, on comics and music, and I found the research on comics very interesting. It is rewarding to read both Jackendoff's original research on the parallel architecture in linguistics and the research that builds on it.

In his article, Jackendoff proposes one basic and important point about human language competence: f-knowledge. He thinks that f-knowledge of language requires two components: (1) a finite list of structural elements, such as a lexicon; and (2) a finite set of combinatorial principles, such as a grammar. As opposed to generative grammar, which treats phonology and semantics as secondary components of language subordinate to syntax, Jackendoff argues that meaning is not secondary and that the generativity of language does not come entirely from the generativity of grammar and syntax. I found a very good quote from his article where he explains what "meaning" is: "It is the locus for the understanding of linguistic utterances in context, incorporating pragmatic considerations and 'world knowledge'… it is the cognitive structure in terms of which reasoning and planning take place. That is, the hypothesized level of conceptual structure is intended as a theoretical counterpart of what common sense calls 'meaning'."
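To make that two-component idea concrete, here is a toy sketch (my own illustration in Python, not Jackendoff's formalism): a small finite lexicon plus a couple of finite combination rules already generate a combinatorially large set of sentences.

    import itertools

    # Toy illustration of "finite structural elements + finite combinatorial
    # principles". This is not Jackendoff's parallel architecture, just a
    # minimal phrase generator built from a tiny lexicon and two rules.
    lexicon = {
        "Det": ["the", "a"],
        "N": ["reader", "comic", "watch"],
        "V": ["interprets", "remixes"],
    }

    def noun_phrases():
        # Rule 1: NP -> Det N
        for det, n in itertools.product(lexicon["Det"], lexicon["N"]):
            yield f"{det} {n}"

    def sentences():
        # Rule 2: S -> NP V NP
        nps = list(noun_phrases())
        for subj, v, obj in itertools.product(nps, lexicon["V"], nps):
            yield f"{subj} {v} {obj}."

    all_sentences = list(sentences())
    print(len(all_sentences))   # 6 NPs x 2 Vs x 6 NPs = 72 sentences
    print(all_sentences[0])     # "the reader interprets the reader."

Jackendoff's point, of course, is that syntax like this is not where all the generativity lives: each generated string would also need its own phonological and conceptual (meaning) structures, built in parallel and linked by interfaces.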

Here I want to talk about one more interesting piece of research that his student Neil Cohn did on sequential image comprehension. I think it is a very good illustration of how semantics and syntax work independently but correlate with each other. In his article The Limits of Time and Transitions: Challenges to Theories of Sequential Image Comprehension, Cohn discusses how juxtaposing two images often produces the illusory sense of time passing, as found in the visual language of modern comic books. He found that any linear panel-to-panel analysis, or loosely defined principles of connection between sequential images, is inadequate to explain people's understanding: "Sequential image comprehension must be thought of as the union of conceptual information that is grouped via unconscious hierarchic structures in the mind."

It is still hard to understand his research entirely, but I got several interesting takeaways from it. One of them is to see how he breaks down the transitions between comic panels (a minimal sketch of one way to represent these transition types follows the list):

(1) moment-to-moment— between small increments of time

(2) action-to-action— between full ranges of actions

(3) subject-to-subject— between characters or objects in a scene

(4) aspect-to-aspect— between aspects of a scene or an environment

(5) scene-to-scene— between different scenes

(6) non-sequitur— no apparent meaningful relation between panels
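Here is the minimal sketch promised above (my own Python encoding, not Cohn's analysis): the six transition types as a simple enumeration, with a hypothetical four-panel strip annotated as a flat list of panel-to-panel transitions, which is exactly the kind of linear analysis Cohn argues is insufficient on its own.

    from enum import Enum

    # The six transition types listed above, as a simple enumeration.
    # This is my own toy encoding, not Cohn's formal apparatus.
    class Transition(Enum):
        MOMENT_TO_MOMENT = 1
        ACTION_TO_ACTION = 2
        SUBJECT_TO_SUBJECT = 3
        ASPECT_TO_ASPECT = 4
        SCENE_TO_SCENE = 5
        NON_SEQUITUR = 6

    # Hypothetical four-panel strip: a batter swings, hits the ball,
    # we cut to the pitcher reacting, then to the empty stadium at night.
    panels = ["batter swings", "batter hits the ball",
              "pitcher reacts", "empty stadium at night"]
    transitions = [
        Transition.ACTION_TO_ACTION,    # swing -> hit (full actions)
        Transition.SUBJECT_TO_SUBJECT,  # batter -> pitcher (same scene)
        Transition.SCENE_TO_SCENE,      # game -> empty stadium later
    ]

    for (a, b), t in zip(zip(panels, panels[1:]), transitions):
        print(f"{a} -> {b}: {t.name}")
    # A flat annotation like this says nothing about how the panels group
    # hierarchically, which is the grouping Cohn argues comprehension requires.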

If you want to learn more about his research, here is his website (http://www.visuallanguagelab.com/vitae.html); it is a good way to understand Jackendoff.