Category Archives: Week 9

Metamedium: a great idea yet to be fully implemented

Manovich's (2013) historical approach to explaining the development of digital media as a metamedium is inspiring. He shows convincingly that, even if it is possible to see a combinatorial process at play when comparing old media and computing devices, there is nothing inevitable or deterministic about this development. In fact, such devices were built, constrained not only by people but also by market dynamics. On the one hand, the new properties that emerged with the metamedium had to be "imagined, implemented, tested, and refined" (p. 97). On the other hand, industry interests and decisions also influence the kinds of devices that the broad population will be able to experience. As Manovich (2013) affirms, "the invention of new mediums for its own sake is not something which anybody is likely to pursue, or get funded" (p. 84). It does not go unnoticed that although researchers such as Alan Kay and Adele Goldberg, being themselves programmers, imagined a metamedium that would allow computer users not only to consume existing media but also to produce new ones, the industry has not invested in these attributes as mainstream characteristics of its devices, neither in 1984, when the first Macintosh was launched, nor in 2010, when Apple's iPad impressed the market.

The concept of a metamedium announces that it not only simulates old media but also has unprecedented functions. One can write using computers, as one used to do on paper, but the "view control" (Manovich, 2013) is totally different, since one can change the fonts, cut and paste, or change the structure of the text presented, to name a few possibilities. It is true that, as the author perceptively shows, some affordances conceived decades ago are not fully developed yet, such as Doug Engelbart's spatial features for structuring the visualization of text. Even so, the capacity for organizing text using computing devices is unprecedented.

Computing devices are also interactive. The possibilities they open up to support problem-solving situations go far beyond previous calculators (Manovich, 2013). As a metamedium, the computer brings the possibility of engaging the learner in a "two-way conversation," opening new possibilities for teaching-learning methods (Kay and Goldberg, 1977). History has shown fewer changes in education than such scholars imagined, though. Why?

Nicholas Negroponte, from the same generation as Kay and Goldberg and settled at MIT, launched the One Laptop Per Child (OLPC) project in 2005. Policymakers from developing countries received it with enthusiasm. Negroponte promised a device, with standard software included, at 100 dollars each, to change the teaching and learning process. The project was seen by many as the solution for the delay in adopting digital media in schools. Latin American countries, including Brazil, invested heavily in this project. I then conducted a study with a colleague at Columbia University to understand mobile learning in Brazil, and the results show that the OLPC failed in many distinct ways. Our focus was on the public policy aspects of the project, but from the readings I can see that the device itself was completely different from what Kay and Goldberg once imagined and also from what Negroponte made people think it would be. The device was locked down (Zittrain, The Future of the Internet and How to Stop It, 2008), with limited affordances to allow students to create new media. The screen was small (although bigger than those of other classroom devices), and the processor and memory were also limited.


The OLPC is on the left


Despite the fact that the uses imagined for such devices could create a better teaching and learning environment (and they did indeed in some classrooms where they were organically adopted), their affordances would not generate a new level of student, a metamedium student, I would say, able to create new media and new tools according to their needs and personal trajectory. And this is a huge gap in a project focused on developing countries.

Going further, as Manovich points out, from the point of view of media history, the most important characteristic of a metamedium is that it is "simultaneously a set of different media and a system for generating new media tools and new types of media" (p. 102). This refers to the capacity of a user not only to transform a text but also to create mash-ups, remixes, and machinima. The problem is that, as extensively studied by scholars such as Lawrence Lessig, Jack Balkin, and Aram Sinnreich, while the affordances are available, the limits imposed by intellectual property regulations, not only through laws but also through technological and digital rights management (DRM) tools, have restricted metamedium capacities massively. And because the industry also works at shaping the narrative about these kinds of digital practices, derogatorily naming people who engage in such activities "pirates," I believe that industries deliberately contribute to preventing the development of metamedium devices, metamedium students, and metamedium users as a whole.

“War and Peace” drivers for technological progress. — Galib


Throughout human history, we can notice that confrontations between countries and nations, or individuals struggling to survive, were the major sources of new techniques invented to fight and to grow stronger against environmental challenges or threats from enemies. Indeed, we can argue that the most significant drivers of technological progress in recent history, and in the last century in particular, were wars and confrontations between empires. The speed of the evolutionary process in all areas of science increased many times over. Discoveries in the hard sciences significantly changed the way of life of the earth's population and brought new challenges to deal with. To meet new and fast-changing realities, it became important for the new generation to learn new cognitive approaches. Thus, the technical revolution tremendously influences the whole educational process: curricula are modified on a yearly basis to reflect revelations in the sciences and to prepare future members of society to meet the challenges of a rapidly changing world. To raise new generations with the appropriate mindset, another phenomenon happened gradually in the 20th century, when scientific discoveries and innovations became more available through commercial markets and the educational process. Hence, technological progress stepped into the "Peace" stage of its existence.

I remember that when I was a schoolboy I studied MS-DOS and later Norton Commander (a file manager running on top of it) and was amazed by the features of sophisticated machines that are impossible to use in modern realities anymore.

Computers of the 1980s

Seemingly, another driver, the balancing (complementing) of new software opportunities with innovations in hardware capacities, may also explain the phenomenon of the rocket speed of technological progress. Along with the development of the hard sciences (discovering new materials, revelations at the nuclear and atomic levels, cosmic space exploration, etc.), electronic devices became possible to produce. Thus, new hardware demanded (and still demands) new opportunities to be employed, which resulted in new software programs and operating systems. On the other hand, rapidly changing realities and competition in commercial markets also demanded that more new software programs be invented to meet new requirements. Complementing each other, both sectors, science and the economy, push the whole of technological progress further. Sometimes it is hard to tell which factor is the most crucial within this process. It has already become like a "chain reaction" of cause-and-consequence relations, where it is hard to distinguish which comes first, the chicken or the egg. Also, the massive findings within different (mostly hard) sciences became possible to employ (cross-use) in multiple areas of science. Consequently, new interdisciplinary fields of science appeared, like biophysics, sociocybernetics, etc. Completing, triggering, and motivating each other, all scientific areas may be considered sources for new inventions[i]. Ecology may be mentioned as one of the brightest examples of interdisciplinary research.


Hence, we can notice a new trend in scientific progress, two tendencies that are paradoxically both contradictory and complementary to each other: specialization[ii] and integration.


Responding to technical and educational progress, new machines needed to be more user-friendly and interactive to satisfy users' requirements. Their interfaces' features should indicate their ability to provide users with certain options and serve both for archiving and for searching information[iii]. Therefore, computing machines became metamedium devices[iv] for a new level of representation of the synthesis between form and content, and gates for users to the globalizing environment.


As a reaction to scientific progress, a new social portal movement appeared as well and easily integrated with other sides of the cognitive-computational-scientific process. Now, social sites are used in all spheres of human life in an attempt to connect (or even adjust) human needs to the rapidly changing realities driven by technological progress.

[i] Engelbart, Douglas. "Augmenting Human Intellect," NMR, p. 6.

[ii] Bush, Vannevar. 1945. "As We May Think." The Atlantic, July, p. 5.

[iii] Licklider, J. C. R. 1960. "Man-Computer Symbiosis," NMR, p. 4.

[iv] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume 5. New York; London: Bloomsbury, p. 101.

Software: evolving with hardware and human beings

From the first computer, ENIAC, born in 1946, to the 21st century's personal laptops, scientists and engineers spent decades turning those gigantic machines into small boxes that most people can afford. When we talk about the development of computers, we refer to Moore's Law, mention the evolution from the vacuum tube to the integrated circuit, and concentrate on cutting-edge algorithms. However, few of us pay enough attention to software. People are accustomed to taking the development of software for granted. This week's readings lead us to consider the significance of software as a medium and its effect on human cognition.

1. Hardware and software
It is incontrovertible that software, at least software born before 2016, is based on hardware. For example, without a VR headset and a powerful graphics card, we cannot enjoy the pleasures of VR technology. Software is artificial, so we need something artificial as a carrier too, which for now is hardware.

We have to admit that hardware development inspires people to think about software with better performance and more elaborate GUIs. Of course, I don't mean that only the development of hardware can stimulate developers' imagination. On the contrary, developers always have tons of new ideas. I see players complaining every day about things like why Grand Theft Auto V leaves so few rooms for characters to enter, or why the graphics of World of Warcraft cannot be as real as Game of Thrones or The Lord of the Rings. Actually, developers can do so. They can be crazy enough to create interactive CG or a seamless map as big as the whole of America. However, the problem is whether we have hardware powerful enough to execute the instructions. For developers, existing workstations might take ten years or more to achieve this goal; for players, such a game of unprecedented scale and graphics might require them to purchase equipment worth more than one hundred thousand dollars. The development of hardware simply means lower prices, thus allowing developers' effort to be profitable. To some extent, Moore's Law not only indicates how fast the performance of hardware improves, but also predicts how software evolves.
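The doubling dynamic behind Moore's Law is easy to see with a little arithmetic. Below is a minimal Python sketch of the classic "doubling roughly every two years" idealization; the function name and the starting figure are my own illustration, not from the readings:

```python
def moore_projection(base_count, base_year, target_year, doubling_period=2.0):
    """Project a transistor count forward, assuming it doubles every
    `doubling_period` years (an idealization, not an exact law)."""
    periods = (target_year - base_year) / doubling_period
    return round(base_count * 2 ** periods)

# Intel's 4004 (1971) had roughly 2,300 transistors; ten years later,
# five doublings give a 32x projection under the idealization.
print(moore_projection(2300, 1971, 1981))  # 73600
```

Real chips never track the curve this exactly, but the exponential shape is what lets developers plan software for hardware that does not exist yet.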


The improvement of GPU performance in the iPhone

Therefore, hardware decides the ceiling of software. It also decides the ceiling of users' imagination (not developers', but maybe scientists'). It is really interesting to observe non-players' or light players' reactions to AAA games. Many of them mistake games for movies because the graphics are beyond their imagination. For myself, I could hardly have imagined the popularity of online shopping and social media ten years ago. In fact, the process is this: hardware decides what software developers can create, software encourages users to think about the possibilities, and users' reactions in turn make developers introspect and draw experience from success or failure.

Graphics evolution of Tomb Raider

2. Human beings and software
One thing that interests me most in this week's readings is that Manovich, in his book Software Takes Command, mentions that software itself has become a new medium. We interact with computers and mobile phones every day, but the computer itself cannot be a social medium without software. Actually, that is the significance of the metamedium and hybrid media.

The core statement by Alan Kay about the metamedium is that its content is "a wide range of already-existing and not-yet-invented media". So what does "not-yet-invented media" mean? Manovich answers this question by giving a clear explanation of multimedia and hybrid media. Modern software is not a simple combination of different forms of media. For example, I will never call a slide consisting of pictures and words software, because software is not only about presentation. Software should contain change, or mutation, in biological language. Therefore, I will call PowerPoint software.

One thing that confused me for a long time is the difference between a program and software. Of course a simple digital calculator is software, but I guess few of us would call a console program with the same function software. An important concept in programming is encapsulation, which means data and procedures are integrated to provide a complete function. Then programs are assembled together in a similar way to form software. Finally, it is users who interact with software. For modern users, a graphical user interface, which should be logical and legible, is an indispensable part of software. That is why we care about the user experience of software but not the programmer experience of coding. From this perspective, software is really a media concept rather than a technical or computer science concept.
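A toy example may make encapsulation concrete. In the Python sketch below (the `Calculator` class is my own hypothetical illustration, not from the readings), the data (a running total) and the procedures that act on it are bundled together, and outside code reaches the state only through public methods:

```python
class Calculator:
    """A tiny calculator: its state is kept inside and reached only via methods."""

    def __init__(self):
        self._total = 0.0  # leading underscore signals "internal, don't touch"

    def add(self, x):
        self._total += x
        return self  # returning self lets calls be chained

    def subtract(self, x):
        self._total -= x
        return self

    def result(self):
        return self._total

# Callers never manipulate _total directly; they go through the interface.
print(Calculator().add(5).subtract(2).result())  # 3.0
```

In the terms above, wrapping such encapsulated pieces in a legible graphical interface is the step that turns a program into software.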


Is it software or just a program?


The concept of encapsulation

However, a challenge faced by software designers is how to evaluate the learning ability of users. Alan Kay was a supporter of a free software environment that allows users to build their own media, but this idea contradicts mature business logic, according to which software should be stable and developers should subtract features rather than add them. For now the problem can be solved by strictly distinguishing productivity software from software for ordinary consumers, but maybe the development of software, especially the popularization of graphical programming, will finally change all this.



Visions Unfulfilled

Chen Shen

This week's reading is quite new to me even though it's part of computer history, and an amazing part at that. I feel their ideas should take a more notable position in computer education. It is no exaggeration to say that the pioneering work done by Bush, Kay, Licklider, Sutherland, Engelbart, et al. directly paved the road to how we interact with computers as well as networks. The thing that strikes me most is that, though their work was done in the primitive stage of the computer, their visions were profound, and many of them have not yet, after 40 years, been fulfilled. All this makes you wonder: had the history of computer development chosen another route, what would the computer I'm typing these words with be like, if I were still typing at all?

All of the pioneers' work aims at a similar goal: augmenting human intellect. Alan Kay wanted computers to be used for learning, discovery, and artistic creation; Licklider wanted computers to facilitate formulative thinking and help men control complex situations; Engelbart wanted computers to "increase the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems"; and so on.

After half a century's rapid development, computers' computing speed and storage capacity exceed the pioneers' imaginations: in his paper Man-Computer Symbiosis, Licklider even suggested that "we shall not store all the technical and scientific papers in computer memory" to save space and money. But today storage is no longer an issue, at least as far as personal information and knowledge are concerned.

But storage alone is no indicator of better intellect, if not worse. With the option to easily unload knowledge and information onto external cognitive devices, men tend to remember less and to justify the tenet that "knowledge is useless", since all knowledge is just one click away. This is an illusion, of course: no matter how brilliant Google is, it can only search based on the keywords one provides, limiting the possible outcomes to the scope defined by concepts one already knows.

Storage and speed alike are the material part of a human-computer system. The material part of the system seems to have outgrown the visions, but the conceptual leaps they hoped for are still beyond the horizon.

In Manovich's Software Takes Command, we see the scope of Kay's ambition. As a metamedium, the computer is nothing like the media inventions that preceded it; it's not a genre or style or format, but a new level of symbol abstraction and information processing. Along with language and writing, the computer plays the part of the third symbolic leap in human history. It has the ability to simulate any existing medium, which means it can assimilate them, putting their paradigms under its umbrella. By this ability, the computer prevails over all other media. But Kay's aspiration for the computer was not just a "universal media player": it should be used to produce not-yet-invented media, as he envisioned with the Dynabook. We clearly see this vision is not fulfilled.

But why? There are some reasons I can think of.

The first is the changing role of the computer as it became more and more common. In the visionaries' time, computers were so expensive that they would "connect to one another by wide-band communication lines and to individual users by leased-wire service", so that "the cost of the gigantic memories and the sophisticated programs would be divided by the number of users". Given this extreme scarcity, people treasured their share of time with computers, trying their best to get the most out of it. I still remember back in the 80s, when our school installed its first computer, people waited in line to try this "magic box" and explore all its possibilities. But ever since the new century, computers have become so cheap and common that their ubiquitousness has undermined their role. Now few still regard computers as incredible tools to boost personal experience; they are mundane technologies, like a car. People use computers to accomplish certain goals, just as they drive cars to get to destinations; few would forgo traveling and just drive around to explore what else a car can do. Computers are trivialized.

Another reason is similar to the first: the over-commercialization of the computer and the corresponding media market. Being a metamedium, a computer can fulfill all one's needs to enjoy existing media. With the digitalization trend across all kinds of media, the resources one can enjoy with a computer are practically limitless, rendering the need to invent "new media" moot.

The relatively high threshold of programming is another reason. To fully fulfill Kay's and Engelbart's dream, common users of computers must have basic knowledge and experience of coding, because coding is the only way to get a computer truly customized, tailored to your personal needs. But computer languages started off hardly understandable and scared off lots of users. Programming languages have a steeper learning curve than most other skills: a motor maneuver or an art skill can be performed with a tolerable compromise before you truly master it, but in coding, not a single error is tolerable. What makes this worse is the complex nature of algorithmic procedures. In a long series of instructions, when errors occur, the ultimate outcome (if there is an outcome at all) is totally unpredictable. That is to say, it's hard to trace your error and calibrate your code from a "wrong" output. Programming did get a lot easier and more natural in the past two decades, but all kinds of easy applications have already been made and optimized; as a common user of a computer, for almost any whim you have, there is off-the-shelf software you can use.
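To illustrate how unforgiving a single error can be, here is a hypothetical Python sketch (both functions are my own invention): one wrong character, `<` instead of `<=`, silently changes the result, and nothing in the output points back to the offending line:

```python
def total_correct(values):
    """Sum a list with an explicit index loop."""
    total = 0
    i = 0
    while i <= len(values) - 1:  # correct loop bound
        total += values[i]
        i += 1
    return total

def total_buggy(values):
    """Identical except for one character in the loop condition."""
    total = 0
    i = 0
    while i < len(values) - 1:  # one character off: the last element is skipped
        total += values[i]
        i += 1
    return total

print(total_correct([1, 2, 3]))  # 6
print(total_buggy([1, 2, 3]))    # 3: wrong, yet no error is ever raised
```

The buggy version runs happily to completion, which is exactly the point: a "wrong" output gives the user no trail back to the mistaken instruction.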

Another reason hindering the conceptual revolution, especially in our time, is the rise of computer substitutes like the iPad and smartphones. They can perform almost the same functions as the computer in terms of "media player", and they're cheap and fast enough to replace computers as one's personal digital assistant. But by nature, iPads and smartphones are merely tools to simulate existing media; their systems are highly closed and secluded, making them awful tools for creation. Just as the label "consumer electronics" suggests, they're meant for consuming, not computing. In recent surveys, some countries had the lowest percentage of youngsters using a computer or laptop in 30 years, due to the rising trend of using consumer electronics in lieu of computers.

All the pioneers' visions rely on the computer literacy and programming literacy of common people, by which standard people of our time are no better than those of 50 years ago. One cannot help feeling disappointed at the visionaries' outstanding conception of the computer half a century ago. But sometimes I wonder whether it was even possible. After all, "intellectual improvement" is not human nature. After the discovery of steam, and of electricity, only a small fraction of humankind tried to employ the new power to extend the possibilities of man, while the others enjoyed and consumed their innovations and inventions. History repeats.

By the way, the amazing route-planning software in Kay's video was finally realized not very long ago by the app OmniFocus. If you add activities with a location context, it will show you in a map view what you can do around your current location.


The Transformation of Computation is Still a Long Way to Go – Jieshu

The histories of computation in this week's readings are fascinating. I got a glimpse into the days when the concepts of the personal computer and the network were just starting to form in those great minds, including Alan Kay, Vannevar Bush, and Douglas Engelbart. They contributed a lot to the transformation of computation from the context of the military and government to our daily life, which in turn changed the whole world.

1. Conceptual Transformations

Although today's personal computers and the Internet are taken for granted by many of us, they were really difficult for those early pioneers to conceive. Like many other twentieth-century innovations, "general purpose" computation sprouted from military, government, and business applications. Here, I try to identify some conceptual transformations from this week's reading.

  1. In 1945, in his article "As We May Think", published in The Atlantic, Vannevar Bush emphasized the importance of the continuity of scientific records. He then envisioned a hypothetical system called the memex that would be used to store, search, trail, and retrieve information in the form of microfilm[i].
  2. In the 1960s, J. C. R. Licklider proposed a man-computer symbiosis that would enable men and computers to interact organically[ii]. He even foresaw applications like video conferencing and virtual intelligent assistants[iii].
  3. Influenced by Bush and funded by Licklider, in the 1960s, Doug Engelbart proposed a system called H-LAM/T and tried to use networks of computers to augment human intelligence, in contrast to the contemporary school of artificial intelligence that tried to replace human intelligence with computers[iv].
  4. At PARC, Alan Kay envisioned a future where computational devices were used as "personal dynamic media" that could enable everyone to handle their own "information-related needs". He wanted to turn the "universal Turing Machine into Universal Media Machine[v]."

2. The New Design that is Old

They also designed many devices and systems definitely ahead of their time, such as the Dynabook by Alan Kay, the memex and hypertext by Bush, and the mouse and computer networks by Engelbart. What amazes me is that many applications and features I consider novel stem from the brainchildren of those pioneers, and I never knew this fact before. For example, in the iPad app Sketches, you can use an Apple Pencil to draw lines that automatically align to form rectangles. You can also rotate them and change their sizes. It's very much like the Sketchpad system and light pen developed by Ivan Sutherland in 1963, although more features are available, such as colors and brushes.

3. Computer as a Metamedium

With the development of computing power, those pioneers' visions gradually came true. In his Software Takes Command, Lev Manovich said that the computer became a "metamedium" whose content was "a wide range of already-existing and not-yet-invented media[v]." This is exactly what characterizes digital media as new media.

In the 1950s and 1960s, people were not that interested in sharing information with computers, because they already had a lot of media, such as TV, photography, and print[iv]. But digital media don't merely imitate what conventional media do. Digital media enable us to create our own media. For example, the iPad app GarageBand can not only imitate real musical instruments like guitars and drums but also record any sound and use it as a new tone to play music.


The Sampler in GarageBand enables you to record any sound and use it as a new tone

4. A Long Way to Go

The two apps I mentioned are both along the path initiated decades ago by those pioneers. However, their visions haven't been completely realized. Here are some examples.

  1. Kay envisioned a system in which everyone, including children, could build their own media by programming. But recent computing devices are drifting away from this vision. For example, Apple's products are criticized for being closed and not programming-friendly. Yet these products were embraced by many consumers. In my view, it may not be a bad thing, because there are increasingly many applications available with which you can create your own media without programming, such as iMovie and GarageBand.
  2. Another example is virtual assistants such as Siri, which have much in common with OLIVER (on-line interactive vicarious expediter and responder), proposed by Oliver Selfridge and mentioned by Licklider in his The Computer as a Communication Device in 1968[vi]. OLIVER was described as being able to develop itself by learning from its experience in your service. This is exactly what artificial intelligence researchers are doing, but not doing well, today.
  3. In the "mother of all demos", Engelbart presented a graphic road map that included to-do lists and shopping lists. Even today, I can't find any application that does this.
  4. Engelbart developed a timesharing system called NLS that could be shared by hundreds of users. It surprises me how late this kind of application appeared. Even today, such applications don't fulfill Engelbart's vision. I was once a member of an online cooperative team of over 100 members. We couldn't find an appropriate application that allowed us to read and edit the same file at the same time. In 2014, the best app we could find was Youdao Cloud Notebook, but we found, frustratingly, that files were always being overwritten by other people, wasting a lot of our time. Last year, we found a better app called Quip, but it still had many problems. Not to mention how far away it was from the "knowledge navigation and collaboration tool" imagined by Engelbart.


[i] Bush, Vannevar. 1945. "As We May Think." The Atlantic, July.

[ii] Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1 (1): 4–11. doi:10.1109/THFE2.1960.4503259.

[iii] Licklider, J. C. R. 1968. “The Computer as Communication Device.”

[iv] “CHM Fellow Douglas C. Engelbart | Computer History Museum.” 2016. Accessed October 31.

[v] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume 5. New York; London: Bloomsbury.

[vi] Licklider, J. C. R. 1968. “The Computer as Communication Device.”