Category Archives: Week 9

Software as a Metamedium: Evolution is the Wrong Analogy


Why have media-centric software applications and personal computing devices developed into what they are today rather than some other way? What would the present look like had different decisions been made along the way? The full answer to these questions lies outside the scope of this post; however, we can peer into the thought processes of some of the key individuals to better understand them. From these examples, it becomes apparent that the collective direction of personal computing as a metamedium has developed from choices, not by natural evolution. I appreciated Lev Manovich’s citation of Alan Kay on this topic: “The best way to predict the future is to invent it” (Manovich, n.d.).

Alan Kay and his team at Xerox PARC helped establish this direction by making the design choice to consolidate media development applications within a unified framework. Lev Manovich describes how this combination afforded a vision “in which the computer was turned into a personal machine for display, authoring and editing content in different media,” and how, “By developing easy to use GUI-based software to create and edit familiar media types, Kay and others appear to have locked the computer into being a simulation machine for ‘old media’” (Manovich, n.d.). These decisions set the heading for where and why we use software as a metamedium today, particularly when thinking about our personal computers and mobile devices.

We have been using Apple computers and devices as our running case study for how to de-black-box technologies and open them up to see the interwoven stories of design choices and functions. The Apple 1 was one of the earliest machines to embody our modern conception of the “personal” computer. But while the Apple 1 and its successors helped shift the industry toward this design concept, that wasn’t Steve Wozniak’s original thought process. In an interview with NPR, he commented on his mindset:

When I built this Apple 1… the first computer to say a computer should look like a typewriter – it should have a keyboard – and the output device is a TV set, it wasn’t really to show the world here is the direction it should go. It was to really show the people around me, to boast, to be clever, to get acknowledgment for having designed a very inexpensive computer. (“A Chat with Computing Pioneer Steve Wozniak,” n.d.)

One of the people “around him” was Steve Jobs, who helped monetize and scale the Apple 1 as a package of pre-existing ideas based on pre-existing decisions. The Apple 1 and its successors borrowed heavily from the preceding culmination of design decisions by Alan Kay and his team, and Kay’s team was able to synthesize their concepts from preexisting tools for technical mediation of media, which in turn stemmed from millennia of applied “old media”.


A Chat with Computing Pioneer Steve Wozniak. (n.d.). Retrieved November 1, 2017, from


Manovich, L. (n.d.). Software Takes Command (Vol. 5). New York: Bloomsbury.

Rawlinson, N. (n.d.). History of Apple: The Story of Steve Jobs and the company he founded. Retrieved November 1, 2017, from


Grace Chimezie


How did computers become digital information processors and metamedia platforms, and how did they develop beyond earlier computation and contexts to become general-purpose machines like our PCs? In a way, computation as we have it now was, and is, a continuous build-up of ideas from everyone who contributed to the technologies and present means of computation. The war may have been a great force in implementing some of these ideas that were already in the pipeline, but I want to argue that without the war, technology may still have reached this point. Why? Because the human mind is constantly developing ideas around general interaction and new ways of solving problems.
Computation has developed beyond designs for the military, government, and businesses because of a continuous underlying theme: finding new ways of creating technologies that are understandable, simple in their complexity, and that solve our everyday problems, from social interaction to daily living.

Although these earlier minds had blueprints and visions for making computation and software available to people, they could not have imagined the stage at which it is today.

Building on the development processes of computation

Computing has become a multimedia platform for creating, transforming, transmitting, and displaying all forms of digitizable content, from simple text and photographs to high-definition films and 3D. This is a fascinating story of creative development that takes basic models of computation and combines them with a more user-centered interface to software functions, becoming what Douglas Engelbart famously called “augmenting human intellect”.

Manovich proposes three concepts that have brought us here: media hybridization, evolution, and deep remix. Furthermore, he argues that in the process of translation from physical and electronic media technologies to software, all the individual technologies and tools that were previously unique to different media met within the same software environment.

This meeting had fundamental consequences for human cultural development and for media evolution. It disrupted and transformed the whole landscape of media technologies, the creative professions that use them, and the very concept of media itself: for example, 3D computer graphics, animation, social software, and search.

The next major wave of the computerization of culture has to do with different types of software: social networks, social media services, and apps for mobile platforms.

The graphical user interface was justified using a simple idea: since computers are unfamiliar to people, we should help them by making the interface intuitive, by making it mimic something users are already well familiar with, the physical world outside. This builds up to the kind of computational design we interact with and are exposed to today.

One of the conceptual developments came from the HCI community, which developed out of an awareness that computers are cognitive artifacts and can be taken in any direction we can design.


None of what we have now, in regards to computation and its development into a multi-purpose machine, was the sole idea of one person or organization, but the combined efforts of creative geniuses, some of whom didn’t start out on this path. These individuals built on already existing artifacts; some thought about new ways of improving what was socially acceptable, and others pushed boundaries to get us to what we have now. It is important, as we draw out our inferences about computation, that we look into Kay’s body of work and his special interest in children and their usage of these technologies, and learn a thing or two about being free to learn new ways and implement our own ideas on top of the already existing body of work. Come to think of it, Engelbart had a first prototype of what your Google Docs does.


Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.

Has the consumer culture changed the way we think about the products we use?

As we have learned by studying different concepts throughout this course, there is no “magic” when it comes to the world of technology and computers. But somehow, it is hard for people to understand how something works and why it works in that specific way. While there are many theories and much research in fields like cognitive science and psychology that offer explanations of the human brain and how we perceive information, I strongly believe that by living in a consumer culture we have lost the sense of participating in the process of building things. We can now just buy what we need, make sure that the things we buy work, and never worry about how those things work.

Today, you hear about the iPhone X and the “new amazing features” that the new phone can offer its consumers, or maybe you looked at the new Apple MacBook Pro, or Microsoft’s Surface laptop with new improvements and more ways to make it interactive. There are so many new things, and in order to participate in the discussions happening on social media (because who doesn’t want to share their personal opinions with the world) and let the world know how “in” they are with the new technologies, people feel they have to buy the newest products, because everyone else seems to use them, and you don’t want to stay behind, right?

I have made that mistake too, and part of this is because you never actually see what’s happening behind the visible layer, behind that black box. To cite Bruno Latour, blackboxing is “the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.”

Everyone knows about the new features, but I doubt that people actually know the history of how these features were invented, or where they came from.

Lev Manovich, in his book “Software Takes Command”, makes the point that industry is more supportive of new innovative technologies and applications than academia is. Modern business thrives on creating new markets, new products, and new product categories.

But to build on his point, new discoveries almost never involve new content, but rather new tools to create, edit, distribute, and share content. Adding new properties to a physical medium requires modifying its physical substance. But since computational media exists as software, we can add new properties, new plug-ins, and new extensions by combining services and data.

Software lies underneath everything that comes later.

So, the next time you hear about the cool new features of a new product, think of the branding and marketing side of it.

Ted Nelson and his idea of software, as mentioned in his article Way Out of the Box

“In the old days, you could run any program on any data, and if you didn’t like the results, throw them away. But the Macintosh ended that. You didn’t own your data any more. THEY owned your data. THEY chose the options, since you couldn’t program. And you could only do what THEY allowed you — those anointed official developers” (Nelson, “Way Out of the Box”).

In his article, Nelson brings to our attention all the possible ways that we could do things. Just because some companies (Apple, and later Microsoft) took the paper-simulation approach to the behavior of software doesn’t mean that that is the only way to do it. They got caught up in the rectangle metaphor of the desktop and used a closed approach. Hypertext was still long rectangular sheets called “pages”, which used one-way links.

Nelson recognized computers as a networking tool.

Ted Nelson’s network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. Two-way linking would preserve context. It is a small, simple change in how online information is stored, but one with vast implications for culture and the economy.
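The idea can be sketched in a few lines of code. This is my own minimal illustration with hypothetical names, not Nelson’s actual Xanadu design: the point is simply that creating a link also registers a backlink, so every node knows what points to it.

```python
# A minimal sketch of two-way linking: the act of linking registers
# a backlink on the target, so context is preserved in both directions.

class Node:
    def __init__(self, name):
        self.name = name
        self.links = set()      # nodes this node points to
        self.backlinks = set()  # nodes that point to this node

    def link_to(self, other):
        self.links.add(other)
        other.backlinks.add(self)  # the target learns who links to it

essay = Node("essay")
source = Node("source")
essay.link_to(source)

# With one-way links, `source` would never know `essay` cites it;
# with two-way links, it does:
print([n.name for n in source.backlinks])  # ['essay']
```

On the web as we know it, that `backlinks` set simply does not exist: a page has no built-in record of who links to it, which is exactly the loss of context Nelson objected to.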

This example demonstrates that we should not get caught up in what the computer industry happens to do: software offers plenty of possibilities for new ways of implementation, rather than just one way.

Alan Kay’s idea of the computer as a “metamedium”, a medium representing other media, was groundbreaking. It is the nature of computational media to be open-ended: new techniques will be invented to generate new tools and new types of media.

Vannevar Bush’s 1945 article “As We May Think” discussed the idea of the Memex, a machine that would act as an extension of the mind by allowing its user to store, compress, and add additional information. It would use microfilm, photography, and analog computing to keep track of the data.

You can clearly see the metamedium idea in the Memex. The second stage in the evolution of the computer metamedium is media hybridization, which, as Manovich explains, is when different media exchange properties, create new structures, and interact on the deepest level.

It was Douglas Engelbart who recognized computers not just as a tool, but as a part of the way we live our lives. The Mother of All Demos demonstrated new technologies that have since become common in computers today. The demo featured the first computer mouse, as well as introducing interactive text, video conferencing, teleconferencing, email, hypertext, and real-time editing.


All these examples make you think about the different ways that software could behave and interact, and how these pioneers continued to push their tools to new limits to produce creative outcomes, even without access to the technology that we have today.

It really is inspiring to look at their work and understand that sometimes it is we who create the limitations on our technology, sometimes pushed by the computer industry and other factors. It is crucial to understand that there are no limits to the development of software and graphical interfaces in creating new forms of human-computer interaction (HCI).


Bush, Vannevar “As We May Think,” Atlantic, July, 1945.

Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Latour, Bruno“On Technical Mediation,” as re-edited with title, “A Collective of Humans and Nonhumans — Following Daedalus’s Labyrinth,” in Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press, 1999, pp. 174-217. (Original version: “On Technical Mediation.” Common Knowledge 3, no. 2 (1994): 29-64.

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2016. Print.

Nelson, Theodor Holm. “Way Out of the Box.” EPrints, 3 Oct. 2009. Web. <>.

A History of Computational Design

Design is an inherently communal act. One must understand the user, their needs, their desires, and their context in order to fashion an object or experience that is useful to them. The readings this week showed the truth of this concept by taking us through the history of modern computation. The community of users for early computer technology was decidedly esoteric. The government, particularly the military branches, was using this technology for a very specific purpose, and the development of computational technology reflected this. All the specific, accumulated forms of communication and thinking within the military were then ported over to this nascent technology. The same process took place with the business community. Both had certain affordances and constraints that informed the shape and usability of the early computer. Due to its highly segmented user base, the technology was designed in a highly specialized manner for very particular purposes. The barrier to the knowledge needed to operate these early computers was relatively high, which is another conscious design choice made with these early user communities in mind.

One example of this is size. The early computers were these behemoth boxes that required a large storage space and energy source. The military and business communities had those two resources in spades, and as such, there was no real design reason to consider making the computer smaller. It took both the technological advances pertaining to Moore’s Law, as well as a new design target for the computer to shrink to a size suitable for the general population.

Another example of this would be the computer’s user interface. Initially, user interface was a heavily textual process. Lines of abstruse code would need to be input in order to access the features of the computer. The advances made by Xerox PARC in developing GUI technology were a crucial component in the broadening of the potential user base for computers. Now, instead of being required to learn a new language to use a computer, the average individual could find their way around using the far more intuitive graphical process. Just as it’s easier to understand a bathroom sign in a foreign country than it is to understand the foreign words written underneath it, the symbols on the computer make it easier to navigate and communicate our intentions, and usefully explore its features. GUI advancements were an incredible conceptual leap, and a crucial step in bringing about the Microcomputer Revolution. The design target for the computer had shifted from the highly specialized communities of the military and business community to the general population.

The important takeaway for me is that design is not destiny. It requires active decisions by a network of individuals, organizations and communities to produce a product or experience suited to a particular community. These decisions are not static, nor given. So it is incumbent on the designer to understand their role in this process, and take a sense of ownership over the design decisions they choose to make.


1. Lev Manovich, Software Takes Command, pp. 55-239; and Conclusion.

2. J. C. R. Licklider, “Man-Computer Symbiosis” (1960) | “The Computer as Communication Device” (1968)

3. Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Prophecies drive computational development

If I were asked about the changes in computers, the most significant impressions from my experience using them over the past twenty years would be size, portability, and the practicality afforded by Internet connectivity. I had merely assumed that designers made these improvements to comply with the demands of the times and of people. After reading this week’s influential articles, I’m surprised that our achievements in computation were predicted by great computer scientists 60 years ago. Building on historic designs in the fields of the military, government, and business, the conception of human-computer cooperative interaction brought about software innovation, and it also motivated the computer’s role as a metamedium.

Many computer technologies and software designs have corresponding ideas from early times. A prototype of the personal computer existed far earlier than we might expect. Vannevar Bush introduced the memex, a device for storing information, in his 1945 article “As We May Think”. Microfilm helps people trace the original records stored in the memex. This idea initiated the invention of hypertext. Sketchpad, created by Ivan Sutherland in 1963, provided great inspiration to many popular graphic design software techniques. I found that Sutherland’s Sketchpad was aligned with Licklider’s thinking on “man-computer symbiosis”, introduced three years earlier, in 1960. Licklider argued that the symbiotic cooperation would be successful if man and computer interact on the same surface and “integrate positive characteristics”. Take drawing a circle as an example. People have the incentive and imagination to draw a circle, while computers are “fast and accurate” enough to turn a person’s irregular drawing of a circle into a standard round shape. People can barely draw a precise circle (unless they use a compass) without the help of a computer, while a computer cannot supply a person’s creative thinking.
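The division of labor in the circle example can be made concrete with a toy sketch of my own (not from Licklider or Sutherland): the human supplies a rough freehand “circle”, and the machine supplies speed and accuracy by fitting an exact circle to the wobbly points.

```python
# Fit a perfect circle to a wobbly hand-drawn one: the human provides
# intent and imagination, the machine provides precision.
import math

def fit_circle(points):
    # Simple centroid fit: center = mean of the points,
    # radius = mean distance from that center.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return (cx, cy), r

# A wobbly "hand-drawn" circle: 12 points around the origin, alternately
# a little outside and a little inside the ideal radius of 1.
sketch = []
for k in range(12):
    t = math.tau * k / 12
    wobble = 1 + 0.05 * (-1) ** k
    sketch.append((wobble * math.cos(t), wobble * math.sin(t)))

(cx, cy), r = fit_circle(sketch)
# The fitted circle sits near center (0, 0) with radius near 1:
# the machine has smoothed out the human wobble.
```

The centroid fit used here is the crudest possible method; the point is only the symbiosis, not the geometry.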

According to Manovich, along with Alan Kay’s invention of the Dynabook, the term “metamedium” was introduced to describe the computer as a platform to “create new tool of working with the media types it already provides as well as to develop new not-yet-invented media.” Opening the iMovie app, we can simply use the front camera to take a short video, just as we would use a video camera. But the app is much more widely used for editing videos. We can combine two video clips, adjust video length, and add subtitles and filters. With the help of the computer, a new video is created. We can even share the video with friends through email or social media networks, where not-yet-invented media such as feedback from friends (in text, image, or video formats) sprout constantly.

Manovich also explains “media hybridization” as the next stage of metamedium evolution, suggesting “a more fundamental reconfiguration of media universe in which media properties are exchanged, and new structures are created.” Following a hint from Manovich’s book, categorizing photos and videos on an iPhone by the places where they were taken is an example of a media hybrid between GPS location and photography.
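The GPS/photo hybrid boils down to a very small mechanism. Here is my own illustrative sketch (the filenames and coordinates are made up, and this is not Apple’s actual implementation): each photo carries location metadata, and software groups photos by place.

```python
# Group photos by where they were taken, using GPS metadata attached
# to each image: a hybrid of photography and location data.

photos = [
    {"file": "img1.jpg", "lat": 38.907, "lon": -77.072},
    {"file": "img2.jpg", "lat": 38.908, "lon": -77.071},
    {"file": "img3.jpg", "lat": 40.748, "lon": -73.986},
]

def group_by_place(photos, precision=1):
    # Round coordinates so photos taken near each other share a bucket.
    groups = {}
    for p in photos:
        key = (round(p["lat"], precision), round(p["lon"], precision))
        groups.setdefault(key, []).append(p["file"])
    return groups

places = group_by_place(photos)
# Two photos land in one location bucket, the third in another.
```

Neither medium changes on its own; the hybrid emerges because the software can read both kinds of data in one environment, which is exactly Manovich’s point about tools meeting inside software.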

Although I’m not sure whether the following example counts as a hybrid of text, audio, photos, and videos, I would like to share my thought here. Many social media platforms such as Facebook allow users to view a video in the bottom right corner of the screen. Users can still browse the post of the video on the main screen, where the video becomes a temporary still image. It also enables people to make comments on the main screen while the video is playing in the corner. As Manovich describes, “media hybrids represent our experience in a new way by combing and possibly reconfiguring already familiar media representations.” Traditionally, users had no choice but to watch a video in full screen and make comments afterwards. This new social media experience offers a new way of interacting. The combination of these media provides users with more information and the opportunity to multitask on social media platforms.


Vannevar Bush, “As We May Think,” Atlantic, July, 1945.

Licklider, J. C. R. (1960). “Man-Computer Symbiosis.” Excerpt reprinted in The New Media Reader. Retrieved November 1, 2017, from

Manovich, L. (2013). Software Takes Command. New York; London: Bloomsbury.

A Tool for the People

Last class Deborah asked, “If World War II didn’t happen, would computing be at the place it is today?” It was a very intriguing question, and at the time I pushed back on it slightly. World War II offered an incentive, I proposed, to turn theory into application. Now, however, after looking at Vannevar Bush and the other inventors of the computer as a metamedium, I’m less certain.

Figure 1: Atomic cloud over Hiroshima, taken from “Enola Gay” flying over Matsuyama, Shikoku

The Second World War was a transformative moment for many academics. This transformation was not just because it allowed for the convergence of scholarly research, industry, and the military, nor just because the results were proof of concept. Those factors changed the nature of what was known and what could be designed, but the war also created an existential crisis for many researchers involved in the construction of devices that caused mass destruction. There was a reactionary feeling to much of the computer design that came afterwards, a feeling that this knowledge needed to be reclaimed and used for more life-affirming scholarship such as art and music. This feeling was clear in Vannevar Bush’s essay “As We May Think”:

The applications of science have built man a well-supplied house, and are teaching him to live healthily therein. They have enabled him to throw masses of people against one another with cruel weapons. They may yet allow him truly to encompass the great record and to grow in the wisdom of race experience. He may perish in conflict before he learns to wield that record for his true good. Yet, in the application of science to the needs and desires of man, it would seem to be a singularly unfortunate stage at which to terminate the process, or to lose hope as to the outcome. (Bush 1945)

The leading computer designers were disposed to correct course after a brutal war. In turning their attention back to academia, they looked to remediate the tools of learning, while simultaneously building on the affordances provided to enhance what was possible. Bush hypothesized the Memex machine and Sutherland created Sketchpad, expanding what was possible for graphical user interfaces (GUIs). The rise of consumer product markets, the mass production of electronics, and advances in modularization also laid the groundwork for rolling out a personal computer. “The world has arrived at an age of cheap complex devices of great reliability,” Bush wrote in 1945, “and something is bound to come of it.”

There was simultaneously a feeling that, in this new world order, scholarship lacked the tools to keep up with how rapidly life was changing. Douglas Engelbart, describing the state of society, wrote: “Man’s population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster” (Engelbart 1962). Bush’s Memex and Engelbart’s framework for augmenting the human intellect were attempts to formulate the tools necessary to make humans more capable and to expand what could be made and known.

Figure 2: Memex in the Form of a Desk (Bush 1945)

The two ideas of human cognition and cultural products were combined in Alan Kay and Adele Goldberg’s proposed personal computer, the Dynabook. The Dynabook was designed as a “metamedium, whose content would be a wide range of already-existing and not-yet-invented media” (Kay and Goldberg 1977). The “not-yet-invented” part of this definition was key, as they were theorizing a device that was truly interactive, which meant learnable and easily programmable. Kay and Goldberg envisioned a personal computing device that could be sculpted by its users to perfectly match their purposes, e.g., “an animation system programmed by animators” (Kay and Goldberg 1977). In their paper “Personal Dynamic Media,” they describe how the Dynabook would come with the programming language Smalltalk, making programming simple enough for grade-school students to learn. Unfortunately, as Manovich notes, the computer as a simulation of media, rather than as a tool of tools, was the eventual implementation. However, Manovich also notes that improvements in programming languages in recent years have come closer to the Smalltalk vision. Perhaps efforts like Codecademy to demystify programming will produce some pushback on the prepackaged software industry. While that seems unlikely at the moment, the disruption experienced by the music, film, and television industries when the tools of production and distribution became simpler and more cost-effective may indicate that nothing is monolithic.

Works Cited

Alan Kay and Adele Goldberg, “Personal Dynamic Media,” First published 1977. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Douglas Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

509th Operations Group. Atomic Cloud over Hiroshima, Taken from “Enola Gay” Flying over Matsuyama, Shikoku (commentary by Chugoku Shimbun). August 6, 1945. Photograph.

Lev Manovich, Software Takes Command. INT edition. New York ; London: Bloomsbury Academic, 2013. 

Vannevar Bush, “As We May Think,” Atlantic, July, 1945.



Conceptual Development of Computers

This week’s readings are about the history of computation, and they see computers as metamedium interfaces for interactive cultural creation rather than just tools designed and used by the same person. The movement of the computer from a device used only by the military and big businesses to something the everyday person has at home required changes to the initial design of the computer.

When watching the video, I was amazed by the computer techniques Alan Kay introduced. I used to think that functions and interactions between humans and computers, such as Sketchpad recognizing human strokes and adjusting the drawing according to the user’s requirements, were advanced technologies available only today; after watching the video, I came to realize that these functions and technologies enabling human-computer symbiosis already existed. As Manovich points out, the computer is, to some extent, a “remediate machine” that “expertly represents earlier media”, and was not expected to function any differently from how it first appeared. The development of technology, in this sense, seems to be the combination of already existing ideas and software, while the hardware sets the ceiling for software performance and higher-quality GUIs.

However, if that’s the case, what alterations made computers evolve into “general purpose” information-processing machines rather than remaining like the earlier designs, built just for military, government, and business applications? I think the conceptual step that changed the nature of the computer lies in how programs run the routine work while offering users a more interactive and transparent interface at the same time. Licklider claims in “Man-Computer Symbiosis” that “Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.” Users are thus free to put more effort into things computers couldn’t help as much with, such as intellectual thinking, creating meanings, formulating hypotheses, and other cultural activities. Also, new media, according to Manovich, is no longer the sole province of designers. Instead, the highly interactive interface gives users the freedom to develop content according to their requirements. Alan Kay described this kind of interface as “truly intimate”: he felt he could stick his hands into the display and touch things on the screen directly. These are the concepts behind the design that make computers more approachable to ordinary, nontechnical users.

As mentioned in the readings, the Memex is a good representation of this point. The memex, thought up by Vannevar Bush, was one of the earliest examples of a conceptual change in what a computer could mean to people. It took the abilities of the computer and tried to see how they could be implemented differently. Anyone who could make use of a library would find a memex to be even more useful. Its goal was to compress books and other written media into a searchable knowledge network that could be accessed via a table-sized machine. While this device never came to be, it gave a good idea of the technical and design requirements for computers to be widely used. Being small enough to fit in a home and able to be mass-produced so that most people could afford one were essential, as was the addition of a screen that gave users feedback through a graphical user interface (GUI). While the memex was never built, in the coming years the personal computer was, and it proved even more versatile than Bush had anticipated.
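The “searchable knowledge network” at the heart of the memex can be sketched in miniature. This is my own hypothetical toy, vastly simpler than Bush’s microfilm design; the record names and texts are made up, loosely echoing Bush’s own example of a researcher studying the bow and arrow.

```python
# A toy memex: compressed records that can be searched by keyword,
# plus associative "trails" that tie related records together.

records = {
    "turkish-bow": "Notes on the short Turkish bow and its performance.",
    "english-longbow": "Notes on the English long bow and its range.",
    "elasticity": "Physical constants of elasticity for bow materials.",
}

# Associative trails: each record remembers the records it was tied to,
# so a past line of inquiry can be retraced item by item.
trails = {
    "turkish-bow": ["english-longbow", "elasticity"],
    "english-longbow": ["turkish-bow"],
}

def search(term):
    # Find every record mentioning the term, like a library index.
    return [name for name, text in records.items() if term.lower() in text.lower()]

hits = search("bow")
# From any hit, the trail can be followed associatively:
related = trails.get(hits[0], [])
```

The two halves map onto Bush’s two key ideas: `search` is the indexed library, and `trails` is the associative linking that he argued matches how human memory actually works.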


Alan Kay on the history of graphical interfaces

J. C. R. Licklider, “Man-Computer Symbiosis” (1960)

Lev Manovich, Software Takes Command.

From “Calculator” to “Computer”

After reading the articles this week, I have obtained a new perspective on my computer. So many times I took the convenience of technological development for granted that I somehow had the illusion that the modern computer was simply born a modern computer.

I’m going to talk about what I’ve learned from a brief history of the graphical interface.

  • 1945: Vannevar Bush published his essay “As We May Think”, predicting the future of computing and explaining how the “Memex” (a portmanteau of “memory” and “index”) would work. He raised the idea that the next step for scientists was to make the recording of “confusing” and “complicated” knowledge easier and more explicit.
  • 1960: “Man Computer Symbiosis”written by J.C.R. Licklider outlined a plan of turning computers from military and commercial tools into cooperators of human beings in daily life.
  • 1963: Ivan Edward Sutherland invented “Sketchpad”, ancestor of modern computer-aided design programs.
  • 1968: Douglas Engelbart delivered the famous speech “Mother of All Demo”, showing a series of new things and concept including graphic interface, mouse and video conference. Inspired by Bush, Engelbart explored on new techniques that would do massive recordings of human activity thus augmenting human intellect. In his article, he said that “the intellectual worker must know the capabilities of his tools and have good methods, strategies, and rules of thumb for making use of them”.
  • 1970s: Alan Kay came up with the model of “Dynabook”and a new computing language “smalltalk”, deeply impressed by “Mother of All Demo”. he even created a new term “matamedium” to show the speciality of “Dynabook”.

From the timeline we can see that the concept of HCI (Human-Computer Interaction) has been passed down from generation to generation, with the central idea of helping human beings better improve themselves. Bush’s personal library, the Memex; Licklider’s “man-computer symbiosis”; Sutherland’s Sketchpad; Engelbart’s graphical interface and mouse; and Kay’s Dynabook and Smalltalk are all attempts to put the computer in the position of a partner in the human-computer relationship. In this way, the computer is no longer the machine once named a “calculator,” built to solve sets of complex mathematical problems. The concept of “computer-as-technology” has shifted to “computer-as-medium.” With the term “metamedium,” Kay meant to highlight a new property of the computer: “being simultaneously a set of different media and a system for generating new media tools and new types of media.” It reaches into other media (for example, a music recorder or a digital album), exchanging properties and borrowing their unique features.

The computer’s change of role is also reflected in its Chinese name. I remember that when I was a kid, people called the computer “jisuanji,” which means the same as “calculator.” In recent years the name “diannao” has become popular: it literally means “electronic brain,” in contrast to “rennao,” the brain of a human being.

Unfulfilled Principle:

With the Dynabook model and Smalltalk, Kay tried to enable people without a background in computing to create their own tools. However, what I observe in daily life is that people mostly use computers to do things that could also be done with “traditional” tools (just with more time and energy).

The question Bush raised in “As We May Think” is also still waiting to be answered: whether it will ever be possible to establish a direct path for people to create or absorb the material of record.


Bush, Vannevar. 1945. “As We May Think.” The Atlantic, July.

Licklider, J. C. R. 1960. “Man-Computer Symbiosis.” IRE Transactions on Human Factors in Electronics HFE-1 (1): 4–11. doi:10.1109/THFE2.1960.4503259.

Engelbart, Douglas. 1962. “Augmenting Human Intellect: A Conceptual Framework.” Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, vol. 5. New York; London: Bloomsbury.

The Changes in Computer

“In the 1950s-60s, no one in the computer industry could have imagined the digital media world of today”.

I took this excerpt from Dr. Irvine’s introduction video. No one can predict the future of technology accurately, because we live in an era of information explosion: everything in the computer industry changes rapidly. Several decades ago, computers were designed for government, military, and business use. The machine often called the first true computer, the ENIAC, was financed and developed for the US Army Ordnance Corps and completed in 1946. It was essentially a giant calculator, and it occupied an entire room. World War II and the Cold War that followed were major driving forces behind early computer development: the fear of falling technologically behind other countries pushed the US to invest heavily in computing, and new weapons and military systems such as guided missiles and, later, GPS required computers to run them. Because of these military factors, the first computer came out, and it looked like this:

(photo from Wikimedia Commons)

Now let’s see what an Apple computer looks like today:

This is the latest MacBook Pro, released in September 2017. Obviously it is much smaller, more portable, and better-looking. It also uses a graphical user interface (GUI), a design that lets users interact with the computer through graphical icons and visual indicators, making it easy to control. For users and customers, computers with GUIs are more flexible, easier to control, and more entertaining. As Alan Kay said of the GUI, “For the first time, I felt like I was touching the information structure”; the GUI gives users a new kind of experience.


Beyond the changes in appearance, the new MacBook reveals functional changes in the computer. As we have discussed, computers in the last century were designed for military and business use, while today they serve more educational and entertainment purposes, and instead of government use they are mostly for individual use. Alan Kay contends that a computer is best seen as a “metamedium”: not a medium with its own fixed rules, but a platform that can represent other media. For example, we can purchase apps from the App Store, and different apps and software have different functions. With iTunes, people can enjoy music; with YouTube, people can watch any kind of video they like; with BBC News, people can learn what is happening in the world; and with Python or Java, people can build their own software.

In addition, the new MacBook adds a Touch Bar above the keyboard, so users can control the computer by touching different buttons. This change also increases Human-Computer Interaction (HCI). macOS also puts the desktop metaphor to use: the monitor is treated as a desktop, and folders and documents are arranged on it. Because the desktop is close to our everyday life, the metaphor makes the computer easier to control. All of these changes show that good design makes our lives easier.


Martin Irvine, Intro to Information Theory in Meaning Systems.

Martin Irvine, Introduction to the Technical Theory of Information.

Alan Kay, Doing with Images Makes Symbols (1987).

History of Hypertext

As society develops, we are gradually entering the information era, a period in which information is exploding all around us, so selecting out useful and necessary information has become a very important task. The fundamental concepts and frameworks discussed this week emerged to meet it: to increase the human ability to cope with complex problems, and to enhance human ability with the aid of machine programs.

Proposed by Vannevar Bush, the Memex offered a new way of managing information that could cope with information at large scale. He observed that, given such a huge amount of information, experts in a single discipline were no longer able to read and track everything in that discipline, because existing search methods forced them to go over everything in order. Hence the Memex: a machine that would scan and store new resources together with the user’s annotations.

Augmenting human intellect
Influenced by the Memex, Doug Engelbart started a research program to enhance the human ability to tackle complex problems with the aid of computing systems, exploring the methodological possibilities. This made the creation of hypertext possible: using the NLS he built, he shared hypertext with colleagues 500 miles away through these new symbol-manipulating methods.

Ted Nelson proposed the idea of the Xanadu system, imagining that anything written by anyone could be stored in a common hypertext: a medium for information resources organized around the connections among non-committal nodes, forming a network by linking information nodes together rather than following the inline order of older information systems. Several famous early systems followed, such as FRESS, NoteCards, Intermedia, and Guide. With new kinds of media content like video and pictures came hypermedia as well. The relationships among information nodes become links, which provide search paths for the audience; links in turn form graphs and webs, so hypertext has several layers.
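The node-and-link structure described above can be sketched as a tiny graph. The following Python snippet is only an illustration (the node names and links are invented for this example, not taken from any real hypertext system): each node can link to any others, and a reader can follow link paths in no fixed order.

```python
# A minimal sketch of hypertext as a graph: each entry names a node,
# and its list holds the nodes it links to. There is no inline order,
# only connections.
links = {
    "memex": ["engelbart", "hypertext"],
    "engelbart": ["nls", "hypertext"],
    "nls": [],
    "xanadu": ["hypertext"],
    "hypertext": ["web"],
    "web": [],
}

def reachable(start):
    """Collect every node a reader could reach by following links from start."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(links[node])
    return seen

print(sorted(reachable("memex")))
```

Note that "xanadu" is never reached from "memex": in a hypertext network, reachability depends entirely on which links exist, not on any overall sequence.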

Hypertext is similar to human thought: it has no settled order, resembling the associative links of the mind. The nodes of these systems can be not only text, pictures, or video, but also computer programs, generating many possibilities. Then, with the arrival of the hypertext transfer protocol and the web, and with ARPA’s launch of the ARPANET, the first form of the internet was established.