Based on the reading and our group discussion, we identified several concepts and technologies that have enabled modern computing devices to become mediating, mediated, and metamedia platforms.
Concepts
GUI: The concept of the graphical user interface (GUI), developed by Engelbart, Butler Lampson[i], Kay, and others, allows everyone to navigate computing systems easily, thereby mediating other media.
OOP: Object-oriented programming (OOP) is a programming model organized around objects rather than “actions,” and around data rather than logic. This concept allows a program to grow in size and complexity while its individual parts remain short and simple.
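The idea can be shown in a minimal sketch (the classes here are hypothetical examples, not from any particular system): data and the actions on that data live together in an object, so each piece stays small even as the whole program grows.

```python
class Image:
    """An image object bundles its data (pixels) with its behaviors."""
    def __init__(self, pixels):
        self.pixels = pixels          # data lives inside the object

    def invert(self):                 # behavior that operates on that data
        return Image([255 - p for p in self.pixels])

class Thumbnail(Image):
    """New features extend an existing object instead of rewriting it."""
    def __init__(self, pixels, scale):
        super().__init__(pixels)
        self.scale = scale

img = Image([0, 128, 255])
print(img.invert().pixels)            # [255, 127, 0]
```

Because each object manages its own data, a program can add new kinds of objects (like `Thumbnail`) without touching the code that already works.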
Simulation: Another concept that makes computing devices a platform of metamedia is the concept of simulation. As Manovich put it, “Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society[ii].”
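Turing’s insight can be illustrated with a toy sketch: one general program simulates many different machines, given only their descriptions. The parity-checker machine below is a made-up example, but the `simulate` function itself knows nothing about it and could run any machine described the same way.

```python
def simulate(transitions, state, inputs):
    """Run any finite-state machine described by a transition table."""
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# A description of one particular machine: it tracks the parity of 1-bits.
parity_checker = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

print(simulate(parity_checker, "even", [1, 0, 1, 1]))  # odd
```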
Supporting Technologies
As a metamedia platform, a computer can carry out the whole procedure of inputting, editing, and outputting media. This procedure requires supporting technologies such as transistors, the Internet, and the technologies of digitization, sampling, compression, software, and display.
Sampling and Digitization: Technologies such as the Fourier transform, which “decomposes a function of time into the frequencies that make it up[iii],” enable us to convert between analog and digital signals. This ability allows media information to be sampled, digitized, manipulated, stored, and transferred easily and with high fidelity. For example, we can digitally capture an image from a paper magazine with a scanner that assigns three numbers, representing RGB values, to each pixel; the result can be stored on a hard disk, rendered on a display screen, and transferred to another computer. The sampling process is not perfect: the scanner has a limited resolution, and information beyond that resolution is lost. Yet the result preserves meanings that human beings can understand. This kind of sampling and digitization enables a computer to become a platform for all kinds of media, transforming it from a “universal Turing machine” into a “universal media machine”[ii].
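A minimal sketch of the idea, with made-up parameters: a continuous “analog” signal (a sine wave) is sampled at discrete times, and each sample is quantized to an 8-bit integer, just as a scanner assigns discrete RGB values to each pixel. Detail between samples is lost, but the digital version can be stored, copied, and transmitted without degradation.

```python
import math

def digitize(signal, sample_rate, duration, levels=256):
    """Sample a continuous signal in [-1, 1] and quantize to integer codes."""
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                           # discrete sample time
        value = signal(t)                             # read the analog value
        code = round((value + 1) / 2 * (levels - 1))  # map [-1, 1] to 0..255
        samples.append(code)
    return samples

wave = lambda t: math.sin(2 * math.pi * 5 * t)        # a 5 Hz "analog" tone
digital = digitize(wave, sample_rate=40, duration=1.0)
print(len(digital), min(digital), max(digital))       # 40 0 255
```

Raising the sample rate or the number of quantization levels captures the original signal more faithfully, which is exactly the resolution trade-off a scanner makes.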
Compression: File compression reduces storage space and transmission time. One way compression works is by taking advantage of redundancies: “most computers represent text with fixed-length codes. These files can often be shortened by half by finding repeating patterns and replacing them with shorter codes[iv].” If you store the same photo in both JPG and BMP formats, the ratio of their file sizes can be around 16:1, which means you can store roughly sixteen times as many JPG photos in the same space, and uploading them takes only about one sixteenth of the time. Compression thus lets us handle media files more efficiently and helps turn a computing device into a metamedium.
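Redundancy-based compression is easy to demonstrate with Python’s standard `zlib` module (the sample text is made up for illustration): highly repetitive data shrinks dramatically because repeating patterns are replaced with shorter codes, and decompression recovers the original exactly.

```python
import zlib

redundant = b"the quick brown fox " * 100   # 2000 bytes of repeating patterns
compressed = zlib.compress(redundant)

# Repetition compresses extremely well; the exact size depends on the library.
print(len(redundant), len(compressed))
print(f"ratio about {len(redundant) / len(compressed):.0f}:1")

# Lossless: the original bytes come back unchanged.
assert zlib.decompress(compressed) == redundant
```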
Storage: From tape to disk, to flash memory, and to cloud storage, successive storage technologies have helped computing devices hold more files and display files of many different formats on a single device at the same time.
Software and Algorithms: In Software Takes Command, Manovich stressed the importance of software, which in his view is where the “newness” of new media lies[ii]. With software, we can easily manipulate existing media, and new properties can easily be added to them. iMovie, Word, Photoshop, Audition, CAD, 3ds Max… such applications enable ordinary people to create media content in ways that were once accessible only to professionals. In addition, new software and new tools are constantly being created. For example, with C++ and other programming languages, designers have produced many computer games, a new genre of software; Reddit, a social news aggregation website used to share media, was programmed in Python. Thus, computers become what Kay and Goldberg called “metamedia”[ii].
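How software adds new properties to existing media can be sketched in a few lines (a hypothetical example, not how any particular application is implemented): a brightness “filter” applied to RGB pixel values, the kind of operation that programs like Photoshop wrap in a graphical interface.

```python
def brighten(pixels, amount):
    """Raise each RGB channel by `amount`, clamped to the valid 0-255 range."""
    return [tuple(min(255, c + amount) for c in px) for px in pixels]

photo = [(10, 20, 30), (200, 250, 100)]   # a tiny made-up "image"
print(brighten(photo, 20))                # [(30, 40, 50), (220, 255, 120)]
```

Once media exist as numbers, every new filter or tool is just another function over those numbers, which is why software can keep inventing properties that the original medium never had.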
Transistors: The constant miniaturization of transistors over the past decades, as described by Moore’s Law, has exponentially increased computing power, and with it the capacity to handle media content. Ten years ago, exporting a twenty-minute 720p video file took my desktop computer two hours; today, my MacBook can edit 1080p video in real time, largely because of the increased computing power of computer chips.
Internet: Since Engelbart’s oNLine System (NLS), we have made huge progress in linking computing devices together. With the rise of the mobile internet, we live in an increasingly ubiquitous computing environment: we constantly edit and share media content online, sending pictures to friends and sharing texts and music on social media every day. Recently in China, live video streaming has become very popular; people are so fascinated by sharing their own everyday lives and watching other people’s that some popular hosts are reportedly valued at as much as five million dollars.
Display: We now have many display technologies that meet different demands. Display resolution has increased greatly, while monitors have become smaller and thinner. Along with increasingly powerful graphics processing units (GPUs), this trend enables computing devices to represent media content with ever higher fidelity, allowing for more and more sophisticated media manipulation. Here we would like to emphasize two display technologies.
- The first is electronic ink (E Ink), used in the Amazon Kindle. We think E Ink meets the requirements Alan Kay envisioned for his Dynabook[v]. He suggested that a CRT was not suitable for a device meant to be used anywhere, and envisaged a display “technology that requires power only for state changing, not for viewing—i.e. can be read in ambient light.” E Ink clearly meets this requirement, saving power and greatly extending the time between battery charges. The use of E Ink in the Kindle remediates the functions of books.
- The second is touch screen technology, initiated by Ivan Sutherland[vi], which can easily “engage the users in a two-way conversation,” as Kay envisioned[vii]. Touch screens allow us to interact directly with computing systems and manipulate media files with ease.
Unimplemented Dream
One of Alan Kay’s design concepts for access to software that has not been realized is that everyone should learn how to program.
According to Kay, a programming environment, with ready-made programs and already-written general tools, can help people make their own creative tools. He predicted that different people could use such a mold to channel the power of programming to their own needs[ii]. Programming can also help people develop computational thinking, also known as complex problem-solving skill, since a programming language is a procedural language. This mission has not been achieved: people today still see programming as a task that only experts can handle.
What We Want
In our discussion, we imagined many interfaces that go beyond the commercial products we use today: Augmented Reality (AR) devices like Magic Leap and Microsoft’s HoloLens, Virtual Reality headsets like the Oculus Rift, MIT’s Reality Editor, eye-tracking interfaces that could be used by ALS patients, maps projected onto windscreens, Ray Kurzweil’s mind-uploading nanobots, and virtual assistants that understand natural language, such as Siri and Cortana. Yet while taking notes on our ideas, we could not find a note-taking application in which we could not only type words but also draw sketches, build 3D models, and record and edit audio and video clips. In other words, no application can handle all media formats. So here we describe the interface of such a note-taking application. It is much like the system Engelbart presented in the “mother of all demos” in 1968[viii].
We understand that modern software is developed by different companies with financial interests, which close their systems in order to lock users in. For example, a Photoshop PSD file cannot be read or edited in CAD software. The interface we envision would dissolve the boundaries between different applications, enabling us to manipulate any category of media easily, combining flow charts, pictures, texts, sound, and other media without switching software.
For example, when taking notes in the CCTP-711 class, we can type in Professor Irvine’s words. We can also record his voice and have it transcribed into text by a built-in speech recognition module. When he talks about the “mother of all demos” video, we do not need to minimize the app window and watch it on YouTube in a web browser; instead, we can insert the video directly into our notes. When he talks about some ancient semiotic artifact, we can easily insert its 3D model into the editing area without installing cumbersome 3D software like CAD. These media objects can be edited and rearranged at any time afterward. In a nutshell, it is a knowledge navigation system.
To achieve this interface, several things need to be done. First, media software companies should open their source code, or at least provide more open APIs to developers. Second, more computing power is needed to process so many kinds of media at the same time. Third, we need cloud computing and fast networks to store and retrieve so much information quickly. Fourth, machine intelligence is needed to process natural language and respond to users in a more natural way.
References
[i] Lampson, Butler. 1972. “GUI Designer at Xerox PARC.”
[ii] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume#5. New York ; London: Bloomsbury.
[iii] “Fourier Transform.” 2016. Wikipedia. https://en.wikipedia.org/w/index.php?title=Fourier_transform&oldid=748484336.
[iv] Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.
[v] Kay, Alan. 1972. “A Personal Computer for Children of All Ages.” Palo Alto, Xerox PARC.
[vi] Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.”
[vii] Kay, Alan. 1977. “Microelectronics and the Personal Computer.” Scientific American 237 (3): 230–44.
[viii] “CHM Fellow Douglas C. Engelbart | Computer History Museum.” 2016. Accessed October 31. http://www.computerhistory.org/atchm/chm-fellow-douglas-c-engelbart/.