Category Archives: Week 11

Living Engelbart’s Dream (Rebecca and Becky)

There is a lot to say about how computing got to the metamedia stage, and about where it could be going next. Lev Manovich puts the focus on software as unlocking computing’s potential as metamedia. But humans, of course, wouldn’t be able to build computing systems or interact with them if they were not members of a symbolic species who can make new meaning from abstractions. Not only can we make meaning with symbols on a screen, but we can in parallel make meaning out of other symbols as well, from sounds to videos to facial expressions. The hybridization of multiple mediums would be, well, meaningless without those capabilities.

Working with that foundation, electrical engineers, mathematicians, computer scientists, and more moved from massive mechanical artifacts that take inputs and produce outputs to electricity-powered interactive computing systems that automatically feed outputs back into the system to produce new meanings. With the advent of high-level programming languages that humans can relatively easily read and understand, the process of writing programs that computers can then execute became more efficient. Software for computing systems proliferated, allowing humans to offload some of their cognitive burden onto the machines.

Most notably, Bush, Sutherland, Licklider, Engelbart, and Kay advanced computer design by putting forth plans for interfaces and human-computer interaction that would support and augment human intellectual capabilities. In particular, Kay sought to establish the PC as a tool for learning. His vision was significant because it gave users, children even, the ability to manipulate programs to solve unexpected problems and develop new ideas and processes.

While Kay’s vision seemed clear, it is interesting to think that our two mainstream commercial options for operating systems (Mac and Windows) are closed to normal-user manipulation. Some software can be modified, but doing so requires programming knowledge that isn’t universally taught. Apps, and the relative ease with which they can be developed, are potentially current manifestations of Kay’s vision.

Though Kay’s learning concept was not standardized, as we read each new kernel of information about the DynaBook and his other ideas, it became clearer that he in many ways wrote the blueprint that developers would follow for decades. Many of the concepts have been attempted in real life or already standardized: text editing, the mouse, graphical user interfaces, screen windowing, pressure-sensitive keyboards, synthesizers that can hook up to PCs to make music.

A particularly transformative concept was Kay’s vision of personal dynamic media, which was designed to “hold all the user’s information, simulate all types of media within a single machine, and ‘involve the learner in a two-way conversation’” (Manovich 61). This could be viewed as an early description of various AI technologies available today, such as Amazon Echo or IBM’s Watson. Yet, as Manovich explains, it also generally applies to the interactions with software that would come to transform the way we understand media.

Meanwhile, Sutherland with his Sketchpad prototype emphasized the need to interact with data in different dimensions. The division of his screen into four separate quadrants could be viewed as an early predecessor to the concept of hypermediacy. Engelbart’s concept of view control, which allowed users to switch between different views of data, shows the importance that he placed on the concept of user perspective and indicates his understanding of how to “layer” mediums.

However, Kay’s development of the graphical user interface, which provided a “desktop” on which different programs could be displayed and layered, is something that we truly take for granted when using modern computing devices. For instance, both Rebecca and Becky have many programs running simultaneously to process text, listen to music, send texts, manage emails, navigate multiple webpages, and more. We can toggle between the various windows and tabs with easy keyboard shortcuts and little thought, thanks to Kay’s design concepts.

Yet, both Rs independently ended up at the same idea for tweaking this concept: flattening the layered interface system. In a sense, Microsoft’s OneNote and Google Docs are headed in this direction. However, this could go further by, for instance, first including a web browser as part of the word-processing interface, so that users no longer have to switch between windows but instead have everything contained in the workspace in which they are operating. (Word has some internet search functionality, but the integration doesn’t go as far as we have in mind.) Eventually, all media software could be combined into one layer. This might be awkward to do, given current hardware limitations and the drive to make devices smaller, but not impossible. It could work well with larger fields of view, such as a virtual or augmented reality computing system that is not limited by display size. The goal would not be to simply play music in iTunes or edit movie clips in iMovie or draft documents in Word and then put them all together. Rather, the point would be to allow someone to use these various forms of media in one software platform in order to access them in a more integrated way.

These readings brought up a number of additional ideas for both of us, but for ease of reading, we’ll keep them brief and discuss in person. A common theme among Kay’s and others’ ideas seemed to be the concept of developing interfaces that are better adapted to the human body: the chair with the keyboard in it, for instance. This category also includes the idea of eliminating the keyboard altogether and just using voice or graphical input to interact with the computing system. This is an area that has not been explored fully, but that would potentially be of great benefit to all those with stiff necks and more. Innovation, and history in general, also seems to flow in cycles, roughly speaking: innovation, then consolidation and refinement, then innovation again. It seems as if we might be ready for that next age of innovation in the computational world.


Works Referenced

“Alan Kay — Doing with Images Makes Symbols.” Filmed 1987. YouTube video, 48:20. Posted by John DeNero, November 12, 2013. https://www.youtube.com/watch?v=kzDpfk8YhlE.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.

Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: MIT Press, 2003.

Kay, Alan C. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox Palo Alto Research Center, 1972.

Kay, Alan, and Adele Goldberg. “Personal Dynamic Media.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 393–404. Cambridge, MA: MIT Press, 2003.

Licklider, J. C. R. “The Computer as Communication Device.” In Systems Research Center, In Memoriam: J. C. R. Licklider, 21–41. Palo Alto, CA: Digital Equipment Corporation, 1990.

Manovich, Lev. Software Takes Command. New York: Bloomsbury Academic, 2013.

Sutherland, Ivan. “Sketchpad: A Man-Machine Graphical Communication System.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 109–126. Cambridge, MA: MIT Press, 2003.


The Nature of 3D – Lauren and Carson

In his video and throughout the readings, Alan Kay explains how user interface design is much more than making friendly interfaces or aesthetically pleasing designs for the computer screen like we see today. We take for granted the fact that user interfaces draw from the very roots of cognitive distribution and meaning making. In “Microelectronics and the Personal Computer,” Kay talks about two basic approaches to personal computing: “The first one which is analogous to musical improvisation, is exploratory: effects are caused in order to see what they are like and errors are tracked down, understood and fixed. The second, which resembles musical composition, calls for a great deal more of planning, generality, and structure.” This applies to Kay’s idea of utilizing doing, images, and symbols to build and learn. Lauren and I wanted to put Kay’s thoughts into a different lane: how can this way of thinking be used when talking about the natural sciences, more specifically environmental science or zooarchaeology?

E.O. Wilson, the father of biogeography, says in Letters to a Young Scientist, “Everyone sometimes daydreams like a scientist at one level or another … fantasies are the fountainhead of all creative thinking. The images evoked are at first vague. They may shift in form and fade in and out. They grow a bit firmer when sketched as diagrams on pads of paper, and they take on life as real examples are sought and found.” Dr. Wilson’s book goes on to say that all discoveries are first fantasies, that these fantasies are in many ways very visual, and that the process of making them is the foundation for new science. In this respect, the future of coding or designing in digital spaces is a new frontier for creative thinking. In simulations, there is also a making process that can lead to new questions and discovery. Today, the affordances of free and open-access 3D modeling software are changing the realm of discovery and exploration in science. Such software allows for the creation of something that one can then look at from many different angles, and in making comes method, through both process and completion.

Lauren

For example, I experimented with several 3D modeling software programs, including 123D Design and Meshmixer. In the process of making a 3D model of a sea anemone, questions arose about the structure and anatomy of the species itself. A 2D image of a white sea anemone provided the basis for the creation. In the first trial of making the anemone (Image 1), I turned the 3D object and noticed that my concept of the anemone still existed in a 2D space because I had only edited the piece on a flat plane.

[Image 1]

In the second trial (Image 2), I had to use my imagination and begin seeing the 2D photo as if it existed in 3D.

[Image 2]

Finally, the finished version (Image 3) attempted to look like the photograph from a flat front view while taking into consideration how each arm would be spaced from the others and how some sections were anatomically different from others.

[Image 3]

The playful nature I had to adopt for this creation aided my own understanding of the potential anatomy of the sea anemone. I had to edit and re-edit the piece to make sure it was physically possible but also resembled the picture I was given. In this context, we see making as method, for I began asking new and complex questions about both the anatomy of the animal and the affordances of the software program. While this was just a personal experiment exploring software programs, I learned a lot about how my visual and spatial brain operates. I think this is an exercise that could be beneficial for the development of “non-normal science” education. If ecologists and technologists are looking for a paradigm shift, it could be in their interest to allow more making and fantasy into curriculums and methods. Making, instead of just seeing an image of the anemone, caused me to ask more internal questions about how this animal might eat, grow, and breathe. The future of 3D modeling can combine the use of fantasy and new making in 123D Design and Meshmixer with software like 123D Catch, which can copy and recreate the exact dimensions of objects.

Carson

While I was at VCU, I worked in the Virtual Curation Lab. There we would scan, replicate, and print any kind of artifact we could get our hands on, including some faunal remains. I did lots of work with 3D modeling in zooarchaeology. I preferred working with animal remains because I felt that not only was I learning about the process of 3D modeling, I was also forced to learn the osteology of different North American animals. Sometimes I would print out the different structures and paint them to look like real bone (I will bring some of what I have in) to be used in small exhibits around campus. Other times I would play around and try to create a Frankenstein-like 3D model of whatever animal I was working on by taking bones from other similar animals (or the same species) to try to create a full skeleton. Then I would sometimes think about what this animal would look like if it were real. For example, I once put together almost a whole raccoon skeleton by filling in with opossum bones where material was missing. I named it “Raccopossum” and kind of pictured an all-grey raccoon with a long opossum tail. Doing this also forced me to ask myself questions like: Would Raccopossum be a marsupial? Would it be able to stand on its hind legs at times like raccoons can? Well, obviously it would be a good climber… and so on. I do not think this symbiotic process would have occurred without the availability of 3D modeling.



References:

Kay, Alan C. “Microelectronics and the Personal Computer.” Scientific American 237, no. 3 (September 1977): 230-44.

Kay, Alan, and Adele Goldberg. “Personal Dynamic Media” (1977). Excerpt from The New Media Reader, ed. Noah Wardrip-Fruin and Nick Montfort. Originally published in Computer 10, no. 3 (March 1977): 31–41.

Kay, Alan C. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox PARC, 1972.

Wilson, Edward O. Letters to a Young Scientist. New York: Liveright Corporation, a Division of W.W. Norton, 2013.

The Cognitive Leap of Faith (Ojas + Alex)

When analysing the process through which computing became the dominant metamedia platform for the symbolic and cognitive technologies we use every day, we identified two main concepts: permanent extendibility and extended cognition.

Looking at permanent extendibility, what stands out to us is the transience of digital media. Through computing, we are able to mimic the ephemerality of much of our cognitive process. For example, just as the concept of “dog” or “book” exists in a cognitively incorporeal space, the files we save, edit, delete, or otherwise alter on our computers are impermanent. This extendibility is an incredibly powerful tool, allowing for the multitudinous levels of abstraction necessary for modern cognitive and computational tasks. The Ship of Theseus paradox is an interesting thought experiment that we found pertinent to this particular concept: namely, what or where is the cultural artifact if it is permanently extendible? It’s one thing in the triadic model of semiosis to be dealing with physical, unchanging artifacts, but where do we place artifacts that are by nature always in flux?

It seems to us as though extended cognition is the guiding principle by which advances in computational and software interfaces evolve. Last week, Ojas had a blog post on the “phenomenological trickery” of the mouse, which was a great example of closing the cognitive-technological gap. It’s very easy to take these interfaces for granted, but when compared to older forms of computers, such as Babbage’s Analytical Engine, we really see how much more intuitive modern interfaces have become. These advances have led to a lowering of the threshold for computational learning and interaction, which was crucial for getting to the levels of widespread adoption we’re seeing now.

As far as technologies go, we believe that the innovations in computer networking born out of Xerox PARC and ARPA have been hugely influential in our modern computational landscape. This is especially clear when we consider that one of the major features of computer networking that made it commercially successful was e-mail (Campbell-Kelly et al., 284). Combining the affordances of word processing with computers networked across space allowed conversations to take place over the network. Of course, the lineage from Vannevar Bush’s Memex to Ivan Sutherland’s Sketchpad to Alan Kay’s Dynabook is well noted within the readings, but perhaps most importantly, Doug Engelbart’s GUI innovations were critical in bridging the cognitive-technological divide. Creating graphical interfaces that are intuitive and factor in the concept of affordance was an enormously important step in making computers effective symbolic representation tools.

When reading about Alan Kay and his Dynabook, we were amazed at how many of the concepts and innovations he developed have actually been implemented in our modern computational technology, most obviously the iPad. But there is also a fundamental distinction between Kay’s work and the technologies we have now. Kay designed the Dynabook to be used for “symmetric authoring and consuming”. The concept of sharing and the spirit of openness seem central to Kay’s design process. We surmised that this came from his desire for the technology to be used as a learning tool, or, as Kay says, to “qualitatively extend the notions of reading, writing, sharing, publishing, etc. of ideas”. Our modern computers are often designed to be inherently siloed devices, which most likely comes from the particular incentives of commercialization. So one could say a culture clash took place between the Kay/Xerox PARC/ARPA communities of innovation and the Microsoft/Apple corporations that took those inventions and made them available to the masses.

An interface feature we propose to realize Alan Kay’s vision for the Dynabook is the integration of image recognition into the cameras on our phones. Much as we use natural language processing to interact with computers through textual search engines and virtual assistants, an image recognition feature in cameras would allow us to interactively engage with our environments. For example, if someone were to use this feature on a piece in an art museum, they could scan the piece, and this would automatically hyperlink to relevant articles in art history and criticism, other museums the piece has been in, and so on. This could lend itself to an open-source, networked, encyclopedic database of image entries with which students actively engage by consuming, editing, and producing knowledge on the subject.
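To make the idea concrete, here is a minimal sketch in Python of how such a feature might be wired together; recognize_artwork and the LINKS table are hypothetical stand-ins for an image-recognition model and the networked database, not any real API:

```python
# Hypothetical sketch of the proposed museum feature: recognize a
# photographed artwork, then hyperlink to related resources.

LINKS = {
    # label -> curated resources (history, criticism, exhibitions, ...)
    "Mona Lisa": [
        "https://example.org/art-history/mona-lisa",
        "https://example.org/criticism/mona-lisa",
    ],
}

def recognize_artwork(image_bytes):
    """Stand-in for an image-recognition model that labels the photo."""
    return "Mona Lisa"  # a real model would classify the pixels

def scan_piece(image_bytes):
    """Camera pipeline: photo -> label -> hyperlinks."""
    label = recognize_artwork(image_bytes)
    return LINKS.get(label, [])

print(scan_piece(b"...camera frame..."))
```

The open, encyclopedic database we imagine would be a networked, user-editable version of the LINKS table, with students adding and revising entries.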

References

  1. Manovich, Lev. Software Takes Command, pp. 55–239 and Conclusion.
  2. Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014.
  3. Greelish, David. “An Interview with Computing Pioneer Alan Kay.” Time, 2013.
  4. Kay, Alan C. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox PARC, 1972.

Mediation and Transparency (Amanda and Katie)

These days, it’s easy to assume that we can do just about anything on our computers. Whether we’re working on a laptop, desktop, tablet, or even a smartphone, we are capable of listening to music, drawing and creating graphics, shooting and editing photographs, writing content, and much more. According to Alan Kay, computers are – and were – the first “metamedium,” consisting of media that either have already been invented or have yet to be invented (Manovich, 23).

Instead of drawing a graphic at a desk using pen and paper, we can use a stylus on a tablet and create the same image. There is the illusion that we’re interacting with a pen and piece of paper, but instead, the stylus is sending signals that the computer picks up on, and the pixels create the image that imitates the job a pen would perform. As Jay David Bolter and Richard Grusin mention in their book, Remediation: Understanding New Media, even ten years ago these metamedia did not exist; people saw computers as devices used exclusively as numerical engines (23), while individuals such as Alan Kay envisioned them being used as a much more basic, day-to-day communication platform that even children could utilize (Kay, 321).

Now, however, we think of computers in a whole new way – “…we now think of them also as devices for generating images, reworking photographs, holding videoconferences, and providing animation and special effects for film and television” (Bolter & Grusin, 23). As computer technology progresses, it appears that new media keep building upon each other, leading us back to the concept of metamedia. There is media within the first metamedium – the computer – creating a new dimension of metamedia that constantly develop, progress, and evolve; this causes the language that we use with the computer machine – the interface – to change, and we adapt with it.

As computer technology has evolved, we have made the transition from hardware to software. As Manovich explains, software re-adjusts and re-shapes everything that it is applied to, just like other technologies such as the printing press, the alphabet, and even the first computers. Software plays a vital role in shaping how we use and interact with our computers, and in turn, this relationship shapes the way that we contribute to our specific human culture (Manovich, 14-15).

An interesting concept that stems from much of this week’s readings is the idea of the socio-technical system, or the ways in which we are conditioned by the technologies that we interact with each day. While it appears that our computers can do everything, there’s still much room for progress, and Kay pointed that out when discussing his Dynabook concepts. Our devices come to us in what we think is the complete package – all of the hardware is put together, all of the software has already been downloaded, and all that we have to do is use the apps the way we’ve been conditioned to use them. However, no one is taught how to de-blackbox their device. As Kay states in his TIME Magazine interview, people – or children – cannot create apps for each other (Greelish, 2013). Every portable computer these days contains elements of the Dynabook idea, but they all lack the collaborative and inventive concepts that Kay and his team had hoped for in the 1970s (Greelish, 2013).

We wonder: why don’t computers come blank? Kay had imagined something more complex, and perhaps he thought that people would be able to adapt to an idea in which we could individually create our own software and apps. However, this involves a certain element of literacy and knowledge in the field of computing, which many people lack. Because computers are not sold as blank boxes to be filled in, there is no need for people to learn how to de-blackbox them. Children are not taught how to build computers or software, and thus we are stuck in a socio-technical system where we have only learned the basic symbolism behind computer interaction and use.

Computing can be understood as the intersection between human intention and the symbolic process. In this intersection, though, we often treat computer systems as black boxes. The knowledge we gain (and share) about our devices is limited by how we learn through an interface, so we are in some ways conditioned by the technology we use.

Moving forward, interfaces like voice and touch recognition, along with ones that foster education, knowledge sharing and procedural literacy, could aid in realizing some of Kay’s ideas surrounding software that gives “life to the user’s ideas” (Kay, 234).

In connection with the idea of sharing/collaboration, Kay discusses the “helpful agent,” which hasn’t been realized yet (Greelish, 2013). For example, we cannot create an app and share it directly with friends — we have to submit it to the App Store, wait for formal approval and then release it via Apple. In other words, we have not achieved “symmetric authoring and consuming” of computer stored data (Greelish, 2013).

In short, we do know that the building blocks of a computer system comprise a metamedium that houses and manipulates different data, using certain techniques to create and store that data (Manovich, 110).

As computer systems continue to evolve, it will be interesting to see how the relationship between hardware and software develops as well as the level of transparency that programmers and users seek when using devices that typically mediate how we understand, store, retrieve and manipulate information.

References

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.

Kay, Alan C. “Xerox PARC, Alan Kay, the Dynabook/Metamedium Concept, and Possibilities for ‘Personal Computers.’”

Kay, Alan C. “A Personal Computer for Children of All Ages.” Palo Alto, CA: Xerox PARC, 1972.

Kay, Alan C. “Microelectronics and the Personal Computer.” Scientific American 237, no. 3 (September 1977): 230–44.

Lampson, Butler. Memo on the Xerox Alto computer, the first “personal” computer implementing a GUI, windows, and mouse system and networked via Ethernet, 1972.

Manovich, Lev. Software Takes Command, pp. 55–239 and Conclusion.

On the Road to Metamedia – Jieshu & Roxy

According to the readings and our group discussion, we identified some concepts and technologies that enabled modern computing devices to become mediating, mediated, and metamedia platforms.

Concepts

GUI: The concept of GUI developed by Engelbart, Butler Lampson[i], Kay, and others allows everyone to easily navigate computing systems, thereby mediating other media.

OOP: Object-oriented programming (OOP) is a programming language model organized around objects rather than “actions,” and around data rather than logic. It is this concept that lets a program grow in size and complexity while its individual parts stay short and simple.
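To make the idea concrete, here is a minimal sketch (in Python rather than Smalltalk, purely for readability): each media type is an object that answers the same message, so the program grows by adding new classes rather than by rewriting existing logic.

```python
# OOP in miniature: a program organized around objects rather than
# "actions." New media types are added as new classes; the client
# code at the bottom never has to change.

class MediaObject:
    """Base class: anything that can live in a document."""
    def render(self):
        raise NotImplementedError

class Text(MediaObject):
    def __init__(self, content):
        self.content = content
    def render(self):
        return f"[text] {self.content}"

class Image(MediaObject):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def render(self):
        return f"[image] {self.width}x{self.height} pixels"

document = [Text("Hello, Dynabook"), Image(640, 480)]
for item in document:  # one message, many kinds of object
    print(item.render())
```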

Simulation: Another concept that makes computing devices a platform of metamedia is the concept of simulation. As Manovich put it, “Alan Turing theoretically defined a computer as a machine that can simulate a very large class of other machines, and it is this simulation ability that is largely responsible for the proliferation of computers in modern society[ii].”

Supporting Technologies

As a metamedia platform, a computer can carry out the whole procedure of inputting, editing, and outputting. This procedure requires supporting technologies, such as transistors, the Internet, and the technologies of digitization, sampling, compression, software, and display.

Sampling and Digitization: Technologies such as the Fourier transform, which “decomposes a function of time into the frequencies that make it up[iii],” enable us to convert between analog and digital signals. This ability allows for easy sampling, digitization, manipulation, storage, and transfer of media information with high fidelity. For example, we can digitally and discretely capture an image from a paper magazine using a scanner that assigns three numbers representing RGB values to each pixel, so that the image can be stored on a hard disk, represented on a display screen, and transferred to another computer. The sampling process is not perfect, for the scanner has a limited resolution; information beyond the highest resolution is lost. But the result preserves meanings that can be understood by human beings. This kind of sampling and digitization enables a computer to become a platform for all kinds of media, transforming it from a “universal Turing machine” into a “universal media machine”[ii].
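As a rough sketch of the sampling step (with made-up numbers, in Python), the snippet below measures a continuous sine wave at fixed intervals and quantizes each measurement to 8 bits, which is the same move a scanner makes when it assigns discrete RGB values to each pixel:

```python
import math

# Sample a continuous 5 Hz sine wave at 50 samples per second, then
# quantize each sample to 8 bits (0-255), roughly what an analog-to-
# digital converter does. The sampling rate must exceed twice the
# highest frequency in the signal, or information is lost (aliasing).
SIGNAL_HZ = 5
SAMPLE_RATE = 50
DURATION = 0.2  # seconds

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE                            # sampling instant
    value = math.sin(2 * math.pi * SIGNAL_HZ * t)  # the "analog" signal
    samples.append(round((value + 1) / 2 * 255))   # map [-1, 1] to 0..255

print(samples)  # a discrete, digital stand-in for the analog wave
```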

Compression: File compression can reduce storage space and transmission time. One way compression works is by taking advantage of redundancies: “most computers represent text with fixed-length codes. These files can often be shortened by half by finding repeating patterns and replacing them with shorter codes[iv]”. If you store the same photo in both JPG and BMP formats, you may find the BMP is about sixteen times the size of the JPG. That means you can store roughly sixteen times as many JPG photos, and uploading them to the internet takes only about 1/16 of the time. Compression thus lets us handle media files far more easily on computing devices, helping turn a computing device into a metamedium.
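A toy illustration of the repeating-pattern idea is run-length encoding, sketched below in Python. (This is one of the simplest compression schemes, not what JPEG actually uses; JPEG combines transform coding, quantization, and entropy coding.)

```python
def rle_encode(data):
    """Run-length encode a string: 'wwwbb' -> 'w3b2'."""
    if not data:
        return ""
    out, current, count = [], data[0], 1
    for ch in data[1:]:
        if ch == current:
            count += 1
        else:
            out.append(f"{current}{count}")
            current, count = ch, 1
    out.append(f"{current}{count}")
    return "".join(out)

# Redundant data compresses well; varied data can even expand.
print(rle_encode("wwwwwwbbbwww"))  # w6b3w3 (12 characters -> 6)
print(rle_encode("abcdef"))        # a1b1c1d1e1f1 (6 characters -> 12!)
```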

Storage: From tapes to disks, to flash memory, and to cloud storage, storage technologies help computing devices store more files and allow files in different formats to be kept and displayed on one device at the same time.

Software and Algorithms: In Software Takes Command, Manovich stressed the importance of software, which in his opinion is where the “newness” of new media lies[ii]. With software, we can easily manipulate existing media, and new properties can easily be added to existing media. iMovie, Word, Photoshop, Audition, CAD, 3ds Max… such software enables average people to create media content in ways that were accessible only to professional users in the past. In addition, new software and new tools are constantly being created. For example, with C++ and other programming languages, game designers have produced many computer games, a new genre of software, and Reddit, a social news aggregation website used to share media, was programmed in Python. Thus, computers become what Kay and Goldberg called “metamedia”[ii].

Transistors: The constant miniaturization of transistors over the past decades has exponentially enhanced computing power, as well as the capacity to deal with media content, in line with Moore’s Law. Ten years ago, exporting a twenty-minute 720p video file cost my desktop computer two hours. Right now, my MacBook can easily edit 1080p video in real time, largely because of the increase in the computing power of computer chips.

Internet: Since Engelbart’s oNLine System (NLS), we have achieved huge progress in linking computing devices together. With the rise of the mobile internet, we are exposed to an increasingly ubiquitous computing environment. We constantly edit and share media content on the internet, sending pictures to friends and sharing texts and music on our social media every day. Recently in China, online live video broadcasting has become very popular; people are so fascinated by sharing their own everyday lives and watching other people’s that some popular hosts are now valued at as much as five million dollars.

Display: We now have many display technologies that fulfill different demands. Display resolutions have increased greatly, while monitors have become smaller and thinner. Along with increasingly powerful graphics processing units (GPUs), this trend enables computing devices to represent media content with higher and higher fidelity, allowing for more and more sophisticated media manipulation. Here we’d like to emphasize two display technologies.

  1. The first is electronic ink (E Ink), used in the Amazon Kindle. We think E Ink technology meets the requirements that Alan Kay envisioned for his Dynabook[v]. He suggested that, in order to use the Dynabook anywhere, a CRT was not preferred; he envisaged a display “technology that requires power only for state changing, not for viewing—i.e. can be read in ambient light.” E Ink definitely meets this requirement, saving power and tremendously extending the time between battery charges. The use of E Ink in the Kindle remediates the functions of books.
  2. The second is touch screen technology, whose lineage goes back to Ivan Sutherland’s Sketchpad[vi] and which can easily “engage the users in a two-way conversation,” as Kay envisioned[vii]. Touch screens allow us to interact directly with computing systems and easily manipulate media files.

Unimplemented Dream

One of Alan Kay’s design concepts for access to software that has not been implemented is that everyone should learn how to program.

According to Alan Kay, a programming environment, with example programs and already-written general tools, can help people make their own creative tools. In his prediction, different people could use it as a mold and channel the power of programming to their own needs (Software Takes Command). Learning to program can also help people build computational thinking, also known as complex problem-solving skill, since a computing language is a procedural language. This mission has not been achieved yet: nowadays, people still see programming as a task that only experts can handle.

What We Want

In our discussion, we imagined many interfaces going beyond the commercial products we use today, including Augmented Reality (AR) like Magic Leap and Microsoft’s HoloLens, Virtual Reality like the Oculus Rift, MIT’s Reality Editor, eye-tracking interfaces that could be used by ALS patients, maps projected onto windscreens, Ray Kurzweil’s mind-uploading nanobots, and virtual assistants that understand natural language, such as Siri and Cortana. Yet while busily taking notes on our ideas, we couldn’t find a perfect note application with which we could not only type words but also draw sketches, build 3D models, and record and edit audio and video clips. In other words, there is no application that can deal with all media formats. So here we describe an interface for such a note application. It’s much like the system that Engelbart presented in the “mother of all demos” in 1968[viii].

We totally understand that modern software is developed by different companies with financial interests, which therefore need to close their systems in order to lock users in. For example, a Photoshop PSD file cannot be read and edited in CAD software. The interface we envision could dissolve the boundaries between different software, enabling us to easily manipulate any category of media, combining flow charts, pictures, texts, sound, and other media together without switching software.

For example, when we are taking notes in our CCTP-711 class, we can type Professor Irvine’s words into the interface. We can also record his voice and have it transcribed into words by a built-in speech recognition module. When he talks about the “mother of all demos” video, we don’t need to minimize the app window and watch it on YouTube in a web browser; instead, we can insert the video into our notes directly. When he talks about some ancient semiotic artifact, we can easily insert its 3D model into the editing area without installing any cumbersome 3D software like CAD. These media objects can be edited and rearranged at any time afterward. In a nutshell, it’s a knowledge navigation system.
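One possible shape for such a note, sketched below in Python with hypothetical names, is an ordered list of typed media blocks, so that text, video, and 3D models can live together in one editable document:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: a note is an ordered list of typed media blocks.

@dataclass
class Block:
    kind: str      # "text", "audio", "video", "model3d", ...
    payload: dict  # kind-specific data

@dataclass
class Note:
    title: str
    blocks: List[Block] = field(default_factory=list)

    def insert(self, kind, **payload):
        """Append any media type without switching applications."""
        self.blocks.append(Block(kind, payload))

note = Note("CCTP-711, Week 11")
note.insert("text", content="Notes on the 'mother of all demos'")
note.insert("video", url="https://www.youtube.com/watch?v=...", start=120)
note.insert("model3d", mesh_file="artifact.obj", rotation=(0, 90, 0))
print([b.kind for b in note.blocks])  # ['text', 'video', 'model3d']
```

The hard part, of course, is not the container but the per-format editors and the open formats behind them, which is where the following requirements come in.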

To achieve this interface, several things need to be done. First, media software companies should open their source code, or at least provide more open APIs to developers. Second, more computing power is needed to process so many media types at the same time. Third, we also need cloud computing and fast networks to store and retrieve so much information quickly. Fourth, machine intelligence is needed to process natural language and respond in a more natural way.


References

[i] Lampson, Butler. 1972. “GUI Designer at Xerox PARC.”

[ii] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume 5. New York; London: Bloomsbury.

[iii] “Fourier Transform.” 2016. Wikipedia. https://en.wikipedia.org/w/index.php?title=Fourier_transform&oldid=748484336.

[iv] Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.

[v] Kay, Alan. 1972. “A Personal Computer for Children of All Ages.” Palo Alto, Xerox PARC.

[vi] Sutherland, Ivan. 1963. “Sketchpad: A Man-Machine Graphical Communication System.”

[vii] Kay, Alan. 1977. “Microelectronics and the Personal Computer.” Scientific American 237 (3): 230–44.

[viii] “CHM Fellow Douglas C. Engelbart | Computer History Museum.” 2016. Accessed October 31. http://www.computerhistory.org/atchm/chm-fellow-douglas-c-engelbart/.

Reduce, Reuse, Remediate – Joe and Jameson

A particularly interesting quote from these readings is found in Bolter and Grusin’s piece on remediation: “In addressing our culture’s contradictory imperatives for immediacy and hypermediacy, this film demonstrates what we call a double logic of remediation. Our culture wants both to multiply its media and to erase all traces of mediation: ideally, it wants to erase its media in the very act of multiplying them” (Bolter and Grusin).

This is a fascinating concept, and even more relevant today as we accelerate towards a world that is, in a sense, at the same time both media-ful and media-less. Not only are different mediums combined (and “remediated”) in novel ways—text, video, audio, imagery, etc., and the new forms that emerge from their combinations—but they are also becoming even more a part of our reality. They are no longer seen as mediating forces for and to the world, but exist as forces for and to the world. They are both everywhere and nowhere. They are not merely tools we use to express, capture, or understand something else; they are also the something else. They are ubiquitous yet invisible. This is the double logic of remediation: we want to access the “unmediated” meaning being represented “behind” the medium (the “object,” in Peircean terms), without the “mediation” in between us and the object. What this means is that we get an explosion of media while at the same time trying to limit any evidence that media exists.

Alan Kay discussed the new power that the DynaBook (and computers in general) could have on our education. His view of technology as a commodity was that the DynaBook could be given away for free, with only its content being sold. Today we see the exact opposite, as our systems come with a hefty price but are pre-loaded with free software. It’s the software that enables remediation: we can represent endlessly, convert one message into another, and represent that conversion as a unique message. What we cannot do, however, is properly denote the links in this trail of representation. For instance, to make a GIF is to take a sequence from a recording out of context and loop it, repurposing the old sequence with a new meaning. If you do not recognize the context of the GIF, you cannot comprehend the full message. A small tweak to the GIF could enable a user to click on it and be brought back to the full sequence or source material. We already have this capability in research, where we mark our cognitive associations with citations so that anyone reading our work can access the papers that influenced us. This type of linkage would be more difficult for music sampling: you can’t click on a sound wave, and digital streaming services do not show producer credits. Ultimately, it will be interesting to see whether new media built for short viewing will be able to credit sources, as original content becomes increasingly repurposed and devalued.
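As a hedged sketch of that “small tweak” (hypothetical fields, in Python; a real GIF would need a container or sidecar format to carry such data), a derived clip could travel with a machine-readable pointer back to its source:

```python
from dataclasses import dataclass

# Hypothetical provenance record: a derived clip (GIF, sample, quote)
# carries a pointer back to the material it was cut from.

@dataclass
class Provenance:
    source_url: str       # where the original work lives
    start_seconds: float  # offset of the excerpt in the source
    end_seconds: float
    creator: str          # credit for the original author

@dataclass
class Clip:
    data: bytes  # the looped frames themselves
    provenance: Provenance

gif = Clip(
    data=b"...",
    provenance=Provenance(
        source_url="https://example.org/films/some-film",
        start_seconds=3604.0,
        end_seconds=3607.5,
        creator="Original studio",
    ),
)
# A viewer could follow the pointer to restore the stripped context:
print(gif.provenance.source_url)
```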

Kay, Alan. “A Personal Computer for Children of All Ages.” Palo Alto: Xerox PARC, 1972.

Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.

Bush, Vannevar. “As We May Think.” The Atlantic, July 1945.

Everything starts with Smalltalk – Yasheng She / Ruizhong Li

TEAM: BEAUTY♡ PRETTY☆ SOCIETY♀ (Ruizhong & Yasheng)

I (Yasheng) just started learning how to do basic coding on Arduino and, to my surprise, the learning process was actually painless.

Here is an example of why learning to code is easy:

[Image: Arduino IDE showing example code, with explanatory comments highlighted in a red block]

Arduino’s software comes with a lot of examples that you can just open up to practice with, and when you feel confident in what you are learning, you can modify the code to make a new project. The information in the red block on the right of the graphic shows how Arduino’s programmers explain how to utilize different classes and their instances. It feels like learning a new language in just minutes, and I did not even have to take exams. My experience is consistent with the essence of Kay’s vision: “to provide users with a programming environment, examples of programs, and already written general tools so the users will be able to make their own creative tools. (32)” The reason I feel the basics of coding are easy to acquire is that coding is not a brand-new language; rather, it uses “already existing representational formats as their building blocks, while adding many new previously nonexistent properties. (23)” This makes us wonder why we didn’t learn computing this way from the beginning: instead of treating the computer like a black box, we should have studied how programming works at a fundamental level. The interface of Arduino helps me interact with the hardware directly, and we can see evidence of such a system in Kay, Nelson, and others’ theoretical frameworks. We take advantage of Kay, Nelson, and others’ systems of interaction, meta-systems, to “support the processes of thinking, discovery, decision making, and creative expression. (53)”

I (Ruizhong) have been learning JavaScript (p5.js) for about one year. What impressed me is the “all-in-one” style of the language’s website: the programming environment is easy to manipulate. I started to learn the syntax of the language by playing with it. With a bunch of examples, I could adjust some parameters in the code and simultaneously see what changes happened in the output. It was not necessary for me to start from drawing basic shapes before building complicated structured images; I could start from a macro view of how all the code functions and acquire knowledge of the syntax through practice. With libraries imported into the code, I could also make use of “mature” building blocks, which function as a whole with only a few lines of code, to create my personalized meaning tool.

The experience of learning p5.js contrasts with my experience learning Python. With a tutor teaching us in a traditional way, it feels a little “overcautious” as we move forward to each next stage. With a problem proposed at the beginning, we practice the routine of solving it in Python, from drawing a flowchart to writing code. We cannot directly see the relation between the code and the output, and I have no idea what the code should look like. It seems to me that the code stays at a conceptual level and can never be brought into reality. For me, then, the advantage of GUIs and software is underscored by this contrast in my learning experiences.

It is no longer surprising to us how much we can do using computers, yet it is surprising to find out how the ideas we take for granted were formulated by people like Alan Kay. We are especially fascinated by Kay’s Smalltalk. Smalltalk, as an early stage of all object-oriented programming, is remarkable in the sense that it simplifies the thought process into small building blocks, and creating meaning with these building blocks feels just like using our brain – only externally. Thanks to technological advancement, we can now do almost everything on a small laptop, from drawing to making music; in Alan Kay’s terms, it is a “personal dynamic media” system (a metamedium) that grants us endless possibilities.

No wonder people call Alan Kay a visionary: he was correct in thinking of the computer as a new-media generation engine at a time when the computer metamedium was only coming into existence.

According to Alan Kay, “symmetric authoring and consuming is quite lacking in today’s computing for general public.” In Kay’s vision, people would be able to create, manipulate, sequence, and share media across the world with software. In terms of physical form, modern technologies like notebooks, tablets, and smartphones have fulfilled his expectations. However, the problem lies in the lack of open source. Open source is not a new concept, yet it still receives little attention compared to the massive success of smartphones, PCs, and other products. Kay’s vision is closer to the Linux system, manifested in the Raspberry Pi, Arduino, and other small appliances. These appliances all have open-source libraries and easy-to-understand interfaces, yet they require a certain level of literacy for users to take full advantage of their usability. Smartphones and tablets, on the other hand, are made more and more like black boxes, allowing people to communicate and create only at a surface level. If there were a way to combine the flexibility of the Raspberry Pi with the affordances of the smartphone, a Kay-styled Dynabook could be possible.

Speaking of flexibility, Kay’s vision is still not realized. We still have the overarching notion that technology does not evolve with us but instead follows a standard practice. Kay maintains that “There is also the QWERTY phenomenon, where a good or bad idea becomes really bad and sticks because it is ingrained in usage.” So in terms of usability design, a truly “dynamic” interface should grant users the freedom to truly personalize their experience with their machines. Furthermore, technological literacy should be reduced to teaching people “building blocks” instead of teaching them to “press to unlock” (formerly known as “swipe to unlock”).

  • Manovich, Lev. Software Takes Command, pp. 55–239 and Conclusion.
  • Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: The MIT Press, 2000.
  • Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014.
  • Kay, Alan C. “Microelectronics and the Personal Computer.” Scientific American 237, no. 3 (September 1977): 230–44.
  • Kay, Alan, and Adele Goldberg. “Personal Dynamic Media” (1977). Excerpt from The New Media Reader, ed. Noah Wardrip-Fruin and Nick Montfort (Cambridge, MA: The MIT Press, 2003), 393–404. Originally published in Computer 10(3): 31–41, March 1977.
  • Greelish, David. Interview with computing pioneer Alan Kay. Time (April 2013).
  • Lampson, Butler. Memo on the Xerox Alto computer, the first “personal” computer implementing a GUI, windows, and mouse system and networked via Ethernet, 1972.