Category Archives: Week 10

Metamedium concepts as common practices

One of the most mind-boggling aspects of studying the history of computing and networking is realizing that the concepts we discuss and live with today were thought of so long ago, before any of the technologies could be conceived as such. The fact that ideas about what could be done with computers, and about what can be done with media, were conceived at such a level of abstraction so long ago helps explain why they translate into so many possible actions. Defining computers as metamedia, Alan Kay explains that this means their content is “a wide range of already-existing and not-yet-invented media” (Manovich, 2012, p.44). The possibilities are open-ended because the level of abstraction at which media are treated allows creativity to take different forms, though this of course depends on other factors.
As Manovich explores in his study of “media after software,” that is, of how the use of software changed the way media are thought of, created, treated, and shared today, the fact that the computer evolved as a metamedium, allowing for the simulation of previous media but also for the creation of new media, was not a coincidence or a predetermined path. Creators sought this type of development over the years, as users interacted with these systems individually, but also collectively, as networking with others, sharing, and co-creating became increasingly possible. In writing his book, Manovich provokes the reader not only to inquire into the history of media and how we understand what a medium, and therefore a metamedium, is today, but also to probe the limits of the metamedium of today.
The examples he provides, however, to show how this metamedium is used through abstract conceptualizations that allow us to treat media diversely, also shed light on places where caution is needed. A good example is data visualization, a term that has become popular over the past few years as the use of data for evidence-based storytelling became prominent in organizational and news outreach. This concept is increasingly heard among wider audiences.

Robotics as evidence of Human-Computer Interaction

Galib Abbaszade

Throughout human history, people have tried to communicate across great distances: sending letters with symbols or words, artifacts and signs, or even the light or smoke of balefires. As scientific and technical progress advanced, the symbols of communication also changed, to express the complexity of newly emerging and more sophisticated ideas and thoughts. The digitization of words arguably began with Morse's invention of sending dots and dashes as electric signals to remote receivers. From that period, communication stepped into a new era of digital-symbolic cognition. In the last decades of the 20th century, digitization algorithms for audio and video signals were invented, giving us a tremendous ability to hear and see across great distances.

With that, we learned how to transmit across great distances not only thoughts, but emotions and feelings. We designed standards and separated commands to pass information from source to receiver (1). This opened opportunities for many areas of life and professional activity, and became the source of new digital platforms for both entertainment and professional fields. The digitization of audio and video files, and analog-to-digital (and vice versa) conversion, made possible the development of fields such as arts and education, communication and media, and economic and social activities. Hence, the phenomenon of new media emerged, with many characteristics that distinguish it from conventional media artifacts such as television, feature films, books, and magazines (2). However, pictures and sounds, being just mathematical algorithms, are not produced by the computer itself; they are used to transmit images and audio across distance through the complex assembly of “black box” devices inside computers (3).

In fact, this changes all of human life, but it brings challenges as well. Along with freedom of expression, it may damage the privacy of personal information and ways of life (4). On the other hand, using cameras for security purposes, and even connecting them to online news streams, makes our lives more secure and personal property safer.

To my mind, the apotheosis of technical progress and the digitization of audio and video is the emergence of robots that can hear and speak human languages, see, and perform human activities, thereby becoming artifacts of Human-Computer Interaction (HCI) (5). They accumulate the findings and artifacts of scientific-technical progress and become not just moving images producing sounds: they can act in real time alongside us, understand us, and be useful for multiple purposes. In addition, they have greater capacity for active memory, can easily speak many languages, perform certain commands more precisely than human beings, and react faster to find requested information through search abilities connected to Internet databases. Copying human behavior and reactions, they can be useful as babysitters as well as police assistants apprehending criminals. In this line of scientific-technical progress, “researchers have self-identified as exploring the materiality of digital information” (6). Surprisingly, modern robots are taught to produce material value and create emotions, by singing songs or drawing pictures, thereby stepping into a new era of digitized audio and video produced by artificially invented “creators.”

_______________

(1) Lev Manovich, The Language of New Media, MIT Press, 2001, p. 29.

(2) Lev Manovich, in The New Media Reader, MIT Press, p. 17.

(3) Hal Abelson, Ken Ledeen, and Harry Lewis, Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion, Pearson Education Inc., 2008, p. 73.

(4) Hal Abelson, Ken Ledeen, and Harry Lewis, Blown to Bits, p. 235.

(5) Lev Manovich, in The New Media Reader, MIT Press, p. 16.

(6) Jean-François Blanchette, “A Material History of Bits,” University of California, Los Angeles, p. 1043.

The more hybrid new media are, the more disruptive they tend to be

In 2013, UNESCO launched its “Policy Guidelines for Mobile Learning,” which define mobile learning as “the use of mobile technology, either alone or in combination with other information and communication technology (ICT), to enable learning anytime and anywhere.” The guidelines continue: “Learning can unfold in a variety of ways: people can use mobile devices to access educational resources, connect with others, or create content, both inside and outside classrooms.” (p. 6, emphasis added).

Interestingly, this concept corresponds closely with Alan Kay’s personal computer prototype, the Dynabook. More than 40 years ago, when smartphones and tablets did not even exist, the ideas underlined above had already been conceptualized. First, the main affordances of the mobile technologies we have nowadays are the possibility of using them “anytime, anywhere as [users] may wish” (Kay, 1972, p. 3). Second, the computer would be “more than a tool” (idem); it would be something like an “active book” (p. 1). Third, imagining some of the possibilities for the new device, Kay predicted: “Although it can be used to communicate with others through the ‘knowledge utilities’ of the future such as a school “library” (…) we think that a large fraction of its use will involve reflexive communication of the owner with himself (…)” (p. 3). Finally, Kay and his team imagined that personal computers would involve “symmetric authoring and consuming,” the creation of content by students to which UNESCO and its followers aspire.

I already discussed the intellectual property constraints on expanding the metamedium potential of current devices and software in my previous post. I would like to spend some time now reflecting on the continuum vs. disruptive characteristics of what we now have available in terms of hardware and software, compared with what was imagined decades ago.

Kay’s device drawings from 1972 resemble calculators with bigger screens and keyboards. Nowadays, some models of smartphones keep such characteristics. The BlackBerry is one of them, sold for around 200 dollars.

[Image: Alan Kay’s 1972 sketch of the Dynabook]

Source: http://techland.time.com/2013/04/02/an-interview-with-computing-pioneer-alan-kay/

 

[Image: a BlackBerry smartphone with a physical keyboard]

You can buy it here:

https://www.bhphotovideo.com/bnh/controller/home?O=&sku=1289656&gclid=CPKbqun-m9ACFU5MDQodT9EECA&is=REG&ap=y&m=Y&c3api=1876%2C92051677682%2C&A=details&Q=

Nowadays, this kind of smartphone is not the most common, though. The keyboard, which used to be hardware, has become software, accessed through a larger touch screen. Obviously, in the BlackBerry model there is software to interpret the physical keyboard, but it is “grey software” (Manovich, 2013, p. 21), not visible to users as it is in the most recent models that have digital keyboards (e.g., iPhones and others).

The migration from hardware to software brings the possibility of unlimited keyboard keys, which, in my opinion, has not yet been well explored by designers. Some affordances of the digital keyboard are just copied from the physical one, as when you press a key for a longer time and see more options. In addition, keyboards could have other layers of keys, and more configurable keys that let the user decide which punctuation marks, letters, or symbols they contain. For elderly people, they should also have a configurable response level depending on how strongly a key is pressed. Many of them, including my parents, feel more comfortable putting more force into their touch, which generates a lot of undesirable responses from the software (e.g., repeated numbers on the screen). The possibility of calibrating the software’s response to the intensity of the user’s touch would help them a lot.
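To make the idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (real mobile keyboards expose different, platform-specific touch APIs); it only shows the calibration logic: a key fires once per new contact, and only above a user-configurable pressure threshold.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TouchSample:
        key: str               # which on-screen key was touched
        pressure: float        # normalized 0.0-1.0 (assumed sensor output)
        is_new_contact: bool   # True only at the moment the finger lands

    class CalibratedKeyboard:
        def __init__(self, pressure_threshold: float = 0.3):
            # Heavy-handed users could raise this threshold in settings.
            self.pressure_threshold = pressure_threshold

        def handle(self, sample: TouchSample) -> Optional[str]:
            # Emit a character only for a new contact above the threshold,
            # so holding the key or pressing harder never repeats it.
            if sample.is_new_contact and sample.pressure >= self.pressure_threshold:
                return sample.key
            return None

    kb = CalibratedKeyboard(pressure_threshold=0.6)  # calibrated for a firm touch
    print(kb.handle(TouchSample("7", 0.9, True)))    # 7 (the intended press)
    print(kb.handle(TouchSample("7", 0.9, False)))   # None (no repeated digit)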

I understand that the more hybrid (Manovich, 2013) media are, the more disruptive they tend to be. For the author, “in hybrid media the languages of previously distinct media come together. They exchange properties, create new structures, and interact on the deepest levels.” (p. 169). If a specific metamedium intends to “remediate” (p. 59) older ones, such as an old map book turned into a digital map, it is easier to see the continuum between them. Instead, when the digital map acquires other features and affordances, allowing the user to avoid traffic or checkpoints where police officers stop young people after midnight to check their alcohol level, it becomes more disruptive. In my hometown, São Paulo, a very crowded city, many taxi drivers have become heavy users of apps such as Waze during rush hours. They don’t need to turn on the radio to hear what other drivers are sharing about the traffic; they trust the app. Similarly, young people have become more dependent on apps that tell them in advance where police checkpoints are on their way home, complementing the online communities they have created to alert and notify members about the same issue.

Thinking about continuum vs. disruptive tasks regarding photographs: while taking thousands of pictures with smartphones represents more of a continuum with previous non-digital tasks, the possibility of searching for photographs on Google by uploading a similar picture is very disruptive. The search for photos through #hashtags, as allowed by Instagram, is also very powerful, an affordance that was unimaginable a few years ago. However, I find the domestic storage of digital photos very similar to the use of the old physical boxes brought to the living room when family was visiting us, and I don’t know the reason for that.

In this way, the more hybrid and disruptive photo apps, digital books and other kinds of software become, the more changes we see in how we experience the world.

Digitization – changing people and media

Digitization is a perfect summary of what traditional media are becoming in the modern information age, with the development of powerful CPUs and GPUs, high-resolution screens, and more user-friendly interfaces. Just as Alan Kay suggests, a digitized medium can be a metamedium, with the ability to re-explain and reproduce itself.

1. Why do people need digitization?
I’m sure that more and more people choose to read words on a screen not merely out of curiosity. Instead, digitized media attract people because their properties let people absorb information easily. Here are some properties of digitized media that make us feel comfortable:

  • Coherent: Whether a simple combination or a re-explanation, digitization is essentially a physical process. Even the most complex digitization algorithm is executed by silicon-based chips, and its job is to restore the original content on screen. We are still using English words and grammar when we begin to use digital equipment, and a picture does not change into another one when it is posted online. Thus, I cannot find many reasons for people not to accept digitization.
  • Informative: This one is controversial. Some people claim that digitized media cram too much noise, or junk information, into us, and that is true: digitization means information can be transmitted at lower cost and higher speed, inevitably producing useless information. However, digitization also makes our media more informative by presenting us with more dimensions of information. I watched the American presidential election last night, and a digitized map conveniently showed me the situation in every county.
  • Visualized: Digitization means the possibility of visualized, interactive user interfaces, which can help realize the “better democracy” mentioned by Manovich. Books for children always contain more pictures than words, and experienced presenters never force audiences to read long paragraphs. For children and adults alike, images are more understandable than words. Graphical programming is an application of this idea: a digitized interface allows people to build logical frames by dragging and combining modular logic blocks.


Blockly – a visual programming interface by Google

2. What does digitization mean to media?
Digitization technology lowers the threshold for creating new artistic works. In the past, people relied on physical tools to create music, movies, and other artistic works. For example, classical musicians could hardly create a perfect piece of music without testing it on a piano or violin. However, digitized tools like Adobe Audition or FL Studio allow creators to view the details of every track, test the performance with computer-generated audio signals, and easily mix together audio effects that imitate real instruments. A modern music creator may not even know how to play an instrument. Admittedly, traditional music played by large-scale orchestras still represents the highest achievement of human beings in the field of music, but we need to pay attention to the fact that new genres like electronic music have come to rule the American popular music market. On YouTube, famous electronic producers like Alan Walker and TheFatRat have earned over one hundred million hits. Composition is no longer a game for the minority, but a skill, like programming or graphics editing, that can be taught through online videos.


Comparison of traditional staff notation with digital music editing software

Moreover, digitization technology makes hybrid media possible: not only the mechanical combination of traditional media forms but the re-explanation of traditional media. Just think about the starting age of movies, the age of silent film. To provide voice, early movies even needed dubbers speaking directly to audiences. Undoubtedly, such a mechanical combination can distract audiences from the movie itself. Modern Hollywood movies have of course solved the problem of voice; what’s more, modern movies use special effects, 3D technology, and virtual reality technology to give audiences totally different experiences.

Another form of media I want to mention is the video game. A few years ago, people criticized video games as meaningless things made only for amusement, but few will say so once they know what modern video games are like. Modern video games integrate game, movie, novel, and music, creating a new form of media that overturns old understandings of media. An obvious effect is that the boundary between different media has become less clear. Video games such as Heavy Rain, Beyond: Two Souls, and Until Dawn only require players to press certain keys at turning points of the plot to make choices; beyond that, all you need to do is sit on the sofa and watch the whole story. It is really hard to tell whether they are games or not. Some people have come up with the term “interactive movie” for those games, and I guess maybe one day it will become an independent form of media.

Digitization

Chen Shen

This week’s readings focus on digitization, so I’ll try to use the knowledge from the readings to walk through the concept and applications of digitization.

First of all, what is digitization? It is the process of using digital devices to represent objects or signals, be it text, image, audio, or other forms of information. As the suffix in “digitize” suggests, this maps onto a worldwide trend to transform all existing information and media into digital form.

So why do we do this? There are many reasons, rooted in the innate defects of analog signals. For example, they are hard to transmit and operate on, impossible to copy without information loss, they degrade over time, and they are usually more expensive to store than their digital versions.

Then how do we digitize a signal? If the object is not time-based, we assign digital numbers to all the possible variations in the format and then present them spatially, as the original signal is arranged spatially. If the object is time-based, like audio, we divide the signal up into fixed intervals, measure the properties of the signal in each time segment, and use digital numbers to represent the measurements. By doing this, the whole time sequence is transformed into linear segments with digital numbers representing analog properties.
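As an illustration only (the function name and parameters here are mine, not from the readings), this small Python sketch follows exactly that recipe for a time-based signal: sample at fixed intervals, measure, and map each measurement onto a discrete integer.

    import math

    def digitize(signal, duration_s, sample_rate_hz, levels=256):
        """Sample signal(t), which returns values in [-1, 1], at a fixed
        rate, and quantize each sample to one of `levels` integer steps."""
        samples = []
        for i in range(int(duration_s * sample_rate_hz)):
            t = i / sample_rate_hz        # fixed time interval
            value = signal(t)             # measure the property at time t
            samples.append(round((value + 1) / 2 * (levels - 1)))
        return samples

    # One millisecond of a 440 Hz tone, sampled 8,000 times per second:
    tone = lambda t: math.sin(2 * math.pi * 440 * t)
    print(digitize(tone, 0.001, 8000))    # eight integers standing in for a wave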

The following sections consider closely some examples of digitization: text, image, audio, and video.

Text

Text may be the easiest medium to digitize. It is not time-based, and its building blocks are of very limited number. Consider a typical typewriter: it is acceptable to say that a typewriter can produce all the texts of modern English.


So, if we assign a number to every possible key (a state among the possible variations) on a typewriter, the text is digitized. There are fewer than 50 keys, doubled by the SHIFT function, making fewer than 100 visible outputs, which means a 7-bit binary code is adequate to represent any key on a typewriter. Though the string 0100 0001 seems much more complicated than a simple, elegant “A”, its digital nature makes binary strings far easier for computers to store and transmit than their analog counterparts. Along with the exponential growth of digital storage technology, we have enough space to store the digital form of all texts ever made, and of all texts to be made in the foreseeable future. To represent all possible letters systematically, ASCII was implemented, with redundancy: codes 32 to 126, 95 codes in total, are assigned to printable letters, and printable letters are all that humans perceive. As a result, ASCII coding can digitize all English texts without any information loss.

To expand the spectrum to cover other languages, we use Unicode. In its early stage, Unicode included more than twenty thousand Chinese characters while only expanding the code length to 16 bits. Unicode has now evolved to version 9.0, with a code space of 21 bits, enough to cover almost every known character in all languages, and even some newly born emoji. Another important reason text is easy to digitize is that texts convey their meanings independently of their physical appearance: unlike visual or audio signals, text is already encoded by our language system. No matter what typeface one uses, the word “text” stimulates almost the same response in the recipient’s mind. So the digitization of text is almost lossless.
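A quick check in Python makes the mapping concrete: each character is just a number, 7 bits cover the typewriter’s printable ASCII range, and UTF-8 spends extra bytes on characters outside that range.

    # ASCII: each printable key maps to a 7-bit number.
    for ch in "At":
        print(ch, ord(ch), format(ord(ch), "07b"))   # e.g. A -> 65 -> 1000001

    # Unicode/UTF-8: other scripts and emoji get code points too,
    # encoded with more bytes per character than ASCII letters.
    for ch in "A文😀":
        print(ch, hex(ord(ch)), ch.encode("utf-8"))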

Visual

In visual digitizing, an important principle shows up: no signal channel has unlimited bandwidth, neither the source of the signal nor the human sensory system. Humans perceive an image by reconstructing a mental image in the brain from stimuli to the visual system. Though visible light has a spectrum with infinitely many possible variations (we put aside the discrete nature of light determined by photons for now), the human visual system can only distinguish a limited selection of the value and hue of the light signal, defined by the color space.


Every color perceivable in the color space is, in fact, a mix of the stimuli to three kinds of cone cells in our eyes. They respectively sense short, middle, and long wavelengths, and our mind combines their levels of stimulation to form a color sensation. Because any kind of cone cell has limited resolution for distinguishing wavelengths separated by a minimal frequency difference, we can map 0-255 to the stimulation level of a single kind of cone cell and use a triple of numbers to represent a color. This is our familiar RGB color system. A mental image is a vast array of points of certain colors, so to digitize an image, we first establish an array of points, also called the resolution of a picture, like 1920 x 1080. Then we divide the original image up into this array, measure the color in each cell, transform it into an RGB number, and store the whole array of numbers. It is both our limited ability to distinguish light’s wavelength and the limited density of visual cells that make this digitization possible.

But the digitization of images is much more complicated and troublesome than that of text. Because different display devices (the decoding end) have different color spaces, the same color, (127, 127, 127) for example, may look slightly different on different displays. The same holds not just for display but also for capturing (the encoding end): DSLR cameras from Nikon and Canon, for example, tend to capture the same object in different hues due to differences in their CMOS/CCD sensors and image processing systems. The difference is not obvious, but unlike with text, a slight change in the overall hue can produce a great perceptual difference for the viewer, so all industries involved in representing and reproducing images maintain rather high standards of color management.
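A minimal sketch of that array-and-quantize step, using a synthetic “analog” image (a continuous gradient function of my own invention) in place of a real scene:

    def image_fn(x, y):
        # "Analog" image: color varies continuously across the frame,
        # with x and y each running from 0.0 to 1.0.
        return (x, y, (x + y) / 2)                   # r, g, b in [0, 1]

    width, height = 4, 3                             # a photo might be 1920 x 1080
    pixels = []
    for row in range(height):
        for col in range(width):
            r, g, b = image_fn(col / (width - 1), row / (height - 1))
            pixels.append((round(r * 255), round(g * 255), round(b * 255)))

    print(pixels[:4])   # each cell is now a triple of 0-255 integers (RGB)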


Audio

Audio is a time-based medium. Like our visual system, our auditory organs have limited bandwidth and resolution. From the readings of a few weeks ago, we know that if the sampling rate is at least twice the highest frequency we can perceive, the signal can be reconstructed without perceptible loss. Human hearing extends to roughly 20 kHz, which is why typical .mp3 files use a 44.1 kHz sampling rate.


A sound wave can be represented by its pitch, duration, loudness, and timbre. In each time segment, we can assign numbers to represent the pitch, loudness, and timbre. The more digits we use to represent the soundwave segment in a time slice, the more information from the original sound is captured and stored. In early versions of .mp3, the typical bitrate was 64 kbps. At 128 kbps the sound is nearly as good as one can perceive, and a high-quality .mp3 file can exceed 320 kbps; uncompressed compact disc audio, by comparison, runs at about 1,411 kbps (checked below). Audio shares one problem across analog and digital forms: the process of reproducing sound from signals is more complicated than that of displaying visual signals. The digital-to-analog conversion stage, the amplifier and speaker, and even the listening environment can all affect the final audio sensation. But this is not a problem of audio digitizing; it is a problem of the whole speaker system.
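The compact disc figure is easy to verify: 44,100 samples per second, 16 bits per sample, two stereo channels.

    cd_bitrate = 44_100 * 16 * 2          # samples/s * bits/sample * channels
    print(cd_bitrate)                     # 1,411,200 bits per second
    print(cd_bitrate / 1000, "kbps")      # ~1411 kbps, far above MP3's 320 kbps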

Video

Video is easy to deal with once we know how to digitize both image and sound, since video is no more than the aggregation of the two. The film industry demonstrated long ago that at 24 frames per second, a human perceives the pictures as a continuum: the mind tricks itself into adding the time property to discrete pictures.
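Treating video as “images times frame rate” also shows why compression matters. A back-of-envelope calculation for uncompressed Full HD video at 24 frames per second (ignoring audio and any chroma subsampling):

    width, height, fps, bytes_per_px = 1920, 1080, 24, 3   # 3 bytes = RGB
    rate = width * height * bytes_per_px * fps             # bytes per second
    print(rate / 1_000_000, "MB/s")                        # ~149 MB/s, raw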

Shared Properties of Digital Media

Once signals are digitized, they share some common properties no matter what their original form is. First is the ability to be perfectly copied and transmitted. Binary signals with redundancy can almost eliminate the possibility of a copying error. With this, the terms “original” and “copy” lose some of their meaning: once a file is copied (or sent out on the net), it is really impossible, and pointless, to claim which instance is the original one. Another important property is that digitized signals become searchable. This is one of the most important reasons for digitizing: it takes milliseconds to locate a single word in a whole series of the Encyclopaedia Britannica. And beyond active search, in which humans provide keywords, the incredible computing power of computers can also search for patterns in data, finding links and information that are totally new to humans.

The third property is operability. It is much easier to operate on a text file than on letters printed on paper, and digital files can do things totally impossible for their analog counterparts. Filters in Photoshop, for example, can easily change the tone, expression style, blurriness, or contrast of an image, and can do so multiple times while retaining the ability to retrieve the unaltered file. Interestingly, for a digital file the function “blur” has nothing to do with the physical process of blurring; it is an algorithm that changes a set of digits in the file in a certain way and makes the digital image be perceived as blurred. This is an important property of digital operations: they are, by nature, algorithms operating on files by flipping certain digits.
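To show what “flipping certain digits” means, here is a toy box blur in Python. It is a minimal sketch, not Photoshop’s actual filter: each output pixel is just the average of its in-bounds neighbors, pure arithmetic on numbers with no optics involved.

    def box_blur(img):
        h, w = len(img), len(img[0])
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # Average the pixel with its in-bounds 3 x 3 neighborhood.
                vals = [img[ny][nx]
                        for ny in range(max(0, y - 1), min(h, y + 2))
                        for nx in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = sum(vals) // len(vals)
        return out

    sharp = [[0, 0, 255, 0],
             [0, 0, 255, 0],
             [0, 0, 255, 0]]
    print(box_blur(sharp))   # the hard edge smears into neighboring digits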

Conclusion

Humans perceive analog signals, which means that even if signals are digitized, we still have to go through a D-A process in order to perceive them. But even so, digitizing is important, for it grants us the ability to better copy, store, operate on, and transmit information. And a media file in the computer is an illusion: for the computer, a visual file is not that different from a song, a text, or a clip. They are all long strings of 0s and 1s, with additional digits to label the file format. Without decoders and output devices, an image file has no shared properties with a picture at all.

The difference between analog and digital media, to me, is quantitative rather than qualitative, because even the analog signal, bounded by nature’s mechanisms, is still discrete, only at an unobservable scale. Time cannot be broken into units smaller than the Planck time, nor light into anything smaller than a photon, so the analog signal is also a discrete signal, only with far more variations than binary.

Computing Devices as Metamedia – Jieshu

Most of our contemporary computing devices are metamedia, according to Kay and Manovich. Kay and Goldberg called the computer “a metamedium” whose content is “a wide range of already-existing and not-yet-invented media.” From this week’s reading, I identify some reasons.

First, they can be used to represent other media[i]. PCs, smart phones, and tablets are able to represent images, videos, music, books, and other media that are sampled and discretized into numbers.

Second, modern computing devices can edit, combine, and augment other media “with many new properties”, as Manovich notes in his Software Takes Command. For example, iMovie on my MacBook can be used to edit videos: you can insert still images, add music or other audio tracks to the videos, and key out green screens in the video frames, enhancing the collective performance of the individual media. Another example is an image-editing app called Prisma, with which you can render your picture in the style of famous artworks (as shown below). According to one of the propositions in Manovich’s New Media: Eight Propositions, the iMovie functions I mentioned above could be done by humans manually, but at a much slower pace, such as manually tracing a shape on film and cutting away the remainder with scissors[ii].


A photo rendered in the style of The Great Wave off Kanagawa by Hokusai.

Third, metamedia can create new media that did not exist in the past. For example, computer games are a new genre of medium that emerged from modern computing devices; there was no counterpart of computer games before the information age. One way to create new media with metamedia is hybridization, mentioned by Manovich in his Software Takes Command, which creatively fuses different media together. To design a computer game, game designers need to use specialized computing devices to combine digital 3D models, photography, film, scene design, storytelling, history, music, artworks, and other media.

Furthermore, as Manovich proposed in his Software Takes Command, computing devices have the potential to generate “new media tools”. For example, computers can be used to develop new media software and algorithms. A perfect example is Kay’s Smalltalk, which was designed to allow users to develop their own software. For instance, musicians used Smalltalk to develop a system called OPUS that was able to convert the sounds of a keyboard into a digital score, and a seventh-grade girl who had never coded before even made a drawing system with Smalltalk[iii].

The Transition of Computing to “Better Democracy”

Kay’s vision was to transform the “universal Turing machine” into a “universal media machine[i].” This transition in the concept of computing allows for “better democracy[ii]”, as Manovich put it. In other words, it enables average people to manipulate media much more easily and cost-effectively, without professional training. From then on, companies tried to build personal computing devices with graphical interfaces.

As Kay proposed in his A Personal Computer for Children of All Ages, the price of a Dynabook would be $294 in 1972[iv], approximately $1,675 in today’s dollars. Thanks to Moore’s Law, we can spend much less than that today to get a good computer with powerful media-processing abilities.
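A rough check of that conversion, assuming approximate US annual-average CPI values (about 41.8 for 1972 and about 240 for 2016):

    $294 × (240 / 41.8) ≈ $294 × 5.7 ≈ $1,690

which lands close to the figure above.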

Differences Between Two Kinds of Media

In our digital world, there is a lot of media content. Some of it is captured digitally, and is continuous (part of our media and technical-mediation continuum), such as JPG files of digital photos and MP3 files of live music. Some is created entirely in software environments, and is new (specific to computation and digital media), such as images drawn in Photoshop from scratch and music generated by AI. These two categories of media differ in many ways.

First of all, they are generated differently by definition. Media captured digitally are generated through the sampling and discretization of analog signals, while media “born digitally” are generated by algorithms, or more exactly, through the collaboration of humans and algorithms. Thus, continuous media always have a source in the real world, while media “born digitally” do not necessarily have one. For example, the digital photo of Lenna was scanned from a magazine, so it has a source in the real world: the printed photo in the magazine. On the contrary, an image produced in a software environment does not need to have a source in the real world. Even if it indexes an object in the real world, e.g., a caricature of a real person, its resemblance to its object may vary significantly.


Left panel: a digital photo of astronaut Claude Nicollier repairing the Hubble Telescope (Source: NASA). Right panel: an image depicting the same event created using software. (Credit: Jieshu Wang)

Second, the resolution of continuous media is limited by the devices that capture them and by the methods used to sample and digitize them, but media “born digitally” can go beyond their original resolutions. For example, Lenna’s image was sampled with a 512 x 512 scanner. That is to say, if you zoom in on the photo, it becomes fuzzier and fuzzier, until individual square pixels are recognizable. Things are different with images generated by algorithms: for example, an iPad app named Frax can generate fractal images that can be zoomed in and out vastly without any decline in resolution (see the sketch after the captions below).


Lenna’s image gets fuzzy when zoomed in.

Video: Zooming in and out of images in Frax does not cause a decrease in resolution.
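To see why algorithmically generated images have no intrinsic resolution, here is a minimal Python sketch (it says nothing about how Frax actually works): an ASCII Mandelbrot renderer that can be re-run at any center, zoom, and grid size, so zooming in never exposes fixed pixels.

    def mandelbrot_ascii(cx=-0.5, cy=0.0, scale=1.5, cols=60, rows=24, max_iter=40):
        for r in range(rows):
            line = ""
            for c in range(cols):
                # Map this character cell to a point in the complex plane
                # at the requested zoom; recomputed fresh at every scale.
                z0 = complex(cx + (c / cols - 0.5) * 2 * scale,
                             cy + (r / rows - 0.5) * 2 * scale)
                z, n = 0j, 0
                while abs(z) <= 2 and n < max_iter:
                    z = z * z + z0
                    n += 1
                line += "#" if n == max_iter else " "
            print(line)

    mandelbrot_ascii()                          # the full set
    mandelbrot_ascii(-0.7435, 0.1314, 0.005)    # a deep zoom, equally sharp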

Third, continuous media cannot be produced automatically, while media “born digitally” can. For example, as Manovich mentioned in his New Media: Eight Propositions, 3D Non-Player Characters (NPCs) in computer games move and speak under software control[ii]. In the game Assassin’s Creed, for instance, NPCs are generated randomly and respond to your behavior according to algorithms: running into an NPC raises your notoriety, and walking down the street with high notoriety draws the attention of nearby NPCs and enemies, which might lead to combat (a toy sketch of this kind of rule-driven behavior follows the video below). Another example: music captured digitally has to be recorded in a physically existing concert hall or recording studio, but music generated by algorithms can be composed and produced automatically, without human interference, as long as the necessary models and variables are provided.

Video: This piece of music was produced by Google’s Magenta program, which is designed to use machine learning systems to create art and music.
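To make the NPC example concrete, here is a toy sketch of such rule-driven behavior (hypothetical logic of my own, not the game’s actual code): notoriety accumulates from collisions, and each nearby NPC reacts to it probabilistically.

    import random

    class NPC:
        def __init__(self):
            self.alerted = False

        def react(self, player_notoriety):
            # Algorithmic "attention": the higher the notoriety,
            # the more likely this NPC is to notice the player.
            self.alerted = random.random() < min(1.0, player_notoriety / 100)

    notoriety = 0
    for _ in range(3):          # the player bumps into three NPCs
        notoriety += 20

    crowd = [NPC() for _ in range(5)]
    for npc in crowd:
        npc.react(notoriety)
    print(sum(npc.alerted for npc in crowd), "of 5 NPCs alerted")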

There’s no absolute boundary in between

However, I don’t think there exists an absolute boundary between the two. They are overlapping bands on a continuous spectrum. Here are my reasons.

First of all, they are both sampled and then discretized signals. While continuous media are obviously sampled from analog signals, media “born digitally” can be seen as samplings of continuous algorithms.

Second, they both need decoding equipment to convert them into perceptible signals so that human users can interpret them.

Third, media completely generated by software do not exist: media “born digitally” also need human involvement. Software needs to be designed by programmers. In addition, the rules used to generate media must follow people’s social conventions and mental models of media. Moreover, many software-generated media use digitized analog media as building blocks, or try hard to imitate the effects of analog media. For example, Google Earth uses digitized satellite images to build its 3D maps. GarageBand allows users to choose from sound effects that closely imitate the timbres of real musical instruments. And Photoshop filters aim to reproduce real painting brushes, even though they can produce effects that don’t exist in real life, as Manovich stated in his Software Takes Command[i].

Finally, media “born digitally” can also be sampled using the methods that are used to sample analog media. MIDI files can be converted into MP3s; Frax images can be exported as JPGs. As Manovich said in his Software Takes Command, the newness of new media “lies not in the content but in the software tools used to create, edit, view, distribute, and share this content.” Therefore, as long as the two kinds of media can be processed and distributed with the same software, they are unified. Both continuous media and media “born digitally” can be seen as new media.


References

[i] Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, vol. 5. New York; London: Bloomsbury.

[ii] Manovich, Lev. 2002. “New Media: Eight Propositions.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort. The MIT Press.

[iii] Kay, Alan, and Adele Goldberg. 1977. “Personal Dynamic Media.” Computer 10 (3): 31–41. Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort.

[iv] Kay, Alan. 1972. “A Personal Computer for Children of All Ages.” Palo Alto: Xerox PARC.