Category Archives: Week 10

The Art of Audio

I have always been an avid music fan. Growing up, I was surrounded by my father's dabbling on a variety of world instruments, my friends' live jazz shows, and the hundreds of CDs I was constantly swapping in and out of various CD players. I found that while I was not particularly inclined to play music, I was highly attuned to acoustics and sound quality. Toward the end of high school, as I became increasingly involved in studying video and film, I learned the basics of working with a semi-professional live production soundboard. It wasn't until many years later that I had an opportunity to put those skills to use.

Working at a DC restaurant with live music three nights a week and a very low-budget setup, I found myself wanting the sound quality to be just a little bit better. During shifts I would tweak the settings on the soundboard, and through a bit of trial and error, along with some helpful requests from bands, I became comfortable adjusting the sound played by the musicians into what I felt was its most acoustically pleasing form. But while I was learning how to make physical adjustments to analog waveforms, understanding at a basic level the physics of audio waves and how each dial altered them, I had never really understood what went into recording audio for conversion into a shareable media format.

Given my love for live music, I frequently tried to record shows on my camera phone, but I always found the recordings significantly lacking compared to what I remembered of the in-person listening experience. Then I received the opportunity to take my passion and basic knowledge to the next level. At one such live show, my skills at live production were noticed and led to an offer to produce future shows destined for full-scale production into music videos. In this new role I worked to get up to speed on the differences between live production and recording for remediation, learning a great deal about how audio engineers overcome the challenge of capturing live experiences and translating them into digestible media files, and how media must be tailored to meet the socio-cultural expectations of the viewer/listener.

Live Sound vs Recorded Sound

Concerts are immersive experiences. Listeners are surrounded by sound, coming directly from speakers but also reverberating off walls and being absorbed by soft surfaces and the bodies around them. The result is a very distinct "concert" sound that leaves the music a bit blurred for each listener. This blurriness stems, at its root, from the physical, analog nature of waveforms and the finite speed of sound.

As the sound technician, how do I recreate that sound? These physical aspects of the live experience mean that I can't just take the audio from a singer's microphone, sync it with camera footage, and expect listeners to be satisfied with what they are hearing. Whereas the concert attendee hears the singer's voice from many different angles, the listener at home has only the usual two speakers on their device, hopefully in stereo to give just a hint of depth.

This means that to recreate a concert sound I have to place many more mics around the venue to capture the other versions of the singer's voice that are formed by reverberations within the space. These multiple recordings must then be overlaid on top of the primary recording to give the listener the audio sensation of the live concert experience.
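
As a rough sketch of this overlay idea (an illustrative toy, not any particular engineer's actual workflow; the function name, delay, and gain values here are my own assumptions), the room-mic recordings can be summed onto the primary track with a delay and attenuation per microphone:

```python
# Sketch: overlaying delayed, attenuated room-mic signals onto a primary
# vocal track to approximate the reverberant "concert" sound described above.
# All names and numbers are illustrative assumptions, not a real production chain.

def mix_tracks(primary, room_mics, sample_rate=44100):
    """Sum a primary recording with delayed room-mic recordings.

    primary: list of float samples from the main vocal mic.
    room_mics: list of (delay_seconds, gain, samples) tuples, one per
        microphone placed around the venue; the delay reflects the extra
        distance the sound travels to reach that mic.
    """
    out = list(primary)
    for delay_s, gain, samples in room_mics:
        offset = round(delay_s * sample_rate)
        # Grow the output buffer if a delayed track runs past the end.
        needed = offset + len(samples)
        out.extend([0.0] * max(0, needed - len(out)))
        for i, s in enumerate(samples):
            out[offset + i] += gain * s
    return out

# A tiny 3-sample "recording" plus one room mic delayed by one sample
# at 44.1 kHz and attenuated to 50% of the direct signal.
mixed = mix_tracks([1.0, 0.5, 0.25], [(1 / 44100, 0.5, [1.0, 0.5, 0.25])])
```

Real mixes would also filter and pan each mic, but the principle is the same: the final track is the primary recording plus scaled, time-shifted copies of the room sound.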

When tailoring to the socio-cultural expectations distorts reality

Of course, while I may place multiple microphones around the venue to capture the variations of a singer's voice, I'm not recording the loud talkers standing at the bar behind the concertgoer. Listening to a live concert at home leaves out the very real sounds of that in-person experience. But sometimes, real experiential sound is not what we, as a society that consumes experiences via visual interfaces, want, and nowhere is this more true than in sports audio.

The two classic examples of how the audio accompanying digital representations of sports differs greatly from the analog "live" experience are NASCAR and professional soccer. When watching NASCAR at home on a TV or computer, viewers hear not only the stereo roar of cars whizzing by but also the roar of the crowd cheering. In person, however, the sound of the cars is so loud that it drowns out even the friend sitting next to you, let alone the cheering crowd. In professional soccer coverage, viewers hear a satisfying thump every time a player kicks the ball. This thump syncs what they are seeing with the sound they would expect to hear if they themselves were kicking a soccer ball. But a moment's thought reveals that there is no way anyone could hear the thump of a kick from the stands of a soccer stadium.

The ability to hear the crowd at a NASCAR race and the thump of a kicked ball are audio illusions demanded by viewers that would not be present for in-person attendees. Some are honestly achieved through well-placed microphones, such as those that track the ball on the field and capture the sound of the kick. Others, like the roar of the crowd at NASCAR, are added in from other sources, since even the best microphones would struggle to separate the sound of the crowd from the sound of the cars. These are examples of the affordances offered to creators of audio/visual media that enable them to render an experience for viewers such that preset socio-cultural expectations of the live experience are met.

Mars, Roman. "The Sound of Sports." 99 Percent Invisible (podcast), August 11, 2014. https://99percentinvisible.org/episode/the-sound-of-sports/

Digitization and the Metamedium

“Digital cameras record our images, digital networks record our purchases, and global positioning technology in our cars or cell phones pinpoints our personal location” – Murray

Having already taken CCTP 506, I am familiar with the analog vs. digital divide. We learned about the continuous-discrete dichotomy that is part of a larger socio-technical system. As Dr. Irvine explains, we "use terms of digital media as objects, rather than artifacts, and we forget that media is a continuum system that can be designed to be used in different forms and formats, which nowadays we say digital."

The process of digitization is integral to the development of modern technological achievements. In tandem with such processes is the concept of the metamedium: the ability to remediate and build on existing media. The digital camera integrates taking a picture and developing it in one artifact. Before, these two actions were separate: you took a picture using a camera and then developed it in a darkroom through a chemical process.

About six years ago, I got a Nikon DSLR camera for Christmas and have been using it ever since. I want to use this week's discussion as an opportunity to de-blackbox my camera so I can finally understand how it works. Looking at the concepts and affordances presented in Murray's Inventing the Medium: Principles of Interaction Design as a Cultural Practice will give me a better sense of how to break this process down into parts.

Murray introduces the four representational properties of digital environments: the procedural, participatory, spatial, and encyclopedic affordances that "provide a core palette for designers across applications within the common digital medium" (p. 9).

Design Affordances 

The handle extension on the right side of the camera gives people a visual clue about how to hold it so the camera fits nicely in the hand. Other affordances to keep in mind are the interface-type screen facing the user, which lets them see the scenes they capture, and the several dials and buttons beside the screen that allow easy scrolling through everything. The lens can automatically turn to focus on a subject, or you can do it manually. This incorporates what Murray describes as "participatory design": engaging the potential users of a new system in every step of the design process as collaborating members of the design team.

GUI

The screen facing the user is an example of the graphical user interface that increases the interaction between human and camera. Settings on the screen include language, date/time, and specific effects for different subjects.

Inner Design

The basic principle and design of a camera look something like this:

  

The object reflects light rays, and the camera's function is to capture them. Light that has been focused through the lens of a camera must pass through a round diaphragm on its way to being registered as an image on the camera's sensor.
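
The focusing step can be sketched with the thin-lens equation, a deliberately simplified model (a real DSLR lens has many elements, and the numbers below are my own illustrative assumptions, not Nikon specifications):

```python
# Sketch: the thin-lens equation 1/f = 1/d_o + 1/d_i, a simplified model of
# how a lens focuses light from an object at distance d_o in front of it
# onto the sensor at distance d_i behind it. A single ideal lens stands in
# for the camera's multi-element lens assembly.

def image_distance(focal_length_mm, object_distance_mm):
    """Distance behind the lens at which the object comes into focus."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on a subject 5 m (5000 mm) away: the sensor must sit
# just over 50 mm behind the lens, which is why focusing moves lens elements
# only slightly.
d_i = image_distance(50.0, 5000.0)
```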

Something else we should think of when talking about metamediums is the concept of biomimicry: the design and production of materials, structures, and systems that are modeled on biological entities and processes. Ron White describes how the camera's aperture serves the same function as the human eye's pupil. The diaphragm is the camera's version of the eye's iris. Does the concept of biomimicry add an extra layer or "medium" to a metamedium?

Modular Design 

I thought this picture was really cool because it gives us x-ray vision into the modular design inside the camera. You can see that all the parts belong to a much larger system, much like a human body, which again ties into the concept of biomimicry.

 

References

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (essay).

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Excerpts from Introduction and Chapter 2.

Ron White and Timothy Downs. How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007.

iPads and Empowered Artists

While drawing obvious inspiration from the past, notably borrowing a lot of design philosophy from Alan Kay's Dynabook, the iPad makes me feel like I'm in the future. Nothing makes me feel like a futuristic doctor on an interstellar journey more than scribbling some notes on my iPad with my trusty Apple Pencil. The iPad is a device that clearly illustrates Kay and Goldberg's concept of a "metamedium," in which computation allows the simulation of myriad preexisting media and the invention of many new forms of media unique to computation. And while the iPad allows the creation of "a number of new types of media that are not simulations of prior physical media," it shines most to me when it simulates and augments traditional media with new technological affordances (Manovich, pg. 329).

The iPad has swiftly become a truly all-in-one device for me. I view photos on Instagram; I watch videos on Netflix; I write notes in GoodNotes. Multiple forms of traditional media are accessible to me with nothing but a switch of an application. The beauty of the iPad as "metamedium," in addition to housing so many simulations of traditional media in one device, is what I can do with these media thanks to the affordances of software as a medium. Notably, these new affordances to traditional media are strikingly apparent within notes apps. With the touch of a few buttons and scales, I have access to multiple pen tips and colors. The software endeavors to replicate extant forms of media like pencils and highlighters, while allowing the strokes of these tools to be instantly deleted, undone, or transformed digitally. Additionally, the iPad allows me to have hundreds of notes and notebooks of pencil-taken notes all housed within a device smaller than a single notebook.

One idea that both empowered and unnerved me in Manovich's Software Takes Command was the fact that a metamedium has only the constraints that a developer programs into a given media-creation application. This idea is awesome because it paints a picture of endless, ever-evolving forms of media thanks to computational affordances, but it also highlights a significant divide between the programmers who create media software and the artists and designers who use it. Most designers and artists view media-creation applications as toolboxes that are unchangeable until the latest version is announced, but how much more interesting could art be if the programs were less commercial and the fields of development and art merged more often and fluidly? With the metamedium, the confines of what artistic media can be are virtually nonexistent. Empowering artists to create their own tools is the only step missing to unlock this full potential.

 

Affordances of Stitcher

Alan Kay stated that the computer is no longer considered a single medium, but a medium for other media processed by user-activated software (Irvine 11). I think this idea can similarly be applied to many popular apps, including Stitcher, a podcasting app I use daily. Stitcher is essentially a podcast database that enables users to listen to, download, and share podcasts. The four representational affordances described by Janet Murray, encyclopedic, spatial, procedural, and participatory, are integral to the effective design and usage of apps (Murray 52). Most obviously, the encyclopedic and participatory principles are represented by the Stitcher app because the app is essentially an encyclopedia of podcasts. Users can search for podcasts on particular topics or search for a show by name. The subjects of podcasts hosted on Stitcher vary from business-focused to comedic.

Interactivity in the Stitcher app can clearly be explained by Murray's definition: the structures by which we script computers with behaviors that accommodate and respond to the actions of human beings (Murray 13). When users engage with shows on Stitcher, the software begins to source similar podcasts for the user to listen to. In Stitcher, favoriting a podcast results in newly released episodes being downloaded for offline listening. This is clearly representative of both the procedural and participatory affordances of computing. The procedural property is the computer's ability to represent and execute conditional behaviors (Murray 51). By "favoriting" a particular show, the user initiates the process of adding the show to the "Favorites" list and indicates his or her interest in similar podcasts. Without user engagement from listening to or favoriting podcasts, the app would simply be a podcast encyclopedia. User engagement makes the application more dynamic and flexible because engagement leads to curated content being pushed to each user.

Initially, it was difficult for me to identify a representation of the spatial affordance of computing in Stitcher; however, through deeper reading I was able to understand where it lies. The spatial affordance of computing refers to how users feel as if interfaces are not simply objects and images with which one interacts, but actual spaces and "sites" (Murray, pg. 71). This can clearly be seen once a user selects the icon of a podcast. Users are taken to a show "site" where they can see what other episodes are available to listen to, when they were released, and whether or not each is already downloaded in the app.

 

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles.

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

 

Week 10-Reading Response

I still remember my dad once telling me that one of the coolest things in the early nineties was to see someone holding a "Da Geda" (or "Big Brother") in hand, with the long antenna pulled out, speaking as loudly as possible in a crowd: "Hey! Can't hear you! Say it again!" Then everyone would cast envious glances at this guy, or technically, at the "Da Geda" in his hand.

Figure.1 “Da Geda/Big Brother” — Motorola International 3200

It was big, weighed over one pound, and looked like a black brick. Phone calls were the only function it had. The connection quality was so poor that people needed to yell into the phone, and the battery life was short, sustaining merely a 30-minute call. Yet the "Da Geda" was still in great demand at the time. As the first mobile phone to enter the Chinese market, it was a symbol of status because of its high price: roughly 25,000 RMB in the late 1980s.

In fact, the International 3200 means more in cell phone history: it became the first hand-sized digital mobile phone, using 2G digitally encrypted technology that evolved from the original analog cell technology developed in the late 1960s. Analog was the 1G of cell phones; digital is the 2G. Let's scrutinize the small leap here. Both use the same radio technology, but digital phones use it in a different way. Unlike the analog system, where signals between the phone and the cellular network cannot be fully exploited, the digital system can compress those signals and manipulate them more easily. "The trick we do in digitizing is representing in mathematically discrete chunk sequences what occurs in continuous perceptible forms like visual representations and sounds." So basically, 2G was the digitization of 1G, and digitization is about quantization. Digital phones convert our voices into binary information and compress it. It is said that this compression allows between three and ten digital cell phone calls to occupy the space of a single analog call.
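
The "discrete chunk sequences" idea can be sketched by sampling a continuous waveform at regular intervals and quantizing each sample to a small set of levels. This is an illustrative toy, not the actual speech codec 2G phones used; the signal, rates, and bit depth below are all assumptions for demonstration:

```python
import math

# Sketch: digitizing a continuous waveform by sampling it at discrete times
# and quantizing each sample to a fixed number of levels. Real 2G phones
# layered sophisticated compression on top of this basic principle.

def digitize(signal, duration_s, sample_rate, bits):
    """Sample a continuous signal (a function of time) and quantize it."""
    levels = 2 ** bits
    samples = []
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        x = signal(t)                          # continuous value in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))  # map to an integer in 0..levels-1
        samples.append(q)
    return samples

# One cycle of a 1 Hz sine wave, sampled 8 times at 3 bits (8 levels) per
# sample: the continuous curve becomes just 24 bits of data.
digital = digitize(lambda t: math.sin(2 * math.pi * t), 1.0, 8, 3)
```

Once the voice is a stream of small integers like this, it can be compressed and encrypted like any other binary data, which is exactly what made the 2G leap possible.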

As the question raised in the reading material asks: "what are the important differences in the media states or formats?" In the case of the 1G-to-2G evolution, why was going digital important? For users, it meant data transmission, bigger channel capacity, more secure communication, better voice quality, longer battery life, and smaller size. 2G was a breakthrough, but far from a perfect one: it does not allow complex data communication like video. "…the differences in the digital artefacts is its continual openness to software processing and transformations beyond any initial physical or recorded state." More generations were to come. Now 5G is under development, promising speeds superior to the 4G network in most conditions. Our cell phones today are mature enough to process any media form digitally. Texts, images, videos: all kinds of media artefacts that once lived in physical materials never left and never will. In fact, digital media gives them an "immortal life" by overexposing them in the little black boxes in our hands. It is true that writing, text, and images have now become more powerful and more widely distributed symbolic forms than ever before. Basically, we are living in a hologram of our own history.

Credits to:

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles.

Lev Manovich, Software Takes Command.

Peter Wegner, “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5 (May 1, 1997).

 

Wikipedia: An Encyclopedia in the Digital Age

Zijing Liu

I still remember that on my seventh birthday, my parents gave me a set of encyclopedias as a gift. It had four volumes, each with hundreds of pages, narrating knowledge in an interesting and easily understandable way. I thought it was the whole world's knowledge.

Now we have digital media, which are far more “encyclopedic” than we could ever imagine.

Figure 1 Two different “encyclopedia”

Murray brought up four affordances of digital media interfaces: encyclopedic, spatial, procedural, and participatory (Murray 2012). An excellent example is Wikipedia. It is a platform where anyone can obtain, revise, distribute, and share knowledge. Anyone is able to create a new article, make changes to improve it, and gain knowledge from it. Volunteers may participate in this digital medium anonymously or under their identity if they want. Based on Wikipedia's statistics, the English Wikipedia currently includes 5,746,452 articles and averages 559 new articles per day.

The most distinct affordance of Wikipedia is the participatory one. Wikipedia has developed such a wide range mostly through the participation of numerous users. All the articles are written by volunteers. If anyone finds something wrong, or things have changed over time, s/he can revise or rewrite it. So basically, everyone can be part of the construction of Wikipedia.

Wikipedia is a digitized "encyclopedia," similar to the one my parents gave me (which also reminds me that digital media is a metamedium that "simulates" existing media), yet more comprehensive and constantly updated. Even though correctness might be weakened, intentionally or unintentionally, as more contributors join the process, Wikipedia tries to be factual, neutral, and commonly agreed upon, instead of absolutely correct or authoritative.

Therefore, most of the time, people check Wikipedia because they want to know the commonly accepted knowledge, even if it might not be 100 percent correct. In this case, abstraction, as a procedural affordance, comes into effect. When we open any entry in Wikipedia, the first thing we see is a short paragraph, an abstraction of its definition. People can obtain knowledge quickly, which is exactly the design principle that Wikipedia follows (the Hawaiian word wiki means "quick"). Also, each article contains a table of contents, which helps users navigate to the particular parts they want.

Figure 2 Abstraction on each Wikipedia page

Further, the digital encyclopedia greatly expands its spatial affordance, because space on the Internet is effectively unlimited. It is obvious that the knowledge on Wikipedia could not be printed within four volumes.

 

References

  • Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles.
  • Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.
  • Wikipedia. (2018, November 3). Retrieved from https://en.wikipedia.org/wiki/Wikipedia

Interaction Design and Google Maps

Huazhi Qin

As Janet Murray mentions in her book, all things made with electronic bits and computer code belong to a single new medium, the digital medium, with its own unique affordances (Murray). In other words, a digital artifact can be considered part of a medium with four affordances, encyclopedic, spatial, procedural, and participatory, to a greater or lesser degree. That reminds me of how the Google Maps app is elaborately designed to draw users in.

Obviously, Google Maps displays an encyclopedic trait. Its database covers almost every country, state, city, and even every street and building all over the world. The services it offers range from location searching to route planning and navigation, as well as real-time traffic status. This shows a storage potential far beyond legacy paper maps. To some extent, the large amount of information included in Google Maps shows its encyclopedic trait in the spatial layer. When "time," namely real-time information, is displayed as well, the map becomes a dynamic medium and is encyclopedic in the temporal layer.

According to Murray, the spatial affordance refers to the virtual spaces designers create that are also navigable by interactors (Murray). Its graphical user interface exemplifies Google Maps as a spatial medium. To be specific, the search bar, menus, and manipulable icons are all examples. The concepts of modularity, or black-boxing, and semiotics, or human symbol systems, which we learned about in previous weeks, can also be seen here. For instance, multiple specific locations are categorized and folded into restaurants, bars, and so forth. When users want to find a restaurant nearby, the knife-and-fork icon leads them to what they are looking for.

Furthermore, how Google Maps "teaches" users to use the app shows its participatory affordance. Murray writes that "the designer must script both sides, interactor, and digital artifact so that the actions of humans and machines are meaningful to one another." Also, Google Maps is a digital design that "is selecting the appropriate convention to communicate what actions are possible in ways that the human can understand" (Murray). The blue dot shows the user's current location. The red pin shows the location the user is searching for. Also, in real-time status, red routes show traffic congestion, while green lines show where there are no traffic delays.

Finally, the procedural affordance can be seen when users enter something ambiguous, in which case they are led to relevant information based on keywords. Google Maps also shows "no results" to handle absent information.

Reference

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

 

Weekly Writing for Week 10

Banruo Xiao

Janet Murray's reading summarizes several key terms we encountered in previous weeks. At the same time, she raises many new ideas and definitions to further explain interaction design and affordances.

Interaction Design

In this part, one of the most interesting points Murray makes is that designers are often engaged in a process of refinement. Every design is developed from an immature medium. Through improvements in technology and a deeper understanding of culture and human behavior, the immature medium, whether it is an online platform or a mobile application, becomes more user friendly. Instagram is an appropriate example. Compared to the original version, the interface of the current Instagram is clear. Almost every function is represented by a well-known icon, so a new user probably does not need to learn how to use it. It is user friendly because the designer imitates real-life human gestures and behaviors and creates a similar environment online. Specifically, when a user wants to exit the current page, he or she no longer needs to click a close icon; instead, he or she simply slides the page to the right, and it immediately disappears from the interface. The improvement is significant because it successfully imitates a human behavior, closing a book, to imply that the user is going to stop reading the page.

Affordance

In the meantime, Janet Murray emphasizes many times in her book the four affordances of the computer: encyclopedic, spatial, procedural, and participatory. The computer is an encyclopedic medium since it contains and transmits tons of information that humans can access. It is a spatial medium because it creates virtual spaces for users to navigate. It is a procedural medium because of its ability to represent and execute conditional behavior. It is a participatory medium as it allows users to interact with it or with other users. Instagram can again serve as an example. Users keep updating information on Instagram, helping it become an informational platform. Instagram enables users to navigate from their personal page to others'. At the same time, it is quite stable and offers many possibilities for users to interact with the application. As a social network platform, it is obviously participatory, whether for human-computer interaction or human-to-human interaction.

A successful platform can assemble many academic definitions and scholarly outcomes into one artifact, and Instagram is one such example.

Week 10

Last week we talked about the long history of digital interfaces, going off the topic of the first computer, which was designed as a fast but giant calculator for business and government use. At that time, the computer as a calculator was a single medium, whose purpose was to compute mathematics.

But of the modern computer, Kay argues that it is "no longer considered a single medium, but a medium for other media processed by user-activated software" (Irvine 11). So, based on our experience as well as what Kay and Manovich state, the modern computer is a metamedium, "a platform housing many existing and new media" (Manovich 162). Indeed, the use of the computer now goes far beyond calculation; we can even create new languages or media with computers. Kay also calls the computer the first metamedium, whose content is "a wide range of already-existing and not-yet-invented media" (Manovich 44).

As Manovich says, "the computer metamedium is simultaneously a set of different media and a system for generating new media tools and new types of media. In other words, a computer can be used to create new tools for working with the media types it already provides as well as to develop new not-yet-invented media." So, by combining old and new media, computers can access the internet, play videos, make music, make films, run code, and so on. These interfaces are old media augmented with new media. Based on this observation, we can conclude that modern computers are not only "media" but "metamedia."

Going through the history of digital media and computers, we can see how the computer was transformed from a simple medium into a metamedium. And I wonder where it will go in the future. Maybe it will become what we have seen in science fiction movies, like "Jarvis," so that we no longer depend on a platform and can swipe through web pages in the air.

Manovich, Lev. Software Takes Command: Extending the Language of New Media. London; New York: Bloomsbury Academic, 2013.

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (essay).

Interactive Affordances in GarageBand for iOS

Tianyi Zhao

We are living in a world filled with "digital culture," a unique social and cultural experience tightly combined with computational technologies. Continuously optimized through interaction design to afford more tasks, PCs, tablets, and mobile phones now function well in every aspect of our daily lives, to the point that people would often rather create and edit on digital media than on analog media. For example, more music arrangers are willing to produce entirely within applications, without an "analog" sound source. GarageBand for iOS, a featured digital audio workstation developed by Apple Inc., is a good example of implementing the affordances of digital interaction design.

In her book Inventing the Medium: Principles of Interaction Design as a Cultural Practice, Janet Murray raises four representational affordances of the computer: encyclopedic, spatial, procedural, and participatory. First, GarageBand interprets the encyclopedic property well. With the Sound Library, users can instantly access a considerable and expanding collection of free loops and instruments. Besides, users can draw on diverse genres and styles to personalize the organization of audio tracks. Furthermore, with a synthesizer library consisting of abundant audio patches, users can morph among different sound effects in real time. For a music arranger or composer, amateur or professional, this preset and continuously updated database is a heaven that can satisfy various demands for source material.

Figure 1. The Sound Library of GarageBand for iOS.

(Source: https://www.apple.com/ios/garageband/)

Second, the spatial affordance has been applied effectively to the interfaces of GarageBand, especially the interfaces of the different instruments. The following picture shows the interface of the modern drum kit, emulating the real thing with virtual drum heads and placement. Without any tedious explanation in text, the GUI guides users to produce different sounds by simply tapping different areas and drum heads, creating a hyperreal space and simulation for users.

Figure 2. Interfaces of Drums

(Source: https://www.apple.com/ios/garageband/)

Third, GarageBand is also a procedural medium. On the Track Controls Panel, it is clear that all the tracks are executed in bars as an organizing framework, which "reinforces our tendencies toward linear or unisequential design" (Murray 53). However, each track can be ordered multi-sequentially. For example, the guitar track can be recorded for the first 4 bars and then set as an endless loop. Also, users can add or delete tracks in any bar as needed. So in GarageBand, the melody and composition can be altered as easily as bits in a computer, thanks to such procedural design.

Figure 3. Track Controls Panel

(Source: https://www.macworld.co.uk/how-to/iosapps/edit-garageband-3506877/)

Finally, the participatory property of GarageBand supports social participation. A finished song can be shared to different social media platforms, such as Facebook, YouTube, and SoundCloud. An even more amazing feature is that GarageBand supports the real-time performance of a band: the bandleader creates a jam session, and other members can then join. With their own iOS devices, band members can each play a musical instrument together when connected over Wi-Fi or Bluetooth.

 

Works Cited

Irvine, Martin. "Introduction to Symbolic-Cognitive Interfaces: History of Design Principles."

Murray, Janet H. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. The MIT Press, Cambridge. 2012.

https://www.apple.com/ios/garageband/(2018)

https://help.apple.com/garageband/ipad/2.3/#/chsf2f99ff5. (2018)