Category Archives: Week 10

reMarkable: Digitization without Pixels

The “reMarkable” tablet purports to replicate the feel of drawing, reading, and writing on a paper surface – on a digital device. Click on the video below for a brief description from the company:

There is nothing novel about the concept behind this device. In his introduction to this unit, Professor Irvine notes that “we most commonly use digital media to simulate, emulate, or reproduce the experience of analog media, that is, representing symbolic forms that can be created and received by human senses and perceptual organs” (Irvine, Key Design Concepts for Interactive Interfaces and Digital Media). The marketability of this tablet, however, suggests that something has been lost in previous attempts to replicate the experience of writing with pen, pencil, and paper.

What differentiates this tablet from another with a traditional pixelated screen is simply a difference in how our input is converted into a legible mark on the screen, and how our eyes are able to see it. While the video above is a marketing piece, it does provide some useful insight into how these differences play out, starting at the 1:10 mark. Whereas traditional screens visualize content by lighting up millions of pixels to replicate an image, this tablet magnetizes a synthetic ink to the screen surface, allowing natural light to reflect off of it.

Presumably, the input process simply indicates where on the screen to attract that ink. Whereas a normal pen releases ink from its tip, in this case the pen indicates on the surface where the device should draw the ink from the other side.

I was not able to figure out exactly how the device converts that input into a digital format, though it clearly does, since it can export drawings to common formats such as PDF and other image formats. That would be my question for the class and/or Professor Irvine.
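As a guess at what that conversion might look like, here is a minimal sketch. It assumes (and this is only an assumption about the general approach, not reMarkable’s actual implementation) that the digitizer samples the pen’s position and pressure many times per second, stores each stroke as a list of coordinates, and can then render those strokes to a vector format such as SVG or PDF. All names below are illustrative.

```python
# Hypothetical sketch: digitizing stylus input as sampled strokes and exporting
# them to a vector format. Not reMarkable's actual code; an assumed approach.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    # Each sample is (x, y, pressure), captured many times per second.
    samples: List[Tuple[float, float, float]] = field(default_factory=list)

def capture_sample(stroke: Stroke, x: float, y: float, pressure: float) -> None:
    """Record one digitizer reading while the pen touches the surface."""
    stroke.samples.append((x, y, pressure))

def export_svg(strokes: List[Stroke], width: int, height: int) -> str:
    """Render strokes as vector paths; a PDF export would work the same way."""
    paths = []
    for s in strokes:
        if not s.samples:
            continue
        points = " ".join(f"{x:.1f},{y:.1f}" for x, y, _ in s.samples)
        paths.append(f'<polyline points="{points}" fill="none" stroke="black"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(paths) + "</svg>")

# Usage: capture one short stroke, then export it
stroke = Stroke()
for x, y, p in [(10, 10, 0.4), (12, 14, 0.6), (15, 20, 0.5)]:
    capture_sample(stroke, x, y, p)
print(export_svg([stroke], 200, 200))
```

If something like this is what happens under the hood, the “digital format” is simply the stored list of sampled coordinates, which can be re-rendered at any resolution.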

 

Sources

Lev Manovich, Software Takes Command, pp. 55-239; and Conclusion.

Martin Irvine, Key Design Concepts for Interactive Interfaces and Digital Media

https://en.wikipedia.org/wiki/Analog-to-digital_converter

https://remarkable.com/

Thoughts and Reflection on Digitization and Metamedia

Having already taken CCTP 506, I was familiar with the whole notion of the analog-digital divide. We learned about the nature of the continuous-discrete dichotomy, and how fundamental the process of digitization has been to modern technological advancements. From the music we listen to on our digital devices, to the movies and TV shows we watch on our streaming services, the entire modern media landscape is built on the process of digitization.
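Because so much of that media landscape rests on digitization, a minimal sketch may help make the continuous-discrete point concrete. The “continuous” signal below is an assumed stand-in (a pure sine tone); the function samples it at discrete time steps and quantizes each sample to an integer level, which is roughly what an analog-to-digital converter does.

```python
# A minimal sketch of digitization: sampling a continuous signal at discrete
# time steps and quantizing each sample to a finite set of integer levels.
import math

def sample_and_quantize(signal, duration_s, sample_rate_hz, bits):
    """Turn a continuous function of time into a list of integers."""
    n_samples = int(duration_s * sample_rate_hz)
    max_level = 2 ** (bits - 1) - 1          # e.g. 32767 for 16-bit audio
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz               # discrete time step
        value = signal(t)                    # continuous amplitude in [-1, 1]
        samples.append(round(value * max_level))  # discrete amplitude
    return samples

# Usage: one second of a 440 Hz tone, sampled like CD audio
tone = lambda t: math.sin(2 * math.pi * 440 * t)
pcm = sample_and_quantize(tone, duration_s=1.0, sample_rate_hz=44100, bits=16)
print(len(pcm), pcm[:5])
```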

The concept of metamedia has also been crucial to understanding our modern technological landscape. The ability to remediate and build on existing media has been foundational to the explosion of symbolic artifacts – as expressed through media and content – we’ve been creating and consuming in this era.

But what are the design ramifications of these concepts? Well, this is where tracking the modern history of technological advance is vital. Looking back at Alan Kay’s Dynabook and Ivan Sutherland’s Sketchpad shows us the lineage of design for devices that utilize digitization and metamedia. Our modern platforms and devices (smartphones, iPads, laptops, etc.) are all built on the concepts and features of these technologies. The importance of combinatorial design principles is made evident when we juxtapose the older technologies with our newer ones. At the heart of both is the idea that utilizing the process of digitization in the name of metamedia will open the door to further creative technological advancements.

What I’m interested in is what the next step will be. What will computational design look like in the next few decades? Concepts like ubiquitous computing, perceptual computing, and of course both virtual and augmented reality are gaining steam. It’s important that, in the pursuit of design advancements, we understand that what makes our modern devices so transformational is their ability to act as platforms for metamedia, and that they carry a very rich design history behind them.

References

  1. Irvine, Martin. Key Design Concepts for Interactive Interfaces and Digital Media.
  2. White, Ron, and Downs, Timothy. How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007.
  3. Manovich, Lev. The Language of New Media: “What Is New Media?” (excerpt). Cambridge: MIT Press, 2001.

 

MAKING SENSE OF IT ALL: TECHNOLOGY CREATED FOR US AND NOT WITH US

Grace Chimezie

…Technology is anything that wasn’t around when you were born.

Alan Kay.

Image found on Google

Introduction

I am not looking into a case study this week. Instead, I am re-evaluating and taking stock of my thought process to make sense of all the information I’ve soaked in this week regarding the technologies I interact with now and will interact with in the future. These reflections give a better understanding of our technologies and institutions, which we can now recognise as part of our cultural-historical continuum of symbolic representation on different kinds of physical substrates (material media from paper to TV screens and digital representations).

According to Irvine, “we enact interpretative responses to representations by communicating with software processes that are intentionally designed to facilitate further interpreting representations with the symbolic media resources we are using.”

According to Alan Kay, the technologies we’ve created do not, by themselves, solve the continuing problems of mankind; for many years it has been a tradition to attempt to cure our society’s ills through technology.

A look into BlindMaps as a product

…Everything digital and computational is physical and material

Image: BlindMaps

One of the best ways for me to make sense of the whole process and the principles being explained was to look at a product: BlindMaps, which won an IxDA Award in the ‘Empowering’ category in 2015. It is a tool for the blind and visually impaired. The “basic idea is to make the white cane a connected device which can act as an interface to the urban environment and to the user’s smartphone.” This product is a great example of how new technologies like smartphones and Bluetooth-enabled haptic interfaces (touch sensors) can be integrated into an existing tool to address a long-standing problem.
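To make that integration a little more concrete, here is a hypothetical sketch of the kind of loop such a system might run: the smartphone computes the compass bearing to the next waypoint, and the cane handle turns the difference from the user’s heading into a left/right/straight vibration cue. None of this is BlindMaps’ actual code or API; the function names, thresholds, and coordinates are illustrative assumptions.

```python
# Hypothetical sketch of a smartphone-plus-haptic-cane navigation loop.
# Bearing math is the standard great-circle initial-bearing formula.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (degrees) from the user's position to the next waypoint."""
    d_lon = math.radians(lon2 - lon1)
    y = math.sin(d_lon) * math.cos(math.radians(lat2))
    x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
         - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.cos(d_lon))
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def haptic_cue(user_heading_deg, target_bearing_deg):
    """Map the angle between heading and target onto a simple vibration pattern."""
    diff = (target_bearing_deg - user_heading_deg + 540) % 360 - 180
    if abs(diff) < 15:
        return "pulse-center"   # keep walking straight
    return "pulse-left" if diff < 0 else "pulse-right"

# Usage: user faces north (0 degrees); the next waypoint lies to the north-east
print(haptic_cue(0, bearing_deg(38.9076, -77.0723, 38.9080, -77.0715)))
```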

This brings me to the idea of computation, which incorporates the history of using signs and symbols for representing levels of abstraction, for conceptual modelling, and for creating the symbolic structures for the rule-based operations and relations in logic and mathematics. All of our uses of interactive digital multimedia (on our metamedia platforms) depend on standardized designs that enable interoperability (use in any appropriately designed device or computer system).

On the other hand, we did not get here by chance but were opened up to these concepts by great minds such as Alan Kay and Douglas Engelbart. As early as the 1950s, selected artists, filmmakers, musicians and architects were already using computers, developing their software in collaboration with computer scientists working in research labs. Most of this software was aimed at producing only the particular kinds of images, animations or music that reflected the ideas of their authors.

The metamedia principle, as explained in Manovich’s analysis, got me thinking about the affordances that the combinatorial features of digitizable forms of media have provided us, especially for the new wave of content creators: formerly stand-alone media can now be combined in software in more flexible, interpretable ways and have become open to transformations beyond any initial physical or recorded state.

All these still fall under the substrate or design principles for maintaining the perceptible and interpretable forms of our collective, shared, symbolic repertoire.


Conclusion

A funny idea I noticed this week is that computational black-boxing changes constantly with the knowledge users have of their mediums; almost like a game of hide and seek, this knowledge is often hidden in plain sight. Yet after all these years, few ideas have gone beyond the principles and theories for the use of our computational devices proposed by Alan Kay and Engelbart. Thanks to them, the use of our mediums isn’t a luxury afforded to a privileged few but is open to as many as are willing to receive, understand and use them.

 

References

Kay, Alan, and Adele Goldberg. “Personal Dynamic Media.” Computer 10, no. 3 (March 1977):31–41. https://doi.org/10.1109/C-M.1977.217672. Reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 393–404. Cambridge, MA: The MIT Press, 2003.

 

Manovich, Lev. Software Takes Command: Extending the Language of New Media. London; New York: Bloomsbury Academic, 2013.

 

Experiencing film-based photography

Digital Media minor

During my junior year in undergrad, a new minor was being offered: a minor in Digital Media, with an emphasis in Computing, Digital Communications or Digital Arts. The program I had chosen to major in was Computer Science, and it didn’t allow me to take many classes outside of the engineering school, but I was always interested in other things like photography, design and the arts in general, so this new minor was my perfect chance to study other interests of mine. I chose the Digital Media minor with the emphasis in Digital Arts.

I remember that in the first few classes we talked about this dichotomy of analog vs. digital and how these terms have changed and influenced professions that are becoming more and more popular today.

Now I understand that such a dichotomy is not that accurate to use, because media are so much more complex than that and are part of our socio-technical system. As Dr. Irvine mentions, we sometimes treat digital media as objects rather than artefacts, and we forget that media form a continuum that can be designed to be used in different forms and formats, which nowadays we call digital.

One of my interests is photography, and I have always wanted to know more about it. Nowadays it’s so easy to take a picture. In most cases, you only need your phone to snap a photo, and there you have it, ready in seconds. You can download editing programs on your phone, change the filters, make some adjustments, and now you have a second photo, which is a reproduction of the first photo but with a few changes in it.

Black and white darkroom photography 

So, I decided to take my interest a bit further and take a class in photography, and after talking to the teacher, I found out it was a class in darkroom photography. As I said earlier, today the process of taking a photo is so easy that going through a film-based class was something very different for me, so I decided to give it a try.

As I found out, for most of its history the art of photography has been a chemical process: images are captured on film, which is extremely light-sensitive, and it takes time (from 40 minutes to more than an hour) to develop just one image. You use different chemical solutions, a developer, a stop bath and a fixer, and then the film has to dry, so the whole thing is a process of following a recipe, and it’s so easy to mess up.

Process of developing film using different chemicals

A darkroom to develop the film

Film photography is usually considered analogue to distinguish it from digital photography. As I learned, this has to do with the light meter, which is considered an analog instrument. A light meter is also present in a digital camera, but there you have the option to control the brightness and the amount of light digitally.

Darkroom photography was one of the best experiences I have had for understanding how technology transforms from one medium to another.

Sure, I agree that digital cameras have made the process of taking a picture so easy, and you’re in control of all the elements needed to capture the perfect shot, but the two media share similar characteristics too.

In a digital camera the image is captured and stored on a memory card, while in black-and-white film photography the image is “stored” in the film. For both, you have to understand the basic concepts of angle, light, and distance from the object you’re trying to capture.
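For the digital side of that comparison, here is a minimal sketch of what “storing” an image on the card amounts to, assuming a simple grayscale case: each sensor reading is quantized to an integer between 0 and 255, and the grid of integers is written out as a file. The plain-text PGM format is used here only because it is easy to show; a real camera writes JPEG or RAW.

```python
# A minimal sketch of storing a black-and-white image digitally: a grid of
# quantized brightness values (0-255) written to a file in plain PGM format.
def write_pgm(path, pixels):
    """pixels: list of rows, each row a list of 0-255 grayscale values."""
    height, width = len(pixels), len(pixels[0])
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")
        for row in pixels:
            f.write(" ".join(str(v) for v in row) + "\n")

# Usage: a tiny 3x3 "frame" of quantized light readings
frame = [[0, 128, 255],
         [64, 192, 32],
         [255, 0, 128]]
write_pgm("frame.pgm", frame)
```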

One big advantage of a digital camera is reduced cost: you don’t have to keep buying chemicals and film, and you can simply reuse the memory card, saving the files from your camera to your computer and using the card over and over again.

So, in a way, we are designing technology and digital media to make our lives easier, in the sense that we have more control over the products we’re making. But the beauty of a medium is that it can be transformed and evolve into different forms without ever really being lost, and media in general are part of our socio-technical system. We can have new design elements and mediations, but we have to keep in mind that all signs, symbols and materials are part of our cultural history and will be used in one way or another.

 

Reference:

Irvine, Martin. Key Design Concepts for Interactive Interfaces and Digital Media.

Manovich, Lev. The Language of New Media: “What Is New Media?” (excerpt). Cambridge: MIT Press, 2001.

White, Ron, and Downs, Timothy. How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007.

Computer as a metamedium

As I learned in last class, the computer was first invented to solve sets of complex mathematical problems. At that time, the computer was a medium simulating the function of a calculator (and even, to some extent, the role of the abacus). Then, after the Second World War, society’s goal shifted from augmenting armies or accumulating interest to building a better living environment for human beings. Meanwhile, several computing pioneers redefined the computer: Bush and Licklider predicted that computers would be cooperators with human beings in daily life; Sutherland invented Sketchpad; Engelbart, focused on augmenting human intellect, introduced new properties of the computer like the graphical interface and the mouse. Following this series of changes, Alan Kay’s model of the Dynabook was a new revolution in computer history.

Kay described his Dynabook as being able “to simulate all existing media in an editable/authorable form in a highly portable networked form”. He said that the main point of the Dynabook was to qualitatively extend the notions of “reading, writing, sharing, publishing, etc. of ideas”.

From Alan Kay’s conception, we can see that the modern computer is a metamedium rather than a mere medium:

  • It can represent most other media while augmenting them with many new properties. The audio editing software “GoldenWave” can play music as well as visualize the sound waves (see the sketch after this list); we can listen to music on QQ Music while also making comments and reading other people’s comments; and the application “GarageBand” provides amateurs with a handy tool to produce music.
  • It is active. While users keep telling computers what to do next, computers also give users feedback in return.
  • It can handle virtually all of its owner’s information-related needs. Today, there are various kinds of software (like Excel and PowerPoint) to meet our information needs. As Manovich argues in his book Software Takes Command, there is no such thing as digital media, there is only software; to some extent, the properties of the computer are software techniques designed to work on particular types of media ecologies, content and media data.
  • It is also a system for generating new media tools and new types of media.
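As a concrete, if toy, illustration of the first point above: the sketch below synthesizes a short tone and then “augments” it with a new property by printing a crude text waveform. It is a generic example of software representing and visualizing an audio medium, not a description of how GoldenWave or any particular editor actually works.

```python
# A toy illustration of a metamedium property: the same software object both
# holds an audio signal and adds a new, visual property (a crude waveform).
import math

def synthesize(freq_hz=440, duration_s=0.01, sample_rate_hz=8000):
    """Generate a short sine tone as a list of samples in [-1, 1]."""
    n = int(duration_s * sample_rate_hz)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate_hz) for t in range(n)]

def ascii_waveform(samples, width=40):
    """Augment the audio with a visual property: one text row per sample."""
    rows = []
    for s in samples:
        pos = int((s + 1) / 2 * (width - 1))   # map [-1, 1] to a column
        rows.append(" " * pos + "*")
    return "\n".join(rows)

print(ascii_waveform(synthesize()[:20]))
```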

From what I learned this week, I would say that the computer is an extension of our senses, an artifact of our cognition of the world.

We may not have time to go to a live show, but we can watch a video of it on the Internet at a later time; we cannot see how an electron moves around a nucleus, but a model of an atom built on a computer can make it easy for a five-year-old to understand the micro-world.

However, just as Noah Wardrip-Fruin commented about hyperlinks, that the Web implemented only one of the many types of structure proposed by Nelson, people should not be satisfied with the extensions the Internet has already given us. Although nowadays we do have offline software like Photoshop and Illustrator to draw and reproduce images, this is far from enough. Most computing software augments our visual and auditory senses, but seldom our sense of touch. I remember that in an interview Kay said that the modern computer has not yet done nearly enough; in fact, Kay’s ideal computer is one that users can wear. In that case, I guess that perhaps in the future people will be able to “feel” through a computer.

How we benefit from digitization

The invention of the Dynabook brought with it the concept of the metamedium introduced by Kay and Goldberg. The computer is not simply a medium, but “a metamedium whose content is a wide range of already-existing and not-yet-invented media.” Based on this definition, the media it handles can be divided into two types. First, as a remediation machine, the computer simulates a range of earlier media and adds new properties to them. Texts can be edited in different fonts, while photos can be easily modified by changing filters and contrast. We are able to combine, cut, and add subtitles to existing video clips with video production software. Second, the computer presents “new computational media that have no physical precedents”. Hybrid media achieve the reconfiguration of different media types with the help of the computer, and this novel combination of media types could hardly have been realized in the past. In movie special effects, for instance, the properties of graphic design, cinematography and 3D animation are exchanged and interact at a deeper level. It seems that most of us take our experience of using metamedia for granted; however, we should always keep in mind that all of this media development is inseparable from digitization.

Digitization provides new possibilities for the metamedium

According to Manovich, digitization is a process of “converting continuous data into numerical representation”. Time-based music, a continuous and physically perceptible form of data (analog), becomes programmable through sampling: it is converted into discrete data (digital) and assigned numerical values. In the past, if we wanted to record music played by different instruments in a band, we could only gather all the musicians to play the piece continuously until we got the best version of the recording. Once the music is represented mathematically, we can easily locate a specific segment that we want to edit. It becomes possible to make remixes by combining different pieces of songs and adding more computational sound effects, a practice widely used in pop music nowadays. The embedded sound effects such as drum and bass, for example, are programmed in the computer in advance. As Manovich asserts, new media follow the principles of automation and variability; the computer in part helps us automatically create sound effects, thus inventing new music with new properties.
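A hedged sketch of why that numerical representation matters for remixing: once a recording is just a list of samples, “locating a segment” is index arithmetic and layering a programmed loop is a list operation. The sample rate is the common 44.1 kHz; the tracks themselves are placeholder values, not real audio.

```python
# Sketch: once audio is discrete samples, editing and remixing are list
# operations on indices. Placeholder data stands in for real recordings.
SAMPLE_RATE = 44100  # samples per second

def segment(samples, start_s, end_s):
    """Cut out the portion between two time points, by sample index."""
    return samples[int(start_s * SAMPLE_RATE):int(end_s * SAMPLE_RATE)]

def mix(track_a, track_b, gain_b=0.5):
    """Overlay a (shorter) programmed drum/bass loop onto another take."""
    out = list(track_a)
    for i, s in enumerate(track_b[:len(out)]):
        out[i] = out[i] + gain_b * s
    return out

# Usage: take seconds 10-12 of a "vocal" track and layer a loop under it
vocals = [0.0] * (30 * SAMPLE_RATE)      # placeholder recording
loop   = [0.1] * (2 * SAMPLE_RATE)       # placeholder programmed loop
remix  = mix(segment(vocals, 10, 12), loop)
print(len(remix) / SAMPLE_RATE, "seconds")
```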

djay 2 is an app for music remixing. Even beginners can mix music through its interface, much like a professional DJ using a turntable in the physical world.

The consequences of digitization are significant. New media are interactive, so users can engage in media creation, which addresses Kay’s concern about the availability and adaptability of programming tools for ordinary users. Moreover, the digital computer “has capabilities to create new kinds of media for expression and communication.” Physical images, texts, and sounds, as analog information, are digitized into numerical form. After new media have been invented in the computer, they have to return to analog forms that humans can perceive and recognize. According to Irvine, “everything computational and digital facilitates our core human symbolic-cognitive capabilities.” In my view, the analog-digital-analog continuum creates a spiral in which we continuously accept new forms of media created by ever-developing computational design and digitization, and it helps upgrade human cognition of the world.

References

Irvine, Martin. Computers as Cognitive Interfaces and Metamedia. Retrieved November 8, 2017, from https://drive.google.com/a/georgetown.edu/file/d/1AhgCQs8pcYpj6uODZtPf2mdCSjlpuW14/view?usp=drive_web&usp=embed_facebook

Manovich, L. (2013). Software Takes Command. New York; London: Bloomsbury.

Manovich, L. (2001). The Language of New Media (Principles of New Media excerpt). Retrieved November 8, 2017, from https://drive.google.com/a/georgetown.edu/file/d/0Bxfe3nz80i2GOVd4XzQydXhjSjQ/view?usp=sharing&usp=embed_facebook

Reid, B. (2014, May 22). Popular app djay 2 updated with tons of new features, iPhone app gone free for the first time ever [blog post]. Retrieved from http://www.redmondpie.com/popular-app-djay-2-updated-with-tons-of-new-features-iphone-app-gone-free-for-the-first-time-ever/

eBooks – remediation or a new frontier?

A few years back I interviewed at Penguin for a position in their very small ebook division. They were looking into designing a deluxe ebook that could be rolled out like a high-end edition. At that point, eBooks were in common usage with multiple devices designed specifically for eBooks as well as software applications for laptops and tablets. The design question wasn’t, “what should an eBook look like?” but, “how do we expand the book?” Though I didn’t get the job, it’s still something I think about as the years go by and the deluxe ebook fails to materialize outside of a handful of Amazon search results. What are the challenges and what ways of thinking about eBooks might be worth exploring?

Andrew Piper argues in Book Was There: Reading in Electronic Times that there is a different embodied experience to reading online, or even on an eReader, when compared to the codex book. There is a weight, heft, smell and texture to physical books, as well as the slow repetition that comes with turning pages. Piper links the pleasures of reading to the feeling of holding, hypothesizing that holding a book gives readers the sense that all human knowledge is in their literal grasp. He further extrapolates that into humanity’s love of miniaturization: the process of making something huge approachable. Somehow, in all this talk about miniaturization, Piper misses the point that this is also happening with digital computational devices.

Piper’s thesis is that the pleasures of books cannot be found in their digital representation, a bias he arrives at despite outlining affordances that, for the most part, are not confined to the codex book. In creating eBooks, designers paid close attention to simulating and expanding the affordances of books. We could take, for example, Piper’s beloved page. In an eBook, text is presented only to the extent that it fits on the screen without scrolling. This presentation subdivides the text into approachable and discrete units to replicate the feeling of pages. If I am reading a book on an eReader, which has a mid-sized screen, the text fills the digital display and provides margins. When I read the same book on my phone, the text still only fills the screen; I simply see fewer words and have more “pages” to move through via a simulated “flip” that comes from touching the edge of the screen. Additionally, the text is shown on a simple, uncluttered display so as not to distract from the process of reading. One affordance of books, the ability to look quickly and see how much content a reader has left, is not possible on the flat screen, which can only display part of the text at any time (in a readable format). As quantified data, the text of the eBook only displays the amount of text that fulfills human design principles (like scale). This contrasts with the codex book’s continuous form. To make up for this lack, a new affordance in the form of a sliding gauge, not unlike what appears when playing a video file or listening to an audio file, can be called up to visually represent the reader’s progress. (Interestingly, in older formats of sound and video there was no easy way to check your progress, but this is an expected affordance – and therefore constraint – of the digital format).

Figure 1: Screen Shot of an eBook displayed on an iPhone using the Nook application
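To make the pagination and progress-gauge points concrete, here is a simplified sketch: the same text produces more “pages” on a smaller screen because only a screenful is shown at a time, and because the text is quantified data, the value for a sliding progress gauge can simply be computed. The words-per-page numbers are made up for illustration, not taken from any reading app.

```python
# Sketch: screen-sized "pages" and a computable progress value, both possible
# because the book's text exists as quantified data.
def paginate(words, words_per_page):
    """Split a text into screen-sized pages."""
    return [words[i:i + words_per_page] for i in range(0, len(words), words_per_page)]

def progress(current_page, pages):
    """Value a sliding gauge would display (0.0 to 1.0)."""
    return current_page / len(pages)

text = ("call me ishmael " * 500).split()             # stand-in for a book's text
ereader_pages = paginate(text, words_per_page=250)    # mid-sized screen
phone_pages   = paginate(text, words_per_page=90)     # phone screen: more "pages"
print(len(ereader_pages), len(phone_pages), round(progress(4, phone_pages), 2))
```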

The digital format also provides brand new affordances. The codex book had discrete units in the form of pages, words, and chapters, as well as a table of contents and potentially an index to assist in navigation. However, strings of words were not searchable as they are in the digital format. Additionally, notes can be added or interesting passages marked using a layering technique, so that the underlying form is not permanently altered as it would be in its analog form.
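Here is a hedged sketch of those two affordances: full-text search returns character offsets into the string of words, and annotations live in a separate layer keyed to offsets, leaving the underlying text untouched. The example text and note are invented for illustration.

```python
# Sketch: search over digitized text, plus a non-destructive annotation layer.
def search(text, query):
    """Return the character offsets where a string of words occurs."""
    hits, start = [], 0
    while (i := text.find(query, start)) != -1:
        hits.append(i)
        start = i + 1
    return hits

annotations = []  # the layer: (start_offset, end_offset, note)

def annotate(start, end, note):
    """Record a note about a span without modifying the text itself."""
    annotations.append((start, end, note))

book_text = "It was the best of times, it was the worst of times."
print(search(book_text, "it was"))   # [26]: lowercase occurrence (search is case-sensitive)
annotate(11, 24, "famous opening")   # marks the "best of times" region
print(book_text, annotations)        # the original text is unchanged
```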

Other affordances and constraints of eBooks come from their mediation. Ebooks are read on tablets, phones, and computers (our metamediums); the book itself is a combination of display software (Nook, Kindle, etc.) and digitized text which has been grouped to be displayed in a certain order and in a certain style. The text of the book is obtained (legally) by sending a request from the metamedium to a book distribution company, which sends the digitized packets of data back to the device to be displayed on a pixelated screen. If the user has the display software on multiple devices, the book can be read on these different devices. A device can also access any of the books available to the reader. However, digital rights management software, often included in eBooks, as well as proprietary display software designed to display only books in proprietary formats, makes the legal sharing of eBooks extremely difficult. These constraints, however, are not native to the digital format but are instead the result of corporate practices and legislation.

This brings me back to my question: what would a Deluxe eBook look like? What other affordances might be unlocked by the eBook’s digital format and the ability of metamediums to simulate other media? In my search for Deluxe eBooks, I found two, one for a Ken Burns book and one for Dolly Parton, that seemed like easy cases for creating mixed-media projects. Both artists work in non-text media, so the addition of video and audio content to the eBook, easily done with a metamedium that can support different data formats, makes sense. Mixed-media projects like picture books could also benefit from the use of animation in addition to text. However, in long-form fiction and non-fiction books, other media formats might be considered distracting from the pleasures of reading. If you were reading a book and suddenly animations began to move on the page, it might pull you out of the moment. Outside of mixed media, what could be offered that might enhance the cognitive work being done while reading?

Figure 2: Screenshot of the Google results for “Deluxe eBook edition”

One idea would be hypertext and the ability to link to rich sources of information if the reader were so inclined. If a recipe were mentioned, linked text could bring the reader to that information, which, while not necessary to the story, might be of interest. Playlists and other tie-in media could be available to turn on or off. These versions of Deluxe eBooks would not be dissimilar to new editions that include additional prefaces or introductions providing additional context. More expansive changes to eBooks, however, could be possible through modifications to the book display software.

Alan Kay and Adele Goldberg describe a metamedium as “a machine … designed in a way that any owner could mold and channel its power to his own needs” (Kay and Goldberg 1977). Display software is designed in a way that limits readers’ power to manipulate the text of the books they read, rather than allowing the reader to take advantage of the affordances of the metamedium. While readers can layer comments over the text, they do not have a way to share these comments with other readers on the platform. Text can be copied, but it cannot be cut or edited to create personalized versions of the story or narrative. Pictures and art that the reader has created, or thinks are relevant, cannot be inserted. There are a myriad of ways the needs of the reader could be channeled into a more interactive relationship with their books, behaviors that are demonstrated in many fan cultures. There is a wealth of possibility if books are understood not as totalized objects, like the codex book, but as digitally fluid. As Manovich describes in his principle of variability, “a new media is not something fixed once and for all, but something that can exist in different, potentially infinite versions” (Manovich 2002).

Works Cited

Alan Kay and Adele Goldberg, “Personal Dynamic Media,” First published 1977. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Andrew Piper, 2013. Book Was There: Reading in Electronic Times. Chicago; London: University of Chicago Press.

Lev Manovich, 2002. The Language of New Media, 1st MIT Press pbk. ed. Cambridge, Mass: MIT Press.

 

What Computer Brings Us

According to Alan Kay, the computational media environment should always support users in thinking, discovering, and creating. Implementations of this vision in computer software, such as Sutherland’s Sketchpad, tried to turn computers into a kind of personal dynamic medium where users could express their ideas and generate new thoughts through human-machine interaction. Though sophisticated programs run behind the screen, the interface is quite simple and user-friendly: it takes the user’s instructions and turns them into symbolic visual elements according to a certain algorithm, and users can manipulate these symbols to express and further develop their ideas, “like examining a physical object in the real world”. In general, Alan Kay was trying to reinvent the computer as a dynamic platform for cultural creation, from which each individual user could benefit and generate their own ideas.

Moreover, Manovich points out that the computer has been reinvented from a fast calculating tool into a metamedium, which represents most other media while augmenting them with many new properties, handles all of its owner’s information-related needs, and continuously generates new media tools and new types of media. The computer can be regarded as a “remediation machine”, in which mediums with distinct properties are brought together in a single computational environment and thus can build connections with each other. With these connections, the computer is able to help its users “generate new information from old data, fuse separate information sources together, and create new knowledge from old analog sources.” Or, put in shorter form, “new media is new as we can always add new properties to it.”

With all these characteristics, computing devices facilitate our needs for constant innovation, cultural acquisition, and symbol creation. To some extent, I would like to regard this “metamedium” as a huge, open, and neutral platform which, given certain standards, allows all sorts of software to run their own functions and build connections with each other. So far I cannot think of another way to combine the virtual and the real, or an easier approach for creating meanings and connections between different properties. The following video is a good example: it combines real-life footage with imaginary creatures and impossible juxtapositions. Computing devices help build the connection between video and 3D animation, so the creator’s inspirations can actually be carried out and realized.

 

 

As I may have mentioned in one of my previous posts, a picture is only a piece of memory or an artwork as long as it is saved only in the camera. However, once it is posted on a social platform, new meaning is added to the same picture. Because social media platforms ask the user to add descriptions and locations when posting, the picture then symbolizes a story that the user wants to tell; even if the picture is posted without any description, it still conveys the message of sharing an experience. Thus, though the initial intention of taking a picture is to capture a moment, social intentions can be added to the same picture by computational devices (the metamedium) and the applications running on this platform (the medium), even though nothing in the picture itself has essentially changed. The medium itself conveys some of the message.
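A small sketch of that point: posting wraps the image in a layer of metadata (caption, location, audience) while the pixel data itself is untouched. The structure and values below are illustrative inventions, not any platform’s real data model.

```python
# Sketch: the social meaning lives in a metadata wrapper; the image data is unchanged.
post = {
    "image_bytes": b"...raw JPEG data unchanged from the camera...",
    "caption": "Sunset over the Potomac",          # invented example caption
    "location": "Georgetown, Washington, DC",      # invented example geotag
    "visibility": "friends",
    "comments": [],
}
print(post["caption"], "|", len(post["image_bytes"]), "bytes of unchanged image data")
```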

One of the significant consequences, according to Manovich, is that new media as a whole, once given to the user, have the far-reaching effect of shaping contemporary culture. Because the metamedia platform depends on standardized design, with core design principles based on our symbolic and cognitive capabilities, it is equally open to all users, which means individual users have a relatively similar chance to voice their opinions and to create symbols and meanings according to their own cultural values. While inheriting the nature of traditional mediums as carriers of knowledge and information, computing devices as the new medium provide a more open and free platform for users to develop their own meaning systems and spread their ideas (such as the rise of we-media and internet personalities); thus culture is shaped in a more multivariant way.

 

References:

Martin Irvine, Key Design Concepts for Interactive Interfaces and Digital Media
(The concepts that enabled interactive “metamedia” computing)

Manovich, Lev. 2013. Software Takes Command. International Texts in Critical Media Aesthetics, volume#5. New York ; London: Bloomsbury.

Introducing My Old Friend: Canon 60D

This week we have entered into the conversation about digital media. Because my interests lie in photography and graphic design, I deal with digital media a lot in my daily life. Therefore, this week I would like to introduce one of my best friends, my camera: the Canon 60D.

It’s not a very fancy, high-end camera, but I have been using it for four years. I started using it when I had no understanding of photography principles and could not even focus on an object. Now I can manually set the shutter, ISO, and aperture and finally take a satisfying photo. It has recorded the most important moments of my life. I feel like it’s more than a digital camera; it’s an old friend of mine.

The outer design of Canon 60D

Because of blackboxing, at first I had no idea how this camera works. But when I saw the outer design, its well-shaped look really attracted me.

  • Affordance

The hand grip gives people a visual clue about how to hold it. When I actually hold this camera, I can feel that it fits my hands and fingers perfectly. The small window (viewfinder) on the back indicates that people can see the scene through it. There are several dials on the body of the camera, and their turnable design implies that people can turn them. When you actually turn a dial, you can see the changes in the values of shutter, ISO, and aperture, and at the same time the photo will change with these values. Besides, an essential part of the camera is the lens ring, which is also turnable; its design gives people a hint as well. People can adjust the lens ring to focus best on the object.

  • GUI (Graphical User Interface)

There’s a screen on the back of the camera. People can press the buttons and turn the dials to change the values. First, they can set the language; there are numerous languages to select from, so people from all over the world can choose whichever language they’re comfortable with. Then a professional photographer will start to adjust the effects and several qualities of the image. In addition, when you switch the camera to video mode, you can see the video you are recording on this small screen, and thus you can make adjustments in real time.

All of these design choices increase the interaction between the human and the camera.

The inner principle and design of Canon 60D

The basic principle of a camera is very simple:

Objects reflect or emit light rays; the lens focuses and captures those rays and turns them into an image. The diaphragm (aperture) determines the amount of light let in, and the shutter speed determines the time of exposure.
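That relationship between the aperture and the shutter can be summarized with the standard exposure-value formula EV = log2(N²/t), where N is the f-number and t the shutter time in seconds; this is general photography math rather than anything specific to the 60D. ISO then scales the sensor’s sensitivity to whatever light that combination lets in.

```python
# The standard exposure relationship behind the dials described above:
# EV = log2(N^2 / t), with N the f-number and t the shutter time in seconds.
import math

def exposure_value(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

# Usage: f/8 at 1/125 s and f/5.6 at 1/250 s give (nearly) the same exposure
print(round(exposure_value(8, 1/125), 2))    # ~12.97
print(round(exposure_value(5.6, 1/250), 2))  # ~12.94
```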

De-blackboxing Canon 60D: The modular design inside

The design of the camera also follows universal design principles, and it basically reflects modular design. The camera body comes as a package, and lenses, matte boxes, high-speed motors and many other things are added on to make the camera fit the needs of its users.

Because of blackboxing, we can use the camera to take gorgeous photos without seeing the inside or knowing every part and working principle of the digital camera. But there is actually a very complex system inside the small “black box”. The following photo shows the motherboard of the Canon 60D. From this photo, we can see the nodes and small metal components. Each component has its own function and plays a crucial role; without any single part, the camera cannot work in the right way.

The Final Step: Photoshop

As Ron White writes in his book How Digital Photography Works, the control of light is essential in the process of taking a photo. However, we cannot control the weather when we’re taking photos outside, and so we need Photoshop to slightly “beautify” the image. From my understanding, the best photos all come from Photoshop (I’m kidding). With Photoshop, we can change the light and whiteness, and add filters to the photo to make it seem more beautiful.
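As a rough idea of the simplest kind of adjustment such an editor performs, here is a minimal sketch that brightens a grayscale image by scaling every pixel value and clamping it to the valid range; Photoshop’s real tools are, of course, far more sophisticated and layered.

```python
# A minimal sketch of a brightness adjustment: scale each pixel value and clamp.
def brighten(pixels, factor=1.2):
    """pixels: rows of 0-255 grayscale values; returns a brightened copy."""
    return [[min(255, round(v * factor)) for v in row] for row in pixels]

original = [[10, 120, 200],
            [60, 180, 250]]
print(brighten(original))   # [[12, 144, 240], [72, 216, 255]]
```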

Features of new media

New media, so to speak, are the new mediums that deal with cultural objects through new networked communication technologies. As Manovich says, new media sit at the meeting point of culture and computing. New media is an always-changing concept today, as the technologies of networks and mobile, portable devices keep evolving, but new media are basically related to computing machines.

Tracing the footprints of the different concepts and principles through which new media developed over the years, the main features of new media can be mapped onto these concepts.

Digital

The digital realization of media forms ensures that content can be stored as data, isolating the content from its physical form. Being in this form allows the data to be processed in a non-linear way, and thus much faster.

Interactive design

Interactive design of software and applications makes the new media industry more convenient and attractive, and interface design is what makes interactive software possible. On the internet today, we can get more involved in events through devices that act as interfaces: the smartphone, the computer, and now interactive television. It enhances human abilities of innovation, perception, and memory, augmenting our minds in various forms such as artificial intelligence. Interactive design in new media shows up in various aspects, including communication platforms, explanations of texts, and user-to-user or user-to-computer interaction in game playing, etc.

Combination and accumulative step

what we have now as a new media is paralleled from the content of traditional media like newspaper, books. It is the consequence of accumulation of archives cultural conventions. So the principle of modularity can be seeing here, the hypertext, graphics are being used here as the data form under control of digital software, to generate new form of media. The combination can go this way: as we are reading an article, it is the normal text form, but it has the hypertext that can direct user to another context, is we have new words that cannot understand, we touch or point the words, the dictionary can explain to us, even animation graphics are inserted in the text. that is the results of same data under different algorithm, which can originally done by human. But with the fast computing speed, the algorithm generate the new phenomena, like videos,:” transformation of quantity into quality.”