Category Archives: Week 10

Cell phones as part of a socio-technical system

When we think of a cell phone nowadays, we immediately associate it with different things depending on the function we need and the goal we want to achieve; it is so much more than a device for calling someone.

Keeping in mind media richness theory (sometimes referred to as information richness theory or MRT), cell phones nowadays are designed to offer many options and are able to reproduce visual social cues, such as gestures and body language, through video. This makes the phone a richer communication medium, one that has become part of our society.

It is important to understand that these affordances (as Zhang explains the term) are available because they were designed using modular and combinatorial design principles, as Dr. Irvine explains, and that it takes many iterations to come up with a product that is both functional and practical.

But because we don’t see the different layers, the device remains a complex idea, and it is hard to understand each part and see how it fits into the whole product.

After we “de-blackbox” this technology, it is also important to understand that this artifact is part of a socio-technical system.

When we talk about a socio-technical system, we are talking about the interaction of people and technology in an environment.

So, let’s break it up a little more and see these interactions.

(image source: https://skateboardingalice.com/papers/2010_Rogers_Fisk/create_model_small.png)

So now, we can think of the technological features of a cell phone, the tasks that it can accomplish, how we interact with the technology itself, and how the technology is used in the environment.

References:

Latour, Bruno. “On Technical Mediation.” Common Knowledge 3, no. 2 (1994): 29-64.

Irvine, Martin. “Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing” (introductory essay).

Zhang, Jiajie, and Vimla L. Patel. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.

 

 

Be aware of the complexity

The topic this week is really a wake-up call for any linear and simplified mindset when dealing with systems – “Systems thinking is non-reductionist and non-totalizing in the methods used for developing explanations for causality and agency: nothing in a system can be reduced to single, independent entities or to other constituents in a system.”

I recently looked into the popular “McKinsey methodology”; its MECE principle is probably the closest example. As the name suggests, MECE stands for “Mutually Exclusive, Collectively Exhaustive.” When applying this framework to a problem, the “MECE principle suggests that all the possible causes or options be considered in solving these problems be grouped and categorized in a particular way. Specifically, all the information should be grouped into categories where there is no overlap between categories (mutually exclusive) and all the categories added together covers all possible options (collectively exhaustive).” (Cheng, 2011)

Now I have two main critiques of this principle:

1: The very foundation of this principle is that all the factors within the system interact through linear causal relationships.

2: It assumes the system is stable and transparent.

For the first point, it is more likely that the way the system functions is not a simple interaction between two single factors; it is a collective reaction in which many entities, subsystems, and agencies within the system work together. Each of them can be a cause and an effect at the same time. Any attempt to simplify the procedure ends up with a partial and incomplete map of what is going on in the system.

For the second point, the principle understates the difficulty of de-blackboxing the system. A system is a dynamic concept: when we look at the perceptible “representations and interface cues and conventions and the results of processes returned,” we are looking at the outcome of a dynamic process. This means there are uncertainties in that process, so it is hard to assert that we have exhausted all the possibilities.

A good example is the time when a part of India had a serpent problem (too many serpents) and the government announced that anyone who killed a serpent would be rewarded. After a while, people started raising serpents themselves in order to claim the reward with a dead body. The government soon found out and abolished the policy, which led people to release the serpents they had raised. The result? More serpents than before the policy was initiated.

The government in this scenario used simple causal reasoning: the number of serpents would decrease as long as they were being killed. But something else happened along the way and led to a different result.

Graphite and diamond both consist of carbon; only the structural arrangement of the element differs, yet they end up with totally different characteristics. When we look at a single point, we simply miss the whole picture.

How iPhones interface

From my perspective, an artifact is forever interfacing with other objects, since it cannot leave the sociotechnical system it belongs to. When humans design, invent, and give purpose to an artifact, it carries the expectation that it will be used by humans; thus its function must embody some means of interacting and interfacing with users. Consequently, I think the iPhone is at its root part of a sociotechnical system, because humans designed it to be the mobile device on which you can browse the Internet, answer the phone, and send and receive text messages. It is so closely tied to mundane life that it has become a medium documenting the huge sociotechnical and sociocultural system that hosts us.

Furthermore, an iPhone receives input from many aspects of our daily life; when it outputs, it also transports almost everything we can imagine that is around us. An iPhone is a node in many subsystems and the liaison connecting different social groups through the communication and expression it affords. How the iPhone has affected our lives has been obvious over the past few years, as people have become addicted to social media and rely on it to complete many rituals and practices, such as dating. iPhones themselves have also become a symbol in pop culture, a representation of social class and ideology. Thus, the iPhone is an incarnation of how technology and society interact.

Unmanned Aerial Vehicle in Film Industry

A UAV (Unmanned Aerial Vehicle) is defined as a “powered, aerial vehicle that does not carry a human operator, uses aerodynamic forces to provide vehicle lift, can fly autonomously or be piloted remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload”. Such a definition conveys information about the observable structures, the unobservable technical and physical operations, as well as the unobservable system dependencies.

Focusing on the UAVs we studied, the observable structures usually include the vehicle (the aircraft itself, containing the body, sensors, power supply, and actuators), the payload (here, the camera mounted on it), and a remote controller. Each of the three is a second-level black box that can be studied further.

At the level of unobservable technical and physical operations, it is fundamentally an analog-to-digital conversion (ADC) system that translates the photographer’s physical actions on the remote control into digital signals carrying the command information for the aircraft. Compared with a manned craft, such radio-transmitted digital commands replace the physical cockpit controls.
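To make the remote-control layer a little more concrete, here is a minimal Python sketch of how a stick deflection (an analog movement) might be sampled and quantized into a digital command value before being radioed to the aircraft. The 10-bit resolution and the function names are illustrative assumptions of mine, not taken from any real UAV protocol.

```python
# Illustrative sketch only: a stick position in [-1, 1] is quantized into an
# n-bit integer command (assumed 10 bits here), sent over the radio link, and
# decoded on the aircraft side. Real UAV control links are more elaborate.

def stick_to_command(deflection: float, bits: int = 10) -> int:
    """Quantize a stick deflection in [-1.0, 1.0] into an unsigned n-bit value."""
    deflection = max(-1.0, min(1.0, deflection))      # clamp to the valid range
    levels = (1 << bits) - 1                          # 1023 levels for 10 bits
    return round((deflection + 1.0) / 2.0 * levels)

def command_to_deflection(command: int, bits: int = 10) -> float:
    """On the aircraft side, recover an approximate deflection from the command."""
    levels = (1 << bits) - 1
    return command / levels * 2.0 - 1.0

if __name__ == "__main__":
    for d in (-1.0, -0.33, 0.0, 0.5, 1.0):
        c = stick_to_command(d)
        print(f"stick {d:+.2f} -> command {c:4d} -> decoded {command_to_deflection(c):+.3f}")
```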

When thinking of the unobservable system dependencies, what comes to me first are the different design principles and regulations based on the functions that UAVs serve. Unmanned aerial vehicles were originally developed for training military personnel in the United States and were later used in a variety of spheres. They are not only important tools in military reconnaissance and scientific investigation, but also play vital roles in the film industry and even in shutterbugs’ daily lives (to some people they are toys). However, for different types of UAV there are different regulations on design, production, and even sale and usage. Many things need to be limited, such as flying altitude, control precision, and flight areas. Placed in this specific period of history, the civilian adoption of the UAV has been driven by the relatively peaceful social relationships around the world, the popularization of computing technology, and many other social elements.

All these elements, or rather systems, are certainly shaping the development of the civilian UAV, while at the same time the civilian use of UAVs is also shaping human society. Since UAVs serving different functions may bring different social effects, to stick to our topic, in the following parts I’d like to focus on their use in the film industry.

Lower Cost for Traditional Aerial Shots

(Beginning, 2:59, 3:18. By the way, I’m not really into the storyline of this video even though I’m a fan of BTS, but there are some really beautiful and magnificent shots in it. Visually, it does not look like a “cheap” music video in the mainstream K-pop style.)

Such shots used to be seen only in films with large budgets or in BBC documentaries. Nowadays, however, even a small entertainment company that runs only one group of artists is able to produce such beautiful images.

Before the extensive use of UAV filming, photographers managed such aerial shots by sitting in a helicopter and recording the footage themselves. However, that was expensive, time-consuming, and, most seriously, dangerous. There were even reports of helicopters wrecked during filming; in 2002, two members of Cameron’s team died in such a wreck during film-making. Of course, the UAV itself is not a hundred percent safe either. When I worked with a photography crew in Thailand, the UAV we were using rushed toward one of our photographers and injured his head because of a wrong operation by the person controlling the craft.

Low altitude shooting and continuous shots

( 0:15)

In The Expendables 3, the first several shots, with their fierce gunfight, rushing train, and hovering helicopter, were finished with just one UAV in ten days. In the past, in order to finish them, the whole team might have needed to be carefully organized around a helicopter shoot taking more than thirty days.

Many low-altitude shots cannot be achieved with helicopters, especially when the subject is moving at high speed. But with a UAV, when the subject, like a train or a car, and the UAV carrying a proper camera are both maneuvered to rush toward each other at very high speed, incredibly amazing shots can be filmed, conveying a great visual effect.

However, as UAVs are used more and more in the film industry, what comes to my mind is this: once CGI develops far enough, could it replace the UAV, meaning that all those brilliant shots are made rather than filmed? If that happens, what makes a film a film, instead of a CGI visual show?

 

P.S. The Six Functional Categories of UAVs

  • Target and decoy – providing ground and aerial gunnery a target that simulates an enemy aircraft or missile
  • Reconnaissance – providing battlefield intelligence
  • Combat – providing attack capability for high-risk missions
  • Logistics – delivering cargo
  • Research and development – improving UAV technologies
  • Civil and commercial UAVs – agriculture, aerial photography, data collection

Understanding Televisions

“Understanding technologies, especially our media and computational technologies – as part of our cultural and social systems, and not as a separable domain.” This statement is easy to take for granted. Technologies do shape our lifestyles, environment, and ways of viewing the world, but the more important questions are how this process happens and what role technologies play in it. According to Latour, technologies are not “instrumentality,” which was my assumption before reading this week’s articles, but an “inter-agency” emphasizing the interaction between humans and machines. To view technologies as “inter-agency” is to reexamine their functions and symbolic meanings in the whole socio-technical system.

I would like to take the television as an example. Television is a vital medium in the history of human beings, or, in other words, in a sociotechnical system; it has largely altered people’s habitual ways of absorbing information, spending time with their families, and so on. As a black box, the technologies inside a television are opaque and invisible, which requires us to decloak their normal invisibility. I would like to follow the steps organized by Prof. Irvine to de-blackbox the television as a media technology.

First of all, a digital television consists of three major parts: the front-end, transmission and allocation, and the terminal. Each of these parts has its physical and symbolic functions. The front-end is responsible for handling sources, transmitting original signals, and processing the transmission in order to gather data and signals and digitize analog signals. Transmission and allocation involve satellites, cable wires, networks or microwaves, and cable broadbands. The terminal is what we usually call a “television,” which consists of a screen and a set-top box (a receiver). Simply speaking, they play the roles of a source, an encoder or transducer, and a receiver.
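As a toy illustration of that three-part chain, here is a small Python sketch in which a “front-end” digitizes an analog signal, a “transmission” stage carries the bits, and a “terminal” decodes them for display. The 8-bit sampling and the function names are my own assumptions for the example, not the actual broadcast standards.

```python
# Toy model of the pipeline described above: source -> front-end (encoder) ->
# transmission -> terminal (receiver). Everything here is simplified; real
# digital TV adds compression, modulation, error correction, and so on.

from typing import List

def front_end_digitize(analog_samples: List[float]) -> List[int]:
    """Front-end: turn an 'analog' signal (floats in [0, 1]) into 8-bit values."""
    return [round(max(0.0, min(1.0, s)) * 255) for s in analog_samples]

def transmit(digital_signal: List[int]) -> List[int]:
    """Transmission and allocation: satellite, cable, or network carries the bits."""
    return list(digital_signal)          # idealized, lossless channel

def terminal_decode(digital_signal: List[int]) -> List[float]:
    """Terminal (set-top box + screen): decode the bits back for display."""
    return [v / 255 for v in digital_signal]

if __name__ == "__main__":
    source = [0.0, 0.25, 0.5, 0.75, 1.0]  # a tiny stand-in for an analog source
    print(terminal_decode(transmit(front_end_digitize(source))))
```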

Moreover, television as a sociotechnical artifact delegates to itself what would otherwise be done by humans. For instance, before the era of television, people had to go to a theatre to watch a show, and sometimes they needed to wait for a troupe to come to their city. With television, they can simply sit at home comfortably without spending time on transportation. In the process described above, television plays the role of an actant that substitutes for humans in doing something.

As McLuhan says, “the medium is the message.” Television was one of the most powerful and popular media of the 20th century. Its characteristics, such as immediacy and hypermediacy, resulted in new forms of message based on television, like TV news, TV series, and reality shows.

In addition, a medium is also an environment that cultivates certain genres of culture, values, and social functions. For example, the first televised debate between presidential candidates was between Kennedy and Nixon. Kennedy reversed his disadvantageous situation by appearing on TV with decent clothing and an excellent speech, and eventually won the election. This example is often used to illustrate the power television has to shape audiences’ impressions. Based on this observation, we can tell that televisions not only transmit “content” or “information,” but also serve as “a two-way interface for mediating social-cultural authority, value, meanings.”

Although the importance and power of television have been eclipsed by the emergence and popularity of laptops and YouTube, television is remediated into these newer media in a way that redistributes value and power. Besides that, remediation affects the scenarios in which televisions are used and their symbolic meanings. Television often represents the union of a family because it is placed in the living room, where people spend their time together. If people are alone, they are inclined to use laptops or other media to watch videos instead of televisions.

Reference:

1. Irvine, M. (2018). Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing.

2. Latour, B. (1994). On technical mediation. Common Knowledge, 3(2), 29-64.

3. Latour, B. (1990). Technology is society made durable. The Sociological Review, 38(1 suppl), 103-131.

4. Zhang, J., & Patel, V. L. (2008). Distributed cognition, representation, and affordance. Cognition Distributed: How Cognitive Technology Extends Our Minds, 16, 137-144.

Distributed Cognition of Ear-Microphone

I really like watching dance music performances, and there is an interesting fact we can always notice on stage: when a group sings the same song, some singers use hand-microphones while others use ear-microphones.


Hand-microphone and ear-microphone

Here I want to attach a performance to illustrate this interesting finding. You can see from this performance that only the main vocal uses a hand-microphone while the other sub-vocals and rappers use ear-microphones, and you can clearly notice the differences between the two. You can hear the breath of the main vocal, and his sound is louder and deeper. When the main vocal sings his part, he seldom has dance movements; he can stay in place and sing. If a dance movement requires a hand, he can only use his left hand, since his right hand is occupied by the hand-microphone, and his group members stand behind him as dance partners. For a dance performance, however, much more is required of those who use ear-microphones: when they sing their parts, they need to dance and sing at the same time. Since their keys are not as high as the main vocal’s, they can manage it. With the cooperation of hand-microphone and ear-microphone, this famous performance of ‘NEVER’, with more than 12 million views, has been presented to the audience.

According to Latour, the ear-microphone, as a kind of technology, is delegated by singers the work of holding the microphone, freeing both of their hands on stage. This reflects that social relations can shape technical relations while technical relations can also shape social relations. The ear-microphone, as a tool and technology, is designed by people and delegated the responsibility of holding the microphone in place for the singers throughout the show. It was invented because of society’s pursuit of better performances, and its invention helps improve the visual and auditory effect and guarantees a better performance for the audience. This effect of delegation is positive and active.

According to Zhang and Patel in ‘Distributed Cognition, Representation and Affordance’, it is the interwoven processing of internal and external information that generates much of a person’s intelligent behavior. Here I mainly want to analyze the external information of the ear-microphone. The existence and usage of the ear-microphone are closely related to the socio-technical system, and the external representations are the shapes and positions of the symbols and the spatial relations of partial products, which can be perceptually inspected from the environment. The ear-microphone is linked to the audience, singers, medium, culture, and environment. From the perspective of the audience, the existence of the ear-microphone has several advantages and necessities.

  • First, it ensures a better performance effect. For idol groups that mostly sing dance music, dancing and singing are both necessary. The ear-microphone frees both of the singers’ hands so the whole team can cooperate to perform better choreography.
  • Second, it ensures a better auditory effect. If a singer uses a hand-microphone, the distance between the microphone and the mouth varies, so the sound that is captured, sampled, and quantized by the system differs from moment to moment. The ear-microphone, by contrast, keeps a fixed distance between microphone and mouth, so the voice the audience hears stays stable (see the sketch after this list). The difference is especially noticeable when singers rotate on stage: the hand and the head normally move at different speeds, so the sound easily becomes unstable, either too loud or too quiet. The ear-microphone avoids this as much as possible and gives the audience a better listening experience.
  • Third, it ensures a better visual effect. If singers hold a hand-microphone for a long time, half of their facial expression is covered. On stage, the connection and resonance with the audience depend on dance, song, and facial expression. Compared with the hand-microphone, the ear-microphone is very small and is often designed in inconspicuous colors such as black or skin tone, so it ensures that the singers’ or group’s stage performance is transmitted to the audience to the greatest extent, which can arouse the audience’s resonance.
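To see why the fixed mouth-to-microphone distance matters, here is a rough Python sketch using a simple point-source model, in which the captured sound level changes by 20·log10 of the distance ratio. The specific distances are made-up examples, not measurements of real stage microphones.

```python
# Rough illustration: for a simple point source, moving the microphone from a
# reference distance to a new distance changes the captured level by
# -20 * log10(new / reference) decibels. Distances below are invented examples.

import math

def level_change_db(d_ref_cm: float, d_new_cm: float) -> float:
    """Level change (dB) when the mic moves from d_ref_cm to d_new_cm away."""
    return -20.0 * math.log10(d_new_cm / d_ref_cm)

if __name__ == "__main__":
    # A hand-microphone swinging between 3 cm and 20 cm while the singer turns:
    for d in (3, 5, 10, 20):
        print(f"{d:>2} cm from the mouth: {level_change_db(5, d):+5.1f} dB vs. 5 cm")
    # An ear-microphone stays at roughly the same distance, so the level stays
    # roughly constant, which is the stability described above.
```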

From the perspective of the singers, the ear-microphone frees them further. Since a hand-microphone is not light, using an ear-microphone lightens the load on their hands and guarantees a better stage performance. From the perspective of the medium, voice is one way of transmitting information; here, however, the ear-microphone does not have an advantage over the hand-microphone, because the sound is much more energetic, louder, and deeper with a hand-microphone. Consequently, since the main vocal does not have many dance movements, he still chooses the hand-microphone, which has the better sound effect and makes sure his voice is transmitted to the greatest extent. From the perspective of the socio-cultural environment, if most people prefer dance music, the advantages of the ear-microphone are significant because it can combine songs and dances and guarantee a better performance. If most people and fans prefer lyrical songs that do not need much dance, the hand-microphone remains very important in the music market.

Reference:

1. Latour, B. (1994). On technical mediation. Common Knowledge, 3(2), 29-64.

2. Latour, B. (1990). Technology is society made durable. The Sociological Review, 38(1 suppl), 103-131.

3. Zhang, J., & Patel, V. L. (2008). Distributed cognition, representation, and affordance. Cognition Distributed: How Cognitive Technology Extends Our Minds, 16, 137-144.

The history of music player: a sociotechnical analysis — Wency

It seems to have become a real commonplace for most music lovers today to take a pair of earpieces and an iPhone with them and listen to music anywhere, anytime. Whether you are hiking, going to the gym, traveling, working on a project, or studying for an exam, it has become a habit to put on your earpieces and start enjoying the music while getting better focused on your task.

However, things were different even just 10 years ago, when the iPhone was not as popular as it is today and people needed external music players (e.g., an MP3 or MP4 player) to listen to music. Further, if you ask your parents about their experience as young music lovers, they probably feel a huge nostalgia for the days when several children gathered in some cool kid’s house to listen to the popular songs on tape, or, a few years later, for feeling extremely cool walking down the street carrying a huge Walkman, shaking their whole bodies to the beat.

  1. From ancient live performance to phonograph and records: a need to preserve the temporal audio piece

People’s need to enjoy various forms of art began very early, when people made instruments and gave live performances. They also invented musical notation to communicate with other players as well as to preserve the songs. However, while the key, rhythm, beats, etc. of a song could be preserved, the performance itself could not. For purely instrumental performances this might not be a huge problem, since people could produce many tokens of the original instrument as well as tokens of the original performance so long as they followed the rules recorded in the notation. But what about the singers? Words and notes per se would never be able to fully record the original voice of the singer. At this point, therefore, we see the human need to invent a technology, a durable tool that could extend and maintain skills and thus break the limitations of time and space; they needed to maintain the entire succession of accumulated elements for future innovation (Latour, 1994, p.61; see also Latour, 1991, p.109).

The phonograph, to some extent, realizes an affordance that is latent in the environment (Zhang & Patel, 2006, p.336). While the phonograph embodies a desire to preserve temporal music, or more broadly sounds, accompanying that desire is a need to play that audio piece back in the future. Therefore records, 10 years later, and eventually LP records, were invented to play the audio piece and to keep refining the standard.

  2. Records –> tapes –> Walkman: an increasing need for portability and convenience

If the invention of the phonograph and records was intended to preserve and play music, which still focuses on the music per se, the long-term transition from records to MP3 might nevertheless be more or less a deviation. Can we invent a technology that allows users to carry it so that they can listen to music everywhere? Can we make it smaller and lighter so that users won’t be annoyed while carrying it? The idea of a new innovation, the transition from an idea to a project and from a project to an object, incorporates not only the people who inhabit it but also those it wishes to affect: the product development team sees the potential of the users’ needs and imbues that potential into the next update. On the other hand, users’ earlier roles, habits, and functions also provide a precondition for such innovation (they have gotten used to the earlier mode of listening to music and they themselves see the shortcomings of the current product; e.g., while going on a trip, they might want to carry something with them to kill time) (Latour, 1994, p.49). The music player therefore became smaller and smaller, and Sony reached one of its peaks by selling the Walkman from 1979 on.

  3. From Walkman to MP3: analog to digital

The well-known Sony Walkman didn’t last forever, unfortunately. Because its core technology was the magnetic tape, external noise and the dust accumulated on the tape inevitably lowered the sound quality. As the whole technology industry worked better and better on computing and digitization, digital music developed as well. Through the analog-to-digital transition, people were able to represent the noise numerically and eliminate it more accurately, so the quality of music listening was guaranteed.
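As a small illustration of that analog-to-digital idea, here is a minimal Python sketch that samples a “clean” tone, adds noise, and then reduces the noise numerically with a simple moving average. Real digital audio restoration is far more sophisticated; this only shows why having the signal as numbers makes noise easier to treat.

```python
# Minimal sketch, not a real restoration algorithm: sample a tone, corrupt it
# with noise, then smooth it with a short moving average and compare the error.

import math
import random

SAMPLE_RATE = 8000   # samples per second (assumed, kept low for brevity)
FREQ = 440.0         # an A4 tone

def sample_tone(n: int) -> list:
    return [math.sin(2 * math.pi * FREQ * t / SAMPLE_RATE) for t in range(n)]

def add_noise(signal: list, amount: float = 0.2) -> list:
    return [s + random.uniform(-amount, amount) for s in signal]

def moving_average(signal: list, width: int = 3) -> list:
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mean_squared_error(a: list, b: list) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

if __name__ == "__main__":
    clean = sample_tone(400)
    noisy = add_noise(clean)
    smoothed = moving_average(noisy)
    print(f"error before smoothing: {mean_squared_error(noisy, clean):.4f}")
    print(f"error after smoothing:  {mean_squared_error(smoothed, clean):.4f}")
```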

At the same time, users’ needs are incited and diversified by the environment as well as by their previous experience. If we think about how users got more and more used to the better quality provided by digital music, most of them would not be willing to return to the Walkman age, when there was too much interrupting noise. It is not difficult to relate this to modern distributed cognition theory, where the boundary of cognition is no longer the individual: external sources and tools in the environment should also be considered within its range (Hollan, Hutchins, & Kirsh, 2000, p.175, p.193).

Meanwhile, the blossoming of digital music allows musicians to delegate part of the real performance in the recording room to electronic synthesizing effects (the computer was later programmed to simulate a ton of sounds from different kinds of instruments, which gives musicians multiple choices as well as saving much of the money needed to find players for each instrument in a song) (Latour, 1994, p.39). At the same time, modern musicians are required and trained to adjust more and more to working with these new electronic technologies, and new occupations are created at the macro social level.

  4. From MP3 and MP4 to streaming media: not music itself but the market for music player devices undergoes a revolution – mixed affordance: the whole industry

The story would seem to end if we were only focusing on listening to music conveniently, with high quality and without the limitations of time and space. Nevertheless, today’s reality seemed unpredictable 10 years ago. In the digital age, everything can be integrated and thus be defined as a computer. While the MP3 player focuses on digital music and also provides a large amount of storage, why not add some new features onto it? Can we watch videos, download pictures, and play games on it as well? The iPod Touch later became known for offering everything a smartphone does except communication.

But as the price climbed higher and higher while the new music players took on more and more functions, why not just integrate music listening directly into the smartphone? It is highly integrated, convenient, and much cheaper than purchasing an external device. Such a need thus accelerated a whole revolution in streaming media, where Apple Music, Spotify, Pandora, etc. came out almost overnight and everyone started downloading these applications on their phones to listen to music. While these applications substitute for the previous music players, they are competing with each other as well as collaborating with the smartphone industry, such as the Apple App Store. An economic mediation through time and space is thus emerging as well (Latour, 1994, p.45).

  5. What would be next?

That is a tough question. In fact, technologies are not fetishes; they are unpredictable, not means but mediators (Latour, 1994, p.53). Both society and technology mediate what is currently mediating them, so the development of technology is non-linear. We are living in a period when technology, society, politics, economics, and culture all serve as agencies, and the whole system is shifting from a single actor to many agents, from homogeneous agency to hybrid constellations, and from hierarchy to framed interactivity (Rammert, 2008, p.13-16). Meanwhile, the larger system is broken into subsystems, or modules, which enjoy relative independence but also depend on and interact with each other invisibly (Irvine, 2018, p.2). Like what we learned in 506, the whole complex system is like a black box that contains infinite black boxes inside. To understand a system, we need to define it, explore the interaction of its components, and elaborate its relation with the external environment.

  • 1857 – Leon Scott de Martinville’s Phonautograph: The phonautograph could record, but could not reproduce, sounds. The original design for the phonautograph eventually led to the gramophone.
  • 1877 – Thomas Edison’s Phonograph: The phonograph made recorded music possible. The device recorded sound, including human voices.
  • 1887 – Emile Berliner’s Gramophone: Emile Berliner created the Gramophone, the first device to play a disk of recorded music, in 1887. The gramophone made recorded music accessible.
  • 1896 – Gramophone on the Market: By 1896, the gramophone was on the market as a Victrola, playing disks of recorded music. This was the first commercially available record player.
  • 1905 – Beginning of the 78 RPM Standard: The 78 RPM standard was introduced. This enabled shoppers to be sure that their records would play on their Victrolas, and play correctly. It remained the standard until the introduction of the LP in 1940.
  • 1954 – First Transistor Radio: In 1954, the first transistor radio allowed listeners to take music with them, as the radio was now small and portable.
  • 1962 – First Portable Stereo: The first portable stereo integrated speakers into a record player, allowing people to take their record player with them wherever they went.
  • 1963 – Audio Cassette: The audio cassette offered music in a smaller and more portable format than ever before. Audio cassettes also enabled the first mix tapes.
  • 1965 – Release of the 8-Track Tape: The 8-track tape brought recorded music into cars, long before audio cassette players were integrated into car stereos.
  • 1979 – The Walkman: In 1979, the first personal music player was released by Sony. The Walkman combined an audio cassette player and headphones.
  • 1983 – The First Compact Disc: The Compact Disc offered higher-quality recording and increased durability compared to an audio cassette. By 1984, portable CD players were available.
  • 1998 – First MP3 Player: The first MP3 player, playing audio files, was released in 1998. The player eliminated the need for another medium to hold music.
  • 2001 – Apple’s First iPod: Apple released its first iPod in 2001, taking the MP3 player mainstream. The iPod made digital music significantly more popular.
  • 2007 – iPod Touch: Apple released the iPod Touch, which served as a music player but also offered access to the Apple App Store, games, and other features.

Timeline source: (Evolution of Music Players Timeline, n.d.)

References:

  1. Evolution of Music Players Timeline. (n.d.). Retrieved March 27, 2018, from http://www.softschools.com/timelines/evolution_of_music_players_timeline/406/
  2. Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction,7(2), 174-196. doi:10.1145/353485.353487
  3. Irvine, M. (2018). Media, Mediation and Sociotechnical Artefacts: Methods for De-Blackboxing.
  4. Latour, B. (1994). On technical mediation. Common Knowledge, 3(2), 29-64.
  5. Latour, B. (1991). Technology is society made durable. In J. Law (Ed.), A Sociology of Monsters: Essays on Power, Technology and Domination (pp. 103-131). London, UK; New York, NY: Routledge.
  6. Rammert, W. (2008). Where the Action Is: Distributed Agency Between Humans, Machines, and Programs. Social Science Open Access Repository (SSOAR).
  7. Zhang, J., & Patel, V. (2006). Distributed cognition, representation and affordance. Pragmatics & Cognition, 14(2), 333-341.