
A Sociotechnical Approach to Software as Politics

Mariana Leyton Escobar

Abstract

This essay uses secondary sources, mainly Gabriella Coleman’s “Coding Freedom: The Ethics and Aesthetics of Hacking” (2012) and Manuel Castells’ “The Internet Galaxy: Reflections on the Internet, Business, and Society” (2003), to perform a preliminary analysis of how the development of software came to be the center of two ways of thinking about technology. The main concern is how this question could be explored with the method proposed by actor-network theory. The findings set the stage for a more focused analysis based on primary data collection.


In a compelling anthropological account of the evolution of the free and open source software culture, Gabriella Coleman shares the following poem:

Programmers’ art as

that of natural scientist

is to be precise.

complete in every detail of description, not

leaving things to chance.

reader, see how yet

technical communicants

deserve free speech rights;

see how numbers, rules,

patterns, languages you don’t

yourself speak yet.

still should in law be

protected from suppression,

called valuable speech!

(Schoen, cited in Coleman 2012, p. 161)

The poem is compelling on its own, making the case for why programming code should be considered, and thus protected as, free speech. Indeed, as Coleman (2012) explores in her study, the free and open source movement developed a culture around “broad, culturally familiar visions of freedom, free speech, rights, and liberalism that harks back to constitutional ideals” (p. 2). In this sense, the poem is made more complex because it represents a movement with specific ideals. But it goes beyond that: the poem, written in 1999, was actually part of a larger, worldwide protest against the arrest of then sixteen-year-old free and open source software developer Jon Johansen (Coleman, 2012).

One of the ways in which DVDs are protected from being copied and distributed without permission is to encode encryption into them, a measure known as digital rights management (DRM). DRM refers to access control technologies developed to restrict the use of proprietary hardware and copyrighted works, and the Digital Millennium Copyright Act (DMCA) of 1998 established such measures as software not to be meddled with. In other words, the DMCA establishes, among other things, that the “production and dissemination of technology, devices, or services intended to circumvent” measures that control access to copyrighted works, such as DRM technologies, is a crime.

Johansen had written, along with two anonymous developers, a piece of software called DeCSS that allowed people to unlock the encryption encoded in DVDs to control their distribution. The poem is in fact a transcoding of that software’s code (Coleman, 2012, pp. 161, 170). A piece of the code can be seen in the image below, a snapshot of a page from Coleman’s book.

A piece of the contested code (Coleman, 2012).

The poem then becomes a technical artifact that is part of a complex sociotechnical system built around the philosophy of creating and sharing software free of restrictive intellectual property rights.

That sentence contains several components that will be expanded in this essay in order to explore the free and open source software movement as a sociotechnical system that has emerged in parallel to the commercial software sociotechnical system with the development and expansion of personal computers, the Internet, and the web. By following the actor-network theory method proposed in science and technology studies (STS), it will offer a preliminary analysis of how the development of software came to be the center of two ways of thinking about technology. Using secondary sources, it will evaluate the types of nodes and links that would need to be followed to explore this question in a subsequent, more focused study.

Sociotechnical Systems

“To conceive of humanity and technology as polar opposites is, in effect, to wish away humanity: we are sociotechnical animals, and each human interaction is sociotechnical. We are never limited to social ties. We are never faced only with objects.” (Latour, 1999, p. 214)

In 1980, Langdon Winner, an STS scholar, published the popular essay “Do Artifacts Have Politics?”, posing the idea that they do. In his view, technology should not be seen from a deterministic perspective, by which it is expected to have specific impacts on society; but he also calls attention to the fact that social determinist theories of technology, which consider not the technology but the socioeconomic system in which it is embedded, go too far in removing any interest from the artifact itself. Without denying the usefulness of a social constructivist approach, Winner argued that to understand how artifacts have politics, the technological artifacts themselves had to be taken seriously. Without focusing on a specific technology, his argument is that artifacts have politics insofar as they are the result of structuring design decisions, decisions that, once the artifact is finalized and put in the world, influence “how people are going to work, communicate, travel, consume, and so forth over a very long time” (Winner, 1980, p. 5).

A good example for both ideas, that a technological artifact can structure how people organize and that this influence can last for a long time, is the QWERTY keyboard configuration. The QWERTY design does not favor any specific design requirement, neither for users nor for the hardware (or now software) that holds it, and yet it has not changed since its inception and will likely continue to last. Paul David (1985) offers a great account of the “one damn thing follows another” story that led to this situation, based on the concept of path dependence. This economics concept explains how certain outcomes can result from “historical accidents” or chance “rather than systemic forces” (p. 332).

Among the three factors David identifies as determinant in the history of the QWERTY keyboard is the need for “technical interrelatedness” (p. 334), that is, the need for system compatibility or interoperability among the different parts of a technical system. The typewriter was at the time considered an instrument of production, as it was at first mostly bought by businesses that would invest in training workers to memorize and efficiently use the QWERTY keyboard. Thus, the compatibility that was valued by the time the market for typewriters started to grow, circa 1890, was that of the keyboard with human memory. In this way, not only the keyboard, but a specific design of the keyboard, had structured the organization and budget of businesses in a way that eventually determined that we are still using a layout designed for typing with ten fingers on phones on which we type with two thumbs. This kind of back-and-forth, with technology structuring social forces and then being shaped by those very forces, is at the very center of what is meant by a sociotechnical system.

A definition

Through a philosophical characterization of technical artifacts (as opposed to natural or social objects) and their context of use, Vermaas et al. (2011) propose a baseline concept for the matter at hand. To begin with, a system can be defined as “an entity that can be separated into parts, which are all simultaneously linked to each other in a specific way” (p. 68). A hybrid system is a system in which the components that make it up are essentially different, or, put in the authors’ words, “components which, as far as their scientific description goes, belong in very many different ‘worlds’” (p. 69). A sociotechnical system is then a hybrid system in which certain components are described by the natural sciences and others by the social sciences (ibid.). In such a system, there can be many users at one time, and they can take on the role of user, operator, or both (ibid.). A sociotechnical artifact is then the “redefinition of technology” as a node in a sociotechnical system (Irvine, 2016).

The Social and the Technical?

Recognizing the effect that the cultural structuring of technological innovations can have, and that social and cultural developments can be understood by looking at the technical base of such developments, Régis Debray (1999) proposed mediology as a methodology to explore “the function of a medium in all its forms, over a long time-span (since the birth of writing), and without becoming obsessed by today’s media” (p. 1). Indeed, Debray did not refer to a study focused on “the media,” but to one focused on the relationship between what he calls “social functions,” such as “religion, ideology, art, politics,” and the “means and medium/environment [milieux] of transmission and transport” (ibid.). The focus of this methodology is on the relations between “the social” and “the technical,” but with an expanded definition of the latter that includes not just the technical artifact, the medium, but also its environment.

While Debray’s (1999) proposal expands what is to be understood by “the technical,” it maintains a duality between that and “the social,” something that actor-network theory (ANT), another method for exploring sociotechnical systems, removes. Bruno Latour, one of the key proponents of this approach, argues that such dualism needs to be discarded because, misguided, it has only served to hide a more complex reality: that humans are “sociotechnical animals, and each human interaction is sociotechnical” (1999, p. 214). In Pandora’s Hope (1999), Latour offers a “mythical history of collectives” in which he explores eleven levels through which human and non-human objects (actants) are theorized to have co-evolved, as well as four interpretations of what technical mediation means, to explain how humans and non-humans can “fold into each other.” His theoretical analysis aims to show that humans and non-humans are part of a single process that has unfolded throughout history and resulted in the current “collective,” ANT’s term for the assemblage of humans and non-humans, used instead of the term “society.”

 

Technical mediation and four moments of association in ANT

The four ways in which technology acts as a mediator are important for understanding a key concept in using ANT as a method of analysis: they are the means by which agency is distributed in a network. Collectives change as humans and non-humans articulate different associations among themselves according to specific purposes:

  • Translation: the means by which two or more actors (human or non-human) articulate their individual goals.
  • Composition: the means by which the articulated individual goals become a different, composite goal through successive translations.
  • Enrollment: the process by which the joint production of the association formed produces outputs through a blackboxed process (one in which only inputs and outputs can be observed, while what happens between them is not easily discernible). This moment can vary depending on how many components come together, the types of goals involved, and so on. Once the actors align their goals and create a blackbox, they act as one: a new actant is created, leading to the last step.
  • Displacement: the creation of a new hybrid, a composite of human(s) and non-human(s), which forms a new collective with distinct goals and capacities. (Latour, 1999, pp. 176–198)

ANT as a methodology can then be used to understand how agency is distributed in different phenomena (not just “social” phenomena, but hybrid phenomena) of which sociotechnical artifacts are a part. To apply it, Latour (2007) explains, it is necessary to be extremely observant and collect all data that evidences traces of human or non-human components establishing links among each other to pursue certain goals. By doing this, and through a process of thorough description of thick data, he suggests it is possible to understand how agency is distributed among humans, non-humans, mediators, events, and the blackboxes that hide some assemblage of them (Latour, 2007).

By retracing these links, reversing the blackboxing, and exploring their historicity, we can use ANT to understand why sociotechnical systems work the way they do, at what moments there were alternatives, and in what way the system found some level of equilibrium by blackboxing some assemblages. In this case, the focus will be on understanding how the development of software came to be the center of two ways of thinking about technology.

***

ANT is a theory filled with new terminology that can be very confusing. As a thorough account of it goes beyond the scope of this essay, I include as a supplement to this section a selection of the glossary shared by Latour in Pandora’s Hope (1999).

BLACKBOXING:

An expression from the sociology of science that refers to the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become. (p. 304)

COLLECTIVE:

Unlike society*, which is an artifact imposed by the modernist settlement*, this term refers to the associations of humans and nonhumans*. While a division between nature* and society renders invisible the political process by which the cosmos is collected in one livable whole, the word “collective” makes this process central. Its slogan could be “no reality without representation.” (p. 305)

CONCRESCENCE:

A term employed by Whitehead to designate an event* without using the Kantian idiom of the phenomenon*. Concrescence is not an act of knowledge applying human categories to indifferent stuff out there but a modification of all the components or circumstances of the event. (p. 305)

EVENT:

A term borrowed from Whitehead to replace the notion of discovery and its very implausible philosophy of history (in which the object remains immobile while the human historicity of the discoverers receives all the attention). Defining an experiment as an event has consequences for the historicity* of all the ingredients, including nonhumans, that are the circumstances of that experiment (see concrescence). (p. 306)

HISTORICITY:

A term borrowed from the philosophy of history to refer not just to the passage of time — 1999 after 1998 — but to the fact that something happens in time, that history not only passes but transforms, that it is made not only of dates but of events*, not only of intermediaries* but of mediations*. (p. 306)

MEDIATION vs. INTERMEDIARY:

The term “mediation*,” in contrast with “intermediary*,” means an event* or an actor* that cannot be exactly defined by its input and its output. If an intermediary is fully defined by what causes it, a mediation always exceeds its condition. The real difference is not between realists and relativists, sociologists and philosophers, but between those who recognize in the many entanglements of practice* mere intermediaries and those who recognize mediations. (p. 307)

NATURE:

Like society*, nature is not considered as the commonsense external background of human and social action but as the result of a highly problematic settlement* whose political genealogy is traced throughout the book. The words “nonhumans*” and “collective*” refer to entities that have been freed from the political burden of using the concept of nature to shortcut due political process. (p. 309)

NONHUMAN:

This concept has meaning only in the difference between the pair “human-nonhuman” and the subject-object dichotomy. Associations of humans and nonhumans refer to a different political regime from the war forced upon us by the distinction between subject and object. A nonhuman is thus the peacetime version of the object: what the object would look like if it were not engaged in the war to shortcut due political process. The pair human-nonhuman is not a way to “overcome” the subject-object distinction but a way to bypass it entirely. (p. 308)

SETTLEMENT:

Shorthand for the “modernist settlement,” which has sealed off into incommensurable problems questions that cannot be solved separately and have to be tackled all at once: the epistemological question of how we can know the outside world, the psychological question of how a mind can maintain a connection with an outside world, the political question of how we can keep order in society, and the moral question of how we can live a good life — to sum up, “out there,” “in there,” “down there,” and “up there.” (p. 310)

SOCIETY:

The word does not refer to an entity that exists in itself and is ruled by its own laws by opposition to other entities, such as nature*; it means the result of a settlement* that, for political reasons, artificially divides things between the natural and the social realms. To refer not to the artifact of society but to the many connections between humans and nonhumans*, I use the word “collective*” instead. (p. 311)

***

Computing and the Internet — Communities, Programming, and Values

The history of computing and the Internet has been told from many perspectives over the years, and a theme that emerges consistently is how different communities of users emerged and co-evolved along with the technology in different ways. This section will highlight how this co-evolution is not determined by the technologies themselves, but by the interactions between actors who use, tinker with, and expand on the technology, and how the technology changes along with these actions. In this way, computing, networking, and software can be seen as sociotechnical artifacts that are part of a sociotechnical system. They don’t evolve on their own and don’t determine what people do with them. Users and technologies come together to develop a sociotechnical system, in which users can use and/or create applications for computers and the Internet, and which in turn shapes the way users and technologies assemble. In this process, blackboxing can take place in a variety of places, but the focus here will be on how the development of software came to be the center of two ways of thinking about technology.

In Great Principles of Computing, Denning and Martell (2015) explain how computing can be understood as a science in itself because, in its most abstract conception, it is a matter of processing information. As such, computing can be applied to a number of different domains (such as security, artificial intelligence, data analytics, networking, or robotics) because, as a method to process and generate information, it is about following certain principles that can be combined in different ways in different domains to achieve different objectives (Denning & Martell, 2015, pp. 13–15). Computing as a method then does not determine what can be done, but guides its application through principles based on communication, computation, recollection, coordination, evaluation, and design (ibid.). As such, computing opens up a world of opportunities for those interested in developing a computing application for a specific domain. This is what Mahoney (2005) explores in the different histories that emerged as communities of practitioners got together to develop specific domains, thus bringing more attention to the aspects facilitated by computing. He focuses on the different aspects of computing that were developed by different groups, such as data processing and management for the scientists and engineers creating it, for the private sector, or for government.

Software

Software is how we “put the world into computers” (Mahoney, 2005).

Mahoney emphasizes that historians of computing are only beginning to explore the history of software. While he stresses the importance of shifting the focus from the machine to include its use, history, and design, he also says that “associated tasks such as analysis, programming, or operation” need to be understood in order to tell this history properly. This echoes Latour’s urging to analyze the traces of all activities in a sociotechnical system. For Mahoney, understanding the history of software is important because software is what “actually gets things done outside the world of the computer itself,” and the communities that develop software are the ones filling the gap between “what we can imagine computers doing and what we can actually make them do” (Mahoney, 2005, p. 128). In not understanding this history, he says, we miss the fact that this process is not determined, and so we do not learn what the alternatives are. This matters because software is how we “put the world into computers,” and doing so entails deciding “how we can represent in the symbols of computation portions of the world of interest to us and how we can translate the resulting transformed representation into desired actions” (Mahoney, 2005, pp. 128–9). The history of computing, then, is not just about transistors, chips, and screens, but about how different groups of people used such components to develop certain areas, based on the principles of computing, according to their interests, in a way that selects how the world is represented.

To put this more concisely, Alan Kay described the computer as a metamedium, a medium whose content is “a wide range of already-existing and not-yet-invented media” (Manovich, 2013, p. 44). Because computing does not set rules for what can be done with computers, only principles for how computing in general should be applied (Denning & Martell, 2015), the range of the “not-yet-invented media” remains wide. Moreover, technology in general (not just computers) also follows two key principles, “cumulative combinatorial design” and “recursiveness,” which explain that technologies are made of components of previously made technologies and can themselves be used as components later on (Arthur, 2011).

To the extent that the computer was developed to be a general-purpose machine, and the Internet designed as a general-purpose, “dumb network” meant only to transport data, users can develop applications for this metamedium by developing software. In doing so, and following the principles mentioned, users can use software to represent and combine the formats of previously existing media, remix them, and expand on them, thus contributing to the metamedium. If the computer allowed users to manipulate information more easily, the Internet added to that by allowing users to do so while connecting with each other.

Along these lines, in Software Takes Command, Manovich (2013) shares Mahoney’s concern for software, explaining that “software has become our interface to the world, to others, to our memory and our imagination—a universal language through which the world speaks, and a universal engine on which the world runs,” and yet its history has remained mostly unexplored (p. 2). For Manovich, the key thing to understand about software and its representational function is that, by digitizing information so it can speak the language of computers, we transform it in a substantial way:

“In new media lingo, to “transcode” something is to translate it into another format. The computerization of culture gradually accomplishes similar transcoding in relation to all cultural categories and concepts. That is, cultural categories and concepts are substituted, on the level of meaning and/or the language, by new ones which derive from computer’s ontology, epistemology and pragmatics. New media thus acts as a forerunner of this more general process of cultural re-conceptualization.” (Manovich, 2002, p. 64)

In this light, the focus on software development is rightly emphasized, as it turns out that writing the code and algorithms that “put the world into computers” entails a decision-making process about what to represent of the world and how to do it. To the extent that more of our activities are mediated by software-based technologies, they are being mediated by decisions that had to weigh alternatives for representing the world in the first place. At the same time, our networked technologies have developed in such a way that the use of some software is highly distributed across the globe, and so the interactions among users and developers of software can be seen as a sociotechnical system made of a wide array of human and non-human components, including the designers and users of software, as well as all the components necessary for software to exist and function.

 

Software as a Sociotechnical System

To explore software as a sociotechnical system then would entail exploring the history of computing and the development of the Internet, along with a whole array of details depending on what aspect of software one is interested in.

In this case, the focus is on how software became the center of two ways of thinking about technology, as evidenced by the emergence of one community that values the “free and open” aspects of software and another that valued its commercial aspects while promoting the idea of “quality software.”

Gabriella Coleman’s (2012) anthropological account of the free and open source software community, the technological and material practices it developed, and its own vision of liberal ideals, together with Manuel Castells’ (2003) sociological explanation of how four layers of “Internet culture” developed with the emergence and initial expansion of the Internet, will serve as secondary data to identify the initial nodes and links in the sociotechnical network that would need to be explored to account for such a development. While neither author uses ANT, both emphasize the interaction of networked individuals and collectives with technology and, without falling into a techno-deterministic approach, give technology sufficient “importance” to guide an ANT analysis that would place technology on the same footing as human actors.

Emergence of a community

Free and open source software is not a new concept; it was the method for developing and sharing software in the initial stages of computing. The focus on it as a philosophy for thinking about technology, however, has developed more recently. Coleman (2012) explores how a community of free and open source software developers formed internationally as self-identified hackers connected with each other around FOSS projects and thus developed two main components: a material one, based on the practice of developing software, and their own vision of liberal ideals. As she explores the ways in which the FOSS community struggled with intellectual property laws in order to promote a system of software development that did not necessarily commodify it, she finds that the community values the liberal ideal of free speech but opposes that of commodifying everything. A romanticized interpretation of liberalism is what soothes this tension (pp. 3–4). In her account, beyond the community’s encounters with the law (part of which was explored in the first segment of this essay), another important moment emerges as software commercialization began to boom and the still-small community of software developers split into two ways of thinking about software. In 1976, after it became clear that hackers were sharing the source code for Microsoft products, Bill Gates wrote a letter to the so-called “hobbyists” in an attempt to explain why developing software outside of a commercial venture would not be sustainable, as it would not produce “quality software” (p. 65). A decade and many more developments later, Richard Stallman was establishing the Free Software Foundation, the GNU Manifesto, and the General Public License.

For Castells (2003), four main “Internet cultures” emerged as the Internet propagated: the techno-elites, the hackers, the virtual communitarians, and the entrepreneurs. The techno-elites were the original Internet architects and the community that spread from them, which valued meritocracy and openness both in their method of work and in their designs, which is why the Internet is based on open standards. For the hackers, however, open source was not enough; software also had to be free, not in terms of cost but in terms of the freedom to share, understand, and tinker with it. Castells argues that while the Internet was developed with open Internet protocols, this concern was radicalized by the hacker culture in the “struggles to defend the openness of the UNIX source code” (pp. 39–43). Such struggles eventually turned into the movement for free and open source software explored by Coleman. The other two layers also help in understanding the sociotechnical context of these developments. On the one hand, the virtual-community aspect of Internet culture calls attention to the ease with which users can form networked communities across the globe, an important aspect of the FOSS movement. On the other, the entrepreneurial layer brings to the fore the opposing force that made software the focus of a discursive battle (pp. 52–60): with the advent of digital technologies, a market for new digital products emerged, and with it the eagerness to protect the intellectual property of those products.

An encoded poem as a piece of the sociotechnical

From both accounts, the free and open source software community must be read as global and as part of a network that includes the history of computing and the Internet, the history of the expansion of these technologies, and the history of intellectual property law (as well as its global expansion), along with their different ideological, cultural, economic, and political contexts. As explored by Coleman, FOSS culture has spread not only through the development of software but through the sharing of such development, online and offline, as she discovers the importance of in-person events for these hackers (2012). As theorized by Castells (2009), the power that networked communities can leverage with the Internet and related technologies has changed, and it now has the potential for global impact. To the extent that the FOSS community continues to expand and to openly challenge liberal ideals and ways of thinking about software and technology in general, understanding this complex sociotechnical network is pressing.

The poem quoted above, in this light, becomes a much more complex piece of the sociotechnical puzzle. It is an expression in the name of freedom that not only makes a cultural and political statement by equating code with speech; it also takes the form of a protest artifact, being the transcoding of a piece of contested software. That software is itself a transcoding of one way to represent the world in the world of networked computers, one that turned out to activate a network of legal, economic, and political arrangements which, in affecting that piece of software, affect all other coded speech. In such a way, this artifact does indeed have politics, but in the light of ANT, it does so in a much more complex way than it might first appear.

Bibliography

Arthur, W. B. (2011). The Nature of Technology: What It Is and How It Evolves (Reprint ed.). New York: Free Press.
Castells, M. (2003). The Internet Galaxy: Reflections on the Internet, Business, and Society (1st ed.). Oxford: Oxford University Press.
Castells, M. (2009). Communication Power. Oxford: Oxford University Press.
Coleman, E. G. (2012). Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press.
David, P. A. (1985). Clio and the Economics of QWERTY. The American Economic Review, 75(2), 332–337.
Debray, R. (1999, August). What is Mediology? Le Monde Diplomatique.
Denning, P. J., & Martell, C. H. (2015). Great Principles of Computing. Cambridge, MA: The MIT Press.
Irvine, M. (2016). Understanding Media, Mediation, and Sociotechnical Artefacts: From Concepts and Hypotheses to Methods for De‐Blackboxing. Communication, Culture & Technology, Georgetown University.
Latour, B. (1999). Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press.
Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network-Theory (1st ed.). Oxford; New York: Oxford University Press.
Mahoney, M. S. (2005). The histories of computing(s). Interdisciplinary Science Reviews, 30(2), 119–135. https://doi.org/10.1179/030801805X25927
Manovich, L. (2002). The Language of New Media (Rev. ed.). Cambridge, MA: The MIT Press.
Manovich, L. (2013). Software Takes Command. New York; London: Bloomsbury Academic.
Vermaas, P., Kroes, P., Franssen, M., van de Poel, I., & Houkes, W. (2011). A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers.
Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136.

Evernote: A Case Study

By Jieshu Wang and Mariana Leyton Escobar


Evernote’s Home Screen: “Remember Everything”

Evernote is a cross-platform app designed for note taking, organization, and archiving.

As a cognitive artifact

From Evernote’s blog: “Our goal is to improve the lives of everyone around the world by giving them a second brain and a perfect memory.”

Things you can do with Evernote: take notes (text, image, and audio), record thoughts, manage lists, collect articles…


Evernote information model, from a user’s perspective

  • From an individual view, Evernote changes the nature of tasks.
  • From a system view, Evernote enhances the performance of the system formed by the human and Evernote together.

As a modular system

 

Evernote high-level architecture

  1. Shards (pre-Google era):
    1. Modules for storage: one shard per 100,000 users, each an island with no cross-talk or dependencies.
    2. Physical structure: two SuperMicro boxes + two Intel processors + RAM + Seagate drives + RAID configurations
  2. Hub-and-spoke centralized structure:
    1. Hub: web servers
    2. Spokes: your devices
  3. Networking: through HTTPS port 443
    1. all “web” activities
    2. all client synchronization via Thrift-based service APIs (Evernote is a module in the whole Internet)
  4. Modular data structure
    1. UserStore
    2. NoteStore
    3. Each contains further modules; interfaces are the arrow lines (UserStore Service & NoteStore Service)
  5. Business layer & organization
    1. servers on Google Cloud (also modular)
    2. app development at Evernote
  6. Separate companies set up to deal with specific issues
    1. Evernote GmbH in Switzerland to manage data (with two data centers on the US west coast)
    2. Yinxiang Biji for China


Synchronization

From a consumer’s point of view, synchronization is a process through which files in different locations or devices are updated to the same latest versions.

How to sync?

  1. Each NoteStore object has two identifiers:
    1. A globally unique identifier (GUID), which identifies the object
    2. An update sequence number (USN), which increases every time the object changes
  2. Protocol: Evernote Data Access and Management (EDAM) is the protocol for exchanging Evernote data with the Evernote service.
  3. Each Evernote account has a variable called updateCount, which is the highest USN in the account.
  4. Sync types: full & incremental
  5. Steps: a series of function calls (a minimal sketch of the logic follows below)
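
The USN logic lends itself to a compact sketch. The following is a hypothetical, simplified client in Python; the names (SyncClient, get_update_count, get_objects_since) are illustrative stand-ins, not the real EDAM API, but comparing the account’s updateCount against the last USN seen is the core of the incremental sync described above.

class SyncClient:
    def __init__(self):
        self.last_update_count = 0   # highest USN this client has seen
        self.objects = {}            # local cache: GUID -> object

    def sync(self, server):
        server_count = server.get_update_count()   # the account's updateCount
        if server_count == self.last_update_count:
            return                                  # already up to date
        if self.last_update_count == 0:
            changed = server.get_all_objects()      # full sync on first run
        else:
            # incremental sync: only objects whose USN exceeds ours
            changed = server.get_objects_since(self.last_update_count)
        for obj in changed:
            self.objects[obj["guid"]] = obj         # GUID identifies, USN versions
        self.last_update_count = server_count

Run against any server object exposing those three methods, the client downloads everything once and only the changed objects afterwards.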

As a socio-technical system

Marketing strategies can be telling

Looking for users offline


And online.


Evernote’s partner in Brazil is Vivo, the local branch of Spain’s Telefonica


Samsung was another partner, through its Galaxy Note phablet

 

Legal agreement between user, Evernote, and now Google

Data storage presents a challenge for services that store data for users. Evernote recently announced its decision to switch from running its own servers to using a cloud storage service from Google.


The choice of this particular cloud service over others has to do with Google’s machine learning tools.


Legal agreements also shape how data is handled; for this, Evernote, alongside a range of legal information, lists its Three Laws of Data Protection


Evernote’s 3 Laws of Data Protection

and links to Google’s.

Both companies explain that they have to respond to law enforcement data requests but that they are stringent in evaluating such requests. Both produce transparency reports (though these are limited in how much data they can share).

Business strategies


Yinxiang Biji (印象笔记)

A separate app built to work better with the Chinese Internet, with a separate company and a separate data center located in China.

  • Political factors
    • Public notes and notebooks are banned
    • Sharing to Facebook and Twitter is not allowed
    • A separate data center avoids losing the whole Chinese market when migrating to Google Cloud
  • Business factors
    • Payment methods: Alipay, WeChat Pay
    • Social media: Sina Weibo, WeChat, Douban
    • Chinese customer support
    • Chinese APIs


A complex modular address space

Using the web as a source of information and a place of interaction has become such second nature that we rarely stop to think of all the steps, one right after the other at the speed of light, that are necessary to open a website or perform a Google search. One could say that a massive system of networked cables and antennas, technical standards, hardware and software, big databases, and multimedia content is set in motion when we hit “Enter” on a Google query, but that would not be the most accurate description. Rather, when we hit “Enter” and obtain that list of results, we enter an ongoing socio-technical process of information flows and of political, social, economic, and cultural relations among both human and non-human actors across the globe, a process that has kept that massive system active for decades now.

I use a Google Chrome browser to go on the web on my laptop. The way this browser has been set up, I no longer need to go to google.com in order to do a Google search; I can simply type my query into the address bar and hit “Enter,” and it will be recognized as such. This step-saving feature has also been incorporated into browsers such as Firefox and Safari, as shown in the images below.

Searching directly from the address bar in Chrome

Searching directly from the address bar in Firefox

Searching directly from the address bar in Safari

 

It is an example of the evolution of a user interface based on use. As more people rely on commercial search engines to start browsing information on the web, browser developers noticed they could save the user the step of having to first go to a search engine’s site to then perform a query. While this small step seems insignificant, it should be noticed that when I type google.com into my browser’s address bar, a communication process begins between my computer and a network of servers, routers, and a big data system (the Domain Name System), taking several back-and-forth “hops” across a global network before I can access the famous search box screen.
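
That DNS step can be made visible with a few lines of Python using the standard library (a minimal sketch; real browsers add caching and parallel lookups on top of this):

import socket

# Ask the system resolver which IP addresses serve google.com on port 443,
# the same question a browser must answer before it can connect.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo("google.com", 443):
    print(sockaddr[0])  # one of the server addresses the browser could use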

 

It’s true that this digital saga can take only seconds, if that, here in Washington, DC. But connectivity speeds vary greatly across the globe, and any millisecond that can be shaved off the process of a user searching on a commercial search engine (how easily the user can interact with the interface, how fast it can run a query that returns the results she actually hopes to find, how fast it can serve a list so that the user doesn’t go away) can be of great value. Making the address bar function directly as the window to the search engine saves time and keeps users happy.

In my case, once I enter a query in the address bar of the Chrome browser, a communication process starts directly between my computer and Google’s web servers; there is no need to ask the DNS anything again. As soon as Google’s servers get the query, they run it against a different big database, an index of search terms that Google’s crawler, Googlebot, has been updating continuously for years, and the query is processed according to over 200 factors before a result list is served to me.
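
The basic shape of querying a pre-built index, rather than scanning pages at query time, can be illustrated with a toy inverted index in Python. This is only a sketch of the idea; Google’s actual index and its 200-plus ranking factors are vastly more complex.

from collections import defaultdict

pages = {
    "page1": "free and open source software",
    "page2": "software takes command",
    "page3": "the internet galaxy",
}

# Build the inverted index once, ahead of time: word -> set of pages.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    words = query.lower().split()
    if not words:
        return []
    # A page matches when it contains every word in the query.
    return sorted(set.intersection(*(index[w] for w in words)))

print(search("software"))  # ['page1', 'page2']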

A query entered directly in Chrome’s address bar

It seems that this address bar, created in the first instance as the tool through which a user would enter the domain name of the site she meant to access (say, wikipedia.org), either because she knew it or because she could guess it, has evolved through user and developer experience into a more complex modular function. It is modular in the sense that it is made up of modules, “unit[s] whose structural elements are powerfully connected among themselves and relatively weakly connected to elements in other units” (Baldwin and Clark, 2000, p. 63). The weak links to other units in the browser are evident in that the address bar can be used independently of other functions, such as the bookmarks or the browsing history. But there are also less evident links established in this modular function.

As can be seen in the image above, a known feature of Google’s search engine is the Autocomplete function, by which the search box offers the user a series of potential search terms he or she may mean, based on data Google collects on that user’s past searches along with data collected about other people’s searches, including trending stories. This address bar, then, is made up of modules that not only combine certain browser functions more efficiently, but that also link it to databases of personal information about the individual user and about Google’s millions of users collectively, as well as to the algorithms that process these data according to user input.
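
A bare-bones version of prefix completion can be sketched in a few lines of Python. This is an illustration of the general mechanism only, not Google’s system, which blends personal history, aggregate query data, and trending topics through far more elaborate ranking.

def autocomplete(prefix, history, popular, limit=5):
    # Suggest completions: the user's own past searches first, then popular ones.
    prefix = prefix.lower()
    suggestions = []
    for source in (history, popular):
        for query in source:
            if query.lower().startswith(prefix) and query not in suggestions:
                suggestions.append(query)
    return suggestions[:limit]

print(autocomplete("soci",
                   ["sociotechnical systems", "society of mind"],
                   ["social media", "sociology degree"]))
# -> ['sociotechnical systems', 'society of mind', 'social media', 'sociology degree']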

The description so far has only reached the action of performing a search in a browser through its address bar, and yet several functions, interactions, and connections can already be observed. The browsing we take for granted is part of an ongoing process of many flows of information that happen so quickly it is easy to miss them, but they reveal a lot about how many processes actually take place on the web.

Making sense of the Internet in a time of political confusion

It is fascinating to read simultaneously this week about the history of the Internet and about how people in the United States are dealing with how applications of the Internet may have influenced the outcome of a presidential election in significant ways. In Computer: A History Of The Information Machine (2014), Martin Campbell-Kelly and William Aspray explain the evolution of the Internet and emphasize that when email was invented, it was not expected to be the “smash hit that it was” (p. 284). Indeed, the interest in the network of networks for communication among people was the big surprise that drove much of the growth of the Net.

Throughout its various phases of development, the social aspect of sharing information with others and finding common-interest spaces continued to develop until the point at which we are now. Going from bulletin board systems to different versions of social spaces, social media applications are today the doors to the Internet for many people. In the US, “79% of internet users (68% of all U.S. adults) use Facebook,” according to the Pew Internet Project. And after the surprising result of the recent election, a debate has been spurred about the influence of this platform in the shaping of public discourse.

In fact, Campbell-Kelly and Aspray mention that the ubiquity of computing really increased with mobile computing, which gave great momentum to what they call “the phenomenon of social networking” (p. 299). In their concluding words on the history of the Internet, the authors emphasize a really good point: there is an evolving dominance of “a small number of giants” (p. 305), such as Google, Facebook, Amazon, or Twitter, that are benefitting from a “systematic collection and use of personal data on individuals” (ibid.). When we think about how we now interact with the socio-technical system that is the Internet, this has to be considered, because these companies are now developing the software with which we do so.

As Manovich (2013) explains in his overview of software studies, “software has become our interface to the world, to others, to our memory and our imagination—a universal language through which the world speaks, and a universal engine on which the world runs” (p. 2). If we recognize the ubiquity of the Internet and the way we use it, and then recognize that there are only a few companies writing the software “on which the world runs,” it is clear there is an issue if they are not held accountable. As Prof. Irvine explains (n.d.), it is hard to explain what the Internet is because it is not one thing. As noted above, it is a socio-technical system, which means it is composed of several parts. In this memo, I focus on one aspect, the software through which we interact with this system and with others through it, but this does not mean that other aspects are less relevant for discussion. In this case, that software is very relevant, as has been evidenced by the discussions this week after the surprising election results in the US.

It is worth noting this is not the first time a country has questioned the role of social media in shaping public opinion and influencing national politics. A few months ago, a similar story occurred in Bolivia. A referendum was held to decide whether or not the president could run for reelection for a fourth term, and the result surprised the president when the No (barely) won. Support for the president had been thought to be a given. Both supporters and opponents of the referendum discussed how social media influenced the outcome: the former condemned its use, calling social media “tools of the imperialism” that only “collect garbage” and serve to spread “defamation and lies”; the latter saying these are tools to democratize access to information.

In both the US and Bolivia, election season was marked by a series of scandals that spread, along with misinformation and malicious commentary, on social media. And in both cases the results were uncertain and then said to be influenced by how conversation took place on social network applications. The difference was that, in Bolivia, social media apps were taken to represent either “US imperialism” as a whole or a neutral platform. In the US, the presence of the company behind the platform is starting to become clearer. Opinion articles now call on Mark Zuckerberg himself to find solutions to the problem of misinformation spreading and to at least acknowledge how easily echo chambers form on his platform (see for example the NYT or Vox). There is a call for accountability, even if Zuckerberg is in denial for now.

As Campbell-Kelly and Aspray rightly emphasize, “a platform dependent on voluntary sharing of personal information is highly dependent on not alienating users” (p. 302). Because this platform is run by a private company, how accountable it is depends on how much users ask of it. If users acquiesce to how the company operates through continued, unquestioning use, this “small number of giants” has no incentive to be more open about how they develop software. In Internet governance parlance, this is an issue of privatized governance of the Internet: how the Internet functions at its application layer is decided in private, driven by private concerns. If we go back to the idea that the socio-technical system that is the Internet is of a particular kind because, as Prof. Irvine puts it, “it is part of our long history of symbolic cognition, technical mediation, and communication,” it is clear that claiming accountability today is paramount. And given that the reach of these platforms is global, that accountability has to account for users in other countries in which information spreads locally. Do local interactions with information through the algorithms of social network applications vary due to contextual differences? This too has to be investigated by companies like Facebook if they are to get serious about how their algorithms influence public opinion.

Zuckerberg explains that Facebook is a tech company, a platform, not a media company, and therefore does not need to be regulated as such. But the term platform is tricky when it deals with so many aspects of our interactions with the Internet. Through a systems view of this socio-technical system, it is clear that an interaction with the Internet requires several layers of infrastructure, physical and virtual, to be in place. It also requires users to be in sync regarding how to communicate on the platform, for which a set of symbolic and cultural artifacts are put into play as well. When an interaction happens through Facebook, interactions among actors are activated through this “platform” in ways that must be acknowledged. In his presentation on why the term “platform” is tricky (The Politics of Platforms), Gillespie shows how, by presenting themselves as platforms, companies like YouTube or Facebook can manage several types of interactions all at once, as seen in the image below: interactions with end users, advertisers, media partners, and lawmakers.


Gillespie on platforms

The history of computers and the Internet has developed in such a way that today they are part of our everyday (mis)understanding of the world. Still, while it is true that, also from a systems perspective, the socio-technical system that is the Internet is good at hiding its complexities, they are tangible and can be felt when they become problematic. As users of these systems, and as citizens who use them in symbolic ways that are part of our civic engagement, we have to find ways to continuously challenge the blackboxing of these complexities. If recent events in politics make anything evident, it is that demanding accountability from the “giants” is key.


Martin Campbell-Kelly and William Aspray. Computer: A History Of The Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014

Martin Irvine, Introducing Internet Design Principles and Architecture: Why Learn This?

Lev Manovich, Software Takes Command: Extending the Language of New Media. London; New York: Bloomsbury Academic, 2013.

 

Metamedium concepts as common practices

One of the most mind-boggling aspects of studying the history of computing and networking is realizing how the concepts we discuss and live with today were thought of so long ago, before any of the technologies could be conceived as such. The fact that ideas about what could be done with computers, and about what can be done with media, are abstract enough to have been thought of so long ago explains why they translate into so many possible actions. Defining computers as a metamedium, Alan Kay explains that their content is “a wide range of already-existing and not-yet-invented media” (Manovich, 2012, p. 44). The possibilities are open-ended because the level of abstraction at which the media is treated allows creativity to take different forms, though this of course depends on other factors.
As Manovich explores in his study of “media after software,” that is, of how the use of software changed the way media are thought of, created, treated, and shared today, the fact that the computer evolved as a metamedium (allowing for the simulation of previous media but also for the creation of new media) was not a coincidence or a predetermined path. Creators sought this type of development over the years, as users interacted with these systems individually, but also collectively, as networking with others and sharing and co-creating became increasingly possible. In writing his book, Manovich provokes the reader not only to inquire into the history of media, and into how we understand what a medium and therefore a metamedium is today, but also to probe the limits of the metamedium of today.
The examples he provides, however, to show how this metamedium is used through abstract conceptualizations that allow us to treat media diversely, also shed light on places where caution is needed. A good example is data visualization, a term that has become popular over the past few years as the use of data for evidence-based storytelling became prominent in organizational and news outreach. This concept is heard increasingly among more audiences,

We really do need to think like computer scientists

In Introduction to Computing: Explorations in Language, Logic, and Machines, Evans (2011) makes the argument that computer science should be taught as “a liberal art, not an industrial skill” (p. ix). He explains that everyone should study computing because “1. Nearly all of the most exciting and important technologies, arts, and sciences of today and tomorrow are driven by computing” and “2. Understanding computing illuminates deep insights and questions into the nature of our minds, our culture, and our universe” (p. 1). After only a few lessons of Codecademy’s Python introductory course, Evans’ motivation becomes clear, as the logic of computing sets in motion a manner of thinking that is very distinct and, in a way, empowering. Even within the first lessons of the course, when learning how to properly format instructions in the Python programming language, thinking from an instructive perspective has a different feeling from other manners of thinking. Moreover, the idea that you have to think logically and step by step is empowering because it gives you a feeling of control; even with basic programming instructions, you are the one giving instructions, from which results emerge.

An argument made by Denning and Martell (2015) is that computer science is a branch of science in its own right because it has its own approach to discovering facts. Moreover, they argue that its approach is different from that of other sciences because it is transformative, not just focused on discovery, classification, storage, and communication: “Algorithms not only read information structures, they modify them” (p. 15). And it is generative, not just descriptive: “An algorithm is not just a description of a method for solving a problem, it causes a machine to solve the problem” (ibid.). This way of thinking is felt right away when writing a few lines of basic code by which I, as the programmer, could define variables and determine how they would behave in relation to other variables and different logical instructions (a small example of the kind of exercise I mean follows below). However, the idea of machine learning tempered this empowering feeling as I kept going with the module.
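
For illustration, an exercise of the kind those early lessons involve might look like this (my own minimal example, not taken from the course):

price = 12.0
tax_rate = 0.08
total = price * (1 + tax_rate)   # variables behaving in relation to one another

if total > 10:                   # a logical instruction laid down by the programmer
    print("Over budget:", round(total, 2))
else:
    print("Within budget:", round(total, 2))

Nothing here happens by chance: the output follows exactly from the instructions given.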

Both Denning and Martell’s and Evans’ proposals make sense for today’s world. On the one hand, distinguishing the scientific approach of computer science from that of other sciences is essential at a broader, more public level at this point. While computer scientists already know this as they rapidly advance the field (we are already speaking about artificial intelligence and high levels of machine learning), the public may not be as clear about the wide world that is computing. As explained by Wing (2006), “thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction” (p. 34), but the main narrative about programming we have today may not tell the whole story. On the other hand, Evans is right: computing is everywhere, and understanding it can only help us better understand ourselves and our culture.

Making computing more widely accessible is a challenge on several fronts, but the one that came to my mind, considering the history of computing told by Campbell-Kelly and the increasing amount of news and media we see today about algorithms, machine learning, and artificial intelligence, is the constantly widening digital divide that is part of the computing field. The idea that computing has to be more accessible has been pushed forward by policies emerging in different sectors and at different levels, which is why a website with such well-designed self-learning software as Codecademy exists for free today. However, even such programs may not fully illustrate to users how fast the field is growing, and this lack of awareness means those who are not learning this logic are being left behind.

As I mentioned at the beginning, the feeling of empowerment that comes from being the one giving instructions was strong when I started the learning module. As I was thinking about this, Ada Lovelace’s argument came to mind: “The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform” (Evans, p. 35). However, after a few more lessons, I reached the stage in which you can program interactivity with the user, and I realized that keeping the logic of computing in mind is essential not just when I want to code something, but also while constantly interacting with ubiquitous computing.
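That stage looks roughly like the sketch below (the prompt and messages are invented, not the course’s own). The shift is that the program’s behavior now depends on input the programmer cannot foresee:

```python
# A first brush with interactivity: the outcome depends on the user.
answer = input("Do you want to continue? (yes/no) ")
if answer.strip().lower() == "yes":
    print("Continuing...")
else:
    print("Stopping.")
```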

In an interview, mathematician Cathy O’Neil, author of the book ‘Weapons of Math Destruction,’ explained that algorithms are such a big part of our lives because they process not just the information we personally input into our computers, but also information about us that companies use to make decisions that affect our lives. Big data that profiles people in order to target services or information at them may end up causing harm because, as she puts it, it is used to categorize people into winners and losers. If the algorithms determine you are more likely to be persuaded to vote, for example, a certain type of information will reach you that would not if you were categorized differently, even mistakenly. Our access to information is mediated by algorithms, and I think this means that the logic of computing has to be part of our media literacy as well.
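A deliberately crude, hypothetical sketch can show the shape of the problem. Every field, weight, and threshold here is invented for illustration; real profiling systems are vastly more complex and more opaque:

```python
# Hypothetical profiling sketch: invented fields, weights, and threshold.
profile = {"age": 34, "zip_code": "20001", "past_turnout": 1}

# A scoring rule quietly encodes assumptions about who is worth persuading.
score = 0.4 * profile["past_turnout"] + 0.1 * (profile["age"] > 30)

# The decision rule then sorts people into "winners" and "losers."
if score > 0.3:
    shown = "voting reminders and political ads"
else:
    shown = "no political content"

print(shown)
```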

When consuming and processing information today, it is important that we develop a layer of thinking in which we question how information was processed by algorithms in order to be shaped the way it is. If there is anything taking the Python course made clear to me, it is that nothing in computing is accidental: something may have been instructed by mistake, but it does not happen by chance. What happens happens because it has been instructed to happen. When we apply for services such as health insurance and receive certain information, we have to be able to question how our profiles were processed. And when consuming information online, we also have to keep asking why we find some information instead of other information. The issue, as put by O’Neil, is that we as a society blindly trust algorithms: “we don’t push back on algorithmic decisioning, and it’s in part because we trust mathematics and in part because we’re afraid of mathematics as a public.” This is highly problematic when we consider that it gets in the way of our interaction with culture and knowledge in society today.

As noted, these challenges are starting to be addressed in different ways today. The idea that everyone should learn basic programming is increasingly part of the narrative, especially in developed countries. In 2013, for example, Code.org was launched, funded heavily by the private tech industry, to promote this idea among children by providing tools for teachers, schools, and kids. And the US government has been investing more in getting people into the STEM (science, technology, engineering, and math) fields. Part of this effort should include learning the abstract method of computational thinking not only to create but also to consume. As Evans explains, when computer scientists are faced with a problem, they think about it “as a mapping between its inputs and desired outputs” and thus “develop a systematic sequence of steps for solving the problem for any possible input, and consider how the number of steps required to solve the problem scales as the input size increases” (p. 16). As consumers of information, we also need to consider how our information has gone through a number of steps before reaching us and is thus shaped in a particular way. We already do this when we think about the news we consume: we know there is a journalist who researched and wrote an article along with an editor, and that editing decisions went into the topic, the framing, the placement of the news item, and so on. We need to add a layer of thinking in which we consider that information was also selected and processed through the use of algorithms. We need to be able to imagine the mappings mentioned by Evans, but for this we need to know they are there.
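Evans’ framing can be sketched in a few lines of Python (my own example, not his): the function is a mapping from an input to a desired output, and the number of steps it takes grows with the size of the input.

```python
def find_max(numbers):
    """Map a non-empty list of numbers (input) to its largest element (output)."""
    largest = numbers[0]
    for n in numbers[1:]:   # one step per element: the work scales linearly
        if n > largest:
            largest = n
    return largest

print(find_max([3, 41, 7, 12]))  # 41
```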


David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines, 2011.

Peter J. Denning and Craig H. Martell, Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

Martin Campbell-Kelly, “Origin of Computing.” Scientific American 301, no. 3 (2009).

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (2006).

Information transmission and generation

Why is the information theory model essential for everything electronic and digital, but insufficient for extending to models for meaning systems (our sign and symbol systems)?

“Shannon’s classical information theory demonstrates that information can be transmitted and received accurately by processes that do not depend on the information’s meaning. Computers extend the theory, not only transmitting but transforming information without reference to meaning. How can machines that work independently of meaning generate meaning for observers? Where does the new information come from?”

Denning and Bell pose this question in their piece on the information paradox, the conflict between the classical view of information, in which information can be processed independently of its meaning, and the empirical fact that meaning, and thus new information, is generated in that very processing, as applied to computers today.

Shannon’s classical information theory posited that information could be encoded and transmitted by a sender with enough redundancy to overcome noise and equivocation, thus allowing a receiver to decode it and make sense of the message. The main concern was to “efficiently encipher data into recordable and transmittable signals” (Floridi, 2010, p. 42). In his Very Short Introduction to Information, Floridi explains that MTC (the Mathematical Theory of Communication proposed by Shannon) applies so well to information and communication technologies, like the computer, because these are syntactic technologies, ready to process data on a syntactic level (2010, p. 45). As Floridi explains, for information to exist, according to the General Definition of Information (GDI), there must be data that is ‘well formed’ and has meaning. ‘Well formed’ means the data is “rightly put together according to the rules (syntax) that govern the chosen system, code, or language being used” (2010, p. 21). Shannon’s theory deals with information at this level in order to find a way to encode and transmit it.
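A minimal Python sketch can illustrate this syntactic character (my own toy example, not Shannon’s or Floridi’s): a simple repetition code adds enough redundancy that the receiver recovers the message from a noisy channel by majority vote, without any reference to what the bits mean.

```python
def encode(bits, k=3):
    # add redundancy: repeat each bit k times
    return [b for b in bits for _ in range(k)]

def decode(signal, k=3):
    # majority vote over each group of k received bits
    return [int(sum(signal[i:i + k]) > k // 2)
            for i in range(0, len(signal), k)]

sent = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
noisy = list(sent)
noisy[1] = 0               # the channel flips one bit
print(decode(noisy))       # [1, 0, 1] -- the message survives the noise
```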

The question posed by Denning and Bell arises because we see people today communicating and creating through interactive computer programs: how is meaning emerging from a transmission of data at this syntactic level? They solve the paradox by relying on a more comprehensive theory of information, proposed by Rocchi, which holds that information has two parts, sign and referent, and that meaning emerges in the link between the two (p. 477). Moreover, they explain that the interactive features of today’s computers allow for the creation of meaning: every time a user’s interaction with a computer produces a new output, meaning (new information) emerges because the user puts together sign and referent, making sense of the transmitted data.

The authors paraphrase Tim Berners-Lee’s interpretation of this process on the web: “someone who creates a new hyperlink creates new information and new meaning” (p. 477). In doing so, they help illustrate how the sociotechnical system that is the Internet, which can be seen from a systems perspective that takes into account its different modular components and the way they interact, can also be seen from the perspective of information transmission. In both accounts, the system only makes sense once all components are considered: not only the sender, receiver, channel, and message, but also the processes by which the message is linked to a referent within the broader system. The classical information theory model can thus be complemented by this understanding of information and its two parts, and still remain an essential part of our electronic information and communication technologies.


Peter Denning and Tim Bell, “The Information Paradox.” American Scientist 100 (Nov-Dec 2012).

Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.

Considering the algorithm

“Media of representation shape our understanding of the world. They do not just contain information; they also determine what can be communicated. They provide the loom on which we can weave the fabric of human culture.” (Janet H. Murray, 2012)

If we consider, from a systems point of view, the different components of the digital applications we use today to obtain information and communicate with others, attending to the affordances, constraints, and interfaces that emerge between users and apps in a cognitively distributed way, an interesting group of components to analyze is algorithms.

Algorithms are the step-by-step instructions programmed into software that are meant to process specific input in a specific way to obtain a specific result. The results sought depend on the many potential uses of an application, and each is pursued through a series of algorithms that work step by step. Each of these steps represents an interface in which information is processed according to a structure and logic based on cultural webs of meaning that, as explained by Murray (2012), are “making meaning” as they go.
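In its barest form, an algorithm in this sense looks something like the following sketch (a toy example of my own): a specific input, a fixed series of steps, a specific result.

```python
def contains(items, target):
    for item in items:       # step through the input one element at a time
        if item == target:   # compare at each step
            return True      # a specific result for a specific input
    return False

print(contains(["news", "photos", "events"], "photos"))  # True
```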

Let’s consider a search algorithm. If I search for the term “Black Lives Matter” on Google, Facebook, and Twitter, I obtain different outputs and displays of information about this social movement campaigning against systemic racism.[i] From a perspective of distributed cognition (Zhang and Patel, 2006), we can observe mixed affordances in the display of results each search engine produces. In each application, information is organized according to physical, perceptual, and cognitive affordances (Zhang and Patel, 2006) that allow me to interact with it according to the cultural conventions of each platform.

On Google, for example, I first obtain a list of “All” results, which begins with an ad, followed by the link to the movement’s official website, a subsection of news items, a link to the movement’s official Twitter account, the Wikipedia entry, and more Twitter links. I see only one image on this display, I have the option of filtering results by media type (images, video) and content type (News and Shopping), and I am offered further search tools.

On both Facebook and Twitter, on the other hand, the first result list displayed is the “Top” list, and each offers the option of filtering by posting time (“Latest” on Facebook, “Live” on Twitter). The results on these social media apps are posts by people or organizations, and each contains information about how widely it has been shared (number of views, comments, shares, likes, retweets).

In both apps there are images of various sizes, changing the perceptual feeling of what is more prominent, each according to the conventions of each site. Facebook has been promoting video consumption lately, so the first category of results displayed is videos. What follows are “Top Public Posts” and “Posts from Friends and Groups.” As is characteristic of Facebook, the display of results provides cognitive affordances for me to interact with this information based on what most of a Facebook-selected group of users is doing (“Top Public Posts”) and what a self-selected group is doing (“Posts from Friends and Groups”). Twitter also offers this choice, but it requires further interaction with the platform for me to filter information to “People You Follow.” The social aspect of the information is much more prominent in the affordances of the Facebook and Twitter displays than in Google’s, and there seems to be more indication of what makes the results relevant (how widely and recently they have been shared) on the social apps than on Google (which does display time stamps for news items and Twitter links but gives no indication of relevance for the other results).


These results have been sorted by algorithms that processed my input query using databases and criteria that I cannot see or know, but once they are displayed, they provide a space, an interface, of affordances and constraints for me to interact with information about this movement. There is a moment in this interaction that is blackboxed, but regardless, each results display is a space of distributed cognition among the algorithms, their displayed results, and my (and other users’) interactions with them. As Murray puts it in the quote above, it is a space of meaning making, but as each is a space designed according to cultural conventions, it “determines what can be communicated” (affordances and constraints), and thus how this meaning can be constructed.
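A hypothetical sketch can make the point about blackboxed criteria concrete. The items below are identical, but each invented sorting criterion, one favoring popularity and one favoring recency, produces a different display, much as the “Top” and “Latest”/“Live” lists do (all fields and numbers are made up):

```python
results = [
    {"title": "Official site", "shares": 120, "hours_old": 300},
    {"title": "News article",  "shares": 900, "hours_old": 5},
    {"title": "Friend's post", "shares": 12,  "hours_old": 1},
]

# Two different (hidden) criteria applied to the same underlying items.
by_popularity = sorted(results, key=lambda r: -r["shares"])    # a "Top" list
by_recency    = sorted(results, key=lambda r: r["hours_old"])  # a "Latest" list

print([r["title"] for r in by_popularity])  # News article first
print([r["title"] for r in by_recency])     # Friend's post first
```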

According to media scholar Tarleton Gillespie (2014), algorithms are now “a key logic governing the flows of information on which we depend,” and thus require our understanding and attention. From the design-thinking point of view of this seminar, it is interesting to consider the algorithm as one of the key (blackboxed) components with which we interact in the cognitively distributed systems of these “media of representation.” Algorithms themselves are constructed according to a cultural symbolic logic, and they provide us with affordances and constraints for interacting with information. (Blackboxed) algorithms are therefore an important component to consider when analyzing how we construct meaning in distributed cognition systems today.

[i] https://en.wikipedia.org/wiki/Black_Lives_Matter

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Jiajie Zhang and Vimla L. Patel, “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006).

Tarleton Gillespie, “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society (2014).

De-blackboxing in research

It is a typical tale that media technologies are received by societies with one of two reactions: either to celebrate them and imagine a utopia in which the technology finally solves some major issue as if by magic, and thus represents a liberating force for individuals and societies, or to condemn and fear them by emphasizing features that can have “negative effects.” We have seen these narratives in many forms with regard to all sorts of media, from books, to the telephone, to television, and the Internet. A systems view of media, technologies, and sociotechnical artifacts, however, presents us not only with an argument against the technological determinism of these types of responses, but also invites us to consider that our relationship with media technologies, and with technologies in general, is not as simple as a “social construction of technology” either.

“To conceive of humanity and technology as polar opposites is, in effect, to wish away humanity: we are sociotechnical animals, and each human interaction is sociotechnical. We are never limited to social ties. We are never faced only with objects.” (Latour, 1999, p.214)

My questions for this week revolve around using the systems view not only to think about design, but also to think about how to use this approach for researching media technologies. How to work out this de-blackboxing would be one of the first challenges. The next may be to determine the level at which de-blackboxing can serve a specific research question. A systems view would mean using the principles of modularity, recursiveness, and combinability to make sense of how different components are combined, how they interact, and the combined effects and dynamics they create, each on its own level, within the broader social system of which they are part. This means that instead of taking media technologies as closed units of analysis, we need to look further, decompose them, and make sense of which level or levels may be more relevant for analysis. When Latour (1999) goes through the eleven layers of his “Myth of Progress,” he explains that each of the sociotechnical layers he discusses is different from the one below or above it, as each has gone through an iteration that has changed it, either from the human/“subjective” side or the non-human/“objective” side. Using this to approach a media technology means that an analysis would have to consider not only the role of the technology in a group, but also its evolution, and specifically its evolution with regard to specific groups. The analysis also has to be tailored specifically to those components that are relevant.

As an example, a hashtag on Twitter, as a media technology, could be decomposed in various ways. As a feature of social network sites, it could be decomposed into its technical components (the hashtag serves as a link; it also organizes a page on which all tweets that include it are displayed in reverse chronological order, or according to popularity, posting time, media use, and other options; it is part of a popular social media outreach repertoire; and so on). As a term used by a social movement, it has a particular social, cultural, and historical context, one that makes it part of a larger system of actors/actants and processes. As a hashtag or key term on the web, it also becomes part of a larger system, one that includes information about the topic and that can be linked across the web. (Or, it may be part of a larger collection of information online but, because it belongs to a proprietary platform, may not actually be linked to all the information to which it could be linked.) Decompositions may go a number of different ways, which is why this approach is helpful in making sense of the different dynamics that take place when we speak of sociotechnical systems. Moreover, another issue to analyze is that of the different iterations of the sociotechnical mutually shaping each other. An analysis of a hashtag would also have to consider how its use has evolved over time, whether there are specific moments that can be considered to define each iteration, what was left behind in each iteration, and so on. But to a certain extent, not all components can realistically be de-blackboxed and analyzed, so defining these types of limits in research design could be a helpful discussion.
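The most technical layer of that decomposition can be sketched in a few lines of Python. The tag-page URL pattern below is invented for illustration, but detecting a “#” term and turning it into a link to a page that aggregates matching posts is the general mechanism:

```python
import re

def linkify_hashtags(text):
    # wrap each #word in a link to a (hypothetical) tag page
    return re.sub(r"#(\w+)", r'<a href="/hashtag/\1">#\1</a>', text)

print(linkify_hashtags("Marching today #BlackLivesMatter"))
# Marching today <a href="/hashtag/BlackLivesMatter">#BlackLivesMatter</a>
```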

Social network sites as cognitive technologies

Social network sites, viewed as collectively symbolic cognitive technologies (Cole, 1996; Norman, 1991), are technologies that materialize previously ideal cultural artifacts such as birthdays or ways of sharing information. Moreover, because it seems that, as Hollan, Hutchins, and Kirsh (2000) hoped, continuously redesigning social platforms according to users’ practices and needs is a prominent practice in the social media business, these technologies are constantly evolving as they adapt to, and also adjust, these cultural artifacts over time.

If we consider certain actions we take on social network sites to interact with our “Friends,” we can see that many of them are digitized versions of things we used to do before having social media in our lives, such as wishing someone a happy birthday, but that they have also been shaped into new forms by features incorporated into the technology, such as the Facebook feature that notifies you of your Friends’ birthdays and prompts you to post on their wall or send them a message. While we would congratulate people in our different social networks on their birthdays before social media, it is likely we would not have remembered everyone’s birthday and sent them a message as often. Congratulating someone on their birthday is in itself a cultural norm; the idea of commemorating the date someone was born is a cultural act, learned through ages of practice across cultures and embedded in institutions in various forms. At elementary school, kids sing to their classmates; when they are older, people can expect some paid time off work, through either informal norms or law. And on social network sites, people can expect to be reminded of their contacts’ birthdays and to have their own birthdays announced to their contacts. Moreover, while wishing somebody a happy birthday may once have been an ideal artifact, sometimes materialized in a gift or a card, on social network sites it is always materialized (and permanently archived), if we regard a digital message as such.

Wanting to keep the site focused on interpersonal communication, Facebook’s designers understood the significance of the birthday as a cultural artifact and modified the feature over the years to make it more prominent. Facebook’s algorithms show you popular activity in your network of contacts, and birthdays tend to generate it, so a contact’s birthday tends to appear on someone’s wall, making it an “event” in a user’s feed. When you might not have been enthused about sending a message to a contact, seeing more people do it may give you a push, and thus establish new lines of communication across networks. The construction of the birthday changes along with the emerging dynamics that take place on social network sites. In this way, our collective symbolic cognition about this artifact is carried on in an ever-changing way.

A similar example can be seen on Twitter, where users have also shaped these cognitive technologies according to the kind of use they needed from them. When Twitter was originally designed, it did not have the now famous Retweet feature, as the action of re-tweeting somebody had not been anticipated by the designers. Once users got a hold of the system, they realized that part of how they wanted to communicate was by spreading information effectively within the character limit of the platform. A re-tweet allows a user to share somebody else’s message in a format that marks the post as somebody else’s (by saying it is a re-tweet) and that links to that somebody, thus giving proper credit and making that someone easy to find (by linking to their username). As this practice, which required users to copy-paste somebody’s tweet and format it as a re-tweet, became popular, the designers of the platform took notice and turned it into a button that facilitates the task. The platform now has a key feature that emerged from how users decided to share information through collectively agreed-on criteria. Retweets are now not only an indicator of how many times a message has been spread, but a symbol of popularity within the Twitter community. We can see here as well how social network sites, as cognitive technologies, serve as both mediators and objects of culture. Their users make use of them not only to express cultural norms but also to shape them, while at the same time these objects carry meaning on which users act.
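The pre-button practice can be sketched as follows. The helper function is hypothetical, but the “RT @username:” format was the convention users collectively settled on:

```python
def manual_retweet(username, text, limit=140):
    # credit the original author and link to them via their @username
    rt = f"RT @{username}: {text}"
    return rt[:limit]  # trim to the platform's character limit

print(manual_retweet("adalovelace",
                     "The Analytical Engine weaves algebraical patterns."))
# RT @adalovelace: The Analytical Engine weaves algebraical patterns.
```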


Michael Cole, “On Cognitive Artifacts.” From Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996.
Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991.
James Hollan, Edwin Hutchins, and David Kirsh, “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions on Computer-Human Interaction 7, no. 2 (June 2000).