Mariana Leyton Escobar
This essay uses secondary sources, mainly Gabriella Coleman's "Coding Freedom: The Ethics and Aesthetics of Hacking" (2012) and Manuel Castells' "The Internet Galaxy: Reflections on the Internet, Business, and Society" (2003), to perform a preliminary analysis of how the development of software came to be the center of two ways of thinking about technology. The main concern is how this question could be explored with the method proposed by actor-network theory. The findings set the stage for a more focused analysis based on primary data collection.
In a compelling anthropological account of the evolution of the free and open source software culture, Gabriella Coleman shares the following poem:
Programmers’ art as
that of natural scientist
is to be precise.
complete in every detail of description, not
leaving things to chance.
reader, see how yet
deserve free speech rights;
see how numbers, rules,
patterns, languages you don’t
yourself speak yet.
still should in law be
protected from suppression,
called valuable speech!
(Schoen, cited in Coleman 2012, p. 161)
The poem is compelling on its own, making the case for why programming code should be considered, and thus protected as, free speech. Indeed, as Coleman (2012) explores in her study, the free and open source movement developed a culture around "broad, culturally familiar visions of freedom, free speech, rights, and liberalism that harks back to constitutional ideals" (p. 2). In this sense, the poem is made more complex because it represents a movement with specific ideals. But it goes beyond that: the poem, written in 1999, was actually part of a larger, worldwide protest against the arrest of the then-sixteen-year-old free and open source software developer Jon Johansen (Coleman, 2012).
One of the ways in which DVDs are protected against being copied and distributed without permission is to embed encryption in them, a measure known as digital rights management (DRM). DRM refers to a family of access-control technologies developed to restrict the use of proprietary hardware and copyrighted works, and the Digital Millennium Copyright Act (DMCA) of 1998 established such measures as the software not-to-meddle-with. In other words, the DMCA establishes, among other things, that the "production and dissemination of technology, devices, or services intended to circumvent" measures that control access to copyrighted works, such as DRM technologies, is a crime.
Johansen had written, along with two anonymous developers, a piece of software called DeCSS that allowed people to unlock the encryption encoded in DVDs to control their distribution. The poem is in fact a transcoding of the code of that software (Coleman, 2012, pp. 161, 170). A piece of the code can be seen in the image below, a snapshot of a page in Coleman's book.
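To see why the line between code and speech is so thin, consider a deliberately toy sketch in Python. This is purely hypothetical and far simpler than the encryption scheme DeCSS actually reversed, but it shows the relevant point: a descrambler is just a short, stateable procedure, which is precisely what makes it possible to recite it as a poem or print it on a t-shirt.

```python
# Toy illustration only: a trivial XOR "descrambler."
# The actual cipher DeCSS reversed was more elaborate, but it too
# reduces to a short arithmetic procedure that can be written down.

def descramble(data: bytes, key: int) -> bytes:
    """Recover the original bytes by XORing each byte with the key."""
    return bytes(b ^ key for b in data)

# XOR is its own inverse, so "scrambling" uses the same function:
scrambled = descramble(b"region-locked video", 42)
assert descramble(scrambled, 42) == b"region-locked video"
```

The point is not the cipher but its expressibility: the same procedure could be stated in English prose, in haiku, or in C, which is what gave the DeCSS protest its rhetorical force.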
The poem then becomes a technical artifact that is part of a complex sociotechnical system built around the philosophy of creating and sharing software free of restrictive intellectual property rights.
That sentence contains several components that this essay will expand in order to explore the free and open source software movement as a sociotechnical system, one that has emerged in parallel to the commercial software sociotechnical system with the development and expansion of personal computers, the Internet, and the web. Following the actor-network theory method proposed in science and technology studies (STS), it will offer a preliminary analysis of how the development of software came to be the center of two ways of thinking about technology. Using secondary sources, it will evaluate the type of nodes and links that would need to be followed to explore this question in a subsequent, more focused study.
“To conceive of humanity and technology as polar opposites is, in effect, to wish away humanity: we are sociotechnical animals, and each human interaction is sociotechnical. We are never limited to social ties. We are never faced only with objects.” (Latour, 1999, p. 214)
In 1980, Langdon Winner, an STS scholar, wrote the popular essay "Do Artifacts Have Politics?" arguing that they do. In his view, technology should not be seen from a deterministic perspective by which it is expected to have specific impacts on society, but he also calls attention to the fact that social-deterministic theories of technology, which consider not the technology but the socioeconomic system in which it is embedded, go too far in draining the technology itself of any interest. Without denying the usefulness of a social constructivist approach, Winner argued that to understand how artifacts have politics, the technological artifacts themselves had to be taken seriously. Without focusing on a specific technology, his argument is that artifacts have politics insofar as they are the result of structuring design decisions, decisions that, once the artifact is finalized and put in the world, influence "how people are going to work, communicate, travel, consume, and so forth over a very long time" (Winner, 1980).
A good example of both ideas, that a technological artifact can structure how people organize and that this influence can last a long time, is the QWERTY keyboard configuration. The QWERTY layout does not best satisfy any specific design requirement, for its users or for the hardware (or now software) that holds it, and yet it has not changed since its inception and is likely to persist. Paul David (1985) offers a great account of the "one damn thing follows another" story that led to this situation, based on the concept of path dependence. This economics concept explains how certain outcomes can result from "historical accidents" or chance "rather than systemic forces" (p. 332).
Among the three factors David identifies as determinant in the history of the QWERTY keyboard is "technical interrelatedness" (p. 334), the need for system compatibility or interoperability among the different parts of a technical system. The typewriter in this case was considered an instrument of production, as it was at first mostly bought by businesses that would invest in training workers to memorize and efficiently use the QWERTY keyboard. Thus, the compatibility that was valued by the time the market for typewriters started to grow, circa 1890, was that of the keyboard with human memory. In this way, not only the keyboard, but a specific design of the keyboard, had structured the organization and budget of businesses in a way that eventually determined that we are still using a layout designed for typing with ten fingers on phones on which we type with two thumbs. This back and forth, with technology structuring social forces and then being shaped by those very forces, is at the very center of what is meant by a sociotechnical system.
Through a philosophical characterization of technical artifacts (as opposed to natural or social objects) and their context of use, Vermaas et al. (2011) propose a baseline concept of the matter at hand. To begin with, a system can be defined as "an entity that can be separated into parts, which are all simultaneously linked to each other in a specific way" (p. 68). A hybrid system is a system in which the components that make it up are essentially different, or, in the authors' words, "components which, as far as their scientific description goes, belong in very many different 'worlds'" (p. 69). A sociotechnical system is then a hybrid system in which certain components are described by the natural sciences and others by the social sciences (ibid.). In such a system, there can be many users at one time, and they can take on the role of user, operator, or both (ibid.). A sociotechnical artifact is then the "redefinition of technology" as a node in a sociotechnical system (Irvine, 2016).
The Social and the Technical?
Recognizing the effect that the cultural structuring of technological innovations could have, and that social and cultural developments could be understood by looking at the technical base of such developments, Régis Debray (1999) proposed mediology as a methodology to explore "the function of a medium in all its forms, over a long time-span (since the birth of writing), and without becoming obsessed by today's media" (p. 1). Indeed, Debray did not refer to a study focused on "the media," but to one focused on the relationship between what he calls "social functions," such as "religion, ideology, art, politics," and the "means and medium/environment [milieux] of transmission and transport" (ibid.). The focus of this methodology is on the relations between "the social" and "the technical," but with the definition of the latter expanded to include not just the technical artifact, the medium, but also its environment.
While Debray's (1999) proposal expands what is to be understood by "the technical," he maintains a duality between it and "the social," something that actor-network theory (ANT), another method for exploring sociotechnical systems, removes. Bruno Latour, one of the key proponents of this approach, argues that such dualism needs to be discarded because, misguided, it has only served to hide a more complex reality: that humans are "sociotechnical animals, and each human interaction is sociotechnical" (1999, p. 214). In Pandora's Hope (1999), Latour offers a "mythical history of collectives" in which he explores eleven levels through which human and non-human objects (actants) are theorized to have co-evolved, as well as four interpretations of what technical mediation means, to explain how humans and non-humans can "fold into each other." His theoretical analysis aims to show how humans and non-humans are part of one and the same historical process, which has resulted in the current "collective," ANT's term for the assemblage of humans and non-humans, used instead of the term "society."
Technical mediation and four moments of association in ANT
The four ways in which technology is a mediator are important for understanding a key concept in using ANT as a method of analysis, as they are the means by which agency is distributed in a network. Collectives change as humans and non-humans articulate different associations among themselves according to specific purposes:
- Translation: the means by which two or more actors (human or non-human) articulate and align their individual goals.
- Composition: the means by which the articulated individual goals become a different, composite goal through successive translations.
- Enrollment: the process by which the association formed produces joint outputs through a blackboxed process (a process in which only inputs and outputs can be observed, while what happens between them is not easily discernible). This moment can vary depending on how many components are coming together, the types of goals involved, and so on. Once the actors align their goals and create a blackbox, they act as one: a new actant is created, leading to the last step.
- Displacement: the creation of a new hybrid, a composite of human(s) and non-human(s), which forms a new collective with distinct goals and capacities. (Latour, 1999, pp. 176–198)
ANT as a methodology can then be used to understand how agency is distributed in different phenomena (not just "social" phenomena, but hybrid phenomena) of which sociotechnical artefacts are a part. To apply it, Latour (2007) explains, it is necessary to be extremely observant and to collect all data that evidences traces of human or non-human components establishing links among each other to pursue certain goals. By doing this, and through a thorough description of this thick data, he suggests it is possible to understand how agency is distributed among humans, non-humans, mediators, events, and the blackboxes that hide some assemblage of them (Latour, 2007).
By retracing these links, reversing the blackboxing, and exploring their historicity, we can use ANT to understand why sociotechnical systems work the way they do, at what moments there were alternatives, and in what way the system found some level of equilibrium by blackboxing some assemblages. In this case the focus will be on understanding how the development of software came to be the center of two ways of thinking about technology.
ANT is a theory filled with new terminology that can be confusing. As a thorough account of it goes beyond the scope of this essay, I include as a supplement to this section a selection of the glossary Latour shares in Pandora's Hope (1999).
BLACKBOXING:
An expression from the sociology of science that refers to the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become. (p. 304)
COLLECTIVE:
Unlike society*, which is an artifact imposed by the modernist settlement*, this term refers to the associations of humans and nonhumans*. While a division between nature* and society renders invisible the political process by which the cosmos is collected in one livable whole, the word "collective" makes this process central. Its slogan could be "no reality without representation." (p. 305)
CONCRESCENCE:
A term employed by Whitehead to designate an event* without using the Kantian idiom of the phenomenon. Concrescence is not an act of knowledge applying human categories to indifferent stuff out there but a modification of all the components or circumstances of the event. (p. 305)
EVENT:
A term borrowed from Whitehead to replace the notion of discovery and its very implausible philosophy of history (in which the object remains immobile while the human historicity of the discoverers receives all the attention). Defining an experiment as an event has consequences for the historicity* of all the ingredients, including nonhumans, that are the circumstances of that experiment (see concrescence). (p. 306)
HISTORICITY:
A term borrowed from the philosophy of history to refer not just to the passage of time (1999 after 1998) but to the fact that something happens in time, that history not only passes but transforms, that it is made not only of dates but of events*, not only of intermediaries* but of mediations*. (p. 306)
MEDIATION vs. INTERMEDIARY:
The term "mediation*," in contrast with "intermediary*," means an event* or an actor* that cannot be exactly defined by its input and its output. If an intermediary is fully defined by what causes it, a mediation always exceeds its condition. The real difference is not between realists and relativists, sociologists and philosophers, but between those who recognize in the many entanglements of practice* mere intermediaries and those who recognize mediations. (p. 307)
NATURE:
Like society*, nature is not considered as the commonsense external background of human and social action but as the result of a highly problematic settlement* whose political genealogy is traced throughout the book. The words "nonhumans*" and "collective*" refer to entities that have been freed from the political burden of using the concept of nature to shortcut due political process. (p. 309)
NONHUMAN:
This concept has meaning only in the difference between the pair "human-nonhuman" and the subject-object dichotomy. Associations of humans and nonhumans refer to a different political regime from the war forced upon us by the distinction between subject and object. A nonhuman is thus the peacetime version of the object: what the object would look like if it were not engaged in the war to shortcut due political process. The pair human-nonhuman is not a way to "overcome" the subject-object distinction but a way to bypass it entirely. (p. 308)
SETTLEMENT:
Shorthand for the "modernist settlement," which has sealed off into incommensurable problems questions that cannot be solved separately and have to be tackled all at once: the epistemological question of how we can know the outside world, the psychological question of how a mind can maintain a connection with an outside world, the political question of how we can keep order in society, and the moral question of how we can live a good life; to sum up, "out there," "in there," "down there," and "up there." (p. 310)
SOCIETY:
The word does not refer to an entity that exists in itself and is ruled by its own laws by opposition to other entities, such as nature; it means the result of a settlement* that, for political reasons, artificially divides things between the natural and the social realms. To refer not to the artifact of society but to the many connections between humans and nonhumans*, I use the word "collective*" instead. (p. 311)
Computing and the Internet — Communities, Programming, and Values
The history of computing and the Internet has been told from many perspectives over the years, and a theme that emerges consistently is how different communities of users emerged and co-evolved along with the technology in different ways. This section will highlight how this co-evolution is determined not by the technologies themselves but by the interactions among actors who use, tinker with, and expand on the technology, and how the technology changes along with these actions. In this way, computing, networking, and software can be seen as sociotechnical artifacts that are part of a sociotechnical system. They do not evolve on their own, and they do not determine what people do with them. Users and technologies come together to develop a sociotechnical system in which users can use and/or create applications for computers and the Internet, which in turn shape the way users and technologies assemble. In this process, blackboxing can take place in a variety of places, but the focus here will be on how the development of software came to be the center of two ways of thinking about technology.
In Great Principles of Computing, Denning and Martell (2015) explain how computing can be understood as a science in itself because, in its most abstract conception, it is a matter of processing information. As such, computing can be applied to a number of different domains (such as security, artificial intelligence, data analytics, networking, and robotics) because, as a method to process and generate information, it is about following certain principles that can be combined in different ways in different domains to achieve different objectives (Denning & Martell, 2015, pp. 13–15). Computing as a method then does not determine what can be done, but can guide its application through principles based on communication, computation, recollection, coordination, evaluation, and design (ibid.). As such, computing opens up a world of opportunities for those interested in developing a computing application for a specific domain. This is what Mahoney (2005) explores in the different histories that emerged as communities of practitioners got together to develop specific domains, thus bringing more attention to the aspects facilitated by computing. He focuses on the different aspects of computing developed by different groups, such as data processing and management, for the scientists and engineers creating it, for the private sector, or for government.
Software is how we “put the world into computers” (Mahoney, 2005).
Mahoney emphasizes that historians of computing are only beginning to explore the history of software. While he stresses the importance of shifting the focus from the machine to include its use, history, and design in order to understand this history properly, he also says that "associated tasks such as analysis, programming, or operation" need to be understood. This echoes Latour's urging to analyze the traces of all activities in a sociotechnical system. For Mahoney, understanding the history of software matters because software is what "actually gets things done outside the world of the computer itself," and the communities that develop software are the ones filling the gap between "what we can imagine computers doing and what we can actually make them do" (Mahoney, 2005, p. 128). In not understanding this history, he says, we miss that this process is not predetermined, and so we do not learn what the alternatives were. This is important because software is how we "put the world into computers," and doing so turns on "how we can represent in the symbols of computation portions of the world of interest to us and how we can translate the resulting transformed representation into desired actions" (Mahoney, 2005, pp. 128–129). The history of computing then is not just about transistors, chips, and screens, but about how different groups of people used such components to develop certain areas, based on the principles of computing and guided by their interests, in a way that selects how to represent the world.
To put this more concisely, Alan Kay described the computer as a metamedium, a medium whose content is "a wide range of already-existing and not-yet-invented media" (Manovich, 2013, p. 44). Because computing does not set rules for what can be done with computers, only principles for how computing in general should be applied (Denning & Martell, 2015), the range of "not-yet-invented media" remains wide. Moreover, technology in general (not just computers) also follows two key principles, "cumulative combinatorial design" and "recursiveness," which explain that technologies are made of components of previously made technologies and can themselves be used as components later on (Arthur, 2011).
To the extent that the computer was developed as a general-purpose machine, and the Internet designed as a general-purpose "dumb network" meant only to transport data, users can develop applications for this metamedium by developing software. In doing so, and following the principles mentioned, users can use software to represent and combine the formats of previously existing media, remix them, and expand on them, thus contributing to the metamedium. If the computer allowed users to manipulate information more easily, the Internet added to that by allowing users to do so while connecting with each other.
In that vein, in Software Takes Command, Manovich (2013) shares Mahoney's concern for software, explaining that "software has become our interface to the world, to others, to our memory and our imagination—a universal language through which the world speaks, and a universal engine on which the world runs," and yet its history has remained mostly unexplored (p. 2). For Manovich, the key to understanding software and its representational function is that, by digitizing information so it can speak the language of computers, we transform it in a substantial way:
“In new media lingo, to “transcode” something is to translate it into another format. The computerization of culture gradually accomplishes similar transcoding in relation to all cultural categories and concepts. That is, cultural categories and concepts are substituted, on the level of meaning and/or the language, by new ones which derive from computer’s ontology, epistemology and pragmatics. New media thus acts as a forerunner of this more general process of cultural re-conceptualization.” (Manovich, 2002, p. 64)
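The narrow, technical sense of transcoding can be illustrated with a minimal sketch (Python, illustrative only): the same sentence passes through three formats, and while the content survives each translation, the rules for storing and manipulating it change with every step. This is, in miniature, the substitution of categories that Manovich points to at the level of culture.

```python
import base64

sentence = "Code is speech."

# Transcode 1: Unicode text -> UTF-8 bytes (the computer's ontology).
utf8_bytes = sentence.encode("utf-8")

# Transcode 2: raw bytes -> Base64 text (a format safe for transport).
b64_text = base64.b64encode(utf8_bytes).decode("ascii")

# The content is recoverable, but each format imposes its own
# structure and rules on what began as a cultural object (a sentence).
recovered = base64.b64decode(b64_text).decode("utf-8")
assert recovered == sentence
```

Each step is reversible here, but the broader point stands: once a cultural object lives in one of these formats, what can be done with it is governed by that format's logic, not by the object's original context.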
In this light, the focus on software development is rightly emphasized, for it turns out that writing the code and algorithms that "put the world into computers" entails a decision-making process about what to represent of the world and how to do it. To the extent that more of our activities are mediated by software-based technologies, they are being mediated by decisions that had to weigh alternative ways of representing the world in the first place. At the same time, our networked technologies have developed in such a way that the use of some software is highly distributed across the globe, and so the interactions among users and developers of software can be seen as a sociotechnical system made of a wide array of human and non-human components, including the designers and users of software, as well as all the components necessary for software to exist and function.
Software as a Sociotechnical System
To explore software as a sociotechnical system then would entail exploring the history of computing and the development of the Internet, along with a whole array of details depending on what aspect of software one is interested in.
In this case, the focus is on how software became the center of two ways of thinking about technology, as evidenced by the emergence of one community that values the "free and open" aspects of software and another that valued its commercial aspects while promoting the idea of "quality software."
Gabriella Coleman's (2012) anthropological account of the free and open source software community and the technological and material practices it developed, along with its own vision of liberal ideas, and Manuel Castells' (2003) sociological explanation of how four layers of "Internet culture" developed with the emergence and initial expansion of the Internet, will serve as secondary data to identify the initial nodes and links in the sociotechnical network that would need to be explored to account for this development. While neither author uses ANT, both emphasize the interaction of networked individuals and collectives with technology and, without falling into a techno-deterministic approach, give technology sufficient "importance" to guide an ANT analysis that would place it alongside human actors.
Emergence of a community
While free and open source software is not a new concept, as it was the default way to develop and share software in the initial stages of computing, the focus on it as a philosophy for thinking about technology has developed more recently. Coleman (2012) explores how an international community of free and open source software developers formed as self-identified hackers connected with each other around FOSS projects and thus developed two main components: a material one based on the practice of developing software, and their own vision of liberal ideals. As she explores the ways in which the FOSS community struggled with intellectual property laws in order to promote a system of software development that did not necessarily commodify it, she finds that the community values the liberal idea of free speech but opposes that of commodifying everything. A romanticized interpretation of liberalism is what soothes this tension (pp. 3–4). In her account, alongside the community's encounters with the law (part of which was explored in the first segment of this essay), another important moment emerges as software commercialization begins to boom and the still-small community of software developers splits into two ways of thinking about software. In 1976, after it became clear that hackers were sharing copies of Microsoft's software, Bill Gates wrote a letter to the then so-called "hobbyists" in an attempt to explain why developing software outside of a commercial venture would not be sustainable, as it would not produce "quality software" (p. 65). A decade and many more developments later, Richard Stallman was establishing the Free Software Foundation, the GNU Manifesto, and the General Public License.
For Castells (2003), four main "Internet cultures" emerged as the Internet propagated: the techno-elites, the hackers, the virtual communitarians, and the entrepreneurs. The techno-elites were the original Internet architects and the community that spread from there, which valued meritocracy and openness both in their method of work and in their design, which is why the Internet is based on open standards. For the hackers, however, open source code was not enough; it had to also be free, not in terms of cost, but in terms of the freedom to share it, understand it, and tinker with it. Castells argues that while the Internet was developed with open Internet protocols, this concern was radicalized by the hacker culture in the "struggles to defend the openness of the UNIX source code" (pp. 39–43). Such struggles eventually turned into the movement for free and open source software explored by Coleman. The other two layers also help us understand the sociotechnical context of these developments. On the one hand, the virtual-communitarian aspect of Internet culture calls attention to the ease with which users can form networked communities across the globe, an important aspect of the FOSS movement. On the other, the entrepreneurial layer brings to the fore the opposing force that made software the focus of a discursive battle (pp. 52–60). With the advent of digital technologies, a market for new digital products emerged, and with it the eagerness to protect the intellectual property of those products.
An encoded poem as a piece of the sociotechnical
From both accounts, the free and open source software community must be read as global and as part of a network that includes the history of computing and the Internet, the history of the expansion of these technologies, the history of intellectual property law (as well as its global expansion), and their different ideological, cultural, economic, and political contexts. As explored by Coleman, FOSS culture has spread not only through the development of software but through the sharing of such development, online and offline, as she discovers the importance of in-person events for these hackers (2012). As theorized by Castells (2009), the power that networked communities can leverage with the Internet and related technologies has changed and now has the potential for global impact. To the extent that the FOSS community continues to expand and to openly challenge liberal ideals and ways of thinking about software and technology in general, understanding this complex sociotechnical network is pressing.
The poem quoted above, in this light, becomes a much more complex piece of the sociotechnical puzzle. It is an expression in the name of freedom that not only makes a cultural and political statement by equating code with speech; it also takes the form of a protest artifact by being the transcoding of a piece of contested software. That software is itself a transcoding of one way to represent the world in the world of networked computers, one that turned out to activate a network of legal, economic, and political arrangements which, in affecting that piece of software, affect all other coded speech. In this way, this artifact does indeed have politics, but under the light of ANT, it does so in a much more complex way than it first appears.
Arthur, W. B. (2011). The Nature of Technology: What It Is and How It Evolves (Reprint ed.). New York: Free Press.
Castells, M. (2003). The Internet Galaxy: Reflections on the Internet, Business, and Society (1st ed.). Oxford: Oxford University Press.
Castells, M. (2009). Communication Power. Oxford: Oxford University Press.
Coleman, E. G. (2012). Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton: Princeton University Press.
David, P. A. (1985). Clio and the Economics of QWERTY. The American Economic Review, 75(2), 332–337.
Debray, R. (1999, August). What is Mediology? Le Monde Diplomatique.
Denning, P. J., & Martell, C. H. (2015). Great Principles of Computing. Cambridge, Massachusetts: The MIT Press.
Irvine, M. (2016). Understanding Media, Mediation, and Sociotechnical Artefacts: From Concepts and Hypotheses to Methods for De-Blackboxing. Communication, Culture & Technology, Georgetown University.
Latour, B. (1999). Pandora’s Hope: Essays on the Reality of Science Studies. Harvard University Press.
Latour, B. (2007). Reassembling the Social: An Introduction to Actor-Network-Theory (1st edition). Oxford; New York: Oxford University Press.
Mahoney, M. S. (2005). The histories of computing(s). Interdisciplinary Science Reviews, 30(2), 119–135. https://doi.org/10.1179/030801805X25927
Manovich, L. (2002). The Language of New Media (Rev. ed.). Cambridge, MA: The MIT Press.
Manovich, L. (2013). Software Takes Command. New York; London: Bloomsbury Academic.
Vermaas, P., Kroes, P., Franssen, M., Poel, I. van de, & Houkes, W. (2011). A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers.
Winner, L. (1980). Do Artifacts Have Politics? Daedalus, 109(1), 121–136.