A smartphone is, stripped down to its most basic components, a box of aluminum, plastic, and glass. Inside it, however, are hardware and software components that work together to enable countless functions. Among many others, a basic smartphone today can hold apps for organizing, like clock, calendar, note-taking, maps, and weather apps; apps for communicating with others, like chat, email, and social networking apps; apps for accessing documents, music, and news in text and audio; and apps for creating content, such as writing apps or photo, audio, and video editors. The hardware components include a camera and a microphone, allowing the user to record information. Each of these apps is made of software with its own components, such as the code that runs the app’s basic functionality and the code that displays its mobile-friendly user interface; this code is in turn made up of smaller components, specific strings of code that carry out specific tasks. As Arthur (2009) explains, technologies are made up of components that are technologies in themselves and are ordered hierarchically.
A smartphone also includes hardware and protocols that allow it to connect to the Internet, which means that its owner can communicate and engage with others through these apps. In this way, they fall into the category of cultural software, as they “support actions we normally associate with ‘culture’”: creating, accessing, and sharing information and knowledge online, communicating with other people, and engaging in cultural experiences and the online information ecology (Manovich, 2012, pp. 21-23). When we interact with these apps, some components are visible and many are not. The app is made of code: instructions for how data should be processed and displayed. What I can see is the user interface, the front end, the result of whatever process takes place through the app’s code. What I can’t see are the instructions and how the processes take place.
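This split between a visible front end and invisible processing can be sketched in a few lines of code. Everything here is hypothetical — the rules, names, and data are invented for illustration — but the structure mirrors the point: the user sees only what the rendering step returns, never the criteria applied behind it.

```python
# Hypothetical sketch of the front-end/back-end split described above.
# The user sees only the output of render(); the rules inside process()
# — which records are kept and in what order — remain invisible.

HIDDEN_RULES = {"max_results": 3, "boost_recent": True}  # invented, unseen by the user

def process(query, records):
    """Back end: filters and orders data by criteria the user never sees."""
    matches = [r for r in records if query.lower() in r.lower()]
    if HIDDEN_RULES["boost_recent"]:
        matches.reverse()  # pretend later records are more recent
    return matches[:HIDDEN_RULES["max_results"]]

def render(results):
    """Front end: the only part of the process the user actually sees."""
    return "\n".join(f"- {r}" for r in results)

print(render(process("cat", ["cat photo", "dog video", "cat meme", "old cat post"])))
```

Changing a value inside `HIDDEN_RULES` changes what the user is shown without any visible change to the interface itself — which is the sense in which the instructions stay out of sight.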
When I open Facebook on my iPhone, for example, the first thing I see at the top is a search box, the outcome of code that displays a blue rectangular box in which I can enter text. Once I enter a search term and hit “Search,” I get a list of results, the outcome of code that tells the application how to look for the term I entered within some database and in what order to show it to me. Those instructions, those criteria, those databases are what I cannot see. Below the search box, a row of three buttons offers me the choice to do a live webcast, take a photo, or “check in.” The app has access (if I grant it) to my camera, my microphone, and my location. Below that is the “What’s on your mind?” box, the publishing tool through which I can post text, hyperlinks, photos, videos, live videos, check-ins (again), a “feeling/activity,” or tags of friends. I can see these options, presented in that order, and take one of these actions within the app, but I can’t see how the code behind it prioritizes my post in relation to others when publishing to other people’s News Feeds.
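The kind of invisible prioritization described here can be illustrated with a toy ranking function. To be clear, the weights and signals below are entirely invented — Facebook’s actual ranking system is not public, which is precisely the point — but the sketch shows how an ordered feed can emerge from criteria the user never sees.

```python
# Toy, entirely hypothetical illustration of opaque feed ranking: the user
# sees an ordered list of posts, but not the weights that produced the order.

WEIGHTS = {"friend_closeness": 2.0, "recency": 1.0, "engagement": 0.5}  # invented, hidden

def score(post):
    """Combine hidden signals into a single ranking score."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

def rank_feed(posts):
    """Return posts ordered by descending score; the criteria stay invisible."""
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "vacation photo", "friend_closeness": 0.9, "recency": 0.2, "engagement": 0.8},
    {"id": "news link",      "friend_closeness": 0.1, "recency": 0.9, "engagement": 0.3},
]
for post in rank_feed(posts):
    print(post["id"])
```

Which post appears first depends entirely on the numbers in `WEIGHTS` — numbers the person scrolling the feed never sees and cannot change.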
This piece of cultural software, then, allows me to perform cultural activities in which I make some decisions and have some information, but which are also shaped by decisions made by others: decisions about what data I can access and how what I share can be accessed by others.
These apps also fall into what Norman (1991) calls cognitive artifacts, “artificial devices that maintain, display, or operate upon information in order to serve a representational function and that affect human cognitive performance” (p. 17). I interact with these apps by entering information to be processed in some way, and this in turn affects how I continue to process it. As Norman explains, these types of artifacts do not simply amplify our capabilities; they change the nature of the tasks we perform (p. 19). If I want to share a picture with a friend without this software, I have to get a physical copy of it, meet my friend, and show it to her, and in doing so I can see firsthand that she sees it and what her reaction is. If I use the software, I need only press a few buttons and wait for her to see it whenever she will. If I post it on Facebook, I assume it will appear in her News Feed, but I can only confirm this if she signals back through a Like or a comment, or if I ask her; I don’t know how Facebook’s algorithms decide what to show on her side of the screen. Again, there are decisions in my use of cognitive, cultural software that are out of my control.
On top of this, the software is designed in a very deliberate way. We have had web and mobile applications for long enough now that companies understand the importance of user-centered design. A company like Facebook spends much time and money researching its users and designing user experiences that keep people engaged in the app and provide options to share information in specific ways. Much like the pushing and pulling doors Norman discusses in The Design of Everyday Things (2002), the Facebook user interface is full of intuitive buttons that guide the user into streams of activity: publishing photos in a certain way, sharing specific information (like geographical location) publicly, interacting with people at specific moments (think birthday reminders or the “feeling/activity” option), and so on. We have interacted with the platform enough that Facebook has determined certain things we like to do, incorporated them into its menu of actions, and designed the steps for us to follow. Surely it invests time thinking about how to make these designs pleasant, but they guide our behavior all the same.
The algorithms behind these cognitive, cultural applications, then, are black boxes within the black box that is the smartphone: a component of this technology that is a technology in itself, and as such contains components that are visible and others that are invisible to its users. This software is designed in a way that is user-centered in the sense that it is friendly and easy to use. But it is also designed in a way that guides behavior, and because these applications are cognitive artifacts, this means they affect how we interact with information and how we communicate with others in ways that are out of our control and out of our sight.
Brian Arthur, The Nature of Technology: What It Is and How It Evolves. MIT Press, 2015.
Lev Manovich, Software Takes Command. New York, NY: Bloomsbury Academic, 2013.
Donald A. Norman, The Design of Everyday Things. 2nd ed. New York, NY: Basic Books, 2002.
Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991.