Author Archives: Mariana Leyton Escobar

Modular designs and modular actions

An iPhone is a combination of hardware and software components structured together to work as a small computer. Systems thinking offers an approach to making sense of this combination by focusing on its modular design. In developing the smartphone, as with other media technologies, engineers and designers had to structure its different components as a system of subsystems that could interrelate to work together as necessary. An iPhone, then, is modular in the sense that it is made up of modules, “unit[s] whose structural elements are powerfully connected among themselves and relatively weakly connected to elements in other units” (Baldwin and Clark, 2000, p. 63), and figuring out what those modules are and how they relate to one another can shed light on some of its features.
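
Baldwin and Clark’s definition can be illustrated with a small sketch. The class names below are invented for illustration, a toy model rather than Apple’s actual architecture: a module’s internal parts call one another freely, while other modules reach it only through a narrow public interface.

```python
# Toy sketch of modularity (invented names, not Apple's architecture):
# strong connections inside a module, a single weak connection between modules.

class CameraModule:
    """Internals: strongly connected to one another."""
    def _focus(self):
        return "focused"

    def _expose(self):
        return "exposed"

    def capture(self):
        # The one entry point other modules are meant to use.
        self._focus()
        self._expose()
        return "image-data"

class InstagramApp:
    """Weakly connected: depends only on CameraModule's public interface."""
    def __init__(self, camera):
        self.camera = camera

    def record(self):
        return self.camera.capture()

print(InstagramApp(CameraModule()).record())  # image-data
```

Because the app touches only `capture()`, the camera’s internals could change entirely without the app noticing, which is the practical payoff of weak coupling between units.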

The structure of an iPhone is designed so that hardware and software interact, and so the user interacts with the phone mostly through software. Both hardware and software are structured modularly, which can be explored by walking through the steps a user takes to post an Instagram video.

To post a video on Instagram, I first need to turn on my phone and open the app. For this to happen, an operating system (OS) must be in place to manage how the software and hardware will interact, and this is the first module I encounter: pressing the iPhone’s round button sends a signal to the OS, which turns on the screen, gives me the option to enter my passcode, and shows me the home screen where I find the app’s icon. The software of the phone is first structured as an operating system that “manages computer hardware and software resources” [1] and thus allows applications to run on the phone. The OS in the iPhone is itself a module of software, one designed by Apple and called iOS.

User interface on Instagram

A characteristic of modules is that, while they serve a purpose in themselves, they can also interconnect with other modules to serve other purposes as needed and as wanted. To do so, they have interfaces: specifications of how to interact with a given module. The specifications for interacting with iOS are visible only to an extent. Because it is an OS designed by Apple, it is meant to work only with Apple’s hardware and is thus closed source. Developers can build applications to run on it by following specifications set by Apple and share or sell them through the App Store.
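
The idea of an interface as a published specification can be sketched as follows. This is a hypothetical contract, not Apple’s real APIs: the platform owner publishes an abstract contract, and third-party code implements it without ever seeing the platform’s internals.

```python
from abc import ABC, abstractmethod

class AppInterface(ABC):
    """The contract a (hypothetical) platform owner publishes."""
    @abstractmethod
    def launch(self) -> str: ...

class PhotoApp(AppInterface):
    # A third-party app: it implements the published specification
    # without seeing how the OS works internally.
    def launch(self) -> str:
        return "photo app running"

def os_open(app: AppInterface) -> str:
    # The closed-source OS can run anything that honors the contract.
    return app.launch()

print(os_open(PhotoApp()))  # photo app running
```

The OS code and the app code need share nothing but the contract, which is how a closed platform can still host outside applications, and why the platform owner gets to dictate the terms of entry.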

After opening the app, I select the photo icon on the bottom menu, which turns on my camera; select the video option on the sliding menu, which changes the camera mode and turns on my microphone; and film for a specific amount of time. In going through these actions, I am interacting with an array of modules designed by Instagram engineers and designers with the iPhone in mind. Instagram is a social network site that allows users to share one photo at a time with their followers. In itself, Instagram is a module within the universe of social network sites out there, one whose designers figured out that photo sharing is a key activity for users and built an app around that concept. As an app, it is software that allows me to take several actions, each of which can be described as a module in itself. For example, if I view my stream before posting anything, I interact with a function that connects to the Internet to get data and displays it to me as snippets in which I see a photo, information on the user who shared it, and comments.

To post a video, I interact with a module of functions that connect the app to the camera and microphone and display options for me to capture video, edit it in specific ways, write a description, post it on Instagram, and then share it on other sites. The various modules in the iPhone interact while I take these actions, even though not all of them were designed by the same people, and they do so only as needed. When I go through my stream, the app does not need to interact with the phone’s camera and microphone, but it does so swiftly when I turn on that option, since the app was designed for iOS and thus has access to them. All the while, however, it needs to connect to the Internet, for which it relies on the phone’s hardware. For each of these components to work together, they have interfaces that allow them to interact in specified ways, as well as a user interface through which I interact with them.
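
The on-demand character of these connections can be sketched like this. All names are invented, and real iOS permission handling is far more involved: the network module is acquired when the app starts, while the camera module is requested only when the user switches to video mode.

```python
class OS:
    """Mediates access to hardware modules (toy model, invented names)."""
    def __init__(self):
        self.granted = []

    def request(self, device):
        # A real OS would check permissions and entitlements here.
        self.granted.append(device)
        return f"{device}-handle"

class App:
    def __init__(self, os):
        self.os = os
        self.network = os.request("network")  # needed from the start
        self.camera = None                    # acquired only on demand

    def enter_video_mode(self):
        if self.camera is None:
            self.camera = self.os.request("camera")
        return self.camera

phone_os = OS()
app = App(phone_os)
print(phone_os.granted)  # browsing the stream: only the network is in use
app.enter_video_mode()
print(phone_os.granted)  # video mode: the camera module is now connected
```

The point of the sketch is that the app never talks to the hardware directly; every connection runs through the OS, which is what lets modules built by different people cooperate only as needed.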

An iPhone, then, can be described as an interface of interfaces, each of which allows us to interact with specific modules of software that interact with each other through an OS that lets different functions use the different features of the phone as needed. At the same time, this modular design is structured so that its modules are open only as much as wanted. The example of the OS shows that Apple can open some of its interfaces and thus promote innovation, as Lidwell, Holden, and Butler explain it can (2003, p. 136), but it does so through specific regulations, thus curbing the levels of innovation that can be reached.

Carliss Y. Baldwin and Kim B. Clark, Design Rules, Vol. 1: The Power of Modularity. Cambridge, MA: The MIT Press, 2000.

William Lidwell, Kritina Holden, and Jill Butler, Universal Principles of Design, rev. ed. Beverly, MA: Rockport Publishers, 2010.

[1] Operating system:

Out of our control and out of our sight

A smartphone is, stripped down to its largest, most basic components, a box of aluminum, plastic, and glass. Inside it, however, are several hardware and software components that work together to allow for countless functions. Among many others, a basic smartphone today can hold apps for organizing, like clock, calendar, note-taking, maps, and weather apps; apps for communicating with others, like chat, email, and social network apps; apps for accessing documents, music, and news in text and audio; and apps for creating content, such as writing apps or photo, audio, and video editors. The hardware components include a camera and a microphone, allowing the user to record information. Each of these apps is made of software that has different components, such as the code that runs the basic functionality of the app and the code that displays its mobile-friendly user interface. This code is in turn made up of smaller components, specific strings of code that process specific tasks. As Arthur (2009) explains, technologies are made up of components that are technologies in themselves and are ordered hierarchically.

A smartphone also includes hardware and protocols that allow it to connect to the Internet, which means that its owner is able to communicate and engage with others through these apps. In this way, they fall into the category of cultural software, as they “support actions we normally associate with ‘culture,’” like creating, accessing, and sharing information and knowledge online, communicating with other people, and engaging in cultural experiences and the online information ecology (Manovich, 2013, pp. 21-23). When we interact with these apps, some components are visible and many are not. An app is made of code: instructions for how data should be processed and displayed. What I can see is the user interface, the front end, the result of whatever process takes place through the app’s code. What I cannot see are the instructions and how those processes take place.

When I open Facebook on my iPhone, for example, the first thing I see at the top is a search box, the outcome of code that displays a blue rectangular box in which I can enter text. Once I enter a search term and hit “Search,” I get a list of results, the outcome of code that tells the application how to look for the term I entered within some database and in what order to show the results to me. Those instructions, those criteria, those databases are what I cannot see. Below the search box, a row of three buttons offers me the choice to do a live webcast, take a photo, or “check in.” The app has access (if I grant it) to my camera, my microphone, and my location. Below that is the “What’s on your mind?” box, the publishing tool through which I can post text, hyperlinks, photos, videos, live videos, check-ins (again), a “feeling/activity,” or tag friends. I can see these options, provided to me in that order, and take one of these actions within the app, but I cannot see how the code behind them prioritizes my post relative to others when publishing to other people’s News Feeds.
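
The gap between the visible result list and the hidden criteria can be sketched with a toy search over an invented in-memory “database.” Nothing here reflects Facebook’s actual ranking; the sketch only shows that the matching and ordering rules sit entirely behind the interface.

```python
# Toy search: the user sees only the ordered results,
# never the criteria that produced them.

posts = [
    {"text": "systems thinking and design", "likes": 12},
    {"text": "modular design in phones", "likes": 40},
    {"text": "weekend photos", "likes": 99},
]

def search(term):
    matches = [p for p in posts if term in p["text"]]  # hidden criterion 1: substring match
    return sorted(matches, key=lambda p: -p["likes"])  # hidden criterion 2: popularity order

for post in search("design"):
    print(post["text"])  # the ordered list is all that reaches the screen
```

Swapping the `key` function would reorder every user’s results without any visible change to the search box itself, which is exactly the kind of decision that stays out of sight.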

This cultural piece of software, then, allows me to perform cultural activities in which I make some decisions and have some information, but which are also shaped by decisions made by others: decisions as to what data I can access and how what I share can be accessed by others.

These apps also fall into what Norman (1991) calls cognitive artifacts, “artificial devices that maintain, display, or operate upon information in order to serve a representational function and that affect human cognitive performance” (p. 17). I interact with these apps by entering information to be processed in some way, and this in turn affects how I continue to process it. As Norman explains, these types of artifacts do not simply amplify our capabilities; they change the nature of the tasks we perform (p. 19). If I want to share a picture with a friend without this software, I have to get a physical copy of it, meet my friend, and show it to her, and in doing so I can see firsthand that she sees it and what her reaction is. If I use the software, I need only press a few buttons and wait for her to see it when she will. If I post it on Facebook, I assume it will appear in her News Feed but can only confirm it if she signals back through a Like or a comment, or if I ask her; I do not know how Facebook’s algorithms decide what to show on her side of the screen. Again, there are decisions in my use of cognitive, cultural software that are out of my control.

On top of this, the software is designed in a very precise way. We have had web and mobile applications for long enough now that companies understand the importance of user-centered design. A company like Facebook spends much time and money researching its users and designing user experiences that keep people engaged in the app and provide options to share information in specific ways. Much like the pushing and pulling of doors Norman describes in The Design of Everyday Things (2002), the Facebook user interface is full of intuitive buttons that guide the user into streams of activity: publishing photos in a certain way, sharing specific information (like geographical location) publicly, interacting with people at specific moments (think birthday reminders or the “feeling/activity” option), and so on. We have interacted with the platform enough that Facebook has determined certain things we like to do, incorporated them into its menu of actions, and designed steps for us to follow. Surely the company invests time in making these designs pleasant, but they guide our behavior all the same.

The algorithms behind these cognitive, cultural applications, then, are also black boxes within the black box that is the smartphone: components of this technology that are technologies in themselves and as such contain components that are visible and others that are invisible to their users. They are designed in a way that is user-centered in the sense of being friendly and easy to use. But they are also designed in ways that guide behavior, and, as cognitive artifacts, this means they affect how we interact with information and how we communicate with others in ways that are out of our control and out of our sight.


W. Brian Arthur, The Nature of Technology: What It Is and How It Evolves. New York: Free Press, 2009.
Lev Manovich, Software Takes Command. New York: Bloomsbury Academic, 2013.
Donald A. Norman, The Design of Everyday Things. 2nd ed. New York, NY: Basic Books, 2002.
Donald A. Norman, “Cognitive Artifacts.” In Designing Interaction, edited by John M. Carroll, 17-38. New York, NY: Cambridge University Press, 1991.