
WordPress v. Webflow: Creative capabilities and constraints of website creation platforms

Mary Margaret Herring


This paper offers a comparative analysis of the interfaces of the popular content management systems WordPress and Webflow. WordPress and Webflow have become widely adopted tools for website creation because they offer powerful graphical user interfaces (GUIs) that allow users of any technical skill level to create and maintain a web presence. This democratization of web creation has enabled many nontechnical users to publish their ideas or products online. However, GUIs can oversimplify a program’s complexity by relying heavily on metaphor and ease of use instead of illustrating new possibilities. Do the GUIs of WordPress and Webflow increase or limit users’ creative capabilities? To answer this question, a comparative analysis of WordPress and Webflow will be conducted using Murray’s (2011) digital affordances as a framework.


For anyone with a product, idea, or story to share, a web presence is essential. Yet many of these people may not have the technical skills required to develop their own sites from scratch. Luckily, content management systems with graphical user interfaces (GUIs) – like WordPress [i] and Webflow – allow users to create websites without writing a single line of code. These products have greatly democratized web creation by offering interfaces reminiscent of programs that nontechnical users already know. But does a reliance on convention limit the creativity of WordPress and Webflow users?

WordPress and Webflow offer an interesting case study because they appeal to both novice, nontechnical users and experienced developers. Further, the two platforms have very different interfaces that should yield a rich comparison. After examining the limitations of GUIs that draw heavily on metaphor, this paper compares WordPress and Webflow, structuring the comparison around Murray’s (2011) four affordances of digital media. Overall, I argue that Webflow maximizes the affordances of digital media more successfully than WordPress, although WordPress is likely easier for novice users to adopt.

Design Challenges in Democratizing Web Development

Content Management Systems

Creating a website with traditional front-end web development practices can be a painstaking process. Even on the most basic level, websites consist of many files: HTML files dictate the site’s structure and content, styling is added with CSS files, and JavaScript files make the site more interactive. After publishing the site by uploading these files to a server, editing the site’s content can be difficult. To swap out an image on a site, for instance, the administrator would not only need to upload the new image file to the server but would also have to modify the HTML file where the image is displayed so that it references the new image. Then, the modified HTML file would have to be re-uploaded to the server. To make content management more seamless, content management systems (CMSs) were developed. These systems facilitate content creation and publishing and have been used to create blogs, eCommerce sites, portfolios, and nearly everything in between (Cabot, 2018).
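The image swap described above amounts to a hand edit of the markup itself. The filenames below are hypothetical:

```html
<!-- Before: the page references the old image on the server. -->
<img src="images/team-photo-2019.jpg" alt="Our team">

<!-- After: the new file must be uploaded to the server AND the markup edited
     to reference it, then the modified HTML file re-uploaded. -->
<img src="images/team-photo-2020.jpg" alt="Our team">
```

Every content change, however small, requires this edit-and-re-upload cycle when no CMS sits between the author and the server.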

CMSs have become increasingly popular due to their ease of use and wide range of functionalities. Martinez-Caro et al. (2018) found that nearly 50% of web pages are implemented using CMSs, which usually consist of three elements: the content manager’s files, a hosting provider where these files are stored, and a linked database to store site information. Rather than directly uploading files of code to the server, CMSs act as a more user-friendly intermediary where content can be stored and edited. To facilitate this, most CMSs have an administration area where pages, posts, and functionalities can be added to the site. This administration area is referred to as the back-end, while the part of the site that a visitor sees is called the front-end (Martinez-Caro et al., 2018).

Martinez-Caro et al. (2018) believe that CMSs have become so popular in part because they are accessible to a wide audience while still offering a plethora of functionalities. To make their programs accessible to users who may not have a technical background, CMSs implement back-end GUIs that allow non-technical users to quickly create stylized sites by utilizing rich-text editors and drag-and-drop functionalities. Enabling non-technical users to create and maintain a web presence has democratized web publishing. Yet, many CMS services, including WordPress and Webflow, aim to create scalable products that can be used by both non-technical web creators and large corporate clients. While these services share the same mission, the platforms operate in quite different ways.

Since its launch in 2003, WordPress has become the most widely used CMS. As of December 2020, over 39.2% of all websites were implemented using WordPress (W3Techs, 2020). WordPress is a highly collaborative open-source software project. The WordPress Core, which configures access to a user’s files, WordPress settings, database configuration, and dashboard settings, is developed by The Core Development Team (Hughes, 2017). However, plugins, which can be implemented to add functionalities to WordPress sites, and themes, which stylize content, can be developed by anyone. The design rules for plugins and themes are well documented, and a massive repository of these crowdsourced add-ons exists, making site functionalities and designs highly customizable. Users can create content from a block-style page and post editor. Alternatively, free and paid versions of plugins like Elementor or WP Bakery provide extended drag-and-drop content creation elements. Finally, users can fine-tune their site’s style in the customizer.

A newer competitor to WordPress is Webflow. According to Sanchez-Olivera (2019), Webflow “attempts to fill the space between DIY software like Wix and Squarespace, traditional content management systems like WordPress, and actual front-end web development.” This is accomplished by providing a “Photoshop-like” GUI that creates HTML, CSS, and JavaScript files as content is added to the drag-and-drop editor. This allows users to create custom templates for dynamic content and static pages alike. Unlike WordPress, however, Webflow is a for-profit platform that offers hosting services and charges users to export their files after they’ve created their sites. Nevertheless, this platform has garnered attention from the web design community for its visual design capabilities.

WordPress and Webflow offer an interesting case study because they market their platforms to novice users and experienced developers alike. These platforms simplify the website creation process by allowing users to develop content in graphical What You See Is What You Get (WYSIWYG) editors. But, as Gentner and Nielsen state, “the problem with WYSIWYG is that it is usually equivalent to What You See Is All There Is (WYSIATI)” (1996, p. 75). How do these platforms simplify and abstract the process of web development for nontechnical users? Does this limit the capabilities of experienced developers? Before offering a comparative analysis of the two website creation platforms, the intentions of GUI pioneer Alan Kay and Gentner and Nielsen’s (1996) Anti-Mac Interface will be discussed to determine the limits and benefits of GUIs.

Alan Kay and The Anti-Mac Interface

In the same way that WYSIWYG website builders rely on GUIs to allow non-technical users to publish websites, GUIs were employed in the 1970s to make computers more accessible to the general public. In his book, Software Takes Command, Manovich (2013) highlights the inconsistencies between Alan Kay’s intentions as the “pioneer of GUIs” and the way that GUIs have been commercialized. Manovich writes that “Kay wanted to turn computers into a ‘personal dynamic media’ which could be used for learning, discovery, and artistic creation” (2013, p. 61). To do this, Kay and his team simulated the way that people interacted with pre-existing media on the computer while adding functions that the prior media lacked (Manovich, 2013). For instance, the folder icon was used to demonstrate to users that files could be grouped together and organized like their physical counterparts. However, digital files can also be searched for words or phrases in a way that paper files cannot. This helped users feel more familiar in their digital environment and understand the machine’s capabilities. Apple and other technology companies readily adopted this convention to make interfaces more intuitive by drawing heavily on metaphors from the physical world (Manovich, 2013). In this view, the GUI should be so intuitive that it becomes transparent. But, as Manovich argues, instead of the transparent interface that was adopted commercially, Kay and his colleagues had envisioned a “GUI as a medium designed in its every detail to facilitate learning, discovery, and creativity” (2013, p. 100). While it is clear that GUIs have greatly democratized the use of computers and website creation, it is worth asking whether transparent interfaces limit users’ capabilities.

To consider alternatives to the transparent interfaces implemented commercially, Gentner and Nielsen (1996) discuss an alternative to Apple’s Mac interface. In 1996, one of Apple’s human interface guidelines stated the importance of using metaphor in design. However, Gentner and Nielsen (1996) argue that using metaphor in designs can constrain both users and designers. Users may be constrained by a metaphor when its digital implementation lacks features that the physical equivalent has, or offers features that the physical equivalent lacks. In the example of the file folders used above, physical file folders are not searchable. A person cannot walk up to a file cabinet, type in a search query, and receive a list of relevant physical documents. However, they can do this on the computer. In this example, by relying solely on metaphor to demonstrate what users can do, an interface may never reveal to users that they can search for relevant documents. Instead of relying on metaphor, Gentner and Nielsen (1996) recommend integrating more language-based commands and providing richer representations of objects that reveal more about their contents.

According to Gentner and Nielsen (1996), Apple’s human-interface guidelines also stipulated that users should always be in control of the interface. One way of giving users control is by implementing a direct manipulation interface where users interact directly with objects. This allows the user to understand exactly where they are in the task that they are completing. However, it also means that users must be involved at the atomic level of the operation. Gentner and Nielsen state that by directly manipulating content, users are reduced to assembly line workers who complete monotonous tasks instead of executives who can issue high-level commands (1996, p. 74). While these are just a few examples of how Apple’s human-interface guidelines may be violated for the better, it becomes clear that ease of use is often at odds with the functionalities of a technology. This sentiment is summarized by Gentner and Nielsen, who state that “[t]he GUIs of contemporary applications are generally well designed for ease of learning, but there is often a trade-off between ease of learning on one hand and ease of use, power, and flexibility on the other hand” (1996, p. 79). When this tradeoff is applied to web creation platforms, I ask: in what ways do the GUIs of Webflow and WordPress limit and empower design capabilities and functionalities for novice and experienced users?

WordPress v. Webflow

A comparative framework: Maximizing the affordances of digital media

With this tradeoff in mind, we can now begin to compare WordPress and Webflow. When comparing these two platforms, Murray’s (2011) grid of digital media affordances will be used. Murray writes that “[t]he digital designer has two responsibilities: to create the artifact that best serves the needs of the people who will interact with it, and to advance the digital medium as a whole” (2011, p. 87). To advance the digital medium, Murray (2011) argues that the four affordances of digital media must be maximized. She writes that digital media afford procedural, participatory, spatial, and encyclopedic actions (Murray, 2011). Digital media are procedural because computers are able to execute conditional behaviors in ways that prior forms of media could not. Computers also afford participation by taking inputs from users and producing outputs accordingly, and vice versa. The spatial affordance of digital media builds upon the participatory and procedural properties of digital media by offering an abstract space for users to interact with digital content. Finally, computers are encyclopedic because they can transmit and organize more information than any other previous media (Murray, 2011). While this list of the affordances of digital media may not be comprehensive, it offers a framework that emphasizes utilizing all of the functionalities afforded by digital technology in the creative and expressive ways that Kay intended.

Maximizing Procedural Affordances

As machines that are controlled by conditional logic, computers and digital media are fundamentally procedural. Yet, computer users may never realize the underlying procedural behavior of computers. This is because these behaviors are shrouded in layers of abstraction that hide the complexity of the operations performed by the device. To manage this complexity, Murray (2011) recommends developing modular programs. Irvine defines modularity as “conceptual models of systems with interconnected subcomponents that can be implemented in real constructed things” (n.d., p. 1). Developing subcomponents makes it easier to add or remove parts of the system as long as all parts of the system comply with a universally accepted set of design rules. These modular subcomponents maximize the procedural capabilities of technologies by dividing effort and coordinating tasks and decisions (Baldwin & Clark, 2000). Therefore, examining the modularity of WordPress and Webflow will be an indicator of how much the platforms maximize the procedural behaviors afforded by digital media.


As an open-source program, WordPress is massively modular and encourages developers to create and modify subcomponents of their sites. In order to understand how this is possible, it is important to first look at the relationship between the WordPress core, themes, and plugins. The WordPress core is a set of files developed by The WordPress Core Development Team. These are the main files that control access to uploaded content, dictate WordPress settings, configure the database and dashboard, and allow additional features to be added to the site (Hughes, 2017). According to the WordPress Plugin Developer Handbook, there is one cardinal rule: “Don’t touch the WordPress core” (WordPress Community, n.d., Plugin Handbook). This is because the core files are overwritten with each new version of WordPress. Instead of making changes directly to the core files, developers recommend using plugins to add functionalities to the site.

Plugins greatly expand the functionality of a WordPress site without modifying the core files. On the most basic level, WordPress plugins are PHP files with a WordPress plugin header comment (WordPress Community, n.d., Plugin). Plugins operate by using hooks, actions, and filters to tap into the core files. Hooks are inserted into the core files and act as placeholders for plugin developers to tap into. Actions are hooks that allow the developer to add or change a functionality of the core file, and filters are hooks that allow the developer to alter content as it is displayed (WordPress Community, n.d., Plugin). When the site runs, WordPress identifies plugin files by searching the plugins folder for PHP files with WordPress plugin header comments. Through the hooks in the core files, plugin developers are able to greatly expand the functionality of a site with actions and filters. Because plugin development is crowdsourced, these design rules are well documented in the Plugin Handbook (WordPress Community, n.d.). Further, problems with a plugin will not cause the entire site to crash; the site can easily be repaired by uninstalling the plugin. This modularity has also been a boon to novice web creators, who can activate any of the plugins in the repository without touching a line of code.
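To make this concrete, the following is a minimal sketch of what such a plugin could look like. The plugin name and the markup it outputs are hypothetical, but `add_action` and `add_filter` are the actual WordPress functions for registering callbacks on hooks:

```php
<?php
/**
 * Plugin Name: Hypothetical Footer Notice
 * Description: Illustrative sketch of the hook/action/filter pattern.
 */

// Action: run extra code when WordPress reaches the wp_footer hook.
add_action( 'wp_footer', function () {
    echo '<p>Thanks for visiting!</p>';
} );

// Filter: modify post content as it is displayed, without touching core files.
add_filter( 'the_content', function ( $content ) {
    return $content . '<hr>';
} );
```

Dropping a file like this into the plugins folder is enough for WordPress to detect it via the header comment; deactivating the plugin removes both behaviors without any other change to the site.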

While plugins extend the functionality of WordPress sites, themes control the way that site content is displayed in the browser. Like plugins, theme development is crowdsourced and massively modular. On WordPress, users can create static pages or dynamic content like blog posts. Themes typically contain different templates for displaying these varying types of content. To determine which template to use, WordPress has a default template hierarchy that first determines what type of page is being requested, selects a template based on the order in the hierarchy, and then uses that template (WordPress Community, n.d., Theme). There are only two design rules stipulated by the core for theme development. First, a theme must contain an index.php file, which acts as the main template file, and a style.css file that contains styling information. Second, the theme cannot add critical functionality – a site should not fail to function solely because the user changes the theme (WordPress Community, n.d., Theme). Themes are modular because they merely contain templates for displaying the content. Aside from the way it is displayed, the content remains unaffected by the theme that is installed. This allows experienced developers to create their own WordPress themes and novice users to benefit from free and paid professionally designed themes from the repository that are easily customizable.
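A theme satisfying these two rules can be sketched in a few lines. The theme name is hypothetical; `get_header`, `have_posts`, `the_post`, `the_title`, `the_content`, and `get_footer` are real WordPress template functions:

```php
<?php
// index.php — the fallback template at the bottom of the template hierarchy.
// The accompanying style.css needs only a header comment beginning
// "Theme Name: Hypothetical Minimal Theme" to be recognized as a theme.
get_header();                         // include header.php if the theme has one
if ( have_posts() ) {                 // "the Loop": iterate over queried content
    while ( have_posts() ) {
        the_post();
        the_title( '<h2>', '</h2>' ); // print the post title wrapped in <h2>
        the_content();                // print the post body
    }
}
get_footer();
```

Because the template only displays whatever content the hierarchy routes to it, swapping it for another theme changes the presentation without touching the content itself.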

WordPress’s template hierarchy: the process the system follows to determine which template to use.


Webflow differs quite drastically from WordPress. It is worth noting that the for-profit nature of Webflow means that there are fewer guidelines about the way the software works. Yet, in interacting with the visual editor, the way that the program operates becomes clearer. Unlike WordPress, there is no distinction between core files, plugins, and themes. Rather, all creation and editing takes place in the visual editor, or designer.

On Webflow, WordPress’s separation of content from presentation is also eliminated. When creating static pages, users have complete design control over all elements of the site – from the header and footer to the way that body text is displayed. Much like using a text box in Microsoft Word, users add containers for their text and then place text or media directly into those containers. This generates clean and easily readable HTML. For example, the section of a Webflow site shown below generates the HTML that follows it (images from “Webflow vs. WordPress,” n.d.).

The way the site appears to users (Image from “Webflow vs. WordPress,” n.d.).

The HTML that Webflow generated for this section of the site (Image from “Webflow vs. WordPress,” n.d.).

This customization is fabulous for users who have wanted to create WordPress themes from scratch but lack the PHP knowledge to do so. However, creating each individual page is tedious. To avoid this, Webflow allows users to save and reuse blocks of code and also offers a CMS service where users can add custom content fields and create custom templates. For example, a user creating a template for a blog may want each entry to have a blog title, date, author, category, blog post, featured image, and featured color field. The user can add or delete these fields as necessary, and when they publish a post, they simply upload the relevant content into each field. Then, to stylize the posts, they create a template that accepts content from the fields. This streamlines the coding process by using the digital medium’s procedural behaviors to display similar forms of information with conditional logic.
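Conceptually, such a template binds each CMS field to a slot in the markup. Webflow does this binding visually rather than in code, but the result is equivalent to a template like the sketch below (the curly-brace placeholders are illustrative, not Webflow syntax; the field names follow the hypothetical blog example above):

```html
<!-- One layout reused for every blog post; each {{ … }} placeholder is
     filled from the corresponding CMS field when a post is published. -->
<article style="border-color: {{ featured-color }}">
  <img src="{{ featured-image }}" alt="{{ blog-title }}">
  <h1>{{ blog-title }}</h1>
  <p class="meta">{{ author }} · {{ date }} · {{ category }}</p>
  <div class="body">{{ blog-post }}</div>
</article>
```

The conditional, fill-in-the-fields logic is exactly the procedural behavior Murray describes: one template, evaluated once per item of dynamic content.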

Webflow’s heavy reliance on traditional web development practices means that the learning curve may be quite steep for users unfamiliar with front-end development. However, several tools may help non-technical users get sites up and running. Webflow offers free and paid templates created by developers, giving users a site that is already mostly built and simply requires inputting content. Also, while Webflow lacks plugins, it does offer a library of integrations – bits of code developed by third parties that can be embedded into sites. While Webflow believes that the lack of plugins is beneficial, embedding content using iframes has been known to slow down performance because of the increased memory required (MDN Contributors, 2020). It is also worth noting that integrations can only be embedded in a page and therefore cannot add functionality to an entire site in the way that WordPress plugins can. There are also extensive “hacks” that can be used to upgrade site functionality, which will be discussed later on. Yet these hacks are nowhere near as extensive as WordPress’s plugin repository.
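An integration embedded this way typically amounts to an iframe or script snippet pasted into an embed element on a single page; the URL below is a placeholder:

```html
<!-- A third-party widget embedded via an iframe. Each iframe loads a separate
     browsing context, which is why heavy use can degrade performance. -->
<iframe src="https://example.com/widget" width="100%" height="400"
        loading="lazy" title="Embedded third-party widget"></iframe>
```

Because the snippet lives inside one page, it cannot hook into site-wide behavior the way a WordPress plugin can.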

While the lack of modularity offered by Webflow may be viewed as a negative, Webflow emphasizes the simplicity of the platform. Webflow believes that this is preferable to a more modular design because there is no extra code from plugins to slow down the site, there are no automatic updates to core files (because there aren’t any), and there’s no PHP (“Webflow vs. WordPress,” n.d.).


While Webflow’s creators praise the product’s simplicity, WordPress’s modularity has proved to be one of its greatest assets because it makes the program scalable. Murray defines scalable programs as “being able to accommodate more users, more data, more related procedures without having to be reengineered” (2011, p. 55). WordPress’s scalability is likely part of why it is the most popular CMS (W3Techs, 2020). Large companies are able to manage massive sites by delegating user roles, and the functionality of a site can easily be extended with plugins. While Webflow enables collaborators to be added to a site and users can edit their exported files, extending a site’s functionality beyond Webflow’s more limited repository of integrations requires the user to write the additional code themselves rather than simply install a plugin. Because of WordPress’s modularity and scalability, it maximizes the procedural affordance offered by digital media more successfully than Webflow.

Maximizing Participatory Affordances

Digital media afford participation because the user and computer communicate in a meaningful way. The way that this communication takes place is deeply rooted in the procedural affordance of digital media. Computers, for instance, may ask for an input, use predefined procedures to decide what to do with this information, and then produce an output. Most of the time, however, the procedures used by the computer are irrelevant to the user. Most procedural operations performed by digital media are hidden from the user so that the user only sees the result (Murray, 2011). This process of abstracting elements to hide their complexity is called encapsulation (Baldwin & Clark, 2000). While encapsulating items is important for productive communication between the user and digital medium, visibility is also important. Murray writes “[t]o achieve the design goal of visibility, the designer must support the creation of a coherent mental model of the computer’s processing” (2011, p. 329). Here, visibility offers the user an understanding of how the program is operating so they know the possibilities and limitations of the action that is being completed. For these reasons, a balance of encapsulation and visibility will be an important benchmark when analyzing the participatory properties of WordPress and Webflow.


Because WordPress is open-source software, there is an abundance of information on how to develop plugins and themes for it. However, the interface of the software is largely encapsulated and possibly quite mysterious to novice users. When users create posts or pages of content, they use a rich text editor reminiscent of Microsoft Word. The mental model encouraged by this WYSIWYG editor is not representative of how the system actually works. HTML is a markup language for documents displayed in browsers. Content is marked up as elements like headings, paragraphs, or bulleted lists to give the page structure. This is especially important for search engine optimization, as search engine crawlers rank headings as more important than body text. But most rich text editors emphasize style over structure. In Microsoft Word, for example, it does not matter if I use the Heading 1 style defined by my computer or if I simply change the font, weight, and size of my lettering – it will all look the same on the printed page. On the web, however, failing to mark up elements properly may lead to problems like poor search engine rankings. While WordPress’s content editor does a wonderful job of encapsulating the complexity of web content by providing a familiar rich text editor, this design does not encourage users to form an accurate mental model of how the technology actually works.
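The gap between style and structure is easy to see in markup. The two lines below can be styled to look identical in a browser, but only the first tells a search engine crawler that the text is a top-level heading:

```html
<!-- Semantic markup: crawlers and screen readers treat this as a heading. -->
<h1>Annual Report</h1>

<!-- Purely visual markup: looks like a heading, but is ordinary inline text
     to anything that reads the document's structure. -->
<span style="font-size: 2em; font-weight: bold;">Annual Report</span>
```

A rich text editor that only shows the rendered result hides this distinction from the author entirely.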

The themes and plugins that can be installed on WordPress sites are also encapsulated. The vast repository of plugins available means that novice users can easily extend the functionality of their sites by installing and activating plugins. The GUI that WordPress provides makes this easy to do; adding plugins can be accomplished without ever seeing a line of code. The mental model that plugins bring to mind is apt for the way that plugins function. As discussed in the section on procedural affordances above, plugins allow users to extend the functionality of a site without touching the WordPress core files. In this instance, the metaphor of a plugin – something additional that can be added or removed easily – corresponds well to the function of a WordPress plugin. Similarly, themes are well encapsulated and can be implemented by clicking a few buttons. The idea that a site can change themes while the content remains the same is also a useful mental model.


While WordPress’s content editor conceals important aspects of the web development process from the user, Webflow’s drag-and-drop GUI is built around generating clean HTML files. In fact, the GUI is so reliant on the conventions of HTML that Webflow designer Sanchez-Olivera (2019) states that “designing in Webflow requires (and encourages) thinking less like a graphic designer and more like a front-end developer.” The Photoshop-like GUI abstracts the process of coding by offering a visual editor that transforms text box-like objects full of content into lines of code. Similarly, changes to color, size, and alignment are compiled into the CSS files, and animations and interactivity are added in JavaScript files. All of these files can be seen by the user as they create in the visual editor. When a user clones a theme, the structure and style of the page can similarly be found in the visual editor. By encapsulating the processes that are taking place, Webflow offers a design environment that non-technical users will find familiar. While the mental model that the GUI inspires may be difficult to learn because of its deep foundations in traditional web development, it leads users to understand more about the roles of HTML, CSS, and JavaScript files in the content that they are producing.

Webflow’s CMS also encapsulates the processes taken by the software in a way that encourages a useful mental model. By allowing users to create their own custom fields, Webflow sustains the flexibility that programmers have when creating websites. Further, Webflow allows users to create custom templates in a way that is much more reminiscent of traditional web development. This differs from WordPress’s content editor that has fixed input areas and does not allow the user to create custom templates within a theme – although experienced developers may create their own custom themes and templates to get around this issue.

While the visual editor and CMS afford participation in a way that WordPress does not, the lack of plugins may be more intimidating to non-technical users. Instead of pushing a button to activate a plugin, Webflow offers a series of modifications, called F’ing Sweet Hacks, that can be added to a site. F’ing Sweet Hacks are a collection of 45 “hacks” that users can copy and paste into their code. Each hack has a video that walks the user through each line of the code to explain what it does and how it can be modified. While this is an awesome learning tool for users who are unfamiliar with programming, the processes of the program are not abstracted in a way that is helpful to novice users.


Webflow’s structure enables users to create content in a way that mimics the flexibility of traditional web development. This is made possible by abstracting the creation of HTML, CSS, and JavaScript files into a visual, drag-and-drop editor. Webflow’s CMS also abstracts the creation of templates more effectively than WordPress because the user has the ability to create custom fields and design their own templates. This flexibility is more representative of the options that a front-end developer would have at their disposal. Although developers with a working knowledge of PHP can create their own custom fields, themes, and templates on WordPress, I argue that Webflow does a better job of abstracting these processes and encouraging an accurate mental model of what is occurring when users design and add content to their sites.

Maximizing Spatial and Encyclopedic Affordances

While the procedural and participatory affordances of WordPress and Webflow received in-depth discussion, the discussion of their spatial and encyclopedic affordances will be combined. This is because the spatial and encyclopedic affordances of digital media draw extensively from the procedural and participatory affordances, and much of this content would overlap with the previous discussion. Because the encyclopedic properties of WordPress and Webflow are largely the same, an abbreviated discussion will be included in the comparison section.

Murray (2011) writes that “digital space is created out of bits rather than bricks, and it rests upon the procedural and participatory affordances of computation” (p. 70). Digital media create space by using abstracted representations of physical objects like file folders, windows, and trash bins. By clicking, dragging, and manipulating these items, space is created. Murray (2011) points out, however, that we can only interact with these items in a way that reinforces their functions as pieces of information or chunks of code. In this way, the spatial affordance relies upon the procedural and participatory affordances of digital media. Therefore, when analyzing how Webflow and WordPress maximize the spatial properties afforded by digital media, it will be important to pay special attention to how manipulation of the objects reinforces their functions as pieces of code.

Because of the computer’s ability to store and transmit massive amounts of information, digital media are encyclopedic. Murray (2011) argues that the encyclopedic capability of the computer raises the expectations of the designer to clearly communicate and display content in a way that makes it easy for users to sort through and find information. When analyzing WordPress and Webflow’s maximization of the encyclopedic affordance of digital media, it will be important to determine how the sites allow content creators to organize their content in a clear and consistent manner.


While WordPress’s customizer allows users to change colors and fonts, it does not allow users to interact with the objects in the theme’s header and footer in a spatial manner. Similarly, in the block editor, where content is added, users can add different blocks of code but have little control over how this content is displayed on the page. There are a number of drag-and-drop plugins that users can install to build their content visually. Still, the lack of manipulable objects and the highly encapsulated nature of the theme files lead to an interface that does not maximize the spatial affordances offered by digital media.


Webflow’s visual editor maximizes the spatial affordances offered by digital media by creating a drag-and-drop interface that only allows the user to manipulate elements in ways that will lead to the generation of HTML and CSS files. This is perhaps why Sanchez-Olvera (2019) states that Webflow encourages users to think like front-end web developers. Rather than allowing users to drag and drop content in ways that would lead to unresponsive websites or poorly formatted code, Webflow reinforces the objects’ functions as pieces of code. For this reason, understanding basic front-end development principles equips Webflow users with a knowledge of what can and can’t be done in the visual editor. Overall, Webflow’s visual editor does a great job of maximizing the spatial affordances of digital media by representing lines of HTML code as objects in the visual editor and allowing users to set CSS properties in a GUI.


The visual editor of Webflow allows users to manipulate visual objects in a way that is not possible in WordPress. Because WordPress relies on PHP templates for themes, the number of customizable elements available to the user varies depending on which theme is installed, and the reasons for this inconsistency are hidden from the user. Webflow’s visual editor avoids this issue because users start with a blank canvas and can see the code being generated. Overall, Webflow maximizes the spatial affordance of digital media more successfully than WordPress.

Both WordPress and Webflow maximize the encyclopedic functions of the digital medium in similar ways. Webflow touts its ability to generate clean HTML files that make its pages easily searchable by search engine crawlers. However, a number of WordPress plugins, like Yoast, enable users to improve their site’s indexability. Because they are both CMSs, WordPress and Webflow also maximize the computer’s encyclopedic functions of storing and displaying content. For this reason, the two platforms are comparable in this regard.


WordPress and Webflow are two popular web creation platforms that allow nontechnical users to create web content with GUIs. While this has led to a democratization of web creation, Gentner and Nielsen’s (1996) “Anti-Mac Interface” offers several compelling reasons that GUIs may limit a user’s understanding of the functions that a computer or program can perform. However, after examining Murray’s (2011) four affordances of digital media, it becomes clear that GUIs that maximize the procedural, participatory, spatial, and encyclopedic affordances of digital media can allow users to interact with technology in rich and meaningful ways. Due to its open-source nature, WordPress is a massively modular system and maximizes the procedural affordances of digital media by dividing tasks and functions into subcomponents. For this reason, WordPress is scalable and can be used by novice users and massive companies alike. However, Webflow’s visual editor encapsulates the content and styling processes of web development in a way that encourages helpful mental models in the user and is therefore more effective in facilitating productive communication. In this way, the drag-and-drop GUI editor maximizes the participatory and spatial affordances by providing users with a useful way to create HTML, CSS, and JavaScript files without actually coding. Overall, I believe that a novice user may not only find Webflow easier to use because of the visual editor, but will also learn more about web development in the process.

If Webflow were to improve its product, I would urge it to maximize the procedural affordances available by creating a more modular system. While there is a budding community of integration and theme developers for Webflow, it pales in comparison to WordPress’s repositories. WordPress’s modularity has made it extremely versatile, and that may be one of the reasons it is so widely used. WordPress, on the other hand, may benefit greatly from using interface metaphors that encourage more apt mental models. While the encapsulation of information makes the platform less intimidating to novice users, it also reveals little about how the program works. This could be accomplished by offering new web creators more resources about the function of themes and plugins and how they implement the content a user adds. A more holistic understanding of this process would provide an important mental model that users can draw on when designing their websites. As a final note, it is worth stating that both WordPress and Webflow are fascinating case studies, and more research on their interfaces may illuminate ways to get more nontechnical users involved in website creation.


References

Baldwin, C. Y., & Clark, K. B. (2000). Design rules. MIT Press.

Cabot, J. (2018). WordPress: A content management system to democratize publishing. IEEE Software, 35(3), 89–92.

Gentner, D., & Nielsen, J. (1996). The anti-Mac interface. Communications of the ACM, 39(8), 70–82.

Hughes, J. (2017, January 19). An introduction to WordPress core files. Themeisle.

Irvine, M. (n.d.). Introducing Modular Design Principles. Unpublished Manuscript.

Manovich, L. (2013). Software takes command: Extending the language of new media. Bloomsbury.

Martinez-Caro, J.M., Aledo-Hernandez, A.J., Guillen-Perez, A., Sanchez-Iborra, R., & Cano, M.D. (2018). A comparative study of web content management systems. Information, 9(2), 27.

MDN Contributors. (2020, November 8). <iframe>: The Inline Frame element. MDN Web Docs.

Murray, J. H. (2011). Inventing the medium: Principles of interaction design as a cultural practice. MIT Press.

Sanchez-Olvera, A. (2019, February 13). Why Webflow is the best web design program right now. Prototypr.Io.

W3Techs. (2020, December 6). Usage statistics of content management systems. W3Techs: Web Technology Surveys.

Webflow vs. WordPress. (n.d.). Webflow.

WordPress Community. (n.d.). Plugin handbook. WordPress.Org Developer.

WordPress Community. (n.d.). Theme handbook. WordPress.Org Developer.

[i] This paper discusses the open-source WordPress software (WordPress.org) rather than the for-profit hosting service provided by WordPress.com.

“Bounded Browsers” in the NYT Cooking App

Mary Margaret Herring

For this response, I decided to download the New York Times Cooking app for iOS. Since the content featured on this app is also featured on the New York Times Cooking website, it should be an interesting case study when determining how the bounded features of the app compare to the website.

Immediately, I noticed that the content featured on the app and the website was extremely similar. Both prominently displayed the same seasonal categories of recipes. After choosing a category, both versions of the site display all the recipes in that category. Then, you select the recipe you’d like to view, and the app and website take you to it. On both platforms, users must log in to access the cooking site and are able to save recipes to their virtual recipe box. Users can also mark recipes as ‘cooked’ to keep track of what they have and haven’t made and add private notes to recipes.

At first, it seemed that the app would be preferable to the website because users could receive more personalized suggestions based on their app usage. However, since users log in before accessing the cooking site, their interactions with the site can easily be recorded. In fact, the site could even do this without the login by using cookies to identify users (Karp in Code.org, 2015). For this reason, the ability to receive personalized content or view previously saved recipes on the app isn’t different from the website.

But one feature that exists on the app and not on the website is the “Start Cooking” feature. This makes it easier to view the recipe while you’re cooking from your phone. The first page displays the needed ingredients with an interactive checkbox next to each one. The next page shows each step of the recipe, with the user’s current step bolded so that it is easier to follow. As a person who often uses their phone as a resource while cooking, this feature made me very excited because it did all the work of zooming into the recipe and preventing the screen from sleeping, eliminating the need to touch the screen with messy hands.

An external link opened within the NYT Cooking app for iOS.

A linked page accessed in the NYT Cooking app. When accessing this page, there is no search bar and the interface always provides a back arrow (top-left) so the user can return to the recipe.

Since the app had only one feature that the website was missing, I began to question why NYT had even made a cooking app. However, it soon became clear that the app exploits the principles of the web by containing the user in a much smaller section of it. As Carrie Anne points out in the Crash Course video (2017) on the World Wide Web, hyperlinks allow users to jump to other pages of content by clicking on material that is highlighted on the page they are viewing. When a user clicks on a link in the app, they are taken to the linked page, but a back arrow exists to bring them back to the recipe. Even if they click on more links from the second page, they always have the back button to return them to the recipe. The modified browser in the app exploits the ‘web-like’ design of the World Wide Web by keeping it encased in an interface that easily allows the user to return to the Times’ recipe they were previously viewing. On the website, however, when a user clicks an external link, they are taken away from the recipe to the linked page (why didn’t they choose to have this page open in a new tab? It’s so easy to do!). The user is then shown more content on the new page and is likely to get distracted or find relevant information somewhere else! Unless the user clicks the back button in the browser, their place in the site is lost, and they will have to jump through a number of hoops to find that recipe again. By contrast, the app creates a “bounded browser” that limits a user’s ability to freely jump from link to link by encouraging them to return to the content in the app. As Irvine (2018) states, “[a]pps thus de-Web the Web on many levels by simultaneously exploiting modularity and the open architecture of the Web and Internet for bundling specific functions and services that work only on the device-branded app” (p. 5).

Further, the bounded browser that appears in the Cooking app does not have a search feature. So, if the user finds a piece of information that sparks a new search query, they have to exit the app completely or search within the app’s repository of recipes. For these reasons, NYT’s cooking app has the economic benefit of retaining users for longer periods of time and getting them to access its recipes rather than competitors’ sites. This behavior runs contrary to that of the web because it keeps the user coming back to the Times’ recipes instead of freely browsing.

References

Code.org. (2015, Sep 28). The Internet: HTTP & HTML [Video]. YouTube.

CrashCourse. (2017, Oct 4). The World Wide Web: Crash Course Computer Science #30 [Video]. YouTube.

Irvine, M. (2018, Nov 12). The World Wide Web: From Open Extensible Design to Fragmented “Appification.” Unpublished Manuscript.

The Internet as a Design Philosophy

Mary Margaret Herring

Throughout this week’s readings, I couldn’t help but think of a scene from The IT Crowd that aired in 2009. In this scene, two IT ‘nerds’ lend their nontechnical manager “the internet” for her employee-of-the-month acceptance speech in an attempt to humiliate her in front of the company. They give her a black box with a blinking red light on top and explain that the “elders of the internet” have lent it to her for this special presentation. She is thrilled and eager to present it in her speech, and the IT guys are ready for her lack of tech knowledge to be displayed to their coworkers. Much to the IT employees’ dismay, though, no one at the company really knows what the internet is, and everyone believes that it is, in fact, the black box.

Once you get past the laugh tracks, I think that this scene from The IT Crowd was genius in the way that it captured how little internet users actually understand about the internet. While an operational knowledge of how to perform certain tasks is needed to ‘use’ the internet, the entity of the internet is veiled. When thinking about what it means to be “on the internet,” it is important to realize that the internet is not a thing but a vast infrastructure and a design philosophy. Irvine (2018) sums this up in a compelling manner by stating “The internet – both as an information infrastructure and as the networked media sources that we use and create – is enacted and performed as an ‘orchestrated combinatorial complexity’ by many actors, agencies, forces, and design implementations in complex physical and material technologies” (p. 9).

To be “on the internet” simply means that a device is running TCP/IP software and has an active IP address (Irvine, 2018, p. 6). But all of the jargon in the previous sentence makes this seem quite confusing, so I will try to apply the concepts I learned this week to elaborate on the process. The transmission of information between networked computers on the internet relies on protocols. Two important protocols are the Internet Protocol (IP) and the Transmission Control Protocol (TCP). To be connected to the internet, each computer needs an IP address. The IP address is a lot like a postal address: it provides a location where information can be sent. When someone streams a song on Spotify, for example, the data is broken down into smaller packets that are routed asynchronously to their destination. When they arrive, TCP works to reassemble the packets and ensure that all of them have arrived. This process repeats until the entire song has been played. The importance of following protocol becomes clear in this example: if the data were broken down in a way that TCP was unfamiliar with, it could not reassemble the packets. As Vint Cerf explains, “the internet is really a design philosophy and an architecture expressed in a set of protocols” (Code.org, 2015).
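The packet-switching process described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not the real TCP/IP stack, and the function names are my own: a message is split into numbered “packets,” which may arrive out of order, and the receiver uses the sequence numbers to reassemble them.

```python
import random

def packetize(message, size=4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the chunks."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Hello from the internet!")
random.shuffle(packets)  # packets are routed asynchronously, arriving out of order
assert reassemble(packets) == "Hello from the internet!"
```

The sequence numbers play the role of the ordering information TCP attaches to each packet; without a shared convention for them, the receiver could not put the message back together.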

I am unsure of how to reframe the conversation about the internet so that it is more accurate. This is mainly because it is easier to view the internet as a uniform technology, and people don’t need to understand what the internet is to use it. Most of my hesitancy to learn what the internet is comes from the complex jargon used and a fear of asking an ignorant question. I do think that the Crash Course and Code.org videos are very helpful for bringing these ideas to a general audience, though.

I am sure that I greatly over-simplified a very complex process but hopefully the gist of this was correct. After reading and watching the videos this week, I am still a bit confused on how the information travels. The Crash Course videos talked about queries for information going from LAN to WAN to mega routers but I found that to be quite confusing. Also, what are the ‘intermediary computers’ that packets travel through? Could we talk a bit more about these processes in class?

References

Code.org. (2015, Sept. 10). The Internet: IP Addresses & DNS [Video]. YouTube.

Irvine, M. (2018). The internet: Design principles and extensible futures. Unpublished manuscript.

The Direct Manipulation Interface of Target’s iOS App

Mary Margaret Herring

In our readings for this week, Irvine (2019) offers a model of our natural symbolic process cycle. In this model, representations or expressions (like written symbols or patterns of pixels) allow us to make interpretations which are output as actions (Irvine, 2019). In this response, I will be focusing on the representations and understanding how they lead users to interpret the information displayed. To illustrate this, I will use Target’s iOS app as a case study.

Target’s app follows the conventions of a direct manipulation interface to provide a satisfying user experience. Shneiderman et al. (2016) write “the central ideas in such satisfying interfaces, now widely referred to as direct manipulation interfaces, are visibility of the objects and actions of interest; rapid, reversible, incremental actions; and replacement of typed commands by a pointing action on the object of interest” (p. 230). Apple also discusses the importance of direct manipulation interfaces in their Human Interface Guidelines for iOS, stating “through direct manipulation, they [the user] can see the immediate, visible results of their actions.”

The homepage of Target's iOS app

The homepage of Target’s iOS app.

The first principle or attribute of a direct manipulation interface is that objects of interest are visible (Shneiderman et al., 2016). Target does this well by displaying a prominent search bar at the top of the screen. The red color likely draws the user’s eye there first and invites the user to input text to be shown relevant products. Beneath the search bar sits a “Shop your store” section that demonstrates to users how they can get items from Target. This section utilizes icons and short, bold headings to communicate that certain elements may be important to the user without overwhelming them with information. One interesting way that Target uses metaphor to display relevant information is by having some of the content boxes run off the screen horizontally (see the “Build your list” box in the “Shop your store” section). This communicates that the user can slide the contents of that section horizontally to see other options. By showing the most relevant information first while encouraging the user to see other possibilities, Target successfully reduces the amount of information displayed but invites the user to investigate the rest if it is relevant.

The top bar of the Target app

The nav bar at the bottom of the Target app

The top and bottom of the app. Users can go back by clicking on the left arrow at the top of the screen and can navigate within the app with the options at the bottom of the page.

The second principle or attribute of direct manipulation is that actions should be easily reversible (Shneiderman et al., 2016). The app demonstrates this by providing back arrows at the top of most pages and keeping the menu at the bottom of the page constant no matter where the user is in the app. This consistency likely encourages the user to explore other pages of the app while assuring them that there will be a way to get back to a familiar page quickly. However, the app is not consistent in displaying the back arrow at the top of the page, and the lack of this widely used convention on some pages may leave new or inexperienced users confused when navigating the app. The app also uses incremental actions to keep the user in control. When shopping by category, the user goes through a number of steps before actually seeing a product. Here is an example search for pet costumes through the categories interface:

Screenshots, left to right: the “shop by categories” page, the Halloween category page, the Pet Costumes category page, Target’s selection of pet costumes, and the product page for pet costumes in the Target iOS app.

This design allows for people who are browsing and once again relies on icons and headings rather than complicated text.

This brings us to the last principle of direct manipulation interfaces: replacement of complex syntax with buttons or icons (Shneiderman et al., 2016). I think that the app does a really great job of this overall. There is very little text on the homepage or on any of the product pages. Rather, the app communicates categories of content to the user with icons. The content of the categories pages is broken up into smaller sections where a few categories are displayed using icons and headings, followed by a color advertisement (e.g., Hyde & EEK! Boutique in the sequence of images above). Of course, the user can click on a product to receive more information about it. Waiting until a user requests the information reduces the user’s cognitive load and reveals only relevant information.

The representations used in the Target app indicate that the user can either search for relevant products or explore different categories of content. I would be interested to see how many users utilize the “shop by category” feature of the site because that is not something that I would use on a regular basis but it is clear that Target has devoted quite a lot of time into the interface of these sections.


References

Apple. (n.d.). iOS themes. Human Interface Guidelines.

Irvine, M. (2019). From cognitive interfaces to interaction design: Displays to touch screens. Unpublished Manuscript.

Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Designing the user interface: Strategies for effective human-computer interaction (6th ed.). Pearson.

Computers as “Personal Dynamic Media”

Mary Margaret Herring

While doing this week’s reading, I was fascinated by Kay’s idea of computers as a “metamedium.” Kay envisioned computers as a medium for displaying other forms of media in a number of different formats. Based on this idea of computers as a metamedium, I would like to think about any unfulfilled or uncompleted design ideas today that Kay and others may have had in that first wave of design concepts.

To answer this question, I would like to start by reflecting on the notion of transparency in interfaces. When completing a task like writing an email, users are typically focused on composing the message rather than navigating the interface of their computer and email program. In this sense, the interface should be transparent, allowing the user to focus on the task at hand. To encourage this transparency, many GUIs are designed to simulate other instances of completing the task that users are already familiar with. Applied to the email example, an email interface will likely borrow many elements from word processors the user already knows in order to appeal to their past experience and make the interface easier to use. Manovich summarizes the importance of simulation in Kay and Goldberg’s design of the Dynabook by stating, “[i]n short, when we use computers as a general-purpose medium for simulation, we want this medium to be completely ‘transparent’” (2013, p. 70). While Manovich was discussing a transparent interface for modeling data, it seems clear that users will want interfaces to be relatively transparent whenever they engage in any sort of goal-oriented behavior.

But, in Manovich’s book, it becomes clear that Kay recognized that the computer as a metamedium can afford many things that the original medium could not. Take, for example, a PDF. When designing an interface for a PDF viewer, it is important to replicate the experience of reading a book or a printed sheet of paper. This is achieved by allowing the user to highlight or annotate the PDF or jump to a page in the document. However, there are also certain functions that computers can perform that are not possible with books or printed sheets. For instance, users can search PDFs for certain words or phrases, and PDFs can contain hyperlinks that make it easier for users to find related information. Because the computer as an interactive medium affords these actions, it would be a shame not to take advantage of them. For this reason, the computer as a metamedium should enable users to utilize these functions as much as possible.
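The search affordance mentioned above is easy to make concrete. The toy Python sketch below (my own illustration, not a real PDF library) shows the kind of full-text lookup a printed book cannot offer: every “page” can be scanned instantly for a term.

```python
# Toy full-text search over a document's "pages" -- an affordance a
# printed book lacks. The page texts here are invented for illustration.
pages = [
    "Computers are a metamedium.",
    "A metamedium can simulate other media.",
    "Simulation should feel transparent.",
]

def search(term, pages):
    """Return the (1-indexed) page numbers whose text contains the term."""
    return [i for i, text in enumerate(pages, start=1) if term.lower() in text.lower()]

print(search("metamedium", pages))  # pages 1 and 2 contain the term
```

With a physical book, finding every occurrence of a word means reading every page or consulting a hand-built index; the computational medium makes the index implicit and instantaneous.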

In his history of modern computing, Irvine writes that “[m]any design innovators like Kay and Nelson continue to say that the computer revolution hasn’t yet begun” (2018, p. 12). I wholeheartedly agree with this statement. Kay envisioned the computer as “a ‘personal dynamic media’ which could be used for learning, discovery, and artistic creation” (Manovich, 2013, p. 61). Yet, somewhere along the line, it seems that the user came to be viewed as a passive consumer rather than an active agent. While simulating designs that users are already familiar with is a great way to make users feel more comfortable with a device, it also limits the number of things users can do with that technology. It seems that there is a delicate balance between acquainting users with their digital environment and allowing them to see and use the additional functions that the technology affords as a metamedium.

A brief personal note on how this reading applies to my research interests:

I’m interested in addressing the problem of disinformation from a user interface standpoint. My main quandary is how we can redesign the way that social media displays news to prompt users to think critically about the source and content before hitting the like or share button. Nine times out of ten, when I mention my research interests to someone, they say “well, users are lazy and don’t want to think critically” or “social media encourages passive scrolling.” While there is merit to this view, I also think that there are dozens of opportunities to take advantage of the affordances of social media and redesign it in a way that makes users more critical while preserving its good qualities (e.g., interactivity, community building).

As of yet, I’m not sure what these design changes might be. But reading about Kay’s original vision of computers as ‘personal dynamic media’ was extremely exciting for me and made me hopeful that some of these solutions might exist.


References

Irvine, M. (2018). Computing with symbolic-cognitive interfaces for all media systems: Design concepts that enabled modern “interactive” “metamedia” computers. Unpublished manuscript.

Manovich, L. (2013). Software takes command. Bloomsbury.

Computational Thinking

I found this week’s readings on the principles of computing to be especially interesting. The computer science courses that I’d taken only covered the syntax of programming languages. In these courses, I learned how to make rudimentary programs with C++, JavaScript, and Jython (a Java implementation of Python), but I never really understood the underlying design of the computing technology they ran on. In one of these courses, I even remember talking about bits and bytes and learning how to encode numbers and letters in binary. But we never talked about what the bits were. Even though we discussed information theory last week, I found Evans’ (2011) book extremely helpful in explaining how binary questions lessen uncertainty. His example of the number of bits in the outcome of tossing a six-sided die really helped me conceptualize how computers can convey information efficiently. Instead of evaluating whether or not the die landed on a particular number, a program can speed up the process by first checking whether the outcome is greater than or equal to 4. That question has roughly a one-in-two chance of being true, rather than the one-in-six chance of a specific guess, so each answer eliminates about half of the remaining possibilities.
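Evans’ die example can be sketched in a few lines of Python. This is my own toy version of the idea, not Evans’ code: since identifying one of six faces carries log2(6) ≈ 2.58 bits of information, at most three halving questions are ever needed, versus up to six one-at-a-time guesses.

```python
import math

# One of six equally likely faces carries log2(6) ~ 2.585 bits,
# so three yes/no questions always suffice.
print(math.log2(6))

def guess_roll(answer):
    """Find a die roll (1-6) by halving the candidates with binary questions."""
    candidates = list(range(1, 7))
    questions = 0
    while len(candidates) > 1:
        mid = len(candidates) // 2
        questions += 1
        # Ask: "Is the roll in the upper half of the remaining candidates?"
        if answer in candidates[mid:]:
            candidates = candidates[mid:]
        else:
            candidates = candidates[:mid]
    return candidates[0], questions

value, asked = guess_roll(5)
assert value == 5 and asked <= 3
```

Running `guess_roll` on every face shows that no roll ever takes more than three questions, which is exactly what the bit calculation predicts.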

On a broader level, an overarching theme that I noticed throughout the readings was that computer science is an interdisciplinary field. Wing (2006) argues that students, teachers, and parents need to know that “one can major in computer science and do anything” (p. 35). Her reasoning is that computational thinking is a form of analytical thinking that could yield solutions to problems in any discipline. Further, Evans (2011) argues that computer science has roots in engineering, hard science, and the liberal arts. It is intuitive to think of computer science as a form of engineering or science, but the link between computer science and the liberal arts seemed less clear. However, Evans’ argument that computer science has strong connections to the Trivium and Quadrivium was quite compelling. These arguments reframed the way I view computer science.

I also really enjoyed the LinkedIn Learning course on programming. Davis’ (2019) analogy of programming being like a recipe was particularly insightful. While that course was helpful, I’ve also been taking a LinkedIn Learning course called PHP for WordPress by Casabona (2020), and there I encountered a common theme from our class: modularity. For this reason, I’ll focus on the PHP course. In it, Casabona (2020) kept referencing functions that PHP provides. He then mentioned that WordPress built on PHP’s functions to create functions that are useful to WordPress developers. When he said that, I immediately thought of this class. Irvine (n.d.) writes that modular designs are “conceptual models of systems with interconnected subcomponents that can be implemented in real constructed things” (p. 1). While a program isn’t exactly physical, PHP’s developers defined functions based on the existing subcomponents of the programming language, and WordPress developers then expanded upon those functions to make certain tasks easier for theme developers. These layers of abstraction make it easier for new developers to perform certain functions. But if developers need a function to do something that no predefined function can do, they can always create their own using the subcomponents of PHP. This example wasn’t part of our reading, but I found it really interesting that developers could collaborate and build on the predefined PHP functions in WordPress.
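The layering described here (language functions, platform functions built on them, theme code built on those) can be mimicked in a short Python sketch. The function names below are invented for illustration; they are not real PHP or WordPress APIs.

```python
# Hypothetical illustration of layered, modular functions. Lower layers
# stand in for a language's built-ins; higher layers build on them, the
# way WordPress functions build on PHP's and themes build on WordPress's.

def escape_html(text):
    """'Language' layer: a low-level utility."""
    return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

def the_title(title):
    """'Platform' layer: built from the language layer."""
    return "<h1>" + escape_html(title) + "</h1>"

def render_post(title, body):
    """'Theme' layer: built from the platform layer."""
    return the_title(title) + "\n<p>" + escape_html(body) + "</p>"

print(render_post("Cats & Dogs", "A post about <pets>."))
```

A theme developer calling something like `render_post` never has to think about character escaping, yet nothing stops them from dropping down a layer and composing `escape_html` directly when no predefined function fits.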


References

Casabona, J. (2020). PHP for WordPress. LinkedIn Learning.

Davis, A. (2019). Programming foundations: Fundamentals. LinkedIn Learning.

Evans, D. (2011). Introduction to computing: Explorations in language, logic, and machines.

Irvine, M. (n.d.). Introducing modular design principles. Unpublished manuscript.

Wing, J. M. (2006, March). Computational thinking. Communications of the ACM, 49(3), 33–35.

Meaning in the Signal-Code-Transmission Model

Mary Margaret Herring

While the signal-code-transmission model of information satisfactorily accounts for the technical transmission of messages, it does not lend much help when decoding the meaning of a message. In Shannon’s original signal-transmission model of information, the source sends a message to the transmitter, which encodes the message into a signal. At this point, noise may be introduced into the system. The message is then decoded and arrives at its destination (cited in Irvine, n.d.). This linear process makes perfect sense on a purely technical level. For example, a person sending a message by telegram would fill out a form with their message. An operator would then transmit the message in Morse code to a receiver, who would decode the message and deliver it to the recipient. For this reason, the information theory model is essential to understanding the mathematical and engineering processes needed to get signals from the sender to the receiver.
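The telegraph example can be sketched as a minimal encoder and decoder in Python. This is a toy of my own making, with only a few letters of the Morse alphabet included; the point is that what travels over the wire is only a signal, which both ends must share a code to interpret.

```python
# Minimal sketch of the encode -> transmit -> decode pipeline in the
# telegraph example. Only three letters are included for brevity.
MORSE = {"H": "....", "I": "..", "S": "..."}
DECODE = {code: letter for letter, code in MORSE.items()}

def encode(message):
    """Source -> transmitter: turn letters into a signal."""
    return " ".join(MORSE[ch] for ch in message)

def decode(signal):
    """Receiver -> destination: turn the signal back into letters."""
    return "".join(DECODE[code] for code in signal.split(" "))

signal = encode("HI")
assert signal == ".... .."
assert decode(signal) == "HI"
```

Notice that nothing in `signal` carries the message’s meaning, context, or intent; the dots and dashes only become “HI” because sender and receiver share the same code table.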

However, meaning has no place in this linear model. As Irvine (n.d.) notes, the meanings of messages are not “baked into” the medium. In the example of the telegram, the only thing that is actually sent is a signal. Information about the message’s cultural context or assumed background knowledge is not included. From this strictly technical point of view, whether or not the message is meaningful is irrelevant. Floridi (2010) writes that an advantage of digital systems is that they can be represented equally well semantically, logico-mathematically, and physically. With digital technologies, Floridi writes, “[i]t is possible to construct machines that can recognize bits physically, behave logically on the basis of such recognition, and therefore manipulate data in ways which we find meaningful” (2010, p. 29). But this still raises the question of how meaning systems can be included in a model where the meaning of the message is irrelevant.

Since the signal-transmission model explains how digital signals are encoded and decoded but not how they become meaningful messages, perhaps it is time to apply the sign-referent interpretation proposed by Denning and Bell (2012). They argue that information consists of both signs and referents that we use to make sense of digital information. Denning and Bell use the example of seeing a red light (the sign) and our brains commanding us to stop (the referent). In the same way that we know to stop at a red light, we also know that a blue, underlined phrase usually indicates that the text contains a hyperlink to another file or webpage. By relying on these signs, we can make meaning from otherwise meaningless content.
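The sign-referent pairing can be pictured as a lookup table that the interpreter, not the channel, carries: a sign is just a signal until it is mapped to a referent. A toy sketch (the particular sign-referent pairs below are my own illustrative choices, not from Denning and Bell):

```python
# Toy sketch: a sign only becomes meaningful when an interpreter
# maps it to a referent; unmapped signs remain mere signals.
REFERENTS = {
    "red light": "stop",
    "blue underlined text": "follow the hyperlink",
}

def interpret(sign):
    # Return the referent this interpreter associates with the sign.
    return REFERENTS.get(sign, "no meaning assigned")
```

The point of the sketch is that the table lives on the receiving side: two interpreters with different tables can assign different meanings to the identical signal.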


Denning, P. J., & Bell, T. (2012). The information paradox. American Scientist, 100(November – December), 470-477.

Irvine, M. (n.d.). Introducing information and communication theory: The context of electrical signals engineering and digital encoding. Unpublished manuscript.

Floridi, L. (2010). Information: A very short introduction. Oxford University Press.

Affordances of Digital Media

Mary Margaret Herring

One part of this week’s reading that I found to be particularly interesting was Janet Murray’s (2012) argument that new media should instead be called digital media. She writes, “[c]alling objects made with computing technology ‘new’ media obscures the fact that it is the computer that is the defining difference not the novelty” (Murray, 2012, p. 8). Since this computing technology is central to the way that we interact with digital media, I’d like to apply Murray’s ideas to the screen interface on my laptop. To do this, I will start by examining Murray’s argument that there are four affordances of digital media and then discuss how those affordances shape the way that we interact with digital media.

Murray argues that digital media are procedural, participatory, spatial, and encyclopedic (Murray, 2012). Because computational technologies are programmed to execute conditional behaviors, Murray (2012) argues that digital media are procedural. Further, digital technologies are participatory because they are scripted for both the user and the machine, allowing the interactor and the machine to engage in exchanges that are meaningful to each other (Murray, 2012). Murray (2012) also states that digital media are spatial because they create digital space, and encyclopedic because they can store more forms of information than any prior medium could. While all four affordances are certainly interesting, I will focus on the encyclopedic and spatial properties of digital media.

I will apply these affordances to the home screen on a MacBook. Because computers are able to store large amounts of data, many people use digital media to store a large number of files and applications. The home screen creates a digital space where the interactor can access their files placed on the desktop and applications in the toolbar. To navigate this digital space, the laptop affords actions like using the trackpad or mouse. Similarly, text or shortcuts may be entered on the keyboard. These affordances allow users to interact with the space by clicking, dragging, and typing. Often users will organize files on their desktop. The designers of the Mac’s graphical user interface worked with the affordances offered by the laptop to allow the user to interact with content on the screen. As the home screen demonstrates, the encyclopedic and spatial properties that digital media afford enable users to interact with media in new ways.

But the true genius of the graphical user interface (GUI) comes from signifiers rather than affordances. Norman (2013, as cited in Kaptelinin, 2013) distinguishes between affordances and signifiers, writing, “[a]ffordances define what actions are possible. Signifiers specify how people discover those possibilities.” For example, Murray (2012) notes that file folders on the desktop can be renamed and organized like physical folders. This element of the GUI mimics conventions of the physical world and makes the organization of information intuitive to the user. In this way, the folder signifies the possibilities for organizing information on the computer by drawing on the user’s prior experience with physical file folders.

A brief tangent: In undergrad I conducted some A/B testing on a news sharing social media site that I created. While I wanted to include research about affordances in this study to motivate the modifications that I was making to the site, I found it extremely hard to understand. After this reading, I now understand the difference between affordances and signifiers and realized that I was actually looking to modify signifiers on the site rather than affordances. This clarification of a research keyword opens up a ton of possibilities for my future research!


Kaptelinin, V. (2013). Affordances. In The Encyclopedia of Human-Computer Interaction (2nd ed.).

Murray, J.H. (2012). Inventing the medium: Principles of interaction design as a cultural practice. MIT Press.

Cancelling Technology vs. Society Dualism

Mary Margaret Herring

Over the past few weeks, we have discussed the importance of viewing technology and society as two sides of the same coin. If I were explaining the importance of unifying the notions of technology and society to someone, I would cite Cole’s cultural psychology to point out that forms of technology are cultural artifacts and could not exist as they do without serving a cultural function. Further, I would argue that technology and society are co-mediated and therefore should not be viewed as distinct entities.

If we view technology as cultural artifacts, it becomes clear that technology and culture cannot be separated. Cole (1996) writes that an artifact is an “aspect of the material world that has been modified over the history of its incorporation into goal-directed human action” (p. 117). From this definition, it becomes clear that Cole believes that humans create artifacts to make it easier to accomplish certain tasks. For instance, the pulley was created to enable humans to lift heavy objects with little effort. The creation of a device like a pulley is embedded in layers of societal and cultural need. Pulleys may have originally been used to lift loads of water from wells and are now used for a variety of tasks like transporting construction materials to the tops of skyscrapers. Yet, it’s hard to imagine a form of technology as simple as a pulley lasting for thousands of years without serving a purpose. The pulley was created to enable humans to perform a task that couldn’t ordinarily be done. This brings us to the cultural part of Cole’s theory. Cole (1996) writes that culture can be understood as all of the artifacts used by a social group. I interpret this to mean that the artifacts accumulated by a group of people largely reflect that group’s motivations. In the same way that the pulley arose out of the human need to automate or simplify tasks, technology – as an artifact – arises out of human need. For this reason, culture is deeply embedded in technology and technology is deeply embedded in our culture.

Additionally, technology acts as a mediating force through which societal institutions can be transmitted. Debray (1999) illustrates this well when he uses the example of a nation. He argues that we can see the mediating factors of a nation when we examine the networks underlying this idea like roads and postal codes. It is important to realize that these concepts are not distinct because they are co-mediated and operate as a system. The nation is somewhat dependent on the networks of roads and postal codes and the roads and postal codes would be pointless without the unifying idea of a nation. When we extend this example to technology, we see that the idea of technology could not function without many complex networks, like internet connectivity, underlying it. Like the nation and road system, technology cannot be distinct from culture because our culture relies on technological systems and our cultural values are deeply embedded in technology.

For these reasons, it seems plausible to dismiss the idea that technology is distinct from society. As Irvine (n.d.) writes, it seems absurd to talk about the effects that technology can have on society as if they are distinct, causally related entities. Rather, it makes more sense to view technology and society as members of a system that are connected.


Cole, M. (1996). Cultural psychology: A once and future discipline. The Belknap Press of Harvard University Press.

Debray, R. (August 1999). What is mediology? (Martin Irvine, Trans.). Le Monde Diplomatique.

Irvine, M. (n.d.). Understanding media, mediation, and sociotechnical artefacts: Methods for de-blackboxing. Unpublished Manuscript.

Cognitive Artifacts – Business Cards and Clocks

Mary Margaret Herring

When reflecting on this week’s prompt, I thought about the business card as an example of an everyday cognitive artifact. Norman (1991) defines a cognitive artifact as “an artificial device designed to maintain, display, or operate upon information in order to serve a representational function” (p. 17). In this way, business cards can be thought of as cognitive artifacts because they store information about a person on a card that can be shared. On a business card, a person usually lists their name, title, and contact information, as well as the logo of the business they work for. The card can be shared with others to help them remember information about the business person – like their title or where they work – and how to get in touch. Business cards allow us to off-load memory onto the card.

While considering business cards, it became clear to me that their cognitive function has been replicated online. Businesses use “Meet our Staff” pages to introduce their employees and provide their contact information. A person can also consolidate their business cards by adding people’s information to their phone as contacts. Similarly, Google knowledge panels pull contact information from websites and display it in the search results for easy access. While the method of storing and displaying information online is much more complex than handing someone a paper card, the function of the business card as a way to off-load memory remains the same.
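The phone-contact example can be made concrete with vCard, the record format that address books actually exchange. The sketch below (with made-up contact details) shows a paper card’s fields off-loaded into a structured record:

```python
# Sketch: the fields of a paper business card off-loaded into a
# vCard 3.0 record, the format phone address books import and export.
# vCard lines are joined with CRLF per the specification.
def card_to_vcard(name, title, org, email):
    return "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"TITLE:{title}",
        f"ORG:{org}",
        f"EMAIL:{email}",
        "END:VCARD",
    ])

vcard = card_to_vcard("Jane Doe", "Sales Manager", "Acme Co.", "jane@acme.example")
```

The record carries exactly the same representational content as the paper card; only the storage and retrieval mechanism has changed.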

Another cognitive artifact that I considered was a timer. I often set timers while cooking so that I can focus on other tasks while one part of my meal cooks. By setting a timer, I can free up some mental space by not having to keep a mental record of how much time has passed. It seems that the design of timers or clocks can be manipulated in interfaces as well. For example, when I use Netflix on my phone, I have to swipe down on the screen to see what time it is. The clock is hidden from me. Similarly, when a person presents a PowerPoint presentation, their clock is hidden. I suspect that these interfaces are designed in this way to keep the audience from keeping tabs on the time. Netflix wants users to stream their content longer and PowerPoint wants the audience to focus on the presenter’s message. It is harder to accomplish both of those tasks when there is a clock reminding everyone that they have assignments to complete today or another meeting to attend in 5 minutes!

By viewing technologies as symbolic-cognitive artifacts, the designer is better able to understand the function that the technology needs to have. Irvine (n.d.) states that “[w]e are simply at one point in a longer ‘cognitive continuum’ that begins with language and symbolic representation, and expands into our ability to think with and represent multiple levels of abstraction” (p. 2). If we apply this idea to the business card, we can see how such a simple artifact can be adapted into a technological form with essentially the same function. Apps allow users to swap contact information, businesses upload their contact information to Google knowledge panels to make it easier for users to find, and many (higher-level) employees can be found on company websites. While these technologies are much more complicated, their function is not that different from a simple business card’s.


Irvine, M. (n.d.). Introduction to Cognitive Artefacts for Design Thinking. Manuscript in Progress.

Norman, D.A. (1991). Cognitive Artifacts. In J.M. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface. Cambridge University Press.