Author Archives: Alexander Macgregor

Building The Cloud: Principles in Information Technology Design


If one watches the tech space for a long enough period of time, one will start to notice a recurring pattern of new and trendy technologies being touted as the “Next Big Thing”. A quick glance at the current state of our tech discourse reveals a bevy of tools and technological phenomena that promise to be socially and economically transformational. Virtual/Augmented Reality, Artificial Intelligence, Big Data, the Internet of Things…all of these technologies have taken up prime position in our tech consciousness, but it is the advancements in Cloud Computing that I find most interesting.

The transition from our traditional Information Technology infrastructure to the Cloud environment has been one that encompasses many of the principles central to the digital economy. As we move more and more of our social lives and economic activity to the Internet, design concepts such as scalability, distribution, resource pooling, virtualization, and rapid elasticity will form the bedrock foundation for how we create, move and compute in this new digital environment. Yet while we acknowledge their ubiquity and importance in modern times, it’s important to understand the history of these design principles and how they have taken shape during the evolution of Information Technology.

In this paper, I intend to trace that history – from the era of tabulating machines and large mainframes to the modern cloud era – and show how the Cloud is the furthest advancement of the design principles at the foundation of Information Technology, crucial in unlocking the business potential of this new, modern digital economy.


A Quick History of the Early IT Environment: From Hollerith to IBM System/360

Merriam-Webster defines Information Technology as “the technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data” (“Information Technology”, Merriam-Webster). In the early portion of the 20th Century, this meant using tabulating machines, both large and relatively limited in their functionality. These machines, designed by Herman Hollerith, were electromechanical in operation and used punch cards symbolically representing data to execute large computations. The first major application of these tabulating machines was the 1890 US Census, which was completed in a much faster and more cost-effective manner than the previous census. It quickly became apparent that these tabulating machines would be computationally useful in business contexts for large firms like the railroad companies that dominated the era, particularly for tasks such as accounting, payroll, billing and tracking inventory (Zittrain, 11-12).

Figure 1: Herman Hollerith’s Electric Tabulating Machine.

The high level of functional knowledge needed to operate these machines meant that the firms using them preferred to rent them from Hollerith rather than purchase them outright. This way, there was a direct vendor they could appeal to if something went wrong.

Decades of progress in computational hardware and theory led to the onset of the electronic computer era of the mid-20th Century, in which the tabulating machine gave way to the mainframe computer as the dominant player in the Information Technology arena. In the early years, these computers were behemoths that would take up entire rooms, but compared to the tabulating machines that came before them, mainframe computers had much more functionality and versatility, and could process amounts of data previously unheard of.

Figure 2: IBM System/360. Released in 1965, it went on to become one of the most commercially successful and influential products in the history of computing. (NBC)

Extremely expensive and requiring customization, these computers were initially used only by government agencies and large corporations, but they would go on to serve as the backbone of the 20th Century Information Technology landscape. They were designed to process the ever-increasing deluge of data being produced by the modern economy, which included such novel concerns as real-time financial transactions, logistical computing, enterprise resource planning, speed ordering from wholesalers, and instant reservation services (“IBM Mainframes”, IBM). IBM was the principal player in the design and construction of the mainframes of this era, but it faced competition from a group of companies known by the acronym BUNCH – Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell. Together, these companies were responsible for much of the innovation surrounding the new machines that served as the engine of 20th Century business.

With the 1980s came the Personal Computer revolution, which took computers – once the domain of business and government – and put them in the hands of the general population. Information Technology models would have to adjust to this proliferation of computational literacy. The ability to link computers together was an idea pushed forward by this new computational environment, and the Internet – in development since the 1960s – was brought to the masses (Campbell-Kelly & Aspray, 275). Networking – a concept previously only used by higher education institutions and select organizations – was now a major possibility for the enterprise computing community. The further increases in the amount of data production led to larger, more powerful mainframes and servers being built, but a new Information Technology model would be needed to truly wrestle with the features and components of this new digital environment.

Figure 3: Digital Equipment Corporation’s MicroVAX 3600. Unveiled in 1987. We can see the computer shrinking and becoming more ubiquitous. (Computer Weekly)

Design Principles of the Early IT Environment: RAS and More

In terms of principles used for the design and building of mainframe computers, the acronym “RAS” – which stood for Reliability, Availability, and Serviceability – was the accepted industry guiding philosophy. According to IBM – the industry leader in mainframe design – “When we say that a particular computer system ‘exhibits RAS characteristics’ we mean that its design places a high priority on the system remaining in service at all times” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM).

According to IBM, reliability is when “The system’s hardware components have extensive self-checking and self-recovery capabilities. The system’s software reliability is a result of extensive testing and the ability to make quick updates for detected problems” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). One can imagine how this would be crucial when dealing with systems highly sensitive to downtime. Mainframes in early enterprise environments were not as reliably built as computers are today, and calculations took orders of magnitude longer. This, along with their steep purchase price, meant that if they were not reliable in their operation, the vendor was likely to lose a customer. This was early in the adoption stage, so discomfort with the technology would only be exacerbated by an unreliable machine.

Availability was defined as the ability for “the system [to] recover from a failed component without impacting the rest of the running system. This term applies to hardware recovery (the automatic replacing of failed elements with spares) and software recovery (the layers of error recovery that are provided by the operating system)” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). While the automated nature of this process would only arrive in later mainframe models, the basic rationale behind this design principle is to account for system failure. These machines were incredibly complex and multifaceted, so a single failed component deactivating the entire machine was a highly unwanted outcome. Ideally, the machine would keep running while replacements and/or fixes were made, giving it robustness.

Serviceability was considered to be in effect when “The system can determine why a failure occurred. This capability allows for the replacement of hardware and software elements while impacting as little of the operational system as possible. This term also implies well-defined units of replacement, either hardware or software” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). Good design accounts for the instances in which something goes wrong, and serviceability speaks to that concern. Diagnostic analysis is a large part of any computational maintenance, and being able to identify and fix the problem once discovered without compromising the operation of the entire system would have been a large advantage in the early Information Technology environment.

While RAS served as a central guideline for mainframe design, there were other design concepts that were beginning to take root in the early Information Technology landscape. Variability was one such concept that became central as technology progressed. Early mainframe systems had to be custom built for their purpose, such as IBM’s Sabre (Semi-Automatic Business Research Environment), which was a central reservation system designed and built just for American Airlines (“Sabre, The First Online Reservation System”, IBM). The mainframe industry would soon abandon this customized model for a more modular model of inter-compatible families of mainframe computer systems that would be applicable to many purposes and many business contexts.

Figure 4: Promotional material for the American Airlines-IBM SABRE airline reservation system. (IBM)

As the population grew, so did the amounts and types of data that needed to be crunched by businesses. Instead of computing one single massive problem, the computers of this era needed to be able to compute numerous smaller, simpler transactions and data points. Real-time transaction processing was a feature of mainframe computers that was key to unlocking many of the abilities we now have, such as airline reservations and credit card authorizations. Mainframe designers dealt with this requirement by increasing the number of I/O (Input/Output) channels for connectivity and scalability purposes (“Mainframe Hardware: I/O Connectivity”, IBM).
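The throughput gain from parallel I/O channels can be sketched loosely in ordinary Python (this is an analogy, not mainframe code; all names and numbers are illustrative): a pool of worker threads stands in for channels, so many small, independent transactions are serviced concurrently instead of queuing behind a single path.

```python
# A toy sketch of why parallel I/O paths raise transaction throughput:
# independent small transactions run concurrently rather than serially.
from concurrent.futures import ThreadPoolExecutor

def process_transaction(txn_id):
    # Stand-in for one small unit of work, e.g. a seat reservation
    # or a credit card authorization.
    return f"txn-{txn_id}: committed"

def run_batch(num_transactions, num_channels):
    # Each worker thread plays the role of one I/O channel.
    with ThreadPoolExecutor(max_workers=num_channels) as channels:
        return list(channels.map(process_transaction, range(num_transactions)))

results = run_batch(num_transactions=8, num_channels=4)
print(len(results), results[0])  # 8 txn-0: committed
```

With four "channels", the eight transactions are serviced in parallel waves rather than one after another, which is the essence of the scalability argument above.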

A Quick History of the Modern IT Environment: Clients and Clouds

The immediate predecessor to the cloud computing model was the client-server model. According to Ron White,

“In a client/server network, one central computer is the file server. The server contains programs and data files that can be accessed by other computers in the network. Servers are often faster and more powerful than personal computers…Personal computers attached to a server are the clients. Clients run the gamut from fat clients – computers that run most programs from their own hard drives and use a minimum of network services – to inexpensive thin clients that might have no hard drive at all. They run programs and graphics using their own microprocessor, but depend entirely on a server to access programs and store data files. A dumb terminal is a monitor, keyboard, and the bare minimum of hardware needed to connect them to the network. It uses the server’s microprocessor to perform all functions.” (White, 318)

The key design feature of this model is that multiple client computers are networked and can connect to a central server, onto which multiple computational functions and resources are offloaded. Whether it’s a file server as detailed above, a print server allowing everyone on the same network shared access to a printer, or a communications server allowing shared access to an internal email system and Internet services, the client is able to collaborate with other clients in its network.
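White’s description can be reduced to a minimal sketch in Python’s standard `socket` module: one central process holds the data, and a client dials in to request it. The request format and file name here are made up for illustration; real file servers use far richer protocols.

```python
# A minimal client-server sketch: the server owns the data, the client
# connects over the network to fetch it.
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0: let the OS pick a free port
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()      # wait for a single client
        request = conn.recv(1024)   # e.g. b"GET report.txt"
        conn.sendall(b"contents of " + request.split()[-1])
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]     # the port the client should dial

def client_fetch(port):
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(b"GET report.txt")
    data = cli.recv(1024)
    cli.close()
    return data

port = serve_once()
print(client_fetch(port))  # b'contents of report.txt'
```

The client here is a "thin" one in White’s sense: it holds no files of its own and depends entirely on the server for its data.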

Figure 5: Illustration of the Client-Server model. (The Tech Terms Computer Dictionary)

This had massive implications for enterprise environments, consequently creating an entire industry around enterprise Information Technology management. The client-server model would go on to become the dominant Information Technology model of the 1990s and early 2000s, and it was out of this model that cloud computing was born. By taking the server in the client-server model and replacing it with a collection of interconnected servers run and maintained by a cloud hosting company, many design principles were able to evolve and realize their full potential.

Figure 6: Traditional Hosting Model vs Cloud Hosting Model. (Wicky Design)

Large tech companies – such as Amazon, Salesforce, Microsoft, Google and IBM – would build giant warehouses containing 100,000 servers, and then rent out portions of their mammoth server capacity to other companies. These cloud hosting services would be capable of replacing much of the high-cost Information Technology work previously done on-site (Patterson & Hennessy, 7). They were also able to offer a whole new model of services through the cloud. By breaking their offerings down into three distinct layers – infrastructure, platform, and application – these companies could segment the specific Information Technology services being used by modern businesses and offer customizable services tailored to the Information Technology desires and needs of the specific client (Campbell-Kelly & Aspray, 300).

Design Principles of the Modern IT Environment: Old Ideas, New Technology

Although the technological manifestations are novel, many of the design principles that went into architecting the cloud computing Information Technology model are borrowed from older IT models.

Distribution and Redundancy

As we saw in mainframe RAS design, a high priority was placed on the system being on and available at all times. This design principle is taken to the next level and fully actualized by cloud computing. The nature of our modern socioeconomic environment requires a system ready to process activity 24 hours a day, 7 days a week. One of the main promises of the cloud is to always be available, which is accomplished through redundancy and distribution. Whereas the early mainframe model was susceptible to shutdowns if the mainframe computer malfunctioned, the client-server model improved on this disadvantage by distributing the computational load to multiple servers that would jointly handle requests. If one server went down, the others could pick up the slack. Although this model was an improvement, it still left the network susceptible to breakdown if the server site was compromised. Cloud computing addresses this concern by further decentralizing the computational hardware. Instead of on-site servers, cloud servers are housed in server farms across the country and globe, accessed via the Internet. This mitigates the risk of regional difficulties and allows for a much more distributed computational network.
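The availability-through-redundancy idea can be sketched in a few lines of Python: a request is retried against replicated servers until one succeeds. The region names and handlers below are hypothetical stand-ins, not any vendor’s API.

```python
# A sketch of failover across redundant replicas: the caller never sees a
# single-site outage as long as one replica anywhere is still healthy.
def fetch_with_failover(replicas, request):
    last_error = None
    for replica in replicas:            # e.g. data centers in different regions
        try:
            return replica(request)     # first healthy replica wins
        except ConnectionError as err:
            last_error = err            # this replica is down; try the next
    raise RuntimeError("all replicas unavailable") from last_error

def down(request):
    # Stand-in for a server farm knocked out by a regional failure.
    raise ConnectionError("us-east is offline")

def up(request):
    return f"served {request} from eu-west"

print(fetch_with_failover([down, up], "GET /profile"))
# served GET /profile from eu-west
```

The geographic distribution described above is what makes the fallback list meaningful: replicas in different regions rarely fail together.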

Figure 7: Cloud servers in various locations across the globe, accessible via the internet. (My Techlogy)

Rapid Elasticity

The ability to scale your Information Technology architecture up and down as, and when, you need it is a key business feature of the modern economy. According to Rountree and Castrillo, “The rapid elasticity feature of cloud implementations is what enables them to be able to handle the “burst” capacity needed by many of their users. Burst capacity is an increased capacity that is needed for only a short period of time” (Rountree & Castrillo, 5). An example of this would be a seasonal business. You would need larger processing capabilities in season than out of season, and in a non-cloud Information Technology model you would be required to pay for computing resources that can reach that maximum threshold but would be underutilized when your business is out of season. The customizability of the cloud allows for a “pay for what you use” model of resource allocation, much like utilities such as water or electricity. This extensibility is crucial to survival in the modern economic environment, allowing businesses to grow at scales of efficiency previously unreachable.
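The seasonal-business example can be made concrete with some back-of-the-envelope Python. All figures here are invented for illustration: demand in requests per hour, cost measured in server-months.

```python
# Fixed provisioning vs. elastic provisioning for a seasonal workload.
import math

def servers_needed(requests_per_hour, capacity_per_server=10_000):
    # Round up: even a trickle of traffic needs one server.
    return max(1, math.ceil(requests_per_hour / capacity_per_server))

# Illustrative demand for three sample months, with a December burst.
monthly_demand = {"Jan": 4_000, "Jun": 95_000, "Dec": 180_000}

# Non-cloud: provision for the December peak in every month.
fixed = servers_needed(max(monthly_demand.values())) * len(monthly_demand)

# Cloud: scale up for the burst and back down after -- pay for what you use.
elastic = sum(servers_needed(d) for d in monthly_demand.values())

print(fixed, elastic)  # 54 29
```

Under these toy numbers, elastic provisioning uses 29 server-months where peak provisioning would bill for 54: the utility-style "pay for what you use" saving in miniature.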

Figure 8: Only pay for what you use. (London Metropolitan University)

Virtualization and Resource Pooling

As is the case with all technology, the moment of mass adoption is rarely the moment of invention. Technologies usually have long gestation periods, waiting until the stars align to cross the threshold of commercial implementation. Virtualization technology was no different. All the way back in 1972, IBM released its Virtual Machine Facility/370 operating system, designed to be used on its System/370 mainframes. After going through ebbs and flows of relevance, that original virtualization system now serves as the foundation of IBM’s current z/VM virtualization system. According to Rountree and Castrillo, “With virtualization, you are able to host multiple virtual systems on one physical system. This has cut down implementation costs. You don’t need to have separate physical systems for each customer. In addition, virtualization allows for resource pooling and increased utilization of a physical system” (Rountree & Castrillo, 12). Without virtualization, the cloud would lose much of its ability to deliver cross-OS services to customers, and would lose the ability to function as a SaaS, PaaS, or IaaS platform.
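The resource-pooling gain can be sketched as a simple first-fit placement of virtual machines onto shared physical hosts. This is a toy scheduler, not how z/VM or any real hypervisor places guests; sizes are illustrative gigabytes of RAM.

```python
# Resource pooling in miniature: pack VMs onto shared hosts so fewer
# physical systems are needed than with one machine per customer.
def place_vms(vm_sizes, host_capacity):
    hosts = []          # free capacity remaining on each powered-on host
    placement = []      # which host index each VM landed on
    for size in vm_sizes:
        for i, free in enumerate(hosts):
            if free >= size:                # reuse a host with room (pooling)
                hosts[i] -= size
                placement.append(i)
                break
        else:                               # nothing fits: power on another host
            hosts.append(host_capacity - size)
            placement.append(len(hosts) - 1)
    return placement, len(hosts)

placement, hosts_used = place_vms([8, 4, 16, 2, 6], host_capacity=32)
print(hosts_used)  # 2 physical hosts instead of 5 separate machines
```

Five customer workloads land on two physical hosts instead of five dedicated machines, which is exactly the cost reduction Rountree and Castrillo describe.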

Figure 9: Platform/Application/Infrastructure as a Service are all available through cloud virtualization. (Business News Daily)

Ease of Maintenance

Maintaining your computational infrastructure is a major factor when it comes to the design of your Information Technology environment. Hardware and software upgrades are a major resource sink for any organization with a robust Information Technology footprint, and can slow down the productivity and efficiency of the system. This was the case with the pre-cloud mainframe and client-server models, where any maintenance had to be completed on-site, as that is where the hardware was stored. Now, as the hardware is hosted by the cloud service provider – be it Amazon, Microsoft, or IBM – the provider handles the updates and maintenance. As Rountree and Castrillo put it, “You don’t have to worry about spending time trying to manage multiple servers and multitudes of disparate client systems. You don’t have to worry about the downtime caused by maintenance windows. There will be few instances where administrators will have to come into the office after hours to make system changes. Also, having to maintain maintenance and support agreements with multiple vendors can be very costly. In a cloud environment, you only have to maintain an agreement with the service provider” (Rountree & Castrillo, 10).

Figure 10: Cloud vendors handle maintenance, leaving the client to focus on more important matters. (Elucidat Blog)

Modularity & Combinatorial Design 

Just as the mainframe model of the mid-20th century transitioned from highly specialized, custom built machines to general purpose machines, the cloud model is able to serve a multitude of customers due to its inherently modular design. According to Barbara van Schewick, “The goal of modularity is to create architectures whose components can be designed independently but still work together” (van Schewick, 38). The application-layer offerings of cloud computing vendors allow individuals and companies to create apps, websites, and other digital offerings by combining modules of pre-existing tools and building blocks (“How Cloud Computing Is Changing the Software Stack”, Upwork). A cloud computing data warehouse is a lesson in modularity, as the servers that fill them are standardized pieces of hardware that can be slotted in and out to shift capabilities and resources as needed. By following this modular design principle, the other cloud design principles – such as scalability and ease of maintenance – are augmented.
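Van Schewick’s definition translates naturally into code: components designed independently still work together because each honors the same narrow interface. The `Storage` interface and module names below are invented for illustration.

```python
# Modularity in miniature: any module honoring the Storage interface can be
# slotted into the application without changing the application's code.
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class InMemoryStorage:
    """One interchangeable module; a disk- or cloud-backed one could
    replace it, as long as it exposes the same save/load interface."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def run_app(storage: Storage) -> str:
    # The app knows only the interface, never the module behind it.
    storage.save("greeting", "hello from a swappable module")
    return storage.load("greeting")

print(run_app(InMemoryStorage()))
```

Swapping modules behind a stable interface is the software analogue of slotting standardized servers in and out of a data warehouse.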

Figure 11: Microsoft Cloud Server Farm. (Data Center Frontier)



In conclusion, while the evolution of Information Technology has been a long and storied one, the design principles undergirding the progress have been somewhat consistent. The promise of enterprise computing has always been to expand and improve upon the capabilities of the computational landscape. From the tabulating machines of the early-20th Century to the 21st Century cloud computing services and platforms, we see certain features and design values hold throughout the various technological iterations. In what form the next advancement in the evolution of Information Technology will appear is uncertain, but having a grounding in these basic design principles will provide one with the necessary toolkit to understand and impact this field.



Zittrain, Jonathan. The Future of the Internet–And How To Stop It. Yale University Press, 2009.

Campbell-Kelly, Martin, et al. Computer: a History of the Information Machine. Westview Press, 2016.

White, Ron. How Computers Work. Que Publishing, 2008.

Rountree, Derrick, and Ileana Castrillo. The Basics of Cloud Computing: Understanding The Fundamentals of Cloud Computing in Theory and Practice. Syngress, 2014.

Patterson, David A., and John L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, 2014.

“Information Technology.” Merriam-Webster, n.d. Web. 15 Dec. 2017.

“IBM Mainframes.” IBM Archives,

“Mainframe Strengths: Reliability, Availability, and Serviceability.” IBM Knowledge Center,

“Sabre: The First Online Reservation System.” IBM100 – Icons of Progress,

“Mainframe Hardware: I/O Connectivity.” IBM Knowledge Center,

“Types of Network Server.” Higher National Computing: E-Learning Materials,

Mahoney, Michael S. “The Histories of Computing(S).” Interdisciplinary Science Reviews, vol. 30, no. 2, June 2005, pp. 119-135. EBSCOhost, doi:10.1179/030801805X25927.

Denning, Peter J., and Craig H. Martell. “Great Principles of Computing.” MIT Press, 15 Jan. 2015,

van Schewick, Barbara. Internet Architecture and Innovation. MIT Press, 2012.

Wodehouse, Carey. “How Cloud Computing Is Changing the Software Stack.” Upwork, 25 Nov. 2017,

We Live In The Cloud Now.

In preparation for my final paper, I’ve been doing a lot of research on cloud computing and virtualization technology. What initially drew me in was the idea of further abstracting the computational process from the user’s viewpoint: the ability to access the full store of applications and features we’ve become accustomed to on a regular PC or enterprise suite, but without the accompanying hardware (or software) constraints. As I’ve learned about this process, I’ve come to see that certain design principles of the web are central to the functionality of these technologies.

Cloud computing is an inherently combinatorial design technology. It takes the various software, IT infrastructure, and platform services, and combines them with the extensibility and network-ability of the internet to produce a widely accessible and scalable virtualized environment. The user is no longer constrained by the processing powers “on-site”, as they can access the servers of large corporations with industrial sized computational powers. This has been the key to the Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service models that are so dominant today. Anytime you use the Google Suite, Microsoft Suite or Amazon Web Services, you’re connecting to the mammoth server powers of that company.


Modular architecture of the software layer and hardware components is also a central design feature of these technologies, particularly when it comes to server design. In the 60s and 70s, the dominant mode of computing was to use “large” mainframes with dumb terminal ports used to access the mainframe and its computation power. Virtualization is essentially going back to that model, but using the presence of the internet to give modern mainframes the ability to provide an IP address that “thin clients” can connect to and access from anywhere with internet connectivity. This is highly modular as any thin client can access any server, so you’re not rooted to any one device. Easy to switch out. Easy to move. Scalable. Extensible.

In analyzing the role of the web in this technology, one must also reckon with the entire ecosystem of industry relations. Amazon, Microsoft, IBM and other Cloud Computing industry leaders are all deeply embedded in an existent socio-economic framework, and reliant on the older forms of infrastructure, originally built for the telegraph and telephone, to transmit their data. As we move much more of our computing to the internet-based cloud, the owners and operators of these transmission lines become highly important players in the ecosystem.

The internet has made it possible to access processing powers far beyond your own “on-site” capability, meaning you no longer have to go through the traditional effort of installation and maintenance. You can just sign in and access. From a user perspective, the emergence and predicted dominance of the cloud-enabled Software-as-a-Service and Application-as-a-Service models will have dramatic implications for how we conceptualize and use computers. In his book “The Future of the Internet — And How To Stop It”, Jonathan Zittrain lays out a host of concerns around the “tethering” of appliances and privacy issues that could come out of this model, which is all the more reason to apply conscious design principles to this still-growing technology.


1. Jonathan Zittrain, The Future of the Internet–And How to Stop It. New Haven, CT: Yale University Press, 2009.

2. Ron White, How Computers Work. 9th ed. Que Publishing, 2007

3. Janna Anderson, and Lee Rainie. “The Future of Apps and Web.” Pew Research Center’s Internet & American Life Project, March 23, 2012.

Some Thoughts on the Nature of the Internet

This week’s readings were helpful in fleshing out what is probably the most important technology/platform of our age – the Internet. The ways in which we, as users/laymen, conceptualize the Internet have critical ramifications for how it is used, and how it’s being continually built. One of the overarching points I took from the readings was just how open and non-deterministic the evolution of the Internet has been. A series of conscious choices and decisions by various actors are what led us to the current incarnation of the Internet. Had other choices and decisions been made, we could have had a radically different outcome. For example, had the NSF not relaxed restrictions on commercial use of the Internet in 1991, the vast majority of web content we now consume would have to be drastically altered or removed.

I found Ron White’s metaphor of the internet as a living organism, like a body, to be very useful in breaking down the popular conception of “the Internet” as an immutable, definitive technological platform or entity. Thinking of the various actors as organs and molecules that are subsumed by the entirety of the body helps to understand their limitations and the contextual relationships that make up the Internet. It is truly a socio-technical system, and getting all of these actors and institutions to work together requires norms and standards. Before this class, I had been guilty of overlooking the importance of these standards, but I now understand how they truly make up the backbone of the Internet.

To be “on the Internet” means to be an active node in this amorphous, global network. You are a single actor using a machine that is engaged in a series of digital communications with many other machines within this network (and sub-networks). What I find interesting is the blending of physical infrastructure (much of which was designed and built with older forms of communication in mind) with the novel digital infrastructure of TCP/IP, WAN, LAN, peer-to-peer networks, DSLs, modems, etc. This is a great example of the process of combinatorial design, as we’re building on pre-existing technological foundations.

A case that illustrates the socio-technical nature of the internet is torrenting. Peer-to-peer file sharing has a romantic element to it that harkens back to the original ethos of the internet, which is non-commercialized/monetized sharing. There is a distinctly democratic, non-hierarchical nature to these networks. The Internet pioneers were using it to share academic files, but nowadays people use it to share a myriad of files. I am intrigued by the decentralized nature of these networks. The “what’s mine is yours, and what’s yours is mine” ethos is very evocative of the digital utopian hippies who were instrumental in developing the Internet.


  1. Ron White, How Computers Work. 9th ed. Excerpts from chapter, “How the Internet Works.” Que Publishing, 2007.
  2. Martin Campbell-Kelly and William Aspray. Computer: A History Of The Information Machine. 3rd ed. Boulder, CO: Westview Press, 2014.
  3. Barbara van Schewick, Internet Architecture and Innovation. Cambridge, MA: The MIT Press, 2012.
    Excerpt from Chap. 2, “Internet Design Principles.”

Thoughts and Reflection on Digitization and Metamedia

Having already taken CCTP 506, I was familiar with the whole notion of the analog-digital divide. We learned about the nature of the continuous-discrete dichotomy, and how fundamental the process of digitization has been to modern technological advancements. From the music we listen to on our digital devices, to the movies and TV shows we watch on our streaming services, the entire modern media landscape is built on the process of digitization.

The concept of metamedia has also been crucial to understanding our modern technological landscape. The ability to remediate and build on existent media has been foundational to the explosion of symbolic artifacts – as expressed through media and content – we’ve been creating and consuming in this era.

But what are the design ramifications of these concepts? Well, this is where tracking the modern history of technological advance is vital. Looking back at Alan Kay’s Dynabook and Ivan Sutherland’s Sketchpad shows us the lineage of design for devices that utilize digitization and metamedia. Our modern platforms and devices (smartphones, iPads, laptops, etc) are all built on the concepts and features of these technologies. The importance of combinatorial design principles is made evident when we juxtapose the older technologies with our newer ones. At the heart of both is the idea that utilizing the process of digitization in the name of metamedia will open up the door to further creative technological advancements.

What I’m interested in is what the next step will be. What will computational design look like in the next few decades? Concepts like ubiquitous computing, perceptual computing, and of course both virtual and augmented reality are gaining steam. It’s important that in the pursuit of design advancements, we understand that what makes our modern devices so transformational is their ability to act as platforms for metamedia, and they have a very rich design history behind them.


  1. Irvine, Martin. Key Design Concepts for Interactive Interfaces and Digital Media.
  2. White, Ron, and Timothy Downs. How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007.
  3. Manovich, Lev. The Language of New Media: “What is New Media?” (excerpt). Cambridge: MIT Press, 2001.


A History of Computational Design

Design is an inherently communal act. One must understand the user, their needs, their desires, and their context in order to fashion an object or experience that is useful to them. The readings this week showed the truth of this concept by taking us down the history of modern computation. The community of users for early computer technology was decidedly esoteric. The government – particularly the military branches – was using this technology for a very specific purpose, and the development of computational technology reflected this. All the specific, accumulated forms of communication and thinking within the military were then ported over to this nascent technology. The same process took place with the business community. Both had certain affordances and constraints that informed the shape and usability of the early computer. Due to its highly segmented user base, the technology was designed in a highly specialized manner for very particular purposes. The barrier to the knowledge needed to operate these early computers was relatively high, which is another conscious design choice made with these early user communities in mind.

One example of this is size. The early computers were behemoth boxes that required a large storage space and a substantial energy source. The military and business communities had those two resources in spades, and as such, there was no real design reason to consider making the computer smaller. It took both the technological advances associated with Moore’s Law and a new design target for the computer to shrink to a size suitable for the general population.

Another example of this would be the computer’s user interface. Initially, the user interface was a heavily textual process. Lines of abstruse code would need to be input in order to access the features of the computer. The advances made by Xerox PARC in developing GUI technology were a crucial component in the broadening of the potential user base for computers. Now, instead of being required to learn a new language to use a computer, the average individual could find their way around using the far more intuitive graphical process. Just as it’s easier to understand a bathroom sign in a foreign country than it is to understand the foreign words written underneath it, the symbols on the screen make it easier to navigate, communicate our intentions, and usefully explore the computer’s features. GUI advancements were an incredible conceptual leap, and a crucial step in bringing about the Microcomputer Revolution. The design target for the computer had shifted from the highly specialized military and business communities to the general population.

The important takeaway for me is that design is not destiny. It requires active decisions by a network of individuals, organizations and communities to produce a product or experience suited to a particular community. These decisions are neither static nor given. So it is incumbent on the designer to understand their role in this process, and take a sense of ownership over the design decisions they choose to make.


1. Lev Manovich, Software Takes Command, pp. 55-239; and Conclusion.

2. J. C. R. Licklider, “Man-Computer Symbiosis” (1960) | “The Computer as Communication Device” (1968)

3. Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

The Importance of Coding

This wasn’t my first go-round with Codecademy’s Python tutorial, so many of the introductory concepts were already known to me. That being said, it was still interesting to see how code and programming languages fit into our existing linguistic conceptual structures. Unless you are a linguist, you likely aren’t constantly aware of exactly how we use language, so the process of learning a new one can make apparent the underlying, tacit processes at work every time we communicate.

The concept of symbols that mean and symbols that do is made very transparent with programming language. We are required to set and define variables to be imbued with meaning in order to be useful in various contexts. We fill the objectively meaningless shell of a word with the numbers, strings, etc., that will become the meaning of this placeholder word. We are then able to make symbols do things using preset symbols and the symbols we defined. The actions these symbols result in when combined are dependent on the meaning packed into them. The program reads them, computes, and returns to us even more symbols packed with meaning. It’s symbols all the way down!
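A minimal Python sketch of this distinction (the variable and function names here are my own illustrative choices, not from the Codecademy tutorial):

```python
# Symbols that mean: variables are placeholder words we fill with meaning.
greeting = "Hello"   # "greeting" now carries a string
count = 3            # "count" now carries a number

# Symbols that do: a function acts on the meaning packed into its inputs.
def repeat(message, times):
    # Join "times" copies of the message with spaces between them.
    return " ".join([message] * times)

# The program reads our symbols, computes, and returns more symbols.
result = repeat(greeting, count)
print(result)  # Hello Hello Hello
```

The same symbols that do (`repeat`) produce different actions depending on the meaning we packed into the symbols that mean (`greeting`, `count`).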

What I find interesting is how even those of us who don’t pay close attention to the programming languages and computations taking place in our technology are utterly beholden to them. Our entire lives are centred around computers, and this gulf between those who know and those who use highlights the importance of good design in this area. It is for this reason I also believe coding literacy should be a fundamental skill learned by all. Websites like Codecademy do a wonderful job of opening up these seemingly esoteric and black-boxed domains to make them more accessible to the general public.

The Importance of Metaphors

Having taken CCTP-711: Semiotics and Cognitive Technology last year, many of the concepts and theories we read for this week were somewhat familiar to me. Thinkers like C.S. Peirce, Claude Shannon, and Warren Weaver have given us a useful foundation to understand information and its transmission in this age of digital media and knowledge. At first, I wondered how these concerns were relevant to this course, but as was said in the Professor Irvine reading, “In the context of electronics for telecommunications and computing, we can describe the question of “information” as a design problem.”

Information design, to me, is how we try to take these decontextualized bits and bytes of digital transmission and turn them into a message that can be meaningfully absorbed by the intended recipient. But in order to do so, we must first have an understanding of communication theory. What is a message? Where does information reside? How do we communicate with each other? It seems to me that the dominant metaphor being used in both electronic and non-electronic conceptualizations of communication and information transmission is that of the packet or container being filled with content and then transported to the recipient via a conduit of some sort. This is the chief metaphor employed when we learn about TCP/IP.
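The container-conduit metaphor can be sketched in a few lines of Python: content is packed into a container with addressing information, then handed to a conduit for delivery. The field names here are illustrative only, not the actual TCP/IP header layout:

```python
# A toy "packet": content sealed in a container with addressing info.
# (Illustrative fields only; real TCP/IP headers are far more complex.)
def make_packet(source, destination, payload):
    return {"source": source, "destination": destination, "payload": payload}

def deliver(packet):
    # The "conduit" transports the container unchanged; the metaphor
    # assumes the payload carries its meaning along with it.
    return packet["payload"]

packet = make_packet("192.0.2.1", "198.51.100.7", "Hello, world")
print(deliver(packet))  # Hello, world
```

The sketch makes the metaphor’s assumption visible: the payload arrives untouched, as if meaning were an object inside the box rather than something reconstructed by the recipient.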

A “digital highway”

But just as with every metaphor, using this conceptual model comes with some limiting consequences. Meaning doesn’t actually reside inside a container that can be transported from one location/mind to another. It is a collectively derived process, more akin to the Cloud, which we all maintain and pull from. Meaning making relies on centuries of cultural symbol building. You don’t send language or meaning from your mind to another in the unilateral manner assumed by the content-container-transport metaphor; it’s a much more communal and socially constructed process engaged in by not just the immediate actors, but the entirety of the society and culture they live in. The network, with all its various nodes and interconnectivity, is a much better metaphor than the transportation highway that we so often use. Understanding this truth is instrumental in becoming a good practitioner of information design.

Linear point-to-point metaphor for communication.

The cloud. Perhaps a better metaphor.


  1. Martin Irvine, Introduction to the Technical Theory of Information
  2. Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.
  3. Ronald E. Day, “The ‘Conduit Metaphor’ and the Nature and Politics of Information Studies.” Journal of the American Society for Information Science 51, no. 9 (2000): 805-811.

Affordances and Constraints of Apps

Apps present an interesting design construct when it comes to affordances. As the gateway to almost all of the digital content we consume, the app is a crucial piece of the digital world. We use them every day, for a myriad of purposes.


The first design element I notice about the app is its shape on the screen. Designing the app as a button icon affords a “pushing” (or tapping) action. We are familiar with buttons, and already know what to do with them when encountering one. The icon will usually consist of an app-relevant picture with the title of the app underneath, similar to a book or album cover. The app is positioned in a grid next to other apps. We are familiar with this concept of a row/grid of various titles from the other marketplaces of audio/visual content, like movie stores, book stores, or music stores. One constraint of this process is the 2D substrate of the device. Unlike books or records or DVDs/VHS tapes, you cannot pick up the app and manipulate it in 3D. There is no back cover of the app, which is where a lot of information was stored in those older forms of media. This constraint can often be played with, or overcome, through the design choices of the particular app. A practically unlimited amount of information can be stored in a digital app and accessed through scrolling, which is a digital affordance we now have.

Opening the app

Once clicked, the app springs to life by covering the entire screen of the device. The phone/tablet turns from a marketplace of potential uses into a tool for one. The software takes over the entire screen of the device, allowing for not only a much larger range of visual features, but also the multi-touch features of the touchscreen. The full screen also affords a larger share of attention to the particular app. One constraint of this app interface is, again, the two-dimensional substrate. You cannot open an app the same way you open a book or album. The entire process takes place on a flat grid.

Design Principles

The designer of the app has to contend with the unique constraints of digital devices while pulling from the affordances we expect from traditional media and devices. The Murray reading did a good job of explaining how the introduction of digital media meant a new school of design was needed to explore this “problem”. Nowadays, we are building on this already established school of digital design affordances, but there is always more to add and refine. One of the central lessons I have gleaned from this course, so far, is that design is an exercise of choice. The individuals and institutions that have guided our design processes so far have made conscious and distinct choices to follow certain affordance paths and move away from certain constraint paths.


1. Murray, Janet. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

2. Kaptelinin, Victor. “Affordances.” The Encyclopedia of Human-Computer Interaction, 2nd Ed., 2013.

3. Norman, Donald. “Affordance, Conventions, and Design.” Interactions 6, no. 3 (May 1999): 38-43.

The Mediology of the Smart Home

For another course I’m currently taking at CCT, we are required to come up with a product that we will shepherd through the development process. I’ve decided to use the Smart Home as my idea, and as such, I’ve been thinking quite a lot about the form and function of interconnected devices. How they “speak” to one another, how we speak to them (literally and figuratively), how we conceptualize “smart” technology, etc. After going through this week’s readings, I can now see the role mediology plays in acquiring a deeper understanding of our socio-technical landscape and the technical artefacts that populate it.

According to the Debray reading, “What is Mediology?”, mediology “…is a question, in the first approximation, of analyzing the ‘higher social functions’ (religion, ideology, art, politics) in their relationship with the means and mediums/environments [milieux] of transmission and transport.” This consideration is at the crux of the Human-Computer Interaction design concepts one must explore when creating a Smart Home. Questions such as “How do we conceptualize the kitchen space?” and “What do we need and/or expect from the bedroom space?” are necessarily dealing with the relationship between “higher social functions”, such as social organization and community, and the environments of their instantiation. In Product Development, we are required to study Maslow’s hierarchy of needs because in order to develop a good and useful product, one needs to have a holistic understanding of both the individual user and the society they live in.

Due to its nature as the centre of so many activities, the space in which a multitude of needs are fulfilled, and as a construct containing a plethora of use-spaces, the home is one of the best examples of where this holistic thinking can, and needs to, be applied. Mediology’s focus on this nexus makes it a very useful companion to the traditional design disciplines and principles of the home. For example, the idea of the kitchen becomes mediological when we connect it to the “higher-level” concept of food politics. The idea of the bedroom becomes mediological when we connect it to the “higher-level” concept of intimacy and privacy. The bathroom becomes mediological when we connect it to the “higher-level” concept of waste disposal politics. The living room becomes mediological when we connect it to the “higher-level” concept of the socio-political ramifications of the burgeoning entertainment industrial complex. A smart home is looking to technologically mediate all of these spaces, and as such, must grapple with these “higher-level” concerns if it is to be designed efficaciously. Only when we approach this issue through a mediological lens can we see the smart home’s true potential for both social and individual change.



  1. Regis Debray, “What is Mediology?” Le Monde Diplomatique, Aug. 1999. Trans. Martin Irvine.
  2. Martin Irvine, “Understanding Sociotechnical Systems with Mediology and Actor Network Theory (with a De-Blackboxing Method).”
  3. Pieter Vermaas, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes. A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems. San Rafael, CA: Morgan & Claypool Publishers, 2011.
  4. Werner Rammert, “Where the Action Is: Distributed Agency Between Humans, Machines, and Programs,” 2008. Social Science Open Access Repository (SSOAR).

The Importance of the Internet as a System of Distributed Cognition

The readings this week provided a handful of extremely useful terms and concepts with which we can explore design theory and practice. While I believe them all to be of interest, I was particularly fascinated by the concept of “distributed cognition” and the implications it has for our technological landscape. The Hollan, Hutchins, and Kirsh reading gave a compelling overview of the concept while applying it to various technological platforms and instantiations. Where I see it most clearly is in the use of the internet.

What I find interesting about distributed cognition is how it expands the boundaries of pertinent cognitive interactions to our broader environment, including the resources and materials around us. This expansion has been exponentially accelerated by the internet, which allows us to access a far broader and deeper store of information and cognitive activities than ever before. Whereas pre-internet, I may have been constrained to interactions and information in my immediate physical environment or to the communication technologies of the era (telephone, books, TV, radio), I can now access real-time information from across the world in a fundamentally interactive way. The global community that has sprung up in the wake of the internet has undoubtedly had an effect on our cognitive processes and worldviews, which is the central idea of the theory of “distributed cognition”.

The networking of computers (and minds) that has been unlocked by this technology has had, and will continue to have, radical implications on how we conceptualize and interact with our socio-technological foundations and cognitive environment. I’m excited to see the Human-Computer Interaction implications of an entire generation growing up with such ubiquitous and immediate coordination of ideas and cognitive activities.


  1. James Hollan, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 174-196.