Building The Cloud: Principles in Information Technology Design


Watch the tech space for long enough and you will notice a recurring pattern of new and trendy technologies being touted as the “Next Big Thing”. A quick glance at the current state of our tech discourse reveals a bevy of tools and technological phenomena that promise to be socially and economically transformational. Virtual/Augmented Reality, Artificial Intelligence, Big Data, the Internet of Things…all of these technologies have taken up prime position in our tech consciousness, but it is the advancements in Cloud Computing that I find most interesting.

The transition from our traditional Information Technology infrastructure to the Cloud environment has been one that encompasses many of the principles central to the digital economy. As we move more and more of our social lives and economic activity to the Internet, design concepts such as scalability, distribution, resource pooling, virtualization, and rapid elasticity will form the bedrock foundation for how we create, move and compute in this new digital environment. Yet while we acknowledge their ubiquity and importance in modern times, it’s important to understand the history of these design principles and how they have taken shape during the evolution of Information Technology.

In this paper, I intend to trace that history – from the era of tabulating machines and large mainframes to the modern cloud era – and show how the Cloud is the furthest advancement of the design principles at the foundation of Information Technology, crucial in unlocking the business potential of this new digital economy.


A Quick History of the Early IT Environment: From Hollerith to IBM System/360

Merriam-Webster defines Information Technology as “the technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data” (“Information Technology”, Merriam-Webster). In the early portion of the 20th Century, this meant using tabulating machines, which were both large and relatively limited in their functionality. These machines, designed by Herman Hollerith, were electromechanical in operation and used punch cards symbolically representing data to execute large computations. The first major application of these tabulating machines was the 1890 US Census, which was completed in a much faster and more cost-effective manner than the previous census. It quickly became apparent that these tabulating machines would be computationally useful in business contexts for large firms like the railroad companies that dominated the era, particularly for tasks such as accounting, payroll, billing and tracking inventory (Zittrain, 11-12).

Figure 1: Herman Hollerith’s Electric Tabulating Machine.

The high level of functional knowledge needed to operate these machines meant that the firms using them preferred to rent them from Hollerith instead of purchasing them outright. That way, there was a direct vendor they could appeal to if something went wrong.

Decades of progress in computational hardware and theory led to the onset of the electronic computer era of the mid-20th Century, in which the tabulating machine gave way to the mainframe computer as the dominant player in the Information Technology arena. In the early years, these computers were behemoths that would take up entire rooms, but compared to the tabulating machines that came before them, mainframe computers had much more functionality and versatility, and could process amounts of data previously unheard of.

Figure 2: IBM System/360. Released in 1965, it went on to become one of the most commercially successful and influential products in the history of computing. (NBC)

Extremely expensive and requiring customization, these computers were initially used only by government agencies and large corporations, but would go on to serve as the backbone of the 20th Century Information Technology landscape. They were designed to process the ever-increasing deluge of data being produced by the modern economy, which included such novel concerns as real-time financial transactions, logistical computing, enterprise resource planning, speed ordering from wholesalers, and instant reservation services (“IBM Mainframes”, IBM). IBM was the principal player in the design and construction of the mainframes of this era, but it faced competition from a group of companies known by the acronym BUNCH – Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell. Together, these companies were responsible for much of the innovation surrounding the new machines that served as the engine of 20th Century business.

With the 1980s came the Personal Computer revolution, which took computers – once the domain of business and government – and put them in the hands of the general population. Information Technology models would have to adjust to this proliferation of computational literacy. The ability to link computers together was an idea pushed forward by this new computational environment, and the Internet – in development since the 1960s – was brought to the masses (Campbell-Kelly & Aspray, 275). Networking – a concept previously used only by higher education institutions and select organizations – was now a major possibility for the enterprise computing community. Further increases in data production led to larger, more powerful mainframes and servers being built, but a new Information Technology model would be needed to truly wrestle with the features and components of this new digital environment.

Figure 3: Digital Equipment Corporation’s MicroVAX 3600, unveiled in 1987. We can see the computer shrinking and becoming more ubiquitous. (Computer Weekly)

Design Principles of the Early IT Environment: RAS and More

In terms of principles used for the design and building of mainframe computers, the acronym “RAS” – which stood for Reliability, Availability, and Serviceability – was the accepted industry guiding philosophy. According to IBM – the industry leader in mainframe design – “When we say that a particular computer system ‘exhibits RAS characteristics’ we mean that its design places a high priority on the system remaining in service at all times” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM).

According to IBM, reliability is when “The system’s hardware components have extensive self-checking and self-recovery capabilities. The system’s software reliability is a result of extensive testing and the ability to make quick updates for detected problems” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). One can imagine how this would be crucial when dealing with systems highly sensitive to downtime. Mainframes in early enterprise environments were not as reliably built as computers are today, and calculations took orders of magnitude longer. This, along with their steep purchase price, meant that if they were not reliable in their operation, the vendor was likely to lose a customer. This was early in the adoption stage, so discomfort with the technology would only be exacerbated by an unreliable machine.

Availability was defined as the ability for “the system [to] recover from a failed component without impacting the rest of the running system. This term applies to hardware recovery (the automatic replacing of failed elements with spares) and software recovery (the layers of error recovery that are provided by the operating system)” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). While the automated nature of this process would only arrive in later mainframe models, the basic rationale behind this design principle is to account for system failure. These machines were incredibly complex and multifaceted, and a single failed component deactivating the entire machine was exactly the outcome to be designed against. Ideally, the machine would be able to keep running while replacements and/or fixes were made, giving it robustness.

Serviceability was considered to be in effect when “The system can determine why a failure occurred. This capability allows for the replacement of hardware and software elements while impacting as little of the operational system as possible. This term also implies well-defined units of replacement, either hardware or software” (“Mainframe strengths: Reliability, Availability, and Serviceability”, IBM). Good design accounts for the instances in which something goes wrong, and serviceability speaks to that concern. Diagnostic analysis is a large part of any computational maintenance, and being able to identify and fix the problem once discovered without compromising the operation of the entire system would have been a large advantage in the early Information Technology environment.

While RAS served as a central guideline for mainframe design, there were other design concepts beginning to take root in the early Information Technology landscape. Versatility was one such concept that became central as technology progressed. Early mainframe systems had to be custom built for their purpose, such as IBM’s Sabre (Semi-Automatic Business Research Environment), a central reservation system designed and built just for American Airlines (“Sabre, The First Online Reservation System”, IBM). The mainframe industry would soon abandon this customized model for a more modular model of inter-compatible families of mainframe computer systems applicable to many purposes and many business contexts.

Figure 4: Promotional material for the American Airlines-IBM SABRE airline reservation system. (IBM)

As the population grew, so did the amounts and types of data that needed to be crunched by businesses. Instead of computing one single massive problem, the computers of this era needed to be able to compute numerous smaller, simpler transactions and data points. Real-time transaction processing was a feature of mainframe computers that was key to unlocking many of the abilities we now take for granted, such as airline reservations and credit card authorizations. Mainframe designers dealt with this requirement by increasing the number of I/O (Input/Output) channels for connectivity and scalability purposes (“Mainframe Hardware: I/O Connectivity”, IBM).
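The difference between one massive computation and many small transactions can be sketched in modern terms. The Python below is a loose analogy only – worker threads standing in for I/O channels, and an invented `process` function standing in for real booking logic – not a depiction of how a mainframe channel subsystem actually worked:

```python
from concurrent.futures import ThreadPoolExecutor

# A thousand small, independent transactions (hypothetical reservation requests).
transactions = [("seat", i) for i in range(1000)]

def process(txn):
    kind, n = txn
    return f"{kind}-{n}: OK"   # stand-in for real authorization/booking work

# Throughput scales by widening the pool of "channels", not by changing
# the per-transaction logic – the same scaling lever described above.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process, transactions))

print(len(results), results[0])
```

The point of the sketch is that each transaction is small and independent, so capacity grows by adding parallel paths rather than by building a faster single processor.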

A Quick History of the Modern IT Environment: Clients and Clouds

The immediate predecessor to the cloud computing model was the client-server model. According to Ron White,

“In a client/server network, one central computer is the file server. The server contains programs and data files that can be accessed by other computers in the network. Servers are often faster and more powerful than personal computers…Personal computers attached to a server are the clients. Clients run the gamut from fat clients – computers that run most programs from their own hard drives and use a minimum of network services – to inexpensive thin clients that might have no hard drive at all. They run programs and graphics using their own microprocessor, but depend entirely on a server to access programs and store data files. A dumb terminal is a monitor, keyboard, and the bare minimum of hardware needed to connect them to the network. It uses the server’s microprocessor to perform all functions.” (White, 318)

The key design feature of this model is that multiple client computers are networked and can connect to a central server, to which multiple computational functions and resources are offloaded. Whether it’s a file server as detailed above, a print server giving everyone on the same network shared access to a printer, or a communications server providing shared access to an internal email system and Internet services, the client is able to collaborate with other clients in its network.
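White’s description maps almost directly onto code. Below is a minimal, hypothetical file-server sketch in Python – the file names and contents are invented, and a real file server would be far more elaborate – showing a thin client that stores nothing locally and asks the central server for everything:

```python
import socket
import threading

# Hypothetical shared files held by the central server.
FILES = {"report.txt": b"Q3 revenue figures", "memo.txt": b"All-hands at noon"}

def serve_one(conn):
    # The server does the storage work; the client just asks by name.
    with conn:
        name = conn.recv(1024).decode()
        conn.sendall(FILES.get(name, b"NOT FOUND"))

server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen()
port = server.getsockname()[1]

def accept_loop():
    while True:
        conn, _ = server.accept()
        threading.Thread(target=serve_one, args=(conn,), daemon=True).start()

threading.Thread(target=accept_loop, daemon=True).start()

def fetch(name):
    """A thin client: no local copy of the data, just a request to the server."""
    with socket.socket() as client:
        client.connect(("127.0.0.1", port))
        client.sendall(name.encode())
        return client.recv(1024)

print(fetch("report.txt"))
```

A thin client like `fetch` owns nothing but the ability to ask; a fat client would cache or compute locally and lean on the server only for shared state.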

Figure 5: Illustration of the Client-Server model. (The Tech Terms Computer Dictionary)

This had massive implications for enterprise environments, consequently creating an entire industry around enterprise Information Technology management. The client-server model would go on to become the dominant Information Technology model of the 1990s and early 2000s, and it was out of this model that cloud computing was born. By taking the server in the client-server model and replacing it with a collection of interconnected servers run and maintained by a cloud hosting company, many design principles were able to evolve and realize their full potential.

Figure 6: Traditional Hosting Model vs Cloud Hosting Model. (Wicky Design)

Large tech companies – such as Amazon, Salesforce, Microsoft, Google and IBM – would build giant warehouses containing 100,000 servers, and then rent out portions of their mammoth server capacity to other companies. These cloud hosting services would be capable of replacing much of the high-cost Information Technology work previously done on-site (Patterson & Hennessy, 7). They were also able to offer a whole new model of services through the cloud. By breaking their offerings down into three distinct layers – infrastructure, platform, and application – these companies could segment the specific Information Technology services being used by modern businesses and offer customizable services tailored to the Information Technology desires and needs of the specific client (Campbell-Kelly & Aspray, 300).

Design Principles of the Modern IT Environment: Old Ideas, New Technology

Although the technological manifestations are novel, many of the design principles that went into architecting the cloud computing Information Technology model are borrowed from older IT models.

Distribution and Redundancy

As we saw in mainframe RAS design, a high priority was placed on the system being on and available at all times. This design principle is taken to the next level and fully actualized by cloud computing. The nature of our modern socioeconomic environment requires a system ready to process activity 24 hours a day, 7 days a week. One of the main promises of the cloud is to always be available, which is accomplished through redundancy and distribution. Whereas the early mainframe model was susceptible to shutdowns if the mainframe computer malfunctioned, the client-server model improved on this by distributing the computational load across multiple servers that would jointly handle requests. If one server went down, the other servers would be able to pick up the slack. Although this model was an improvement, it still left the network susceptible to breakdown if the server site was compromised. Cloud computing addresses this concern by further decentralizing the computational hardware. Instead of on-site servers, cloud servers are housed in server farms across the country and globe, accessed via the Internet. This mitigates the risk of regional difficulties, and allows for a much more distributed computational network.
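The failover behavior described above can be sketched in a few lines of Python. The region names and the dispatch loop are hypothetical – real cloud load balancers use health checks and DNS-level routing – but the principle is the same: a request fails only if every redundant copy is down:

```python
class Server:
    """A stand-in for one replica in a geographically distributed farm."""
    def __init__(self, name):
        self.name, self.up = name, True

    def handle(self, request):
        if not self.up:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def dispatch(request, servers):
    """Try each replica in turn; fail only when all of them are unreachable."""
    for s in servers:
        try:
            return s.handle(request)
        except ConnectionError:
            continue   # failover: move on to the next redundant server
    raise RuntimeError("all replicas down")

farm = [Server("us-east"), Server("eu-west"), Server("ap-south")]
farm[0].up = False                      # simulate a regional outage
print(dispatch("GET /index", farm))     # another region picks up the slack
```

The outage of one region is invisible to the caller, which is precisely the availability promise the mainframe era could only approximate with on-site spares.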

Figure 7: Cloud servers in various locations across the globe, accessible via the internet. (My Techlogy)

Rapid Elasticity

The ability to scale your Information Technology architecture up and down as, and when, you need it is a key business feature of the modern economy. According to Rountree and Castillo, “The rapid elasticity feature of cloud implementations is what enables them to be able to handle the “burst” capacity needed by many of their users. Burst capacity is an increased capacity that is needed for only a short period of time” (Rountree & Castillo, 5). An example of this would be a seasonal business. You would need larger processing capabilities in season than out of season, and in a non-cloud Information Technology model you would be required to pay for computing resources that can reach that maximum threshold but would sit underutilized when your business is out of season. The customizability of the cloud allows for a “pay for what you use” model of resource allocation, much like utilities such as water or electricity. This elasticity is crucial to survival in the modern economic environment, allowing businesses to grow at scales of efficiency previously unreachable.
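The seasonal-business example can be worked through with hypothetical numbers – an invented hourly rate and a made-up monthly demand curve – to show why paying only for used capacity matters:

```python
RATE = 0.10            # hypothetical price per server-hour, in dollars
HOURS_PER_MONTH = 730

# Servers needed each month; a "burst" season in the middle of the year.
demand = [2, 2, 3, 4, 8, 20, 20, 18, 6, 3, 2, 2]

# On-premises: provision for the peak, then pay for it all year long.
fixed_cost = max(demand) * RATE * HOURS_PER_MONTH * 12

# Cloud: rapid elasticity means paying only for what each month actually used.
elastic_cost = sum(m * RATE * HOURS_PER_MONTH for m in demand)

print(f"fixed: ${fixed_cost:,.0f}  elastic: ${elastic_cost:,.0f}")
```

With these invented figures the peak-provisioned bill is nearly three times the metered one, and the gap widens the spikier the demand curve becomes.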

Figure 8: Only pay for what you use. (London Metropolitan University)

Virtualization and Resource Pooling

As is the case with all technology, the moment of mass adoption is rarely the moment of invention. Technologies usually have long gestation periods, waiting until the stars align to cross the threshold of commercial implementation. Virtualization technology was no different. All the way back in 1972, IBM released its Virtual Machine Facility/370 operating system, designed to be used on its System/370 mainframes. After going through ebbs and flows of relevance, that original virtualization system now serves as the foundation of IBM’s current z/VM virtualization system. According to Rountree and Castillo, “With virtualization, you are able to host multiple virtual systems on one physical system. This has cut down implementation costs. You don’t need to have separate physical systems for each customer. In addition, virtualization allows for resource pooling and increased utilization of a physical system” (Rountree & Castillo, 12). Without virtualization, the cloud would lose much of its ability to deliver cross-OS services to customers, and would lose the ability to function as a SaaS, PaaS, or IaaS platform.
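Resource pooling is, at bottom, a packing problem: many customers’ virtual machines share fewer physical hosts. The first-fit placement below is a deliberately simple sketch – the capacity figure is invented, and real hypervisor schedulers weigh memory, network, and failure domains, not just one number:

```python
HOST_CAPACITY = 16   # hypothetical vCPUs per physical server

def place(vms, capacity=HOST_CAPACITY):
    """First-fit placement: put each VM on the first host with room for it."""
    hosts = []                         # vCPUs already committed per host
    for vm in vms:
        for i, used in enumerate(hosts):
            if used + vm <= capacity:
                hosts[i] += vm
                break
        else:
            hosts.append(vm)           # no room anywhere: power on another host
    return hosts

# Eight customer VMs; without pooling, that could mean eight dedicated machines.
vms = [4, 8, 2, 6, 4, 2, 8, 2]
print(place(vms))   # a handful of shared hosts instead of eight dedicated ones
```

Here eight workloads fit on three shared hosts, which is exactly the “increased utilization of a physical system” Rountree and Castillo describe.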

Figure 9: Platform/Application/Infrastructure as a Service are all available through cloud virtualization. (Business News Daily)

Ease of Maintenance

Maintaining your computational infrastructure is a major factor in the design of your Information Technology environment. Hardware and software upgrades are a major resource sink for any organization with a robust Information Technology footprint, and can slow down the productivity and efficiency of the system. This was the case with the pre-cloud mainframe and client-server models, where any maintenance had to be completed on-site, as that is where the hardware was stored. Now that the hardware is hosted by the cloud service provider – be it Amazon, Microsoft, or IBM – the provider handles the updates and maintenance. As Rountree and Castillo put it, “You don’t have to worry about spending time trying to manage multiple servers and multitudes of disparate client systems. You don’t have to worry about the downtime caused by maintenance windows. There will be few instances where administrators will have to come into the office after hours to make system changes. Also, having to maintain maintenance and support agreements with multiple vendors can be very costly. In a cloud environment, you only have to maintain an agreement with the service provider” (Rountree & Castillo, 10).

Figure 10: Cloud vendors handle maintenance, leaving the client to focus on more important matters. (Elucidat Blog)

Modularity & Combinatorial Design 

Just as the mainframe model of the mid-20th Century transitioned from highly specialized, custom-built machines to general-purpose machines, the cloud model is able to serve a multitude of customers due to its inherently modular design. According to Barbara van Schewick, “The goal of modularity is to create architectures whose components can be designed independently but still work together” (van Schewick, 38). The application-layer offerings of cloud computing vendors allow individuals and companies to create apps, websites, and other digital offerings by combining modules of pre-existing tools and building blocks (“How Cloud Computing Is Changing the Software Stack”, Upwork). A cloud computing data warehouse is a lesson in modularity: the servers that fill it are standardized pieces of hardware that can be slotted in and out to shift capabilities and resources as needed. By following this modular design principle, the other cloud design principles – such as scalability and ease of maintenance – are augmented.
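Van Schewick’s definition can be illustrated in miniature with invented building blocks. Each “module” below is hypothetical, but all three share one interface, so they can be designed independently and still work together:

```python
# Three independently designed modules sharing one interface:
# each takes a request dict and returns an enriched result.
def auth(req):
    return {**req, "user": "alice"}          # hypothetical sign-in module

def storage(req):
    return {**req, "data": f"records for {req['user']}"}   # data-layer module

def render(req):
    return f"<page>{req['data']}</page>"     # presentation module

def compose(*modules):
    """Chain modules into one application; swapping one changes nothing else."""
    def app(request):
        for m in modules:
            request = m(request)
        return request
    return app

site = compose(auth, storage, render)
print(site({"path": "/home"}))
```

Because the modules only agree on an interface, any one of them could be replaced – say, a different sign-in provider – without the others noticing, which is the combinatorial freedom the cloud’s application layer sells.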

Figure 11: Microsoft Cloud Server Farm. (Data Center Frontier)



In conclusion, while the evolution of Information Technology has been a long and storied one, the design principles undergirding the progress have been somewhat consistent. The promise of enterprise computing has always been to expand and improve upon the capabilities of the computational landscape. From the tabulating machines of the early-20th Century to the 21st Century cloud computing services and platforms, we see certain features and design values hold throughout the various technological iterations. In what form the next advancement in the evolution of Information Technology will appear is uncertain, but having a grounding in these basic design principles will provide one with the necessary toolkit to understand and impact this field.



Zittrain, Jonathan. The Future of the Internet–And How To Stop It. Yale University Press, 2009.

Campbell-Kelly, Martin, et al. Computer: a History of the Information Machine. Westview Press, 2016.

White, Ron. How Computers Work. Que Publishing, 2008.

Rountree, Derrick, and Ileana Castrillo. The Basics of Cloud Computing: Understanding The Fundamentals of Cloud Computing in Theory and Practice. Syngress, 2014.

Patterson, David A., and John L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, 2014.

“Information Technology.” Merriam-Webster, n.d. Web. 15 Dec. 2017.

“IBM Mainframes.” IBM Archives.

“Mainframe Strengths: Reliability, Availability, and Serviceability.” IBM Knowledge Center.

“Sabre: The First Online Reservation System.” IBM100 – Icons of Progress.

“Mainframe Hardware: I/O Connectivity.” IBM Knowledge Center.

“Types of Network Server.” Higher National Computing: E-Learning Materials.

Mahoney, Michael S. “The Histories of Computing(S).” Interdisciplinary Science Reviews, vol. 30, no. 2, June 2005, pp. 119-135. EBSCOhost, doi:10.1179/030801805X25927.

Denning, Peter J., and Craig H. Martell. Great Principles of Computing. MIT Press, 2015.

Schewick, Barbara Van. Internet Architecture and Innovation. MIT Press, 2012.

Wodehouse, Carey. “How Cloud Computing Is Changing the Software Stack.” Upwork, 25 Nov. 2017.