A great number of digital artifacts have been created in the past two decades, built around various design principles. The success of these creations relied not only on the execution of tasks in the development cycle but equally on the principles underlying each task. In this paper, I examine where the design principles of abstraction, modularization, and layering were applied in some of the biggest technological breakthroughs of the 21st century to date, focusing on the advantages of applying these principles, the challenges faced by the implementors and creators of these technologies, and how these principles might shape future advancements. The examples cover both hardware (tangible) and software (immaterial) applications. The application of the three selected design principles is widespread, indicating their vast importance in today's new media. Despite the challenges of implementation, the advantages far outweigh the difficulties, and the way these principles have been implemented opens huge opportunities for further development. It is not difficult to see the role they could play in the digital artifacts of the coming decades.
We are living in the new media age, where digital artifacts are created fast, much like the doubling of transistor counts in integrated circuits every 18 months described by Moore's Law (Dally, 2009). In the past decades, we have seen innovations and breakthroughs that would have been science fiction in the early 1940s. A stark example is the portable computer we now carry in our pockets, which is more powerful than the first programmable digital computer, the ENIAC, which filled an entire room with 18,000 vacuum tubes in 1946 (Computer History Museum, n.d.). We have come a long way.
Design is something humans have always kept improving, from the stone hunting tools of the Paleolithic era to the metal tools of the Iron Age. Humans create something out of necessity that revolves around function alone, then spend many years improving it, refining the design, and establishing principles that become the foundation of the next innovations, and so on.
The same can be said today, except that now there are intangible principles that transcend functionality and separate a merely useful design from a top-notch one. Ask any designer or programmer to create something as mundane as a simple snake game, and you'll see more than just the flowchart or list of functionalities you would probably have gotten from a programmer two decades ago. Things have become more abstract, modular, and layered, and there is a strong emphasis on good design: not just a design built around a single function, but one that eclipses functionality. We go beyond mere binary code, especially in complex systems.
This paper examines some of the digital artifacts born in the past two decades through the lens of abstraction, modularization, and layering, along with the advantages and disadvantages of their implementation and their role in the rapid advancement of technology.
III. Data Abstraction
“Abstraction (in computer science) is the gathering of the general characteristics we need and the filtering out of the details and characteristics that we do not need” (BBC Bitesize, n.d.). The premise is stripping away the specifics of certain elements in a system and using general terms instead. To borrow Janet Murray's example from her book, the single abstraction “fruit” can be used to describe apples, bananas, or grapes (2012). It seems like a very nonconcrete concept, which begs the question: what is the practical application of abstraction? What role does it play in new media?
It can be difficult to grasp the purpose of abstraction because of how intangible it is. Despite that, this principle is ever present in new media, both technically and conceptually. To begin with, abstraction is used to create models, which are then used to develop algorithms that achieve a goal or solve a problem. Technically, abstraction is one of the main concepts of Object-Oriented Programming (OOP), the paradigm of Java, the language used to develop the majority of Android applications in the past decade (Javatpoint, n.d.). There are literally classes that are declared abstract to hide the complexity of an implementation and simplify an algorithm. As a simple example, suppose an abstract class named Animal has a function called makeSound() (notice that, true to the definition, these are general terms and functions). When a specific class named Dog implements makeSound(), it produces the sound of a barking dog, while a class named Cat implementing makeSound() produces the sound of a meowing cat. The programmer who uses either class need not worry about how makeSound() is implemented, only that it can be invoked whenever needed. The concept is the same even for complex systems.
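The Animal example above can be sketched in a few lines of Java. This is a minimal illustration: the class names come from the text, while the returned sound strings are placeholders I have chosen for demonstration.

```java
// The abstract class exposes only the general operation; how each
// animal actually makes its sound is hidden from the caller.
abstract class Animal {
    abstract String makeSound();
}

class Dog extends Animal {
    @Override
    String makeSound() { return "Woof"; }  // a barking dog
}

class Cat extends Animal {
    @Override
    String makeSound() { return "Meow"; }  // a meowing cat
}

public class AbstractionDemo {
    public static void main(String[] args) {
        // The caller works only with the general type Animal and
        // never needs to know which implementation it is holding.
        Animal[] animals = { new Dog(), new Cat() };
        for (Animal a : animals) {
            System.out.println(a.makeSound());
        }
    }
}
```

The caller's loop is written entirely against the abstraction `Animal`; adding a third animal would require no change to it.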
Abstraction is embedded in some of the programming languages of today, as mentioned, especially in the programming language used to develop Android applications. That means that we probably encounter the very technical meaning of abstraction every time we use our smartphones. But abstraction as a concept doesn’t stop there. Let’s take for example, Facebook (Figure 1).
On the homepage, you see an option to post an update on your timeline. When you click the “Post” button, the update appears on your profile and in your friends' news feeds. We don't see what happens to the data; we don't know how it is processed and inserted into the database (and we don't really have to know), nor how complex the system is that makes your update appear on your friends' feeds. We just know that a click of the Post button does it. The button is therefore an abstraction of all the processes that happen in the back end.
Let's take another example: a chess game mobile application. When you play against the computer, you first select a skill level to play against and then choose whether you'll play white or black. Once the game starts, note that every countermove the computer makes is based on the current state of the board and the skill level you selected at the beginning. The skill level determines how many alternatives the computer will anticipate before making a move, or how far ahead it will look into the game before deciding its next move. The computer is an abstraction of all the rules and the strategy that the chess algorithm deems best. It doesn't care what processor the smartphone is running, how much memory is available to it, or how input/output is captured by the application. You also don't see the calculations it makes to counter your move; it just does so, and the only representation of it is the countermove it makes after you move on the chess board. The complexity is hidden from the users.
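The look-ahead behavior described above can be sketched with a classic depth-limited minimax search. This is an illustration, not a real chess engine: the game tree is a toy perfect binary tree with made-up leaf scores, and the `depth` parameter plays the role of the selected skill level.

```java
public class SkillLevelDemo {
    // Depth-limited minimax over a toy game tree.  Leaf scores sit in
    // an array; because the tree is a perfect binary tree, the children
    // of node i at one level are nodes 2*i and 2*i+1 at the next level.
    static int minimax(int[] leaves, int node, int depth, boolean maximizing) {
        if (depth == 0) {
            return leaves[node];  // look-ahead exhausted: evaluate position
        }
        int left  = minimax(leaves, 2 * node,     depth - 1, !maximizing);
        int right = minimax(leaves, 2 * node + 1, depth - 1, !maximizing);
        return maximizing ? Math.max(left, right) : Math.min(left, right);
    }

    public static void main(String[] args) {
        // Hypothetical scores of the four positions two plies ahead.
        int[] leaves = {3, 5, 2, 9};
        // "Skill level" 2: look two moves ahead -- ours, then the
        // opponent's best reply.  max(min(3,5), min(2,9)) = 3.
        System.out.println(minimax(leaves, 0, 2, true));  // prints 3
    }
}
```

A higher skill level simply means a larger `depth`, i.e. more alternatives examined before the countermove is chosen, exactly as the paragraph above describes.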
It is also important to note that abstraction is not limited to mobile applications; it is also used in library products (e.g., algorithm providers, middleware libraries, communication libraries), where the layering design concept is applied as well. It is generally good practice because it simplifies a design, as long as it is not overdone.
Abstraction is a powerful design concept that helps designers of all kinds, in every field, focus on the fundamentals first and take care of the minute details later. This approach isolates the complexity of a design. In the succeeding sections, we will see how abstraction can coexist with layering and modularization in a single system.
IV. Modularization
Modularization is the process of dividing a system into multiple modules, each of which works independently (Langlois, 2002). Unlike abstraction, it can be very tactile when it comes to new media's hardware; however, there is also a type of modularization that is immaterial. In this paper, let us examine two of the most widely used modularized systems of today, the smartphone and the system unit of a personal computer (Figure 2), and classify them as physical or immaterial modularization accordingly.
Personal computers became mainstream in the 1990s, but it is in the past two decades that buying separate parts commercially became commonplace. Nowadays, you can assemble your own system unit by buying each major part – an independent module – from a different manufacturer (e.g., an ASUS motherboard, Ballistix RAM, a Seagate SSD, an Intel processor) and end up with a customized system unit that can be more powerful than a ready-assembled one. This is top-level modularization, and it is obviously of the physical type. But what about at a deeper level: will we also find independent inner modules seamlessly working together to form a top-level module? In Figure 2 below, we can easily find the answer, with the motherboard as an example.
A computer's motherboard is made up of various components that, although not as easily replaced or assembled as in the system unit example earlier, are independent from each other from an operating-system (OS) perspective. A computer must have an operating system (e.g., Windows, Linux, macOS) for it to know what to do and for humans to be able to tell it what to do. In Figure 3, the four major concepts of an operating system are itemized. The Intel processor in the customized system unit from the earlier example resides on the motherboard, and so does the Ballistix RAM. They are physically wired together in their slots on the motherboard, but in the operating system they are modularized such that the processor falls under the process management “module” while the RAM is managed by the memory management module. This is an example of immaterial modularization.
Another example of immaterial modularization that is heavily used nowadays is modularization in software development, called modular programming. It is also worth noting that with modularization, those who use a certain module are blind to its complexity – abstraction at work. This software design technique dates back to the 1960s but has endured ever since and is applicable in all major languages developed in the past three decades. The calculator app on your smartphone, for example, is probably modularized into “addition”, “subtraction”, “multiplication”, and “division” modules, each called according to the user's input. And as already shown in Figure 2, a smartphone is not only immaterially modularized but physically as well.
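The calculator example can be sketched as follows. This is a hedged illustration of modular programming, not the actual design of any real calculator app: each operation lives in its own small module (here, a method), and a dispatcher selects one based on the operator the user typed.

```java
public class Calculator {
    // Each arithmetic operation is an independent, reusable module.
    static double add(double a, double b)      { return a + b; }
    static double subtract(double a, double b) { return a - b; }
    static double multiply(double a, double b) { return a * b; }
    static double divide(double a, double b) {
        // Guarding against division by zero is an assumption I added.
        if (b == 0) throw new ArithmeticException("division by zero");
        return a / b;
    }

    // Dispatch to the right module according to the user's input.
    static double calculate(double a, char op, double b) {
        switch (op) {
            case '+': return add(a, b);
            case '-': return subtract(a, b);
            case '*': return multiply(a, b);
            case '/': return divide(a, b);
            default:  throw new IllegalArgumentException("unknown operator: " + op);
        }
    }

    public static void main(String[] args) {
        System.out.println(calculate(6, '*', 7));  // prints 42.0
    }
}
```

Note that a caller of `calculate` never sees which module ran: modularization and abstraction coexist, as the paragraph above points out.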
Much like a personal computer's system unit, a smartphone has a motherboard, a power source (the battery), a camera component, speakers, and so on. The components are independent from each other, they are replaceable, and they are not necessarily manufactured by a single company. In the case of the iPhone 11, Broadcom supplies the Wi-Fi and Bluetooth chips (Kifleswing, 2020) while O-Film supplies the camera module (Neely, 2020). In the case of the Samsung S9+, Sony supplies the camera module while Qualcomm supplies the transceiver (iFixit, n.d.). Each part/module works together with the others, and all are managed by the smartphone's operating system (e.g., Android, iOS), just as in a personal computer, regardless of the brand of the phone – both have modularized components.
Modularization has several particularly important advantages. For physical modularization, it is easier to manage and debug independent modules than a huge system of wires and connections. In the personal computer example, if your computer is not booting up, troubleshooting usually starts with checking whether the power supply module is working: if you hear the fan and the LED power indicator turns green, it is. The next step might be to check whether the RAM is working or the SSD is failing; sometimes a technician swaps these components with a spare RAM module or SSD and tries booting up again, and so on until the problematic component is identified. The rest of the components are left as they are, the errant component is replaced, and the problem is fixed without having to buy a new system unit. Physical modularization also allows more flexibility and options, as is again the case with a personal computer.

As for immaterial modularization, on top of the ease of debugging and management, it elevates reusability and readability. With the OS example, if your Wi-Fi stops working, you can shut down and restart the Wi-Fi service from the task manager (if you know which service it is), or turn the Wi-Fi adapter off and on from the settings, without having to restart the entire computer; restarting is usually the last resort if everything else fails. It is an extremely useful design principle in these kinds of scenarios. In the simpler calculator example, the advantage leans more toward reusability of code: only four modules are invoked repeatedly to calculate each operation depending on the user's input. More reused code means a smaller source base that is more manageable, clearer, and easier for another team member to understand in the case of more complex and intricate systems.
With all the above advantages, there is only one disadvantage (or challenge) that I encountered while researching: the modules must collaborate with each other very well and follow a set of agreed-upon standards for the whole to work. In the system unit example, with various vendors of RAM and SSDs, each vendor must follow a standard so that its product will fit the slots of a motherboard from MSI, ASUS, or any other major motherboard supplier, and so that it works regardless of the operating system. Solid state storage standards, for example, are published by the Storage Networking Industry Association (SNIA, n.d.). Meanwhile, manufacturers of motherboards and developers of operating systems must also adhere to these standards to work seamlessly with components and modules from various other vendors. The same goes for smartphone components and operating systems.
A similarly close collaboration is needed to meet the challenge of modularization in software development. Developers of modules that work together must know the parameters and return types of the modules they interact with in order to avoid type errors. It is also good programming practice to provide comments (called function headers) describing what a certain module does, so that other programmers who encounter it know what its function is.
Because of the massive advantages of modularization, it will most likely prevail in the decades to come. But what is the future of this design principle in new media? MSI's modular motherboard concept and LG's modular smartphone might give us an idea of what that future might hold (Figure 4).
In 2016, MSI announced, as an April Fools' joke, a fully modularized motherboard it called The One. Four years on, it has become obvious that MSI has no plans to actually produce modular motherboards, but the joke did introduce us to the concept and to how incredible it could be. In theory it supports all types of RAM, all Intel and AMD processors, all storage devices, and so on (MSI, 2016) – we can only imagine the endless potential of this theoretical product design. It is basically every computer enthusiast's dream. Unlike the modularized motherboard, which we have yet to see come to fruition, modular smartphones are further along (although they have not been very successful either). In the same year as MSI's announcement, Google demoed its modular smartphone under Project Ara but then announced the suspension of the project later that year, stating that the Ara smartphone would not be commercially available (Statt, 2016). Little is known about the reasoning behind the suspension, but we can guess that it probably has something to do with the collaboration challenges of modularization (or perhaps production cost). A modular smartphone that did go to market, however, is the LG G5 with its LG CAM Plus module as an accessory. The options are very limited at the moment, and even with 1.6 million units of the G5 sold in the first month of release (Android Authority, 2016), today's smartphone market has yet to embrace modular smartphones. Perhaps it will in the next few decades.
V. Layering
Layering “involves organizing information into related groupings and then presenting or making available only certain groupings at any one time. Layering is primarily used to manage complexity, but can also be used to reinforce relationships in information.” (Oreilly, n.d.)
The most obvious form of layering in new media is the internet – interconnected computer networks. Table 1 below shows the conceptual models that internet communication is built on. Although the modern internet is based on the simpler TCP/IP model, the OSI model helps visualize how network communication operates.
|Layer|OSI Model|TCP/IP Model|
|7|Application|Application|
|6|Presentation|Application|
|5|Session|Application|
|4|Transport|Transport|
|3|Network|Internet|
|2|Data link|Network Access|
|1|Physical|Network Access|

Table 1. The OSI Model vs the TCP/IP Model
In summary, the network access layer combines layers 1 and 2 of the OSI model (Science Direct, 2017). It pertains to the data bits and how they are carried around the network (e.g., over fiber optics or wireless links), and it also addresses how the bits are converted into protocol units (e.g., frames addressed by MAC addresses). The internet layer, on the other hand, is where IP addresses live and where data is routed for transmission, while the transport layer connects the internet layer and the application layer. Lastly, the application layer specifies the protocols and interfaces used by hosts in a network; it focuses on end-user services and is, in itself, an abstraction. It is easy to confuse what one layer does, but it is easy to remember that there is a layer for each task or role. You might forget that the transport layer is what connects the internet and application layers, but you know there is such a layer that makes the connection. This is why layering is very effective in network communications.
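This division of labor is visible in ordinary application code. The sketch below (an illustration of the layering principle, not taken from the text) sends one line of text through a loopback TCP connection: the program touches only the application layer, reading and writing lines on a `Socket`, while the operating system's network stack supplies the transport, internet, and network access layers underneath.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LayeringDemo {
    // Send one line over a loopback TCP connection and return the echo.
    static String echoOnce(String message) throws Exception {
        // Port 0 asks the OS for any free port on the loopback interface.
        ServerSocket server =
                new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Thread echoer = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println(in.readLine());  // echo one line back
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        echoer.start();
        // Client side: no IP routing, TCP retransmission, or frame
        // handling appears here -- the lower layers are abstracted away.
        try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                        server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            out.println(message);
            return in.readLine();
        } finally {
            echoer.join();
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello"));  // prints "hello"
    }
}
```

Everything below the `Socket` abstraction (segments, packets, frames) is handled by the layers in Table 1 without a single line of application code.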
Within the application layer, where the presentation layer of the OSI model also resides, are further layers (the internet is built of layers upon layers that communicate with each other through protocols and standards). Figure 6 below shows these inner layers.
Layering in current network communications also plays an important role in information security. Although it does not guarantee a completely safe network, this design principle enhances the integrity of a system. In software security design there is a “defense in depth” design principle: “a concept of layering resource access authorization verification in a system and reduces the chance of a successful attack” (Merritt, 2013). This approach requires unauthorized users to bypass each layer's authorization to gain access to a certain resource.
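Defense in depth can be sketched as a chain of independent checks that a request must pass in full. This is a hedged illustration under assumed names: the `Request` class and the three example layers (identity, authentication, authorization) are hypothetical, not drawn from any real security framework.

```java
import java.util.List;
import java.util.function.Predicate;

public class DefenseInDepthDemo {
    // An illustrative access request; fields are hypothetical.
    static class Request {
        final String user, token, role;
        Request(String user, String token, String role) {
            this.user = user; this.token = token; this.role = role;
        }
    }

    // Each entry is an independent layer of verification; an attacker
    // must defeat every layer, not just one, to reach the resource.
    static final List<Predicate<Request>> LAYERS = List.of(
            r -> r.user != null && !r.user.isEmpty(),  // layer 1: identity present
            r -> "valid-token".equals(r.token),        // layer 2: authentication
            r -> "admin".equals(r.role)                // layer 3: authorization
    );

    static boolean accessGranted(Request r) {
        // Access is granted only if EVERY layer's check passes.
        return LAYERS.stream().allMatch(layer -> layer.test(r));
    }

    public static void main(String[] args) {
        System.out.println(accessGranted(
                new Request("alice", "valid-token", "admin")));   // true
        System.out.println(accessGranted(
                new Request("alice", "stolen-token", "admin")));  // false
    }
}
```

Bypassing one layer (say, a stolen token) is not enough; the remaining layers still deny the request, which is exactly the reduced attack chance the quoted definition describes.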
The internet is considered one of the most important inventions of the century, and in the future, rest assured, its layers will endure, if not improve further. This design principle will continue to be part of the internet in the years to come.
Examining the latest digital artifacts of the past two decades makes it very clear that the design principles of abstraction, modularization, and layering play a huge role in their success. With the advantages far outweighing the disadvantages (or challenges), inventors, designers, and programmers have been able to create advanced technologies with great potential for growth. Even better, these design principles are not isolated from one another: an abstraction can also involve layering and modularization, a layered design can coexist with abstraction and modularization in a single system, and a modularized design can also employ layering and abstraction. Examining these principles is crucial to understanding what role they will play in the breakthroughs of the decades to come and how they have opened huge opportunities for improving current systems and creating new ones for the advancement of technology.