Author Archives: Xiaomeng Xu

Abstraction, Modularization, and Layering in Digital Artifacts in the Past Two Decades: Advantages, Disadvantages, and Future Developments

 Xiaomeng Xu

I. Abstract

Countless digital artifacts have been created in the past two decades, built around various design principles. The success of these creations relied not only on the execution of tasks in the development cycle but equally on the principles underlying each task. In this paper, I examine how the design principles of abstraction, modularization, and layering were applied in some of the biggest technological breakthroughs of the 21st century to date, focusing on the advantages of applying these principles, the challenges faced by their implementors/creators, and how these principles might shape future advancements. The examples cover both hardware (tangible) and software (immaterial) applications. The three selected design principles are applied widely, indicating their vast importance in today's new media. Despite the challenges of implementation, the advantages far outweigh the difficulties, and the way these principles have been implemented opens a huge opportunity for further development. It is not difficult to see the role they could play in the digital artifacts of the coming decades.

II. Introduction

We are living in the new media age, where digital artifacts are created at a pace reminiscent of Moore's Law, which describes the doubling of transistors in integrated circuits roughly every 18 months (Dally, 2009). In the past decades, we have seen innovations and breakthroughs that could only have been science fiction in the early 1940s. A stark example is the portable computer we now carry in our pockets, more powerful than the ENIAC, the pioneering programmable electronic digital computer that filled an entire room with 18,000 vacuum tubes in 1946 (Computer History Museum, n.d.). We have come a long way.

Design is something humans have always kept improving, from the stone hunting tools of the Paleolithic era to the metal tools of the Iron Age. Humans create something out of necessity that revolves around function alone, then spend many years improving it, refining the design, and establishing principles that become the foundation of the next innovations, and so on and so forth.

The same can be said today, except that now there are intangible principles that transcend functionality and separate a useful design from a top-notch one. Ask any designer or programmer to create something as mundane as a simple snake game, and you will see more than just the flowchart or list of functionalities a programmer would probably have produced two decades ago. Things have become more abstract, modular, and layered, and there is a lot of emphasis on good design: not just design built around a single function but design that eclipses functionality. We go beyond mere binary code, especially in complex systems.

This paper examines some of the digital artifacts born in the past two decades through the lens of abstraction, modularization, and layering, along with the advantages and disadvantages of their implementation and their role in the fast advancement of technology.

III. Data Abstraction

“Abstraction (in computer science) is the gathering of the general characteristics we need and the filtering out of the details and characteristics that we do not need” (BBC Bitesize, n.d.). The premise is stripping away the specifics of certain elements in a system and using general terms. By Janet Murray's example in her book, in a single abstraction, “fruit” can be used to describe apples, bananas, or grapes (2012). It seems like a very nonconcrete concept, which raises the question: what is the practical application of abstraction? What role does it play in new media?

It is difficult to grasp the purpose of abstraction because of how intangible it is. Despite that, this principle is ever present in new media, both technically and conceptually. To begin with, abstraction is used to create models, which are then used to develop algorithms that achieve a goal or solve a problem. Technically, abstraction is one of the main concepts of Object-Oriented Programming (OOP), the paradigm of Java, the language used to develop the majority of Android applications in the past decade (Javatpoint, n.d.). There are literally classes that are made abstract to hide the complexity of an implementation and simplify an algorithm. As a simple example, suppose an abstract class named Animal has a function called makeSound() (notice that, true to the definition of abstraction, these are general terms and functions). When a specific class named Dog implements makeSound(), it produces the sound of a barking dog, while a class named Cat implementing makeSound() produces the sound of a meowing cat. The programmer who uses either class need not worry about how makeSound() is implemented, only that it can be invoked when needed. The concept is the same even for complex systems.
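The Animal example described above can be sketched in a few lines. Python is used here for brevity (Java's abstract classes express exactly the same idea); the class and method names follow the example in the text.

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    """Abstract class: declares WHAT an animal does, not HOW."""
    @abstractmethod
    def make_sound(self) -> str:
        ...

class Dog(Animal):
    def make_sound(self) -> str:
        return "Woof"

class Cat(Animal):
    def make_sound(self) -> str:
        return "Meow"

def describe(animal: Animal) -> str:
    # The caller depends only on the abstraction, never on Dog or Cat directly.
    return animal.make_sound()

print(describe(Dog()))  # Woof
print(describe(Cat()))  # Meow
```

The function describe() works for any present or future subclass of Animal, which is precisely the simplification the text describes: the user of the abstraction never sees the concrete implementation.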

Abstraction is embedded in some of today's programming languages, as mentioned, especially the language used to develop Android applications. That means we probably encounter the very technical meaning of abstraction every time we use our smartphones. But abstraction as a concept does not stop there. Take Facebook, for example (Figure 1).

Figure 1. Facebook new post abstraction

On the homepage, you see an option to post an update on your timeline. When you click the “Post” button, the update appears on your profile and in your friends' news feeds. We do not see what happens to the data; we do not know how it is processed and inserted into Facebook's database (and we do not really have to know), nor how complex the system is that makes your update appear in your friends' feeds. We just know that clicking the Post button does it. The button is therefore an abstraction of all the processes that happen in the back end.

Let's take another example: a mobile chess application. When you play against the computer, you first select a skill level and then choose whether to play white or black. Once the game starts, every countermove the computer makes is based on the current state of the board and the skill level you selected at the beginning. The skill level determines how many alternatives the computer anticipates before making a move, or how far ahead it looks into the game before deciding its next move. The computer is an abstraction of the whole set of rules and the strategy that the chess algorithm deems best. It does not care what processor the smartphone is running, nor about the memory available to it, nor how input/output is captured by the application. You also do not see the calculations it makes to counter your move. It just does it, and the only visible representation of all this is the countermove it makes after you move on the chess board. The complexity is hidden from the user.
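The "skill level as look-ahead depth" idea can be illustrated without a full chess engine. The following sketch uses a deliberately tiny game (players alternately take 1-3 stones; whoever takes the last stone wins) instead of chess, and a depth-limited search in place of a real engine's evaluation; the function names are illustrative, not from any actual chess app.

```python
def search(stones, depth):
    """Return +1 if the player to move can force a win within `depth` plies,
    -1 if they are lost, 0 if the look-ahead horizon is reached first."""
    if stones == 0:
        return -1          # the previous player took the last stone and won
    if depth == 0:
        return 0           # horizon reached: score the position neutrally
    return max(-search(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones, skill_level):
    """The 'skill level' is simply the search depth: how far ahead to look."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: -search(stones - t, skill_level - 1))

print(best_move(5, skill_level=5))  # 1 (leaves a losing multiple of 4 behind)
```

A higher skill_level makes the computer search deeper and play better, exactly as the text describes, while the player only ever sees the resulting move.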

It is also important to note that abstraction is not limited to mobile applications; it is also used in library products (e.g., algorithm providers, middleware libraries, communication libraries) where the layering design concept is implemented as well. It is generally good practice because it simplifies a design, as long as it is not overdone.

Abstraction is a powerful design concept that helps designers of all kinds, in every field, focus on the fundamentals and take care of the minute details at a later stage. This approach isolates the complexity of a design. In the succeeding chapters, we will see how abstraction coexists with layering and modularization in a single system.

IV. Modularization

Modularization is the process of dividing a system into multiple modules, each of which works independently (Langlois, 2002). Unlike abstraction, it can be very tactile when it comes to new media's hardware; however, there is also a type of modularization that is immaterial. In this paper, let us examine two of the most widely used modularized systems of today, the smartphone and the system unit of a personal computer (Figure 2), and classify them as physical or immaterial modularization accordingly.

Figure 2. Modularized systems (Business Insider, 2017)

Personal computers have been commercially available for decades, but it is in the past two decades that buying separate parts commercially became mainstream. Nowadays, you can assemble your own system unit by buying each major part – independent modules – from different manufacturers (e.g., an ASUS motherboard, Ballistix RAM, a Seagate SSD, an Intel processor) and end up with a customized system unit that can be more powerful than a ready-assembled one. This is top-level modularization, and it is obviously of the physical type. But what about at a deeper level: will we also find independent inner modules seamlessly working together to form a top-level module? Figure 3 below answers this question, with the motherboard as an example.

Figure 3. Operating System Concepts Hierarchy Diagram

A computer's motherboard is made up of various components that, although not as easily replaced or assembled as in the system unit example earlier, are independent of each other from an operating system (OS) perspective. A computer must have an operating system (e.g., Windows, Linux, macOS) for it to know what to do and for humans to be able to tell it what to do. Figure 3 itemizes four major concepts of an operating system. The Intel processor in the customized system unit from the earlier example resides on the motherboard, and so does the Ballistix RAM. They are physically wired into their slots on the motherboard, but in the operating system they are modularized such that the processor falls under the process management “module” while the RAM is managed by the memory management module. This is an example of immaterial modularization.

Another example of immaterial modularization that is very much in use nowadays is modularization in software development, called modular programming. It is also worth noting that with modularization, those who use a certain module are blind to its complexity – abstraction again. This software design technique originates in the 1960s but has endured ever since and is applicable in all major languages developed in the past three decades; it is second nature to programmers today. The calculator app in your smartphone, for example, is probably modularized into “addition”, “subtraction”, “multiplication”, and “division” modules, with each module called according to the user's input. And as already shown in Figure 2, a smartphone is not only immaterially modularized but also physically.
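The calculator example above can be sketched as follows. This is a hypothetical, minimal version in Python (a real calculator app is far more elaborate); each operation is an independent "module," and the dispatcher that calls them is blind to how each one works.

```python
# Each operation is an independent module with the same interface.
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return a / b

# The dispatcher selects a module based on user input; it never needs to
# know how any module is implemented (abstraction within modularization).
OPERATIONS = {"+": add, "-": subtract, "*": multiply, "/": divide}

def calculate(a, op, b):
    return OPERATIONS[op](a, b)

print(calculate(6, "*", 7))  # 42
```

Note how the same calculate() code is reused for every operation, which is exactly the reusability advantage discussed below: adding a new operation means adding one module and one dictionary entry, with no change to the calling code.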

Much like a personal computer's system unit, a smartphone has a motherboard, a power source (the battery), a camera component, speakers, and so on. Each component is independent of the others, they are replaceable, and the parts are not necessarily manufactured by a single company. In the case of the iPhone 11, Broadcom supplies the Wi-Fi and Bluetooth chips (Kifleswing, 2020) while O-Film supplies the camera module (Neely, 2020). In the case of the Samsung S9+, Sony supplies the camera module while Qualcomm supplies the transceiver (iFixit, n.d.). The parts/modules work together and are all managed by the smartphone's operating system (e.g., Android, iOS), similar to a personal computer: regardless of the brand of the phone, both have modularized components.

Modularization has a number of particularly important advantages. For physical modularization, it is easier to manage and debug independent modules than a huge tangle of wires and connections. In the personal computer example, if your computer is not booting up, troubleshooting usually starts with checking whether the power supply module is working: if you hear the fan and the LED power indicator turns green, it is. The next step is probably to check whether the RAM is working or the SSD is failing; sometimes a technician swaps these components with spares and tries booting again, and so on until the problematic component is identified. The rest of the components are left as they are, the errant component is replaced, and the problem is fixed without having to buy a new system unit. Modularization also allows more flexibility and options, as is again the case with a personal computer.

As for immaterial modularization, on top of the ease of debugging and management, it elevates reusability and readability. With the OS example, if your Wi-Fi stops working, you can simply stop and restart the Wi-Fi service from the task manager (if you know which service it is), or turn the Wi-Fi adapter off and on from the settings, without having to restart the entire computer; restarting is usually the last resort if everything else fails. It is an extremely useful design principle in these kinds of scenarios. In the simpler calculator example, the advantage leans more toward reusability of the code: only four modules are invoked repeatedly to compute each operation depending on the user's input. More reused code means a smaller source base that is more manageable, clearer, and easier for another team member to understand in the case of more complex and intricate systems.

With all the above advantages, there is only one disadvantage (or challenge) that I encountered while researching: the modules must collaborate with each other very well and follow a set of agreed-upon standards for the whole to work. In the system unit example, with various vendors of RAM or SSDs, each vendor must follow a certain standard so that their product fits the slots of a motherboard from MSI, ASUS, or any other major motherboard supplier, and works regardless of the operating system. A solid state storage standard for SSDs, for example, is published by the Storage Networking Industry Association (SNIA, n.d.). Meanwhile, manufacturers of motherboards and developers of operating systems must also adhere to the standard to work seamlessly with components and modules from various other vendors. The same goes for smartphone components and operating systems.

As for the challenge of modularization in software development, a similarly close collaboration is needed. Developers whose modules work together must know the parameters and return types of the modules they depend on in order to avoid type errors. It is also good programming practice to document what a certain module does (so-called function headers) so that when other programmers encounter it, they know what its function is.
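A function header of the kind just described might look like the following. This is a hypothetical example (the function and parameter names are invented for illustration); the docstring states the parameters, their types, and the return value so that other developers can use the module without reading its internals.

```python
def apply_discount(price: float, percent: float) -> float:
    """Compute a discounted price.

    This docstring is the "function header": together with the type hints,
    it documents the module's interface for other developers.

    Args:
        price: original price; must be non-negative.
        percent: discount percentage between 0 and 100.

    Returns:
        The price after the discount is applied.

    Raises:
        ValueError: if price is negative or percent is out of range.
    """
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return price * (1 - percent / 100)

print(apply_discount(200.0, 25.0))  # 150.0
```

A teammate calling this module needs only the header: the parameter types and the return type are exactly the "agreed-upon standard" between collaborating modules that the text describes.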

Because of the massive advantages of modularization, it will most likely prevail in the decades to come. But what is the future of this design principle in new media? MSI's modular motherboard concept and LG's modular smartphone might give us an idea of what that future might hold (Figure 4).

In 2016, MSI announced, as an April Fool's joke, a fully modularized motherboard they called The One. Four years on, it has become obvious that MSI has no plans to actually produce modular motherboards, but they did introduce us to the concept and to how incredible it could be. In theory, The One supports all types of RAM, all Intel and AMD processors, all storage devices, and so on (MSI, 2016) – we can only imagine the endless potential of this theoretical product design. It is basically the dream of computer enthusiasts everywhere. But unlike the modularized motherboard, which we have not seen come to fruition, modular smartphones are further along (although they have not been very successful either). In the same year as MSI's announcement, Google demoed its modular smartphone under Project Ara, but later that year announced the project's suspension, stating that the Ara smartphone would not be commercially available (Statt, 2016). Little is known about the reasoning behind the suspension, but we can guess that it probably has something to do with the collaboration challenges of modularization (or perhaps production cost). One modular smartphone that did reach the market, however, is the LG G5 with its LG CAM Plus module as an accessory. The options are very limited at the moment, and even with 1.6 million units of the G5 sold in the first month of release (Android Authority, 2016), today's smartphone market has yet to embrace modular smartphones. Perhaps in the next few decades.

Figure 4. Modular motherboards and smartphones (MSI, 2016)

V. Layering

Layering “involves organizing information into related groupings and then presenting or making available only certain groupings at any one time. Layering is primarily used to manage complexity, but can also be used to reinforce relationships in information” (O'Reilly, n.d.).

The most obvious form of layering in new media is the internet – interconnected computer networks. Table 1 below shows the conceptual models on which internet communication is built. Although the modern internet is based on the simpler TCP/IP model, the OSI model helps visualize how network communication operates.


Layer   OSI Model       TCP/IP Model
7       Application     Application
6       Presentation    Application
5       Session         Application
4       Transport       Host-to-host transport
3       Network         Internet
2       Data link       Network access
1       Physical        Network access

Table 1. The OSI Model vs TCP/IP Model

In summary, the network access layer combines layers 1 and 2 of the OSI model (Science Direct, 2017). It pertains to the data bits and how they are carried around the network (e.g., fiber optics, wireless), and it addresses how the bits are framed into protocol units and addressed (e.g., MAC addresses). The internet layer, on the other hand, is where IP addresses live and where data is routed for transmission, while the transport layer connects the internet layer and the application layer. Lastly, the application layer specifies the protocols and interfaces used by hosts on a network; it focuses on end-user services and is, in itself, an abstraction. It is easy to confuse what one layer does, but it is also easy to remember that there is a layer for each task/role: you might forget that the transport layer is what connects the internet and application layers, but you know there is a layer that does the connecting. This is why layering is so effective in network communications.
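The way each layer hands its data down the stack can be sketched as a chain of functions, each wrapping ("encapsulating") the layer above with its own header. This is a toy illustration only: real protocols use binary headers rather than strings, and the header fields and default values here (a documentation IP address from RFC 5737, a made-up MAC address) are illustrative.

```python
def application_layer(path: str) -> bytes:
    # Application layer: an end-user protocol message (here, HTTP-like text).
    return f"GET {path} HTTP/1.1".encode()

def transport_layer(segment: bytes, dst_port: int = 80) -> bytes:
    # Transport layer: adds port addressing to connect app to internet layer.
    return f"TCP dst_port={dst_port}|".encode() + segment

def internet_layer(packet: bytes, dst_ip: str = "") -> bytes:
    # Internet layer: adds the IP address used for routing.
    return f"IP dst={dst_ip}|".encode() + packet

def network_access_layer(frame: bytes, dst_mac: str = "aa:bb:cc:dd:ee:ff") -> bytes:
    # Network access layer: adds the MAC-level framing for the physical link.
    return f"ETH dst={dst_mac}|".encode() + frame

# Sending = passing data DOWN the stack; each layer talks only to its neighbors.
wire = network_access_layer(internet_layer(transport_layer(application_layer("/index.html"))))
print(wire)
```

Receiving is the mirror image: each layer strips its own header and passes the payload up, which is why no layer ever needs to know how the others work internally.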

Within the application layer, where the OSI model's presentation layer resides, are yet more layers (the internet is built of layers upon layers that communicate with each other through protocols and standards). Figure 5 below shows these inner layers.

Figure 5. Presentation Layer layers. (Ragnarsson, 2015)           

Layering in current network communications also plays an important role in information security. Although it does not guarantee a completely safe network, this design principle enhances the integrity of a system. In software security design there is a “defense in depth” design principle: “a concept of layering resource access authorization verification in a system [that] reduces the chance of a successful attack” (Merritt, 2013). This approach requires unauthorized users to bypass each layer's authorization to gain access to a given resource.

The internet is considered one of the most important inventions of the century, and rest assured that its layers will endure, if not improve further. This design principle will continue to be part of the internet in the years to come.

VI. Conclusion

Examining the digital artifacts of the past two decades makes it very clear that the design principles of abstraction, modularization, and layering play a huge role in their success. With the advantages far outweighing the disadvantages (challenges), inventors, designers, and programmers have been able to create advanced technologies with a lot of potential for growth. Even better, these design principles are not isolated from one another: an abstraction can also involve layering and modularization, a layered design can coexist with abstraction and modularization in a single system, and a modularized design can incorporate layering and abstraction. Examining these principles is crucial to understanding the role they will play in the breakthroughs of the decades to come and how they have opened a huge opportunity for improving current systems and creating new ones for the advancement of technology.

Behind the scenes of the Internet

The advent of computers has made life and work easier. On the web, most pages are written in the same language (HTML) and delivered using the same protocol (HTTP). HTTP is a common Internet protocol (standard) that allows friendly conversations between, say, a machine running Windows and a machine running Linux. The web browser interprets the HTTP protocol and renders HTML into human-readable pages. Web pages written in HTML can be browsed anywhere: on computers, mobile phones, tablets, and even popular game consoles. Even when they speak the same language, different devices need to agree on certain rules when communicating over the web, just as you have to raise your hand to ask a question in class (I guess not so much now with Zoom classes). HTTP is that protocol for communication on the Internet. Thanks to HTTP, the client (such as your computer) knows it needs to request a web page first and send that HTTP request to the server, the computer specified by the URL. The server receives the request, finds the web page you want, and sends it back to the client, which displays it in the browser.

Each request/response starts with the URL in the browser's address bar. For instance, open a browser like Chrome, type in Google's web address, and press Enter, and you will arrive at Google's homepage. One thing you may not know is that the browser does not actually use the URL to request web pages from the server; it uses an Internet Protocol (IP) address. The URL is like a phone number or postal code: it identifies the server, not a physical phone or address. If you open a new browser tab, enter Google's numeric IP address in the address bar, and press Enter, you will open the same web page as when you entered the URL. We use URLs because people are generally better at remembering words than long strings of numbers. The system that makes this possible is DNS, which is in effect a dynamic directory of all machines connected to the Internet: when you type a web address and press Enter, that address is mapped to its corresponding IP address. Since tens of millions of machines are connected to the Internet, no single DNS server can contain a list of all of them, so there is a mechanism that forwards your request from one server to another when the first server cannot find what you want. After the DNS system sees the URL of Google's official website, it finds the corresponding IP address and sends it to your browser. Your browser then sends a request to the server at that IP address and waits for a reply. If the whole process goes normally, the server sends a message to the client (your browser) saying that everything is OK and then sends the web page you want. That status information is contained in the HTTP header.



Martin Irvine, Intro to the Web: Extensible Design Principles and “Appification”
Ron White, “How the World Wide Web Works.” From: How Computers Work. 9th ed. Que Publishing, 2007. P. 3

In the Age of Internet with Internet Thinking

How did Internet thinking come about? Productivity determines relations of production, and the characteristics of the Internet are likely to affect its business logic to a certain extent. The building blocks of industrial society are tangible atoms, while the basic medium of the Internet world is intangible bits. This means that the economics of the industrial era was an economics of scarcity, while the economics of the Internet era is one of abundance. Moreover, the network-structured Internet has no central node; it is not a hierarchical structure. Although different points carry different weights, no point holds absolute authority. Thus, the technical structure of the Internet determines its inner spirit: decentralization, distribution, and equality. Decentralization is a very important basic principle of the Internet.

In a networked society, the value of an individual or an enterprise is often determined by the breadth and richness of their connections: the broader and richer your connections, the greater the value you may hold. This is probably one of the basic characteristics of a pure information society, where your information content determines your value. Openness thus becomes a necessary means of survival – if you are not open, you will not be able to get more connections. Therefore, I believe that the Internet business model should be based on equality and openness, and Internet thinking must also reflect these characteristics. Equality and openness point toward democracy and humanity, and in this sense the Internet economy can be a truly people-oriented economy. In the agricultural era, the most important assets were land and farmers. In the industrial era, the most important assets were capital, machines (machines being solidified capital), and people alienated on assembly lines; in the early industrial age, workers were treated as machines, as if they were just screws in the assembly line. Now, in the era of the knowledge economy, one of our core resources is data, and another is knowledge workers. Enterprise management will also move from a traditional multi-level approach to a more networked, more ecological one. Let knowledge workers truly create value, and let them become among the most important players in any organization and in society as a whole.

Martin Irvine, The Internet: Design Principles and Extensible Futures (Why Learn This?)

TikTok's path to success – from an interaction design perspective

When it comes to interesting interaction design, some people think of niche apps with a unique sense of design, while popular apps that can be seen everywhere in daily life are, in contrast, rarely mentioned. In fact, the reason an app is recognized by most users on the market as a popular hit is inseparable from its unique and excellent interaction design.

Take TikTok/Douyin as an example. If you have ever used TikTok, you know the feeling of thinking you were on TikTok for only five minutes when in fact two hours had passed. Hence the popular saying in China, “Five minutes for Douyin, two hours for the world.” According to the 2018 Douyin Research Report, Douyin has about 150 million users, and on average a Chinese user opens Douyin 13.5 days a month. In addition, TikTok, the overseas version of Douyin, has become one of the most downloaded apps in the App Store and in the world; in the first quarter of 2018 it surpassed the download and usage figures of world-famous apps such as Facebook, YouTube, and Instagram, with more than 500 million monthly active users. As far as interaction design is concerned, TikTok's success is not unpredictable: whether in UI or UX, TikTok has almost no shortcomings.

Compared with the common waterfall and table-like app UI layouts, TikTok presents users with a distinctive auto-playing, vertical, full-screen layout. Although this vertical full-screen format is only a different way of presenting visual content, its intuitive, orderly, and efficient feedback can, to a large extent, make users feel immersed, which can also make them stay in the app. At the same time, this simulation of a first-person perspective greatly improves users' comfort and makes the experience feel more real. One of Shneiderman's golden rules of interface design says, “reduce short-term memory load,” and TikTok has done a good job of that.

Pareto's law tells us that much of the world conforms to the 80/20 rule: roughly 80% of consequences come from 20% of causes. Designers need not always meticulously pursue completeness, nor provide users with comprehensive information loads; they should try to avoid problems such as excessive content and browsing fatigue in the face of numerous lists and layouts. Designers should try to find the most critical 20% and spend most of their energy on the core, most critical content, providing a shortcut of greater benefit for most users. Based on these interaction design considerations and Shneiderman's golden rule, TikTok has successfully reduced the user's acquisition cost without making users do any complicated thinking or choosing, shortened the steps and distance of operation, and greatly improved the user experience. TikTok's interaction design therefore has no obvious disadvantages, and its comprehensive consideration of interaction design has largely contributed to TikTok's success today.



  • Ben Shneiderman, Catherine Plaisant, et al. Designing the User Interface: Strategies for Effective Human-Computer Interaction. 6th ed. Boston: Pearson, 2016.
  • Data, Q. (2018, 12 7). 2018 Douyin Research Report released. Retrieved from Tencent Cloud.
  • Wikipedia. (2020, 10 25). Pareto principle. Retrieved from Wikipedia.

Interaction Design

Alan Kay viewed computers as a “remediation machine” because he wanted to turn them into a “personal dynamic media.” Remediation means “the representation of one medium in another,” which tells us there is a relation connecting old media and new; as Manovich put it, remediation is a defining characteristic of new digital media. Kay also wanted to create the computer as an umbrella encompassing all sorts of media, or, as he named it, a “metamedium.” The next design step is simulation: Kay believed that “simulation is the central notion of the Dynabook.”

I enjoyed reading through Moggridge's book, and I also quickly read the fourth chapter, where I found David Liddle's three stages of technology use very interesting. Liddle divides the adoption of a technology into three stages: the enthusiast stage, the professional stage, and the consumer stage. Not only is the adoption of technology important; so is the classification of users, as it is part of interaction design. I personally think there is no such thing as a perfect design suitable for every group or user at every stage. Every design has a certain audience, and if the design satisfies the appetite of its users, then I think it is a successful design. The group of “enthusiasts” can almost be “ignored” by interaction designers because their preferences are too obvious, and they are often intoxicated by the entertainment brought by the technology itself (there are lots of Apple enthusiasts, for example, and they will buy Apple products no matter what). Still, I think it is very necessary to understand the opinions of this group in the early stage of design, because these people are often very familiar with the core advantages of the technology and will probably have a more precise understanding of it. For designers, knowing this kind of information will definitely help them consider the technology's impact on the product and make it a prominent focus. The more difficult challenge probably lies in finding a good balance between the needs of professional users and general consumers. For example, FTP (File Transfer Protocol) is a software protocol for exchanging information between computers over a network. We can call users who are accustomed to or often use FTP “professional users” and those who basically do not use FTP or only know its name (like myself) “consumers”.
So to design an FTP software interface that wins over both groups, interaction designers must consider general consumers’ lack of professional background and balance technical and professional requirements against ease of use in the design plan. Otherwise, if consumers can’t make a product work, they take it back to the store. That is the real threat.



Bill Moggridge, ed., Designing Interactions. Cambridge, MA: The MIT Press, 2007.

Lev Manovich, Software Takes Command. New York: Bloomsbury, 2013.

Computational thinking

I think Wing’s article changed my take on computational thinking, as I had always assumed it was simply how science people think, logical thinking, and that only people working in fields like computer science or engineering would need it. But Wing labels it an “attitude and skill set” that everyone can learn and use. The emphasis is on solving problems by exploiting the fundamental concepts of computer science: abstraction, decomposition, recursion, separation of concerns, and so on. In sum, Wing equates computational thinking with thinking like a computer scientist.
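To make two of these concepts concrete, here is a small toy sketch of my own (not from Wing’s article): decomposition says a folder’s size is just the sum of the sizes of its parts, and recursion measures each part by the same rule. The nested-list representation of folders is an invented illustration.

```python
def total_size(item):
    """Recursively compute the 'size' of a nested folder structure.

    A file is represented by an int (its size in, say, kilobytes);
    a folder is a list of items, which may themselves be folders.
    """
    if isinstance(item, int):      # base case: a single file
        return item
    # recursive case: decompose the folder and sum its parts
    return sum(total_size(part) for part in item)

# A folder containing two files and a subfolder with two more files.
folder = [10, 20, [5, 15]]
print(total_size(folder))  # 50
```

The same problem could be solved with a loop and an explicit stack; the recursive version simply mirrors the decomposition directly.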

Moving on to this week’s LinkedIn Python learning and programming: it is very easy to understand for someone who has zero background in programming, like myself. I used to work at a high-fashion jewelry company in New York, and since it is an e-commerce company, we had two in-house developers (at that time; I think the developer team has since expanded to at least five or six people) to write code and develop and upgrade the website. Every time I passed by their desks, they were always on a black screen full of code, writing stuff I just couldn’t understand. I found that so interesting, and now that I have finally come into contact with programming, it is not as difficult as I thought in the beginning. Just like Ms. Davis said in the video, there are hundreds, if not thousands, of programming languages, and learning Python is a good start for beginners like myself because of its concise format. I used to think that programming was dull and boring, and I always had a stereotypical programmer image deeply rooted in my mind: a dude (usually Asian) with glasses sitting in front of at least two or three computer screens in a monochrome hoodie, a bit socially awkward and maybe a little weird or creepy. Now I feel that programming is a joyful thing, and it can be cool. This is not just something nerds do; it can be for everyone, just like Wing said! I followed the instructions in Ms. Davis’s video and tried “hello world,” and the sense of accomplishment gained after using code successfully is beyond words.

Hello World, right now.

In [1]: print("Hello, World!")

Hello, World!



Davis, A. (2019, July 12). LinkedIn Learning. Retrieved from LinkedIn.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.


Affordance and Design

In this week’s readings, Murray argues that “calling objects made with computational technology new media obscures the fact that it is the computer that is defining difference not novelty.” Because the term “new media” covers such a variety of applications, it may encourage sloppy thinking, and sloppy thinking, as Norman also suggests, often leads to sloppiness in design. Thus, Murray recommends replacing the term with a single new medium: the digital medium. She says there are four representational properties of digital environments: procedural, participatory, spatial, and encyclopedic. Let us take a look at the iOS keyboard. Why is the iOS keyboard good? Although the virtual keyboards of other systems are generally larger than the iOS keyboard, their visual experience is still a mess in comparison.

The iOS keyboard is the ancestor of the new generation of virtual on-screen keyboards, with many innovative designs and technical applications. When the first-generation iPhone was launched, Apple made a detailed and comprehensive video introduction, including an introduction to the iOS keyboard. Steve Jobs introduced the iPhone at Macworld 2007 starting with its revolutionary user interface; he began by listing four mobile phones with full keyboards, the Moto Q, BlackBerry, Palm Treo, and Nokia E62, which were called “smartphones” at the time. Their buttons and operations cannot change; whether you need them or not, they are always there. But different applications require different user interfaces, so in contrast, the iOS virtual keyboard appears only when needed, and different keyboards are used in different applications. For example, if you open Safari on your iPhone, it automatically provides a keyboard with a “.com” key. At the same time, a virtual keyboard needs to solve the problems of accuracy and efficiency so that typing can be easy and fast, hence the magnifying glass for easy cursor repositioning, automatic correction, and phrase matching. There is also an innovative design that predicts the next letter, word, phrase, or even name based on the dictionary and the user’s typing habits and preferences. Another innovative design of the iOS keyboard is the enlarged key preview displayed when a key is pressed. In the visual experience before, during, and after the operation, the iOS keyboard makes people feel its quietly excellent performance. Although in fact we are just tapping glass, it enriches the experience. It is just a piece of glass, so why do some people have a better experience and some don’t? Of course, this has little to do with the glass itself; it is mainly because of the interactive interface.
For example, the iOS keyboard is easier to type on than the Android keyboard, so what are its advantages and features? The answer “good design” is too general, so let’s look at affordance.

Clear information concerns the organization, arrangement, and presentation of information. This is especially evident in user interface design, because the carrier of the affordance and the perceivable information coincide: since the interface is displayed on a single screen, the scale at which information is organized, arranged, and presented determines its pros and cons. The reason the iOS keyboard looks better is not that it looks more like a physical keyboard, but because of the exquisiteness of its scale. Interaction through an interface is a process: under the keyboard’s top-level affordance (in some contexts, it can be used for input) are embedded many other affordances, and the user experience is an integral process.


Donald A. Norman, “Affordance, Conventions, and Design.” Interactions 6, no. 3 (May 1999): 38-43.

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Selections from the Introduction and chapters 1-2.

Destroying the Wall that Separates Technology

After reading through this week’s readings, I want to quote Ani DiFranco, via Professor Irvine’s article: “A tool can be a weapon, if you hold it right.” Technology has taken the pulpit of today’s society, and as it continues to revolutionize the way humans interact with each other, sceptics and believers alike persevere in assessing its overall impact. Consider colleges today, especially connectivity in college relationships, one of technology’s many facets. Technology is deeply relevant to establishing relationships, but some may ask: is that enough to justify a morphed social culture that encourages virtual connectivity over the physicality of actually “being there”? With COVID-19 still a global pandemic, virtual connectivity seems like the only way, and it also poses a new question: is virtual connectivity the new norm for our future life?

Whether you answer yes or no, we cannot deny that technology is now everywhere in our lives, and I mean everywhere; no one can say technology exists in an isolated, separate domain, because the result of emerging technologies is not social isolation but social integration. Look at schools now: everyone is using Zoom as an online teaching platform, a mediation connecting students, teachers, and faculty from all over the world. It is not just about connecting; it is about socializing too. And it is not just socializing; it is techno-socializing. As Debray suggests, we need to overturn “the wall.” Today it is odd for someone not to have a Facebook or Instagram account. It is almost a social imperative to have a profile set up; from class groupings to collegiate or business events, these platforms have become the go-to places to stay informed. Most notably, they dare to mimic presence with constant texts and calls; they have become an avenue to “socialize.” They connect people from different walks of life and different geographical locations, and they make it possible to find a childhood friend you haven’t seen in years. Some may even note the impact of this new-age medium on their sexual lives, as it has become a tool for casual hook-ups. These social media apps have merged seamlessly into our everyday life and become part of modern human culture.

A few years ago, I read some pieces from Robert Romanyshyn’s book Technology as Symptom and Dream. He argues that technology is not a series of linear events that occurred over the course of our history; rather, it is “the enactment of human imagination in the world” (p. 10). Romanyshyn regards the study of technology as a psychological reality, as creation, and most importantly, as the making of a cultural dream (p. 10). There are millions of forms of life on this planet, yet we humans are the only species blessed with the gift of language, a powerful and amazing system of communication that lets us share information with precision. It also distinguishes us from other species because we can learn information and pass it on from generation to generation: a history. Romanyshyn says that dreams speak the language of images, and that dreams are patterns and webs of interconnection with aesthetic values (p. 14). Technologies as dreams are about creation. We create with human achievement, with history, and with discovery, and it is those stimuli, hopes, dreams, fears, images, and inspirations that have shaped our cultural world.



Regis Debray, “What is Mediology?”, from Le Monde Diplomatique, Aug., 1999. Trans. Martin Irvine.

Romanyshyn, R. D. (2006; originally published in 1989). Technology as Symptom and Dream.

Martin Irvine, “Understanding Media, Mediation, and Sociotechnical Systems: Developing a De-Blackboxing Method” [Conceptual and theoretical overview.]

Social Media is a Drug

Maybe it is true that human beings developed symbolic thinking as early as the Middle Stone Age, as Henshilwood believes, and it has been deeply rooted in our minds ever since. We are surrounded by artefacts, and as Cole suggests, we cannot view artefacts as mere objects. He writes, “Artefacts are simultaneously ideal and material. They coordinate human beings with the world and one another in a way that combines the properties and tools.” I am very interested in examining the cognitive artefacts in our everyday life, and in this week’s essay, let’s look at social media.

Social media is ubiquitous, especially among young people; everyone I know uses at least two social media platforms. Social media is a drug because we have a basic biological imperative to connect with other people, one that directly affects dopamine release in the reward pathway. Millions of years of evolution stand behind that system, pushing us to come together, live in communities, share things, and socialize. So there is no doubt that social media, which optimizes this connection between people, is very addictive. I will admit that I am pretty addicted, especially to Instagram and Weibo: every day I spend an average of 1 hour 51 minutes on Weibo and 1 hour 28 minutes on Instagram. It has definitely gotten much worse during COVID; with lockdowns and social distancing, social media seems to be the only way for many people to connect. Yet one major reason I want to discuss social media is that it is going out of control.

Media technologies as cognitive technologies have become so advanced that sometimes I think social media apps know me better than I know myself. There is a classic saying that goes something like, “If you are not paying for the product, then you are the product.” Many people may think, “Oh, Instagram or Facebook is just a place for me to like pictures and connect with my friends.” Yet the very goal of apps like these is to keep people engaged on the screen and capture as much of their attention as they can. Have you ever noticed that sometimes you search for something on Google, then open your Facebook page, and that exact thing is now appearing in a Facebook ad? That is absolutely no coincidence. The reason companies like Facebook and Google are so mega-successful is that they make great predictions. How do they do that? Data. A ton of data. Everything we do on the internet and social media is being watched and measured: which images one looks at, and for how long. Think about your social media feed: every time you refresh the page, something new pops up, and it almost always is something you may be interested in. This kind of cognitive technology is gradually modifying our behavior, hacking into and exploiting the vulnerabilities of human psychology so that it can drive growth, engagement, and user sign-ups for companies like Facebook or Twitter.
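The prediction loop described above can be caricatured in a few lines. This is a deliberately simplified toy sketch of my own, not how any real platform works: it “measures” how long a user looked at posts on each topic, averages that into a predicted attention score, and ranks new candidate posts by it. The watch log and topics are invented for illustration.

```python
from collections import defaultdict

# Invented log of past behavior: (topic, seconds the user looked at it).
watch_log = [("sneakers", 40), ("politics", 5), ("sneakers", 55), ("cats", 20)]

# "A ton of data" reduced to a simple statistic: mean attention per topic.
seconds = defaultdict(list)
for topic, s in watch_log:
    seconds[topic].append(s)
predicted = {t: sum(v) / len(v) for t, v in seconds.items()}

# New candidate posts are ranked by predicted attention, highest first.
candidates = ["cats", "politics", "sneakers"]
feed = sorted(candidates, key=lambda t: predicted[t], reverse=True)
print(feed)  # ['sneakers', 'cats', 'politics']
```

Real systems use vastly richer signals and models, but the incentive is the same: whatever held your attention before rises to the top of the feed.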

Think about your pencil. You would think of it as nothing but a tool, because no one has ever blamed a pencil for meddling with political elections. But social media isn’t just a tool; it’s a carefully designed artefact that aims to use your own psychology against you. That is very bad and scary. What’s worse, the more I think about the ever-advancing cognitive technology behind social media, the more I worry that one day scenes from Black Mirror or Westworld might become a reality. Yet I have no willpower to get rid of social media for good, because I am already addicted.



Kate Wong, “The Morning of the Modern Mind: Symbolic Culture.” Scientific American 292, no. 6 (June 2005): 86-95.

Michael Cole, On Cognitive Artifacts, From Cultural Psychology: A Once and Future Discipline. Cambridge, MA: Harvard University Press, 1996. Connected excerpts.

The Devil is in the Details: Apple’s User Interface

When I first read this week’s readings, I immediately thought of the battery connections I learned in junior high physics class. There are two kinds: serial connection and parallel connection. In a serial connection there is only one current path, with current flowing from the positive pole to the negative pole; if one part is damaged or disconnected, the entire circuit is broken, no current flows, and everything stops working. In a serial connection, everything is interconnected, so either everything works or everything stops. In a parallel connection, the current from the positive pole divides into two paths at the branch, and both paths carry current; even if one branch is disconnected or damaged, the other branch still forms a complete path with the main circuit. In this case the branches are independent of one another, and thus there is modularity in the battery connection.
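The circuit analogy can be sketched in code. This is my own toy model, not from the reading: a component is simply `True` (intact) or `False` (broken), a series circuit conducts only if every component does, and a parallel circuit conducts if at least one branch does.

```python
def series_current(components):
    """A series circuit has one path: any broken component breaks it all."""
    return all(components)

def parallel_current(branches):
    """A parallel circuit has independent paths: one intact branch suffices."""
    return any(branches)

bulbs = [True, False, True]        # the middle component is broken
print(series_current(bulbs))       # False: everything stops working
print(parallel_current(bulbs))     # True: the other branches still carry current
```

The failure of one module in the parallel case stays contained, which is exactly the property Baldwin and Clark prize in modular designs.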

Now let’s look at my phone. I have an iPhone X, purchased two years ago, and I have to say this phone has served me very well. When Apple first introduced it three years ago, I was initially intrigued only by its revolutionary design and overall change: the home button’s fingerprint sensor was replaced with Face ID, plus an all-glass design with a Super Retina display. Yes, I was sold on this futuristic phone. Yet, to my surprise, the iPhone X’s user interface experience was even better. Any Apple or iPhone fan might notice that all iPhones and Apple products share a highly unified design language, and Apple’s design details are always amazing; for example, the squircle. As seen below, every app icon and window, from the first iPhone to the latest, shares the same rounded-rectangle design. With the coming of the iPhone X, this design was incorporated into the general shape and frame of the iPhone X and all of the recent releases.

According to Baldwin and Clark, a good design needs to address information like “architecture, interfaces and integration protocols and testing standards.” I think a good design should also be communicative. Let’s look at another example: iMessage. As you can see from this screenshot of mine, the background color of the text message bubbles changes in depth; the earlier the message, the lighter the color.

When sending two or more messages in a row, the space between them is small and narrow (see my screenshot). If there is an interval between messages, the spacing above and below the text becomes larger.

In my experience with Apple’s interfaces, I saw various metaphors and hints, the smooth experience brought by nonlinear animation, and the process of carving out details. This is something all designers should continue to pursue, and what it ultimately presents to users is an exceptionally considerate experience.



Carliss Y. Baldwin and Kim B. Clark, Design Rules, Vol. 1: The Power of Modularity. Cambridge, MA: The MIT Press, 2000. Excerpts.