Category Archives: Final Project

Sacha Qasim: Final

How humans interpret symbols versus how computers are designed to digitally transform information 

The transformative advances in computing are not founded on math and programming alone. The very core of computing stems from the “themes of representation and automatic methods of symbolic transformation”[1]. This paper delves into the history and evolution of computational thought, how we interact with computing systems, and how modern computing interprets code, all while examining the symbolic system embedded in the physical architecture of computing and how it shapes human interaction.

The birth of modern computing was enabled by generations of human consciousness and symbolic thought. The modern computer is a complex black box that continues to evolve into ever more advanced systems. For decades its capabilities have been continuously enhanced, producing the remarkable technical innovation it is today.

Dr. Martin Irvine argues that “modern computers come from the same source as language, writing, mathematics, logic, and communication- capabilities that are uniquely human and uniquely symbolic”[2]. From cave paintings to artificial intelligence: how did we get here? Symbolic cognition is the prerequisite for conceptual orientation and abstraction. From the moment the brain begins to develop in the womb, natural language processing becomes part of gaining understanding. It is instinctive for humans to pursue the acquisition of knowledge, turning signs and symbols into meaning.

Abstract concepts can be represented in myriad ways by enabling symbolic thought processes in subjects such as science, philosophy, and anthropology. This applies not only to intangible abstractions but to concrete mediums we can grasp and interact with, such as musical instruments, chess, and cooking.

Symbolic systems are what combine these procedures of interpreting and generating new expressions. Dr. Irvine identifies four applications that form the substrate of these systems:

  1. Open combinatoriality: a core generative feature of natural language and other symbol systems.

This is an advantage for creating new concepts, responding to new and unpredictable situations, and communicating them through a language community.

  2. Symbol systems have built-in features for abstraction and reflexivity.

Metafunction: using symbols to represent, describe, and interpret other symbols.

  3. Symbol systems are intersubjective and collective, enabling all things social.

These are the features that make symbolic language and thought the primary means of communication.

  4. Symbolic cognition is externalized and off-loaded in media memory systems, enabling the formation of cultures and societies.

This is how societies are formed: through writing, media, art, computer systems, and digital memory.[3]

Below is a graph designed by Dr. Irvine depicting The Continuum of Human Symbolic-Cognitive Capabilities. 

This graph illustrates how signs and units have evolved through abstract cognitive thought, lending themselves to modern-day computing.

The bridge to modern computing is formed through cognitive technologies. The word “technologies” is broad: it applies to any tool that extends human capability. Technology is not limited to the cool, niche, high-tech gadget; it includes any creative production of human design, such as fire, the wheel, and the refrigerator. What distinguishes cognitive technologies from technology in general is that they sit within the computer science landscape and are designed to represent human cognition. Computer science that mimics functions of the brain is based on earlier patterns of symbolic cognition. How symbolic cognition is applied and how cognitive technologies are designed are interchangeable, for the obvious reason that humans design them. Current technologies carry an invisible archaeology of human symbolic cognition, so much so that we have normalized humanoid designs in computer science such as Siri, Alexa, Roomba, and, in extreme cases, the robot Sophia.

For instance, an iPhone presents its interface configurations and structure based on a “technically mediated symbol system but also serves as a metafunction”: “The iPhone as meta-medium: a medium designed to represent and process other media”.

The design of computing begins with the binary structures of electronic cells and logic circuits. These logic circuits are switches in the architecture of computing, guiding code to digitize symbols in ways ultimately imposed by human design. In other words, computers have no logic of their own. Through code, we implement key aspects of symbolism in computing structures: we distribute data by compiling source code files and then “running” the binary code. This is the act of “interpreting” data (not true interpretation, but the product of layers of design assigned in earlier coding). The system processes enough data to project outputs in pixels, audio, and other mediums that cater to human interpretation. This computing process is designed to produce partial semiotic agents, seen through the lens of collective symbolic cognition and shared symbol systems that are familiar and recognizable to human interpretation.

Below is a chart showing how binary code maps onto the symbols we use daily. Binary values are only true or false, 0-or-1 switches: on their own they divide into one state or the other, but once logic gates are implemented, AND and OR gates begin to factor in, enabling further operations on data.

Image source: https://forums.newtek.com/showthread.php/143290-Boolean-Data-Type-and-Comparison-and-Logic-Gates-nodes
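The behavior of AND, OR, and related gates can be sketched in a few lines of code. This is a minimal illustration of the logic, not of how hardware is built, with bits represented as the numbers 0 and 1:

```javascript
// Bits are represented as the numbers 0 and 1.
const AND = (a, b) => a & b; // 1 only when both inputs are 1
const OR  = (a, b) => a | b; // 1 when at least one input is 1
const NOT = (a)    => a ^ 1; // flips the bit
const XOR = (a, b) => a ^ b; // 1 when the inputs differ

// Gates can be combined: a "half adder" adds two single bits,
// producing a sum bit and a carry bit.
const halfAdder = (a, b) => ({ sum: XOR(a, b), carry: AND(a, b) });

console.log(AND(1, 1), OR(0, 0), NOT(0)); // 1 0 1
console.log(halfAdder(1, 1));             // { sum: 0, carry: 1 }
```

The half adder hints at how mere switches become arithmetic: combining gates lets the machine carry a bit to the next column, which is the seed of the addition algorithm discussed below.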

On how the computer is designed for “meaning preservation,” Denning and Martell explore how meaning is carried into computing machinery:

“When we dig a little deeper into how a machine transforms inputs, however, we can see an important aspect of the design of a program that we call ‘meaning preserving’. Consider the addition of two numbers, a and b. What does it mean to add two numbers? It means that we follow a series of steps given by an addition algorithm. The steps concern adding successive pairs of digits from a and b and propagating a carry to the next higher pair of digits. We have clear rules for adding pairs of numbers from the set {0,1,2,…, 9} and producing carries of 0 or 1. As we design a program for the algorithm we pay careful attention that each and every instruction produces exactly the incremental result it is supposed to…

In other words, the design process itself transfers the idea of addition from our heads into instruction patterns that perform addition. The meaning of addition is preserved in the design of the machine and its algorithms. 

This is true for any other computable function. We transfer our idea of what it means for that function to produce its output into a program that controls the machine to do precisely that. We transfer our idea of the meaning of the function into the design of the machine.

From this perspective the notion that machines and communication systems process without regard to the meaning of the binary data is shaky. Algorithms and machines have meanings implanted in them by engineers and programmers. We design machines so that the meaning of every incremental step, and output, is what we intend, given that the input has the meaning we intend. We design carefully and precisely so that we do not have to worry about the machine corrupting the meaning of what we intended to do…” 
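The digit-by-digit algorithm Denning and Martell describe can be written out directly. The sketch below adds two numbers given as decimal strings, adding successive pairs of digits and propagating a carry of 0 or 1 to the next higher pair, exactly as the quoted passage explains; the function name is my own:

```javascript
// Add two non-negative decimal numbers given as strings,
// digit pair by digit pair, propagating a carry of 0 or 1.
function addDecimalStrings(a, b) {
  let i = a.length - 1, j = b.length - 1; // start from the lowest digits
  let carry = 0, out = "";
  while (i >= 0 || j >= 0 || carry) {
    const da = i >= 0 ? a.charCodeAt(i--) - 48 : 0; // digit of a, or 0
    const db = j >= 0 ? b.charCodeAt(j--) - 48 : 0; // digit of b, or 0
    const sum = da + db + carry;
    out = String(sum % 10) + out;  // this column's result digit
    carry = sum >= 10 ? 1 : 0;     // carry to the next higher pair
  }
  return out;
}

console.log(addDecimalStrings("456", "789")); // "1245"
```

Each step of the loop produces exactly the incremental result it is supposed to, which is the sense in which the design preserves the meaning of addition.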

Ultimately, computation is data-driven code that displays itself through information representations. Peter Denning emphasizes that computer science is “the study of phenomena ‘surrounding computers’” and returns to “computer science is the study of information processes”: “Computers are a means to implement some information processes. But not all information processes are implemented by computers — e.g. DNA translation, quantum information, optimal methods for continuous systems”. [7]

Computing and code are thus black-box systems made comprehensible to us through a series of symbolic layers of design and precise calculation. Language and art sit at the forefront of these complex designs in modern computing. Our pursuit of math and science would not be possible without the continuum of language, enabled through human interpretation and translated into the data that feeds computing, software, and technology. These practices have been refined toward what is most efficient and will continue to evolve as human cognitive abilities do.

A more concrete example of how symbols evolve into richer interactions with websites is found in languages such as HTML, CSS, and JavaScript. A computer program guides the computer through a set of instructions to be executed through the code; such programming instructions are called statements. Both HTML and JavaScript use the web browser in order to be executed.

How are HTML, CSS, and JavaScript designed to facilitate the symbolic capabilities we use? The ability to create two-dimensional frames with text, images, and design is coded through HTML, CSS, and JavaScript. The better the design, the more appealing it is for us to use regularly. With attractive, easy designs for interaction, a website can even lead to psychological dependency: addiction. Many social media platforms, e.g., Twitter, Instagram, and Facebook, use the slot-machine interface, which draws a user in through “ludic loops”, “cycles of uncertainty, anticipation, and feedback.”[8] It is important to acknowledge that anything digitizable can be represented in the two-dimensional substrate of pixels, including being changed and transformed by software that can cut, paste, and change colors.

HTML and CSS:

First, HTML, CSS, and JavaScript are all written as files in text editors. Within the files, each keyword has a specific meaning and function that the browser processes the way we intend.

Hypertext Markup Language (HTML) is the standard markup language for creating a website. It is structured to detail the website's data through a series of elements and symbols, as previewed earlier in this paper. An element is what defines declarations in the HTML code; elements are what display the content in the browser, whether Chrome, Firefox, Safari, etc. Without elements, the browser cannot determine how to display the content of the website.

Image: the skeleton of an HTML document.

Once the HTML content is in place, one then uses the Cascading Style Sheets (CSS) language to guide how the HTML is displayed on any particular device. Specifically, CSS is the definitive source for styling a website, with features enhancing the design, layout, and other variations of the display to accommodate the plethora of devices through which the website will be accessed. Before CSS, HTML was becoming much more burdensome for web developers, as the aesthetics had to be added alongside the content of the website. CSS relieved HTML developers (also called front-end engineers) to focus on content and leave the aesthetic and major design elements to CSS.

Image: an example of CSS code implementing color, font, alignment, and sizing.

Another simple view of how CSS code works:

This image displays each of the values and declarations that guide CSS.

JavaScript:

JavaScript was invented by Brendan Eich in 1995 and became standardized in 1997. JavaScript is essential for web developers to learn along with HTML and CSS; roughly 95% of all websites use JavaScript somewhere in their interface. JavaScript travels with HTML files and enables the on-the-fly interactivity we are accustomed to. It is designed to elegantly handle interactions on any device: iPhone, desktop, tablet, etc.

JavaScript is also designed to distribute computational load for tasks that take more processing power: not just fetching data, but connecting to programs and analyses that demand more computation, which can run on the web server side rather than solely on a local device. JavaScript can target many types of platforms when it knows what it will be connected to, e.g., streaming media, which takes a lot of computational power; the processing power on the streaming side must match the capability of the client device.

JavaScript is capable of many things, such as updating and changing HTML and CSS code; chiefly, it can calculate, manipulate, and validate data. It is often the final step in designing a website, adding more “logic” on top of the HTML and CSS elements. JavaScript is therefore much more dynamic and powerful than HTML or CSS, and it is used to develop and manage large applications. Its core capabilities are changing HTML content and attribute values and hiding and showing elements; for CSS, JavaScript can change any style. The browser knows a file is JavaScript because of the .js extension added to its name, as in “app.js”.
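In a browser these changes go through the DOM API, with calls such as `document.getElementById("demo").innerHTML = "Hello"` or `element.style.color = "red"`. The sketch below imitates those operations on a plain object standing in for a DOM element, so it runs outside a browser; the object's fields mirror the real DOM property names, but the element and helper functions are illustrative stand-ins:

```javascript
// A plain object standing in for a DOM element (so this runs anywhere).
// Real DOM elements expose the same property names.
const element = {
  innerHTML: "Original text",
  style: { color: "black", display: "block" },
};

// The three core manipulations described above:
function setContent(el, html)      { el.innerHTML = html; }      // change HTML content
function setStyle(el, prop, value) { el.style[prop] = value; }   // change a CSS style
function hide(el)                  { el.style.display = "none"; } // hide the element

setContent(element, "Updated by JavaScript");
setStyle(element, "color", "red");
hide(element);

console.log(element);
```

In a real page, the same three calls would instantly change what the user sees, which is exactly the dynamism that sets JavaScript apart from static HTML and CSS.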

Image: JavaScript code for code art

In this image, the JavaScript source code shows how to develop art via code.

This is the final image using all three languages: HTML, CSS, and JavaScript.

The ultimate question is how the web browser analyzes and processes JavaScript to make data comprehensible to a human audience. The browser's ability to understand JavaScript is a layered and complex system. For this example, I will use Google Chrome, as it is what I am currently using and is the most widely used web browser.

Image: an overview of how JavaScript works

The engine takes the string of code and carefully examines each symbol and character, matching them against its implemented glossary: it tokenizes any string of code into an array of tokens, so that each element of the code is characterized. For example, if my string of code is `let x = 10`, the engine will convert it into an array such as this:
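A toy version of this tokenizing step can be sketched as follows. Real engines use far more elaborate scanners; this hypothetical `tokenize` only handles whitespace-separated pieces like those in `let x = 10`:

```javascript
// Classify each whitespace-separated piece of the source into a token.
function tokenize(source) {
  const keywords = new Set(["let", "const", "var"]);
  return source.trim().split(/\s+/).map((text) => {
    if (keywords.has(text)) return { type: "Keyword",    value: text };
    if (/^\d+$/.test(text)) return { type: "Numeric",    value: text };
    if (text === "=")       return { type: "Punctuator", value: text };
    return { type: "Identifier", value: text };
  });
}

console.log(tokenize("let x = 10"));
// four tokens: Keyword "let", Identifier "x", Punctuator "=", Numeric "10"
```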

Then, using this array of tokens, the Abstract Syntax Tree (AST) is generated by parsing. The AST is the tree representation of the source code, a fascinating parallel to the syntax trees linguists use to break down sentences. Parsing is the key step that defines each variable.

Image: AST visualization
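Parsing a token array for `let x = 10` into an AST can likewise be sketched in miniature. The node names below loosely follow the ESTree convention that real JavaScript parsers use, but the `parseDeclaration` function is a hypothetical illustration that only handles this one declaration shape:

```javascript
// Build a tiny AST from tokens for a declaration like "let x = 10".
function parseDeclaration(tokens) {
  const [kw, id, eq, num] = tokens;
  if (kw.type !== "Keyword" || eq.value !== "=") throw new Error("unexpected token");
  return {
    type: "VariableDeclaration",
    kind: kw.value, // "let"
    declarations: [{
      type: "VariableDeclarator",
      id:   { type: "Identifier", name: id.value },
      init: { type: "Literal", value: Number(num.value) },
    }],
  };
}

const tokens = [
  { type: "Keyword",    value: "let" },
  { type: "Identifier", value: "x"   },
  { type: "Punctuator", value: "="   },
  { type: "Numeric",    value: "10"  },
];
console.log(JSON.stringify(parseDeclaration(tokens), null, 2));
```

The nesting makes the sentence-diagram parallel concrete: the declaration node branches into an identifier and a literal, just as a linguistic syntax tree branches into subject and predicate.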

Then bytecode is generated, which executes the code. Next, just-in-time (JIT) compilation occurs: the source code at this point has been compiled and is being profiled as it runs. In Google Chrome's V8 engine specifically, Ignition completes the generation and profiling of the bytecode, and at this point the JavaScript code is up and running. Finally, “TurboFan is the optimization compiler inside V8, based on the info that Ignition collected, TurboFan starts to optimize the functions for better performance.”[11]

In conclusion, advances in programming and technology all stem from symbolic and cognitive human connective behavior, from cave paintings and hieroglyphics to music and much more. Through this paper, we examined the evolution of signs and symbols and how they advanced into the binary code that became the foundation of computing. The computer is undeniably an exterior organ for us, and advances in programming have enhanced our affinity to these machines by way of the aesthetically engaging platforms created with HTML, CSS, and JavaScript. With a foundational understanding of how these programs work, the way source code travels through so many black boxes to become visible on this very screen deserves pause and admiration for the complexity of these systems.

 

Citations:  

[1] Dr. Martin Irvine. The First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing. 

[2] Dr. Martin Irvine. “Introduction to the Human Symbolic Capacity, Symbolic Thought, and Technologies.”

[3] Dr. Martin Irvine. Key Concepts in Technology: Symbolic Cognition and Cognitive Technologies.

[4] Irvine Youtube video. https://www.youtube.com/watch?v=FEnrsv_YTDE&ab_channel=MartinIrvine

[5] Dr. Martin Irvine. Introducing C. S. Peirce’s Semiotic: Unifying Sign and Symbol Systems, Symbolic Cognition, and the Semiotic Foundations of Technology.

[6] Dr. Martin Irvine. “Intro to Symbol Systems, Semiotics, and Computing: Peirce 1.0.”

[7] Peter Denning, Craig Martell. The Great Principles of Computing

[8] https://ihpi.umich.edu/news/social-media-copies-gambling-methods-create-psychological-cravings

[9] https://www.w3schools.com/js/js_intro.asp

[10] https://www.codecademy.com/learn/introduction-to-javascript

[11] https://medium.com/@mustafa.abdelmogoud/how-the-browsers-understand-javascript-d9699dced89b

Fordyce Alie_Evolutionary Computation

Evolutionary Computation:

Exemplifying the Interdependency Between Human and Machine

Image 1 source: “EQ in the AI Age: Humans and Machines Must Work Together.” Albert.

Abstract. What we do in computing is fundamentally human; computers are a product of us. Evolutionary computation is a framework for problem solving based on the natural biological process of evolution and self-adaptation. Computation, in turn, plays a large role in societal decision-making. Therefore, understanding the mutually influencing relationship between human and machine is critical for determining how the output of computation informs human decision-making and the impact it has on internal bias within the machine.

 

  1. Introduction

            Technology is a quickly evolving entity of society. We often discuss this evolution, since the discovery of electricity in the 18th century (although technological advancement greatly preceded electricity), as an independent linear progression. This paper uses evolutionary computation to argue that this advancement is not a linear progression but a development sequence deeply intertwined with the human condition and biological evolution. How we talk about computational development and emerging technology is more appropriately discussed within the context of human and biological processes. The clash between human and machine continues to grow as privacy and individual agency are threatened by ever-growing applications of artificial intelligence in daily life. This division can be broken down by showing how deeply intertwined and interdependent technology and humanity are, and how each is very much influenced by the other. Evolutionary computation is an important approach for exploring this codependent relationship, helping to reveal the interdependency evolutionary algorithms have with societal development.

Image 2 Source: Bentley, Peter. “Aspects of Evolutionary Design by Computers.”

Evolutionary computation, as shown above, is the precise intersection between evolutionary biology and computer science.

 

  2. Evolutionary Computation: An Overview

            Evolutionary computation refers to the intersection between biological processes and computer science. In An Overview of Evolutionary Computation it is defined as:

Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computer-based problem-solving systems. There are a variety of evolutionary models that have been proposed and studied which we refer to as evolutionary algorithms. They share a common conceptual base of simulating the evolution of individual structures via processes of selection and reproduction. These processes depend on the perceived performance (fitness) of the individual structures as defined by an environment. (Spears et al. 1993, 442)

Evolutionary computation, more precisely, evolutionary algorithms, are products of biological principles: “structures that evolve according to rules of selection and other operators, such as recombination and mutation” (Spears et al. 1993, 442). Selection occurs and is focused on the fitness level of an individual. Below is an example of a typical evolutionary algorithm:

Image 3 source: Spears, William M, et al. “An Overview of Evolutionary Computation.”

The above algorithm demonstrates a random population that is evaluated based on the fitness of each individual in relation to its environment. Selection has two parts: 1) parent selection and 2) survival (Spears et al. 1993, 443). Parent selection is a function deciding which individuals become parents and how many children they will have; children are a product of information exchange between parents and through mutation, otherwise known as recombination; then children are tested for survival under relevant environmental conditions. An example of this would be an automotive manufacturer designing a new engine and fuel system trying to optimize 1) performance, 2) reliability, and 3) gas mileage while 4) minimizing emissions (Spears et al. 1993, 443). In this example, it is assumed that an engine simulation unit can test various engines and output a single value rating its fitness score. The initial step would create a population of possible engines; each engine receives a fitness score based on various metrics; then parent selection decides which engines have ‘children’ and how many (products of recombination between parent engines); and finally it is determined which engines ‘survive’. This provides a basic illustration of an evolutionary algorithm framework: evaluation, selection, recombination, mutation, and survival (Spears et al. 1993, 445). Below are two simplified diagrams illustrating the evolutionary algorithm framework:

Image 4 source: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Image 5 source: Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.”

Generally, the unique elements of evolution strategies in computation are recombination and mutation of both the population and the parameters of the algorithm (Rudolph 2015). This allows for self-adaptation, mimicking the biological evolutionary process.
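The evaluate-select-mutate loop described above can be sketched in a few lines. The “engine tuning” below is a hypothetical stand-in for the engine simulator in Spears et al.'s automotive example: each candidate is three numeric parameters, fitness is the squared error against an assumed ideal tuning (lower is better), and the `IDEAL` values are invented for illustration:

```javascript
// Hypothetical stand-in for an engine simulator: fitness is the squared
// error of three tunable parameters against an assumed ideal tuning.
const IDEAL = [0.9, 0.7, 0.3];
const fitness = (p) => p.reduce((s, x, i) => s + (x - IDEAL[i]) ** 2, 0); // lower is better

// Standard-normal noise for mutation (Box-Muller transform).
function gaussian() {
  const u = 1 - Math.random(), v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// A (mu + lambda) evolutionary loop: the best half of the population
// survives as parents, each parent produces one mutated child, and
// parents compete with their children in the next generation.
function evolve(generations) {
  let pop = Array.from({ length: 10 }, () => IDEAL.map(() => Math.random() * 2 - 1));
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b));                         // evaluation
    const parents  = pop.slice(0, 5);                                    // selection (survival)
    const children = parents.map((p) => p.map((x) => x + 0.1 * gaussian())); // mutation
    pop = parents.concat(children);
  }
  pop.sort((a, b) => fitness(a) - fitness(b));
  return pop[0]; // the fittest individual found
}

const best = evolve(100);
console.log(best, fitness(best));
```

Because parents compete with their children, the best candidate is never lost, so fitness improves monotonically, a simple form of the self-adaptation the diagrams depict.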

            One key type of evolutionary algorithm is the genetic algorithm. Initially developed by Holland (1975), genetic algorithms play an important role in studying complex adaptive systems. Below is an example of a typical genetic algorithm:

Image 6 source: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Adaptation is most familiarly known as a biological process of survival; organisms mutate and rearrange genetic material, and successful adaptations lead to a high probability of survival in a dynamic environment (Holland 1975). Holland, notably, built a mathematical model representing a nonlinear complex interaction mimicking that of genetic mutation. He applied his model to different fields – psychology, economics, artificial intelligence – demonstrating its universality and challenging accepted dogma regarding mathematical genetics (Holland 1975). He explored the models used in naturally occurring processes, representing the properties of coadaptation and coevolution. In genetic algorithms, bit strings represent single individuals, and the selection process chooses two individuals (two bit strings) from the population for the next offspring recombination, with a probability reflecting their relative fitness scores (Rudolph 2015). The ‘offspring’ receives genetic material from both parent individuals, is mutated, and replaces the parent generation. This continues until termination is reached by encountering a specific criterion (Rudolph 2015).
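A minimal genetic algorithm in this bit-string style can be sketched on the classic “OneMax” toy problem, where a string's fitness is simply its number of 1 bits; the problem choice and parameter values are illustrative, not from the sources cited. The sketch uses one-point crossover between two tournament-selected parents, per-bit mutation, and keeps the best individual of each generation (elitism) so progress is never lost:

```javascript
// OneMax: evolve bit strings toward all 1s. Fitness = number of 1 bits.
const fitness = (bits) => bits.reduce((a, b) => a + b, 0);
const randomBits = (n) => Array.from({ length: n }, () => (Math.random() < 0.5 ? 1 : 0));

// One-point crossover: child takes a prefix from one parent, suffix from the other.
function crossover(p1, p2) {
  const cut = 1 + Math.floor(Math.random() * (p1.length - 1));
  return p1.slice(0, cut).concat(p2.slice(cut));
}

// Per-bit mutation: each bit flips with a small probability.
const mutate = (bits, rate) => bits.map((b) => (Math.random() < rate ? 1 - b : b));

// Tournament selection: the fitter of two random individuals becomes a parent.
function select(pop) {
  const a = pop[Math.floor(Math.random() * pop.length)];
  const b = pop[Math.floor(Math.random() * pop.length)];
  return fitness(a) >= fitness(b) ? a : b;
}

function evolve(popSize, length, generations) {
  let pop = Array.from({ length: popSize }, () => randomBits(length));
  for (let g = 0; g < generations; g++) {
    const best = pop.reduce((x, y) => (fitness(y) > fitness(x) ? y : x));
    const next = [best]; // elitism: the best individual always survives
    while (next.length < popSize) {
      next.push(mutate(crossover(select(pop), select(pop)), 0.01));
    }
    pop = next;
  }
  return pop.reduce((x, y) => (fitness(y) > fitness(x) ? y : x));
}

const champion = evolve(30, 20, 60);
console.log(fitness(champion)); // close to 20 (all 1s) after enough generations
```

Swapping in a different fitness function is all it takes to point the same loop at a different problem, which is why the framework generalizes so widely.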

            The following gif, developed by Soni, provides an example of an evolutionary algorithm at work. The gif displays multiple generations of dinosaurs learning to walk by manipulating body structure and muscular force. The right-most dinosaur depicts the largest number of generations and, hence, the most optimized and well-functioning walking form (Soni 2018). The earlier generations of dinosaurs were not able to walk effectively, but as the evolutionary algorithm mutated over generations it eventually reached an optimized state.

Media 7 source: Soni, Devin. “Introduction to Evolutionary Algorithms.”

            As described, evolutionary computation is heavily influenced by the natural process of evolution. However, since humans dictate the metrics and success criteria for the algorithm, there is a direct link between humanity and the machine algorithm, realized through the evolutionary process. Thus, the outcome from evolutionary computation is both human and machine driven.

 

  3. Evolutionary Computation and Human Symbolic Systems

Humans, in turn, have been heavily influenced by the outputs and process of evolutionary computation. Agre, in Computation and Human Experience, argues for a critical technical practice in the AI community (Agre 1997, xii). He writes,

What is needed, I will argue, is a critical technical practice – a technical practice for which critical reflection upon the practice is part of the practice itself. Mistaken ideas about human nature lead to recurring patterns of technical difficulty; critical analysis of the mistakes contributes to a recognition of the difficulties. This is obvious enough for AI projects that seek to model human life, but it is also true for AI projects that use ideas about human beings as a heuristic resource for purely technical ends. (Agre 1997, xii)

Agre depicts the complex and imperfect relationship humans and machines share. Since the earliest iterations of modern technology, critiques and therefore corrections of machines have been a constant. This editing process comes from humans and thus is implicitly biased with human characteristics that inevitably fuel the outputs of computation. Computation has been used as a tool to explain human nature, as Agre explains:

 How, then, can computation explain things about people? The most common proposal is that human beings instantiate some mechanism, psychology tries to discover what that mechanism is, and success is judged by matching the input-output behavior of hypothesized mechanisms to the input-output behavior of human beings… This view of computational explanation had great appeal in the early days of computational psychology, since it promised to bring precision to a discipline that had long suffered from vagueness and expressive poverty of its concepts. (Agre 1997, 17-18)

The key here is recognizing how humans are affected by these computational explanations. Computation, broadly speaking, impacts society by making assertions about how humans do and do not behave. These assertions, or predictions, have real-life impacts on decision-making processes. In this way, computation, and specifically evolutionary computation, makes decisions for humans.

Evolutionary computation is a population-based problem solver; an initial set of options (as in the automotive manufacturing example of deciding on a new optimal engine and fuel system) undergoes a trial-and-error process within the given initial population. Within the evolutionary algorithm, new generations are produced by eliminating characteristics unfit for survival. This, biologically speaking, reflects the processes of mutation and natural selection. The outputs of evolutionary algorithms are the individuals most fit for survival. This form of computation is used in almost all fields and, therefore, heavily influences decisions in all areas of humanity.

Chandler, in Semiotics: The Basics, offers insight to the concept of human symbolic systems:

All meaningful phenomena (including words and images) are signs. To interpret something is to treat it as a sign. An experience is mediated by signs, and communication depends on them… As a species we seem to be driven by a desire to make meanings: above all, we are surely Homo significans – meaning-makers. We cannot avoid interpreting things, and, in doing so, we treat them as ‘signs’. (Chandler 2018, 2-11)

Chandler makes it clear that throughout history human beings have been innately prone to make meaning of everything that surrounds them. Computation and machine learning have fundamentally changed the relationship humans share with meaning and symbolic representation. In creating machinery that can make decisions in place of humans, a co-dependency has been created, altering the way communication occurs and how meaning is created. Furthermore, Deacon infers that computation is a later generation of symbolic representation by way of explaining that:

We inhabit a world full of abstractions, impossibilities, and paradoxes… In what other species could individuals ever be troubled by the fact that they do not recall the way things were before they were born… The doorway into this virtual world was opened to us alone by the evolution of language, because language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought – symbolic representation… symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either. (Deacon 1997, 21-22)

Computation is part of the evolution of language and its purpose as symbolic representation. In doing so, computation has altered what makes us fundamentally human: our cognitive ability and the form in which we communicate. Herein lies the mutually influencing relationship between human and machine: machines are created by humans, and the machines influence the systems by which humans communicate, creating a mutual dependence and a circular evolutionary structure.

 

  4. Conclusion

Evolutionary computation is a framework for problem solving based on the natural biological process of evolution and self-adaptation. Computation, in turn, informs much of societal decision-making and replaces many traditional forms of communication. Therefore, understanding the mutually influencing relationship between human and machine is critical for determining how the output of computation informs human decision-making and how the interdependency of the two plays a role in their outcomes.

 

 

__________________________________________________________________________________

References

Agre, Philip. Computation and Human Experience. Cambridge University Press, 1997.

Bentley, Peter. “Aspects of Evolutionary Design by Computers.” UCL Department of Computer Science, www0.cs.ucl.ac.uk/staff/P.Bentley/wc3paper.

Chandler, Daniel. Semiotics: the Basics. 3rd ed., Routledge, 2018.

Deacon, Terrence William. The Symbolic Species: the Co-Evolution of Language and the Brain. International Society for Science and Religion, 1997.

“EQ in the AI Age: Humans and Machines Must Work Together.” Albert, Marketics, 26 Oct. 2019, marketics.ai/blog/humans-and-machines-must-work-together/.

Holland, J. H. (1975) Adaptation in Natural and Artificial Systems. Ann Arbor, Michigan: The University of Michigan Press.

Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.” Technische Universität Dortmund, 2015, ls11-www.cs.tu-dortmund.de/rudolph/ea.

Soni, Devin. “Introduction to Evolutionary Algorithms.” Towards Data Science, Medium, 18 Feb. 2018, towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac.

Spears, William M, et al. “An Overview of Evolutionary Computation.” European Conference on Machine Learning, vol. 667, 1 June 2005, pp. 442–459, doi:10.7554/elife.36495.007.

 

Media Citations

Image 1: “EQ in the AI Age: Humans and Machines Must Work Together.” Albert.

Image 2: Bentley, Peter. “Aspects of Evolutionary Design by Computers.”

Image 3: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Image 4: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Image 5: Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.”

Image 6: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Media 7: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Danae Theocharaki: Final

The Symbolic Representation in the Evolution of Computing that Led to Our Current Technological Advancements in Artificial Intelligence

Abstract

This paper investigates the symbolic history behind computing and technology, which ultimately answers how, why, and for what we use them today. The long history of computing lies behind the details that go into creating what we, in one word, refer to as software. My guiding research questions looked into the history of computing while focusing on the importance of human symbolic and semiotic systems as the basis of computing, and on the depiction of non-technological concepts as cultural and societal fragments that are then reflected in the technological innovations and advancements occurring in a specific period of time. Understanding how and why our human symbolic systems work is the first step toward decoding the “black box” of computing. Highlighting the relationship between the cultural and societal characteristics embedded in those symbolic systems establishes the connection between the technological past and present. I further look into theoretical background and research from leaders in the field, and provide an example of how and why the recent presence of Artificial Intelligence, which has infiltrated most aspects of our lives, is related to our cultural and binary symbolic systems of representation, and why we have adapted to it so well. The evolution of computing has been able to take place because our very basic symbolic systems and understandings have allowed us to develop a much more complex system composed of mathematical, computational, and encoded information.

 

Over the last few decades we have witnessed technology take over the world as well as most aspects of our lives. For most, technology is a recent phenomenon that has only existed among and for the newer, younger generations. However, in order to truly understand why and how technology has infiltrated our lives so intricately, we need to re-evaluate the origins of how and when it all began. A big part of understanding how it all started is coming to terms with human symbolic capacity and systems. We need to deconstruct the idea that being human and having technology and machines are very separate things, because in reality the latter two are a result of the former and are representations and depictions of our physical and cognitive symbolic ideologies and culture (Irvine, 2020). More specifically, even “[o]ur modern technical media are forms of symbolic artefacts developed for large-scale social systems” (Irvine, 2020, 2).

Map/Table 1. Professor Martin Irvine’s depiction of how our human-symbolic capabilities developed over time and what they signify or depict in each “phase”. From our class notes and readings “Prof. Irvine – CCTP-711 — Intro to Week 3 – The Human Symbolic Capacity” (2020).

Image 1. One of the first digital computers to exist. MIT’s Whirlwind machine was introduced on March 5, 1955, and was the first of its kind to contain a “magnetic core RAM and real-time graphics”. From ComputerHope, 2020.

What we call a computer or a laptop today are reflections and projections of the machines, software, and overall technological innovations and theories of the past. The crucial part, however, is understanding how the discoveries of the past are directly connected to the technology we have today and are the reason it even exists. Often we find it difficult and foreign to form some sort of connection with these devices, and we alienate ourselves because it is hard to conceptualize what this modern technology really is. What is the cloud? What is this artificial intelligence? The breakdown of how it all started can answer those questions. Our connection to our human symbology is what hides the reasoning behind the unexplained, also known as the things that go on behind the scenes. Binary numbers, coding, and other symbol systems are used together and interchangeably to create the software and machines that have been built over the decades and have paved the path for the ones we know today. If more people understood the connection of our very symbolic culture and its history as a crucial part of the history and development of technology, we would not have as many “unexplained” and unanswered questions that further detach cultural patterns from tech in people’s minds.

Image 2. The Colossus was the first electrically programmable computer, created by Tommy Flowers between 1943 and 1945. Tasked with decoding and deciphering the Nazis’ secret messages, it had no RAM (memory) but used Boolean logic and mathematical operations to execute the job it was created to do. Photo from ComputerHope, 2020.

Image 3. The Turing machine, created by Alan Turing in 1936 and considered the prototype of the modern computers we use today. From ComputerHope, 2020.

Michael Mahoney, a historian of science, in his piece “The Histories of Computing(s)” (2005), explains why people feel this detachment and lose the cultural subtext that lies behind all computers, machines, and software. Most people regard current machines and actual physical computer objects as descendants of the Turing machine, but since the machine was not confined within the physical limitations of the object itself but rather depicted a “schema”, that concept “could assume many forms and could develop in many directions” (Mahoney, 2005, 119). In doing so, it assumed a depiction of the various cultural meanings and understandings that humans, and especially the people actively working on the development and creation of these applications, gave and still give to the symbolic historical attributes of a symbol system (Mahoney, 2005). As an actual physical machine made of even smaller materials and parts, it would not really mean anything; it cannot stand alone. It is a compilation of

“histories derived from the histories of the groups of practitioners who saw in it, or in some yet to be envisioned form of it, the potential to realize their agendas and aspirations […] the programs we have written for them, reflect not so much the nature of the computer as the purposes and aspirations of the communities who guided those designs and wrote those programs” (Mahoney, 2005, 119).

Connecting back to that idea, we realize that after all we are not so different from these machines. We gave our own cultural meanings and understandings to symbols in order to serve our needs. Even before the Turing machine, we can trace computing as we know it today “back to the abacus and the first mechanical calculators and then following its evolution through the generations of mainframe, mini, and micro” (Mahoney, 2005, 121). Each technological era, time period, or decade started somewhere and adapted to the cultural circumstances and needs of humans.

Image 4. An example of a symbolic system that used symbols, numbers, categorizations, etc., to solve and improve what later became fundamental to creating the technology that we have today. This is a depiction of the SSEM’s very first program. The SSEM was “the first computer to electronically store and execute a program”. It was designed in 1948 by Frederic Williams and then built by his protégé, Tom Kilburn, whose notes these are. From ComputerHope, 2020.

Our human symbolic systems, better understood as our natural language and cultural-symbolic artefacts such as languages, writing, alphabets, mathematics and mathematical symbols, scientific symbols and signs, etc., are the first step in depicting human symbolic-cognitive capabilities. There is a mutual understanding that these were and are the very first methods of representation, safekeeping, and external symbol storage of overall human culture and capability (Irvine, 2020). It is also crucial to understand that because of these accomplishments and capacities, we were able to transcribe that culture into a digital system of information, computation, software systems, and the overall technological advancements of today’s world (Irvine, 2020). The “archaic” initial symbolic systems created by humans are the reason we now have the technological luxury of living with and among the systems, software, and machines that we can no longer live without. These include anything from social media and the depiction of our lifestyles through videos, images, and music, to Artificial Intelligence being part of our day-to-day lives in the form of smartphones, smart cars, and even smart wearable medical devices.

Artificial Intelligence has been one of the most nuanced concepts of the past few decades’ technological advancements. A.I., however, can refer to something very specific, such as the artificial intelligence that is part of our smartphones, or to something more general, such as data analytics, machine automation, and more. Although it is a complicated concept to grasp, as most things related to AI still remain in the technological and computational “black box”, some aspects of AI have made it possible to bridge that gap between humans and machines without our necessarily realizing it. Specifically, the AI used in our day-to-day devices, such as smartphones and smart wearable tech, has become not only a permanent but also a heavily relied-upon part of our lives. Behind this type of AI are all the binary, semiotic, and symbolic figures, structures, and meanings that humans have ascribed to what we constitute as computing and software. Artificial Intelligence in the form of IPAs (Intelligent Personal Assistants) overcomes its “black box” through the distinct anthropomorphic disposition it embodies. These IPAs have a daily presence in our lives because they also reflect certain societal and cultural concepts and notions.

Researchers Goksel-Canbek and Mutlu (2016), who have investigated the topic of IPAs as part of our regular habits, explain the various connections that can be established between users and IPAs, connections that rely less on the actual physical machine and more on the AI, usually a female voice or unseen presence. Our technology has progressed from pure binary code and symbols to such an extent that software can freely interact with its user without another human monitoring the program. Goksel-Canbek and Mutlu, as well as other experts in the human-tech field, have offered different reasons why we find such a strong connection and normalcy in IPAs, yet often struggle with other adaptations, forms, and applications of Artificial Intelligence. The humanoid form attributed to these intelligent assistants, such as Apple’s Siri, Google’s Google Now, Microsoft’s Cortana, and Amazon’s Alexa, is, according to Goksel-Canbek and Mutlu, partially explained by the Three-Factor Theory, which makes us more comfortable with understanding this type of software and device (Goksel-Canbek & Mutlu, 2016). The Three-Factor Theory uses psychological evidence to explain people’s tendency to ascribe anthropomorphic forms, features, and characteristics to non-living and non-human entities (Goksel-Canbek & Mutlu, 2016; Theocharaki, 2020; Cao et al., 2019; Nass et al., 1999). An evolutionary achievement that has allowed IPAs to develop into what they are, with the capabilities that they have, is Natural Language Processing (NLP). NLP is a great example of a human symbolic capacity that has evolved over the decades, as have our own societal and cultural understandings, perceptions, and needs.
Goksel-Canbek and Mutlu highlight the importance of NLP as “the most crucial element for creating computer software that provides the human-computer interaction for storing initial information, solving specific problems, and doing repetitive tasks demanded by the user” (Goksel-Canbek and Mutlu, 2016, p. 594; Theocharaki, 2020), focusing on how these IPAs are used for foreign language learning. Their software intelligence allows such “machines” to work independently and interact of their own will (to some extent), knowledge, and capability, while using natural human language and semantics (Goksel-Canbek & Mutlu, 2016).

Goksel-Canbek and Mutlu’s research (2016) consisted of performing a variety of test interactions between IPAs (specifically Siri, Google Now, and Cortana) and students who wanted to learn a new language. They recorded and monitored multiple instances where students were asked to address questions to the device to see how the IPAs interact, react, and “behave” (Goksel-Canbek & Mutlu, 2016). They also compare the performance of the three assistants, which not only highlights the weaknesses and strengths of each, but also illustrates how, even though the software for all three IPAs might have a similar “story of origin” and definitely overlaps in many features, criteria, and “black-box content”, the billions of complex possibilities that our semiotic systems allow for create differentiation and promote adaptability into multiple forms and usages. Even though they still lack the potential of a real-life language tutor, IPAs have gained the trust of so many people, who use them daily to facilitate their busy lives or even to learn something new, because they provide the extra humanistic feature that, for example, the Turing machine lacked, even though the former is a continuation of the latter. The software behind the IPAs uses the same symbolic systems and capabilities that led to the abacus and the first physical calculator, but it has evolved, developed, and adapted at each stage of history it passed through, reflecting the human values and belief systems of the time.

 

Images 5 & 6. Screenshots from Goksel-Canbek and Mutlu’s (2016) research findings, showing results and notes from the interactions between the IPAs and the users/students while using Google Now, Siri, and Cortana.

Conclusion 

The evolution of computing has been established and executed through human symbolic and semiotic systems that have adapted throughout the decades, allowing for the technological improvements and advancements that have led to the tech, machines, and software we use today. The technology available to us today is neither a new invention nor a futuristic phenomenon. We often neglect to remember or realize that it is rather a continuation of our primary symbolic systems combined with the cultural, societal, and contextual understanding of each time. Those two things work together to form the software, machines, and technology that have evolved over time. It is both a result and a reflection of our need to create, fill gaps, and find solutions for a specific time’s needs and problems. Taken to the extreme, one could even consider that tech prototypes are no longer a thing, since nothing in tech arises from ground zero; one way or another, all findings are a continuation, improvement, or expansion of another.

 

References and Works Consulted

Agre, Philip. (1997). Computation and Human Experience. Cambridge University Press.

Cao, C., Zhao, L., & Hu, Y. (2019). Anthropomorphism of Intelligent Personal Assistants (IPAs): Antecedents and Consequences. In PACIS (p. 187).

Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

Irvine, Martin. (2020). CCTP-711: Week 3: Introduction: The Human Symbolic Capacity:
From Language and Symbol Systems to Technologies. CCT Program (course notes).

Irvine, Martin. (2020). Introducing C. S. Peirce’s Semeiotic: Unifying Sign and Symbol Systems, Symbolic Cognition, and the Semiotic Foundations of Technology.  CCT Program (course notes).

Kockelman, P. (2013). Agent, person, subject, self: A theory of ontology, interaction, and infrastructure. Oxford University Press.

Mahoney, Michael. (2005). The Histories of Computing(s). Interdisciplinary Science Reviews, 30(2), 119-135. 

Nass, C., Moon, Y., & Carney, P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093-1109.

Theocharaki, Danae. (2020). CCT 505: Assignment #5– Putting it All Together. CCT Program (class assignment). 

Theocharaki, Danae (2020). CCT 505: Assignment #7 – Synthesizing Research Methods. CCT Program (class assignment).   

Theocharaki, Danae (2020). CCT 505: Assignment #6 – Identifying Research Methods and Questions. CCT Program (class assignment). 

 

Web Sources & Links

Map/Table 1: Irvine, Martin. (2020). CCTP-711: Week 3: Introduction: The Human Symbolic Capacity: From Language and Symbol Systems to Technologies. CCT Program (course notes).

Image 1: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 2: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 3: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 4: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 5: Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

Image 6: Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

 

Yanjun Liu – Final Project: Analysis of Computation from Coding Procedure

Analysis of Computation from Coding Procedure

       What is computation? According to the Merriam-Webster Dictionary (https://www.merriam-webster.com/dictionary/computation), it is 1) the act or action of computing: CALCULATION, 2) the use or operation of a computer, 3) a system of reckoning, or 4) an amount computed. In short, computation is a procedure in which numerical values change under systematic rules, and it can be executed by people or by digital devices. Alan Turing (1912-1954) regarded computation as “the evaluation of mathematical functions” (Denning & Martell, 2015). Looking back at the history of human beings, from the distribution of weapons and food among clans in the Stone Age to the complex analysis of data and the writing of code via computers today, computation has filled every stage of human history. Böhm and Jacopini (1966) pointed out that, generally, calculations are built from three kinds of constructs: 1) performing instructions in strict sequential order (sequencing), 2) making a choice between two alternative calculations based on the outcome, true or false, of a test (choice), and 3) repeating a calculation many times until a test says to stop (iteration). With the development of computing devices and information technology, though its nature remains the same, the interpretation of computing is changing and becoming more diverse. What does computation mean nowadays? Why do we have different ways of computing? How are we going to do that? What exactly happens when we use a computing device? In this article, the above questions will be answered by analyzing the procedure of coding in the Python language. The main body is divided into two parts: 1) definitions of key concepts and 2) an analysis of the coding procedure.
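Böhm and Jacopini’s three building blocks can all be seen in a few lines of Python. The task here, summing the even numbers from 1 to 10, is made up purely for illustration:

```python
# The three control structures named above, in one tiny program:
total = 0                  # sequencing: statements run one after another
for n in range(1, 11):     # iteration: repeat until the range is exhausted
    if n % 2 == 0:         # choice: branch on a true/false test
        total += n

print(total)  # 2 + 4 + 6 + 8 + 10 = 30
```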

       Code as a noun means “a system of symbols (such as letters or numbers) used to represent assigned and often secret meanings”, while it means “to put in or into the form or symbols of a code” as a transitive verb and “to create or edit computer code” as an intransitive verb (https://www.merriam-webster.com/dictionary/coding). Since code consists of symbols or congregations of symbols, we need to discuss the term “symbol” and its main carrier, “language”, before we get to the coding procedure.

       Symbols are studied by semiotics, the “theory of signs” (Chandler, 2018), in which a sign is defined as “something which stands for something. All meaningful phenomena (including words and images) are signs” (p.8). That means when we say a word (no matter in what language), make a gesture, or see an object, beyond their physical existence there are other meanings behind them that originate from different cultures and can be interpreted differently.

       Language is the main carrier of symbols, and human beings are the only species that can use it because of our cognitive system. From the linguistic side, language consists of words, rules, and sentences (Pinker, 1999). When we speak of language, we usually refer to “natural language”, a term that means “any human language acquired ‘naturally’ by being born into a language community”, or in other words, “our mother tongue” (Irvine, 2020). When we are coding, instead of natural language, we use a programming language. According to Denning and Martell (2015), a programming language is “an artificial language with its own rules of syntax, used for expressing programs” (p.84); it is a “metalanguage”, a language developed for logic, mathematics, and computer programming (Irvine, 2020).

       Why does computing on a computer require a different language? There are two reasons. First, the computer, as a computing machine, cannot recognize natural language. It can only read binary code, which represents everything with 0 and 1. A programming language is a language that can be translated into binary code, thus enabling the computer system to understand the orders from the input and then execute them by running programs. The basic binary units are the “bit” and the “byte”; a byte consists of eight bits and is also the basic unit of information processing. A program is “a set of instructions arranged in a pattern that causes the desired function to be calculated”. We use computers in our daily lives because of their ability to run programs, such as reading an e-book in a PDF viewer, holding meetings on Zoom, or editing videos in Adobe Premiere. Second, compared to natural language, a programming language is far more precise. Unlike a human being, a computer cannot understand ambiguous information; it can only execute functions with clear instructions. We understand things and the meanings of symbols by being in a related cultural or social environment, so we get it when a word is used differently in various situations, while a computer requires a clear definition of everything. You cannot tell a computer something as simple as “let’s go!” and expect it to “go with you”. Maybe your friend understands it because you share an understanding of that expression, but for a computer, you need to specify what that action is and where it leads. We will further demonstrate the necessity of this preciseness in the analysis of the coding procedure below.
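The point about binary representation can be illustrated in a couple of lines of Python (the character “A” is an arbitrary example):

```python
# Everything the machine ultimately sees is bits. A character is stored
# as a number (its code point), and that number as a pattern of 0s and 1s;
# one byte is eight bits.
ch = "A"
code = ord(ch)               # the character's numeric code point
bits = format(code, "08b")   # the same number written as eight binary digits

print(code)  # 65
print(bits)  # 01000001
```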

       In my coding practice, I used Visual Studio Code as the code editor and Python as the programming language. Visual Studio Code is a source-code editor made by Microsoft; it supports many programming languages and has multiple features such as automated debugging, syntax highlighting, code compilation, and so on. It works as an “IDE”, an integrated development environment that provides features to speed up code development. The programming language I used was Python, an interpreted language that is widely used in different areas, with readable syntax rules and a more general design purpose. It has been widely adopted because of its understandability, which helps programmers write code and programs with clear logical expression. Many other programming languages exist for different purposes, such as C++, Java, JavaScript, Ruby, and so on. These languages usually have different syntax rules and different definitions of functions.

       The code I wrote is really simple. The content is a little grading survey about the past online semester. The purpose is to get feedback from students about their general feeling toward the past experience and, if they give a low grade, their suggestions for the coming semester that will still be online. So, in my program design, I need to present three basic parts: 1) a greeting and introduction to the survey, 2) proper questions, and 3) interaction feedback. Here is the screenshot of my source code (P1):

                                                                        (p1)

       As you can see, I wrote 17 lines of code, which is pretty short but basically covers all of the requirements mentioned above. Different colors indicate different construction parts and functions. For example, the orange part inside the parentheses is the text that will show on the screen; the green part is a special feature of programming languages called a “comment”, which is not presented publicly but only to those who have access to the source code. It is designed to help programmers better understand what exactly a part of the code does, as a big program will usually be separated into different parts and programmed by many programmers. The other colors indicate rules and statements of Python with different functions, which I need to know beforehand; otherwise there will be no ideal results, and errors will come out because the computer cannot recognize what I mean. Errors caused by the wrong use of programming language statements are called “syntax errors”, which happen when the written code does not follow the expected rules. Below are the running results of my code:

Situation 1

       In Situation 1, when asked “are you ready?”, the respondent typed “No” or any other answer that is not “yes”: the respondent was not prepared (or simply did not want to cooperate). Thus, the program ends unless the respondent goes back to the first greeting part (for instance by clicking the link again, if this survey were truly published somewhere online).

Situation 2

       In Situation 2, the respondent typed “yes” and officially started the grading by following the rule “grade your online learning experience for this semester from 1 to 5”, where 1 is the lowest and 5 is the highest; this corresponds to the two “if” statements in the source code, as shown in P1. Here, the respondent typed “5”, the highest grade, meaning “very satisfied with the past online learning experience”; the program then offered its feedback and ended the session.

Situation 3 

       In Situation 3, the respondent entered a negative result, marking the experience as very bad and giving only 1 point as the grade. The program, as set in the “else” part in P1, offered feedback and asked the respondent to email her suggestions to an email address.

       In all, the basic logic of this little program is as below:

    Greeting + Introduction → ready?
        ├─ “yes” → grade (1 to 5)
        │             ├─ grade > 4  → session end
        │             └─ grade <= 4 → session end + suggestion feedback
        └─ other answers → session end

       In every application we use in our daily lives, similar logic is present in the source code, using statements like “if…else” or “def” (definition). I would like to draw a connection between programming and teaching: both require you to define things and follow certain rules, and the target audience, whether it is a computer or a child, learns from your statements, tries to understand the meanings behind them, and acts according to your instructions. The programming procedure clearly demonstrates the necessity of a programming language’s preciseness; it is also a procedure in which programmers translate their ideas into another language, a programming language, just as I did.
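Since the screenshot of the source code (P1) is not reproduced here, the if/else flow described above can be sketched roughly as follows. This is a hypothetical reconstruction: the actual program in P1 may differ, the names and messages are invented, and user input is passed as a function parameter so the sketch runs without someone at the keyboard:

```python
# A hypothetical reconstruction of the survey logic described above.
# The real source code (P1) may differ; names and wording are invented,
# and input() calls are replaced by parameters for non-interactive use.

def survey(ready, grade=None):
    """Follow the described flow: ready? -> grade 1 to 5 -> feedback."""
    print("Hello! This is a short survey about the past online semester.")
    if ready.lower() != "yes":
        # Situation 1: any answer other than "yes" ends the session
        return "Session ended."
    if grade > 4:
        # Situation 2: a high grade gets a thank-you and the session ends
        return "Glad the semester went well. Thank you!"
    # Situation 3: a low grade triggers the "else" branch asking for suggestions
    return "Sorry to hear that. Please email your suggestions to the instructor."

print(survey("No"))      # Situation 1
print(survey("yes", 5))  # Situation 2
print(survey("yes", 1))  # Situation 3
```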

       But what exactly happens inside the computer when I am programming? First, we need some basic knowledge about the hardware. Two main hardware devices are working when I run Visual Studio Code and write code. The CPU (central processing unit) is a hardware device that reads instructions from a program and executes them, one at a time, in the order prescribed by the program. RAM (random access memory) is a hardware device that holds data values in locations that can be read or written by the CPU; it is called “random access” because it can access any location in the same amount of time (Denning & Martell, 2015, p.65). When my computer is on and I am doing something with it, it loads data and instructions into RAM. Besides the hardware, the interpreter (here, the Python interpreter invoked through Visual Studio Code) is also functioning: it processes my source code each time it runs, line by line. Basically, there are three main ways to translate source code into machine code: compile it, interpret it, or a combination of both (Davis, 2019).
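One way to glimpse this translation step on your own machine is Python’s standard dis module, which displays the bytecode instructions that the interpreter’s virtual machine executes one at a time (the exact instruction names vary between Python versions, so none are asserted here):

```python
# Peek at the translation step described above: CPython compiles source
# code into bytecode instructions, which its virtual machine then executes
# one at a time. The standard-library dis module displays them.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints instructions such as LOAD_FAST and a binary-add opcode
```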

       Also, when I am writing code in a program, I am staring at a screen, typing on a keyboard, and clicking buttons with my mouse. These devices are called peripherals, which can be regarded as extensions of the human body.

       One last thing I want to introduce is computational thinking, which is also something I learned from this little programming experience. Computational thinking is a method, or a model, that we can apply to solving big and complicated problems. As Wing (2006) pointed out, “computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system.” By applying computational thinking, we redefine, decompose, create new concepts, and integrate small solutions together to form a complete answer. This way of thinking can be accessed simply by trying programming, as I did: by keeping a goal in mind and then separating it into different parts to explain and seek answers.
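As a toy illustration of this decomposition (the function names and grade data below are made up), a “big” task such as reporting a class’s average grade can be split into small steps that are solved separately and then integrated:

```python
# Decomposition in miniature: one task, three small named steps,
# each solved on its own and then combined into a complete answer.

def collect_grades():
    # In a real program this might read a file or a survey; here it's fixed data.
    return [4, 5, 3, 5, 2]

def average(values):
    return sum(values) / len(values)

def report(avg):
    return f"The average grade is {avg:.1f}"

# Integrate the small solutions into a complete answer.
print(report(average(collect_grades())))  # The average grade is 3.8
```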

       Computer-related knowledge like programming is often confusing because people usually focus on its practicality instead of learning the reasons behind usage and actions; it is something hiding in the black box, but definitely not something mythical. Programming has enabled me to look at computers and computation from a brand-new perspective.

 

Reference

Böhm, C., & Jacopini, G. (1966). Flow diagrams, turing machines and languages with only two formation rules. Communications of the ACM, 9(5), 366-371. doi:10.1145/355592.365646

Coding. (n.d.). Retrieved December 13, 2020, from https://www.merriam-webster.com/dictionary/coding

Computation. (n.d.). Retrieved December 13, 2020, from https://www.merriam-webster.com/dictionary/computation

Davis, A. (2019, July 22). The fundamentals of programming – Programming Foundations: Fundamentals Video Tutorial: LinkedIn Learning, formerly Lynda.com. Retrieved December 13, 2020, from https://www.linkedin.com/learning/programming-foundations-fundamentals-3/the-fundamentals-of-programming?resume=false

Denning, P., & Martell, C. (2015). Great principles of computing. The MIT Press.

Irvine, M. (2020). Linguistics, Language, and Symbolic Cognition: Key Concepts. Retrieved December 13, 2020, from https://drive.google.com/file/d/1DIN2gFzjugV8J7iCWyqTLY4zxWBzJqna/view

Pinker, S. (2000). Words and rules: The ingredients of language (1st ed.). Basic Books.

Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33-35. doi:10.1145/1118178.1118215

Realize Interaction: How JavaScript Executes on the Web

Chutong Wang

Abstract

Click the plus button to add some game cards to your shopping cart, press the Play button to start a song, scroll down a web page to enjoy a dynamic transition, play a weird web game… We browse the web every day. It has become a part of modern life, just like brushing our teeth every day. But interestingly, while almost everyone can explain why we have to brush our teeth and how toothbrushes help us clean away remnants, most people can hardly explain how these web interactions are realized. That is because these interactions are built on numerous layers of abstraction and numerous technological black boxes. Among these black boxes, JavaScript is the one that most directly affects web interaction and user experience. In this essay, I will focus on this essential part of web development: how JavaScript makes the web come “alive” to accept our requests and return the results we want.

Client, Server, Browser and Internet

To understand how JavaScript enhances our experience of web browsing, we can start with a quick look at the working principles of the web.

The World Wide Web (WWW) was invented by Tim Berners-Lee in 1989. The original purpose of building the web was to share information among scientists in universities and institutions around the world. To achieve this goal, Tim also created HyperText Markup Language (HTML), to provide the structure of a web page and identify links for rendering other web pages so we can jump between different web pages easily; the web browser, for collecting requests from the client and displaying documents from the server; and HyperText Transfer Protocol (HTTP), for transferring requests and responses between the server and the browser over the Internet. This mode is known as the client-server architecture (see Figure 1), and we still follow this pattern today.

Figure 1: Client-server architecture. Files are stored on the server, waiting for requests from the client side. HTML structures the whole web page, defining what is a title, a subtitle, a paragraph, etc. CSS handles styling, defining the font, font size, color, etc. JavaScript adds animation and interaction to the web page. [From https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/How_the_Web_works]

Client and Server: Computers connected to the web through the Internet are called clients or servers. The tablets, PCs, and smartphones we use every day are all clients. Servers, on the other hand, are computers that store web pages. For example, if I click a hyperlink called “For More Information”, a new web page will pop up to show more of what I am searching for. This action from the client side sends a request to the server, and the server finds the site or HTML file mapped to “For More Information” and sends it back to the client. The process is much like borrowing a book from a librarian: the librarian brings you the one you want out of thousands of books. A server can be located anywhere in the world, and we can send requests to different servers just as we can borrow books from different libraries all around the world.
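The request-and-response exchange described above can be sketched in miniature. In this toy model (all paths and file contents are hypothetical), the “server” is just an object mapping paths to documents, like a librarian mapping titles to books:

```javascript
// A toy sketch of the client-server exchange: the "server" maps
// request paths to stored HTML documents (hypothetical contents).
const server = {
  "/index.html": "<h1>Home</h1>",
  "/more-info.html": "<h1>For More Information</h1>",
};

// The client requests a path; the server looks it up and sends back
// the document, or a 404 status if it has no such page.
function handleRequest(path) {
  if (path in server) {
    return { status: 200, body: server[path] };
  }
  return { status: 404, body: "Not Found" };
}

// Clicking "For More Information" effectively issues this request:
console.log(handleRequest("/more-info.html"));
// -> { status: 200, body: '<h1>For More Information</h1>' }
console.log(handleRequest("/missing.html").status); // -> 404
```

Real servers speak HTTP over the network, of course; this sketch only shows the lookup-and-respond role that the paragraph describes.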

Browser: Firefox, Safari, IE, or Google Chrome: almost all of us have at least one web browser installed on our computers. Browsers are installed on the client computer, and all web browsers function as clients; they play an essential role in building the connection. A browser can simply be regarded as an interface. It fulfills the client-server architecture so that we, as clients, have a place to send requests and servers have a place to show the results. This process is controlled by the render engine inside the browser, which turns the HTML documents it receives into commands for the operating system, and the operating system displays the content on the screen.

Internet: The Internet is the link that connects all the servers and clients together. Using the same analogy of borrowing books from the library, the Internet is the road from home to the library, while HTTP can be regarded as the traffic regulations.


Video: How the internet works. [From: https://www.youtube.com/watch?v=7_LPdttKXPc]

A Brief Introduction of JavaScript

As mentioned in Figure 1, while HTML and CSS are for structuring and styling, JavaScript is for adding animation and interaction to the web page to engage users. Most dynamic elements we see on the web these days are written in JavaScript.

However, the first web browser created by Tim Berners-Lee couldn't support any client-side scripting, which means the visual effects of web pages rendered by this browser were simple and monotonous to some extent. But HTML is a markup language that could “easily be extended to support passing program code to a web browser”.

So, in order to achieve client-side scripting, Netscape Communications released JavaScript in 1995 and developed the Navigator 2.0 beta 3 browser with the ability to read and execute JavaScript code. It became so popular that it gradually phased out other scripting languages, and no other scripting language ever took its place. Today, it is still one of the most popular languages in the world, widely used in web development, web servers, web and mobile applications, game development, and more.

There are several factors that make this language so successful. First, it is the only programming language native to the web, and it can be used on both the front end and the back end. Second, there is an incredibly low threshold to getting started. It is easy to insert a piece of JavaScript code into an HTML document with a <script> tag, and any browser with a JavaScript engine (all mainstream web browsers have one) can then parse the code and make it work. Third, the snowball effect makes JavaScript a language that is difficult to replace, at least for now. Programmers have developed numerous frameworks and libraries that help developers create excellent web interactions quickly. This makes the language very well supported, since it already has a mature and vibrant community worldwide.

 

Interaction: Examples of How JavaScript Manipulates HTML Tags

Let’s use a code example from W3Schools to explain how JavaScript works on the web:

Figure 2: Example of JavaScript Object Properties from https://www.w3schools.com/js/js_object_properties.asp . The left part is an HTML document with JavaScript in the <script> tag, and the right part is the corresponding web page.

The first line of the code, <!DOCTYPE html>, is a Document Type Declaration. It informs the web browser that “Hey! This is an HTML file,” so the browser render engine starts working to turn the HTML text and tags into commands for the operating system and then display the web page on the screen. Every HTML tag has a closing tag with a slash to demarcate the range of that tag. The <body> tag includes all the content that will show on the web page; everything in the right part of Figure 2 belongs to <body>. Tags like <p> and <h2> identify the function of the text: <p> for a paragraph and <h2> for a subtitle (an abbreviation of heading 2). What’s more, there is a <p> tag with an id, which makes this paragraph distinguishable from the others. Each HTML file can contain many ids, but each id name must be unique. Although there is nothing inside <p id=“demo”></p>, the final web page still shows a line of text reading “John is 50 years old.” That is because JavaScript is taking charge of this part.

As mentioned before, we can simply insert JavaScript code into an HTML document by using the <script> tag, so the code between <script> and </script> is JavaScript. The keyword “var” declares variables, which are used for storing data values. If the code goes like:

        var person = “John Doe”;

Then we can say “person” is a variable. However, in this example, there is a set of values. We have properties named “firstname”, “lastname”, “age”, and “eyecolor” inside “person”. Thus, we call “person” an object. In JavaScript, an object is a variable that holds many data values. Objects are handy when we have to define a set of features. For example, a gallery website might need an object called “work” or “artwork” or whatever you want to name it, with a set of properties like “artist”, “size”, “creatingDate”, and “price”. This makes the content more structured and saves a lot of time otherwise spent on boring, repetitive coding.
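The gallery example can be written out as a short sketch (the property values are made up for illustration):

```javascript
// One object groups a set of related values under named properties,
// like the gallery "artwork" example (sample values are hypothetical).
var artwork = {
  artist: "Jane Doe",
  size: "50cm x 70cm",
  creatingDate: "2020-11-01",
  price: 300,
};

// Properties are read with dot notation or bracket notation.
console.log(artwork.artist);   // -> "Jane Doe"
console.log(artwork["price"]); // -> 300
```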

Below our object “person”, we have the line document.getElementById(“demo”).innerHTML. This is one of the most pleasant parts of JavaScript: its syntax is quite beginner-friendly compared to other programming languages, since the semantic meaning of many JavaScript components is quite obvious. This line basically means: grab the element with the id “demo” in the HTML document. And here the manipulation starts. JavaScript controls the element with id “demo”, making it display the “firstname” in the object “person”, followed by the string “ is ”, followed by the “age” in “person”, and then followed by the string “ years old.”. In this way, JavaScript accomplishes its goal of making a change in the HTML file.
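The string that ends up inside <p id=“demo”> is built by plain JavaScript, so that part of the W3Schools example can be sketched and run even outside a browser; only the final assignment to the page needs the browser's document object:

```javascript
// The object from the W3Schools example.
var person = {
  firstname: "John",
  lastname: "Doe",
  age: 50,
  eyecolor: "blue",
};

// Concatenate the properties with strings, exactly as the example does.
var message = person.firstname + " is " + person.age + " years old.";
console.log(message); // -> "John is 50 years old."

// In the browser, this line would place the message into the page:
// document.getElementById("demo").innerHTML = message;
```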

This example might seem to have nothing to do with interaction or animation. Although it uses JavaScript to display a line of text that does not exist in the HTML, it is still just displaying text, which doesn’t enhance our user experience at all! But if we think more deeply about this example, it actually shows us an exciting capability: JavaScript can change the appearance of HTML content readily and insert programs into a plain HTML web page. We all know that programming languages can create mind-blowing effects. HTML can’t make lovely animations, but we can make animations with JavaScript, since it is a programming language, and insert them on the heading, next to a button, or anywhere we want.

I will use JavaScript libraries as a further example of the interactions made possible by JavaScript. Libraries can be regarded as small programs already written by other programmers; what we have to do is copy and paste code from the library to initiate its functions. In this website (https://ustaobao.glitch.me/), I inserted two JavaScript libraries, AOS and Shine.js, in the head of the HTML file, so the libraries are loaded in advance and I can use the functions inside them wherever I call them (see Figure 3).

Figure 3: HTML head of https://glitch.com/~ustaobao. Line 15 is the CDN link of AOS, which is used for triggering animation when you scroll down. All the animations in this web page are created by AOS. Line 18 is the CDN link of Shine.js, which is used for making a creative font style.

Then I initiate them in the JavaScript file:

Figure 4: Initiating Shine.js in the JS file.

Since AOS is a JavaScript library for creating quick CSS animations with simple principles, we can add it directly to the HTML tags:

Figure 5: Part of the HTML tags with AOS.

Shine.js, on the other hand, needs a more specific description of what I want it to do. So I use getElementById, as in the previous example (see Figure 4, line 81), to endow the element <h1 id=“tryshine”> with the Shine.js function (see Figure 5, line 48) and make the sentence look like this:

In this example, I created a scroll-down animation and a cool font style easily with JavaScript. Whether it’s a complex effect or a simple style, the way of referencing libraries is basically the same in JavaScript.

How the Browser Understands JavaScript Code

Figure 6: Process of executing JavaScript code. [From: https://medium.com/@mustafa.abdelmogoud/how-the-browsers-understand-javascript-d9699dced89b]

Since JavaScript is written in a human-readable way, we need to make the browser understand JavaScript code as we do. So, just as HTML is understood by the browser through the render engine, JavaScript is parsed by the JavaScript engine inside the browser. The first step is tokenization: the engine maps each element of the code to a token, producing an array of tokens for the JavaScript file. The second step is to convert this array of tokens into an Abstract Syntax Tree. The tree represents the logic of our code and is then turned into bytecode. In this process, the engine analyzes data types (categorizing what each piece of data is used for) and hot functions (functions that appear many times in the code) to optimize the code through the optimizing compiler in the engine (see Figure 6). Finally, the code is translated into machine code by the compiler (see Figure 7).
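To give a feel for the first step only, here is a toy tokenizer written in JavaScript itself. It is a drastically simplified illustration of my own; real engines such as V8 handle a vastly richer grammar and many more token types:

```javascript
// A toy tokenizer for a simple statement, illustrating only the
// tokenization step of the pipeline above (greatly simplified).
function tokenize(source) {
  // Match identifiers, integer literals, and a few single symbols.
  const pattern = /\s*(=>|[=;+]|\d+|[A-Za-z_$][\w$]*)\s*/g;
  const tokens = [];
  let match;
  while ((match = pattern.exec(source)) !== null) {
    tokens.push(match[1]); // each matched element becomes one token
  }
  return tokens;
}

console.log(tokenize("var age = 50;"));
// -> [ 'var', 'age', '=', '50', ';' ]
```

The engine would then build an Abstract Syntax Tree from such an array, recognizing “var age = 50;” as a variable declaration before generating bytecode.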

Figure 7: Translating JavaScript code into machine code. [From https://medium.com/@mustafa.abdelmogoud/how-the-browsers-understand-javascript-d9699dced89b]

A Side Note of Web Design

Although JavaScript can bring a lot of great visual effects to our websites, the reality is that an overly complex visual design doesn’t improve the user experience. On the contrary, cluttered interaction design may cause key information to be ignored or may create visual fatigue. How to ensure that the content of a website is modest and attractive without cluttered interaction is a problem web developers often face.

The interactions that we take for granted today are actually carefully designed. Throughout human history, human interaction with media has been largely one-sided [KGPDM1976]. From writing on slate or paper in ancient times to watching TV shows today, input and output are usually unilateral. Thus, web designers don’t have many direct patterns to draw from. Instead, all these designs have to borrow from the ways we use sign and symbol systems. When you click a button, the button changes its light and shade slightly, just as when you press a button in the real world; that is a visual sign we detect to confirm a valid click. A plus symbol for adding items, a trash-can icon for deleting: these are symbols that refer to specific meanings. How to use these signs and symbols to create appropriate interaction effects is crucial to the performance of a web page.

Conclusion

Thanks to the development of JavaScript, it has become easier and easier to add diverse interactions to websites. But when it comes to web design, flashy effects risk putting the cart before the horse. Therefore, familiarity with JavaScript is almost as important as mastery of design concepts in building a website.


References: 

Aaron. (2009, February 19). How the Internet Works in 5 Minutes. https://www.youtube.com/watch?v=7_LPdttKXPc

Alan Kay and Adele Goldberg (1977). “Personal Dynamic Media.” Computer, 10(3), 31-41. doi:10.1109/C-M.1977.217672

Client/Server, the Internet, and WWW. (n.d.). Retrieved December 13, 2020, from http://www.robelle.com/www-paper/paper.html#servers

How the Web works. (n.d.). MDN Web Docs. Retrieved December 13, 2020, from https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/How_the_Web_works

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles 

Martin Irvine, (new) From Cognitive Interfaces to Interaction Designs with Touch Screens

Mustafa Abdelmogoud, (2019, September 27). How the Browsers Understand JavaScript. Medium. https://medium.com/@mustafa.abdelmogoud/how-the-browsers-understand-javascript-d9699dced89b

Mustafa Abdelmogoud, (2019a, September 27). How the browser renders HTML & CSS. Medium. https://medium.com/@mustafa.abdelmogoud/how-the-browser-renders-html-css-27920d8ccaa6

Peter J. Denning and Craig H. Martell, (2015). Design in Great Principles of Computing. The MIT Press. 

Ron White (2008). How Computers Work, 9th edition. Que Publishing.

Shan Plourde, (2019, March 19). Why are we creating a JavaScript-only World Wide Web? ITNEXT. https://itnext.io/why-are-we-creating-a-javascript-only-world-wide-web-db8c3a340b9