Author Archives: Sacha Qasim

Sacha Qasim: Final

How humans interpret symbols versus how computers are designed to digitally transform information 

The transformative advances in computing are not built on math and programming alone. The very core of computing stems from the “themes of representation and automatic methods of symbolic transformation”[1]. This paper delves into the history and evolution of computational thought, how we interact with computing systems, and how modern computing interprets code, all while examining the symbolic systems in the physical architecture of computing and how they shape human interaction.

The birth of modern computing was enabled by generations of human consciousness and symbolic thought. The modern computer is a complex blackbox that continues to evolve into ever more advanced systems. For decades its capabilities have been continuously enhanced, producing the remarkable technical innovation that it is today.

Dr. Martin Irvine argues that “modern computers come from the same source as language, writing, mathematics, logic, and communication: capabilities that are uniquely human and uniquely symbolic”[2]. From cave paintings to artificial intelligence: how did we get here? Symbolic cognition is the prerequisite for conceptual orientation and abstraction. From the time the brain begins to develop in the womb, processing natural language is immediately part of gaining understanding. It is instinctive for humans to pursue the acquisition of knowledge, turning signs and symbols into meaning.

Abstract concepts can be represented in a plethora of ways by enabling symbolic thought processes in subjects such as science, philosophy, and anthropology. This applies not only to intangible abstractions but also to tangible media we can grasp and interact with, such as musical instruments, chess, and cooking.

Symbolic systems are what combine these procedures for interpreting and generating new expressions. The substrate of these systems can be identified in four features, as Dr. Irvine explains:

  1. Open combinatoriality: a core generative feature of natural language and other symbol systems.

This is what allows us to create new concepts, respond to new and unpredictable situations, and communicate them through a language community.

  2. Symbol systems have built-in features for abstraction and reflexivity.

This is the metafunction: using symbols to represent, describe, and interpret other symbols.

  3. Symbol systems are intersubjective and collective, enabling all things social.

These are the features that make symbolic language the primary means of communication.

  4. Symbolic cognition is externalized and off-loaded in media and memory systems, enabling the formation of cultures and societies.

This is how societies are formed: through writing, media, art, computer systems, and digital memory.[3]

Below is a diagram designed by Dr. Irvine depicting the Continuum of Human Symbolic-Cognitive Capabilities.

This diagram illustrates the progression of how signs and units have evolved through abstract cognitive thought, leading to modern-day computing.

The application of all this to modern computing comes through cognitive technologies. The word “technologies” is broad, applying to any tool that extends human capability. Technology is not limited to the cool, niche, high-tech gadget; it includes any creative production through human design, such as fire, the wheel, or the refrigerator. What distinguishes cognitive technologies from technology in general is that they sit within the computer science landscape: the objective of their design is to represent human cognition. Computer science that mimics functions of the brain is based on previous patterns in symbolic cognition. How symbolic cognition is applied and how cognitive technologies are designed are interchangeable, for an obvious reason: humans are what design them. Current technologies process an invisible archaeology of human symbolic cognition, so much so that we have normalized humanoid designs in computer science such as Siri, Alexa, Roomba, and in extreme cases, the robot Sophia.

For instance, the iPhone presents its interface configurations and structure as a “technically mediated symbol system but also serves as a metafunction”: “The iPhone as meta-medium: a medium designed to represent and process other media”.

The design of computing begins with the binary structures of electronic cells and logic circuits. These logic circuits are switches in the architecture of computing that guide code in digitizing symbols, a scheme ultimately imposed by human design; the computer itself has no logic of its own. Through code, we have implemented key aspects of symbolism in computing structures: we distribute data by compiling source code files and then “running” the binary code. This is the act of “interpreting” data, which is not truly the ability to interpret but the working of layers of design assigned in earlier coding. Once enough data has been processed, the system projects outputs in pixels, audio, and other media that cater to human interpretation. This computing process is designed to act as a partial semiotic agent through the lens of collective symbolic cognition and shared symbol systems that are familiar and recognizable to human interpreters.

Below is a diagram of how binary code is arranged congruently with the symbols we use daily. Binary consists only of true or false, 0 or 1 switches; on their own the switches merely divide one state from the other, but once logic gates such as AND and OR are implemented, they can be combined into more elaborate data operations.
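As a sketch of how such gates combine 0/1 switches, here is a minimal JavaScript example (JavaScript is used for illustration, as in the web examples later in this paper); the half-adder, a standard first combination of gates, adds two one-bit values:

```javascript
// Basic logic gates modeled as 0/1 switches (bitwise operators on one-bit values).
const AND = (a, b) => a & b;
const OR  = (a, b) => a | b;
const XOR = (a, b) => a ^ b;

// A half-adder: combines gates to add two one-bit numbers,
// producing a sum bit and a carry bit.
function halfAdder(a, b) {
  return { sum: XOR(a, b), carry: AND(a, b) };
}

console.log(OR(0, 1));        // 1
console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 }, i.e. binary 1 + 1 = 10
```

Chaining full adders built this way is, in essence, how the hardware performs arithmetic on longer binary numbers.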

On how the computer is designed for “meaning preservation”, Denning and Martell explore the meticulous machinery that enables computing on data:

“When we dig a little deeper into how a machine transforms inputs, however, we can see an important aspect of the design of a program that we call ‘meaning preserving’. Consider the addition of two numbers, a and b. What does it mean to add two numbers? It means that we follow a series of steps given by an addition algorithm. The steps concern adding successive pairs of digits from a and b and propagating a carry to the next higher pair of digits. We have clear rules for adding pairs of numbers from the set {0,1,2,…, 9} and producing carries of 0 or 1. As we design a program for the algorithm we pay careful attention that each and every instruction produces exactly the incremental result it is supposed to…

In other words, the design process itself transfers the idea of addition from our heads into instruction patterns that perform addition. The meaning of addition is preserved in the design of the machine and its algorithms. 

This is true for any other computable function. We transfer our idea of what it means for that function to produce its output into a program that controls the machine to do precisely that. We transfer our idea of the meaning of the function into the design of the machine.

From this perspective the notion that machines and communication systems process without regard to the meaning of the binary data is shaky. Algorithms and machines have meanings implanted in them by engineers and programmers. We design machines so that the meaning of every incremental step, and output, is what we intend, given that the input has the meaning we intend. We design carefully and precisely so that we do not have to worry about the machine corrupting the meaning of what we intended to do…” 
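The digit-by-digit, carry-propagating algorithm Denning and Martell describe can be sketched in JavaScript (an illustrative sketch of the idea, not their code):

```javascript
// A sketch of the addition algorithm described above: add two non-negative
// numbers by successive pairs of digits, propagating a carry of 0 or 1.
function addByDigits(a, b) {
  const da = String(a).split('').reverse().map(Number); // least significant digit first
  const db = String(b).split('').reverse().map(Number);
  const out = [];
  let carry = 0;
  for (let i = 0; i < Math.max(da.length, db.length); i++) {
    const s = (da[i] || 0) + (db[i] || 0) + carry;
    out.push(s % 10);            // the digit we keep at this position
    carry = Math.floor(s / 10);  // the carry passed to the next higher pair
  }
  if (carry) out.push(carry);
  return Number(out.reverse().join(''));
}

console.log(addByDigits(478, 256)); // 734
```

Each instruction produces exactly the incremental result it is supposed to, which is what lets the design preserve the meaning of addition.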

Ultimately, computation is data-driven code that presents itself through information representations. Peter Denning describes computer science as “the study of phenomena ‘surrounding computers’ and returns to ‘computer science is the study of information processes’. Computers are a means to implement some information processes. But not all information processes are implemented by computers — e.g. DNA translation, quantum information, optimal methods for continuous systems”.[7]

Computing and code, then, are blackbox systems made comprehensible to us through layers of symbolic design and precise calculation. Language and art are at the forefront of these complex designs in modern computing. However far we pursue math and science, none of this would be possible without the continuum of language enabled by human interpretation and our ability to translate it into data that feeds computing, software, and technology. These practices have been refined toward what is most efficient and will continue to evolve as human cognitive abilities do.

A more concrete example of how symbols evolve into richer interactions with websites is programming languages such as HTML, CSS, and JavaScript. A computer program is a set of instructions that guides a computer and is executed through the code; such programming instructions are called statements. Both HTML and JavaScript rely on the web browser in order to be executed.

How are HTML, CSS, and JavaScript designed to facilitate the symbolic capabilities we use? The ability to create two-dimensional frames with text, images, and design is all coded through HTML, CSS, and JavaScript. The better the design, the more appealing it is for us to use regularly. With attractive, easy interaction design, a website can even lead to psychological dependency: addiction. Many social media platforms, i.e., Twitter, Instagram, and Facebook, use a slot-machine interface meant to draw in users through “ludic loops”, which are “cycles of uncertainty, anticipation, and feedback.”[1] It is important to acknowledge that anything digitizable can be represented in the two-dimensional substrate of pixels, including being changed and transformed by software that can cut, paste, or alter colors.


First, HTML, CSS, and JavaScript files are all plain text written in text editors. Within the files, each keyword has a specific meaning and function that the browser is able to process the way we intend.

Hyper Text Markup Language (HTML) is the standard markup language for creating a website. It is structured to describe the data of the website through a series of elements and symbols, as previewed earlier in this paper. An element is what defines declarations in the HTML code. The elements are what display the content in the browser, whether Chrome, Firefox, Safari, etc. Without the elements, the browser cannot determine how to display the content of the website.

Image: the skeleton of an HTML document.

Once the HTML content is in place, Cascading Style Sheets (CSS) is the language that guides how the HTML is displayed on any particular device. Specifically, CSS is the definitive source for styling a website, with features governing the design, layout, and other variations of the display to accommodate the plethora of devices the website will be accessed through. Before CSS, HTML development was far more taxing for web developers, because the aesthetics had to be coded alongside the content of the website. CSS freed HTML developers (also called front-end engineers) to focus on content and leave the aesthetic and major design decisions to CSS.

Image: an example of CSS code implementing color, font, alignment, and sizing.

Another simple view of how CSS code works:

This image displays each of the values and declarations that guide CSS.


JavaScript was invented by Brendan Eich in 1995 and became standardized in 1997. JavaScript is essential for web developers to learn along with HTML and CSS; roughly 95% of all websites use JavaScript somewhere in their interface. JavaScript travels with HTML files and enables the on-the-fly interactivity we are accustomed to, and it is designed to handle interactions elegantly on any device: iPhone, desktop, tablet, etc.

JavaScript is also designed to distribute computational load for tasks that take more processing power. It is used not just for fetching data but for connecting to programs and analyses that demand more computation, which can run on the web server side rather than solely on a local device. A JavaScript program can target many types of platforms when the developer knows what it will be connected to, i.e., streaming media, which takes a great deal of computational power; the processing done on the streaming side must match the capability of the client device.

JavaScript is capable of many features, such as updating and changing HTML and CSS code. Mainly, it is able to calculate, manipulate, and validate data. It is often the final step in designing a website, adding more “logic” on top of the HTML and CSS elements. JavaScript is therefore much more dynamic and powerful than HTML or CSS, and it is used to develop and manage large applications. Its core capabilities include changing HTML content and attribute values and hiding and showing elements; for CSS, JavaScript is able to change any style. The browser knows a file is JavaScript because of the specific file extension in its name, e.g. “app.js”.
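As a small illustration of the “validate data” role, here is a sketch in plain JavaScript; the validateAge function and its rules are hypothetical, invented for this example:

```javascript
// A sketch of client-side validation: check a user-supplied string
// before accepting it (here, a hypothetical "age" form field).
function validateAge(input) {
  const n = Number(input);
  // Accept only whole numbers in a plausible human range.
  return Number.isInteger(n) && n >= 0 && n <= 120;
}

console.log(validateAge('29'));  // true
console.log(validateAge('abc')); // false
console.log(validateAge('-4'));  // false
```

On a real page, a check like this would run before a form is submitted, so bad data never reaches the server.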

Image: JavaScript code for code art

In this image, the JavaScript source code shows how to develop art via code.

This is the final image using all three languages: HTML, CSS, and JavaScript.

The ultimate question is how the web browser is able to analyze and process data with JavaScript to make it comprehensible to a human audience. The browser’s ability to understand JavaScript is a layered and complex system. For this example, I will use Google Chrome, as it is what I am using currently and is the most widely used web browser.

Image: an overview of how JavaScript works

The engine takes the string of code and carefully examines each symbol and character, matching them against its implemented vocabulary. It tokenizes any string of code into an array of tokens, so specific that each element of the code is characterized. For example, if my string of code is let x = 10, the engine will convert it into an array of tokens.
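As a sketch of what that tokenization might look like, here is a toy tokenizer in JavaScript; it illustrates the idea only, and is not V8’s actual scanner (the token type names are my own):

```javascript
// A toy tokenizer: split a source string into typed tokens,
// roughly the way an engine's scanner characterizes each element.
function tokenize(src) {
  const spec = [
    { type: 'Keyword',    re: /^(let|const|var)\b/ },
    { type: 'Identifier', re: /^[A-Za-z_]\w*/ },
    { type: 'Number',     re: /^\d+/ },
    { type: 'Punctuator', re: /^[=;]/ },
  ];
  const tokens = [];
  let rest = src.trim();
  while (rest.length) {
    const hit = spec.find(s => s.re.test(rest)); // first matching token kind wins
    if (!hit) throw new Error('Unexpected input: ' + rest);
    const value = rest.match(hit.re)[0];
    tokens.push({ type: hit.type, value });
    rest = rest.slice(value.length).trim();
  }
  return tokens;
}

console.log(tokenize('let x = 10'));
// [ { type: 'Keyword', value: 'let' },
//   { type: 'Identifier', value: 'x' },
//   { type: 'Punctuator', value: '=' },
//   { type: 'Number', value: '10' } ]
```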

Then, using this array of tokens, the Abstract Syntax Tree (AST) is generated by parsing. The AST is the tree representation of the source code, a fascinating parallel to the syntax trees linguists use to break down sentences. Parsing is the key step that defines each variable.

Image: AST visualization

Then bytecode is generated, which executes the code, and Just-In-Time (JIT) compilation occurs and runs; the source code at this point has been compiled and is being profiled. In the Google Chrome V8 engine specifically, Ignition completes the generation of the bytecode and the profiling, and at this point the JavaScript code is up and running. Finally, “TurboFan is the optimization compiler inside V8, based on the info that Ignition collected, TurboFan starts to optimize the functions for better performance.”[11]

In conclusion, advances in programming and technology all stem from symbolic, cognitive, connective human behavior such as cave paintings, hieroglyphics, music, and much more. Through this paper, we examined the evolution of signs and symbols and how it advanced into the binary code that became the foundation of computing. The computer is undeniably an exterior organ for us, and advances in programming have enhanced our affinity for these machines by way of the aesthetically engaging platforms created with HTML, CSS, and JavaScript. With a foundational understanding of how these programs work, the way source code travels through so many blackboxes to become visible on this very screen deserves pause and admiration for the complexity of these systems.



[1] Dr. Martin Irvine. The First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing. 

[2] Dr. Martin Irvine. “Introduction to the Human Symbolic Capacity, Symbolic Thought, and Technologies.”

[3] Dr. Martin Irvine. Key Concepts in Technology: Symbolic Cognition and Cognitive Technologies.

[4] Irvine Youtube video.

[5] Dr. Martin Irvine. Introducing C. S. Peirce’s Semiotic: Unifying Sign and Symbol Systems, Symbolic Cognition, and the Semiotic Foundations of Technology.

[6] Dr. Martin Irvine. Intro to Symbol Systems, Semiotics, and Computing: Peirce 1.0

[7] Peter Denning, Craig Martell. The Great Principles of Computing





Sacha Qasim: Week 13

Detailing the discoveries I made this semester through a multitude of lenses has been enriching and has deepened my understanding of Computing and the Meaning of Code.

Breaking down the frameworks of how code has effectively advanced computing has been a fascinating journey as we worked through the complex, abstract blackboxes of code. As we gained a deeper understanding, we were able to see how grasping semiotics and syntax, even at a novice level, helps one understand computer science better.

Throughout the coursework, we kept finding ourselves amused by how closely the subject areas of linguistics are involved in our ability to communicate with computers through code. We gained a deeper understanding of how natural language processing works on a human-to-human level and how, over time, it inadvertently became something that could be used not only with another being but with a machine.

The continuum of human symbolic capacity is broken down with language at the heart of symbolic capacity, abstraction, mathematics, computation, and software. We then delved into the theory of symbol systems and technology by introducing semiotic theory. Throughout history, we meet abstractions of art and symbols meant to be interpreted as a code signaling for humans to understand. Whether for aesthetic purposes and pleasure or for informing another, this holds significant value for what we are capable of creating through symbols alone.

Humans are able to understand and interpret hundreds of thousands of symbols. For a computer to understand symbols immediately is a great deal to ask of it. Therefore, an advanced system of binary code is designed at the foundations of code to communicate with computers. In binary, 1s and 0s are used as on and off switches; the computer interprets symbols through true and false values. This leaves computing, at its most unostentatious, with two switch states used as instructions. The unit of information transmitted to the computer is called a bit. Binary is necessary for any aspect of computing to function: everything we see on our digital interfaces is a complex system of binary code amalgamated together and used as information surrogates, transferring data into the system.
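A quick JavaScript sketch of this idea: any number can be written as a row of on/off switches and read back unchanged:

```javascript
// A number written as eight on/off switches (one byte), then read back.
const n = 13;
const bits = n.toString(2).padStart(8, '0'); // base-2 representation, padded to 8 bits
console.log(bits);              // "00001101"
console.log(parseInt(bits, 2)); // 13, the same value recovered from the switches
```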

Professor Irvine homes in on the essential tools embedded in modern computing, describing “the crucial prerequisite for the use of technology for computing…the development of notation, or language systems, sufficiently comprehensive to satisfy both the need for representation and the need to express and implement mechanisms for the transformation of expressions in the language”.

Progressing further into the abstract parallels of computing, the birth of the internet and the Web rests on the segments assessed above. Networked data and “multimedia” interfaces represent our cognitive ability to build and comprehend; this is where natural language, semiotics and symbols, and binary code tie together. The most basic interface layer is HTML, CSS, and JavaScript. Through these forms of programming, the platform cascades and converts binary code into something we can easily understand, sparing us from shifting between multiple languages, conventional and digital, to follow the processes. With these basic steps, the design principles seamlessly transfer data through the servers; the metadata and data supplied through these domains make the web interfaces comprehensible, so that we can read the symbols and images with ease. These operations are carried out through a multitude of commands within the system.

The first half of this course was especially gripping, as it was my first exposure to any linguistics literature. Seeing what originally seemed like very random, different subjects come together so closely knitted revealed key concepts of computing I would not have understood otherwise. While it at first felt like “jargon”, my coding experience has been enhanced: I now understand more fully the origins of how code works, rather than just plugging and chugging and taking for granted that these terminals are there by chance.


Irvine, Martin. Important Steps in the History of Ideas about Symbols and Computing. 

Week 12: Qasim

Navigating the endless libraries of programming, statements and functions, algorithms, data structures, computational thinking, and software, we rely on all of them daily. Every day we have become accustomed to how these systems play an integral part in our lives: our alarms, home thermostats, cars, and (cringe) smart home devices like Alexa or Google Home. They all use complex programming systems to achieve what they were built for.

Through these blackboxes of algorithms and deeply wired programs, computation continues to be an integral part of our lives, from our abodes to commercial interaction. Broken down, these blackboxes simplify into symbol processes and syntax. Their design principles rely on functions and statements, which in turn rely on more advanced syntax expressions. Without these features, we would be hindered from the connectivity we have with our social networks today. “Computation as the outcome of centuries of cumulative human symbolic thought for representing abstract patterns and processes” is how Dr. Irvine explains the origins of code.

To program a computer is to give it a heartbeat. Without properly assigning values and compiling comprehensible code, the computer is unable to process the intended objectives. To execute code, a computer begins with binary at the very basics; for example, a string such as 001010100010110101 is how a computer is instructed to process data. Compilers and interpreters are the tools that make running a program possible.

The foundations of code are built with structures that help organize the data. To start, there are functions, a way to break up the code, e.g. print() or input() in Python, used to issue commands to the computer; Python provides hundreds of others. Functions help the user break the code down into a comprehensible sequence so the computer can repeat it over and over again.
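The same idea can be sketched in JavaScript (the language of my final paper); the greet function here is a made-up example of bundling steps into a reusable piece:

```javascript
// A function bundles a sequence of steps under one name so it can be
// reused over and over, instead of rewriting the steps each time.
function greet(name) {
  return 'Hello, ' + name + '!';
}

console.log(greet('Ada'));  // "Hello, Ada!"
console.log(greet('Alan')); // "Hello, Alan!"
```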

As Evans explains, the complexity of functions is infinite, but functions are what produce the successful outputs of computational code. Knowing how to make procedures, make decisions, and apply procedures is a foundational skill for success in any computing project.


David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. 

Qasim: Week 11

Delving into computational thinking through David Evans and Jeannette Wing, we see parallels in how we interact with computational principles and concepts in code.

Wing argues in her article “Computational Thinking” that there are immense amounts of problem-solving in the fundamentals of constructing the very basics of computational theory.

Computational thinking ranges across a plethora of subjects, varying between design systems, arithmetic, abstraction and the decomposition of complex systems, and more. With this, we see in LinkedIn Learning the sheer ubiquity of what coding translates into.

Python is now the common learning language, where previously it was C++. It is used to simplify, unify, and tame the complexity of problems worked through the terminal; with memory allocation handled for them, programmers are not concerned with the low-level processing of numbers or data.

Evans homes in on why we should understand computing, a subject area of ever-increasing importance. Computer science centers on information processes: a “sequence of steps” that organizes and converts a physical implementation into abstract information. A procedure then describes the processing, and an algorithm is a mechanical procedure that guarantees the completion of the task.


Jeannette Wing. Computational Thinking. 

David Evans. Introduction to Computing: Explorations in Language, Logic, and Machines. 2011. 

Qasim Week 10

Computer systems are a complex part of how we communicate with our devices. We call the complexity of these systems a blackbox: a system that continues to be decoded through many layers, redefining and reconceptualizing language and human interaction. We communicate with these devices on multiple spectrums. For one, visually: is the content on the screen appealing, and are we able to internalize it? This is the province of User Interface and User Experience design methods. We also have audio forms that make it easier for us to connect; specifically, Siri and Alexa act as liaisons between computer and human on a much more personalized level. These systems are so complex that we are watching them evolve in how they communicate with us: the voices are more human, they use more casual lingo, and they are sometimes even a bit smart.

Data is transferable in how we socialize and in how we socialize data. As we remediate the representation of images, we also learn how to do so digitally. Computing needs to be a medium conceptualized through accessible data, whether imagery, audio, or, for some people, touch. Combining data types into one interface must be efficient, and not just complex in the programming but appealing to us aesthetically.

Brad Myers delves into other ways we will communicate with computers, and one that struck me was “gesture recognition”. With this in place, we can broaden the range of users who use computers considerably. Human intention is applied to interface and user design as we become more creative with these systems. Other modes Myers discusses include multimedia, three-dimensionality, and computer-supported work. Imagining these systems feels far-fetched and hard to think of when we have become so accustomed to what we use now; it almost feels like imagining a new color.

Sacha Qasim: W9 HTML

Click here to see the live URL!

<!DOCTYPE html>
<html lang="en">
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<title>Sacha Qasim Week 9: HTML </title>

<!-- import the webpage's stylesheet -->
<link rel="stylesheet" href="/style.css">

<!-- import the webpage's javascript file -->
<script src="/script.js" defer></script>
<h1>Intro to HTML</h1>

For this week’s assignment, I will use HTML code to respond to the weekly assignment.
<br> HTML stands for HyperText Markup Language.
<br> HTML is a tool that web browsers respond to in applying and displaying language text, images, videos, links, and so much more!

Understanding How HTML Works
HTML is the foundation of language display on a computer. Syntax is one of the main variables guiding not just HTML but other programming languages. The syntax matters because these are rules set by the systems, and if not applied correctly, they can lead to SYNTAX ERRORS.
<br> For HTML to work, the developer needs to be familiar with TAGS. Tags are a special syntax for defining each element within the program. Tags open with < and close with >; this helps HTML indicate what type of element is going to be implemented.
<br> Tags that are generally used include a, h1, h2, h3, h4, h5, p, div, span, and many more. To end a tag and indicate to the program to run, apply <./ . (I had to add a period after the “less than” sign here since it otherwise wanted to run as a tag; you do not actually put the period between the two symbols.)

<h3> Questions </h3>
Why do only some links work when you apply them and try to hyperlink them using the <.a.> tag <./.a.>?

Side note:
To have made this pretty with colors, fonts, and sizing, I would have needed to use CSS- perhaps for another class.


Sacha Qasim: Week 8

Peter Denning delves into the deeply intricate frameworks of information, symbols, and computing, conceptualizing these processes by breaking them down and defining them. Denning focuses on information processing and data in his book Great Principles of Computing, assessing that all of computer science is confined to “the study of information processes”[1]. Across his chapters, Denning states the simplicity of computing: “information consists of (1) a sign, which is a physical manifestation or inscription, (2) a relationship, which is the association between the sign and what it stands for, and (3) an observer” (Denning). Computers today are more intuitive than ever before and can improve our lives in a plethora of ways beyond text information and interacting with information displays.

Computing at its most unostentatious consists of variables of true and false, positive and negative, on and off, 1s and 0s. This is how information travels through the computing system as electricity, and it is what defines binary code, which grows progressively more complicated the further we delve into information and symbols. The unit of information transmitted to the computer is called a “bit”[2]. One bit can represent two distinct things, such as 1 or 0. This carries on until 8 bits are collected, which is equivalent to a byte. Context is imperative, as bits and bytes do not have any meaning by themselves; they must be taken in the context of a symbolic system, for example a binary pattern such as 01010111 read as the number 87, as a shade of gray, as a capital letter, etc.
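A small JavaScript sketch of this context-dependence: the same eight bits can be read as a number or as a character (in ASCII, 87 happens to be a capital W):

```javascript
// One byte, several readings: the symbolic context decides what it means.
const byte = 0b01010111; // the bit pattern 01010111
console.log(byte);                              // 87, read as an unsigned integer
console.log(String.fromCharCode(byte));         // "W", read as an ASCII character code
console.log(byte.toString(2).padStart(8, '0')); // "01010111", the raw switches
```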

Binary is necessary for any aspect of computing to be completed. Something as pleasantly aesthetic as a web image uses binary code to show artistic creations and photographs. Pixels are the key element allowing an image to be processed on the computer screen. Specifically, a pixel is “a mathematical abstraction for color values mapped to a Cartesian space designed to be implemented in physical locations (= memory and display device addresses). Mathematical values are represented as an ordered set (a 3-tuple) for Red, Green, and Blue (each position from 0-255, yielding 256 values)”[3]. With this precise computational transfer, an image can be generated by the use of “bitmaps” and displayed in your window.
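A JavaScript sketch of the pixel as a 3-tuple; the 24-bit packing shown here is one common bitmap convention, used purely for illustration:

```javascript
// A pixel as an ordered (R, G, B) triple, each value 0-255,
// packed into a single 24-bit number as a bitmap might store it.
function packPixel([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}

const gray = [128, 128, 128]; // a middle gray: equal parts red, green, and blue
console.log(packPixel(gray));              // 8421504
console.log(packPixel(gray).toString(16)); // "808080", the familiar hex color code
```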

The CPU is what hosts the binary code and processes it. The CPU has three components: an arithmetic logic unit, registers, and a control unit. It converts data into information processes, but to do so it follows a specific sequence of instructions, programmed through its RAM. In carrying out these processes, it utilizes logic gates to transfer data and communicate within the CPU.

[1] Denning, Peter. “Great Principles of Computing”

[2] Khan Academy. Unit: Computers

[3] Dr. Martin Irvine. Computational Thinking: Implementing Symbolic Processes

Week 7: Qasim

Through this week’s series of articles and videos, I have come to define these subject areas as follows:

Source: Al Jazeera

To encode and decode a data-type instance as data is first to acknowledge that everything is a data type. Whether symbols, emojis, or pictures, all are representations of information channeled into computing through binary code, which is rendered into text characters. First, bytecode serves as the intermediate representation in creating a character. The software then follows the “stack design (the levels for rendering text through graphics controls and applications)”[1]. Finally, the output consists of character shapes transformed into pixel patterns for screen display. We can also conceptualize this through the Digital Images video from Computerphile, which delves into how pixels work through the use of RGB and how the bits of the three channels are used within each pixel.

E-Information is a series of black boxes deeply integrated within computing systems: information is formatted into bits and bytes so that it can be processed through the computer system’s memory.[2]

[1] Dr. Martin Irvine, “Introduction to Data Concepts and Data Types”

[2] Dr. Martin Irvine, “Introduction to Data Concepts and Data Types”

Week 6: Qasim

Broadly conceptualizing code leads into the multiple dimensions of how we interpret information. On/off switches (also known as binary code) are the catalyst for signaling data. But as we gather more information, we need more “gray” between on and off to translate it. For this, we add more binary digits, each one narrowing the meaning further.
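The “more gray” point above has a precise form: each added on/off switch (bit) doubles the number of distinguishable values, so finer distinctions in meaning simply require more bits. A minimal sketch:

```python
# Each bit doubles the number of representable values: 2 ** n_bits.
for n_bits in (1, 2, 8, 24):
    print(n_bits, "bits ->", 2 ** n_bits, "distinct values")

# 1 bit  ->         2  (a bare on/off switch)
# 8 bits ->       256  (one color channel, or one ASCII character)
# 24 bits -> 16777216  (a full RGB pixel)
```

This is why the same binary substrate can carry text, images, audio, and video: the difference is only in how many bits are grouped together and what convention interprets them.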

Information is architecturally designed in layers, with data serving as the imperative substrate layer for symbolic systems. This includes but is not limited to text, graphics, images, audio, and video. The process is not always flawless, but it is impressive how many ways we are able to communicate, with the E-Information packet being one of the main units of data processed over the internet. It is easy to notice the differences between messages across platforms, but before any data you send or receive arrives, it has likely been split apart, processed, and successfully reassembled. Say you are sending a picture of your dog to a friend. First, it is broken into small packets (TCP), each addressed for delivery (IP). The packets travel through your router to your local ISP, then to a long-haul provider, crossing network boundaries via the Border Gateway Protocol (where sensitive traffic, e.g. government data, may be further verified), then into the receiver’s local ISP, completing the full circle; at the end, the packets are reassembled before the picture is displayed to the receiver.
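The split-and-reassemble idea above can be sketched in a few lines. This is an illustration of the principle, not real TCP: data is cut into numbered chunks that may arrive out of order, and the receiver uses the sequence numbers to rebuild the original message. The function names are my own.

```python
def split_into_packets(data: bytes, size: int):
    """Cut data into (offset, chunk) pairs of at most `size` bytes each."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Sort packets by offset and join the chunks back together."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"a picture of your dog"
packets = split_into_packets(message, 5)
packets.reverse()  # simulate packets arriving out of order
print(reassemble(packets) == message)  # -> True
```

Real TCP adds acknowledgements, retransmission, and checksums on top of this, but the core contract is the same: arbitrary data in, identical data out, regardless of the path the packets took.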

Everything you type on Facebook, even if deleted, is collected as data and used to categorize the ad content you will see. Importantly, we rely on providers like Facebook, Apple, and Verizon to access the internet: the internet that no one owns but we all use.

“We didn’t invent electricity, we designed it” 

Week 5: Qasim

*Had technical issues with XLE-Web loading the parsed sentence (taking over 2,400 seconds to process the parse).

In defining the nebulous nature of language, we use a plethora of factors to identify the types of human language. Natural language, for that matter, is the acquisition of a common understanding for human interaction and communication. Which language community we are born into is simply luck, and we learn to be “attuned to the patterns of sounds and grammar in the language community we are born into”[1].

Steven Pinker advances the notion that language consists of words as well as rules (grammar). Words are “memorized arbitrary pairings”[2] of sound and meaning. We have conventionally trained ourselves so that these sound-meaning pairings define how we name the world. Enabling a language is concurrent with the arbitrary nature of identifying symbolic tones, units, cognitive beliefs, etc.


How can we tie this more into computing?

Side note: it is fascinating to conceptualize how we have condensed all these massive ideas and concepts into symbolic words (and illustrations, e.g. calligraphy) and, even more so, how we have turned complex concepts into binary code for transferring language.

[1] Martin Irvine, “Introduction to Linguistics and Symbolic Thought: Key Concepts”

[2] Steven Pinker, “Words and Rules: The Ingredients of Language”