

The Symbolic Representation in the Evolution of Computing That Led to Our Current Technological Advancements in Artificial Intelligence

Abstract

This paper investigates the symbolic history behind computing and technology, as that history ultimately answers how, why and for what we use them today. The long history of computing lies behind the details that go into creating what we, in one word, refer to as software. My guiding research questions looked into the history of computing while focusing on the importance of human symbolic and semiotic systems as the basis of computing, and on the depiction of non-technological concepts as cultural and societal fragments that are then reflected in the actual technological innovations and advancements occurring in a specific period of time. Understanding how and why our human symbolic systems work is the first step towards decoding the “black box” of computing. Highlighting the relationship between the cultural and societal characteristics embedded in those symbolic systems establishes the connection between the technological past and present. I further look into theoretical background and research from leaders in the field, and provide an example of how and why the recent presence of Artificial Intelligence, which has infiltrated most aspects of our lives, is related to our cultural and binary symbolic systems of representation, and why we have adapted to it so well. The evolution of computing has been able to take place because the sets of our very basic symbolic systems and understandings have allowed us to develop a much more complex system comprised of mathematical, computational and encoded information.

 

Over the last few decades we have witnessed technology take over the world, as well as most aspects of our lives. For most, technology is a recent phenomenon that has only existed among and for the newer and younger generations. However, in order to truly understand why and how technology has infiltrated our lives so intricately, we need to re-evaluate the origins of how and when it all began. A big part of understanding how it all started is coming to terms with human symbolic capacity and systems. We need to deconstruct the idea that being human and having technology and machines are very separate things, because in reality the latter two are a result of the former and are representations and depictions of our physical and cognitive symbolic ideologies and culture (Irvine, 2020). More specifically, even “[o]ur modern technical media are forms of symbolic artefacts developed for large-scale social systems” (Irvine, 2020, 2).

Map/Table 1. Professor Martin Irvine’s depiction of how our human-symbolic capabilities developed over time and what they signify or depict in each “phase”. From our class notes and readings “Prof. Irvine – CCTP-711 — Intro to Week 3 – The Human Symbolic Capacity” (2020).

Image 1. One of the first digital computers to exist. MIT’s Whirlwind machine was introduced on March 5th, 1955 and was the first of its kind to contain “magnetic core RAM and real-time graphics”. From ComputerHope, 2020.

What we today call a computer or a laptop, for example, is a reflection and projection of the machines, software and overall technological innovations and theories of the past. The crucial part, however, is understanding how the discoveries of the past are directly connected to the technology we have today and are the reason it even exists. Often we find it difficult and foreign to establish any connection with these devices, and we alienate ourselves from them, because it is hard to conceptualize what this modern technology really is. What is the cloud? What is this artificial intelligence? The breakdown of how it all started can answer those questions. Our connection to our human symbology is what hides the reasoning behind the unexplained, also known as the things that go on behind the scenes. Binary numbers, coding and other symbol systems are used together and interchangeably to create the software and machines that have been built over the decades and have paved the path for the ones we know today. If more people understood the connection of our very symbolic culture and its history as a crucial part of the history and development of technology, we wouldn’t have as many “unexplained” and unanswered questions that further detach cultural patterns from tech in people’s minds.

Image 2. The Colossus was the first electrically programmable computer, created by Tommy Flowers between 1943 and 1945. Tasked with decoding and deciphering secret messages of the Nazis, it had no RAM (memory) but used Boolean, logical and mathematical operations in order to execute the job it was created to do. Photo from ComputerHope, 2020.

Image 3. The Turing machine, created by Alan Turing in 1936 and considered the prototype of the modern computers we use today. From ComputerHope, 2020.

Michael Mahoney, a historian of science, in his piece “The Histories of Computing(s)” (2005), explains why people feel this detachment and lose the cultural subtext behind all computers, machines and software. Most connect current machines and actual physical computer objects to descendants of the Turing machine, but since the machine wasn’t necessarily confined within the physical limitations of the object itself but rather depicted a “schema”, that concept “could assume many forms and could develop in many directions” (Mahoney, 2005, 119). In doing so, it “assumed” a depiction of the various cultural meanings and understandings that humans, and especially the groups of people actively working on the development and creation of these applications, had and still give to the symbolic historical attributes we ascribe to a symbol system (Mahoney, 2005). As an actual physical machine made of ever smaller materials and parts, it would not really mean anything; it cannot stand alone. It is a compilation of

“histories derived from the histories of the groups of practitioners who saw in it, or in some yet to be envisioned form of it, the potential to realize their agendas and aspirations […] the programs we have written for them, reflect not so much the nature of the computer as the purposes and aspirations of the communities who guided those designs and wrote those programs” (Mahoney, 2005, 119).

Connecting back to that idea, we realize that after all we are not so different from these machines. We gave our own cultural meanings and understandings to symbols in order to serve our needs. Even before the Turing machine, we can trace computing as we know it today “back to the abacus and the first mechanical calculators and then following its evolution through the generations of mainframe, mini, and micro” (Mahoney, 2005, 121). Each technological era, time period or decade started somewhere and adapted to the cultural circumstances and needs of humans.

Image 4. An example of a symbolic system that used symbols, numbers, categorizations, etc., to solve and improve what later became fundamental to creating the technology that we have today. This is a depiction of the SSEM’s very first program. The SSEM was “the first computer to electronically store and execute a program”. It was designed in 1948 by Frederic Williams and then built by his protégé, Tom Kilburn, whose notes these are. From ComputerHope, 2020.

Our human symbolic systems, which are better understood as our natural language and cultural-symbolic artefacts such as languages, writing, alphabets, mathematics and mathematical symbols, scientific symbols and signs, etc., are the first step in depicting human symbolic-cognitive capabilities. There is a mutual understanding that these were and are the very first methods of representation, safekeeping and external symbol storage of overall human culture and capability (Irvine, 2020). The crucial part, however, is understanding that because of these accomplishments and capacities, we were able to transcribe that into a digital system of information, computation, software systems and the overall technological advancements of today’s world (Irvine, 2020). The “archaic” and initial symbolic systems created by humans are the reason why and how we now have the technological luxury to live with and among the systems, software and machines that we can no longer live without. These include anything from social media and the depiction of our lifestyles through videos, images and music, to Artificial Intelligence being a part of our day-to-day lives in the form of our smartphones, smart cars and even smart wearable medical devices.

Artificial Intelligence has been one of the most, if not the most, nuanced concepts of the past few decades’ technological advancements. A.I., however, can characterize something very specific, such as the artificial intelligence that is part of our smartphones, or something more general, such as data analytics, machine automation and more. Although it is a complicated concept to grasp, as most things related to AI still remain in the technological and computational “black box”, some aspects of AI have made it possible to bridge the gap between human and machine without our necessarily realizing it. Specifically, the AI used in our day-to-day devices, such as smartphones and smart wearable tech, has become not only a permanent but also a highly depended-upon part of our lives. Behind this type of AI are all those binary, semiotic and symbolic figures, structures and meanings that humans have ascribed to what we constitute as computing and software. Artificial Intelligence in the form of IPAs (Intelligent Personal Assistants) overpowers its “black box” with the distinct anthropomorphic disposition it embodies. These IPAs have a daily presence in our lives because they also reflect certain societal and cultural concepts and notions.

Researchers Goksel-Canbek and Mutlu (2016), who have investigated the topic of IPAs as part of our regular habits, explain the various connections that can be established between users and IPAs, connections that rely less on the actual physical machine and more on the AI, usually a female voice or unseen presence. The evolution of our technology from pure binary code and symbols has progressed to such an extent that software can freely interact with its user without the need of another human monitoring the program. Goksel-Canbek and Mutlu, as well as other experts in the human-tech field, have assigned different reasons as to why we find such a strong connection and normalcy in IPAs, yet often struggle with other adaptations, forms and applications of Artificial Intelligence. The humanoid form attributed to these intelligent assistants, such as Apple’s Siri, Google’s Google Now, Microsoft’s Cortana, Amazon’s Alexa, etc., is, according to Goksel-Canbek and Mutlu, partially due to the Three-Factor Theory, which makes us more comfortable with understanding this type of software and device (Goksel-Canbek & Mutlu, 2016). The Three-Factor Theory justifies, with the use of psychological evidence, people’s tendency to ascribe anthropomorphic forms, features and characteristics to non-living and non-human entities (Goksel-Canbek & Mutlu, 2016; Theocharaki, 2020; Cao et al., 2019; Nass et al., 1999). An evolutionary achievement that has allowed IPAs to develop into what they are, with the capabilities that they have, is Natural Language Processing (NLP). NLP is a great example of a human symbolic capacity that has evolved over the decades, as have our own societal and cultural understandings, perceptions and needs.
Goksel-Canbek and Mutlu highlight the importance of NLP as “the most crucial element for creating computer software that provides the human-computer interaction for storing initial information, solving specific problems, and doing repetitive tasks demanded by the user” (Goksel-Canbek & Mutlu, 2016, p. 594; Theocharaki, 2020), as they focus on how these IPAs are used for foreign-language learning. Their software intelligence allows such “machines” to work independently and interact on their own will (to some extent), knowledge and capability, while using natural human language and semantics (Goksel-Canbek & Mutlu, 2016).

Goksel-Canbek’s and Mutlu’s research (2016) consisted of performing a variety of test interactions between IPAs (specifically Siri, Google Now and Cortana) and students who wanted to learn a new language. They recorded and monitored multiple instances where students were asked to address questions to the device, to see how the IPAs interact, react and “behave” (Goksel-Canbek & Mutlu, 2016). They also compared the performance of the three different assistants, which not only highlights the weaknesses and strengths of each, but also perfectly illustrates how, even though the software for all three IPAs might have a similar “story of origin” and definitely overlaps in many features, criteria and “black-box content”, the billions of complex possibilities that our semiotic systems allow for create differentiation and promote adaptability into multiple forms and usages. Even though they still lack the potential of a real-life language tutor, IPAs have gained the trust of so many people, who use them daily mostly to facilitate their busy lives or even to teach them something new, because they provide the extra humanistic feature that, for example, the Turing machine lacked, even though the former is the continuation of the latter. The software behind the IPAs uses the same symbolic systems and capabilities that led to the abacus or the first physical calculator, but has evolved, developed and adapted to each level or stage of history it came across, reflecting the human values and belief systems of the time.

 

Image 5 & 6. Screenshots from Goksel-Canbek’s and Mutlu’s (2016) research findings showing some results and notes from the interactions of the IPAs and the users/students while using Google Now, Siri and Cortana.

Conclusion 

The evolution of computing has been established and executed through the presence of human symbolic and semiotic systems that have adapted throughout the decades, allowing for the technological improvements and advancements that have led to the tech, machines and software that we use today. The technology available to us today isn’t a new invention or a futuristic phenomenon. We often neglect to remember or realize that it is rather a continuation of our primary symbolic systems, combined with the cultural, societal and contextual understanding of each time. Those two things work together to form the software, machines and technology that have evolved throughout time. It is both a result and a reflection of our need to create, to fill in the gaps, or to find solutions for the specific time’s needs and problems. In a way, the extreme view would be to consider that even tech prototypes are no longer a thing, since nothing in tech arises from ground zero; one way or another, all findings are a continuation, improvement or expansion of another.

 

References and Works Consulted

Agre, Phillip. (1997). Computation and Human Experience. Cambridge University Press. 

Cao, C., Zhao, L., & Hu, Y. (2019). Anthropomorphism of Intelligent Personal Assistants (IPAs): Antecedents and Consequences. In PACIS (p. 187).

Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

Irvine, Martin. (2020). CCTP-711: Week 3: Introduction: The Human Symbolic Capacity:
From Language and Symbol Systems to Technologies. CCT Program (course notes).

Irvine, Martin. (2020). Introducing C. S. Peirce’s Semeiotic: Unifying Sign and Symbol Systems, Symbolic Cognition, and the Semiotic Foundations of Technology.  CCT Program (course notes).

Kockelman, P. (2013). Agent, person, subject, self: A theory of ontology, interaction, and infrastructure. Oxford University Press.

Mahoney, Michael. (2005). The Histories of Computing(s). Interdisciplinary Science Reviews, 30(2), 119-135. 

Nass, C., Moon, Y., & Carney, P. (1999). Are People Polite to Computers? Responses to Computer-Based Interviewing Systems. Journal of Applied Social Psychology, 29(5), 1093-1109.

Theocharaki, Danae. (2020). CCT 505: Assignment #5– Putting it All Together. CCT Program (class assignment). 

Theocharaki, Danae (2020). CCT 505: Assignment #7 – Synthesizing Research Methods. CCT Program (class assignment).   

Theocharaki, Danae (2020). CCT 505: Assignment #6 – Identifying Research Methods and Questions. CCT Program (class assignment). 

 

Web Sources & Links

Map/Table 1: Irvine, Martin. (2020). CCTP-711: Week 3: Introduction: The Human Symbolic Capacity: From Language and Symbol Systems to Technologies. CCT Program (course notes).

Image 1: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 2: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 3: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 4: ComputerHope. “When Was the First Computer Invented?” Computer Hope, 30 June 2020, www.computerhope.com/issues/ch000984.htm.

Image 5: Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

Image 6: Goksel-Canbek, N. & Mutlu, M. E. (2016). On the track of artificial intelligence: Learning with intelligent personal assistants. Journal of Human Sciences, 13(1), 592-601.

 

It was so interesting going through what we could call the history of computing. Through understanding the importance of semiotics and symbols, one can see the true origin of computing. I think we often forget the connection of the modern world with the past, in terms of truly conceptualising the fact that all dots connect to the present and current “status” of how things are. In the initial weeks we further delved into the meaning and representation of symbols in human beings (“the symbolic species”) that allowed certain behavioural, cultural and neurological characteristics to form and, over time, adapt to the specific moment in time and develop along with it.


Operators

Conditional/Boolean expression: any expression that breaks down to either true or false

Relational operators: work with 2 operands, returning true/false based on the relation of each to the other

  • Equality operator: “==”, used to evaluate the equality of operands
  • Conditional operator: “if” (block: statements grouped together, i.e. under an if-statement)
  • Else-clause: cannot be used without an if-statement; the condition of the if-statement is checked, true –> run the if-statement, false –> run the else-statement
  • Whatever is outside the block will be printed separately
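As a quick Python sketch of these notes (the variable names are my own, just for illustration):

```python
# Conditional/Boolean expressions and the if/else structure from the notes above.
temperature = 30

# Relational and equality operators return True or False.
print(temperature == 30)   # equality operator "==": prints True
print(temperature > 25)    # relational operator: prints True

# The condition of the if-statement is checked once:
# True -> the if-block runs; False -> the else-clause runs instead.
if temperature > 25:
    message = "warm"       # statements grouped together under the if-statement
else:
    message = "cool"       # the else-clause cannot exist without the if

print(message)             # prints warm; code outside the block always runs
```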

I like going through the inLearning videos because they are a good way to jog my memory on basic yet crucial reasoning behind coding logic and syntax. I think these videos are interesting because the explanations/break-downs focus a lot on that underlying logic.

Week 11

It was really interesting re-visiting this concept of what coding is. The emphasis placed on the order of things is crucial to understanding how programming is actually executed, the same way any system needs the necessary steps and procedures to function and “come to life”. In the video the instructor mentions how “Order of steps will impact your final product. Order matters” (inLearning). Like any other system we’ve seen, coding requires a correct sequence of steps, like anything else that is “technology”. As Evans explains: “The list of steps is the procedure; the act of following them is the process. A procedure that can be followed without any thought is called a mechanical procedure” (Evans, 2011). So what we see happening while the code is running is this mechanical procedure of ordered instructions and commands.

It is crucial to understand how important this order and syntax are, because that is the only way you can pinpoint an error, a typo, a bug, etc.
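A tiny sketch of “order matters”: the same two steps, applied in a different order, produce a different final product (the numbers are arbitrary, just for illustration):

```python
# Same two steps, different order, different result.
x = 10
x = x + 5    # step 1: add 5   -> 15
x = x * 2    # step 2: double  -> 30

y = 10
y = y * 2    # step 1: double  -> 20
y = y + 5    # step 2: add 5   -> 25

print(x, y)  # prints: 30 25 (the order of steps impacted the final product)
```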

 

 

More notes: IDE – Integrated Development Environments 

Xcode – develop apps for Apple products; a more sophisticated IDE, syntax palette

Visual Studio Code – initially designed for script languages, supports many more with extensions. Q: what is its unique feature called? similar to autocomplete 

Android Studio 

RubyMine

So are all applications and programs where you can run, type and test code considered IDEs? i.e. Wing for Python, Eclipse for Java, etc. 

How come some languages run only on certain programs and not others?

From app to app

As I was trying to ‘decode’ the writing questions for this week and going through the material, I was trying to think of a good example of a software feature that represents a symbolic-cognitive function we basically take for granted. I realized that the concept of “space” in terms of interface is something quite abstract when it comes to our phones and computers. I was on my phone switching from one app to another without comprehending what I was actually doing, but most definitely expecting the apps to automatically open one after the other, or switch from one to the next in no time. We have taken this movement and change from app to app, or one tab to the next, opening and closing programs, as if it is no big deal. But in reality what we are doing is switching from one understanding of a specific room, space or concept to the next, as if we are going from one physical store location to another, from one room of our house to another. Each store in town and each room in your house has a different use/meaning/understanding that you or your community have attributed to it. Similarly, every app represents something else: one is a game, another is an online store, another holds your stocks. We are changing one space for the next because that is what we also do in real life. You have to go to a different place to see your doctor than you would to buy a piece of furniture. Different spaces and different rooms satisfy different needs and wants, the same way each app we swap between does.

C.S. Peirce explains how a sign structure is basically a medium that “enables cognitive agents to go beyond the physical and perceptible properties of representations to their uses for interpretations which are not physical properties of representations themselves” (Irvine). We have an understanding of what it means to switch between apps, and when you have made that switch, let’s say from your Canvas app to Instagram, you have, most likely completely unconsciously, also switched your head-space, your behavior, your goals and expectations, because each app represents and stands for something else. There is also a lot to be said about the symbolic representation of each app and the meaning behind its specific shape, frame, coloring, etc. Even that has become so instilled in us that whenever there is a change or a software update, most of us are left shocked by the new appearance of an app because of our previous association with the past appearance, i.e. Google changing all its icons, or going from the older Instagram icon with the brown polaroid camera to the current one (and everything in between).

We use technology for a more simplified version of our lives, where everything becomes easier because everything truly is so easily accessible with simple movements of our fingertips. However, these versions exist because we have taken much more complicated physical and real-life representations and decoded them into an electronic, computational form, because they mean something to us: “All symbolic forms from speech to images in any medium must have perceptible features that ‘afford’ consistent inferences for recognizing the sign patterns that can be correlated with the meanings and uses understood by a community” (Irvine). And so we have taken this concept of a physical space and a physical human action and turned them into a digital world that unfolds itself without being second-guessed. You don’t really sit and think “hmm, I wonder how I’m going to get to this app…”, “I wonder how long it is going to take me to get to the library app…”. In non-digital life, we would probably be thinking “hmm, what time do I need to leave my house to go to the clothing store and then the library? What if there is an accident on the way?”. We have mastered this interface of basically being able to switch locations and tasks so fast without realizing that in that moment we are also mentally going from one concept to the next, from being in school on Zoom to shopping for food on Amazon Fresh, representations of what would be a physical classroom, with all of its associations, and an actual physical grocery store.

 

 

References

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles 

Martin Irvine, (new) From Cognitive Interfaces to Interaction Designs with Touch Screens

Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Excerpts from Introduction and Chapter 2.

 

HTMLing Tumblr

I remember my first true experience with HTML code was “back in the day” when Tumblr had a big presence in teenage girls’ lives. Of course the theme of your blog played a crucial role in the “vibe” you were going for and the experience you were trying to create for your visitors. Years later, and multiple websites later, it was really interesting and fun going back to where it all started, the initial steps of HTML. Working everything out from the beginning and practicing on code-generated exercises and quizzes (Khan Academy, inLearning, etc.) allows you to understand how and why a specific command or line of code transcribes and translates to a specific thing/action/button/title/etc. on a “live” website.

So in honor of my first HTML hands-on experience, I was curious to see the current approach Tumblr is taking to designing your own Tumblr theme, or even “messing with” a pre-designed one cataloged on the platform. Back then, touching up or altering themes on Tumblr wasn’t necessarily something that people did. However, other users on Tumblr or other online platforms and chat boards had suggestions, instructions and steps on how to change the code, add extra features, change certain characteristics or add cool ideas that your chosen theme did not previously have, allowing for a more “unique” and “interesting” appearance for your Tumblr.

Interestingly enough, Tumblr now has a whole page on how to create your own custom made HTML theme. 

This reminded me exactly of the Khan Academy video lessons and practical exercises that broke down the very initial, basic yet super crucial commands that need to be inserted into the browser in order for it to execute the code and translate the “computer-human-language” combo into a specific characteristic/feature, i.e. title, images, etc.

Below I’ve attached a screenshot of a much more recent Tumblr blog of mine that I had to create as part of research for a communications class that I combined with my thesis research in undergrad. I didn’t really delve into the more specific details and features of the theme I used, so I never really needed to open up the HTML code, and at first glance it looks completely unrecognisable. However, as you focus on breaking down and actually understanding the code, you can “see” how it will translate on the actual website without actually needing to “see” it on the website. Still, since it is not code you have written, or at this point have even altered, it does seem like something “foreign”.

These concepts and questions around “assimilation” or sense of ownership or even attachment to code are interesting to look into. I wonder how this might also have an underlying connection with different legal matters around the issue of code, or tech-language ownership and authentications? 

GPU – Graphic Processing Unit

These sources have highlighted the importance of a computer or system breakdown. The input, the memory, the CPU and the output all combine to form a function that seems so simple, i.e. a click of a key to display a letter, yet has to go through a plethora of pathways and steps to reach a certain type of output, display or functionality (Khan Academy). Nowadays, we are so consumed by and focused on our actual computers, or shall I say on the output displayed on our computer screens, that we completely ignore the actual processing that goes on behind the physical and visible computer(s) and makes it truly possible for us to consume whatever it is we are staring at on our screens.

Over the past few weeks we have highlighted the importance of data translating and transforming into various more “human” forms. For example, we see a photo of a swing and it reminds us of our childhood, whereas the computer is just translating bits into pixels and outputting them as small cubes of colors and patterns that form yet another image to be displayed on the screen. As Campbell-Kelly and Russ put it, this is a “familiar example of a formal system in which we can apply the rules, or procedures, for transforming strings of symbols without regard to their interpretation” (Irvine).

A GPU is a Graphics Processing Unit, one of the major components of a computer. It is a processor, or better yet a visual processor, that communicates back and forth with the monitor, using a series of transactions and instructions to “determine” what color each individual pixel should be and therefore what the overall displayed image should be. In the early 90s GPUs started becoming a hit with the appearance of 3D graphics and computer gaming (Luebke & Humphreys). A GPU is basically what every device, i.e. your laptop, phone, etc., uses to create an image and display it on the output, i.e. your laptop’s screen or your phone’s screen. The GPU receives information from the input that has been converted into binary information; it then receives instructions (strings of data) on what to do with that info from the CPU, and works with the stored memory in order to alter and execute them to produce a command/result/display (Khan Academy).

Graphics systems translate everything (images, graphics, etc.), aka the sign(s) (Denning), into shapes that have points and vectors, or what we’d consider triangles. The relationship (Denning): the GPU uses its memory, a computer graphics library, that takes the vertex points and connects them into edges and finally triangles. From memory, the GPU can ascribe a placement to a pixel in order to synthesize an image. The input to the GPU is the “description of a scene”. In order for it to process that scene, it has to break it down into those vertices that have been defined from the saved memory in the graphics system and expressed in a “common 3D coordinate system” that is input from and output for the user (i.e. the gamer) (Luebke & Humphreys). The observer (Denning): the GPU has the ability to compute each triangle’s color, texture, placement, depth, etc., based on the memory in the global coordinate system and the lights found in the scene, in order to properly display it on the screen (output).
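As a toy illustration of that triangle-to-pixel step, here is the kind of decision a GPU makes for every pixel, sketched in plain Python (a real GPU does this in parallel hardware; the function names and numbers are my own, just for illustration):

```python
# Toy version of one GPU rasterization decision: does a pixel center
# fall inside a 2D triangle (i.e., should this pixel be shaded)?

def edge(ax, ay, bx, by, px, py):
    # Signed-area ("edge function") test: the sign tells which side of
    # the edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def pixel_in_triangle(p, a, b, c):
    d1 = edge(a[0], a[1], b[0], b[1], p[0], p[1])
    d2 = edge(b[0], b[1], c[0], c[1], p[0], p[1])
    d3 = edge(c[0], c[1], a[0], a[1], p[0], p[1])
    # Inside if p is on the same side of all three edges.
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

# A triangle defined by three vertex points, as in a graphics library.
tri = ((0, 0), (10, 0), (0, 10))
print(pixel_in_triangle((2, 2), *tri))   # True: shade this pixel
print(pixel_in_triangle((9, 9), *tri))   # False: leave this pixel alone
```

Repeating that test (plus the color, texture and depth computations) for every pixel of every triangle is, in miniature, how the “description of a scene” becomes the image on your screen.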

 

One version of a GPU  

References:

Irvine, Martin. “A First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing”. (pg. 1 – 6)

Khan Academy: Introducing Computers

Luebke, David, and Greg Humphreys. “How GPUs Work.” Computer, vol. 40, no. 2, 2007, pp. 96–100., doi:10.1109/mc.2007.59.

“What Is a GPU and How Does It Work? – Gary Explains.” Android Authority, 24 May 2016, www.androidauthority.com/what-is-a-gpu-gary-explains-693542/. 

 

Photo code

I started off with the idea of hunting down the code for a photo on my own personal desktop. On a regular basis, no one really sits and thinks "Hmm, I wonder what my computer thinks of this image…" or "I wonder what this image looks like to my computer…". It is so "awe-striking" to come to terms with the fact that your device views this data as something completely different than the human eye does. It translates a code and transforms it into the combination of pixels, colors and formation that we see and recognize as something symbolic. It takes the e-information and decodes it into a data set, translated and transformed into a symbol of representation. I wanted to see how something so personal, a private photo that belongs to you on your computer and means something to you, is in reality a combination of transcribed, encoded matter that, combined all together, allows the image to be created into what I see.

I chose this photo that was on my desktop (I had it as my background at some point) that I took in Yokohama, Japan two years ago. To me, this photo encapsulates far more than it does for my computer.

We are viewing the same thing but actually seeing it differently. 

And if we wanted to take it a step further, I would even say that my phone camera that captured this instance also saw it in a very different "light" (pun intended) when trying to capture a real-time moment that results in the still representation of the amusement park. Even though you can distort the image, change it, photoshop it, etc., the specific captured moment is assigned/converted/translated into, and exists as, its own unique code. In the same way that we saw how ASCII, Unicode, etc. operate, where each symbol, i.e. a letter or an emoji, translates to and stands for a specific assigned characteristic/value, similarly this image, this digital data, has its own data correlation (of course pre-editing, since after editing it would have a new, different code ascribed to it with the new changes).
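The point above can be made concrete with a tiny sketch. This is a hypothetical 2x2 "photo" (not my actual image, whose file would hold millions of such values): to the computer it is just numbers, and each number can equally be viewed in decimal, binary or hexadecimal, the same way ASCII maps each letter to an assigned code.

```python
# A hypothetical 2x2 "photo" stored as RGB triplets (values 0-255 per channel).
tiny_photo = [
    [(255, 0, 0), (0, 255, 0)],      # top row: red pixel, green pixel
    [(0, 0, 255), (255, 255, 255)],  # bottom row: blue pixel, white pixel
]

# The red channel of the top-left pixel, in three equivalent notations:
value = tiny_photo[0][0][0]
print(value)       # decimal: 255
print(bin(value))  # binary:  0b11111111
print(hex(value))  # hex:     0xff

# "Editing" the photo changes the underlying code too:
tiny_photo[1][1] = (0, 0, 0)  # the white pixel becomes black
```

Editing one pixel rewrites its stored numbers, which is exactly why an edited photo carries a new, different code.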

References

Martin Irvine, “Introduction to Data Concepts and Data Types.”

Digital Character Sets: ASCII to Unicode

Ron White and Timothy Downs, How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007. Excerpt.

Emoji Charts

 

 

Representations

The word that comes to mind when talking about the translation of the physical to a symbolic system or structure, to a signal, whether in a more physical or a more electronically digitised form, is representation. In weeks 3 and 4 we discussed and saw the role of symbology, especially in terms of the depiction of a spoken word or meaning through a specifically assigned token. Similarly, these physical components, i.e. a photo, are an actual tangible screenshot of time, a single moment captured. However, that photo might look like a family member laughing, or a dog mid-jump catching its ball, or a scenic cliff view, but in reality it symbolises what we aren't really seeing. Its electronic and digital form will be that of pixels, colored and encoded by a software/computer/system to holistically portray what appears to us as that family member, dog, vista, etc. The digital/electronic versions of what is symbolised can be shared, transmitted, distributed and recomposed in a completely different place and time on/in another physical piece of matter, i.e. a smartphone, computer, etc.

    • Semiotic System: “a physical electronic medium is used to represent something not physical, but abstract patterns from our human symbolic, conceptual repertoire” (Irvine, 2) 

This kind of movement and transmission-recomposition is possible because of the assigned symbolic meaning and reasoning we have given to each of the things being transmitted, symbolically mapping their path from the physical/perceptible components, through the unseen layer of metamorphosing into a physical signal unit, to another place and time. The actual living relative is represented in a captured instant of time, which in its own right represents the whole of a coded pattern composed of binary, pixels, colors, etc., going as far as what it represents (after it has been transmitted and recomposed in another place/time) to the person, or even computer, that has received it.

    • “representations as stable patterns in and through a physical medium in space and time” (Irvine, 6) 

I’d like to take it a step further and, if I may, question the existence of these digitised representations and symbols. Take a Word document: it can be printed so I have an actual physical copy of it (without going into what that symbolises, i.e. language, letters, etc.), but it can also remain in its electronic form. But what is that form? Where does it exist and how? Is it a composition of bits and pieces, a physical signal unit, taking up binary numbers on a hard drive? If it’s “on the cloud”, is it taking up some sort of electronic matter of digital space? What does it constitute, what does it represent as part of the digital world?
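Part of the answer can be sketched directly: whatever medium it sits on, the document's characters exist as assigned numbers (here via UTF-8), and those numbers as patterns of bits on some physical substrate. A minimal illustration:

```python
# What a text fragment "is" to the machine: character codes, then bits.
text = "word document"
data = text.encode("utf-8")                 # the characters as numeric byte values
bits = " ".join(f"{b:08b}" for b in data)   # the same bytes as 8-bit patterns

print(data[0])   # 119 -- the assigned code for "w"
print(bits[:8])  # 01110111 -- the same value as a bit pattern
```

Whether those bit patterns are held as magnetic regions on a hard drive or as charge in a data center "cloud" server, the representation itself is the same stable pattern in a physical medium, which is exactly Irvine's point above.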

 

References

Daniel Hillis, The Pattern on the Stone: The Simple Ideas That Make Computers Work (New York, Basic Books: 1998; rev. 2015)

Denning and Martell. Great Principles of Computing. Chap. 3, “Information,” 35-57.

Martin Irvine, “Introduction to the Technical Theory of ‘Information’ (Information Theory + Semiotics)

 

 

Language tree

– “Natural language”: constitutes a part of what we would call “human languages” (vs. “computing” or “artificial” languages), but ultimately comes down to what you were born into, or more specifically, what “language” you were born into. 

(i.e. what language did your parents/guardians speak to you when you were a newborn? What was the language of the cultural community you were born into? etc.)

Essential features: 

– A language would have to be able to create/produce/entail: sound (phonology), form/“shape” (morphology), word meaning (lexicon, dictionary, vocabulary), order (syntax), meaning of combined expressions (semantics), expression in context (pragmatics) 

– Grammar, structural features: generative, rules & constraints 

– Combinatoriality: rules & procedures, building blocks; you need to combine things in order to form something 

– Recursion: looping, nesting, embedding 

– Discrete Infinity, Creativity, Productivity: “combinatorial function”, a myriad of possibilities of what you can form 

– Intersubjectivity, collective cognition: shared; language does not belong to one person, but it also could not constitute a language if only one person spoke it (excluding languages that are unfortunately dying/getting lost) 
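Two of the features listed above, combinatoriality and recursion, can be sketched with a toy grammar. The word lists and the embedding phrase are my own invented examples, not from the readings:

```python
import itertools

# Combinatoriality: a few building blocks plus a combining rule
# yield many distinct sentences (2 x 2 x 2 = 8).
subjects = ["the dog", "my friend"]
verbs = ["sees", "hears"]
objects = ["the cat", "a bird"]
sentences = [f"{s} {v} {o}"
             for s, v, o in itertools.product(subjects, verbs, objects)]

# Recursion: a clause can embed inside a clause, as deep as we like
# ("discrete infinity" -- finite rules, unbounded output).
def embed(sentence, depth):
    for _ in range(depth):
        sentence = f"I think that {sentence}"
    return sentence

print(len(sentences))          # 8
print(embed(sentences[0], 2))  # I think that I think that the dog sees the cat
```

The same two mechanisms, finite building blocks plus recursive combination, are what give natural language its productivity.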

 

 

It was interesting comparing the results from the XLE-Web parser and seeing how deep you can delve into the analysis of a sentence. I added the sentence “My bark is worse than my bite.” (Grandmother Willow, Pocahontas) and was surprised to see a different breakdown than I would have imagined based on my interpretation of the video. However, this speaks to the complexity of language and testifies to its aforementioned features. I was surprised because you can interpret language in so many different ways, given the plethora of combinations that can occur/be formed. Many times, especially when it comes to our natural language, we don’t often question the detailed analysis of the sentences and phrases we form when we speak. Everything from our intonation to our tone of voice to vocal expressions will usually naturally establish itself in specific circumstances, moments and occasions. The same sentence, or meaning of a sentence, can be expressed in a variety of ways, allowing the syntax, grammar and other characteristics of the specific sentence to shift. 

For example, in “My bark is worse than my bite.” you can consider both “My bark” and “my bite” as two distinct NPs, where “My” and “my” are the determiners and “bark” and “bite” are the nouns. “is worse than my bite” would then be the VP, with “is” as the verb, and so on and so forth, with the possibility for that to be broken down into different language components and sentence properties. 
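That breakdown can be written out as a nested structure, which is roughly how a parser represents a tree internally. The exact labels (AP, PP, etc.) are one plausible analysis, not the XLE-Web output itself:

```python
# One possible parse of the sentence as nested (label, children...) tuples.
tree = (
    "S",
    ("NP", ("Det", "My"), ("N", "bark")),
    ("VP",
        ("V", "is"),
        ("AP", ("A", "worse"),
            ("PP", ("P", "than"),
                ("NP", ("Det", "my"), ("N", "bite"))))),
)

def leaves(node):
    """Read the sentence back off the tree, left to right."""
    if isinstance(node, str):
        return [node]
    words = []
    for child in node[1:]:
        words.extend(leaves(child))
    return words

print(" ".join(leaves(tree)))  # My bark is worse than my bite
```

Walking the leaves recovers the original word order, while the nesting records the constituent structure that the flat sentence hides.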

“Age is foolish and forgetful when it underestimates youth.” – Dumbledore, Harry Potter and the Half-Blood Prince. 

“When it comes to snacking, we tend to follow our instincts.” – Second Nature, Wholesome Medley packaging description slogan. 

These are some more examples of the different combinations of a sentence’s grammatical properties. It was interesting trying to break them down following the Tree Diagramming Practice video before seeing the “results”/breakdown on the XLE-Web parser, and noticing how different notions can form different views even in language. 

 

References 

Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts.

Irvine, M. Introduction to Linguistics, Language, and Symbolic Cognition: Key Concepts

Pinker, S. (2012). Linguistics as a Window to Understanding the Brain. Youtube Video https://www.youtube.com/watch?v=Q-B_ONJIEcE

Pinker, S. (2011) “Language as a Window into Human Nature” 

Pinker, S. (1999). Words and Rules: The Ingredients of Language. New York, Basic Books, 1999. Excerpt, Chapter 1.

Tuzy, F. (2014) Tree Diagramming Practice. https://www.youtube.com/watch?v=jmrmHnXruFw

XLE-Web