Author Archives: Alie Fordyce

Fordyce Alie_Evolutionary Computation

Evolutionary Computation:

Exemplifying the Interdependency Between Human and Machine

Image 1 source: “EQ in the AI Age: Humans and Machines Must Work Together.” Albert.

Abstract. What we do in computing is fundamentally human; computers are a product of us. Evolutionary computation is a framework for problem solving based on the natural biological process of evolution and self-adaptation. Computation, in turn, plays a large role in societal decision-making. Therefore, understanding the mutually influencing relationship between human and machine is critical for determining how the output of computation informs human decision-making and the impact it has on internal bias within the machine.

 

  1. Introduction

            Technology is one of society's most rapidly evolving forces. Since the harnessing of electricity in the 18th century (although technological advancement greatly preceded electricity), we have often discussed this evolution as an independent, linear progression. This paper uses evolutionary computation to argue that this advancement is not a linear progression but rather a development sequence deeply intertwined with the human condition and biological evolution. How we talk about computational development and emerging technology is more appropriately framed within the context of human and biological processes. The clash between human and machine continues to grow as privacy and individual agency are threatened by ever-expanding applications of artificial intelligence in daily life. This division can be broken down by showing how deeply intertwined and interdependent technology and humanity are, and how each is shaped by the other. Evolutionary computation is an important approach for exploring this codependent relationship, helping to reveal the interdependency evolutionary algorithms have with societal development.

Image 2 Source: Bentley, Peter. “Aspects of Evolutionary Design by Computers.”

Evolutionary computation, as shown above, is the precise intersection between evolutionary biology and computer science.

 

  2. Evolutionary Computation: An Overview

            Evolutionary computation refers to the intersection between biological processes and computer science. In An Overview of Evolutionary Computation it is defined as:

Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computer-based problem-solving systems. There are a variety of evolutionary models that have been proposed and studied which we refer to as evolutionary algorithms. They share a common conceptual base of simulating the evolution of individual structures via processes of selection and reproduction. These processes depend on the perceived performance (fitness) of the individual structures as defined by an environment. (Spears et al. 1993, 442)

Evolutionary computation, and more precisely evolutionary algorithms, are products of biological principles: “structures that evolve according to rules of selection and other operators, such as recombination and mutation” (Spears et al. 1993, 442). Selection is driven by the fitness level of each individual. Below is an example of a typical evolutionary algorithm:

Image 3 source: Spears, William M, et al. “An Overview of Evolutionary Computation.”

The above algorithm demonstrates a random population that is evaluated based on the fitness of each individual in relation to its environment. Selection has two parts: 1) parent selection and 2) survival (Spears et al. 1993, 443). Parent selection is a function deciding which individuals become parents and how many children they will have; children are products of information exchange between parents (recombination) and of mutation; children are then tested for survival under the relevant environmental conditions. An example of this would be an automotive manufacturer designing a new engine and fuel system, trying to optimize 1) performance, 2) reliability, and 3) gas mileage, while 4) minimizing emissions (Spears et al. 1993, 443). In this example, it is assumed that an engine simulation unit can test various engines and output a single value rating each one's fitness. The initial step creates a population of possible engines; each engine receives a fitness score based on the various metrics; parent selection then decides which engines have ‘children’ and how many (products of recombination between parent engines); and finally it is determined which engines ‘survive’. This provides a basic illustration of the evolutionary algorithm framework: evaluation, selection, recombination, mutation, and survival (Spears et al. 1993, 445). Below are two simplified diagrams illustrating the evolutionary algorithm framework:

Image 4 source: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Image 5 source: Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.”
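The evaluation, selection, recombination, mutation, and survival loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical stand-in for the engine-design example: the `fitness()` function is an assumed toy objective, not a real engine simulation unit.

```python
import random

# Hypothetical stand-in for the engine-simulation unit: it returns a
# single score per candidate (higher is better). Here we just reward
# parameters close to an assumed optimum of 0.5.
def fitness(engine):
    return -sum((p - 0.5) ** 2 for p in engine)

def evolve(pop_size=20, n_params=4, generations=50):
    # 1. initialize a random population of candidate "engines"
    population = [[random.random() for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. evaluation: score every individual
        ranked = sorted(population, key=fitness, reverse=True)
        # 3. parent selection: the fitter half become parents
        parents = ranked[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            # 4. recombination: each parameter comes from either parent
            child = [random.choice(pair) for pair in zip(mom, dad)]
            # 5. mutation: small random perturbation
            child = [p + random.gauss(0, 0.05) for p in child]
            children.append(child)
        # 6. survival: the children replace the old generation
        population = children
    return max(population, key=fitness)

best = evolve()
```

Over successive generations the population drifts toward higher-fitness designs, exactly the evaluate/select/recombine/mutate/survive cycle in the diagrams.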

Generally, the unique elements of evolution strategies in computation are recombination and mutation of both the population and the parameters of the algorithm (Rudolph 2015). This allows for self-adaptation, mimicking the biological evolutionary process.
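The self-adaptation point can itself be sketched: in a classic evolution strategy, the mutation step size evolves along with the solution. Everything below (the sphere objective, the (1, λ) scheme, the parameter values) is an illustrative assumption, not drawn from the sources cited.

```python
import math
import random

# Toy objective to minimize (an assumed stand-in problem).
def sphere(x):
    return sum(v * v for v in x)

def es_self_adaptive(dim=5, offspring=10, generations=200):
    parent = [random.uniform(-1, 1) for _ in range(dim)]
    sigma = 0.5                      # strategy parameter: it evolves too
    tau = 1 / math.sqrt(dim)         # learning rate for sigma
    for _ in range(generations):
        children = []
        for _ in range(offspring):
            # mutate the strategy parameter first ...
            child_sigma = sigma * math.exp(tau * random.gauss(0, 1))
            # ... then use it to mutate the solution itself
            child = [v + child_sigma * random.gauss(0, 1) for v in parent]
            children.append((child, child_sigma))
        # comma selection: the best child replaces the parent
        parent, sigma = min(children, key=lambda c: sphere(c[0]))
    return parent, sigma
```

Because sigma is carried and mutated with each individual, the algorithm tunes its own mutation strength as it runs: the self-adaptation that mimics the biological evolutionary process.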

            One key type of evolutionary algorithm is the genetic algorithm. Initially developed by Holland (1975), genetic algorithms play an important role in studying complex adaptive systems. Below is an example of a typical genetic algorithm:

Image 6 source: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Adaptation is most familiarly known as a biological process of survival; organisms mutate and rearrange genetic material, and successful adaptations lead to a high probability of survival in a dynamic environment (Holland 1975). Holland, notably, built a mathematical model representing nonlinear complex interactions mimicking those of genetic mutation. He applied his model to different fields – psychology, economics, artificial intelligence – demonstrating its universality and challenging accepted dogma in mathematical genetics (Holland 1975). He explored the models used in naturally occurring processes, representing the properties of coadaptation and coevolution. In genetic algorithms, bit strings represent single individuals, and the selection process chooses two individuals (two bit strings) from the population for the next offspring recombination, with probability proportional to their relative fitness scores (Rudolph 2015). The ‘offspring’ receives genetic material from both parent individuals, is mutated, and replaces the parent generation. This continues until a specific termination criterion is met (Rudolph 2015).
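A minimal sketch of this bit-string genetic algorithm, assuming the textbook "one-max" problem (maximize the number of 1-bits) as a stand-in fitness function:

```python
import random

# Bit-string GA sketch: fitness-proportional parent selection, one-point
# crossover, per-bit mutation, and generational replacement. The one-max
# objective (count the 1-bits) is an assumed toy problem.
def ga_onemax(bits=32, pop_size=30, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        next_gen = []
        while len(next_gen) < pop_size:
            # select two parent bit strings, weighted by relative fitness
            mom, dad = random.choices(
                pop, weights=[sum(p) + 1 for p in pop], k=2)
            # one-point crossover: offspring gets material from both parents
            cut = random.randrange(1, bits)
            child = mom[:cut] + dad[cut:]
            # mutation: flip each bit with small probability
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            next_gen.append(child)
        pop = next_gen              # offspring replace the parent generation
    return max(pop, key=sum)

best = ga_onemax()
```

Each generation, bit strings with more 1's are more likely to become parents, so the population's bit strings gradually fill with 1's.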

            The following gif, developed by Soni, provides an example of an evolutionary algorithm at work. The gif displays multiple generations of dinosaurs learning to walk by manipulating body structure and muscular force. The right-most dinosaur depicts the largest number of generations and, hence, the most optimized and well-functioning walking form (Soni 2018). The earlier generations of dinosaurs were not able to walk effectively, but as the evolutionary algorithm mutated the population over generations, it eventually reached an optimized state.

Media 7 source: Soni, Devin. “Introduction to Evolutionary Algorithms.”

            As described, evolutionary computation is heavily influenced by the natural process of evolution. However, since humans dictate the metrics and success criteria for the algorithm, there is a direct link between humanity and the machine algorithm, realized through the evolutionary process. Thus, the outcome from evolutionary computation is both human and machine driven.

 

  3. Evolutionary Computation and Human Symbolic Systems

Humans, in turn, have been heavily influenced by the outputs and process of evolutionary computation. Agre, in Computation and Human Experience, argues for a critical technical practice in the AI community (Agre 1997, xii). He writes,

What is needed, I will argue, is a critical technical practice – a technical practice for which critical reflection upon the practice is part of the practice itself. Mistaken ideas about human nature lead to recurring patterns of technical difficulty; critical analysis of the mistakes contributes to a recognition of the difficulties. This is obvious enough for AI projects that seek to model human life, but it is also true for AI projects that use ideas about human beings as a heuristic resource for purely technical ends. (Agre 1997, xii)

Agre depicts the complex and imperfect relationship humans and machines share. Since the earliest iterations of modern technology, critiques and therefore corrections of machines have been a constant. This editing process comes from humans and thus is implicitly biased with human characteristics that inevitably fuel the outputs of computation. Computation has been used as a tool to explain human nature, as Agre explains:

 How, then, can computation explain things about people? The most common proposal is that human beings instantiate some mechanism, psychology tries to discover what that mechanism is, and success is judged by matching the input-output behavior of hypothesized mechanisms to the input-output behavior of human beings… This view of computational explanation had great appeal in the early days of computational psychology, since it promised to bring precision to a discipline that had long suffered from vagueness and expressive poverty of its concepts. (Agre 1997, 17-18)

The key here is recognizing how humans are affected by these computational explanations. Computation, broadly speaking, impacts society by making assertions about how humans do and do not behave. These assertions, or predictions, have real-life impacts on decision-making processes. In this way, computation, and specifically evolutionary computation, makes decisions for humans.

Evolutionary computation is a population-based problem solver: an initial set of options (as in the automotive design example of deciding on a new optimal engine and fuel system) undergoes a trial-and-error process. Within the evolutionary algorithm, new generations are produced by eliminating characteristics unfit for survival. This, biologically speaking, reflects the processes of mutation and natural selection. The outputs of evolutionary algorithms are the individuals most fit for survival. This form of computation is used in almost all fields and, therefore, heavily influences decisions in all areas of humanity.

Chandler, in Semiotics: The Basics, offers insight into the concept of human symbolic systems:

All meaningful phenomena (including words and images) are signs. To interpret something is to treat it as a sign. An experience is mediated by signs, and communication depends on them… As a species we seem to be driven by a desire to make meanings: above all, we are surely Homo significans – meaning-makers. We cannot avoid interpreting things, and, in doing so, we treat them as ‘signs’. (Chandler 2018, 2-11)

Chandler makes it clear that throughout history human beings have been innately prone to make meaning of everything that surrounds them. Computation and machine learning have fundamentally changed the relationship humans share with meaning and symbolic representation. In creating machinery that can make decisions in place of humans, a co-dependency has been created, altering the way communication occurs and how meaning is made. Furthermore, Deacon suggests that computation is a later generation of symbolic representation, explaining that:

We inhabit a world full of abstractions, impossibilities, and paradoxes… In what other species could individuals ever be troubled by the fact that they do not recall the way things were before they were born… The doorway into this virtual world was opened to us alone by the evolution of language, because language is not merely a mode of communication, it is also the outward expression of an unusual mode of thought – symbolic representation… symbolic thought does not come innately built in, but develops by internalizing the symbolic process that underlies language. So species that have not acquired the ability to communicate symbolically cannot have acquired the ability to think this way either. (Deacon 1997, 21-22)

Computation is part of the evolution of language and of language's purpose as symbolic representation. As such, computation has altered what makes us fundamentally human: our cognitive ability and the form in which we communicate. Herein lies the mutually influencing relationship between human and machine: machines are created by humans, and those machines influence the systems by which humans communicate, creating a mutual dependence and a circular evolutionary structure.

 

  4. Conclusion

Evolutionary computation is a framework for problem solving based on the natural biological process of evolution and self-adaptation. Computation, in turn, informs much of societal decision-making and replaces many traditional forms of communication. Therefore, understanding the mutually influencing relationship between human and machine is critical for determining how the output of computation informs human decision-making and how the interdependency of the two plays a role in their outcomes.

 

 

__________________________________________________________________________________

References

Agre, Philip. Computation and Human Experience. Cambridge University Press, 1997.

Bentley, Peter. “Aspects of Evolutionary Design by Computers.” UCL Department of Computer Science, www0.cs.ucl.ac.uk/staff/P.Bentley/wc3paper.

Chandler, Daniel. Semiotics: the Basics. 3rd ed., Routledge, 2018.

Deacon, Terrence William. The Symbolic Species: the Co-Evolution of Language and the Brain. International Society for Science and Religion, 1997.

“EQ in the AI Age: Humans and Machines Must Work Together.” Albert, Marketics, 26 Oct. 2019, marketics.ai/blog/humans-and-machines-must-work-together/.

Holland, John H. Adaptation in Natural and Artificial Systems. The University of Michigan Press, 1975.

Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.” Technische Universität Dortmund, 2015, ls11-www.cs.tu-dortmund.de/rudolph/ea.

Soni, Devin. “Introduction to Evolutionary Algorithms.” Towards Data Science, Medium, 18 Feb. 2018, towardsdatascience.com/introduction-to-evolutionary-algorithms-a8594b484ac.

Spears, William M., et al. “An Overview of Evolutionary Computation.” Machine Learning: ECML-93 (European Conference on Machine Learning), Lecture Notes in Computer Science, vol. 667, Springer, 1993, pp. 442–459.

 

Media Citations

Image 1: “EQ in the AI Age: Humans and Machines Must Work Together.” Albert.

Image 2: Bentley, Peter. “Aspects of Evolutionary Design by Computers.”

Image 3: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Image 4: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Image 5: Rudolph, Günter. “EVOLUTIONARY ALGORITHMS.”

Image 6: Spears, William M, et al. “An Overview of Evolutionary Computation.”

Media 7: Soni, Devin. “Introduction to Evolutionary Algorithms.”

Fordyce Week 13

My main learning achievements of the semester so far have been:

  1. General understanding/introduction to concepts that create the meaning of code and the history thereof
  2. Deblackboxing of some coding languages (e.g. HTML, CSS, Python)
  3. History of computing and how technology has advanced
  4. How and why programming languages are designed the way they are
  5. Basic understanding of methods and concepts from a variety of fields that feed off of computing (e.g. design thinking, systems thinking, semiotics, cognitive science, etc.)

The class has covered many topics within the field of computing and the meaning of code, but what I found most rewarding was relating the technical concepts to concepts of human representation. It’s interesting to humanize computing in that way: as a reflection of human thought processes, which in fact it is. While diving into how computing has progressed as a field – from the computer Professor Irvine wrote his thesis on to the modern computers we use today (or at least what we consider modern in 2020) – it’s clear that the core purpose of computing has remained the same. Computing is in a way an extension of the human brain; it stores information (cognitive off-loading), performs calculations, and retrieves and processes information – all in a more efficient and oftentimes more reliable way than humans. Computing is responsible for much of why humans have become such advanced animals. What has changed since the conception of the first Turing machine is the increased efficiency with which processes occur and the expanded set of tasks a computer can complete.

In recent weeks we have moved on to discuss artificial intelligence (we even viewed a video of a robot communicating independently in an interview without direct instruction) and the ways in which this will further advance our notion of computing. We are teaching computers to think for themselves. It’s a strange and interesting new landscape for the information that humans will have access to.

At CCT I would like to continue to build on our AI learning and conduct research on the ethics and responsible implementation of AI in our society. There is an interesting field of research covering the intersection of artificial intelligence and societal impact, one that continues to evolve quickly as technology advances. Some of the larger tech firms and some smaller foundations are conducting extremely interesting research on this topic:

  1. Google’s PAIR group: https://research.google/teams/brain/pair/
  2. Facebook’s FAIR group: https://ai.facebook.com/
  3. Microsoft’s FATE group: https://www.microsoft.com/en-us/research/theme/fate/
  4. AI Foundation: https://aifoundation.com/
  5. Center for Humane Technology: https://www.humanetech.com/
  6. AI Now Institute: https://ainowinstitute.org/

The intersection between societal impact and emerging tech that is growing today is especially interesting because it almost represents a full-circle effect in the history of technological development that we have discussed in class. Computing was once developed to enhance human capability and, while that is still the goal today, it has strayed slightly, concentrating its capabilities in a few large corporations. Responsible AI brings the goals of computing (and emerging tech more broadly) back to the masses of people it aims to serve.

Professor Irvine’s course has taken us through how computer systems and digital information are based on basic human symbolic capabilities, taught us a basic level of programming, and helped us apply that knowledge to being the middle man between the technically-leaning and non-technically-leaning fields – a role whose importance is too often overlooked.

Fordyce Week 12

Computing is rooted in binary questions. Compiling and grouping these binary questions has given us an immense amount of information that we have used to gather even more information. Computation is precision work and differs from human communication in that there is no body language to read to further understand a message; it is more black and white in that way.

While doing the educational learning videos, it became clear to me that Python is the most useful ‘blanket’ language for beginners. It offers a more straightforward platform for engaging with coding. Some key concepts we have now gone over and that I have learned about are:

Source code: the human-readable text – a string of words, numbers, and symbols – in which a computer program is written (e.g., HTML)

Executable code: what causes the computer to perform the specific tasks described in the source code (a file of instructions that can be directly executed by the CPU).
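The source/executable distinction can be seen directly in Python. (Python compiles to bytecode for a virtual machine rather than to native CPU instructions, but the split between human-readable text and machine-executable instructions is the same.)

```python
import dis

# Source code is just text: a string of symbols a human can read.
source = "result = 6 * 7"

# compile() turns that text into a code object: the executable form
# that Python's virtual machine actually runs.
code_obj = compile(source, "<example>", "exec")

namespace = {}
exec(code_obj, namespace)          # the machine runs the compiled code
print(namespace["result"])         # -> 42
dis.dis(code_obj)                  # lists the low-level instructions
```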

The readings this week made me think about the future of coding. As AI continues to develop and become increasingly self-sufficient, how will programming change? AI could come to a point where it can independently do all of its own computing and write its own code. Will we get to a point where we don’t even need to learn to code anymore?

An interesting article on this: https://news.mit.edu/2019/toward-artificial-intelligence-that-learns-to-write-code-0614

 

References

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines.

LinkedIn Learning. Programming Foundations: Fundamentals.

Fordyce Week 11

This week I started with the Computational Thinking reading. It was a nice introduction and prelude to the more hands-on/practical learning that followed. Notably, it reads:

“Computational thinking is thinking recursively. It is parallel processing. It is interpreting code as data and data as code. It is type checking as the generalization of dimensional analysis. It is recognizing both the virtues and the dangers of aliasing or giving someone or something more than one name. It is recognizing both the cost and power of indirect addressing and procedure call. It is judging a program not just for correctness and efficiency but for aesthetics, and a system’s design for simplicity and elegance” (Wing 33)

Wing writes about computational thinking as more of a style of thinking, rather than as a specific act. Computational thinking is “planning, learning, and scheduling in the presence of uncertainty” (Wing 33). I appreciate having this broader understanding of what it means to think like a computer scientist because it breaks down the notion of the more rigid narrative that it refers only to a strict programming mindset.

Furthermore, Evans writes about how automatic computing has “radically [changed] how humans solve problems, and even the kinds of problems we can imagine solving” (Evans 1). He also argues why it’s important that, especially today, most people have an understanding of basic computing: “1. Nearly all of the most exciting and important technologies, arts, and sciences of today and tomorrow are driven by computing. 2. Understanding computing illuminates deep insights and questions into the nature of our minds, our culture, and our universe” (Evans 1). This demonstrates how all fields are evolving under the lens of technology; technological practice has in many ways become the staple for all development, whether in the arts or sciences (an interesting emerging area is VR art in museums).

The readings and lessons this week have clarified a lot of these concepts in relation to “code” concepts. Python is an interesting language to become familiar with because it is applicable to fields like business – it is relatable to fields outside of computer science and tech more broadly.

I wonder if, in the near future, it will be possible at all for any field to be separated from the concepts of computing and code. Art and art sharing are now heavily reliant on technology, among many other fields that have historically stood apart from it.

 

References:

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. 2011 edition.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

https://www.linkedin.com/learning/programming-foundations-fundamentals-3

Fordyce Week 10

Human-Computer Interaction (HCI) is a field that only continues to grow. Computing systems were first forms of calculating machines, but with development they became symbol processors with highly advanced user interfaces, allowing a more fluid relationship between the user and the input/output of the computer. The way we use computers today has made them near extensions of our physical beings. In Myers’s article, he writes, “Even the World Wide Web is a direct result of HCI research: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse” (Myers). Computing systems are what allow the information to exist, but HCI research provides the developments in computation that make it easily accessible to users. Without HCI, most of us wouldn’t be able to access the information. Myers helps delineate the conceptual leaps and “explosive growth” that have occurred in the HCI field – he provides a useful timeline for the development of the everyday things we have become extremely accustomed to:

This graphic is useful for understanding which core elements of computing systems were developed through HCI research. Something as simple as the “direct manipulation interface” – where we use our mouse to move objects on a screen (in other words, being able to grab objects, move them, and change their size) – had to be developed as an interaction design concept. Everything seems so obvious now because of the ease with which we do these things, but they weren’t originally obvious concepts to develop. Most of what I did to create this very post originated as some kind of interaction concept.

As Irvine explains in his paper, “the history of designs for books, libraries, and techniques for linking one section of text to another, or one book to another… have long histories, and many of the early concepts and techniques underlie our more recent concepts for hypertext and hyperlinking in GUI interfaces” (Irvine). It’s interesting to compare our modern computational practices with historical means of communication and idea sharing – much of it is rooted in the same concepts, only today it exists in a more machine-like, efficient form. Computation-specific symbolic cognition (how we understand symbols, generally speaking) has become a kind of learned language because of how second nature it has become.

 

 

References

Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

Martin Irvine, Introduction to Symbolic-Cognitive Interfaces: History of Design Principles (intro essay).

Fordyce Week 9

HTML and CSS are two of the cornerstones of coding methods for building webpages. HTML provides the structure of the webpage and is the core skeleton of any webpage (needed for any kind of webpage), while CSS adds a layer of layout and visual effects. JavaScript is the next level on top of CSS, making the webpage interactive and controlling the behavior of its elements (Cox). These languages are how we communicate with software, directing it to produce the outcome we want.

An interesting tool on Google Chrome is the “inspect” feature, which allows you to look at the underlying code of any webpage. For example:

It’s interesting to look through this feature because it helps explain a lot about how elements are put together and the complexity and occasional (surprising) simplicity required to build webpages. It’s also helpful when learning the various rules and appropriate etiquette to coding format.

In addition, an extremely interesting field of code (especially when learning how elements are built from lines of code, and how JavaScript and CSS are layered on top of HTML) is “code as art”. A coder, Diana Smith, made the following image entirely out of code (on the right is the ‘inspected’ view of the code used) (Smith).

Below, I’ve shared some of my own HTML/CSS work – a silly shark game website.

 

 

References

Cox, Lindsay Kolowich. “HubSpot”. Nov 7th, 2018.  https://blog.hubspot.com/marketing/web-design-html-css-javascript

Smith, Diana. “Pure CSS Oil Painting – by Diana Smith Aka CyanHarlow.” Diana Smith, diana-adrianne.com/purecss-francine/.

W3Schools.com, “Try Out HTML Code”

Fordyce, Week 8

This week we are reminded of the various computer system design principle definitions. Put simply (almost too simply), “the modern computer… [is rooted] in the themes of representation and of automatic methods of symbolic transformations” (Irvine, 1). Denning narrows in on the study of information processes in uncovering the relationship between information, symbols, and computing. Denning points out the problem with leaving ‘processing’ out of the equation of ‘data’ (Irvine). ‘Data’ and ‘data processing’ are not the same. He clarifies, “information consists of (1) a sign, which is a physical manifestation or inscription, (2) a relationship, which is the association between the sign and what it stands for, and (3) an observer, who learns the relationship from a community and holds on to it” (Irvine, 4).

Random access memory (RAM) is one of the basic subsystems of a computer. There are three general categories of computer subsystems: (1) the central processing unit, (2) the main memory, and (3) the input/output subsystem. RAM is a computer’s short-term memory – it is fast but temporary, and it is part of the computer’s memory subsystem (Villinger). It’s useful for applications that are running while the computer is on, but the memory is lost once the computer is shut off (e.g., it is useful for a web browser). An SSD is the computer’s long-term memory, for when things are saved. RAM can be described using a physical desktop analogy: “your working space – where you scribble on something immediately – is the top of the desk, where you want everything within arm’s reach, and you want no delay in finding anything. That’s RAM. In contrast, if you want to keep anything to work on later, you put it into a desk drawer – or store it on a hard disk, either locally or in the cloud” (Villinger).

How have RAM and SSDs changed our notion of real-life memory? Have we become more reliant on computer memory in place of our own?

 

References:

Irvine, Martin. “A First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing”. (pg. 1 – 6)

Villinger, Sandro. “What is RAM and Why is it Important?” https://www.avast.com/c-what-is-ram-memory (November, 2019).

 

Fordyce, Week 7

For this week, after reading Irvine’s introduction, I watched the Code.org Instagram video. The basis of Instagram lies around the concept that visual communication is effective. But how do we display images? Through pixels. Individual pixels aren’t actually easily visible (Code.org), but the whole sum of pixels grouped together creates a comprehensible image. Image quality keeps improving because, as we innovate, pixels can be grouped closer and closer together (Code.org). An image file at its core is actually made of bits – in other words, just 1’s and 0’s (Code.org). Irvine helps us distinguish between data and E-information – data being chunks of bytes (made up of grouped bits) assigned to a specific computable type, and E-information being binary units (bits and bytes) assigned to computer system memory (Irvine).
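The pixels-to-bits idea can be made concrete: a common image format stores one byte (8 bits) per red, green, and blue channel of each pixel. A short sketch:

```python
# One orange-ish pixel as RGB values (0-255 per channel), and the raw
# 1's and 0's an image file ultimately stores for it.
pixel = (255, 165, 0)
as_bits = "".join(format(channel, "08b") for channel in pixel)
print(as_bits)        # -> 111111111010010100000000
print(len(as_bits))   # -> 24: 24 bits = 3 bytes for a single pixel
```

A megapixel photo multiplies this by a million: millions of bytes of pure 1's and 0's that our screens turn back into a comprehensible image.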

Below is an image of my hometown, Basel, Switzerland:

The following is its “inspected” code:

It’s interesting to look at these different representations of the same data. In one instance we see a pixilated, visually appealing representation of a place. In the other, we see the coded format of what makes the visual data. 

The video on how smartphone cameras work was also very interesting. Because of smartphones and increased accessibility to camera technology, the number of photographs humans take is unfathomably large. The CPU plays a central role in the process of capturing an image on a smartphone (Branch Education). The process of taking and saving a photograph on your phone is actually quite complicated (it has many steps), but we are able to easily snap moments of our everyday lives without thinking about it. Much like how our brain functions in ways we aren’t aware of, the CPU acts as a human-like central control unit for the smartphone’s procedures. And much like how humans process data through representation, a camera must do so with light sensors to save an image. It has been most useful to think about all data as representation and to use that to understand how different kinds of data can be shared in different formats.

References

Martin Irvine, “Introduction to Data Concepts and Data Types.”

Images, Pixels and RGB (video lesson from Code.org, by co-founder of Instagram)

How do Smart Phone Cameras Work? (video lesson, Branch Education)

Fordyce, Week 6

 

This week focuses on ‘information’ more broadly. What is the technical meaning of information? What are structures of information? This understanding is essential to the meaning of code. Irvine’s introduction helps to delineate some of these topics: information, once simply shared through on/off switches, is now so readily and easily transferred that there is an overload of consumable information made available through computation and electricity (Irvine).

Bits are one of the foundational concepts of information transfer that allow electronic and digital communication to take place; a bit represents one of two possible states (1 or 0), and a byte is made up of 8 bits (Irvine, 2). A bit represents a single unit of information; a byte, therefore, offers 8 units of information – a byte is referred to as a ‘word’ (Irvine).

We make sense of digital representations of information because the media system allows us to convert signals to make sense in a symbolic system. A text message, for example, is an electronic transfer of language that we understand through our symbolic systems of communication.
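The text-message example can be made concrete: each character is mapped to a number (here via UTF-8) and each number to a pattern of bits, which the receiving device converts back into symbols we can read. A short sketch:

```python
# A text message travels as bits: symbols -> numbers -> 1's and 0's,
# and back again on the receiving end.
message = "Hi"
raw_bytes = message.encode("utf-8")        # symbols -> numbers
bits = "".join(format(b, "08b") for b in raw_bytes)
print(bits)                                # -> 0100100001101001

# The receiver reverses the conversion to recover the symbols.
restored = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")
print(restored)                            # -> Hi
```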

 

How has social media altered the fibers of digital information transfer?

At what point does information overload become problematic?

 

References

Martin Irvine, “Introduction to the Technical Theory of ‘Information’ (Information Theory + Semiotics)

 

Fordyce, Week 5

It was interesting to look at the XLE-Web site to better understand how parsing sentences works.

I typed the sentence “Hello all, I am Alie and I am learning about a new system.” into the program. XLE-Web provided the following solution when parsing it (the F-Structure was cut off at the bottom):

This introduces a new system of linguistics, complicating what we understand language to be. Wikipedia helped me begin to understand the syntax of parsing, but it is an extremely intricate and complex field. Natural language acquisition bypasses the complications of language because at a young age we seamlessly catch on to the rules and mechanisms of our native tongue. Studying language is important because concepts from its various branches are applied to many other fields (Irvine).
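XLE-Web implements a full LFG grammar, but the core idea of parsing – applying grammar rules to group tokens into constituents – can be sketched with a toy parser. The mini-lexicon and grammar rules below are purely illustrative assumptions:

```python
# Assumed mini-lexicon mapping words to parts of speech.
LEXICON = {
    "I": "PRON", "Alie": "NAME", "a": "DET", "new": "ADJ",
    "system": "NOUN", "am": "VERB", "learning": "VERB", "about": "PREP",
}

def parse_np(tokens, i):
    """NP -> PRON | NAME | DET ADJ* NOUN; returns (tree, next index) or None."""
    if i < len(tokens) and LEXICON.get(tokens[i]) in ("PRON", "NAME"):
        return ("NP", tokens[i]), i + 1
    if i < len(tokens) and LEXICON.get(tokens[i]) == "DET":
        j = i + 1
        while j < len(tokens) and LEXICON.get(tokens[j]) == "ADJ":
            j += 1
        if j < len(tokens) and LEXICON.get(tokens[j]) == "NOUN":
            return ("NP",) + tuple(tokens[i:j + 1]), j + 1
    return None

def parse_sentence(text):
    """S -> NP VERB* (PREP NP): a tiny constituent structure."""
    tokens = text.split()
    np = parse_np(tokens, 0)
    if not np:
        return None
    subject, i = np
    verbs = []
    while i < len(tokens) and LEXICON.get(tokens[i]) == "VERB":
        verbs.append(tokens[i])
        i += 1
    tree = ("S", subject, ("VP",) + tuple(verbs))
    if i < len(tokens) and LEXICON.get(tokens[i]) == "PREP":
        obj = parse_np(tokens, i + 1)
        if obj:
            tree = tree + (("PP", tokens[i], obj[0]),)
    return tree

print(parse_sentence("I am learning about a new system"))
```

Running this groups the words into nested constituents – a pale shadow of XLE-Web's C-structure, but the same basic mechanism of rules consuming tokens.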

 

References

https://clarino.uib.no/iness/xle-web

Martin Irvine, “Introduction to Linguistics and Symbolic Thought: Key Concepts” (intro essay).