Author Archives: Amanda Morris

From the Messenger Boy to Facebook Messenger: The Transformative Power of the Telegraph (Amanda Morris)

“Society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever increasing part.” – Norbert Wiener

Introduction


Samuel Morse

Five weeks aboard a ship in 1832 was all it took for Samuel Morse to conceive of an invention that would forever change the future of communication. An artist and a professor, Morse was returning to the United States after spending three years in Europe improving his painting skills and beginning work on his iconic painting, Gallery of the Louvre.

Two weeks into the voyage, Morse found himself discussing electromagnetism with a fellow passenger, Dr. Charles Jackson, who explained that electricity was believed to be capable of passing through a circuit of any length instantaneously. Through the remainder of the journey home, with the Gallery of the Louvre sitting unfinished in the cargo hold (Antoine, 2014), a curious Morse began the early sketches of what would eventually become the electromagnetic telegraph.

How did the telegraph change the way that we communicate today? Without the scientific advancement and communicative enhancement that this invention brought to the world, the radio, telephone, and computer might have looked and operated very differently, if they could have existed at all. The telegraph was one of the first inventions to connect the technical with the humanistic, two disciplines that the human mind has a tendency to separate. It is a prime example of how these two fields have come together to enhance the ways in which humans communicate and understand the world around them.

The Telegraph

It is often forgotten that Samuel Morse was not the first inventor of the telegraph, nor was he the only individual with the idea for an electric one. While Morse remained relatively ignorant of the work of others, scientists and scholars across the world were attempting to realize the very same concept. Only after a series of fundamental discoveries in chemistry, magnetism, and electricity could a practical electromagnetic telegraph come to be. Before the electromagnetic telegraph came attempts at visual communication using shutter systems and the semaphoric telegraph, both of which relayed signals between towers using pivoting shutters or arms.

In the 1790s, Galvani and Volta revealed the nature of galvanism – the generation of electricity by the chemical reaction between acids and metals – and in 1820, Hans Christian Oersted and Andre-Marie Ampere discovered electromagnetism. Through the 1820s and 1830s, scientists and inventors across the globe worked to create a practical electric telegraph, perhaps most notably William Cooke and Charles Wheatstone in England. Yet many of these inventors ultimately hit the same roadblock: electromagnets were only so powerful, and they could not produce mechanical effects at a distance. Morse ran into the same problem. However, he eventually met and began working with a fellow American, Joseph Henry, who in 1831 had solved this critical problem by replacing the customary battery of one large cell with a battery of many small cells (Beauchamp, 2001; Czitrom, 1982; Standage, 1998).

The electric telegraph advanced the way that people communicated. It included an information source which, with the help of a human, produced a sequence of messages to be communicated to the receiving terminal. It included a transmitter which operated on the message to produce a signal suitable for transmission over a channel – the electric wire. A receiver at the other end reconstructed the message sent by the transmitter, and through the receiver, the communication eventually reached its destination – the person for whom the message was intended. The telegraph is an example of a discrete system of communication, where both the message and the signal are sequences of discrete symbols; the message is a sequence of letters and the signal is a sequence of dots, dashes, and spaces (Shannon, 1948).

Communicating through Morse Code

Before learning more about Morse code specifically, it is important to distinguish between a code and a cipher, mainly because Morse’s original idea for communication through the telegraph was to use a cipher.

Code: When letters of the alphabet are replaced by symbols. An important group of codes used in telegraphy are the two-level, or binary, codes, of which the Morse code is the best known example. (Beauchamp, 2001)

Cipher: When the letters containing a message are replaced by other letters on a one-to-one basis, meaning that the message will not be shortened. This concept was introduced into the operation of the mechanical semaphore toward the end of its period of use. This type of communication requires a cipher-book (which differs from a code book) and a higher order of accuracy in transmission. (Beauchamp, 2001)


As they worked to perfect the telegraph, Samuel Morse and his team were also experimenting with how exactly two people could communicate through the invention. Morse originally intended to use a cipher in which every word of the English language would be assigned a specific and unique number, and only the number would be transmitted. However, this idea was eventually replaced by American Morse Code – an alphabetic code in which each letter and number, along with many punctuation signs and other symbols, is represented by a combination of dots and dashes.
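The substitution at the heart of the code can be illustrated with a short sketch. The table below is a small, hand-picked subset of the International Morse alphabet (the full code covers every letter, digit, and many punctuation marks), and the message "Hello" is just a convenient example:

```python
# A small subset of the International Morse alphabet, enough
# to encode a short demonstration message.
MORSE = {
    "E": ".",    "T": "-",    "A": ".-",   "O": "---",
    "N": "-.",   "I": "..",   "S": "...",  "H": "....",
    "W": ".--",  "R": ".-.",  "D": "-..",  "L": ".-..",
}

def encode(message):
    """Replace each letter with its dot-dash symbol; separate letters with spaces."""
    return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

print(encode("Hello"))   # → ".... . .-.. .-.. ---"
```

Note that this is a code, not a cipher: letters are replaced by symbols of varying length rather than by other letters one-to-one.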

Morse and Alfred Vail, an inventor who worked with Morse on the telegraph, designed the code by counting the number of copies of each letter in a box of printer’s type, ensuring that the most common letters had the shortest equivalents in code. No other inventor working on an electric telegraph had considered a duration-related code, and it is perhaps because of this (as opposed to, say, Cooke and Wheatstone’s polarity-related needle indication) that Morse’s version of the telegraph and code jump-started the entire telegraph industry (Beauchamp, 2001). When American Morse code reached Europe, a number of changes were made and the International Morse Code was created. This became the standard for almost a century, with a 1913 international agreement requiring the American code to be replaced.

The coding system often used in electric telegraphy was a proto-binary code, meaning that it was recognized by either the duration or the polarity of the transmitted electric impulse. According to his own personal notes, Morse defined four principal features of the telegraph. It was a marking instrument, consisting of a pencil, pen, or print-wheel. It used an electromagnet to apply pressure to the instrument on a moving strip of paper. It employed a system of signs – i.e., the Morse code – that identified the information being transmitted. And lastly, it used a single circuit of conductors (Beauchamp, 2001).

The Significance of Combining Machine and Code

“If the presence of electricity can be made visible in any desired part of the circuit, I see no reason why intelligence might not be instantaneously transmitted by electricity to any distance.” – Samuel Morse

As the first electric telegraph line began to experience success in 1844, an entirely new era of modern communication was established in America and, eventually, around the world. The electric telegraph introduced a significant change in the way that humans communicated: for the first time in history, no method of transportation was required in order to communicate. The telegraph introduced instantaneity to the world (Czitrom, 1982).

Communication had previously relied on a middle party – the messenger. If people did not live together or were not neighbors, their communication across distance was only as quick as the messenger. In many instances, the telegraph eliminated the messenger and introduced the beginnings of what would evolve into the rapid networks of communication that we know today. By connecting people through electricity and code, the telegraph freed communication from its dependence on time and distance.

The electric telegraph also expanded the concept of communicating through code. While humans had always communicated and made sense of their world through symbols (i.e., art as a means of communication), the electric telegraph created a combination the world had never before seen: electricity and code. As Daniel Czitrom wrote in his book, “Media and the American Mind,” the telegraph served as a “transmitter of thought” where human cognitive understanding was combined with electricity and the machine.

The electric telegraph supported one main idea: it assigned the humanistic symbolic values of a system of signs (Morse code) to the scientific process of electric currents in a switched circuit, which could electromagnetically imprint marks and sounds to process the code. The simple on/off switches found in the telegraph paved the way for the binary switches found in the first computer designs (Irvine).

Coding and Computing

Because the telegraph required the design and production of technical equipment in a pre-electronic age, it yielded early lessons in data compression, error recovery, flow control, encryption, and other computing techniques. The beginning of the internet was influenced in part by the pioneers involved in the coding of the telegraph (Beauchamp, 2001).

Slightly before Samuel Morse began work on the electric telegraph, Charles Babbage began designing a different kind of machine, one that he hoped would be able to compute and produce certain kinds of mathematical tables without human intervention. This early idea of automatic computation was the beginning of what we now know as computer science.

In order to understand the process, it is important to understand the key terminology. While, etymologically, computation refers to the idea and act of calculating, Subrata Dasgupta writes in “It Began with Babbage” that computation comprises symbols – things that represent other things – and “the act of computation is, then, symbol processing: the manipulation and transformation of symbols.” Dasgupta points out that “things” that represent other things could include a word that represents an object in the world, or a graphical road sign that carries meaning for motorists (Dasgupta, 2014).

How does the electric telegraph relate to computing? Morse code is the essential link. Samuel Morse was able to combine symbols – code that represented words holding meaning for humans – and share this very humanistic code rapidly, using a very scientific method of electrical switches. Morse and his team were among the first to pave the trail for what we are still figuring out today: how to encode signals through the switches in our devices so that we can communicate more rapidly and effectively.

Morse’s electric telegraphy is an example of a discrete noiseless channel for relaying information: a sequence of choices from a finite set of elementary symbols (Morse code). Each symbol has a certain but differing duration depending on the number of dots and dashes it contains. The symbols can be combined into a sequence, and any given sequence can serve as a signal for the channel. Morse code helped establish the idea of combining math and communication, the humanistic and the scientific, by introducing the question of how an information source could be described mathematically, and how much information – in bits per second – could be produced by a given source (Shannon, 1948). In this way, the process of transmitting Morse code over the telegraph served as a precursor to the encoding and decoding used in modern technology such as computers.

The Morse code messages that were transmitted contained sequences of letters, often forming sentences, which carried the statistical structure of a human language such as English. Certain letters thus appeared more frequently than others. By correctly encoding the message sequences into signal sequences, this structure allowed humans to save time, as well as channel capacity, while communicating (Shannon, 1948).
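Vail’s frequency-counting insight can be made concrete with a rough calculation. The numbers below are illustrative assumptions, not measured values: made-up relative frequencies for a handful of letters, and a cost of 1 time unit per dot and 3 per dash (inter-symbol gaps are ignored for simplicity). Giving the shortest symbols to the most frequent letters lowers the expected transmission time per letter:

```python
# Illustrative (assumed) relative frequencies for four common
# and two rare letters, normalized over just this subset.
FREQ = {"E": 0.40, "T": 0.30, "I": 0.15, "A": 0.10, "Q": 0.03, "J": 0.02}

# International Morse equivalents for the same letters.
CODE = {"E": ".", "T": "-", "I": "..", "A": ".-", "Q": "--.-", "J": ".---"}

def duration(symbol):
    """Cost in time units: 1 per dot, 3 per dash (gaps ignored)."""
    return sum(1 if mark == "." else 3 for mark in symbol)

# Expected cost per letter under the frequency-matched code...
matched = sum(FREQ[ch] * duration(CODE[ch]) for ch in FREQ)

# ...versus a uniform code in which every letter costs as much as "Q".
uniform = sum(FREQ[ch] * duration(CODE["Q"]) for ch in FREQ)

print(f"frequency-matched: {matched:.2f} units/letter")
print(f"uniform-length:    {uniform:.2f} units/letter")
```

Under these assumed figures the frequency-matched code averages 2.5 time units per letter against 10 for the uniform code, which is exactly the saving in time and channel capacity that Shannon later formalized.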

Closing

The electric telegraph, together with the Morse code that accompanies it, is a prime example of how communication can be seen as a means for one mechanism (for example, the code transmitted through the electric telegraph) to directly affect another mechanism (for example, the rapid reception of news) (Shannon, 1949). It is because of the ideas of Morse, Babbage, and countless others that humanistic ideas of symbolism can be combined with scientific and technological advancements to continually enhance the ways in which humans connect.

Morse’s idea is still alive and well in today’s computers. Just as the presence and absence of electricity in certain parts of a circuit (binary states) was used to send a code that represented human signs and symbols, today’s computers continue to use this combination of electricity and human signs and symbols to code the machines and devices that allow us to communicate (Irvine, 2). This is significant, considering how our technology has changed the way that we send and receive messages, thus changing the ways that we communicate and, furthermore, the ways in which we understand society (Packer & Jordan, 2001).

Today, humans use a similar but much more advanced concept of coding to program our electrically powered digital devices. However, the purpose of this evolved, modern process remains very much the same as that of the electric telegraph: to communicate and connect in the most rapid and effective way possible. It is essential to realize that the sciences and humanities go hand in hand when thinking about how we have communicated since the invention of the electric telegraph. Without code, there would be no way to communicate, and without mathematics, there would be no way to transmit the code.

Works Cited

Antoine, J. (2014). Samuel F. B. Morse’s Gallery of the Louvre and the Art of Invention. Brownlee, P. (Ed.). New Haven, CT: Yale University Press.

Beauchamp, K. (2001). A history of telegraphy: its technology and application. Bowers, B., & Hempstead, C. (Eds.). Exeter, Devon: Short Run.

Dasgupta, S. (2014). It began with Babbage: the genesis of computer science. New York, NY: Oxford University Press.

Czitrom, D. (1982). Media and the American Mind. Chapel Hill, NC: University of North Carolina.

Irvine, M. A Samuel Morse Dossier: Morse to the Macintosh Demonstration of the Morse Telegraph: Electric Circuits and “A System of Signs.” Georgetown University.

Packer, R., and Jordan, K. (2001). Multimedia: From Wagner to Virtual Reality. New York, NY: W.W. Norton & Co.

Shannon, C. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379-423, 623-656.

Shannon, C., & Weaver, W. (1964). The Mathematical Theory of Communication. Urbana, IL: University of Illinois.

Standage, T. (1998). The Victorian Internet: The remarkable story of the telegraph and the nineteenth century’s on-line pioneers. New York, NY: Walker.

Going Down the Rabbit Hole of Google Arts & Culture – Katie & Amanda

Museums are traditionally physical structures that foster the education and examination of art history. They function as cultural institutions and “an organizing system for a society’s (usually nationalistic) ‘cultural encyclopedia’ of prototype works” (Irvine, 4).

For this week’s post, we decided to focus on the virtual tour feature of the Google Art Project. This feature involves exploring and interpreting a museum, as well as the cultural artifacts inside it, in different ways than one normally would in a physical museum. The technology offers users a 360-degree video experience through which they can choose the perspective from which they view a museum and its contents.


The ability to fluidly move between rooms, focus on a particular piece of art and choose from what angle and to what extent you examine this art, effectively shifts how we organize shared knowledge and how we archive memory, specifically online. In this way, the Google Arts and Culture Project is an interface to not only art history, but to the virtual museum experience in which we examine that history.

Museums are mediated by the virtual tour function of the Google Arts and Culture Project, which has done its best to replicate what it means to “tour” a museum. By this we mean that the simulated museum experience resembles the actual physical tour in some ways (although there are important differences). Similarities include a 360-degree view of each room: viewers aren’t limited to a 2D static image, and they can move (by way of a mouse) to look around each room from different perspectives.

From here, we’ll unpack the different conceptual layers of the virtual museum experience. The first is choosing a museum.

  1. Choose a Museum: We chose the J. Paul Getty Museum, located in Los Angeles, CA. As students in Washington, DC, we were able to begin touring this west coast museum in a matter of seconds, which indicates how time becomes less of a constraint when virtually touring a space.

Below is an aerial image of the Getty Museum. Not only can users choose which museum to tour, but they can begin that tour in any part of the museum. This choice allows them to self-navigate, creating tours that other museumgoers might never have experienced before, which highlights the simultaneously individual yet collective experience of a virtual tour.

[Screenshot: aerial view of the Getty Museum]

This view refocuses viewers on the importance of the physical museum. Proctor notes, “… both the gigapixel image and the Street View underscore and enhance the importance and centrality of the original object and its context in the museum” (Proctor, 221). Google does not let viewers forget the context in which they are viewing a piece of art. And yet, “Curators make deliberate and educated choices about the placement of art in the museum. The stories and relationships revealed by the way objects are hung in the galleries offer as much insight into the works as any catalogue or other document authored by an expert” (Proctor, 219). Is it significant that museumgoers can now connect stories and relationships not necessarily intended by the curators?

What are the implications of encoding meaning and transmitting representations of artifacts in this virtual museum through time and space? As Dr. Irvine explains, “… the museum functions by also transmitting the museum idea, an image of an abstract ‘cultural encyclopedia’ made visible” (Irvine, 4). The way that the replicated images are organized on the Google Arts and Culture platform impacts how we encode the images as “an idealized, interpretive narrative sequence assumed to exemplify a common musée imaginaire…” (Irvine, 5).

  2. Exploring the Getty Museum. The museum is represented in static images, which are a construction of pixels mapped onto our screen. We create movement among these images by navigating with our mouse. Through the extension of human cognition via a computer system, we can move through the museum, recognizing patterns among exhibits, as well as within individual pieces of art. We use media, signal, and symbol representations to identify the significance of these patterns. In the context of a computer system, we act through an interface, which has specific design features and software layers that map pixels onto our screen in a particular way.

In other words, the virtual museum itself is a representation, or replication, of the actual, physical Getty Museum. So, the online representation of the Getty Museum is an instance, or token, which contains cultural artifacts (other tokens). Through the computer screen interface, we explore the Getty Museum in the context of the Google Arts and Culture web project. Nancy Proctor in “The Google Art Project” cites Eric Johnson, webmaster at Thomas Jefferson’s Monticello, who describes this viewing experience as a shift from “content” to “context” (Proctor, 215).

As Dr. Irvine explains, “a contemporary computational system (large or small) is a design for implementing pre-existing human symbolic-cognitive processes to enable ongoing interpretations through the interactive metamedia design for all our digital encodable symbolic artefacts” (Irvine, 3). In the context of the Google Arts and Culture platform, this means that when museums (previously established sites that “convert the material and social history of cultural objects into a generalized ‘art history’” (Irvine, 4)) are remediated online, they allow users to interact with “an evolving collection of symbol structures” as they move through time by way of simulation and 360-degree technology (Simon, 22). These interpretations are situated in time, and they represent our current environment. They are moving pictures of meaning making.

  3. Looking at a specific artefact. Once you are inside the museum, you can place yourself in a variety of rooms that actually exist within the physical museum. Because the process of digitizing the various pieces of art can be both time-consuming and highly expensive (Proctor, 216), each virtual tour differs in the amount of art it is able to provide online. In our example, the Getty Museum has a variety of rooms available to browse on both the first and second floors; however, not every room is accessible. Upon “entering” a room, the user has the option to take a closer look at the different pieces of art by clicking one of the images at the bottom.


We chose to take a closer look at a painting titled “A Hare in the Forest.” Upon clicking on the image, the camera directs users to the place where the artefact is located. This gives them the ability to see what other artefacts are located nearby while giving them a sense of place and space; even without being there physically, users can get a sense of their surroundings. While users can zoom in and see the art from the room perspective, they can also find more information on the piece by clicking on the box with the artwork’s name on it.


Upon clicking on the named box, users are taken out of the room and into a space that introduces them to the piece of art. In our example, we are looking at a painting that was created in the 1500s.


It is at this point that users are able to experience more online than they could in the museum itself: if someone clicks on the magnifying glass, they gain access to the painting from a much closer perspective – a proximity that could never be achieved in the actual museum. This feature supports a point made in “The Google Art Project”: the high-resolution image exemplifies how the Web can be used to complement an encounter with gallery artwork rather than attempt to imitate it (Proctor, 215).


The ability to zoom into the picture gives the user an experience that he or she would not have in a museum. Zooming out, the user also has the ability to read more about the painting’s history and context.

  4. Scrolling down the rabbit hole. Moving farther down the page, the user is presented with the option to “discover more.” This feature is based on what the user is currently viewing: Google sees that they’re looking at a painting located in the Getty Museum, with the main subject being a rabbit and the artist being Hans Hoffmann. Thus, the recommended content relates to what the user is currently looking at – the Getty Museum, Hans Hoffmann, and mammals. Clicking any of the three options leads to still more options for discovering that particular subject. For example, if we were to click on “mammals…”


… we would be taken to a page that introduces us to what a mammal is….


… and if we scrolled farther down this page, we would find a directory of paintings and artwork that contain depictions of mammals.


If you click on one of the images, it will direct you to a page that allows you to zoom in on the image while also giving you information about its history and the museum where it is currently housed. Echoing the title of this blog, the virtual tour feature of Google Arts & Culture can be related to the metaphor of going down a rabbit hole: just when you think you’ve found something interesting, you are encouraged to click on yet another new link with new opportunities for adventure. Just by clicking on the recommended “mammals” page, we went from the Getty Museum in Los Angeles to a painting with horses at the High Museum of Art in Atlanta. From there, we clicked on a chandelier image, which recommended we take a look at glass artefacts, and from there we chose a recommended vase that is currently housed at the Museum of Arts and Crafts in Zagreb, Croatia. Google Arts & Culture allows users to learn and travel thousands of miles, visiting some of the world’s most treasured cultural centers, in the span of just a few minutes.

  5. A virtual experience that differs from the physical experience. The Google Art Project allows users to navigate around a museum from the comfort and ease of their computer. Because of the virtual tours, not only can users see an image of an iconic piece of art, but they can see the room that the piece is housed in – the wall that it sits on or near, the pieces of art that complement it nearby, and even the floor and ceiling of the room surrounding it. While all of these features enhance the experience, there are also elements that are lost in the process. For example, what does the room sound like? What does it smell like? Is the room hot or cold? How does lighting affect the painting at different hours of the day? While none of these elements enhance the art per se, they do make an impact on the user experience.

Another drawback of Google Arts & Culture is the fact that, as we mentioned above, not all rooms or pieces of art in any given museum can be transferred as images online. As Proctor writes, “Beyond the costs of the gigapixel capture process, negotiating the rights to represent art online can be exceptionally difficult and costly” (216). Jane Burton, the creative director of Tate Media, believes that Google Art risks “giving a very skewed image of creative output through time and around the world” (Proctor, 216); she gives the example that 20th-century modernism could be absent because of high reproduction fees. This obviously serves as an obstacle for Google Arts & Culture, but it also supports the point that while the virtual tour option enhances society’s understanding of arts and culture, it will most likely not obsolesce the museum. Nancy Proctor reiterates this by writing, “I would argue that both the gigapixel image and the Street View underscore and enhance the importance and centrality of the original object and its context in the museum” (Proctor, 221).

6. Closing thoughts. Google Arts & Culture has the potential to enhance the viewer’s interest in a particular piece of art or gallery, and this interest could lead to a future, in-person visit. This could benefit the individual museum as well as the overall economy. However, we found that just because a user can take a virtual tour of these cultural meccas does not mean that the virtual experience replaces the act of physically visiting a museum; many elements of the user experience are still missing. The fact that Google Arts & Culture is free to use serves as an enhancement to society: it is a free teaching tool for users of any age, and it has the potential to promote a culture of users who will become more educated on the topics of art and culture.

References

Martin Irvine, “André Malraux, La Musée Imaginaire (The Museum Idea) and Interfaces to Art.”

Martin Irvine, “From Samuel Morse to the Google Art Project: Metamedia, and Art Interfaces.”

National Gallery of Art, background on Samuel Morse’s painting, The Gallery of the Louvre.

Nancy Proctor, “The Google Art Project.” Curator: The Museum Journal, March 2, 2011.

Presentation (Irvine): “The Museum and Artworks as Interfaces: Metamedia Interfaces from Velázquez to the Google Art Project.”

Martin Irvine, “Introduction: Toward a Synthesis of Our Studies on Semiotics, Artefacts, and Computing.”

Herbert A. Simon, The Sciences of the Artificial. Cambridge, MA: MIT Press, 1996. Excerpt (11 pp.).

Started at the snail shells, now we here – Amanda

Last year, before coming to CCT, if you had asked me what “computation” meant, I would have given you a vague guess relating it to a math equation or a computer coding system. I entered graduate school with no experience in computer interactions or coding, and during my undergraduate years, many of the classes I took carried the notion that humans were very separate from computers – computers and technology were often cast in a negative light.

Semiotics & Cognitive Technologies has not only taught me many new concepts regarding human meaning systems and cognitive-symbolic artefacts; it has also challenged what I previously thought about the relationship between humans, computers, and technology (which seems like a very classic CCT lesson).

Perhaps the most important lesson I’ve learned this semester has overturned a preconceived notion: as Denning wrote in “What Is Computation,” while “computation” used to refer to the mechanical steps followed in mathematical functions (806), that is not the only way the word can be interpreted. This class has taught me that computation is much more than simply a way of dealing with math. As Dr. Irvine wrote in this week’s reading, both computation and digital media are methods for “physically encoding” human sign systems, along with the more mathematical operations that can be performed with these systems, in order to create a variety of things that we use and interpret each day (Irvine, 1). In other words, computers are certainly a component of computer science, but they’re not the only part; instead, they are used as a way of creating more opportunities for computational processes (Irvine, 2).

I never believed that computational systems could be equated with the human brain. But because of the slow, gradual, and very interdisciplinary nature of each week’s readings, I’ve come to realize that the computer, and all other forms of technology, are extensions of what humans have been building all along. Step by step, our technology has become more sophisticated and more capable of doing amazing things. At the center, however, has always been the very human concept of symbol systems – we would have no use for computers or technology if we could not draw an ounce of meaning from them.

As Herbert Simon wrote in The Sciences of the Artificial, both the computer and the human brain are artifacts that belong to the category of physical symbol systems (21). That page of the reading served as a prime example of many of the concepts we’ve learned so far: the human brain and the computer are similar in many ways; both process information, interpret meaning, and produce an output. While I once thought they were completely separate entities, I now realize that each works hand in hand with the other, and together they form a very powerful pairing.

As we near the end of this semester’s class, I’m able to reflect on all of the concepts we studied. Somehow we went from reading about artefacts such as ancient beads to Peirce’s triadic model to Alan Kay’s Dynabook. Amazingly, they’re all connected through the concept of meaning – each item or concept either carries meaning or assists in creating or interpreting it. I find that fascinating, and I look forward to seeing how technology progresses and better enables us to express our ideas and learn about the ideas of others. One of the major takeaways from this class, a simple but important one, is the idea that we’re constantly evolving. Even though today’s technology seems so advanced, especially compared to some of the early computers (or the telegraph!) that we’ve learned about, the advancement of technology is still not over.

References:

Man-Computer Symbiosis & Flight – Amanda

This week’s readings were a good reminder that we’ve come a long way from the first days of computing. Yet our products still hold on to concepts that were applied at the beginning – in other words, each new piece of technology contains traces of everything that has come before it.

Our reading, particularly J.C.R. Licklider’s “Man-Computer Symbiosis,” made me think of the way that technology in airplane cockpits has changed over the years. I am familiar with smaller airplanes, and I often hear the “glass cockpit” vs. conventional, analog, “round dial” cockpit argument.

Take, for example, a Grumman Tiger built in the 1980s. The cockpit looks a lot like the picture on the right. All of the instruments are analog, and there is a radio with a dial for changing the frequency. That is about as high-tech as the cockpit gets. This is an older airplane, but it works just fine as long as the pilot knows how to operate in the cockpit. While the instruments tell the pilot the speed, altitude, fuel level, etc., the pilot must know how to reach the desired destination because there is no built-in map system. The pilot must know which dials to turn and in which direction, and must manually change the radio frequency to communicate with the tower before takeoff and landing. The pilot is required to input a certain amount of information, and the instruments work with what they’ve been given.
On the other end of the spectrum, airplanes are now coming out with “glass cockpits.” These have been around for a while (the military used them in the 1960s) but are only now finding their way into small aircraft. A glass cockpit features electronic (digital) flight instrument displays, typically LCD screens, as opposed to the traditional analog dials and gauges. Because these displays are driven by flight management systems, aircraft operation is simplified: pilots can focus on the most pertinent information, such as the flight path. Numbers are punched in and data is processed. Essentially, a new interface has been added to the cockpit – the LCD screen represents what used to be the analog instruments. It is worth noting, however, that some backup dial instruments remain non-computerized, so analog is not completely obsolete.

There is a lot of debate over which cockpit is (a) easier to work with and (b) safer. Many pilots do not believe the glass cockpit is a safe option because it is entirely computerized.

Licklider’s notes on man-computer symbiosis made me think about how his ideas apply to both analog and digital instruments. In both cases, the human operator must supply basic information. In an older airplane with analog instruments, however, everything must be input manually (which often makes for a slower process) – very much a man-working-with-machine arrangement. With digital instruments, a person must still work with the machine, but far more of the automation and computing is done by the machine and less by the pilot. As Licklider writes of some computer-centered systems, the human operators “are responsible mainly for functions that it proved infeasible to automate” (75). I see this statement holding true for pilots who work with digital instruments – the cockpit may be easier to work with because less information is required from the pilot.

Arguments in the pilot community often arise over what happens when the instruments fail. Many argue that pilots who fly with glass cockpits are “lazy,” or unprepared for an emergency situation. If the digital instruments fail, the pilot is left with few options; many small planes with glass cockpits come with a parachute, perhaps for that reason. In comparison, if failure occurs in an analog cockpit, the pilot can still work with all of the other still-functioning instruments – the entire panel doesn’t go blank. And this takes me back to the idea of man-computer symbiosis.

Aviation technology has certainly progressed since the time of the Wright brothers and other early pilots and engineers. As technology progresses, the relationship between people and computers changes, and for the most part it makes life easier. But I question whether the symbiosis still holds when the computer, or instrument, stops working.

References:

Licklider, J.C.R. “Man-Computer Symbiosis.” In The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 74–82. Cambridge, MA: The MIT Press, 2003.

Irvine, Martin. 2016. “Introduction to Affordances and Interfaces: The Semiotic Foundations of Meanings and Actions with Cognitive Artefacts”.

Coding is Cool (and challenging, too) – Amanda

This week’s “Codecademy” assignment, along with the readings, served as yet another helpful stop along the way to better understanding semiotics and cognitive technologies. It seems like everything we have read before this point has led us here, and while I had absolutely no experience in coding before this week, it suddenly seemed less intimidating than it has always appeared to be in the past.

Of all the reading we’ve done this semester, the pieces covering cognitive technologies jumped out at me this week as I got started on the Python coding process. As I practice this very basic introduction to coding, it is evident that computers – and so much of our lives – run on sets of codes and symbols.

For example, I enjoyed learning how to do simple math equations in Python. Although the language is slightly different from what I know, the outcome is the same. Regardless of the symbol used to show that something is “equal” or “true” or “false,” the actual outcome of the process remains the same. This takes me back to learning any other foreign language. When I took Spanish classes in high school and college, “I want two pieces of pizza” looked very different in English than it did in Spanish. The sentences were structured differently, and entirely different words were used to express the desire. However, the outcome remained the same, so long as someone knew how to translate between the two languages.
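To make the translation idea concrete, here is a small sketch of my own (the exact exercises in the Codecademy lesson may differ) showing how Python’s notation differs slightly from pencil-and-paper math while the outcomes stay the same:

```python
# Arithmetic in Python reads much like ordinary math notation.
total = 2 + 3 * 4        # multiplication happens before addition, as in math
print(total)             # -> 14

# Comparisons "translate" a math question into Python's vocabulary,
# producing the truth values True and False.
print(total == 14)       # -> True
print(total > 100)       # -> False

# Two "dialects" of division: / keeps the fraction, // discards it.
print(7 / 2)             # -> 3.5
print(7 // 2)            # -> 3
```

Like the Spanish sentence about pizza, the surface form is unfamiliar, but anyone who knows how to translate between the two notations recovers the same meaning.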

Thus, as I followed along with the Python lesson, I was reminded that although computing on the screen looked and felt very different from the way I compute things in my head, there was a visible correlation. In “Computational Thinking,” Jeannette Wing mentions that computational thinking is parallel processing – interpreting code as data, and data as code (33). That statement became very clear as I worked through the readings and then through Python. As Subrata Dasgupta writes in It Began with Babbage, computation is associated with the process and activity of human thought (11). While many of our readings have stressed this idea in weeks past, it wasn’t until I logged on to Codecademy that it all began to really make sense.

It may sound naive, but as I worked on the Python training, I couldn’t help thinking that the coding process could be simplified; it seems to have become harder than it needs to be. Obviously, I have very little experience with coding. However, I was easily confused by symbols that mean something very different in code than they do in everyday life (for example, the = sign). While human language has been around for centuries, coding languages still seem relatively new. Who assigned new meanings to these various coding symbols, and why were they chosen in particular? Obviously, there are various coding languages, and coding in general has evolved over time. However, I’m still interested in further discussing why computation and coding happen the way they do.
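A small illustration of the = confusion mentioned above (my own sketch, not taken from the lesson): in Python, = assigns a name to a value, while == asks whether two values are equal.

```python
# "=" is assignment: it binds a name to a value (it does not assert equality).
pieces_of_pizza = 2

# "==" is comparison: it asks a question and yields True or False.
print(pieces_of_pizza == 2)   # -> True
print(pieces_of_pizza == 3)   # -> False

# Reassignment shows why "=" cannot mean mathematical equality:
# "x = x + 1" is nonsense as an equation but routine as an instruction.
pieces_of_pizza = pieces_of_pizza + 1
print(pieces_of_pizza)        # -> 3
```

The symbol that means “equals” in everyday math was repurposed for a different job, which is exactly the kind of re-assigned meaning that tripped me up.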

 

References:

Wing, Jeannette. “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.

Dasgupta, Subrata. It Began with Babbage: The Genesis of Computer Science. Oxford University Press, 2014.

Google Docs: A Meeting of the Minds – Katie & Amanda

We’re focusing on Google Docs, an online word processor that allows people to create text documents and collaborate with other users in real time. As long as a user can access the internet and has an email account, he or she can access Google Docs.

In our example, we have chosen the option of a blank document. We’ve gotten a little meta, creating this blog post about Google Docs by collaborating in our very own Google Doc.

Any number of users may work on the same document at any time, and the document can be shared with others. In reference to this week’s readings, this could be interpreted through the lenses of both the reductionist and the interactionist views of distributed cognition (Zhang & Patel, 335). Zhang and Patel explain that while a group of minds can be better than just one, a group also brings overhead: with so many resources involved, errors must be cross-checked, tasks must be distributed, and so on.

The construction of a Google Doc:

Many of the icons found in the document are similar in appearance to a document that one might find in the Microsoft Word or Pages applications on a Mac or PC. With options like “File,” “Edit” and “Tools,” it feels like the space where we normally come to write and format a document. But despite this familiarity, these options have been placed on a new, interactive platform.

[Screenshot]

Google Docs serves as a space where minds can meet and share information – we not only extend our ability to type (coherent) ideas, but we also distribute that burden across more than one individual. This relates to the idea of culture and cognition – our mental, material (computers), and social structures work together in a historical context (time) to create a Google document, an artifact (Hollan et al., 178).

The product is automatically saved to the cloud, making Google Docs a virtual storage space for knowledge that exists in perpetuity. It could be a version of what Dror et al. refer to as a “Cognitive Commons” (1). This space allows people who are physically dispersed to interact with a sense of immediacy unlike anything created in Microsoft Word (Dror et al., 1). It almost feels like instant messaging, but the goal is to produce a joint piece of work. This distributes the cognitive load of typing, brainstorming, and editing from one person to multiple people.

Plus, it highlights the social nature of group work and collaboration. In a Google Doc, cognitive processes are naturally distributed across participants (Hollan et al., 177).

Users can participate in a conversation in a few ways, assuming that they understand how a Google Doc works. There’s a chat function, represented by a symbol, where members of a Google Doc can write messages to each other. This function can be accessed by clicking on the text box.

[Screenshot]

We understand the text box, or word bubble, to exist with thoughts/ideas inside of it. It’s how we understand what others are thinking and/or saying when we see it pictured over their head.  

[Screenshot]

Google Docs also influence how we think about virtual identity. A “voice” on this Google Doc is a user’s ability to put his or her cursor somewhere on the screen and start typing freely (at the same time as another user). The cursor represents a voice.

[Screenshot]

A user can only use his or her “voice” when he or she is “in” the Google Doc. And, the name of a specific user corresponds with the color of his or her cursor (in this case, it’s pink), which further connects the individual to the words he or she types.

[Screenshot]

This space is also one where access must be granted – users can “share” the document with others and restrict their access. Participants can edit, comment, or only view the words on the screen. Because it is created by the users, it is a consciously constructed place where we can offload not only the individual process of writing (by typing) but also group-related processes like editing and brainstorming.

[Screenshot]

The real-time aspect of Google Docs changes the way that people work and input/output cognitive data; you could have users on three different continents, in three different time zones, and all three could still effectively collaborate on a document at the same time. In this way, Google Docs facilitate distributed cognition.

References:

Andy Clark and David Chalmers. “The Extended Mind.” Analysis 58, no. 1 (January 1, 1998): 7–19.

Andy Clark, Supersizing the Mind: Embodiment, Action, and Cognitive Extension (New York, NY: Oxford University Press, USA, 2008).

Itiel E. Dror and Stevan Harnad. “Offloading Cognition Onto Cognitive Technology.” In Cognition Distributed: How Cognitive Technology Extends Our Minds, edited by Itiel E. Dror and Stevan Harnad, 1-23. Amsterdam and Philadelphia: John Benjamins Publishing, 2008.

James Hollan, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2 (June 2000): 174-196.

Jiajie Zhang and Vimla L. Patel. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.

Decoding sarcastic texts – Amanda

Prompt: Following on with a specific case: how do we know what a text message, an email message, or social media message means? What kinds of communication acts understood by communicators are involved? What do senders and receivers know that aren’t represented in the individual texts? Our technologies are designed to send and receive strings of symbols correctly, but how do we know what they mean?

I was recently listening to my younger, tech-savvy brother talk about the online dating app, Tinder. On Tinder, you have the opportunity to message complete strangers and get to know them through what is essentially text or instant messaging. I’m fascinated by how people can create meaning over text, and actually get to know each other’s speaking style (in a sense), without ever having to meet one another.

I’ve found myself comparing the idea of a “first message” on Tinder to text message conversations I would have with a close friend or family member. This week’s homework, particularly Dr. Irvine’s video, made me reflect on the meanings that I take away from each type of conversation.

Take, for example, a text message I receive from a close friend I haven’t heard from in a week (in other words, this is the first text in a new conversation). “I am so done, I just want to jump off a building,” my friend writes. Because this is a close friend, I am able to recall past conversations and interactions, and I am confident that my friend is not suicidal; she is just being her typical sarcastic self. I might reply with more sarcasm and a note of encouragement, because she has sent me a cue that she’s not having a good day, and I am responding in a way that reflects our relationship. The conversation continues, and I decode her words just as I would listen to her talk in person. There is enough that we understand about each other as senders and receivers that, if someone who didn’t know us looked at the conversation, they might not be able to make meaning out of what we say.

However, if a stranger on Tinder sent me that same line as a first message, I would be very concerned and would most likely feel conflicted. Is this person sarcastic? Or is this person going through a crisis, and if so, how do I help? How would I reply? I don’t know this person, and I don’t know how he or she will interpret my message. There would be a lot of confusion, and I don’t think I would get the meaning of the text that was sent. But I wonder whether another visual cue, such as a particular emoji, would help me better understand the context in which the person sent the message and better decode what the sender is trying to say.

This brings me back to Dr. Irvine’s video, where he says, “The meaning of our messages comes from the human symbolic systems that surround them – social uses of technically mediated expression” (5:40). Furthermore, Dr. Irvine explains how we make meaning symbolically “on the fly” – we create the meaning when we perceive the signals, or, to put it in other terms, when we decode the data (Irvine, 5). These ideas remind me of some of the basic definitions from the Peircean semiotic model because of the meaning that is typically “embedded” in the text that we read – but the meaning changes depending on whom you’re talking to and, more importantly, how well you know the person. Some ways of speaking through text messages, such as sarcasm, are better understood between two people who know each other well and can take meaning from a set of words organized in a specific way. And, thinking back to last week’s reading, at first I wondered: if a sign is not a symbol unless it has meaning, would a text message from a stranger symbolize anything? I think that perhaps it would. I’m also interested in further discussing Stuart Hall’s studies on encoding and decoding as presented in our reading (particularly the television communicative process discussed on page 509).

References:

  • https://www.youtube.com/watch?v=-6JqGst9Bkk&feature=youtu.be
  • Hall, Stuart. “Encoding, Decoding.” In The Cultural Studies Reader, edited by Simon During. London; New York: Routledge, 1993.
  • Irvine, Martin. “Introducing Information and Communication Theory: The Context of Electrical Signals Engineering and Digital Encoding.” https://drive.google.com/file/d/0Bz_pbxFcpfxRejB6YWM0R0NrWTA/view

Using the Peircean model to decode artwork – Amanda

Prompt: Choose an example of an everyday symbolic genre (movie scene/shot; musical work or section of a composition; image or art work as an instance of its genre[s]) as an implementation of one or more sign systems, and using the terms, concepts, and methods in the readings so far, describe as many of the features that you can for how the meanings we understand (or express) are generated from the structures of the symbolic system(s). Can the “Parallel Architecture” paradigm extend to the features and properties of other symbolic meaning-making systems?

What do we think about when we see a piece of candy?

While I was originally going to choose a classical music composition to break down using some of the terms we’ve learned so far, my mind kept wandering back to the symbolism behind a piece of art that I discovered at the Art Institute of Chicago about five years ago. I will attempt to use the Peircean model to describe this piece of art.

[Image]

Courtesy of the Art Institute of Chicago

 

“Untitled” (Portrait of Ross in L.A.) is a work of modern/contemporary art by Felix Gonzalez-Torres (1957-1996). It consists of many little candies individually wrapped in bright, shiny, multi-colored pieces of cellophane. Ideally, the piece contains enough candy to weigh 175 pounds. What makes it unique is that anyone can take a piece of candy from the pile, decreasing its weight (and the size of the pile) over time; the pile, however, is always replenished before it runs out completely.

What does this piece of art symbolize? Why does it carry meaning? Why should we care about anything beyond the fact that we get a free piece of candy (or two)? This is an instance when knowledge of the artist, installation, and/or the time in which the installation debuted, is vital in understanding the meaning and symbolism behind the piece.

Created in 1991, “Untitled” (Portrait of Ross in L.A.) is an allegorical representation of Gonzalez-Torres’s partner, Ross Laycock, who died of an AIDS-related illness. The 175-pound pile of candy serves as the representamen, or sign vehicle; its object, for anyone who knows Gonzalez-Torres’s biography and history, would be the “ideal” Laycock in his healthiest state. However, because the piece is interactive and people are encouraged to take candy from it, the pile gradually gets smaller and smaller, once again serving as a representamen that signifies another interpretant: Laycock’s weight loss and suffering prior to his death (I may be going out on a limb here, or just incredibly confused, about which element is the interpretant and which is its object).

[Image]

Courtesy of The Gund Gallery/Kenyon College

While the interpreter of this message may initially think that the piece signifies a gradual journey toward an empty pile – and thus death – the work also involves the act of replenishing. Gonzalez-Torres instructed that the piece be constantly refilled, metaphorically granting perpetual life to the artwork – and to the love for and memory of Laycock and other AIDS victims. So, in this second “phase” of the piece, so to speak, the fact that it never visually disappears serves as the representamen of the interpretant: perpetual life, or perpetual memory, or perpetual love – however you want to interpret it.

[Image]

Courtesy of: Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007. Excerpts.

Referring back to Peirce’s triadic model, the line connecting the representamen and the object is dotted, or broken, because it is “intended to indicate that there is not necessarily any observable or direct relationship between the sign vehicle and the referent” (Chandler, 30). If I am correct in assigning the Peircean terms to the elements of Gonzalez-Torres’s artwork, this makes sense, because a 175-pound pile of multicolored candy pieces does not seem to have any observable or direct relationship to a healthy man, nor does a slowly diminishing pile of candy seem to have any correlation to a dying person.

As for applying the Parallel Architecture paradigm to this piece of art, I’m not sure whether that’s possible or, if so, how exactly to apply it. I’d love to get everyone’s feedback and thoughts on the Parallel Architecture, as well as on whether the connections I’ve made are valid or really off the wall and illogical!

References:

  • Daniel Chandler, Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007. Excerpts.
  • http://www.artic.edu/aic/collections/artwork/152961
  • https://en.wikipedia.org/wiki/Semiotic_elements_and_classes_of_signs

 

 

IDK if I understand what you’re saying – Amanda Morris

After this week’s reading on linguistics, I think I am finally starting to understand the idea of the human mind as OS Alpha. Just as computers process code, humans process language (among other things) and create meaning out of it. Our assigned readings and video proved helpful in many ways, and thanks to the visuals and diagrams in the readings, I could finally cement the idea of language as code (mostly because I could process the images [e.g., images 252 & 253 in the Radford reading], though I’m not quite sure I understood everything they meant…).

Something that stood out during the readings was the topic of dialect. I appreciated Ray Jackendoff’s example, given early in the reading, regarding the Oakland School Board’s proposal that Ebonics be employed as part of class instruction (Jackendoff, 10). It highlighted the fact that language is tied to social identity, and that linguistic issues can often be social issues, too. Furthermore, incorporating the information shared in Steven Pinker’s video, I was able to sort out the differences between dialect and language and the fact that their rules differ – I didn’t really understand that before the reading. As I read through each of the readings, I kept asking myself the simple questions: “Who made up all of these rules (grammar, sentence structure, etc.)? Why are there so many rules associated with something that seems to come so naturally to us?” While I still don’t have a concrete answer, I appreciated Steven Pinker’s explanation of the dialect that was used in – and ultimately chosen from – the south of England. I’ve always thought of the phrase “I can’t get no satisfaction” as “bad English” or improper language, but I never realized that it’s really no worse than “I can’t get any satisfaction” (the double negative is just as systematic); it’s simply not the dialect that happened to be chosen, based on geography. There’s nothing that makes a culture’s chosen dialect inherently special (am I correct in thinking that?).

Continuing on the topic of dialect, and to second/bounce off Katie’s question: could we consider “text speak” a new “dialect”? John McWhorter, who teaches linguistics at Columbia, gives an interesting TED talk on the concept and proposes that texting is a “fingered speech.” Much of the language and sentence structure used in texting does not sound or look correct, yet I find myself using it in my speech, as well as receiving it from others in conversation. However, I only speak words I would text when I’m in conversation with someone who texts. I wouldn’t speak in “text dialogue” with my grandmother, or someone else who does not know how to text, because s/he would not understand what I was saying. This brings me back to the idea of language, or dialect, being connected to a social identity, or the “communication environment” described on page 3 of Dr. Irvine’s Linguistics, Language, and Symbolic Concepts. McWhorter also seconds the point that language is speech, not writing (as Pinker mentioned). However, the rules discussed in the Radford and Jackendoff readings do not necessarily seem to apply to text/instant messaging – in fact, this new kind of messaging seems to rest on a loose assumption that the rules are irrelevant. Could this then be reflected in the way we speak out loud – our language? I don’t seem to hear it as much as I read it, so perhaps this idea of text dialect is far-fetched. For example, many new words and terms have been created since we began using sites such as Facebook, Twitter, and Instagram. They are evident when we read posts, but do we speak these terms out loud? We’ll sometimes incorporate hashtag “slang” into spoken language, but do we say it enough for it to become a dialect?

References:

“Irvine-Linguistics-Key-Concepts.pdf.” Google Docs. Accessed September 22, 2016. https://drive.google.com/file/d/0Bxfe3nz80i2GNkFOckI4UGxkb2s/view?usp=sharing&usp=embed_facebook.
“Jackendoff-Foundations-of-Language-Excerpts.pdf.” Google Docs. Accessed September 22, 2016. https://drive.google.com/file/u/0/d/0Bxfe3nz80i2GRTVOakFQbS1KazQ/edit?usp=sharing&usp=embed_facebook.
“Radford-Linguistics-Cambridge-Excerpts.pdf.” Google Docs. Accessed September 22, 2016. https://drive.google.com/a/georgetown.edu/file/d/0Bxfe3nz80i2GUW03cm1FeVgwVTQ/edit?usp=sharing&usp=embed_facebook.
Big Think. Steven Pinker: Linguistics as a Window to Understanding the Brain, 2012. https://www.youtube.com/watch?v=Q-B_ONJIEcE.
McWhorter, John. Txtng Is Killing Language. JK!!! Accessed September 22, 2016. http://www.ted.com/talks/john_mcwhorter_txtng_is_killing_language_jk.

The roles of history and literacy in becoming “modern.”

This week’s reading initially struck me as somewhat confusing, perhaps because each piece came from an author with a different perspective and a different set of ideas and hypotheses. Seeing all of these hypotheses side by side, however, helped me better understand the general, big-picture process of the human species becoming modern-minded.

While it was a shorter read, Kate Wong’s Scientific American article, “The Morning of the Modern Mind,” caught my interest because of the different viewpoints and hypotheses she presents. Some scholars and researchers say that a new cognitive “creative ability” (Wong, 88) was spurred by social factors, such as confrontation resulting from humans of modern appearance attempting to invade Neandertal territory. Others say it resulted from genetic mutations in Africa that occurred earlier than the European “explosion.” Reading further, both Merlin Donald and Colin Renfrew seem to argue that human cognitive processes have evolved, or progressed, because of social factors. Terrence Deacon compares evolution of the human brain with that of other species, placing an emphasis on language as a defining difference between humans and other animals. One statement by Deacon kept popping up in my mind as I summarized the takeaways from this week’s reading: neither languages nor brains fossilize, so it is difficult to study the early versions of either (Deacon, 10). There are so many ways to approach the topic, but none of them really seems to answer the question of how it all came about.

Two specific ideas stuck out to me this week, the first being one that Renfrew states quite succinctly: “… Without artefacts, material goods, many forms of thought simply could not have developed” (Renfrew, 2). At first glance, one could wonder why such a statement matters – who cares how cognitive abilities and processes came about? But because of the artifacts that emerged, and the cognitive ability to create these physical objects and tools, I’m typing in a language, on a laptop computer, at a university that is only as good as the individuals who make it up. Donald sums up the point I’m trying to make when, on page 221, he writes that technologies – weights and measures, clocks, monetary systems, etc. – have the power to “amplify” a society’s level of intellect. Donald writes, “Such technologies are crucial in defining the real intellectual power of a culture. They not only allow cultures to preserve more complex ideas and traditions, but change how they achieve this” (221). This statement helped me connect the fact that our current technologies have a very deep and rich history. And this history is only possible because, at some point in time, humans developed the intellectual capacity to use objects and associate meaning with them, whether the shell necklace in Wong’s article or the institutional structures described in Donald’s piece. We only know what we know now, in terms of technological advancement, because of each step taken before the others.

Donald’s insights on the power of a literate culture also struck me as interesting and true. Donald notes that “the most important network-level resources of culture are undoubtedly writing and literacy…” (220) and that these two resources have “revolutionized” human cognition at both the individual and network levels. This made me reflect on the fact that many parts of the world that seem underdeveloped lack literacy, whether the whole population is illiterate or only particular minorities or classes; cultures with a full range of literacy skills are able to produce highly functioning technologies, giving them a powerful advantage over cultures that lack a reading/writing system (Donald, 220). Once writing is introduced, institutional structures have the opportunity to become more complex, and something like a domino effect takes hold: from there, technology only keeps improving and progressing. In my non-academic life, I volunteer as a literacy tutor. While one person being illiterate seems a much smaller problem than an entire culture being illiterate, the fact that one person cannot read or write (at all, or in the language of the culture they live in) has a real effect on technological innovation. Progress is stalled not only for the illiterate individual, but also for the people or company that individual works for, his or her family members (particularly the children), and even his or her economic bracket. One person makes a big difference. Seeing it on a smaller scale brings me back to the big picture, to the question of when humans became modern of mind.
It shows what a huge, life-changing turn in history this was; it may have been a gradual change, and obviously language developed before literacy, but the ability of even one person to carry out a sequence of basic cognitive operations (Donald, 216) is all that is needed to begin teaching others and move toward developing intelligent, multi-layered societies. Technology builds upon itself, but it doesn’t do so on its own; it requires human input, knowledge, understanding, and direction.

References:

Donald, Merlin. “Evolutionary Origins of the Social Brain.” Accessed September 14, 2016. https://drive.google.com/a/georgetown.edu/file/d/0Bxfe3nz80i2Ga0ZLRWhPY3lJc1k/view.

Wong, Kate. “The Morning of the Modern Mind.” Scientific American, June 2005. Accessed September 14, 2016. https://drive.google.com/a/georgetown.edu/file/d/0Bxfe3nz80i2GT1doMFRfaFRFYWs/edit?usp=sharing&usp=embed_facebook.

Deacon, Terrence. The Symbolic Species, excerpts 1–13. Accessed September 14, 2016. https://drive.google.com/a/georgetown.edu/file/d/0Bxfe3nz80i2GaWVFZFhXaDhfWGc/view?usp=sharing&usp=embed_facebook.