Category Archives: Week 12

omg it’s week 12 when did that happen

Oof, where do I even start? Perhaps the most important conceptual leap I made throughout our readings is a demystified understanding of computers. I used to look at the ubiquity of computers and software as an abstraction of culture and the creation of a new cultural space, and I assumed the singularity was imminent (I know, I’m the coolest guy there is). But now I realize that computers use our abstractions, organized into layers that maximize efficiency, and that these layers of abstraction are themselves interfaces for how we design computers to mediate them.

I’m now looking at computers as a technology that augments humans. If the singularity ever occurs, it will not be because of advances in our computation technology alone, because computation is not the only thing that defines our humanness or makes consciousness possible. Rather, computation encodes our inputs into a form that hardware can process, and from there we can do all sorts of fancy stuff.

It’s not surprising how pervasive dystopian narratives of computers are. When we offload so many important processes to machines (banking is a big one), skepticism is bound to arise. This highlights one of the biggest problems we now face with the role that computers play in culture – illiteracy with the technology (funny, because I could’ve learned this stuff at any point using a darn computer). The literacy to understand and operate the machines is the missing element in the vision of the designers who conceptualized and engineered interactive computing. Without that literacy, the image of computers as abstract, human-interaction-destroying monsters becomes a self-fulfilling prophecy. When our software is computing open-ended processes, it kind of seems like it’s alive or thinking. In actuality, it’s just waiting for our inputs (metaphorically speaking… right?).

Perhaps my favorite idea is that computing is a process that allows us to translate signs into meta-artifacts. The units of these artifacts are bits, which are themselves symbolic representations of our inputs, which are in turn symbolic in a semiotic sense. So when we have open-source communities or teams collaborating on cloud software, we have the distributed mind made manifest: it’s translated into bits and then represented in a human-perceptible way by our software. I think this reflects Alan Kay’s idea of symmetric authoring and consuming.

I’m also interested in exploring the idea of phenomenological illusion in GUIs and computer interfaces. It’s an important question to explore because we have to balance designing intuitive interfaces (as complex a problem as that is in itself) with cultivating a technologically literate user base. If we offload all computing processes onto our software, computers functionally become monoliths of sci-fi abstraction. Is there any way we can design computers so that interacting with them puts us in a position to reason about the computations themselves?

Couldn’t think of a title – Carson

I like Denning’s new definition of computation in terms of information representations. I believe it speaks to the larger relationship the mind shares with computing beyond any particular technology, and it touches on what we have learned this semester the most. When I signed up for my first semester of classes in CCT, I was not aware of how much they would parallel one another. Even in 505, the technology my group is looking at is the iPod, and we talk about extended cognition. So yes, things have become clearer, and a big picture of what communication, culture and technology is has started to develop. To me, CCT not only focuses on methods and means, but on what lies behind those methods and means. We ask multiple questions: not only “How are we doing this?” but also “Why are we doing this? Can we improve our process? What are other people doing?”


Other thoughts:

Simon takes a different approach with the idea that symbols rely on the environment to determine their meaning. Symbols have to be physical, real-world things “…fabricated of glass and metal (computers) or flesh and blood (brains)” (p. 22). This follows his ideas on computing, where computer parts are unreliable and we have to compensate by organizing the different (unreliable) parts in a way that works for us. Also, each function only becomes relevant once it is applied to the whole system. I think I understand what Simon is saying, but I am not sure it is something I groove to.

In the Wegner reading he says, “The radical notion that interactive systems are more powerful problem-solving engines than algorithms is the basis for a new paradigm for computing technology built around the unifying concept of interaction.”

Can an interactive system be considered an algorithm with interruption and adjustment included? There is still the idea of going through a process of categorizing, but instead of waiting for the outcome to be produced by an algorithm, there is an opportunity to adjust that process to receive a different outcome… maybe?
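Here is a minimal Python sketch of how I picture that contrast (entirely my own toy illustration, not anything from Wegner): a closed algorithm receives all of its input before it runs, so its outcome is fixed, while an interactive process can accept new input, and adjust, while the computation is still underway.

```python
# A closed algorithm: all input is fixed before computation starts,
# and nothing can change the outcome once it is running.
def algorithmic_sum(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

# An interactive process: it computes step by step, but between steps
# it accepts new input from its environment and can adjust its course.
def interactive_sum():
    total = 0
    while True:
        entry = input("Enter a number (or 'done', or 'reset'): ")
        if entry == "done":
            return total
        if entry == "reset":      # a mid-computation adjustment
            total = 0
            continue
        total += float(entry)

if __name__ == "__main__":
    print(algorithmic_sum([1, 2, 3]))  # outcome fixed by the input
    print(interactive_sum())           # outcome depends on an ongoing dialogue
```

The second function is still built out of algorithmic steps, which is exactly the question above: whether the ongoing exchange with the environment makes it something more than an algorithm.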

 

References:

Denning, Peter (2010). “What Is Computation?” Originally published in Ubiquity (ACM).

Simon, Herbert (1996). The Sciences of the Artificial. Cambridge, MA: MIT Press. Excerpt.

Wegner, Peter (1997). “Why Interaction Is More Powerful Than Algorithms.” Communications of the ACM 40, no. 5: 80–91.

Family History (Becky)


My grandmother in 1941. She was also a spot welder in the war.

My grandmother Lillian was born in 1918, a handful of years after C. S. Peirce’s death and Alan Turing’s birth. She lived to see the internet, and taught herself HTML so she could embed MIDIs of her favorite old songs in the body of the emails she’d send me. Because she couldn’t see very well, she worked from a WebTV attached to the large screen of her television set. Needless to say, she was amazing.

Before I was born, she worked at Bell and AT&T as a switchboard operator, establishing connections between people by manually moving electrical cords and switches. Claude Shannon’s information theory with its bits, Harry Nyquist’s ideas about digitization and bandwidth, and much more grew from the telephone, which itself built on telegraphy and other inventions before it. The switchboards operated much like the early computers, which required people to manually move parts of room-sized machines to make calculations. Eventually, human-written binary code, electrical signals, and other innovations would come to replace those mechanical actions, paving the way for the input-output machines modeled by Turing to become interactive computing systems built on software better modeled by something else.

Of course, my grandmother and I first started communicating before I knew language, let alone software. I had no concept of abstractions or the alphabet or other signs and symbols. But as a member of the symbolic species, I had in me a hidden capacity to map meaning, and gradually the syntax and semantics fell into place. I moved from primitive reactions to hunger, cold, and the like to using tools to play and eat. My understanding of icons, indexes, and symbols built up into an understanding and verbalization of the symbolic conventions that English speakers apply. My acquisition of language, or potentially just the ability to create artifacts, unlocked a capacity to store memory externally and build knowledge.

In the late 1980s, I extended those cognitive processes to computing systems thanks to my dad, who worked for Hewlett-Packard. He was an electrical engineer by trade, trained in the Navy, and went from working on radar oscilloscopes to computer scopes, from punch cards to PalmPilots, from huge pre-network machines to Oracle databases. He brought new HP equipment home to learn how it worked so he could fix it, which meant that as a kid I got to explore computers in the living room and not just at school. I got lost in the DOS prompt, traveled the Oregon Trail, and played my favorite game.

If Alan Kay’s vision had been fully implemented, I might’ve been learning code along with natural languages in elementary school. I might’ve been programming and learning by doing—taking my expanding symbolic capabilities and using them to conduct experiments with my computer as my teacher. Instead, I played Math Blaster and memorized multiplication tables.

But I shouldn’t be greedy. I have inherited a great deal. I’ve moved from holding multiplication tables in my head, to offloading my memories with a pen in notebooks, to exclusively using software on a laptop to store what I want to remember from class. And that software does more than just represent the written word; it is an interface to other symbolic systems as well. I can embed videos and audio into the files, or draw with a tool that still looks like a paintbrush but behaves in an entirely different, digital way. If I need the internet, I simply move to another layer thanks to Kay’s graphical user interfaces, windows, and more.

The concepts we’ve learned are helping me not just better understand the human condition but better understand my own family’s experience. I’ve come to learn that the cathode ray tubes used in old televisions were integral to the creation of technology that would lead to my grandmother’s WebTV, and many other more successful computing systems. That the HTML code my grandmother wrote consisted of symbols that both meant something to her thanks to a complex meaning-making process and could be read by computing devices that execute actions.

And there’s so much more in store. We’ve seen human cognition coupled with cars, but not the cognitive offloading that would accompany ubiquitous driverless vehicles. And we’ve seen HTML and hyperlinks and mice, but not widespread use of augmented reality lenses, wearable technology, and other versions of Douglas Engelbart’s vision of extending human intellect.

The curtain is slowly being pulled back on the meaning and complexity of this legacy and possibility. And the whole way, individual humans have been at the center, building on things that came before and finding new ways to expand their symbolic-cognitive processes.

Semiotics Reflection

I came to CCT with a broad interest in critical theory and media. When I registered for semiotics, I intended to somehow add it to my arsenal of theoretical assumptions. Looking back, I think my assumption was that it would expand my understanding of semantics, which I intended to use as a theoretical framework for whatever research projects I did while at CCT. Needless to say, that assumption was extremely naïve. Studying semiotics has completely shifted my worldview and given me a tool for framing practically everything I look at and every problem I attempt to understand.

For example, now when I think about the various forms of media I consume – film, music, literature, etc. – I not only analyze them in regard to the concepts and emotions I perceive, but also in regard to the technical structure of the images, sounds or words they contain. Specifically, I found it fascinating to see how de Saussure’s insight on the arbitrary nature of signs (Irvine 10-1) connects directly to information theory, through which we understand that meaning is not carried by signals through mediums, but rather understood through our own cultural interpretations (Floridi 20-2).

Furthermore, this understanding, combined with the concept of cognitive offloading, has completely altered the way I view computation. Primarily, we are completely in control of the meaning that we make with these devices, rather than somehow subject to the perceived demands they place on us. In that regard, using these devices to store, transmit and ultimately use information that can be translated into meaning and abstraction is infinitely valuable in terms of our overall social progress. If we reduce these devices to simply their operational functions, we deny ourselves their ultimate educational and even political significance. We speak through them, create experiences through them and continue to develop new methods for doing both of these things through them.

While these concepts now seem apparent to me, I had not considered many of them prior to this class. Furthermore, I’m not sure I would ever have come to understand or accept them without being led through the following theoretical constructs: semiotics, cognitive evolution, information theory and design theory. Having that background and perspective, I am able to trace the development of the technologies we use today through their intended designs and appreciate them for their educational and productive potential, without indulging concerns about how they negatively affect our society. The contributions of designers such as Engelbart, Sutherland and Kay alone make me excited about the technologies we have and the learning capabilities they hold.

For this reason, I now personally feel more empowered and even have the desire to design not only digital technologies and programs, but also analogue artifacts, using concepts in the spirit of Alan Kay that emphasize education and usability. Furthermore, when I am presented with arguments that technology is changing our society in a manner we cannot control or predict, I want to counter that argument and explain how we have brought ourselves to this point of technological development. Rather than resist the tools we have created, we should seek to improve them and create new tools with a humanistic focus, such as the one presented by Janet Murray in Inventing the Medium (Murray).

Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford University Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10485527.

Irvine, Martin. 2016. Semiotics, Symbolic Cognition, and Technology: Key Writings. Compiled and edited with commentary by Martin Irvine. Communication, Culture & Technology Program, Georgetown University.

Murray, Janet H. 2011. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10520612.

 

Being mediated … we are getting used to it

We are living an increasingly mediated life. As we interact intensively with computers in our daily lives, two sets of symbols are being used simultaneously and interactively: the human meaning system (usually referred to as natural language) and the “computational” meaning system (a mediated meaning system that simulates and represents the human one). In fact, even without physically manipulating a computer, we habitually use computational thinking to solve problems, and in doing so we are mediated by these technologies.

This mediation has been institutionalized; it has been collectively and culturally accepted through intersubjective practice. The reason we are happy to interact with computers is that, to some extent, we gain efficiency from the process. That relative efficiency is realized through the affordances provided by the technologies. By offloading or distributing part of our cognitive functions onto physical technologies, we externalize a cognitive process once believed to be “imperceptible” into perceptible devices. This embodiment is a remediated process, realized through invisible yet intelligible affordances. As we mentioned in the week 9 post, Venmo provides affordances for calculating, initiating transactions, commenting, and so on. While using this application, we get used to its icons, indexes, and symbols intuitively, in a human way of thinking. If the process of accommodating ourselves to the context of the application is natural and smooth, we consider it a good example of humanized technology, one that narrows the gap between human and technology. If we get lost in mapping ourselves onto this new set of symbols, the application has failed to simulate and remediate human semiosis.

This is also why we are limited by those affordances. I happened to read a paper called “Are digital media institutions shaping youth’s intimate stories? Strategies and tactics in the social networking site Netlog” (Sander De Ridder, 2015). In this paper, De Ridder argues that SNSs have established institutions that shape youth’s intimate storytelling online, and that digital media reproduce mainstream culture and make it even more prevailing. The “strategies” refer to the software design of the SNS, which provides affordances or options for audiences to represent themselves online. The “tactics” refer to audiences’ responses of accepting or resisting this predefined software context. Online storytelling involves information representation and information processing, as many other mediated symbolic-cognition processes do. Researching how digital media institutions shape online storytelling inspires me to think about how we are limited by affordances. For example, on Facebook we were formerly only allowed to choose between male and female – as if there were only two sides to a sheet of paper. Now we can customize our gender, freeing our online self-representation from that “limitation” to some extent.


I think this is a vivid example of how a computational context, or digital media, shapes our information representation via predefined affordances.

The influence of the technology can also be observed in cases that do not involve direct interaction with computers. The human (and only human) computational thinking process is an example of how we jump back and forth between our own meaning system and the “computational” system. The redefinition of computation and the term “computational thinking” put the emphasis on information representation, which is more inclusive in engaging the human agent in the process. Beyond delegating agency to technologies and using them to extend and distribute our cognitive abilities, we “computationalize” ourselves as well. In other words, we are not going to “physically” turn into machines; rather, we can reinterpret our thinking process in computational terms. When we solve a problem, we take discrete steps toward formulating a solution. From the previous step to the current one, we map out a “correct” path, excluding all the “wrong” paths. In essence, excluding uncertainty to find the correct path is an information process. The only difference between how humans process information and how computers do is that we use different meaning systems. Nevertheless, we can still call the process of finding the solution to the problem “computation,” and our way of working out the problem “computational thinking.”
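Here is a minimal Python sketch of how I picture this (my own toy example, not drawn from the readings): solving a small “maze” by taking discrete steps and excluding dead-end paths until a correct path remains is exactly that kind of information process of ruling out uncertainty.

```python
# Toy illustration: "computational thinking" as discrete steps that
# exclude wrong paths until a correct one remains.
def find_path(graph, start, goal, visited=None):
    """Depth-first search: each recursive call is one discrete step."""
    if visited is None:
        visited = set()
    if start == goal:
        return [goal]
    visited.add(start)                  # exclude this spot from re-exploration
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            rest = find_path(graph, neighbor, goal, visited)
            if rest is not None:        # a "correct" continuation was found
                return [start] + rest
    return None                         # this branch is a dead end: excluded

# A small, made-up map of rooms: which rooms connect to which.
rooms = {
    "entrance": ["hall", "closet"],
    "hall": ["library", "kitchen"],
    "closet": [],                       # dead end
    "library": ["exit"],
    "kitchen": [],
}

print(find_path(rooms, "entrance", "exit"))
# ['entrance', 'hall', 'library', 'exit']
```

Whether a person or a machine carries out these steps, the structure of the process is the same; only the meaning system in which the steps are expressed differs.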

In general, this is a “conceptual” post that lets me connect the key concepts to explain what happens between different sets of symbols. The basic argument is that “humanization” and “computationalization” can happen simultaneously, and that these phenomena represent the computation process: the transformation of one set of symbols into another.

References:

  1. Irvine, Martin. “Introduction: Toward a Synthesis of Our Studies on Semiotics, Artefacts, and Computing.”
  2. Denning, Peter. “What Is Computation?” Originally published in Ubiquity (ACM), August 26, 2010, and republished as “Opening Statement: What Is Computation?” The Computer Journal 55, no. 7 (July 1, 2012): 805-10.

Human Evolution

This is my first year in CCT. Before this program, if you had asked me what a computer is, I would have answered that a computer is a device that can carry out arithmetic or logical operations. But at the end of 12 weeks of this course, I would answer that a computer is a computing artifact and an affordance for human cognition.

In these weeks, we went through the history of semiotics from C. S. Peirce’s theory of signs to Saussure’s semiotic model, the history of meaning systems and symbolic representation from the earliest records of symbolic expression to computational language processing and artificial intelligence, the history of models of computation, and the history of computing systems from Vannevar Bush’s Memex and Alan Kay’s Dynabook to the laptop and the other computing devices of today. After dipping into these theoretical backgrounds, I can see that my thinking about the world and about human beings has changed.

Human Memory vs. Computer Storage

Nowadays, there have been major breakthroughs in our understanding of the human brain, and some of them can directly change our daily lives. However, accurate knowledge about the brain is still far from common. Ordinary people, including me, hold a lot of misconceptions about it. The most popular one is the analogy between the human brain and computing systems. This misunderstanding mainly comes from science fiction, which often describes human memory as data that can be erased and revised. For example, in The Matrix, people can learn martial arts or how to fly a helicopter by directly receiving code.

Human memory is not recorded in the brain so much as grown in it: the brain remembers things through neurons. One famous example comes from patients undergoing awake brain surgery (possible because the brain itself has no pain receptors). Researchers found that one neuron fired when a patient saw a picture of Jennifer Aniston, and another fired at a photo of Bill Clinton, as if the brain dedicates specific neurons to particular people we know. In computing, by contrast, most semiconductor memory is organized into memory cells. It is easy to save a file to a hard disk, since that just means changing an arrangement of bits (0s and 1s), but we cannot rewrite a person’s memory in the same way.
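Here is a tiny Python sketch of what “just change the arrangement of bits” means (my own illustration, with made-up values): flipping a single bit in memory turns one stored symbol into another, something with no such simple equivalent in the brain.

```python
# Illustration: a stored "memory" in a computer is just an arrangement of bits,
# and changing it is as simple as flipping one of them.
data = bytearray(b"COLD")                      # four bytes held in memory
print(data.decode(), format(data[0], "08b"))   # COLD 01000011

data[0] ^= 0b00000100                          # flip one bit: 'C' becomes 'G'
print(data.decode(), format(data[0], "08b"))   # GOLD 01000111
```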

Computing Devices & Human Beings

After reading those science fiction works, I could not help but imagine the day when artificial intelligence destroys the human world. Now, many people in the field are working on the advantages human beings hold relative to artificial intelligence. We have uniquely human expectations of the future; we are the only species with the ability to understand the long-term future and make plans for it. In The Future of the Mind, the physicist Michio Kaku points out that the biggest difference between human beings and other species is that our brains understand the concept of time, so long-term interests can drive us to give up some short-term interests, and collective development can drive us to give up some personal interests.

Of course, no one can predict the extent to which technology will develop. Maybe one day machines will be able to fool humans, and even to influence mankind.

However, after learning about semiotics, I see these technologies from a different angle: I regard these cognitive-symbolic artifacts as extensions of human cognition. If you look at the history of computing systems, they have been getting smaller and smaller. At the beginning, a computer was the size of a room. In the 1970s, we got personal computers. Now I carry a laptop in my bag every day, and we wear Google Glass and other wearable devices. The next step is that they will be under our skin. As they become closer and more intimate to us, we are absorbing computers into ourselves. We bring technologies to us, and they become parts of us; we are no longer independent of our technology, and it changes our identity and who we are. In the past, we thought of ourselves as ending at our fingertips. Now we are expanding ourselves with the help of these computing systems. Some people worry that artificial intelligence and these technologies may replace people in their jobs. It is true that some jobs will disappear, but new jobs will also be created. That is what we are doing: using technologies to create new things for people to do, and further evolving into new creatures.

A convergence with an open-ended future – Lauren Neville

Unexpectedly, I find myself facing existentialism at this time of reflection. As this is my second course with Professor Irvine and my second year in CCT, I have noticed a sense of dramatic and unanticipated growth in my understanding of my own meaning systems. Last year, in Leading by Design, I began further conceptualizing Boolean logic, cognitive distribution, layers of abstraction, and architectures of complexity. However, our work in semiotics has changed even my perception of self and of the reality I had built.

Signs are not simply images or words; they are context-filled relationships that we have with each other. We render a symbol as a culture and interpret the symbol as a culture, and because of that we are ever-presently engaging in a network of complex relationships with each other. Our speech, art, and written word are all culturally predetermined, and therefore we share a constant distributed cognition. What I once perceived as my personal judgements of music have now been explained to me as the collection of past interactions and relationships I have had with other music from our culture.

Because signs and symbols act as networking nodes, it now makes sense to me that semiotics is the obvious path to computation. It seems that throughout time we have been moving closer to this convergence, valuing the shared cognition that could link many people and concepts into hub-like spaces. The early book wheels of the Renaissance, Babbage’s Difference Engine, and Sutherland’s Sketchpad are not simply tools; they act as symbolic meaning-making hubs. It seems obvious now that the most important part of this evolution was the Internet, in which cognitive distribution through relational symbol systems could be shared at the speed of light.

Of course, this convergence, and finally the development of the Internet, does not imply a be-all and end-all to our progress in meaning-making systems. On the contrary, we have just opened many new doors for making and creating, as the affordances of graphical user interfaces and interaction designs begin to allow our culture to discover and explore our fantastical symbolic renderings of the world far beyond what we could have anticipated. I believe that advanced mathematics, social networks, and planetary systems can only now be explored because of our ability to draw on billions of cultural relationships to make symbolic representations of our universe. As I noted in my first post in this course, we should contemplate C. S. Peirce’s statement, “A sign is something by knowing which we know something more. The whole universe is perfused with signs.”

Dream Machine – Alexander MacGregor

“Computer science is no more about computers than astronomy is about telescopes.” – Edsger W. Dijkstra

Coming into this course, the above quotation would have been incomprehensible to me. How could computer science not be about computers? These intricate, abstruse, blackboxed machines need a discipline as rigorous as a science to understand and interrogate. While I haven’t been completely divested of the latter sentiment, I am now far more confident in the universality of computational concepts than I was before. I believe it was receiving a grounding in semiotic concepts that set the stage for this transition in thinking. The ideas found in Peirce’s writings were absolutely instrumental in understanding the basic processes that are taking place whenever we interact with a computer. As was the information we gained about Morse and the history of binary. It now seems to me as though computers are just devices that we, as humans running OS Alpha, use to augment functions and processes we’ve been performing since time immemorial.

This is not to understate the importance of the mechanization of computing, particularly the “microcomputer revolution.” As we have tracked the history of computing, from the human computers of the Napoleonic era who inspired Babbage’s Difference Engine to the iPad, we see that the technological mediation of computational devices has been prismatic. The ability to cognitively offload tasks to machines capable of executing them at a far faster and more powerful rate has been crucial to constructing our present. The history of interface design has also been imperative in making the use of these computational machines as widespread as it has become. From Vannevar Bush’s Memex to Ivan Sutherland’s Sketchpad to Douglas Engelbart’s WIMP innovations, we see how design concepts like affordance and extended cognition have played a vital role in shaping our computational landscape.

One word I keep coming back to is abstract. I have found the process of de-blackboxing the “computer” to necessarily be an act of abstraction. Going from thinking of computers as simply mechanical devices to seeing computational thinking as a “universally applicable attitude and skill set,” as Jeannette Wing puts it, has been enlightening for me, and has helped to expand the realm of what I consider computationally possible. I now consider computational thinking more a philosophy than a strict set of concrete rules governing inputs and outputs to machines. This pliability is important when thinking of the phenomena that may arise out of the dissipation of computers into our surroundings. As society moves from conceiving of computers as metal-encased boxes of wires and chips to potentially every item we see around us, computational thinking will need to be applied in order to tackle the problems of tomorrow.

One last point I wanted to make is related to this quote from the Denning reading:

“Many of us desire to be accepted as peers at the ‘table of science’ and the ‘table of engineering’. Our current answers to this question are apparently not sufficiently compelling for us to be accepted at those tables.”

It seems to me as though a natural result of the spreading of computational thinking would be the dissolution of this ossified hierarchy that seems to be implicit in these “communities of practice”. We learned how important these distinct communities were during the early years of computing, and how their fingerprints can still be seen on our modern devices, but I believe that once we gain a deeper grasp on the ever-present computational processes surrounding us, “computers” will no longer be seen as being native to the disciplines of science or engineering, but will rather be as intrinsic to all fields as reading and writing are.

In conclusion, as I try to synthesize all the concepts we’ve learned so far in this class, I come to the question of how the computational interface and system designs of tomorrow will integrate historic design concepts, and interpret semiotic concepts such as extended cognition, affordance, and distributed cognition, to create a more intuitive computational relationship between the user and the machine and meet our new computational desires and needs. I’m particularly excited to see these developments as they apply to artificial intelligence via concepts like parallel computing and artificial neural networks. As computing continues to become a ubiquitous presence in our lives, breaking down that prevailing “man-machine” illusion may produce even more radical consequences for the way we perceive not only computers, but the very world around us.

References

  1. Irvine, Martin. “Introduction: Toward a Synthesis of Our Studies on Semiotics, Artefacts, and Computing.”
  2. Simon, Herbert A. The Sciences of the Artificial. Cambridge, MA: MIT Press, 1996.
  3. Denning, Peter. “What Is Computation?” Originally published in Ubiquity (ACM), August 26, 2010, and republished as “Opening Statement: What Is Computation?” The Computer Journal 55, no. 7 (July 1, 2012): 805-10.
  4. Murray, Janet. Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Those aren’t turtles…they’re bits!

Peirce’s triadic model enables a better understanding of communication as more than just sender and receiver. There are endless signs that can be used to communicate meaning, and when we fix them to an artifact we extend cognition. When we started to use digital media as spaces to express ourselves, we treated them as we had other spaces, and Otto’s notebook (from Clark’s example) became Otto’s computer system. However, a notebook would never refresh to show new information, or link the information you want to other information you might be interested in. With Otto’s computer system, it’s all about association, building webs of interest from one idea. Otto might search for directions to MoMA, then save those directions; then his computer prompts him to look at other museums, and he saves those directions too. If Otto remembers that his goal is to get to MoMA he’ll be fine, but if he extends his associative trail too far, he might forget where he was going.

This endless search for new signs creates information overload, and it’s a real problem right now. There’s a palpable, sometimes dark energy out there that I think a lot of people are feeling. It’s at its most powerful on the internet, where the sheer volume of information to be consumed is swallowing discourse. Not too long ago we had a fixed set of information: you’d get your daily paper and you’d be limited to the facts in it. Now we still represent this information in the same way, but it’s no longer fixed. Wegner differs from Turing and von Neumann in his assertion that there is a greater richness to computation than the algorithm, since the classic machine model can’t handle the passage of time, or new inputs, during the act of computation. We have the same problem: as we try to digest new information, we can’t account for what is still developing.

Possibly one of the issues in bridging legacy media like the newspaper to a digital medium is that we’re trying to pour old wine into new bottles. There’s a space that needs to be allocated to communicate news, science, and culture, but maybe that space needs to be represented differently. These sites as they are designed now do not reflect the way they are being consumed, which is minute to minute, second by second. Murray makes the point that there is a better option; the GUI, for instance, helped design a better desktop, not by creating a layout that looks like a physical desk. We might need to change the media through which we receive information; otherwise we may be bogged down looking at one story and seeing it propped on the back of another, on the back of another, on the back of another, and we’ll be searching without understanding. Only it won’t be turtles all the way down; it will be turtles made of bits.

My understanding of computing through the ages (semester)

As the semester winds down, it’s interesting to go back through my previous posts for the course and see how my thoughts have evolved over time. We sampled an extraordinary range of very complex, high-level topics in a relatively short amount of time, and dabbling in each of them only scratches the surface of these rich intellectual traditions. In revisiting my old posts, I can see how I worked through de Saussure’s and Peirce’s semiotic models, at first perhaps misunderstanding the true distinction between the two, but eventually grasping how important these differences are to understanding symbolic meaning-making.

I also developed an appreciation for the ambiguity and self-reflexivity of language itself, and for how the human brain is uniquely equipped with the ability to create and understand meaning from arbitrary occurrences (be they sounds, letters, or even events). In this sense, the brain is a computer—our OS Alpha. Computers are not distinct, separate pieces of hardware, and computing is not a magical thing that happens inside them. Rather, their logic evolved out of a natural progression of symbolic and representational technologies that are based on particular elements of the human mind. When we joke that somebody “thinks like a computer” (meaning either that they are gifted in computational thinking, or that they lack emotion), what we really mean is that they think in a specific way that has been isolated and applied to how computers function.

As they have advanced, computers have been adopting more and more of the characteristics and functions of humans, coming to resemble the human brain more closely (just multiplied, of course). With AI, computers attempt to replicate emotional, conversational, and interactional functions that were previously unavailable. With predictive technologies, such as Google’s search suggestions or Amazon’s “you may be interested in…”, they have adopted our forms of associative thinking. This is not by accident—it is intentionally directed by humans. We used to make mix tapes on cassettes and give them to people we had crushes on—now we have Spotify make playlists for us based on our listening history. This was not just an accidental progression—this technology was built on how humans already think. The same can be said for the technologies we use for offloading—instead of “filing” memories away in our minds, they allow us to keep track of them while also juggling overwhelming amounts of information.

Samantha from “Her”

Needless to say, my understanding of meaning-making, symbols, representation, and computing has changed throughout this course. I now understand computing as an extension of human thought based on human rules, not as a mysterious black-box in opposition to it. But one thing does still bug me. I can’t quite figure out where the distinction between humans and computers should be (“should” because I’m operating under this normative assumption). Computers are bundles of calculations and processes that are necessarily derived from human thinking. The more you bundle them together, and at higher levels of abstraction and analysis, the more “complex” a computer or technology you have. The “highest” form of this would be AI that functions precisely as a human—one that contains all the analytical, judgmental, sensory, emotional, etc., capabilities that we have. But is this possible? Is it only a matter of technological capability, or is there a necessary divide between us? By divide I don’t mean in the “humans v computers” sense I described before, but just the mere fact of how our reality functions. Anyways, computers are really cool, and provide an infinitely fascinating mirror with which we can examine what it means to be human.