omg it’s week 12 when did that happen

Oof, where do I even start? Perhaps the most important conceptual leap I made through our readings is a demystified understanding of computers. I used to look at the ubiquity of computers and software as an abstraction of culture, a creation of new cultural space, and a sign that the singularity was imminent (I know, I'm the coolest guy there is). But now I realize that computers work by mediating our abstractions: each layer of abstraction is itself an interface, and we design computers to translate between those layers as efficiently as possible.

I'm now looking at computers as a technology that augments humans. If the singularity ever occurs, it won't be because of advances in computation alone, because computation is not the only thing that defines our humanness or makes consciousness possible. Rather, computation encodes our inputs into a form the hardware can process, with which we can do all sorts of fancy stuff.

It's not surprising how pervasive dystopian narratives about computers are. When we offload so many important processes to machines (banking is a big one), skepticism is bound to arise. That points to one of the biggest problems we face now with the role computers play in culture: illiteracy with the technology (funny, because I could've learned this stuff at any point using a darn computer). The literacy to understand and operate the machines is the missing element in the vision of the designers who conceptualized and engineered interactive computing. And without that literacy, the image of computers as abstract, human-interaction-destroying monsters becomes a self-fulfilling prophecy. When our software is computing open-ended processes, it can seem like it's alive or thinking. In actuality, it's just waiting for our inputs (metaphorically speaking… right?).

Perhaps my favorite idea so far is that computing is a process that lets us translate signs into meta-artifacts. The units of these artifacts are bits, which are themselves symbolic representations of our inputs (inputs that are already symbolic in a semiotic sense). So when open-source communities or teams collaborate on cloud software, we have the distributed mind made manifest: it's translated into bits and then represented in a human-perceptible way by our software. I think this reflects Alan Kay's idea of symmetric authoring and consuming.
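To make that signs-to-bits-and-back idea concrete, here's a toy sketch (just plain Python I put together, not anything from the readings): a string, already a symbolic artifact, gets encoded into bits, and then software re-interprets those same bits back into something human-perceptible.

```python
# A sign we can read: already symbolic in the semiotic sense.
text = "hello"

# Translate it into the machine's units: 8-bit binary per byte.
bits = " ".join(format(b, "08b") for b in text.encode("utf-8"))
print(bits)  # 01101000 01100101 01101100 01101100 01101111

# Software re-interprets the bits and renders the sign back to us.
decoded = bytes(int(chunk, 2) for chunk in bits.split()).decode("utf-8")
print(decoded)  # hello
```

The bits mean nothing on their own; it's the encoding convention (UTF-8 here) that lets software turn them back into a sign we perceive, which is the whole mediation story in miniature.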

I'm also interested in exploring the idea of phenomenological illusion in GUIs and computer interfaces. It's an important question because we have to balance designing intuitive interfaces (as complex a problem as that is in itself) with cultivating a technologically literate user base. If we offload all computing processes onto our software, computers functionally become monoliths of sci-fi abstraction. Is there any way to design computers so that interacting with them puts us in a position to reason about the computations themselves?