Alex MacGregor’s Ramblings – Week 2

Hey everyone,

I don’t really have a deep background in this field, so my apologies if these come across as stoner thoughts.

• We don’t yet have the ability to measure or observe the neurological process of meaning-making, but how would we even go about doing so? Brain mapping seems like the most plausible place to start, so is this just a limitation of current technology, or is it something that will forever be impossible to observe? If it’s the latter, then Cartesian Dualism may be a pertinent idea to explore here.

• On page 11 of the first reading, Professor Irvine says, “The meaning is not a property of a thing, but is in the meaning system of which it is a part.” Is the goal of this statement to disabuse us of any notion of necessity between “the thing” and “the meaning”? The Saussurean dyadic model, with its phonetic/linguistic emphasis, makes it easy to see the arbitrary relationship between the signifier and the signified, but Peirce’s triadic model seems to feature a less arbitrary relationship between its elements, particularly between the interpretant and the referent. I can see how the literal word “cat” has no necessary relationship to the actual animal, but the thought of a cat seems pretty closely connected to the actual animal.

• On page 8 of the first reading, Professor Irvine says, “Because the meaning-making structures of our symbolic faculties are formed in many layers and distributed networks of agency, they are not usefully represented in containers, boxes, or static points like those pictured in older communication and information diagrams. Meanings are not something transferred or transmitted from one location to another, or from one container (someone’s head) to another.” I’m reminded of cloud computing, in that the data (in this case, the system of meaning and symbolic cognition) is not located locally in the device being used to access it (in this case, the individual person). Another analogy that comes to mind is open-source software, in that the software in question is communally defined and continuously altered. If these analogies stand, I think it’s an interesting point that the culture and direction of the modern tech industry are coming in line with these semiotic concepts. I wonder if that’s intentional.

• On page 7 of the first reading, when Professor Irvine says “…we can access distributed meaning resources in different situations of use…”, is that like how we can use and understand slang in a casual setting, but it doesn’t work in formal settings?

• What would a world without technical mediation (writing, print, image making, audio/video recording, etc.) look like? Professor Irvine says we’ve been using technical mediation for about 50,000 years, so what did human society look like before that, from a cognitive semiotics perspective? More “animalistic” immediacy? We refer to OS Alpha as the core human operating system, and surely humans utilize it to the most extreme degree, but anyone with a pet dog or cat can attest that we share signs with non-human beings as well. So are these non-human animals running a kind of proto-OS Alpha?

• The readings stress the co-constitutive and collective nature of the network of meaning-making functions, but do we, to some degree, have things that make sense only to us individually? Signs that only we recognize or acknowledge? Ultimately, we are solitary creatures perceiving the world through only our own eyes, so where does the communal end and the private begin? What would it look like if you were the only person in existence running OS Alpha?