I tried to modify the information theory diagram to account for meaning-making in a nice, concise way. But it quickly became very crowded. Narrative prose will have to do, and I’ll start with this essay as an example.
The first word of this sentence was “The,” and it has meaning to me. I understand the meaning of the capital T, for instance; in this case it means the beginning of a sentence, a new thought, given the period and space before it. I know all this and more because I’m part of a community of symbolic beings that understands English-language conventions.
So I tell the computer to make a capital T. Thanks to a keyboard that a very smart person invented, plus software, memory, Boolean algebra, electricity, and more, I can do that without speaking computer-speak (though ASCII tables are nice windows). My key press is translated into my computer’s “language” (to borrow a linguistic term) of 1s and 0s. The representation I understand as the capital letter T appears on the screen.
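As a rough sketch of that translation (the real path through keyboard firmware, drivers, and display hardware is far more involved), a few lines of Python show the number and the bits the machine keeps behind that capital T:

```python
# The capital T I see on screen is, to the machine, just a number.
ch = "T"
code = ord(ch)              # ASCII/Unicode code point for "T"
bits = format(code, "08b")  # the same value written as eight binary digits

print(code)  # 84
print(bits)  # 01010100

# The translation runs the other way too: bits back into the symbol I read.
assert chr(int(bits, 2)) == "T"
```

The meaning of “T” (a new sentence, a new thought) never enters the picture; only the number 84 does.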
An English speaker could look over my shoulder and understand what that letter and all these letters mean, a process which Peirce explains in more depth.
And I could send these words or images in an email to someone else. Thanks to machinery I do not yet understand (but hope to!), what I write would be translated into electrical pulses corresponding to binary values, transmitted through Ethernet cables to another device some distance away that can decode them. Or the bits are sent over radio waves. Or in light pulses. Or something else.
In any of these cases, the goal in terms of the information theory model is to replicate a “message,” as Shannon put it, as completely as possible. The model does not describe the transmission of meaning in the Peircean sense but rather the transmission of information in the form of bits. (Of course, someone capable of making meaning out of abstract ideas had to create that model in the first place.)
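To make that distinction concrete, here is a toy sketch of my own (not Shannon’s formalism): a message is reduced to bits, each bit is transmitted three times, the channel corrupts one transmitted bit, and a majority vote at the destination still replicates the original message. At no point does the channel care what the bits mean.

```python
def encode(text):
    """Source -> bits: the message becomes 1s and 0s; meaning plays no part."""
    return [int(b) for ch in text.encode("ascii") for b in format(ch, "08b")]

def repeat3(bits):
    """A trivial channel code: transmit every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def majority_decode(bits):
    """The destination votes on each triple, undoing an occasional flipped bit."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def decode(bits):
    """Bits -> text, once the channel's job of replication is done."""
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

transmitted = repeat3(encode("The"))
transmitted[5] ^= 1  # noise: the channel flips one transmitted bit
print(decode(majority_decode(transmitted)))  # The
```

The repetition code is the crudest possible scheme; Shannon’s point was that far cleverer codes can push the error rate toward zero, still without any notion of what the message means.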
For my email message to have meaning at its destination, some member of a symbolic species on that end must be capable of making that meaning. A sign doesn’t exist until it is interpreted as such. This means the actor at the destination must be operating in the same context as the source. He or she must understand English or have a good translator. The medium matters for meaning as well. Most users of email know that all caps mean SHOUTING and should be used sparingly. Terseness is OK in texts, but could be rude in email. And so on.
I don’t want to venture too far into book-report territory, but I found the readings helpful illustrations of the meaning-making process: the stories about cryptography that Gleick retells, for example. There are also a few scenes in The Imitation Game, the movie about Turing and the Enigma, that might help and could be conveniently inserted into this essay. Take this one (a dramatized version of what actually happened, of course).
Based on the meaning of already-decoded messages and their knowledge of language conventions, code breakers understood that certain words—greetings, the weather—always came up in German messages. They built a machine that could focus on words that they already understood would be there. (Has technology advanced to a point where computers using algorithms can identify these seed words?) Floridi hints at something similar with his Normandy discussion (44).
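One small piece of that idea can be sketched in Python (the ciphertext and crib below are made up for illustration). Enigma famously never enciphered a letter to itself, so a suspected word, a “crib,” can only sit at alignments where none of its letters lines up with the identical letter in the ciphertext:

```python
def candidate_positions(ciphertext, crib):
    """Enigma never encrypted a letter to itself, so any alignment where a
    crib letter equals the ciphertext letter above it can be ruled out."""
    positions = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != w for c, w in zip(crib, window)):
            positions.append(i)
    return positions

# A made-up intercept and the crib "WET": two of the four possible alignments
# are eliminated before any machinery runs, narrowing where the known word sits.
print(candidate_positions("TBEWET", "WET"))  # [0, 2]
```

The real Bombe did far more than this, but the principle is the same: prior knowledge of meaning (greetings, weather reports) constrains the purely mechanical search.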
On the source rather than destination end, it seems that Day is illustrating how the development of information theory’s conduit metaphor and its application to nontechnical areas were influenced by a specific meaning community—a Cold War environment. He says information studies should be rethought for today’s context.
This week, even more than others, I’m thinking about AI. When all of these factors and more are considered, it is no wonder the task of building a new kind of humanlike machine is so difficult.