Meaning in the Signal-Code-Transmission Model

Mary Margaret Herring

While the signal-code-transmission model of information satisfactorily accounts for the technical transmission of messages, it does not lend much help when decoding the meaning of a message. In Shannon’s original signal-transmission model of information, the source sends a message to the transmitter, which encodes the message into a signal. At this point, noise may be introduced into the system. The signal is then decoded and the message arrives at its destination (cited in Irvine, n.d.). This linear process makes perfect sense on a purely technical level. For example, a person sending a message by telegram would fill out a form with their message. An operator would then transcribe the message into Morse code and transmit it to a receiving operator, who would decode the message and deliver it to the recipient. For this reason, the information theory model is essential for understanding the mathematical and engineering processes needed to get signals from the sender to the receiver.
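To make that division of labor concrete, the sketch below walks a message through the same stages: encoding by a transmitter, corruption by a noisy channel, and decoding by a receiver. It is only an illustration of the model, not Shannon’s mathematics; the partial Morse table, the `flip_prob` parameter, and the function names are invented for the example.

```python
import random

# Illustrative (partial) Morse table; real telegraphy covers the full alphabet.
MORSE = {"H": "....", "E": ".", "L": ".-..", "O": "---"}
INVERSE = {v: k for k, v in MORSE.items()}

def encode(message):
    """Transmitter: turn the source's message into a signal (Morse symbols)."""
    return [MORSE[ch] for ch in message.upper()]

def noisy_channel(signal, flip_prob=0.1):
    """Channel: occasionally corrupt a symbol, standing in for noise."""
    corrupted = []
    for symbol in signal:
        if random.random() < flip_prob:
            i = random.randrange(len(symbol))
            flipped = "." if symbol[i] == "-" else "-"
            symbol = symbol[:i] + flipped + symbol[i + 1:]
        corrupted.append(symbol)
    return corrupted

def decode(signal):
    """Receiver: map each symbol back to a letter, or '?' if noise garbled it."""
    return "".join(INVERSE.get(symbol, "?") for symbol in signal)

print(decode(noisy_channel(encode("HELLO"))))  # e.g. 'HELLO' or 'HE?LO'
```

Nothing in this pipeline knows or cares what “HELLO” means; it only moves symbols from one end to the other and tries to recover them intact.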

However, meaning has no place in this linear model. As Irvine (n.d.) notes, the meanings of the messages are not “baked into” the medium. In the example of the telegram, the only thing that is actually sent is a signal. Information about the message’s cultural context or assumed background knowledge is not included. From this strictly technical point of view, whether or not the message is meaningful is irrelevant. Floridi (2010) writes that an advantage of digital systems is that they can be represented and understood equally well semantically, logico-mathematically, and physically. With digital technologies, he writes, “[i]t is possible to construct machines that can recognize bits physically, behave logically on the basis of such recognition, and therefore manipulate data in ways which we find meaningful” (2010, p. 29). But this still raises the question of how meaning systems can be included in a model in which the meaning of the message is irrelevant.
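Floridi’s point can be illustrated with a toy example; the bytes and readings below are chosen arbitrarily for illustration. The same bit pattern can be handled physically as stored bytes, manipulated logico-mathematically as a number, and read semantically as text, yet nothing in the pattern itself selects one reading over the others.

```python
import struct

# The same 32 bits, interpreted three different ways. The bit pattern itself
# carries no preferred reading; the interpretation supplies the meaning.
raw = bytes([0x42, 0x4F, 0x4F, 0x4B])

as_text = raw.decode("ascii")                  # 'BOOK'  (a semantic reading)
as_int = int.from_bytes(raw, byteorder="big")  # 1112493899 (logico-mathematical)
as_float = struct.unpack(">f", raw)[0]         # ~51.8  (another arbitrary reading)

print(as_text, as_int, as_float)
```

The interpretation, in other words, is supplied by the conventions the machine and its users bring to the bits, not by the bits themselves.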

Since the signal-transmission model explains how digital signals are encoded and decoded, but not how those signals become meaningful messages, perhaps it is time to apply the sign-referent interpretation proposed by Denning and Bell (2012). They argue that information consists of both signs and referents that we use to make sense of digital information. Denning and Bell use the example of seeing a red light (the sign) and our brains commanding us to stop (the referent). In the same way that we know to stop at a red light, we also know that a blue, underlined phrase usually indicates that the text contains a hyperlink to another file or webpage. By relying on these signs, we can make meaning from *perhaps* meaningless content.
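One way to picture the sign-referent pairing is as a lookup table that lives in the interpreter rather than in the signal. The sketch below is hypothetical, in the spirit of Denning and Bell’s example; the pairings and names are assumptions made for illustration, not data from their article.

```python
# A hypothetical sign-to-referent table held by the interpreter, not the signal.
SIGN_REFERENTS = {
    "red traffic light": "stop",
    "green traffic light": "go",
    "blue underlined text": "follow hyperlink",
    "trash-can icon": "delete file",
}

def interpret(sign):
    """Return the referent an interpreter attaches to a sign, if any."""
    return SIGN_REFERENTS.get(sign, "no learned referent; sign is just a signal")

print(interpret("blue underlined text"))   # follow hyperlink
print(interpret("unfamiliar symbol"))      # no learned referent; sign is just a signal
```

A sign with no entry in the table remains a bare signal, which is precisely the situation the transmission model describes.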


References

Denning, P. J., & Bell, T. (2012). The information paradox. American Scientist, 100(November–December), 470–477.

Irvine, M. (n.d.). Introducing information and communication theory: The context of electrical signals engineering and digital encoding. Unpublished manuscript.

Floridi, L. (2010). Information: A very short introduction. Oxford University Press.