Entropy and Meaning

Claude Shannon wrote that “the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.” Perhaps a more humanistic rephrasing would sound something like this: assuming that interpreting actors in two or more locations understand a perceptible artifact, or series of perceptible artifacts, to correlate to the same imperceptible and cognitively generated meanings, the fundamental problem of communication requires only reproducing, at one or more points, the perceptible artifact.

For Shannon, the journey of “information” from point A to point B does not require the perceptible artifact to remain the same the entire time: it can be added to or taken away from, reorganized and jumbled, sent in part or in whole, so long as, by the time it arrives at its destination, it returns to either “exactly or approximately” the same form as when it left its sender. Shannon’s theory, and the Information Theory that grows out of it, turn communication on its head: they account only for what is said (broadly speaking), not for what is meant.
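To make that concrete, here is a small sketch of my own (in Python, with an arbitrary example message, not drawn from Shannon, Irvine, or Gleick): the artifact is compressed into an unrecognizable intermediate form for “transit” and then restored exactly at the destination.

```python
# A small illustration of the point above: in transit the artifact can take an
# unrecognizable intermediate form, as long as the receiver can restore it
# "exactly or approximately."
import zlib

message = "the perceptible artifact"
in_transit = zlib.compress(message.encode("utf-8"))     # jumbled, unreadable bytes
received = zlib.decompress(in_transit).decode("utf-8")  # restored exactly

print(in_transit[:10])      # bears no visible resemblance to the message
print(received == message)  # True: reproduced "exactly" at the destination
```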

It would be easy here to conclude that Information Theory disregards meaning and meaningfulness entirely. However, important principles rely on an assumption of meaningfulness as a crucial part of the process of reproducing a message at two different points. In particular, the insights gleaned from entropy and redundancy depend on the meaningfulness of a system, whether to remove redundancy for the sake of efficiency or to add it in order to ensure the integrity of a transmission. In other words, Shannon realized that human meaning systems were patterned and therefore predictable. A highly patterned message, one that tends away from randomness (i.e., exhibits low entropy), carries with it a greater degree of redundancy and can therefore be predicted probabilistically with greater accuracy. A message with high entropy, on the other hand, with its great degree of randomness, provides new information with every bit, making it more difficult to predict accurately.
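Shannon’s own measure makes the contrast quantifiable: entropy, H = −Σ p·log₂(p), the average number of bits per symbol. The sketch below is a hypothetical illustration of mine, not drawn from the readings; it estimates entropy from character frequencies for a highly patterned string and a more random one, and the example strings are my own assumptions.

```python
# Estimating Shannon entropy (bits per symbol) from character frequencies.
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """H = -sum(p * log2(p)) over the observed symbol frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

patterned = "abababababababababab"   # low entropy: highly redundant, easy to predict
random_ish = "qzjxkvwpmlnbtrfcdgsh"  # high entropy: every symbol is "surprising"

print(entropy(patterned))   # ~1.0 bit per symbol
print(entropy(random_ish))  # ~4.3 bits per symbol
```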

In this way, for Shannon, determining entropy depends on meaningfulness, not in terms of the actual Peircean object of a bit of information (what we tend to think of as “content”), but because assuming meaningfulness allows one to assume a pattern, or a level of redundancy: the system does not tend toward randomness, and the probabilistic likelihood of any bit of information can be discerned from the bits already known. Of course, while an insight like this can provide remarkable understanding of modern technologies, such as autocorrect, speech-to-text software, and even machine learning, this assumed “meaningfulness” still disregards “meaning” itself. In other words, even if we program our machines to look for redundancies because our meaning systems are full of them, they still cannot know the Peircean “object” of the physical artifact, which, no matter the extent of the mathematical fancy footwork, requires a cognitive agent to be truly understood.
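To illustrate what that “fancy footwork” actually does, the toy sketch below (hypothetical, with an invented miniature corpus) predicts the most likely next word purely from bigram counts: it exploits redundancy in the pattern while knowing nothing of the Peircean object of any word it manipulates.

```python
# A toy bigram predictor: statistical pattern, no access to meaning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran"
words = corpus.split()

# Count which word tends to follow each word.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, with no notion of meaning."""
    if word not in following:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat', predicted from pattern alone
```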

References:

Martin Irvine, Introduction to the Technical Theory of Information

James Gleick, The Information: A History, a Theory, a Flood (New York: Pantheon, 2011).