Meaning Preservation in Communication Systems – Jieshu Wang

Why can’t we extrapolate from the “information theory” model to explain the transmission of meanings?

As Professor Irvine mentioned in yesterday’s Leading by Design class, Samuel Morse was the first person to give meanings to pulses of electric current. But it was not until Claude Shannon founded information theory that this signal-code-transmission model was formally established as a discipline.

However, Shannon ignored meaning, so it remains ambiguous where new information comes from[i]. The information theory he established and its predecessor, the mathematical theory of communication (MTC), are not concerned with the meaning, reference, or interpretation of information. Instead, they mainly “deal with messages comprising uninterpreted symbols”[ii], that is, with information at the syntactic level rather than the semantic level.

Let’s look at Shannon’s illustration of a communication system, the simplest information system.


Claude Shannon’s original diagram for the transmission model, 1948-49. Source: Irvine, Martin. “Introduction to the Technical Theory of Information.”[iii]

All information, whether an email, a phone call, or a song, is transformed and transmitted from its sender to its receiver through the pattern shown in the image above. For example, Morse code consists of dots, dashes, and spaces that are meaningless until they are decoded. That is why we cannot extrapolate from the information theory model to explain the transmission of meanings.
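To make this concrete, here is a minimal Python sketch (my own illustration, not part of Shannon’s or Morse’s formal apparatus) of the source-encoder-channel-decoder pipeline, using a tiny three-letter Morse alphabet. What travels through the channel is only a pattern of dots and dashes; it becomes meaningful again only after decoding.

```python
# Minimal sketch of Shannon's source -> transmitter -> channel -> receiver -> destination
# pipeline, using a tiny Morse-code alphabet. The channel carries only uninterpreted
# symbols (dots, dashes, spaces); meaning reappears only after decoding.

MORSE = {"S": "...", "O": "---", "E": "."}
INVERSE = {code: letter for letter, code in MORSE.items()}

def encode(message: str) -> str:
    """Information source -> transmitter: letters become dot/dash signals."""
    return " ".join(MORSE[ch] for ch in message)

def decode(signal: str) -> str:
    """Receiver -> destination: signals become letters again."""
    return "".join(INVERSE[code] for code in signal.split(" "))

signal = encode("SOS")   # "... --- ..."  -- a purely syntactic pattern
print(signal)
print(decode(signal))    # "SOS"          -- interpretable again only after decoding
```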

Where are the meanings?

During the process of information transmission in communication systems, meaning is not lost; it resides in the sign-referent model proposed by Paolo Rocchi[i]. According to Rocchi, information has two parts: sign and referent. “Meaning is the association between the two.” That association is learned and stored in our brains and can be transformed and transmitted by machines. Once a message is decoded into recognizable signs, the association, the meaning, is ready for us to discover.
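As a rough illustration of Rocchi’s idea, stored meaning can be pictured as a lookup table associating signs with referents. The entries below are invented examples, not drawn from Rocchi; the point is only that a well-formed sign with no stored association remains uninterpreted.

```python
# Hedged illustration of the sign/referent pairing: the "meaning" is the association
# itself, stored here as a plain dictionary. Entries are invented examples.

associations = {
    "dog": "the domesticated animal Canis familiaris",   # sign -> referent
    "... --- ...": "the distress call SOS",
}

def interpret(sign: str) -> str:
    # Decoding recovers the sign; looking it up recovers the association (the meaning).
    return associations.get(sign, "<no stored association: sign remains uninterpreted>")

print(interpret("dog"))
print(interpret("qxzv"))   # a well-formed signal with no meaning attached
```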

Such associations can also be made by computers, and they are becoming more and more important for scientific discovery because the associative capacity of the human brain is biologically limited. Computers can serve as good cognitive artifacts onto which we offload this cognitive effort.

Using simple observation and intuitive inductive reasoning while bathing, Archimedes associated the behavior of water with physical forces and ultimately discovered the law of buoyancy. But modern physics does not work that way. For example, the discovery of gravitational waves earlier this year was largely attributable to sophisticated machine-learning algorithms whose job, in a nutshell, was to filter out all kinds of noise and screen for the most promising signals picked up by supersensitive detectors. Basically, we offload to computers the effort of associating signal patterns (signs) with astronomical events (referents). Computers are making, storing, and looking for meanings on our behalf.
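A toy sketch of what “offloading the association to computers” can look like: correlate a noisy data stream against a known signal template and flag positions where the match is strong. This is only a schematic stand-in for the far more sophisticated pipelines actually used in gravitational-wave searches; the template, noise level, and threshold below are all invented.

```python
# Toy template matching: the machine proposes where the signal pattern (sign)
# occurs in noisy data, i.e. it makes the association on our behalf.

import random
random.seed(0)

template = [0.0, 1.0, 0.0, -1.0, 0.0]                 # the known signal pattern
noise = [random.gauss(0, 0.2) for _ in range(40)]
# Bury one copy of the template at position 20.
data = noise[:20] + [n + t for n, t in zip(noise[20:25], template)] + noise[25:]

def correlate(data, template):
    """Slide the template across the data and score each alignment."""
    scores = []
    for i in range(len(data) - len(template) + 1):
        scores.append(sum(d * t for d, t in zip(data[i:], template)))
    return scores

scores = correlate(data, template)
threshold = 1.5
detections = [i for i, s in enumerate(scores) if s > threshold]
print("candidate event positions:", detections)        # expected: position 20
```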

What is needed to complete the information-communication-meaning model to account for the contexts, uses, and human environments of presupposed meaning not explicitly stated in any specific string of symbols used to represent the “information” of a “transmitted message”?

In order to complete the information-communication-meaning model, first of all, we need a sign system shared by the members of a community. According to C. S. Peirce, a sign consists of an object, an interpretant, and a representamen[iv]. The object is what the sign refers to, i.e., the referent. The meaning-making process lies in the relationship among these three parts.
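One way to picture the structure of Peirce’s triad, purely as my own illustration and not Peirce’s notation, is as a record with three fields:

```python
# Hedged sketch of the triadic sign as a simple data structure.

from dataclasses import dataclass

@dataclass
class Sign:
    representamen: str   # the form the sign takes, e.g. the word or percept "smoke"
    object: str          # what the sign refers to (the referent), e.g. a fire
    interpretant: str    # the sense made of it, e.g. "something is burning"

smoke = Sign(representamen="smoke", object="fire", interpretant="something is burning")
print(smoke)
```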

Second, during the design of a communication machine, our idea of the meaning is built into the machine, so that when the machine is used to transform and transmit information, the meaning of the information is preserved in the association of sign and referent implanted in the machine[i]. For example, when people design a computer language, they also design a dictionary in which every code corresponds to a specific logical action.
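A minimal sketch of such a dictionary, with invented opcodes: each code is bound at design time to a specific action, so the association travels with the machine rather than with any particular message.

```python
# Dispatch table: each code is associated with a logic action at design time.

def add(a, b): return a + b
def sub(a, b): return a - b

DISPATCH = {          # code -> action, fixed when the "machine" is designed
    "ADD": add,
    "SUB": sub,
}

def execute(code: str, a: float, b: float) -> float:
    return DISPATCH[code](a, b)

print(execute("ADD", 2, 3))   # 5
print(execute("SUB", 2, 3))   # -1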

After the information is decoded, the receiver uses the sign system that he or she shares with other community members to interpret the message and, finally, the meaning.


References

[i] Denning, Peter J., and Tim Bell. 2012. “The Information Paradox.” American Scientist 100 (6): 470–77.

[ii] Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford University Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10485527.

[iii] Irvine, Martin. “Introduction to the Technical Theory of Information.”

[iv] Chandler, Daniel. 2007. Semiotics: The Basics. 2nd ed. London; New York: Routledge.