I think the first step to answering these questions is to figure out what information theory is. From my understanding, it is the explanation for how we can impose human logic and symbolic values on electronic and physical media. From this we get an offshoot that I think falls under the information theory concept, something called E-information, which is the digital electronics concept, i.e. electrical engineering information, where E-information = mathematics + physics of signals + time. The whole point of this process is to preserve the pattern and quality of a signal unit so that it can be successfully received at the other end. Now, the signal-code transmission model, I think, was the result of the information theory built by Claude Shannon to transmit electronic signals most efficiently over networks or broadcast radio waves, merged with the question of how to represent data in discrete electronic units. This model is a way of transmitting error-free electronic signals in telecommunication systems, but it leaves out the meanings and social uses of communication because they are assumed or presupposed. As a result, I believe information theory describes how our primary symbol systems “encode” but does not assign meaning to the symbols. This is what I got from the first text, the introduction to technical theory.
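To make the “encoding without meaning” point concrete for myself, here is a tiny sketch I wrote (my own toy example, not something from the text): the same eight bits are a perfectly valid encoding under several different symbol tables, and nothing in the bits themselves says which meaning was intended.

```python
# Toy example: the same bit pattern "encodes" different symbols
# depending on the table a human imposes on it. The bits carry
# pattern, not meaning.

bits = "01001101"  # one 8-bit signal unit

as_int = int(bits, 2)        # read as an unsigned integer -> 77
as_char = chr(int(bits, 2))  # read as an ASCII character  -> 'M'

# Read as two 4-bit values (say, two grey levels in a tiny image)
as_pixels = (int(bits[:4], 2), int(bits[4:], 2))  # -> (4, 13)

print(as_int, as_char, as_pixels)
```

The pattern is preserved perfectly in every case; the meaning is whatever convention the sender and receiver agree to impose on it.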
For the next one, we follow a timeline of Shannon’s life to see how information theory came to exist, starting with the Differential Analyzer, which was coordinated by a hundred relays, intricately interconnected, switching on and off in a particular sequence. From this, and from his enjoyment of logic puzzles, Shannon realized in a deeply abstract way that the possible arrangements of switching circuits lined up with symbolic logic, particularly Boole’s algebra. Here we get his master’s thesis: a demonstration that a machine could solve logic problems, the essence of the computer. Then he got interested in why telephony, radio, television, and telegraphy, which all follow the same general form of communication, always suffer distortion and noise (static).

But he took a job in Princeton, and then WWII kicked off and he was assigned to “Project 7,” which applied mathematics to fire-control mechanisms for anti-aircraft guns to “apply corrections to the gun control so that the shell and the target will arrive at the same position at the same time.” The problem was similar to what plagued communications by telephone: interfering noise, and the problem of smoothing the data to eliminate or reduce tracking errors.

We then get a quick history of the telephone and are reintroduced to Shannon reading a text published in the Bell System Technical Journal about the Baudot code. Here information is the stuff of communication, where communication takes place by means of symbols that convey some type of meaning. Fast-forward to 1943: Shannon is working as a cryptanalyst and enjoying tea with Alan Turing, where they talked not about their work but about the possibility of machine learning. Here Shannon develops a model for communication, and then I get lost in the math, but from my understanding Shannon created the idea that natural-language text could be encoded more efficiently for transmission or storage, and from there he developed a way to ensure reliable end-to-end transmission.
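The switching-circuit insight clicked for me once I wrote it out as code. This is my own minimal sketch of the correspondence, not Shannon’s actual notation: switches in series behave like AND, and switches in parallel behave like OR.

```python
# Shannon's master's-thesis insight, as a toy: model a relay or switch
# as a boolean, and circuits become formulas in Boole's algebra.

def series(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both are closed -> AND
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Two switches in parallel: current flows if either is closed -> OR
    return a or b

def inverted(a: bool) -> bool:
    # A normally closed relay flips the signal -> NOT
    return not a

# A small "circuit": current flows when (a AND b) OR (NOT c)
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            print(a, b, c, "->", parallel(series(a, b), inverted(c)))
```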
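The fire-control “smoothing” problem also made more sense once I tried a toy version. The actual wartime math was far more sophisticated than this (and I don’t claim to follow it); this is just a simple moving average I wrote to see how averaging noisy position readings can reduce tracking error.

```python
import random

# Toy smoothing: a target moves steadily, but every radar reading is
# corrupted by noise. Averaging recent readings typically reduces the
# tracking error, at the cost of a little lag.

random.seed(0)
true_path = [1.0 * t for t in range(40)]                  # target position
readings = [p + random.gauss(0, 5.0) for p in true_path]  # noisy data

def moving_average(data, window=4):
    smoothed = []
    for i in range(len(data)):
        chunk = data[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

smoothed = moving_average(readings)
raw_err = sum(abs(r - p) for r, p in zip(readings, true_path)) / len(true_path)
smooth_err = sum(abs(s - p) for s, p in zip(smoothed, true_path)) / len(true_path)
print(f"mean tracking error, raw readings: {raw_err:.2f}")
print(f"mean tracking error, smoothed:     {smooth_err:.2f}")
```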
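And here is the part of the efficient-encoding idea I think I do follow, written out as a toy calculation (the letter frequencies are rough illustrative numbers, not Shannon’s): because letters like “e” are far more common than others, the entropy of English text is well under the 5 bits per character that a fixed-width code like Baudot spends, so a variable-length code can get away with fewer bits on average.

```python
import math

# Toy source coding: rough frequencies for the most common English
# letters, with everything else lumped together (illustrative values).
freq = {
    "e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070,
    "n": 0.067, "s": 0.063, "h": 0.061, "r": 0.060, "other": 0.304,
}

# Shannon entropy H = -sum(p * log2(p)): the minimum average number
# of bits per symbol that any code can achieve for this source.
H = -sum(p * math.log2(p) for p in freq.values())

print(f"entropy of this toy source: {H:.2f} bits/symbol")
print("fixed-width Baudot-style code: 5.00 bits/symbol")
```

The gap between those two numbers is, as far as I can tell, exactly the inefficiency Shannon’s coding theory tells you how to squeeze out.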
From all of this we get the Internet, which is designed with no central control. It’s a distributed packet-switched network that ensures end-to-end connectivity, so that any device can connect to any other device. I’m still having difficulty understanding what E-information is well enough to properly compare it to the Internet. The Internet is controlled by no one and everyone, in the sense that Internet packets (the information, I think) are structure-preserving structures passed between senders and receivers. At the same time, from my understanding, E-information is like the Internet in that it uses electricity to impose regular, interpretable patterns that are designed to be communicable through a physical system to a human user. If this is right, then wouldn’t the Internet be a form of E-information?
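To test my “structure-preserving structures” reading, I sketched what I think a packet minimally is (my own simplification, nothing like a real IP header): each chunk of a message carries enough addressing and ordering structure that the receiver can rebuild the original, no matter what order the chunks arrive in.

```python
import random
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str     # where the packet is going
    seq: int      # its position in the original message
    payload: str  # the actual chunk of data

def packetize(message: str, dest: str, size: int = 4) -> list[Packet]:
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(dest, i, c) for i, c in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> str:
    # The receiver restores the structure from each packet's sequence
    # number, regardless of the order (or route) of arrival.
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize("no central control", dest="203.0.113.7")
random.shuffle(packets)  # packets may take different routes and arrive out of order
print(reassemble(packets))  # -> "no central control"
```

If the structure survives the trip, the pattern was preserved, which is why I suspect the answer to my question is yes.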
Lingering Questions:
Where does E-information fit within information theory?
What did the math Shannon did to achieve error-free transmission actually accomplish? I did not understand the math behind his theories and would like a better explanation.
Where is the ALU on my computer, and how does it turn logic gates into actual images and symbols?
Why are there so many logic gates?