Qi Wang Week 13
In this course, I learned a lot of new things. In the beginning, we studied semiotics, semiotic thinking, Peirce’s semiotic theory, signs, and symbols. In this part, it was exciting to learn about the relation between signs/symbols and meaning: signs and symbols are interpretable only in a socially understood context of use. This means that meaning is not a fixed entity; it is a relation. It made me rethink signs and symbols and realize they have significant power in our lives. To be honest, I never took symbols seriously before taking this course. My understanding of signs or symbols went no further than the “stop” sign or the “apple” sign.
Then we talked about language, which is my favorite part. Language is an intricate talent of humans: we have an inborn ability and tendency to speak. For example, children are born with universal grammar, which contains the common characteristics of all human languages. When kids are young, even with poor input, they can learn a language quickly, because there are syntactic rules underlying the language, and with these rules in their brains children can recombine words to form unlimited sentences. This ability fades as they grow up. The theory is astonishing; when I learned it, I regretted that I started studying English so late.
Next, we learned about electricity and binary code. When I did the reading, I wrote down a quote that impressed me: “We didn’t create electricity, we design it.” When we use 0/1 to represent the state of the electricity, we map abstract symbols onto a physical thing (electricity). With 0 and 1, we can represent anything in the computer system, such as numbers, words, and images. The computer can only run binary code to execute actions; for languages such as C++, Python, or Java, the target audiences are humans, not computers. Now I realize the importance of symbols and abstraction, and I feel the earlier materials we learned are a foundation for this. We also did some hands-on practice this semester: I tried HTML and Python, which were brand new experiences for me.
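As a small illustration of this idea, here is a minimal Python sketch of my own (not from the readings): the same 0/1 pattern can stand for a number or a character, depending on the interpretation we assign to it.

# A toy illustration: the same bit pattern can be read as a number or a character.
bits = "01000001"            # eight binary digits
number = int(bits, 2)        # interpreted as an unsigned integer -> 65
character = chr(number)      # interpreted as a character code    -> 'A'
print(bits, number, character)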
Studying semantics helps us better analyze the rules of human language use, which is beneficial to the realization of artificial intelligence. Besides, when we delve into these language rules, we are more likely to design more advanced translation software. Microsoft has added a real-time translation function to Skype, and Google has a real-time camera translation function, but at the current level the translation still has many problems, especially between Chinese and European languages.
I think it is also beneficial for artificial intelligence conversation. There was an AI dating experiment recently in which the developers let two virtual AI characters, made by different companies, communicate in the same space for four days. From the dialogue between the two AIs, it is clear that their use of language and its rules cannot reach the human level.
I really appreciate Professor Irvine and this course. It covers the basic but essential knowledge behind computation, so we can understand why and how a computer works the way it does, filling the gap between theory and actual programming.
Qi Wang Week 12
After studying Unit 4, I found that Python is more suitable for beginners than Java or other programming languages. As the video shows, Python has simple syntax and clear statements (less punctuation), which allows beginners to focus on programming and computational thinking at the learning stage. Python code is also easy to read, and developers can leave comments in the code, so a Python developer can reuse and share code written by other programmers. In this case, there are few conflicting paradigms, which makes communicating ideas and algorithms faster and easier.
One of the most significant advantages of Python is its high similarity to everyday English, but in a more concise and less ambiguous version. Everyday English carries more meanings because of different contexts and situations (Evans, 2011); Python’s exact but straightforward language makes reading the code easier for the user. Another advantage is the length of the code: compared with other programming languages, Python code is relatively short. I have seen some Java code in the videos, and it is longer and more complex.
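To make the comparison concrete, here is a small sketch of my own (not taken from the videos): the Python version of a simple task reads almost like the English description of it, while an equivalent Java program would need a class, a main method, and type declarations around the same logic.

# Count how many words in a sentence are longer than four letters.
sentence = "Python code reads almost like plain English"
long_words = [word for word in sentence.split() if len(word) > 4]
print(len(long_words), long_words)   # 5 ['Python', 'reads', 'almost', 'plain', 'English']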
In Chapter 3, the author mentions that programming is writing commands for the computer to execute. So what we basically do is simulate the computer’s way of thinking and give it instructions, in a programming language the computer can understand, to complete the commands. The computer follows the code exactly. There is no gray area for a computer: it only has True or False, with no “maybe” (Video). If the code is wrong, the computer will not execute it; there is no chance for the computer to correct the wrong code by itself.
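A tiny sketch of my own shows this strictness: every condition the interpreter evaluates comes out as exactly True or False, and a statement with broken syntax is rejected instead of being guessed at.

# Conditions evaluate to exactly True or False -- there is no "maybe".
temperature = 30
print(temperature > 25)                        # True
print(temperature > 25 and temperature < 20)   # False

# A line with wrong syntax stops the program before anything runs, e.g.:
# print(temperature >          # SyntaxError: invalid syntax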
So I have a question here about artificial intelligence. I’ve heard that Python is widely used for artificial intelligence, so what’s the relationship between Python and artificial intelligence?
Artificial intelligence is about training machines to think and act like humans, to think rationally, and to act rationally. Artificial intelligence should not just follow code; it should think and learn actively and become human-like or even surpass human intelligence. But we now know that all computers and software follow the rules we give them; they cannot write programs by themselves without human interference. Does this mean it is impossible to build a real AI?
Reference:
Evans, D. (2011). Introduction to Computing: Explorations in Language, Logic, and Machines.
Davis, A. Programming Foundations: Fundamentals. https://www.linkedin.com/learning/programming-foundations-fundamentals-3/introduction-to-functions?u=57879737
Qi Wang Week 10
After this week’s reading, I found the evolution of the interface very interesting. The earliest form was batch processing: a series of operations run on the computer without manual intervention, and it was non-interactive (everything on the card was fixed). Strictly speaking, it was a processing procedure, working through data stored on punched cards rather than individual manipulation. If you wanted to create a data file or a program, or to use the data on other computers, the only way was to use punched cards. Then came the early GUI. Douglas Engelbart proposed a new idea, augmenting human intellect, which rejected the view that computers could only solve basic mathematical problems. His new system had a mouse, a bitmapped screen, and hypertext, all of which laid the foundation for modern desktop operating systems. Next came Xerox PARC (Palo Alto Research Center). Inspired by Engelbart’s demonstration, Xerox researchers developed the WIMP model in the early 1970s. It has windows, icons, menus, and a mouse pointer, and it is still in use today.
Based on the reading, the professor uses humans’ semiotic ability to answer the question of why we need an interface. Human sign systems must have a physical substrate for humans to recognize patterns and understand the meaning, values, and intention of signs (Irvine), and this substrate is the “interface.” After 1960, there was a big leap in computer system design: the computer screen was no longer used only to display the results of processing; it became a channel for working with the system. In this “two-way” system, humans’ intentions and responses are another input that gives instructions to the system. As in the diagram of the symbolic process cycle in interactive computing system design, a human’s action goes back into the system and commands the computer to produce further output (Irvine).
Human-computer interaction today mostly happens through graphical user interfaces; for example, Windows, macOS, iOS, and Android all use GUIs. At this stage, users mostly use their hands and eyes to give commands to the computer and to receive its output. The computer receives the user’s input in diverse forms and also transmits output to the user in different forms. For example, users can not only tap on the screen with a finger to provide input, they can also use audio to give instructions. Likewise, the output to users has changed from plain text to charts, menus, graphs, and other forms.
Reference:
Martin Irvine, From Symbol Processing & Cognitive Interfaces to Interaction Design: Displays to Touch Screens as Semiotic Interfaces.
Martin Irvine, Computing with Symbolic-Cognitive Interfaces for All Media Systems: The Design Concepts that Enabled Modern “Interactive” “Metamedia” Computers.
Qi Wang Week 9
HTML is the carrier of web content. Content is the information that webpage creators put on the page for users; it includes text, pictures, videos, etc. CSS is more like presentation and decoration, for example the title font, color changes, or adding background images or colors; these are used to change the appearance of the content. JavaScript is used to add interaction to web pages, such as a drop-down menu popping up when the mouse hovers over it, or something happening when a button is clicked.
Tips (for memorizing)
HTML tags are not case-sensitive: <h1> and <H1> are the same, but most tags are written in lowercase.
An HTML file has its own fixed structure.
<html>
<head>…</head>
<body>…</body>
</html>
The <head> tag is used to define the head of the document; its elements include tags such as <title>, <script>, <style>, <link>, <meta>, etc. The content between the <body> and </body> tags is the main content of the webpage, marked up with <h1>, <p>, <a>, <img>, and other content tags.
The five elements for creating a table:
table, tbody, tr, th, td
<table>…</table>: The entire table starts with a <table> tag and ends with a </table> tag.
The <img> tag inserts a picture into the web page:
<img src="">
alt: specifies descriptive text for the image. When the image cannot be displayed (for example, the download fails), the text specified by alt is shown instead.
Many semantic tags have been added to HTML, such as <header>, <footer>, <nav>, <article>, <address>, etc. When you see them, you can probably tell which part of the page they describe. A semantic tag in HTML (simply put) is a tag that carries meaning, as if it directly tells you which organ or part it is, instead of piecing the structure together from generic tags. It is like a human body with systematic layers.
Putting semantic tags on content gives the page a good structure, makes it easier to understand what the content is, and helps search engines find it. I think that in future program design there will be more and more semantic tags, and more and more people will be able to master programming through simple learning.
Qi Wang Week 8
The CPU is actually a collection of transistors, arranged and combined through different designs to carry out instructions. The computers we use today are based on the computer model proposed by von Neumann. The main feature of this structure is that programs and data are stored together. The information the computer system needs is stored ahead of time in RAM (I am not sure about this), and what is stored there is kept in the order of the instructions the CPU executes. Note that binary is not a number for the machine; it is a combination of signals that has meaning for humans (Mahoney, 2005). Overall, in a computer system, the act of computation is the processing of symbols (binary codes) (Dasgupta, 2014).
The CPU cannot directly recognize and run code written in a high-level language (such as C). “Directly” here means without relying on specialized software: any high-level language code must be processed by complex software (a compiler or an interpreter) before it can run.
The program that the CPU can directly recognize is composed of individual instructions, and each instruction corresponds to one basic operation of the CPU. These operations really are very basic, such as setting a value or adding numbers. The structure of instructions differs between CPUs, and sometimes different CPUs cannot recognize each other’s instructions. High-level languages do not have this problem, but high-level language code does not manipulate the CPU directly. Instructions, by contrast, can be recognized by pure circuitry, since they are relatively simple in structure and do not depend on context.
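Since the readings describe instructions only at a conceptual level, here is a toy Python sketch of my own that imitates the idea of fetching and executing instructions one by one: a tiny “program” is a list of very basic operations, and a loop carries them out on a single register.

# A toy model of a CPU running very basic instructions, one by one.
program = [
    ("SET", 5),   # put the value 5 in the register
    ("ADD", 3),   # add 3 to the register
    ("ADD", 2),   # add 2 to the register
]

register = 0
for operation, value in program:      # fetch the next instruction
    if operation == "SET":            # decode and execute it
        register = value
    elif operation == "ADD":
        register = register + value

print(register)   # 10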
Programming is actually writing logic; most of the time, the data is not written by humans but is provided by the natural world or by program rules (binary code is based on electricity). For example, audio recording collects sound data from the natural world, and video recording uses pixels to record data about light and shadow.
Question:
I am still confused about how the CPU and RAM work together. Could you please explain Figure 4.5 in Great Principles of Computing?
I also have a question about the clock cycle. In Great Principles of Computing, the authors mention that the length of a clock tick interval allows a complete instruction cycle. Does that mean the speed of a computer depends on its clock? The higher the clock frequency, the faster the processor, and the more instructions the CPU can execute per second. Does this mean that CPU time can be reduced by optimizing the instructions, thereby increasing the clock frequency or reducing the number of clock cycles needed?
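To make my question concrete, here is a rough back-of-the-envelope calculation in Python; the numbers are made up for illustration, not taken from the book. If a clock runs at 3 GHz and an average instruction needs 4 cycles, then fewer cycles per instruction means more instructions per second at the same frequency.

# Rough illustration with made-up numbers.
clock_frequency = 3_000_000_000   # 3 GHz = 3 billion cycles per second (assumed)
cycles_per_instruction = 4        # assumed average

instructions_per_second = clock_frequency / cycles_per_instruction
print(instructions_per_second)    # 750000000.0 instructions per second

# If the same work could be done in 2 cycles per instruction:
print(clock_frequency / 2)        # 1500000000.0 instructions per second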
Subrata Dasgupta, It Began with Babbage: The Genesis of Computer Science. Oxford, UK: Oxford University Press, 2014.
Michael S. Mahoney, “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–135.
Denning, P. J., & Martell, C. H. (2015). Great Principles of Computing. Cambridge, MA: MIT Press.
Qi Wang Week 7
A picture is a file, not a program. When we encode data for images or characters, we are actually making a data file. For example, if I type an emoji 🙃 into the system, the first step is that it is decoded by the Unicode software layers (Irvine): the system turns it into byte-encoded data. This type of data serves as a representation that matches the software layers in the representation process, and in that form it becomes computable and can be processed by the system. Next, this layer of code connects with a graphical, pixel-based representation, and the graphics system outputs that pixel representation on our screen.
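Here is a small Python sketch of my own that makes these layers visible for one character: the emoji has a Unicode code point, which UTF-8 turns into a sequence of bytes, and those bytes are what the lower layers of the system actually store and move around.

# One character, three layers of representation.
emoji = "🙃"
code_point = ord(emoji)              # the Unicode code point (an integer)
utf8_bytes = emoji.encode("utf-8")   # the byte-encoded data stored by the system

print(hex(code_point))               # 0x1f643
print(utf8_bytes)                    # b'\xf0\x9f\x99\x83'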
In the video from Code.org, I learned how the computer system displays images. The computer stores pictures as binary code; when the screen needs to display an image, the computer converts the binary information into colored pixels and then shows them.
The three primary colors in the computer system are red, green, and blue, referred to as RGB. In computers, a 24-bit binary number is usually used to represent one color unit: red (R) occupies 8 bits, green (G) occupies 8 bits, and blue (B) occupies 8 bits, and the value of each of the three colors ranges from 0 to 255. In this way, different combinations of red, green, and blue values form different colors, and millions of different colors can be combined into pictures.
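A small Python sketch of my own shows how one 24-bit color unit is built from the three 8-bit values; the particular red, green, and blue numbers here are just an example color I chose.

# One pixel: three 8-bit values packed into a single 24-bit color unit.
red, green, blue = 255, 165, 0      # an orange-ish example color

color = (red << 16) | (green << 8) | blue   # 8 bits each for R, G, B
print(format(color, "024b"))        # 111111111010010100000000
print(hex(color))                   # 0xffa500

# The total number of possible colors with 24 bits:
print(256 * 256 * 256)              # 16777216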
This explains why a picture usually occupies more storage space in the computer than text: a picture contains many color pixels, so it takes up more space. But I have a question here: if every image has a unique code, how do we find similar or relevant images in a search engine?
I feel that Unicode has now drifted from its original intention. The primary purpose behind Unicode’s birth was to let all the world’s languages share the same encoding and, at the same time, to allow many languages and characters that could not previously be entered into computers to be internationalized, thereby eliminating the problem of garbled characters. But today’s Unicode has become a hodgepodge of emoticons. Is it really indispensable for us to include so many emoticons? Should we care more about the things that are about to be lost, such as rare languages in remote villages or special characters in particular industries?
Martin Irvine, “Introduction to Data Concepts and Data Types.”
Qi Wang Week 6
We humans have the capability to correlate physical objects with symbolic patterns (Irvine). In the binary system, we use 0/1 (on/off) to represent symbolic ideas. Switches can do much more than control current through circuits; they can be used to evaluate any logical statement we can think of (Hillis). As Shannon said, a “message is one selected from a set of possible messages” (Irvine). Information theory focuses on the phase after the source sends a message and before the receiver interprets it. Within the system, what is transmitted is a code or signal, which has nothing to do with meaning or the real world. One thing worth noticing here is that this signal does not contain meaning or implication; meaning is a motivation that drives the signal transmission. That is interesting: from this perspective, information, no matter what form it takes, can be measured and translated into quantitative form. I have a question here: when we transform information into binary form, does it help to maintain the quality and reliability of a message? In reality, people do not want signals but meanings, yet once meaning is involved, complete correctness and accuracy become impossible. And does this meaning-free property enable further functions, such as encryption? According to the information transmission model, the linear signal path includes a noise source; if a sender adds some noise intentionally during the signal transmission and the receiver knows how to remove it, that would work like a kind of encryption.
Information existed before Shannon, just as mechanics existed before Newton. But before Shannon, almost no one thought of information as a measurable quantity or a subject that could be treated mathematically or scientifically. People thought of information as a telegram, a photo, a paragraph, or even a song. After Shannon, information was completely abstracted into bits: the sender no longer matters, the intention no longer matters, and even the meaning no longer matters; a phone call, a speech, and a page of a novel can all be represented by bits. In a sense, the signal is not the meaning itself, but it carries meaning and social use.
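A minimal Python sketch of my own illustrates what “measuring” information means in Shannon’s sense: if a message is one selection out of N equally likely possibilities, the amount of information is log2(N) bits, regardless of what the message means.

import math

# Information as selection: log2 of the number of equally likely possibilities.
print(math.log2(2))      # 1.0 bit (a yes/no answer)
print(math.log2(26))     # about 4.7 bits (one letter out of 26)
print(math.log2(65536))  # 16.0 bits (one out of 65,536 possible messages)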
Reference
Martin Irvine, Introducing Digital Electronic Information Theory: How We Design and Encode Electrical Signals as a Semiotic Subsystem.
Hillis, W. D. (2015). The Pattern on the Stone: The Simple Ideas That Make Computers Work.
Qi Wang Week 5
From the reading Linguistics, Language, and Symbolic Cognition (Irvine), “natural language” is generally defined as a human language acquired from the community where one was born or lives. Language is an inherent talent, a unique characteristic of humans that distinguishes us from other species. It is composed of different elements: words, grammatical relations, and meaning. These elements do not work as separate layers; rather, they work in coordination and reflexively, because words are not blank: they already come with syntactic features built in (Irvine).
Both the readings and the video also discussed “Universal Grammar.” Chomsky’s Universal Grammar theory holds that language ability is determined by human genes and is innate: no matter what language you speak or where on earth you are, as long as you are a human being, you have a unique and universally applicable grammar in your mind, and all human languages can be abstracted into such a grammar. It also explains why children can learn languages quickly from a very young age, even with poor input. However, I have a question here: if universal grammar is an inborn gift, why do people find it harder to learn a second language in adulthood, even though adults’ brains are more developed than children’s? Does this mean the gift disappears in adulthood?
Besides, based on Chomsky’s theory, through the recursive rules of the semiotic system, all human language combinations can be generated from basic words, including existing sentences and potentially infinite new sentences. I wonder whether contextual meaning also contributes to this infinite potential of language. There is an example at the end of the reading: “He is as cold as ice” (Irvine). Even though the words are the same, the sentence has different meanings under different conditions. For me, in different situations, even if two sentences have exactly the same words, I would not consider them the same sentence.
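To convince myself of this “infinite sentences from finite rules” idea, I tried a toy Python sketch (my own simplification, not a real grammar): a rule that lets a sentence contain another sentence can be applied again and again, so the set of possible sentences has no fixed limit.

import random

# A toy recursive rule: a sentence may embed another whole sentence after "that".
def make_sentence(depth):
    subject = random.choice(["the child", "the linguist", "she"])
    verb = random.choice(["thinks", "says", "knows"])
    if depth == 0:
        return subject + " " + verb + " something"
    # Recursion: the object of the verb is itself a whole sentence.
    return subject + " " + verb + " that " + make_sentence(depth - 1)

print(make_sentence(3))
# e.g. "she says that the child knows that the linguist thinks that she says something"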
In the video, the professor mentioned the Sapir-Whorf Hypothesis, which holds that language can influence people’s perception and understanding of the world. I think it points to the relationship between language and our way of thinking. The book Metaphors We Live By discusses a fascinating language phenomenon, metaphor (George Lakoff and Mark Johnson). For example, “time is like money” is a simile, while “time is money” is a metaphor. The authors argue that metaphor is not only rhetoric but also our way of thinking. When we say that time is money, it is not just a figure of speech; it reflects how we think. We say spend time, save time, and waste time, just as we say spend money, save money, and waste money: time and money sit in almost the same mode of thinking here. Therefore, language is a good way to understand the brain, just as a specific programming language should be understood when studying a system.
My sentence: They killed Kenny!
Reference:
Martin Irvine, Linguistics, Language, and Symbolic Cognition: Key Concepts.
Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
Qi Wang Week 4
The triadic theory introduced in the chapter Introducing C. S. Peirce’s Semeiotic (Irvine) is very fresh and exciting. In this theory, the symbol works as a communication medium or a bridge.
The receiver’s understanding of the symbol is called the interpretant. The interpreter then becomes a sender, and the interpretant, now itself functioning as a symbol, is passed to the next receiver, which produces a new interpretation, and this process can continue indefinitely.
Another exciting point I found is that “signs and symbols are interpretable only in the social context of use” (Irvine). As I understand it, this means the meaning/relation depends on the social context, which is flexible and dynamic. Without the codes and metalanguage provided by the socio-cultural context, we cannot intuitively infer the “meaning” (or the “thing”) from the “symbol.” For example, if I say “an apple,” people in an English-language context will know that I am talking about an apple, and if I say “苹果,” only Chinese speakers or people who have studied Chinese will understand that I am talking about an apple. Understanding a meaning requires being aligned with the cultural context.
English:One loves the sunset when one is so sad.
Chinese:当一个人如此悲伤时,它就爱日落。
Japanese:人は悲しみに沈む夕日を愛しています。
Even if I change the fonts and the colors, I still know the meaning of the sentence. Peirce’s triadic theory explains this point: humans have the capacity to pick out the invariant patterns and focus on function within a process (Irvine). (Does that mean we should focus on “grammar”?)
In the chapter Introducing C. S. Peirce’s Semeiotic (Irvine), I have a question about the reflexivity of symbols. What does it mean that a symbol can “reflect back on itself at different conceptual levels”? Could you please give me some examples?
Reference:
Martin Irvine, Introducing C. S. Peirce’s Semeiotic: Unifying Sign and Symbol Systems, Symbolic Cognition, and the Semiotic Foundations of Technology.