
Chutong, Week 8, CPU

Background: how binary works
All electronic devices have circuits and switches, and the flow of electrons in and out of a circuit is completely controlled by its switches. Set a switch to OFF and the electrons stop flowing; set it to ON and they flow again. The switching between ON and OFF is controlled only by electronic signals. In this way, a transistor’s ON state is represented by “1” and its OFF state by “0”, forming the simplest binary numbers. Many transistors together produce particular orders and patterns of ones and zeros that represent different situations and define them as letters, numbers, colors, etc.
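The mapping from switch states to symbols can be sketched in a few lines of Python; the particular pattern below is just an example chosen to spell a letter:

```python
# Eight switch states read as bits: ON -> 1, OFF -> 0.
states = ["OFF", "ON", "OFF", "OFF", "OFF", "OFF", "OFF", "ON"]
bits = "".join("1" if s == "ON" else "0" for s in states)

value = int(bits, 2)   # interpret the pattern as a binary number
print(bits)            # 01000001
print(value)           # 65
print(chr(value))      # the letter this pattern defines in ASCII: 'A'
```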
Three Main Components of CPU
  1. Arithmetic Logic Unit
  2. Registers
  3. Control Unit
[Image from Google Images]

The Instruction Cycle
  1. Extract (fetch): Retrieve an instruction (a value or a series of values) from memory or cache memory. The memory location is specified by the Program Counter, which holds the value identifying the current program location. After the instruction is extracted, the program counter is incremented by the instruction length so that it points to the next instruction.
  2. Decode: The CPU determines its execution behavior based on the instruction extracted from memory. Instructions consist of codes that indicate which operation is to be performed, along with codes that give the instruction the necessary information, such as the target of the operation.
  3. Execute: After the extract and decode phases comes the execution phase. This phase connects the various CPU parts capable of performing the required computation. For example, if an addition is required, the Arithmetic Logic Unit (ALU) is connected to a set of inputs and a set of outputs. The inputs provide the values to add, and the output will contain the sum.
  4. Write back: The results of the execution phase are written back, often into the CPU’s internal registers, for quick access by subsequent instructions.
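The four phases above can be sketched as a toy simulator. This is a hypothetical machine invented purely for illustration; the opcodes LOAD, ADD, and HALT are not any real CPU’s instruction set:

```python
# Hypothetical 3-instruction machine illustrating the four phases.
# Instruction format: (opcode, operand); all names here are invented.
memory = [("LOAD", 7), ("ADD", 5), ("HALT", None)]
pc = 0          # Program Counter: memory location of the next instruction
acc = 0         # a register holding intermediate results

while True:
    instr = memory[pc]          # 1. Extract the instruction at the PC
    pc += 1                     #    PC advances by the instruction length (1 here)
    opcode, operand = instr     # 2. Decode: split into operation + operand
    if opcode == "LOAD":        # 3. Execute the decoded operation
        result = operand
    elif opcode == "ADD":
        result = acc + operand
    elif opcode == "HALT":
        break
    acc = result                # 4. Write back into the register
print(acc)  # 12
```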



Crash Course Computer Science: Video Lessons

Khan Academy: Introducing Computers

Week 8 – Yanjun Liu

This week, I learned a lot about the computer’s construction and history. Below are two components of the computer that I find very interesting.

CPU (central processing unit): an essential component of the computer that can be regarded as its “brain”. The CPU is basically a chip with integrated electronic circuits that enables a computer to execute any order it receives. For example, if I want to open a Word document, I click on the Word icon on my desktop, and this order is immediately processed by the CPU, which then runs the software and reads the binary code in the document. It is called “the brain of the computer” because it works just like our brain does in daily life, “ordering” us to eat when we are hungry and to drink when we are thirsty. Applying Peter Denning’s concept of information, the CPU can also be seen as the “observer” of information, which learns and actively interprets the relationship between user and software.

Compiler: a compiler is “a computer program that translates computer code written in one programming language (the source language) into another language (the target language).” In short, the compiler serves the same function as the language center in the human brain, which automatically translates languages we are learning into our mother tongue and then interprets the meaning of the signs we see. For example, I am learning Korean right now; when I see Korean words, I automatically translate them into Chinese in my brain, which helps me know what they mean.
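As a sketch of what “translating from a source language to a target language” means, here is a toy compiler in Python. The stack-machine “target language” and the function names are invented for illustration; real compilers parse, optimize, and emit machine code:

```python
# A toy "compiler": translates infix "a + b" source text into stack-machine code.
def compile_add(source):
    left, op, right = source.split()
    assert op == "+"
    return [("PUSH", int(left)), ("PUSH", int(right)), ("ADD", None)]

# A toy "machine" that executes the target language.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

code = compile_add("2 + 3")
print(code)       # [('PUSH', 2), ('PUSH', 3), ('ADD', None)]
print(run(code))  # 5
```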



Denning and Martell, Great Principles of Computing (selections) 

Crash Course Computer Science: Video Lessons



Week 8 Qi Wang

The CPU is actually a collection of transistors, which are arranged and combined into instructions through different designs. The computer we use today is based on the model proposed by von Neumann. The main feature of this structure is that programs and information are stored together. The information needed by the computer system is pre-stored in RAM (I am not sure about this), and the contents of RAM are held in the order of the instructions executed by the CPU. Note that the binary system is not a number for the machine; it is a signal combination that has meaning to humans (Mahoney, 2005). Overall, in the computer system, the act of computation is the processing of symbols, i.e., binary codes (Dasgupta, 2014).

The CPU cannot directly recognize and run code written in a high-level language (such as C). “Directly” here means without relying on specialized software: any high-level language code must be processed by complex software (a compiler or interpreter) before running.

The programs that the CPU can directly recognize are composed of individual instructions. Each instruction corresponds to a basic operation of the CPU. These operations are very simple, such as moving a value or adding two numbers. The instruction formats of different CPUs differ, and CPUs often cannot recognize each other’s instructions. Although this problem does not exist in high-level languages, high-level language code does not directly manipulate the CPU. Instructions, by contrast, can be recognized by pure circuitry; after all, they are relatively simple in structure and need no context.

Programming is actually writing logic; most of the time, data is not written by humans but is provided by the natural world or by program rules (binary code is ultimately based on electricity). For example, audio recording collects sound data from the natural world, and video recording uses pixels to record light and shadow data.

I am still confused about how the CPU and RAM work together. Could you please explain Figure 4.5 (Great Principles of Computing)?
I also have a question about the clock cycle. In Great Principles of Computing, the author mentions that the length of a clock-tick interval allows a complete instruction cycle. Does that mean the speed of a computer depends on its clock cycle? The higher the clock frequency, the faster the processor, and the more instructions the CPU can execute per second. Does this mean that CPU time can be reduced by optimizing the CPU instructions, thereby increasing the clock frequency or reducing the number of clock cycles?
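These questions connect through the standard CPU performance equation found in architecture textbooks: CPU time = instruction count × cycles per instruction ÷ clock rate. So yes, speed can improve by raising the clock frequency, lowering the instruction count, or lowering cycles per instruction. The numbers below are made up for illustration:

```python
# Textbook performance equation:
#   cpu_time = instruction_count * cycles_per_instruction / clock_rate
instruction_count = 2_000_000      # instructions executed by the program
cpi = 4                            # average clock cycles per instruction
clock_rate = 2_000_000_000         # 2 GHz = 2e9 cycles per second

cpu_time = instruction_count * cpi / clock_rate
print(cpu_time)  # 0.004 (seconds)

# Halving the cycles per instruction halves the CPU time:
print(instruction_count * (cpi / 2) / clock_rate)  # 0.002
```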

Subrata Dasgupta, It Began with Babbage: The Genesis of Computer Science. Oxford, UK: Oxford University Press, 2014.

Michael S. Mahoney, “The Histories of Computing(s).” Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–135.

Denning and Martell, Great Principles of Computing

GPU – Graphics Processing Unit

These sources have highlighted the importance of breaking down a computer or system. The input, the memory, the CPU, and the output all combine to form a function that seems so simple, i.e. a click of a key to display a letter, yet has to go through a plethora of pathways and steps to reach a certain type of output, display, or functionality (Khan Academy). Nowadays, we are so consumed by and focused on our actual computers, or shall I say, on the output displayed on our computer screens, that we completely ignore the actual processing that goes on behind the physical and visible computer(s) and makes it truly possible for us to consume whatever it is we are staring at on our screens.

Over the past few weeks we have highlighted the importance of data translating and transforming into various, more “human” forms. For example, we see a photo of a swing and it reminds us of our childhood, whereas the computer is just translating bits into pixels and outputting them as small cubes of colors and patterns that form yet another image to be displayed on the screen. As Campbell-Kelly and Russ put it: a “familiar example of a formal system in which we can apply the rules, or procedures, for transforming strings of symbols without regard to their interpretation” (Irvine).

A GPU is a Graphics Processing Unit, one of the major components of a computer: a processor, or better yet a visual processor, that communicates back and forth with the monitor through a series of transactions and instructions to “determine” what color each individual pixel should be and therefore what the overall displayed image should be. In the early 90s, GPUs started becoming a hit with the appearance of 3D graphics and computer gaming (Luebke & Humphreys). A GPU is basically what every computer (i.e. your laptop, phone, etc.) uses to create an image and display it on the output, i.e. your laptop’s or phone’s screen. The GPU receives input information that has been converted into binary form; it then receives instructions (strings of data) from the CPU on what to do with that information, and works with the stored memory to alter and execute them to produce a command/result/display (Khan Academy).

Graphics systems translate everything (images, graphics, etc.), aka the sign(s) (Denning), into shapes made of points and vectors, or what we’d consider triangles. The relationship (Denning): the GPU uses its memory, a computer graphics library, which takes the vertex points and connects them into edges and finally triangles. From memory, the GPU can assign a placement to each pixel in order to synthesize an image. The input to the GPU is the “description of a scene”. In order to process that scene, it has to break it down into those vertices that have been defined from the saved memory in the graphics system and expressed in a “common 3D coordinate system” that is input from and output for the user (i.e. the gamer) (Luebke & Humphreys). The observer (Denning): the GPU has the ability to compute each triangle’s color, texture, placement, depth, etc. based on the memory in the global coordinate system and the lights found in the scene, in order to properly display it on the screen (output).
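The per-pixel “does this pixel belong to this triangle?” decision described above can be sketched with the standard edge-function test used in rasterization. The coordinates and the simplified setup are invented for illustration; a real GPU runs this massively in parallel and also computes depth, texture, and lighting:

```python
# Edge function: > 0 when point p lies to the left of the directed edge a->b.
def edge(a, b, p):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

# A pixel is inside a counter-clockwise triangle if it is on the
# inner side of all three edges.
def inside(tri, p):
    a, b, c = tri
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0

triangle = [(0, 0), (4, 0), (0, 4)]   # three vertex points, counter-clockwise

print(inside(triangle, (1, 1)))  # True  -> this pixel gets the triangle's color
print(inside(triangle, (5, 5)))  # False -> this pixel is left untouched
```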


One version of a GPU  


Irvine, Martin. “A First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing”. (pg. 1 – 6)

Khan Academy: Introducing Computers

Luebke, David, and Greg Humphreys. “How GPUs Work.” Computer, vol. 40, no. 2, 2007, pp. 96–100., doi:10.1109/mc.2007.59.

“What Is a GPU and How Does It Work? – Gary Explains.” Android Authority, 24 May 2016.


Sacha Qasim: Week 8

Peter Denning delves into the deeply intricate frameworks of information, symbols, and computing and conceptualizes these processes by breaking them down. Denning hones in on information processing and data in his book Great Principles of Computing and asserts that all computer science is confined to “the study of information processes”[1]. Across his chapters, Denning states the simplicity of computing: “information consists of (1) a sign, which is a physical manifestation or inscription, (2) a relationship, which is the association between the sign and what it stands for, and (3) an observer” (Denning). Computers today are more intuitive than ever before and can improve our lives in a plethora of ways beyond text information and interacting with information displays.

Computing at its most basic consists of variables that are true or false, positive or negative, on or off, 1 or 0. This is how information travels through the computing system as electricity. It is also the basis of binary code, which grows progressively more complicated the further we delve into information and symbols. The smallest unit of information transmitted to the computer is called a “bit”[2]. One bit can represent exactly two distinct states, 0 or 1. Bits are grouped in eights, and 8 bits are equivalent to a byte. Context is imperative here, as bits and bytes do not have any meaning by themselves; they need to be taken in the context of a symbolic system. The same byte 01010111, for instance, can stand for the number 87, a shade of gray, or a capital letter, depending on that context.
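The point about context can be shown directly: one and the same bit pattern read under different conventions. The interpretations chosen below are just examples:

```python
byte = 0b01010111            # the bit pattern 01010111

print(byte)                  # read as an unsigned integer: 87
print(chr(byte))             # read as an ASCII character: 'W'
print((byte, byte, byte))    # read as one gray RGB pixel: (87, 87, 87)
```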

Binary is necessary for any aspect of computing. Something as pleasantly aesthetic as a web image uses binary code to display artistic creations and photographs. Pixels are a key element in allowing an image to be rendered on the computer screen. Specifically, a pixel is “a mathematical abstraction for color values mapped to a Cartesian space designed to be implemented in physical locations (= memory and display device addresses). Mathematical values are represented as an ordered set (a 3-tuple) for Red, Green, and Blue (each position from 0-255, yielding 256 values)”[3]. With the precise computational transfer, an image can be generated by the use of “bitmaps” and displayed in your window.
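The quoted pixel model can be sketched as data; the tiny 2x2 bitmap below is invented for illustration:

```python
# Each pixel is a 3-tuple (red, green, blue), each channel an integer 0-255.
gray = (128, 128, 128)   # equal channels make a shade of gray
red  = (255, 0, 0)

# A 2x2 "bitmap": rows of pixels addressed by (row, column),
# standing in for memory and display-device addresses.
bitmap = [
    [gray, red],
    [red,  gray],
]

pixel = bitmap[0][1]                      # the pixel at row 0, column 1
print(pixel)                              # (255, 0, 0)
print(all(0 <= c <= 255 for c in pixel))  # True: every channel is in range
```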

The CPU is what hosts the binary code and processes it. The CPU has three components: an arithmetic logic unit, registers, and a control unit. It converts data through information processes. To be able to do so, it follows a specific sequence of instructions, a program held in its RAM. In carrying out these processes, it uses logic gates to process and move data within the CPU.

[1] Denning, Peter. “Great Principles of Computing”

[2] Khan Academy. Unit: Computers

[3] Dr. Martin Irvine. Computational Thinking: Implementing Symbolic Processes

Fordyce, Week 8

This week we are reminded of the various definitions of computer system design principles. Put simply (almost too simply), “the modern computer… [is rooted] in the themes of representation and of automatic methods of symbolic transformations” (Irvine, 1). Denning narrows in on the study of information processes in uncovering the relationship between information, symbols, and computing. Denning points out the problem with leaving ‘processing’ out of the equation of ‘data’ (Irvine): ‘data’ and ‘data processing’ are not the same. He clarifies, “information consists of (1) a sign, which is a physical manifestation or inscription, (2) a relationship, which is the association between the sign and what it stands for, and (3) an observer, who learns the relationship from a community and holds on to it” (Irvine, 4).

Random access memory (RAM) is one of the basic subsystems of a computer. There are three general categories of computer subsystems: (1) the central processing unit, (2) the main memory, and (3) the input/output subsystem. RAM is a computer’s short-term memory: it is fast but temporary, and it is part of the computer’s memory subsystem (Villinger). It is useful for applications that are running while the computer is on, but the memory is lost once the computer is shut off (e.g. it is useful for a web browser). The SSD is the computer’s long-term memory, for when things are saved. RAM can be described using a physical desktop analogy: “your working space – where you scribble on something immediately – is the top of the desk, where you want everything within arm’s reach, and you want no delay in finding anything. That’s RAM. In contrast, if you want to keep anything to work on later, you put it into a desk drawer – or store it on a hard disk, either locally or in the cloud” (Villinger).

How have RAM and SSDs changed our notion of real-life memory? Have we become more reliant on computer memory in place of our own?



Irvine, Martin. “A First Look at Recent Definitions of Computing: Important Steps in the History of Ideas about Symbols and Computing”. (pg. 1 – 6)

Villinger, Sandro. “What is RAM and Why is it Important?” (November, 2019).