Category Archives: Week 7

Photo code

I started off with the idea of hunting down the code for a photo on my own desktop. On a regular basis, no one really sits and thinks, “Hmm, I wonder what my computer thinks of this image…” or “I wonder what this image looks like to my computer…”. It is awe-striking to come to terms with the fact that your device views this data as something completely different than the human eye does. It translates a code and transforms it into the combination of pixels, colors, and formation that we see and recognize as something symbolic. It takes the E-information and decodes it into a data set, translated and transformed into a symbolic representation. I wanted to see how something so personal, a private photo that belongs to you on your computer and means something to you, is in reality a combination of encoded matter that, taken all together, allows the image to be rendered as I see it.

I chose this photo that was on my desktop (I had it as my background at some point), which I took in Yokohama, Japan two years ago. To me, this photo encapsulates far more than what it does for my computer.

We are viewing the same thing but actually seeing it differently. 

And to take it a step further, I would even say that the phone camera that captured this instance also saw it in a very different “light” – pun intended – when turning a real-time moment into the still representation of the amusement park. Even though you can distort the image, change it, photoshop it, etc., the specific captured moment is assigned, converted, and translated into its own unique code. In the same way that we saw how ASCII, Unicode, etc. operate, where each symbol – a letter or an emoji – stands for a specific assigned value, this image, this digital data, has its own data correlation (pre-editing, of course, since after editing it would have a new, different code ascribed to it with the changes).


Martin Irvine, “Introduction to Data Concepts and Data Types.”

Digital Character Sets: ASCII to Unicode

Ron White and Timothy Downs, How Digital Photography Works. 2nd ed. Indianapolis, IN: Que Publishing, 2007. Excerpt.

Emoji Charts



Chutong, Week 7

My example is an image on my website, created through Glitch (a web development platform):

The code for this image in my HTML doc:

<div align="center">
  <img src="…" alt="cute red panda">
</div>
What this image looks like in Glitch assets: 

But I am not sure whether “assets” equals the cloud or something similar. Not only in Glitch, but also in CodePen and many other web development platforms, we can see this term “assets.” So I guess it is a conventional name for media storage on these platforms?

And we can actually see the pixels when we zoom in on the website: the details of the image start to blur if you zoom in a lot, while the text of the website stays sharp (bitmap vs. vector).

All computer files (including this image) are stored in our computer as E-information (0s and 1s, machine language). The compiler translates them into digital code for storage, and a translator converts them back for display when we call this information through the operating system. And when I upload this image to WordPress, I am actually transmitting the digital information of this image from my computer to a larger system. So this chunk of data is now stored both in my computer's system memory and in the cloud of this WordPress website, waiting to be re-instanced. (Actually, I'm not very sure whether it is in the cloud or somewhere else; I'm still not very clear on the definition of the cloud.)


As we know how bitmaps work from this week's reading (pixels, RGB), I wonder how vector images work and how they are displayed on our screens.

If our screen consists of small dots, then how can a vector image be displayed? For example, in web development we use Adobe Illustrator to create SVG images, because an SVG image doesn't change with resolution or zooming.
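For what it's worth, a vector format like SVG answers this by storing geometry rather than pixels: shapes are described by coordinates and drawing instructions, and the renderer rasterizes them into the screen's dots fresh at every zoom level, which is why edges stay sharp. A minimal sketch (the shape here is just an illustrative example):

```html
<!-- The circle is stored as geometry (center point, radius),
     not as a grid of pixels, so it can be re-rasterized
     sharply at any resolution or zoom level. -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="crimson" />
</svg>
```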


Digital Character Sets: ASCII to Unicode (video lesson, Computer Science)
How do Smart Phone Cameras Work? (video lesson, Branch Education)
Images, Pixels and RGB (video lesson, by the co-founder of Instagram)

Qi Wang Week7

A picture is a file, not a program to be run. When we encode data for images or characters, we are actually making a data file. For example, if I type an emoji such as 🙃 into the system, the first step is that it is decoded by the Unicode software layers (Irvine). The system resolves it into byte-encoded data. This type of data has a representational function that lets it match the software layer of the representation process. This specific data is then computable and can be processed by the system. Next, this layer of code connects with the graphical, pixel-based representation, and the graphics system renders that pixel representation as the output that shows on our screen.
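The first part of this pipeline can be sketched in a few lines of Python: the emoji is a single Unicode code point, which is serialized as UTF-8 bytes before any pixels are drawn.

```python
# Sketch of the emoji example: one Unicode code point,
# serialized as byte-encoded data before any rendering.
emoji = "\U0001F643"                  # 🙃 UPSIDE-DOWN FACE

code_point = ord(emoji)               # the abstract Unicode number
utf8_bytes = emoji.encode("utf-8")    # the stored byte-encoded data

print(hex(code_point))   # 0x1f643
print(utf8_bytes)        # b'\xf0\x9f\x99\x83'
```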
In the video, I learned how the computer system displays images. The computer stores pictures as binary code. When the screen needs to display an image, the computer converts the binary information into colored pixels and then shows it.
The three primary colors in the computer system are red, green, and blue, referred to as RGB. In computers, a 24-bit binary value is usually used to represent one color unit: red (R) occupies 8 bits, green (G) occupies 8 bits, and blue (B) occupies 8 bits, so each of the red, green, and blue values ranges from 0 to 255. In this way, different colors can be formed from different combinations of red, green, and blue values, and thousands of different colors can be combined into pictures.
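As a small sketch of this 24-bit scheme, the three 8-bit channel values can be packed into a single number with bit shifts (the function name here is just illustrative):

```python
# Pack three 8-bit channels (0-255 each) into one 24-bit colour value.
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b   # R in the high byte, B in the low byte

print(hex(pack_rgb(255, 255, 255)))   # 0xffffff (white)
print(hex(pack_rgb(255, 0, 0)))       # 0xff0000 (pure red)
```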
This explains why a picture usually occupies more storage space in the computer than text: there are many color pixels in a picture, so it takes up more space. But I have a question here: if every image has a unique code, how do we find similar or relevant images in a search engine?
I feel that Unicode has now deviated from its original intention. Unicode's primary purpose at birth was to allow all languages in the world to use the same encoding and, at the same time, to allow many languages and characters that could not previously be entered into computers to be internationalized, thereby eliminating the problem of garbled characters. But today's Unicode has become a hodgepodge of emoji. Is it indispensable for us to include so many emoji? Should we care more about the things about to be lost, such as rare languages in remote villages or the special characters of a certain industry?


Martin Irvine, “Introduction to Data Concepts and Data Types.” 

Images, Pixels and RGB 

Week 7 – Yanjun Liu

Notes on some basic content that I took from the video introductions:

-ASCII is a 7 bit encoding system for a limited number of characters.

-Extended ASCII resulted in lots of incompatible code pages.

-Unicode allows every character in every written language to be encoded.

-Unicode is backwards compatible with ASCII.

-Unicode is space efficient.

-Unicode Transformation Format (UTF-8) uses 1, 2, 3 or 4 bytes. 

-Unicode is universally supported.

——Video: ASCII and Unicode Character Sets 
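The note above about UTF-8's variable length is easy to verify in Python: characters from different scripts take different numbers of bytes.

```python
# UTF-8 is variable-length: each character below takes a different
# number of bytes depending on its code point.
for ch in ["A", "é", "年", "🙃"]:
    print(ch, "→", len(ch.encode("utf-8")), "byte(s)")
# A → 1, é → 2, 年 → 3, 🙃 → 4
```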

-Pixel: tiny dots of light of different colors

-Resolution: the dimensions by which you can measure how many pixels are on a screen. (The more pixels a screen can display, the higher the resolution.)

-R: red; G: green; B: blue. (Values range from 0–255, dark to bright; triplets of these values together compose a single pixel.) → different intensity per color channel. One byte represents each of the numerical values of R, G, and B.

——Video: Images, Pixels and RGB

Text: 2020年的感恩节即将到来。(The Thanksgiving Day of 2020 is coming soon.)

When I am typing the above Chinese characters, the computer is generating a series of 1s and 0s encoded according to the Unicode standard, which has set a default binary code for every character of every language. Meanwhile, the computer processes my typing by translating the data I have entered into its own system and then creates output on the screen that I can read and understand according to my own language background. The sequence is: human language input → binary codes (computer language) → binary output → human language output.

However, sometimes my computer (which I bought in the United States and whose default language I set to simplified Chinese) cannot successfully recognize a Mandarin file or run a Chinese application (I open it and there are things like ???K口口??). I think this is something that was mentioned by Yajing Hu in her blog post: Chinese characters have no place in the ASCII system, and when they are handled by software that assumes ASCII, the system gets confused and cannot fully display the output.
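A minimal Python sketch of this kind of garbled output: text saved as UTF-8 bytes but read back with the wrong decoder (Latin-1 here, chosen just for illustration) turns into mojibake, while the correct decoder recovers it.

```python
# Hypothetical example: UTF-8 bytes misread with a single-byte
# decoder come out as gibberish; the right decoder recovers them.
original = "感恩节"                        # "Thanksgiving"
stored_bytes = original.encode("utf-8")    # how the text is actually saved
misread = stored_bytes.decode("latin-1")   # wrong decoder → mojibake

print(misread)                             # garbled characters
print(stored_bytes.decode("utf-8"))        # 感恩节 (correct decoder)
```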


How do web apps like WordPress or Google Docs successfully save the data that I have typed in even after I shut down my computer? (Do they save my data on their own servers or in cloud storage? I am still confused about the technical procedure.)



“Han Ideographs in the Unicode Standard,” Yajing Hu (CCT student, final project essay)

Digital Character Sets: ASCII to Unicode (video lesson, Computer Science)

Images, Pixels and RGB (video lesson, by the co-founder of Instagram)

Fordyce, Week 7

For this week, after reading Irvine's introduction, I watched the Instagram video. The basis of Instagram lies in the concept that visual communication is effective. But how do we display images? Through pixels. Individual pixels aren't actually easily visible, but the whole sum of pixels grouped together creates a comprehensible image. Image quality keeps getting better because, as we innovate, pixels can be grouped closer and closer together. An image file at its core is actually made of bits – in other words, just 1s and 0s. Irvine helps us distinguish between data and E-information: data being chunks of bytes (made up of grouped bits) assigned to a specific computable type, and E-information being binary units (bits and bytes) assigned to computer system memory (Irvine).
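As a tiny illustration of the "just 1s and 0s" point: opened in binary mode, any file is a plain sequence of bytes. A PNG image, for example, always begins with the same fixed eight-byte signature, and each byte is just a number from 0 to 255.

```python
# Every valid PNG file starts with this fixed eight-byte signature;
# each byte is simply a number 0-255 (eight bits).
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

print(list(PNG_SIGNATURE))   # [137, 80, 78, 71, 13, 10, 26, 10]

# Reading a real image would look like this (filename is a placeholder):
# with open("basel.png", "rb") as f:
#     assert f.read(8) == PNG_SIGNATURE
```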

Below is an image of my hometown, Basel, Switzerland:

The following is its “inspected” code:

It’s interesting to look at these different representations of the same data. In one instance we see a pixelated, visually appealing representation of a place. In the other, we see the coded format of what makes up the visual data.

The video on how smartphone cameras work was also very interesting. Because of smartphones and increased accessibility to camera technology, the number of photographs humans take is unfathomably large. The CPU plays a central role in the process of capturing an image on a smartphone (Branch Education). The process of taking and saving a photograph on your phone is actually quite complicated (it has many steps), but we are able to snap moments of our everyday lives so easily, without thinking about it. Much like how our brains function in ways we aren't aware of, the CPU does that for the smartphone's functionality, emulating a human-like central control unit for procedures. And much like how humans process data through representation, a camera must do so with light sensors to save an image. It has been most useful to think about all data as representation and to use that to understand how different kinds of data can be shared in different formats.


Martin Irvine, “Introduction to Data Concepts and Data Types.”

Images, Pixels and RGB (video lesson, by the co-founder of Instagram)

How do Smart Phone Cameras Work? (video lesson, Branch Education)

Week 7: Qasim

Through this week's series of articles and videos, I have come to define these subject areas as follows:

Source: Al Jazeera

To encode and decode a data type instance as data, the first step is to acknowledge that everything is a data type. Whether they are symbols, emoji, or pictures, they all represent information and are channeled into computing through binary code, which is rendered as text characters. First, bytecode is used as the medium for creating a character. The software then works through the “stack design (the levels for rendering text through graphics controls and applications)”[1]. Finally, the output consists of character shapes being transformed into pixel patterns on the screen display. We are also able to conceptualize this through the Digital Images – Computerphile video, which delves into how pixels work through the use of RGB, and how the three channels of eight bits each are combined within every pixel.

E-information is a series of black boxes deeply integrated within computing systems, formatted into bits and bytes to be processed through the computer system's memory.[2]

[1] Dr. Martin Irvine, “Introduction to Data Concepts and Data Types”

[2] Dr. Martin Irvine, “Introduction to Data Concepts and Data Types”