I think something just clicked! Slowly starting to make sense of binary and how it is converted into the symbols we see on our screens. First, I need to define data, which is always something with humanly imposed structure, that is, an interpretable unit of some kind understood as an instance of a general type. Data is inseparable from the concept of representation. This representation must be universal so that devices can communicate with other devices; enter Unicode. Unicode is literally just that, a universal code that assigns a number (and thus a string of binary digits) to each symbol, number, letter, etc., and its current standard has space for 1,114,112 code points. The first 128 symbols come from ASCII, which uses a 7-bit structure and could only represent 128 symbols (0 through 127). Extended ASCII uses an 8-bit structure for 256 symbols, but there are so many more symbols that need associated binary digits, so Unicode developed a world system that is most commonly encoded as UTF-8 or UTF-16 but can also use UTF-32. So, for me to understand this better, here is an example:
It is easy to use binary digits to represent numbers with the whole 64, 32, 16, 8, 4, 2, 1 (7-bit/ASCII) sequence, in which 82 would be represented as 1010010, but to represent letters or symbols there needs to be a universal agreement on what the binary digits represent. So, each letter or symbol is given a number such that “A” means 65, which in binary is 1000001.
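That mapping can be checked in a few lines of Python, just as a sketch using the built-in `format`, `ord`, and `chr` functions:

```python
# Numbers map straight to binary: 82 = 64 + 16 + 2
print(format(82, "07b"))        # -> 1010010 (7-bit binary for 82)

# Letters need an agreed-upon code: ASCII/Unicode says "A" is 65
print(ord("A"))                 # -> 65, the number assigned to "A"
print(format(ord("A"), "07b"))  # -> 1000001, its 7-bit binary form
print(chr(65))                  # -> A, and back again: 65 to "A"
```

The key point is that the number-to-binary step is pure arithmetic, while the letter-to-number step exists only because everyone agreed on the same table.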
Now if every computer uses this same method of symbology (i.e., the same character encoding) then they can communicate with each other, which is why a universal standard is so important. The next question is then how do you get a computer to create the symbol “A.” I understand that the binary digits 1000001 = “A,” but how does it pop up on my screen? How is it converted from binary digits into the letter, i.e., rendered? Professor Irvine mentioned it in his intro, in which it seems like software interprets it and then displays the text on a screen, so maybe it’s next week’s lesson?
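One way to see the shared standard at work is to look at the actual bytes different characters become under UTF-8. A quick Python sketch (using only the built-in `ord` and `str.encode`):

```python
# An ASCII character keeps its old code and fits in one UTF-8 byte
print(ord("A"))              # -> 65, the same code point as ASCII
print("A".encode("utf-8"))   # -> b'A', a single byte

# A non-ASCII symbol gets a higher code point and more UTF-8 bytes
print(ord("€"))              # -> 8364, the euro sign's code point
print("€".encode("utf-8"))   # -> b'\xe2\x82\xac', three bytes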
This is just for text though. To understand photos is not that different, which is wild! After reading How Digital Photography Works I no longer need to hire a professional photographer to take photos for me; I know how to alter pictures! Joking, but the basics are there and I’ve de-black-boxed it! In simplest terms, colors are composed of 256 possible values (0–255) for each shade of red, green, and blue. So, to alter a picture’s colors one just needs to change the numbers associated with those colors. Black is 0 for all three, which is the absence of color, and white is 255 for all three. Though to get from an image that I see into something digital, it goes through some cool science that, if I did not know better, would be a form of magic. The down and dirty: after light passes through a camera’s lens, diaphragm, and open shutter, it hits millions of tiny microlenses that capture the light to direct it properly. The light then goes through a hot mirror that lets visible light pass and reflects invisible infrared light that would just distort images. Then it goes through a layer that measures the colors captured; the usual design is the Bayer array, which arranges green, red, and blue filters so that no two filters of the same color touch, with twice as many greens as reds or blues. Finally, it strikes the photodiodes, which measure the intensity of the light. The light first hits the silicon at the “P-layer,” which transforms the light’s energy into electrons, creating a negative charge. This charge is drawn into the diode’s depletion area because of the electric field the negative charge creates with the “N-layer’s” positive charge. Each photodiode collects photons of light as long as the shutter is open; the brighter a part of the photo is, the more photons have hit that section. Once the shutter closes, the pixels have electrical charges that are proportional to the amount of light received. Then it can go through one of two different processes: either CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor).
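The 0–255 idea is easy to play with in code. Here is a small Python sketch (no real image file involved, just a made-up pixel) that treats a pixel as an (R, G, B) tuple and “alters the picture” by changing the numbers:

```python
# A pixel is just three numbers, each 0-255: (red, green, blue)
black = (0, 0, 0)        # absence of color
white = (255, 255, 255)  # full intensity of all three

def brighten(pixel, amount):
    """Raise each channel by `amount`, clamping at the 255 maximum."""
    return tuple(min(channel + amount, 255) for channel in pixel)

def remove_red(pixel):
    """Zero out the red channel, leaving green and blue alone."""
    r, g, b = pixel
    return (0, g, b)

mid_gray = (128, 128, 128)
print(brighten(mid_gray, 50))     # -> (178, 178, 178)
print(remove_red((200, 80, 40)))  # -> (0, 80, 40)
```

A real photo is just millions of these tuples in a grid, so photo editing really is “change the numbers,” repeated at scale.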
In either process, the pixels’ charges go through an amplifier that converts this faint static electricity into a voltage in proportion to the size of each charge. A digital camera literally converted light into electricity! MAGIC; joking, it’s science once you understand it! My question is then how does the computer recognize the binary digits associated with the electric current? More precisely, where in this process does the electric current become a recognizable number on the 0–255 red, green, blue scale?
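I believe the missing step is an analog-to-digital converter (ADC), which quantizes each voltage into a discrete number. A minimal Python sketch of that idea follows; the voltage range here is an assumption for illustration, not something from the reading:

```python
def adc_8bit(voltage, max_voltage=1.0):
    """Quantize an analog voltage into an 8-bit value (0-255).

    max_voltage is a hypothetical full-scale reading; a real
    sensor's range depends on the hardware.
    """
    voltage = max(0.0, min(voltage, max_voltage))  # clamp into range
    return round(voltage / max_voltage * 255)

print(adc_8bit(0.0))  # -> 0, no light
print(adc_8bit(0.5))  # -> 128, mid intensity
print(adc_8bit(1.0))  # -> 255, full intensity
```

The point of the sketch is only that a smooth analog quantity gets snapped onto the 256-step scale; after that, everything downstream is just binary numbers again.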
Now that I understand the different types of data, how do we access and store it? The Crash Course videos, after watching a couple of times, provided the answers! Data is structured to make it more accessible. It is stored on devices that are the evolution of years of different research on storing data, which originated with paper punch cards (wild). RAM, from my understanding, is what our computers use as working memory because it has the lowest “seek time” (the time it takes to find the data), and computers use a memory hierarchy to decide what data lives where. Hard disk drives store data on spinning magnetic disks, so they are nonvolatile but slow, while solid-state drives (SSDs) are nonvolatile integrated circuits that contain no moving parts and are much faster than hard disk drives, though still not as fast as RAM. I am not sure I have this hierarchy exactly right, because I am more familiar with hearing the term hard drive rather than hard disk drive, and I associate disk technology with old computers. So, my question is what type of memory storage do most modern computers use, or do they use both? Anyway, after understanding where data is located, the next step is understanding how it is organized in that storage system. Data is stored in file formats like JPEG, TXT, WAV, BMP, etc., which are stored back-to-back in a file system. A directory file, or root file, is kept at the front of storage (location 0) and lists the names and locations of all the other files to help identify the file types. Modern file systems store files in blocks with slack space so that a user can add more data to that file; if a file exceeds its slack space, it spills into another block, which may sit elsewhere on the drive. This fragmentation of data can be undone by a defragmentation process that reorders the blocks to facilitate ease of access and retrievability.
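A toy sketch of that directory idea in Python may help; all the file names, block sizes, and locations below are made up for illustration, and real file systems (FAT, NTFS, ext4, etc.) are far more elaborate:

```python
# A toy directory: maps each file name to (start_block, block_count),
# the way a directory file at location 0 records where files live.
BLOCK_SIZE = 512  # bytes per block; an assumed, common size

directory = {
    "vacation.jpeg": (0, 40),   # starts at block 0, spans 40 blocks
    "notes.txt":     (40, 1),   # small file: one block, with slack space
    "song.wav":      (41, 200),
}

def locate(name):
    """Look up a file and return the byte offset where it starts."""
    start_block, block_count = directory[name]
    return start_block * BLOCK_SIZE

print(locate("notes.txt"))  # block 40 -> byte offset 20480
```

Appending to notes.txt beyond its slack space would mean claiming another block, possibly far from block 40; defragmentation is just rewriting the data so each file’s blocks sit next to each other again.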
CrashCourse. 2017a. Data Structures: Crash Course Computer Science #14. https://www.youtube.com/watch?v=DuDz6B4cqVc&list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo&index=15.
———. 2017b. Memory & Storage: Crash Course Computer Science #19. https://www.youtube.com/watch?v=TQCr9RV7twk&list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo&index=20.
———. 2017c. Files & File Systems: Crash Course Computer Science #20. https://www.youtube.com/watch?v=KN8YgJnShPM&list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo&index=21.
“FAQ – UTF-8, UTF-16, UTF-32 & BOM.” n.d. Accessed February 21, 2021. https://unicode.org/faq/utf_bom.html.
Irvine, Martin. 2020. Irvine 505 Keywords Computation. https://www.youtube.com/watch?v=AAK0Bb13LdU&feature=youtu.be.
The Tech Train. 2017. Understanding ASCII and Unicode (GCSE). https://www.youtube.com/watch?v=5aJKKgSEUnY.
White, Ron, and Timothy Downs. 2007. How Digital Photography Works. 2nd ed. Excerpts. Accessed February 21, 2021. https://drive.google.com/file/d/1Bt5r1pILikG8eohwF1ZnQuv5eNL9j8Tv/view?usp=sharing&usp=embed_facebook.