Around the Code

I had heard the term “source code” numerous times, but to really understand it I had to interact with it. Source code is the set of human-readable instructions that makes up a program. In the practical world, IDEs (Integrated Development Environments) are typically used to write source code, as they have more functions and a better UI than simple text editors. IDEs like Xcode even use colour conventions and indexing to make the coding process smoother.

Having an understanding of how coding works can help us de-blackbox many cognitive symbolic technologies, like the applications that run on iOS. By understanding the features and limitations, the affordances and constraints, of Xcode, we can see why certain apps in the iOS ecosystem tend to be designed the way they are.

Like natural languages, programming languages have a grammar too, i.e. syntax rules. Functions are their equivalent of phrases: the most commonly used lines of code boxed into a single block. They have statements, variables, whitespace, and strings, all of which perform some role in translating actions from human language to machine language.
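To make these terms concrete, here is a minimal Python sketch of my own (not taken from the readings) that uses each of the building blocks named above:

```python
# A toy example showing the "grammar" of a programming language:
# a function, statements, variables, strings, and meaningful whitespace.

def greet(name):                       # a function: reusable lines boxed into one block
    greeting = "Hello, " + name + "!"  # a variable holding a string
    return greeting                    # a statement; the indentation (whitespace) marks the block

print(greet("world"))                  # running the function prints: Hello, world!
```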

When we go deeper, we see that any code is, at the basic level, translated into machine language, which is expressed in bits, the basic units of information: strings of 1s and 0s. Numbers still lie at the center of programming languages, and working with numbers is a basic skill one must learn to be proficient in Python. We can then see Python as part of a cognitive symbolic continuum (Irvine, 2), with the ratiocinators and analytical engines that once crunched numbers lying underneath all of today’s programming devices with their advanced user interfaces.
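Python itself makes this binary layer easy to peek at; the following lines are my own small illustration, not something from the readings:

```python
# Peeking at the bits beneath an ordinary number.

n = 42
print(bin(n))             # 0b101010 -- the same value written as 1s and 0s
print(int("101010", 2))   # 42 -- turning the bit string back into a number
print(n + 1)              # 43 -- plain arithmetic, still at the core of it all
```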

I tried Python in Visual Studio Code, and it was interesting how even single lines could be run to see what results they give. It reminded me of the feedback expected of a good GUI, which lets us go about coding much more easily without focusing on the syntax all the time.
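The interactive Python prompt (the REPL, which editors like Visual Studio Code can open in a terminal) is the clearest case of this feedback loop; the short session below is a hypothetical example:

```python
>>> 2 + 3                 # typed one line at a time; the result comes back at once
5
>>> "echo " * 3           # immediate feedback, even for a single expression
'echo echo echo '
>>> len("feedback")
8
```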

While reading Evans (2011), I was particularly fascinated by the example he gives for colour. Since computers essentially run on bits, each sequence of 1s and 0s can stand for a different colour input. This means the computer has a finite range of colour options it can use on a single pixel, and together these variously coloured pixels come together to deliver an image. But what about a painting? In physical form, colours are mixed to form interesting hues, and if we non-experts stand in front of a Van Gogh, we cannot really tell the difference between a fake and an original. Yet an original “Starry Night” impacts us profoundly. Evans says that “The set of colors that can be distinguished by a typical human is finite; any finite set is countable, so we can map each distinguishable color to a unique bit sequence.” (Evans, 2011, 12) But does that mean a computer image of a painting that is indistinguishable from the painting has the same richness in colour? Does it have the same impact? I would like to leave you with this question.
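As a concrete sketch of Evans’s mapping (my own illustration; this code is not in the readings), the common 24-bit RGB convention stores each pixel’s colour as three 8-bit numbers, so every colour a screen can show really is a unique bit sequence:

```python
# Mapping one colour to a bit sequence under the common 24-bit RGB convention.

r, g, b = 65, 105, 225                 # "royal blue" as red/green/blue intensities (0-255)
colour = (r << 16) | (g << 8) | b      # pack the three channels into a single number
print(format(colour, "024b"))          # 010000010110100111100001 -- the colour as bits
print(2 ** 24)                         # 16777216 -- the finite palette available per pixel
```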


References

Irvine, Martin. “Introduction to Cognitive Artefacts for Design Thinking” (seminar unit intro).

Evans, David. Introduction to Computing: Explorations in Language, Logic, and Machines. 2011 edition.