Category Archives: Week 9

Bringing it Back Home with Recursion

This is my third attempt to get myself to learn a programming language. The first time I tried was through Harvard's edX computer science course, but since I was already enrolled in five other classes I couldn't really get into it. The next time was an introductory programming course taught by the head of Geneseo's Computing and Information Technology department, but I had senioritis and the course was geared toward physics majors, so again it didn't take. I hope the third time's the charm: I really like Codecademy, and it's helpful that I can structure the lessons myself. As I worked my way through the lessons, I kept thinking about one feature of linguistics and how it relates to coding: recursion.

The term has popped up before as the key feature of language that enables the production of a discrete infinity of possible sentences. We cycle through our lexicon and combine words and syntactic structures we already know, but the recombinations are always unique. In programming, it lets us call back functions we have already written and apply them to new inputs. On the micro level, this allows us to create things like the PygLatin generator in Codecademy's Python tutorial: we create the function that can translate a word into Pig Latin, and then we can call it back on any word we want. I can't really comprehend the kind of code big data companies must be writing, but I can imagine how instrumental recursion must be to them. A possible (and simplistic) example: every time you publish a Yelp review of a cafe, their code takes all of the other reviews of that cafe and folds yours into the average, and the same process happens for every review you post at any other cafe, making it easy to assign aggregate scores to businesses. Ultimately, if we did not have recursion we would not be able to automate code, or at least our pattern matching would be far less efficient. However, despite all the praise I've heaped on recursion, I do have an issue with the term itself. We don't only use it to bring back old code; we always add something new to it. Otherwise, there would be no reason to go back in the first place.
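
To make this concrete, here is a rough sketch of the kind of reusable function the Codecademy exercise builds, plus a function that literally calls itself, which is the narrower sense programmers usually give the word. The names and details are my own guesses, not the tutorial's exact code.

    # A PygLatin-style translator: define it once, then call it back on any word.
    def pyg_latin(word):
        word = word.lower()
        return word[1:] + word[0] + "ay"

    print(pyg_latin("python"))      # ythonpay
    print(pyg_latin("recursion"))   # ecursionray

    # Recursion in the narrower programming sense: a function that calls itself,
    # re-applying the same procedure to a smaller piece of the problem.
    def count_down(n):
        if n == 0:
            return
        print(n)
        count_down(n - 1)

    count_down(3)   # prints 3, 2, 1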

Unrelated points:

  • Model Checking: Wing's explanation of model checking as a method we can use across disciplines and professions was very interesting. When she explained it with the ATM metaphor I felt like I really understood it. The model checker takes two inputs: a finite-state model of the system (the physical ATM) and a temporal logic property it should satisfy (being able to get my money). Bugs in the system itself (money doesn't come out) show up as counterexamples. However, I feel like her use of the 'counterexample' doesn't cover a wide enough range of externalities related to the system. Model checking, as I understand it, wouldn't account for how the ATM has changed the way we carry (less) paper money, or for a robber stealing your money at an ATM. Those aren't flaws in the model, but they wouldn't exist without it.
  • This is more of a public statement: If anyone has made it to the end of this post, I'm going to try to code for an hour every other day so I can finally learn a programming language. If you see me around, ask me if I've been keeping up with Python, and if I haven't, feel free to hit me.

Computation: More Than Just Programming (Katie Oberkircher)

While stumbling my way through Python, I found myself asking the question: How can we express human interests through computers?

Python uses a type of language grounded in computation. It's founded on the idea of a meta-function: some symbols can perform actions on other symbols. As Dr. Irvine explains in his video, symbols are a logical bridge – they provide a connection from human interests (also represented in symbols) to electronics and digital processing. We can then interpret and act on software-encoded symbolic representations.
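
A tiny illustration of symbols acting on other symbols (the example is mine, not from the readings):

    # The name "replace" stands for an action; here one symbolic pattern
    # operates on another and produces a new one.
    phrase = "human interests"
    print(phrase.replace("human", "shared"))   # shared interests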

Since the symbols we use in software spaces don’t usually represent meaning, is it fair to say that in some ways, they are a means to an end?

We can better understand how creativity and efficiency relate to computation by developing procedural literacy. As Evans explains, "designed languages" are "created by humans for a specific purpose such as for expressing procedures to be executed by computers" (Evans, 19). This type of programming language is precise and unambiguous, with a specific syntax.

I wasn't aware of Python's level of precision until I entered an invalid input and received an error. I added an extra space to the method call lion.upper(), and as a result Python produced a message alerting me that I had not typed a valid string of symbols. One space prevented the input from being interpreted correctly. (This is much different from natural language, which almost hinges on our ability to evolve and accept slight changes and adaptations of certain words and phrases.)
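
A rough reconstruction of that kind of slip (the variable name and the exact typo are my guesses, not a record of my actual session):

    lion = "lion"
    print(lion.upper())     # LION (the exact method-call syntax Python expects)

    # One misplaced character is enough to stop interpretation entirely:
    # print(lion.(upper))   # SyntaxError: invalid syntax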

This level of detail seems simultaneously simple and complex. It's simple in how details are hidden, so we can focus on higher-level operations, and it's complicated in how much control we have, as programmers, over certain mechanical resources (Evans, 38). In this way, we can think about abstraction as a means to differentiate between levels of programming language. In Great Principles of Computing, Denning and Martell indicate that there are over 500 programming languages (Denning and Martell, 84). These languages are built on abstractions, yet they are all precise and free of ambiguity. They also differ in scale, scope, and complexity.
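
A toy example of what hiding details buys us (my own sketch, not one from Evans):

    # The caller works at a higher level; the splitting and counting details
    # stay hidden inside the function.
    def word_count(text):
        return len(text.split())

    print(word_count("computation is more than just programming"))   # 6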

As I worked within Python, Jeannette Wing’s thoughts on humans as computers brought up a few other questions. If, based on speed and economics, humans can still solve some tasks better than machines, how can we reconcile this? Or, should we reconcile this? If, as Wing explains, we can do things like process and interpret natural language better than a machine, should we keep these ideas in mind when we measure an efficient, correct, usable abstraction?

References

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access: http://computingbook.org/.

Martin Irvine, Introductory Video Lecture on Computational Thinking (from "Key Concepts").

Peter J. Denning and Craig H. Martell. 2015. Great Principles of Computing. The MIT Press.

Jeannette Wing, Computational Thinking. YouTube video, 1:04:58. Posted by TheIHMC, October 30, 2009.

Coding is Cool (and challenging, too) – Amanda

This week’s “Codecademy” assignment, along with the readings, served as yet another helpful stop along the way to better understanding semiotics and cognitive technologies. It seems like everything we have read before this point has led us here, and while I had absolutely no experience in coding before this week, it suddenly seemed less intimidating than it has always appeared to be in the past.

Out of all of the reading we've done this semester, the pieces covering cognitive technologies jumped out at me this week as I got started on the Python coding process. It is evident, as I practice this very basic introduction to coding, that computers – and so much of our lives – run on sets of codes and symbols.

For example, I enjoyed learning how to do simple math equations on Python. Although the language is slightly different from what I know, the outcome is the same. Regardless of the symbol used to show that something is “equal” or “true” or “false,” the actual outcome of the process remains the same. This process takes me back to learning any other foreign language. When I took Spanish classes in high school and college, “I want two pieces of pizza” looked a lot different in English than it did in Spanish. The sentences were structured differently, and entirely different words were used to express the desire. However, the outcome remained the same, so long as someone knows how to translate between the two languages.
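
Back in Python, the arithmetic itself is the familiar kind; only the notation for asking whether something is true or false is new. A quick illustration of my own, not taken from the lesson:

    print(7 * 6)          # 42
    print(10.0 / 4)       # 2.5
    print(2 + 2 == 4)     # True: Python answers with True or False
    print(3 > 5)          # False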

Thus, when I was following along with the Python lesson, I was reminded that although the way of computing on the screen looked and seemed very different from the way I compute things in my head, there was a visible correlation. In "Computational Thinking," Jeannette Wing mentions that computational thinking is parallel processing – code is interpreted as data, and data is interpreted as code (33). This statement became very clear as I worked on Python after doing the readings. As Subrata Dasgupta mentions in "It Began with Babbage," computation is associated with the process and activity of human thought (11). While many of our readings have stressed this idea in weeks past, it wasn't until I logged on to Codecademy that it all began to really make sense.

It may sound naive, but as I worked on the Python training, I couldn't help but think that the coding process could be simplified; it feels like it has somehow become harder than it needs to be. Obviously, I have very little experience with coding. However, I did become easily confused by all of the symbols that mean something very different from what I know them as in everyday life (for example, the = sign). While human language has been around for centuries, coding language still seems relatively new. Who exactly placed new meanings on these various coding symbols, and why were those symbols chosen in particular? Obviously, there are various coding languages, and coding in general has evolved over time. However, I'm still interested in further discussing why computation and coding happen the way they do.
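
The = sign is a good example of that gap: in Python, = assigns a value to a name, while == asks the everyday question of whether two things are equal. A toy illustration of my own, not from the lesson:

    x = 5           # assignment: the name x now stands for 5
    print(x == 5)   # True: comparison, the everyday sense of "equals"
    print(x == 6)   # False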

 

References:

Wing, Jeannette. “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35

Dasgupta, Subrata. It Began with Babbage: The Genesis of Computer Science. Oxford University Press, 2014.

Python and Computational Thinking – Yasheng

The eureka moment came when Evans explained how coding and representation work: "instead of enumerating a mapping between all possible character sequences and the natural numbers, we need a process for converting any text to a unique number that represents that text" (10).

Denning and Martell clarify this process in their writing:

A representation is a pattern of symbols that stands for something. The association between a representation and what it stands for can be recorded as a link in a table or database, or as a memory in people’s brains.

  • There are two important aspects of representations: syntax and stuff.
  • Syntax is the rules for constructing patterns; it allows us to distinguish patterns that stand for something from patterns that do not.
  • Stuff is the measurable physical states of the world that hold representations, usually in media or signals.
  • Put these two together and we can build machines that can detect when a valid pattern is present (372).

These explanations were very useful this week as I was de-blackboxing Python in Codecademy. The second module of Codecademy teaches newbie coders to use Python to calculate tips, an annoying task for everyone who eats out at restaurants that require tipping. Tipping is a process we are all familiar with; we all know how to do it step by step, and of course we sometimes ask a calculator for help.

Just from looking at the Python interface, the steps for calculating a tip became very clear.

[Screenshot: the Codecademy tip-calculator exercise in Python]

We, humans who have learned the basic procedure of tipping, first identify what is in play when calculating a tip: how much does the meal cost, what is the tax rate, and what percentage should my tip be? These questions are captured as the variables "meal," "tip," and "tax." Then, once values are assigned to these variables, an equation is formed to carry out the proper calculation based on them.
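
A sketch along the lines of that exercise (the specific numbers are placeholders I chose, not necessarily the tutorial's):

    meal = 44.50      # how much does the meal cost?
    tax = 0.0675      # what is the tax rate?
    tip = 0.15        # what percentage should my tip be?

    meal = meal + meal * tax      # add the tax to the cost of the meal
    total = meal + meal * tip     # then add the tip on top of that
    print("%.2f" % total)         # 54.63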

Python and I both calculate tips in the same manner. The only differences are that (1) I am not as fast as Python at calculation (I am an Asian who cannot do math well) and (2) I filter out certain steps in the process, like defining variables, because I do them subconsciously, though they are still very much steps in my cognitive functioning. So there is actually little difference between computational thinking and human thinking. Wing puts it perfectly: computational thinking is "a way that humans, not computers, think. Computational thinking is a way humans solve problems; it is not trying to get humans to think like computers" (35). When we have a problem, we identify it, isolate the different elements involved, and follow a logical sequence to solve it. Or, to put it in Denning and Martell's terms, Python and humans employ the same syntax when conducting cognitive work, yet the stuff humans have can be more nuanced.

Questions:

What if a pattern is not visible?

What happens when the “stuff” is too nuanced for computers to interpret?