Category Archives: Week 9

Computation is a State of Mind

Computation does not equal computer science. While computer science is the academic study of computation for science and engineering, computational thinking can be applied to virtually any discipline. A basic example is Shannon’s “mathematical theory of communication,” which opened a new way of thinking about information processing. It is therefore perhaps more useful to think of computation as a particular way of “reckoning” and “calculating” to solve problems.

The following are some of the computing concepts, beyond the key ones, that I gleaned from the readings and my coding journey:

  • Recursion
  • Interpretation
  • Parallel-processing
  • Abstraction (conceptualizing)
  • Compartmentalizing
  • Complexity
  • Redundancy
  • Error-correction
  • Representation
  • Iteration
  • Search
  • Time and space

The key concepts, however, surely make more sense to me now as a non-CS student, especially after working through the Python tutorial on Codecademy. I recognized fundamental shifts in the way programming was allowing me to think. For example, I quickly realized how iterative the process of coding really is: expressing instructions precisely enough for a machine takes far longer than communicating the same idea to another human being. It is now clear that while computation did begin as the process of doing mathematical calculations, over the years computer scientists have found far broader things to compute than arithmetic. The modern computer thus works by performing automatic calculations that can alter its own “operating instructions”. This also led me to consider conceptual differences between human cognition and automatic computers. One of them is storage, or memory. Computers, unlike humans, have no trouble remembering something forever (until it is erased, of course).

The structural classification of the computer as a computational artifact, namely hardware (material artifact), software (abstract artifact) and architecture (liminal artifact), greatly helps in comprehending the complexity of these systems. It also connects to Charles Babbage’s ardent dream of replacing both “muscle” and “mind”. In popular modern culture, computer science is almost seen as a mechanical, dehumanizing, “thinking like robots” pursuit, when it is really rooted in the humanities. Computational thinking is a way to study how humans can solve problems, not how a computer can (although computers used to be people). As Dasgupta says, “it is both a concept and an activity historically associated with human thinking of a certain kind.”

While computation is rooted in mathematics, we no longer use computers merely to solve arithmetic problems; we use them for a lot more. Computation thus deals not just with numbers but with symbols that stand for something else. Most importantly, the science in computer science differs from the normative conception of science. Computer science is a science of the artificial: it builds material artifacts that perform computations more efficiently than human beings along a myriad of vectors.

Programming Languages and the Human Mind

Programming languages may sound rather “geeky” to outsiders. Before I learned anything about programming languages, I tended to think of them as languages from a very different world, where weird people huddled up to create something that was the opposite of human. But I have now found out, from Mr. LeMaster’s Expressive Computation course this semester and from this week’s readings, that programming languages have everything to do with our ingenious minds.

There is no doubt that programming languages are highly symbolic meaning systems, for the state of an electric circuit would mean nothing without human interpretation through a shared, collective “dictionary”. Symbols in these systems can be divided into three categories. The first includes symbols that mean things directly: numbers, strings or other values that the language recognizes. The second includes symbols that refer to or represent other symbols, such as variables defined to store values. We can find equivalents of these two categories in natural languages, which are good indications of the symbolic processes that happen in the human mind: we can use languages not only to mean things, but also to describe languages themselves, as linguistics does.
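
A quick Python sketch of those first two categories (a toy example of my own, not one from the readings):

    greeting = "hello"    # the literal "hello" is a first-kind symbol: a value
    answer = 42           # the literal 42 likewise means a number directly

    # The variable names "greeting" and "answer" are second-kind symbols:
    # they stand for the values stored under them.
    nickname = greeting   # "nickname" now represents the same value as "greeting"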

But there is a third kind of symbol in programming languages that has no counterpart in natural languages: symbols that are intended to perform actions on other symbols. Why is this kind exclusive to programming languages? Because we don’t need language to tell us how to perform a cognitive process. When we pay the check in a restaurant, we don’t need to say out loud, “The bill is 10 dollars, the tax is one dollar and the tip should be 2.2 dollars,” to put the right amount of money on the table before we leave. The reason programming languages have this third kind of symbol, I think, is that software does not think on its own, and every decision it makes is prearranged. It collapses when it runs into something new. In contrast, the cognitive processes of the human mind take place automatically and naturally, and new things can be absorbed into existing symbolic systems.
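
A small Python sketch of that third kind, reusing the restaurant numbers above (the tax and tip rates are my assumptions, chosen to make the figures match):

    bill = 10.0                  # a first-kind symbol: the value 10.0
    tax = bill * 0.10            # "*" is a third-kind symbol: it acts on other symbols
    tip = bill * 0.22            # -> the 2.2 dollars from the example
    total = bill + tax + tip     # "+" combines the three amounts
    print(round(total, 2))       # the instruction "print" acts on "total" -> 13.2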

However, since modern computation was developed by many ingenious minds and is the outcome of centuries of cumulative human symbolic thought, we can learn a thing or two from so-called computational thinking. As Jeannette Wing stated in Computational Thinking, computational thinking means reformulating a seemingly difficult problem into one we know how to solve. It is a way of thinking, a philosophy, rather than some unreachable cutting-edge technology. This definition reminds me of a method Mr. LeMasters brought up in his class. He suggested that before we tried to write any code, we should write some “pseudocode”: a sketchy outline, in natural language, of the symbolic processes of the program we were trying to develop. From this perspective, computational thinking is rather “human” and fundamental. No wonder Wing would suggest an early and popular education in computation.
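
As a hypothetical illustration of that method (my own toy example, not one from the course), the pseudocode comes first and the Python follows it line by line:

    # Pseudocode: a plain-language outline, written before any real code.
    #   1. ask the user for a number
    #   2. if the number can be divided by two, report "even"
    #   3. otherwise report "odd"

    number = int(input("Enter a number: "))
    if number % 2 == 0:
        print("even")
    else:
        print("odd")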

Codes of Mind

We talk about the “black box” all the time when learning theories of the human cognitive process. In computer coding, this “black box” is everywhere. For one thing, any computer program is, in general, a black box. It takes our instructions and gives results in a millionth of a second, and we can’t, don’t have to, and most of the time are not allowed to see the process. Whatever happens behind the polished, good-looking window of a piece of software is packaged and hidden inside that “black box”. For another, the code of every program itself consists of thousands of small “black boxes”: functions.

Here is the simplest sample: a function named “even_odd”, whose job is to tell whether the input number is even or odd.
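
A minimal Python sketch of it, assuming simple wording for the messages, could look like this:

    def even_odd(value):                    # the "name" on the black box
        number = value                      # define a variable from the input
        if isinstance(number, int):         # yes/no judgement: is it an integer?
            if number % 2 == 0:             # can it be divided by two?
                result = "The number is even."
            else:
                result = "The number is odd."
            return result
        return "Please input an integer."   # fallback for non-integer input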

By defining this function, I can now call the “even_odd()” instruction, input a number, and get back a string telling me the result in a language I can understand. All the code between the second and ninth lines is the mysterious stuff inside that “black box”, and in fact it is not mysterious at all! It is just several steps of logical judgement and arithmetic.

First it defines a variable from the input, then makes a yes/no judgement about whether it is an integer. If it is, another “if” line tells whether it can be divided by two. Then we get the result.

Deconstructing this function can teach us something about the cognitive process. In this case, the input information is processed step by step, logically and arithmetically. It is perfectly organized and always right, without any ambiguity. A computer program never hesitates over the “content” of the information, as people constantly do in communication; it can only ever end in one outcome, as long as it isn’t stuck. That is the biggest difference between artificial processing and human mental cognition: the program doesn’t try to “understand” things. In other words, artifacts don’t reach the semantic level.

But it makes me consider seriously what the “semantic level” is. When designed to tell the “odd/even” property of a number, the function seems to reduce this property to a division calculation instead of “understanding” it. But isn’t that also what some people would do when asked to make this judgement? The meaning of “even”, in our cognition, is closely related to, or even defined by, this calculation, and what the computer does here is merely slow down and decompose the process.

We can even code this function a bit differently:
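
Again a sketch under the same assumptions, this time checking how the number is written rather than dividing it:

    def even_odd(value):
        number = value
        if isinstance(number, int):
            digits = str(abs(number))       # write the number out as text
            last_digit = digits[-1]         # look at how the number "ends"
            if last_digit in ("0", "2", "4", "6", "8"):
                result = "The number is even."
            else:
                result = "The number is odd."
            return result
        return "Please input an integer."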

Here the “division calculation” part is replaced with more steps. It looks more complicated, but it actually behaves more like the cognitive process many people follow when working out the meaning of “even”: the number ends with 0, 2, 4, 6 or 8. It may be exactly the same process that happens at people’s semantic level.

Of course this is a much simpler example than what people meet in reality every day, not to mention all the history, culture, tradition, life experience and everything else we call “context” when we try to grasp meaning at the semantic level. But computers could also be equipped with “context”, if we spent as long as a lifetime writing the code. We could likewise write code that lets computers speculate when faced with new words, or read people’s facial expressions. Emotions, at least some of them, are also results that follow from certain conditions, which means we can decompose and code them too. Anything that can be reduced to a pattern, or to a relationship between cause and effect, can be coded.
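
As a toy sketch of that claim (entirely hypothetical, my own example), even a crude “emotion” can be written as a cause-and-effect rule:

    # A hypothetical, oversimplified "emotion" coded as cause and effect.
    def mood(hours_slept, tasks_finished):
        if hours_slept < 5:
            return "grumpy"                 # one condition, one outcome
        elif tasks_finished >= 3:
            return "happy"
        else:
            return "neutral"

    print(mood(8, 4))                       # -> happy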

What I’m trying to say is: is it possible that our mind, our cognition, our mental processes, function in a way just like those lines of code? Perhaps everything in the “black box” is just millions of sequential simple steps, logical or arithmetical, produced by the activity of neurons in the brain, just like the “0”s and “1”s at the very bottom of computers.

In Wing’s article Computational Thinking, the author claims that “computers are dull and boring; humans are clever and imaginative. We humans make computers exciting”. That’s pretty true. But since the “exciting” can be made, maybe it is just a very complicated accumulation of the “dull and boring”.

The Logic of Machines and the Tolerance of Humans

After completing only about 20 percent of the Python track on Codecademy, I have a more intuitive understanding of how different computing languages are from natural languages. The most obvious difference is tolerance. With their unique syntax, the rules for communicating with machines seem extremely strict to me. Computing languages are simply not tolerant. Even a blank space means something: if I miss the spaces when coding “spam ” + “and ” + “eggs”, the program produces “spamandeggs” and the exercise is marked as an error. Moreover, even the errors have to be coded, probably using “if”, “elif” and “else”. Because the machine can’t process randomness the way we humans do with natural language, instructions have to be explicit to be executed. To me, logical artificial languages like Python seem an exact reflection of machines’ logical minds. Built to run mathematically and electronically based programs, machines are meant to be logical, and intolerant.
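
Two toy sketches of what I mean (my own examples, not Codecademy’s):

    # A missing space changes the output, and the checker flags it as wrong:
    print("spam " + "and " + "eggs")    # -> spam and eggs
    print("spam" + "and" + "eggs")      # -> spamandeggs

    # Even the "error" cases have to be spelled out explicitly:
    answer = "maybe"
    if answer == "yes":
        print("proceed")
    elif answer == "no":
        print("stop")
    else:
        print("unrecognized input")     # nothing unplanned is tolerated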

On the contrary, humans have much more tolerance than machines. In fact, we tend to describe an intolerant human who is indifferent, highly logical and cold-blooded as a robot (an analogy that sounds more reasonable now, after learning a bit of Python). Wing, in her short paper, claims that the fundamental question computational thinking confronts is: What is computable? Or, what can humans do better than computers, and what can computers do better than humans?

I stumbled across an article this morning called Excuse me, you are fired. In it, the author cites scholars’ opinions on jobs being taken over by robots. It had previously seemed more than convincing to me that one day all our jobs would be taken over by robots. This article, however, lists features that keep a job safer from automation: 1) negotiating or communicating with sophistication; 2) helping or assisting others with a genuine and sincere heart; 3) coming up with original ideas with an aesthetic and creative mind. In contrast, if your job requires 1) skills that can be easily grasped through training, 2) repetitive work that needs experience rather than deliberation, or 3) squeezing into small workspaces with little need to keep up with current affairs, then you are at higher risk of being replaced by a machine.

Telephone salesperson and hotel or accommodation manager/owner rank as the highest- and lowest-risk jobs for automation, at 99% and 0.4% respectively. With these two jobs, we can easily see the difference.

Hence one can see how closely computational thinking is related to our livelihoods. The jobs safer from automation lie in fields where thought is harder to compute. Randomness, tolerance and minds that dare to think outside of logic are common in those fields (probably even required in the third kind of job). These are the jobs we do better than computers. The higher-risk jobs lie in fields that demand the full implementation of computable logic. In those fields logic is vital, and people tease themselves about working like robots. There, computers may do better than humans, in the future or right now.

However, this is not to say that the safer jobs don’t require computational thinking. In my opinion, it may be quite the contrary. In the high-risk jobs, where most of the work is computable, humans act like computers but seldom think like one; after all, humans can never beat computers at computation itself. In the fields safer from automation, by contrast, we act nothing like logical computers, thanks to the tolerance of humanity. Humans need to think more like computers in those fields precisely because the work doesn’t fundamentally require computational thinking. So to me, the safer the job, the more computational thinking will help.

(PS. The article Excuse me, you are fired that I read is in Chinese, but its argument comes from a comprehensive analysis of data supplied by Michael Osborne and Carl Frey of Oxford University. See the source: The Future of Employment: How Susceptible Are Jobs to Computerisation?)

(PPS. The BBC made this long paper by Osborne and Frey more readable, using the exact principles of “computational thinking”. See the BBC website to find out your job’s risk: Will a robot take your job?)