Category Archives: Week 8

Programming as a change factor in the teaching and learning environment

I graduated from a technical high school in informatics, where teachers explained that once we understood logic, we would be able to learn any specific programming language. Algorithm rulers (flowchart stencils), pencil, eraser, and sometimes a computer were the main tools of the Programming Language class. After this experience, I ended up teaching programming for two years before going to college. The language used in the school where I worked was Visual Basic, a Microsoft language that combined written code with a graphical interface. Now, I am happy to be learning Python through a very innovative teaching and learning design.

One of Codecademy's founders was a Columbia University undergraduate who dropped out to invest in a programming platform that would be easy and intuitive for everyone. This is worth mentioning because his endeavor grew out of his frustration with the university's methods of teaching and learning, and with the content itself. He has advocated for teaching programming to young people, and it is not difficult to find him at education conferences and seminars.

This was my first time using the Codecademy platform, although I had planned to try it before. I found it very pragmatic and effective: learn a language to communicate with a machine. Referring to Jeannette Wing's article, the platform does not intend to form computer scientists; instead, it focuses on creating programmers, who learn the language in order to give instructions to the machine. Learning is meant to come from experience: the more one uses the language, the more one assimilates it.

I completed 25 activities (2 lessons), through which I noticed that I more easily memorized the commands shared with other languages: the use of quotation marks to mark strings, the mechanism for defining a variable and storing data in it, the use of math operators much as they are used in math, and so on.
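Those shared conventions can be sketched in a few lines of Python (the variable names here are my own illustrations, not taken from the course):

```python
# Quotation marks mark a string (a piece of text data).
greeting = "Hello, world"

# A variable is defined by assignment and stores data for later use.
year = 2016

# Math operators work much as they do in arithmetic.
total = 7 + 3 * 2      # multiplication binds tighter, so total is 13
remainder = 10 % 3     # the modulo operator gives the remainder, 1

print(greeting)
print(total, remainder)
```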

On the other hand, my mistakes revealed interesting findings. As David Evans (2011) explains, programming means describing every step in a language that humans understand and the machine can execute. The error messages I received made that very clear. One of them exposed my confusion between a variable and a string through my misuse of quotation marks. The error message tried to explain that I was not telling the machine what I thought I was telling it. It made clear that there is no margin for machine interpretation. The platform understood what I did, explained it to me, and told me what I should have done instead. The platform's algorithm is very didactic and learner-focused, while implicitly making clear that I need to learn exactly how to represent what I want the machine to do.
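A hypothetical reconstruction of that confusion in Python: quotation marks make a string literal, while a bare name refers to a variable, and misusing one for the other produces exactly the kind of explicit error message described above.

```python
animal = "cat"          # a variable holding a string

print(animal)           # prints the stored value: cat
print("animal")         # prints the literal text: animal

# A bare name that was never assigned leaves no room for interpretation;
# Python stops and says precisely what went wrong.
try:
    print(animl)        # misspelled variable name
except NameError as error:
    print(error)        # name 'animl' is not defined
```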

Interestingly, I imagine that the platform, which seems to have more than 25 million users so far, will become able to predict users' mistakes more precisely, giving better instructions in the exercises.

The level of abstraction in Python, as explained by Evans (2011), is not so hard to get used to. For instance, upper() and lower() are abstract commands, but they are intuitive as well. So far, Python has seemed to be a “simple, unambiguous, regular, and economical” language. This is perhaps why it has become one of the most popular languages in social initiatives that try to teach young people and women to program.
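A quick illustration of those two intuitive commands (the sample name is my own):

```python
name = "Ada Lovelace"

# The method names describe what they do, which makes them easy to guess.
shouted = name.upper()    # "ADA LOVELACE"
whispered = name.lower()  # "ada lovelace"

print(shouted)
print(whispered)
```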

Although I advocate for teaching programming at schools, I do not agree with Jeannette Wing that we need computational thinking at schools as a way to teach abstraction and problem-solving. This seems to come from someone who does not know the school curriculum, which already includes disciplines focused on such skills. Math, for example! The problem is how schools and teachers teach them. The problem is in the design of the classes. This is why the Codecademy platform is so ingenious: it changes the design of the teaching-learning process using the tools available today. I share Jeannette Wing's and David Evans's aspiration of having liberal arts students and young people learn computational skills at school, but my reasons are grounded in the need to change the teaching-learning design and to increase the number of skilled people from different backgrounds able to command machines for the good of their communities and their day-to-day lives.

Softwares are rules of life



Software programs are everywhere

We can observe the existence of software, as idea and piece of mind, everywhere in the surrounding environment. From the level of organic life in the living cells of tree leaves or the human body, up to the rule of gravitation in the cosmos, we witness perfectly functioning rules. At the micro level of living cells, the amazingly complex work of mitochondria producing energy, or of ribosomes synthesizing proteins, is carried out according to certain software programs encapsulated in the DNA of each cell; being intangible, these programs make physical (tangible, material) bodies function and live. “DNA translation can thus be called an information process; if someone discovers a controlling algorithm, it could be also called a computation[i].” At the macro level, we can see perfectly working rules in the Solar System, in the Milky Way galaxy, and in the whole Universe.


All these rules, at either the micro or the macro level, could be considered software programs that bring material bodies (and the world) into order and into systematic ways of living. There is even a mathematical algorithm, recently discovered, for the Brownian (so-called chaotic) motion of molecules.


This may once again demonstrate the existence of software programs everywhere in the surrounding environment.

Technical artifacts designed by humans

The human brain extrapolates the same approach to its own creations. In fact, to bring life to new inventions, three major substances are needed: a material body, software, and energy. We can test this hypothesis by observing how computers are designed and how they function. If the material (hardware) parts of a PC, the sophisticated “blackboxed” devices and complex wiring that form the body itself, are the tangible side, then the mathematical algorithms and programs that bring electronic life to these tangible things are the software.


And, finally, no electronic mechanism can work without energy. Thus, to breathe “real life” into a PC, along with installing software, we need to plug it into the electric grid. As we see, the electronic artifacts invented by people and the organic lives surrounding us have the same requirements for living.

Personal experience and constraints

Apparently, we feel and understand something better once it becomes crucial for us. Using a PC routinely does not usually give us insight into how sophisticated its software programs are. Once I was inspired to design a social internet site to boost the citizen journalism movement in my country, I realized how dependent we are on such ordinary features as software programs.


I needed each section of that internet page to serve a purpose; therefore, I had to define an appropriate program for each function, each corresponding to a certain feature of the site. I got technical assistance to build the site and started learning on my own. Later I came to understand how a Java program works and which software functions better for a slideshow. I learned which programs we needed in order to give users the capacity to upload their articles and information, how they could use the search engine, what types of hashtags and keywords were important, and how we could balance the aesthetic form of the site with its content. One of the most complicated issues was composing procedures[ii] for all the features of the site and screening the outputs, as well as making trade-offs between time, space, processing power, and storage capacity[iii]. Later, I understood that we had been following almost all of the “Eight Great Ideas in Computer Architecture,” such as using abstraction to simplify design or making the common case fast[iv].

[i] Peter J. Denning, The Great Principles of Computing, American Scientist magazine, page 372

[ii] David Evans, Introduction to Computing, University of Virginia, page 54

[iii] Jeannette M. Wing, Computational Thinking, Viewpoint, 2006, page 34

[iv] David A. Patterson and John L. Hennessy, Computer Organization and Design, page 11

We really do need to think like computer scientists

In Introduction to Computing: Explorations in Language, Logic, and Machines, Evans (2011) makes the argument that computing science should be taught as “a liberal art, not an industrial skill” (p. ix). He explains that everyone should study computing because “1. Nearly all of the most exciting and important technologies, arts, and sciences of today and tomorrow are driven by computing” and “2. Understanding computing illuminates deep insights and questions into the nature of our minds, our culture, and our universe” (p. 1). After only a few lessons of Codecademy's introductory Python course, Evans' motivation becomes clear: the logic of computing sets in motion a manner of thinking that is very distinct and, in a way, empowering. Even in the first lessons, when learning how to properly format instructions in the Python programming language, thinking from an instructive perspective feels different from other manners of thinking. Moreover, the idea that you have to think logically and step by step is empowering because it gives you a feeling of control; even with basic programming instructions, you are giving instructions, from which results emerge.

An argument made by Denning and Martell (2015) is that computer science is a branch of science in its own right because it has its own approach to discovering facts. Moreover, they argue that its approach differs from that of other sciences because it is transformative, not just focused on discovery, classification, storage, and communication: “Algorithms not only read information structures, they modify them” (p. 15). And it is generative, not just descriptive: “An algorithm is not just a description of a method for solving a problem, it causes a machine to solve the problem” (ibid). This way of thinking is felt right away when writing a few lines of basic code, by which I, as the programmer, could define variables and determine how they would behave in relation to other variables and different logical instructions. However, the idea of machine learning tempered this empowering feeling as I kept going with the module.

Both Denning and Martell's and Evans' proposals make sense for today's world. On the one hand, distinguishing the scientific approach of computer science from that of other sciences is essential at a broader, more public level at this point. While computer scientists already know this as they rapidly advance the field (we are already speaking about artificial intelligence and high levels of machine learning), the public may not be as clear about the wide world that is computing. As explained by Wing (2006), “thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction” (p. 34), but the dominant narrative about programming today may not tell the whole story. On the other hand, Evans is right: computing is everywhere, and understanding it can only help us better understand ourselves and our culture.

Making computing more widely used is a challenge on several fronts, but the one that came to my mind, considering the history of computing told by Campbell-Kelly and the increasing amount of news and media coverage of algorithms, machine learning, and artificial intelligence, is the constantly widening digital divide within the computing field. The idea that computing has to be more accessible has been pushed forward by policies emerging in different sectors and at different levels, which is why a website with such well-designed self-learning software as Codecademy exists for free today. However, even such programs may not fully illustrate to users how fast the field is growing, and this lack of awareness means those who are not learning this logic are being left behind.

As I mentioned at the beginning, the feeling of empowerment by being the one giving instructions was great when I started the learning module. As I was thinking about this, Ada Lovelace’s argument came to my mind: “The Analytical Engine has no pretensions whatever to originate any thing. It can do whatever we know how to order it to perform” (Evans, p. 35). However, after I was done with a few more lessons, I reached the stage in which you can program interactivity with the user, and I realized that keeping the computing logic in mind is essential not just when I want to code something, but while constantly interacting with ubiquitous computing.

In an interview, mathematician Cathy O'Neil, author of the book ‘Weapons of Math Destruction,’ explained that algorithms are such a big part of our lives because they process not just the information that we personally input into our computers, but also information about us that companies process in order to make decisions that affect our lives. Big data that profiles people in order to advertise services or information to them may end up causing harm because, as she puts it, it is used to categorize people into winners and losers. If the algorithms determine you are likely to be persuaded to vote, for example, a certain type of information will reach you that wouldn't if you were categorized differently, even mistakenly. Our access to information is mediated by algorithms, and I think this means that the logic of computing has to be part of our media literacy as well.

When consuming and processing information today, it is important that we develop a layer of thinking in which we question how information was processed by algorithms in order to be shaped the way it is. If there is anything that taking the Python course made clear to me, it is that nothing in computing is accidental: something may have been instructed by mistake, but it doesn't happen by chance. What happens happens because it has been instructed to happen. When we apply for services such as health insurance and receive certain information, we have to be able to question how our profiles were processed. And when consuming information online, we also have to keep asking why we find some information rather than other information. The issue, as O'Neil puts it, is that we as a society blindly trust algorithms: “we don't push back on algorithmic decisioning, and it's in part because we trust mathematics and in part because we're afraid of mathematics as a public.” This is highly problematic when we consider that it gets in the way of our interaction with culture and knowledge in society today.

As noted, these challenges are starting to be addressed in different ways today. The idea that everyone should learn basic programming is increasingly part of the narrative, especially in developed countries. One such initiative, launched in 2013 and funded heavily by the private tech industry, promotes this idea for children by giving tools to teachers, schools, and kids. And the US government has been investing more in bringing people into the STEM (science, technology, engineering, and math) fields. Part of this effort should include learning the abstract computational thinking method not only to create but also to consume. As Evans explains, when computer scientists are faced with a problem, they think about it “as a mapping between its inputs and desired outputs,” and they thus “develop a systematic sequence of steps for solving the problem for any possible input, and consider how the number of steps required to solve the problem scales as the input size increases” (p. 16). As consumers of information, we also need to consider how our information has gone through a number of steps before reaching us and is thus shaped in a particular way. We already do this when we think about the news we consume: we know there is a journalist who researched and wrote an article along with an editor, and that editing decisions went into the topic, the framing, the placement of the news item, and so on. We need to add a layer of thinking in which we consider that information was also selected and processed by algorithms. We need to be able to imagine the mappings mentioned by Evans, but for this we need to know they are there.

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. 2011.

Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

Martin Campbell-Kelly, “Origin of Computing.” Scientific American 301, no. 3. 2009.

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3. 2006.

Fourth time’s the charm

Chen Shen

This is the fourth time I learned to code.

The first time was about 23 years ago, and the language was Logo. It was the best language to start with at that time, because it was graphics-oriented and made specially for kids. I hardly delved into any depth with Logo; all I learned then was how to draw certain geometry. The concept was straightforward: give instructions to a “turtle” (moving, turning, repeating), and the turtle leaves a trace on the screen. For example, one easy piece of code was

REPEAT 4 [FD 100 RIGHT 90]

It can even be interpreted as a single sentence: go forward 100 pixels, then turn right 90 degrees, and repeat it all another 3 times. What do we get? Of course, it's a square:


But it carries the fundamental concepts of a programming language, what Ian Bogost calls Procedural Literacy. In his paper Procedural Literacy: Problem Solving with Programming, Systems, & Play, Bogost argues: “more generally, I want to suggest that procedural literacy entails the ability to reconfigure basic concepts and rules to understand and solve problems, not just on the computer, but in general.” I surely didn't understand Procedural Literacy then, but it gave a kid a sense of steps and protocols: almost every kid at that age can draw a square, but not all can describe how to do it to something that has no idea what a line is or what a right angle is. When you start to describe things that are simple for you to the computer, you have started programming. But there was more. Though the computer is stupid enough to fail every geometry test, it can do a magical thing: repeat, just as shown in that one line of code. Computers can repeat something as many times as you want, or as time allows. And this is genuine repetition, unlike a kid drawing a square on paper, who in fact draws two horizontal lines and two vertical lines in opposite directions (you seldom see a kid draw a square by drawing a line, turning the paper a right angle, and repeating). With this ability, the stupid turtle can do things impossible for most kids, like this:
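The one-line Logo program can be loosely simulated in plain Python; this rough sketch tracks the turtle's position instead of drawing on screen, but the structure of the repetition is the same:

```python
import math

# Simulate REPEAT 4 [FD 100 RIGHT 90]: start at the origin facing east,
# and record the corner reached after each forward-then-turn step.
x, y, heading = 0.0, 0.0, 0.0
trace = [(0, 0)]

for _ in range(4):                                  # REPEAT 4
    x += 100 * math.cos(math.radians(heading))      # FD 100
    y += 100 * math.sin(math.radians(heading))
    heading -= 90                                   # RIGHT 90 (clockwise)
    trace.append((round(x), round(y)))

print(trace)   # four corners of a square, ending back at the start
```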


And finally, these:

Sacred geometry, right? Here we can see another key concept: computational emergence. From the single square to all the breathtaking pictures, their core is nothing but computational principles. And if you recursively repeat a principle, patterns emerge. A little principle can generate massive things. More things in our lives can be computationally expressed than we care to admit; after all, the forces that keep the earth spinning are but a set of simple equations, and we ourselves are a long string of base pairs.


The second time was 16 years ago, and the language was C. I didn't go far that time, because I learned it only as a time killer in boring classes. But still I couldn't help noticing: unlike learning a natural language, it is much easier to learn an artificial language as you age. In our recent readings we met the claim that language is the mold of your thoughts. But it seems to me this holds only for your native language; other acquired languages are like pointers in C, redirecting to functions that already exist in my mind. If the first time was mainly geometric, the second was algebraic: I taught myself easy algorithms, sorting, traversal, and so on. In learning C, I touched another core of programming: functions. We already know functions pretty well from math, but coded functions can do even more. They can operate not only on numbers, but also on strings, interrupt code, addresses, other functions, and even themselves.
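That last point, functions operating on other functions and on themselves, can be sketched in a few lines of Python (the names here are illustrative):

```python
def apply_twice(f, value):
    """A higher-order function: its argument f is itself a function."""
    return f(f(value))

def increment(n):
    return n + 1

print(apply_twice(increment, 3))   # 5

# A function operating on itself: recursion.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))   # 120
```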


The third time was when I majored in Computer Science; it was finally formal and systematic. The languages were mainly C and assembly. Both Logo and C are somewhat intuitive, but assembly is definitely not. Assembly code looks something like this:


What does it do? It simply calculates and outputs the basic + – * / of your input. So it occurred to me that, though they both run on my same computer, there are things I can do with assembly language that are inaccessible from C (in the sense of regular coding, not in the sense of a universal Turing machine, which makes assembly capable of performing anything other languages are capable of), and vice versa. In this week's reading we met the concept of morphemes, and it seems to me that different languages have different morphemes, and there are meanings you can only assemble from a certain morpheme set in a certain language. Last week Jessie and I had a talk about cross-cultural translation, for example the word litost (in Milan Kundera's novel The Book of Laughter and Forgetting), which means “a state of torment created by the sudden sight of one's own misery.” But unlike the way you can code in different languages to output the same square, no matter what English morphemes a lexicographer uses to define litost, the result is merely an approximation. It means every language is a subset of the whole space of possible meaning. Then I wonder: are there meanings only accessible by combining morphemes from different language sets?

Now, with Python, it is my fourth time. Learning from an online platform is far more intuitive and visual than learning from books. We can work in a simulated IDE and start off with real code. And current IDEs are kind enough to use different colors to signal which words are grammar-approved instructions (how many times did I check through all my assembly code just to find a typo?).



In the screenshots, we can see symbols that mean things, like the “welcome to Python!” in yellow and “Raw Sensor Values” in dark green. These are symbols interpretable by humans but not by the machine. And we can see symbols that do things, like the “print” in purple and “analogRead” in red. These instructions lead to machine actions both perceptible (printing a string on the screen) and imperceptible (reading the voltage on a certain pin and saving it to a register) to humans.

Without a doubt, this was the easiest time, partly due to previous learning and partly due to advances in language teaching. I have to admit I enjoy learning programming languages, not in anticipation of creating code or software myself, but of a deeper understanding of the computable world and of our minds.

Martin Campbell-Kelly, “Origin of Computing.” Scientific American 301, no. 3 (September 2009)

Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006)

Peter J. Denning and Craig H. Martell, Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition.

“Litost [Lee’ – Toast].” Urban Dictionary. Accessed October 26, 2016.

Programming language – the combination of rule and logic

I have always held the opinion that computer science should be part of elementary education, just like mathematics and grammar courses. This doesn't mean that kids have to develop a complete project or design innovative algorithms after finishing programming courses, nor does it mean that modern citizens should be able to fix bugs or develop new functions for the applications they use. Learning programming simply means learning how to think like a creature living in the digital age.

1. Programming is about language and rules
No matter what language we use, if we want to express something and have others understand it, we must follow certain rules connecting words and sentences in a comprehensible way, which we call grammar or syntax. Even though we can convey information by means other than language, such as gestures, the prerequisite of successful communication is still consensus.

Programming also follows fixed rules, though the rules may change much more quickly than those of natural languages. In fact, except for binary machine code, all programming rules, even assembly language, rely more or less on natural languages, and I think that is why we call rule systems like C, C++, Python, and Java programming languages.

Just like natural languages, the basic units of a programming language are words and punctuation marks. Generally speaking, the meaning of a keyword in a programming language, usually English, is similar to its original meaning in natural language, but not identical. For example, the keyword “return” in C means that a function will hand back a result at the end, whether null or meaningful; it is similar to its English meaning, but the keyword is tightly bound to the function structure. Another example is that keywords are often used to declare the data types of variables. In C, a declaration is needed before using a variable, and the keywords “int”, “float”, and “double” refer to different kinds of data.

Punctuation marks are a totally different story. In standard I/O functions such as printf() and scanf(), or I/O macros such as getchar() and putchar(), punctuation marks may serve the same function as they do in natural language. However, punctuation marks are often used in other ways in programming languages. For example, in C, “=” and “==” mean different things, and both differ from their natural-language meanings. The former is the assignment operator: the statement “x=1” does not mean that x equals 1 but assigns 1 to the variable x; that is to say, the statement does not state a fact but performs a dynamic operation, and the direction of the operation is from right to left. The latter is a relational operator, often used to make a judgment. It may be natural for us to write x=y in daily life, but when we make a judgment or test in C, the right way is to use “==”.
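The same distinction carries over to Python, which makes for a compact sketch (the values here are illustrative):

```python
x = 1           # "=" assigns: store the value 1 in the variable x
x = x + 1       # evaluated right to left: compute x + 1 first, then store it back

print(x)        # 2
print(x == 2)   # "==" tests equality and yields a truth value: True
print(x == 3)   # False
```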


“==” used in while looping statement for testing

2. Programming is about logic
Although programming languages are in some sense similar to natural languages, there are still significant differences. When learning C, I found that looping, branching, and jumping statements are things hardly embodied in natural languages. Programming languages use a standard, modular way to deal with logical relationships. An interesting idea is that the logical modules in a program, including independent functions, can be regarded as black-box systems, and programming languages use simple statements to link those independent modules together like glue.


while looping and if&else branching statement

In fact, the logic of programming languages is the logic of the hardware, the logic of computers, and the logic of the von Neumann architecture. The binary system uses combinations of “0” and “1” to transmit information. In electronic engineering, the binary logic relationship is reflected clearly in logic gates, and in mathematics it is treated in set theory. The three basic logic principles are “and”, “or”, and “not”. In programming languages, it is common to use these principles to decide whether a code block should be executed.
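In Python the three principles are even spelled out as the words themselves; a minimal sketch of using them to decide whether a block executes (the values are illustrative, and in C the corresponding operators are &&, ||, and !):

```python
a, b = True, False

print(a and b)   # False: both operands must be true
print(a or b)    # True: at least one operand is true
print(not a)     # False: negation

# Using the principles to decide whether a code block runs:
temperature = 25
if temperature > 0 and not temperature > 30:
    print("comfortable")
```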


“and”, “or” and “not” 

What's more complicated is that programming is not only about typing code into a visual compiler interface, but also about interacting with invisible modules in the computer system. When I read the source code of the video game DOOM 3, which is written in C++, I found that the game programmers wrote code to call functions from the standard library and to load models from 3D modeling software like Maya. Things become far more difficult when we begin to talk about concepts like pointers or APIs.

3. Programming is about computational thinking
As I mentioned at first, learning programming is not directly about projects, engineering, or algorithms, but about teaching people how to think. Jeannette M. Wing, in her article Computational Thinking, points out that computational thinking is not thinking like a computer, but conceptualizing, abstracting, and solving problems. In general, computational thinking asks everyone to think logically.

Just think about how a C program solves the question 1+1=?. We first need to declare the data types of two variables, then use the assignment operator to assign each variable the value 1, and finally add the two variables. And what about a more complicated program, like a calculator for integer addition? We would need to introduce, say, a while loop and the standard I/O functions.
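Those steps can be sketched directly in Python, where declarations are implicit in assignment; here a fixed list of inputs stands in for interactive I/O:

```python
a = 1           # first operand
b = 1           # second operand
total = a + b   # add them up
print(total)    # 2

# A tiny integer-addition "calculator": a while loop sums a list of inputs.
inputs = [3, 4, 10]
running_sum = 0
i = 0
while i < len(inputs):
    running_sum += inputs[i]
    i += 1
print(running_sum)   # 17
```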

So what about daily life? I think the key to computational thinking is setting up mapping relationships between problems and solutions. An emergency plan is indispensable for an enterprise, allowing it to respond as quickly as possible. A student can respond to teachers quickly because he knows how to identify the keywords in a teacher's question. These are all good examples of computational thinking, which means thinking in an ordered, logical, modular, and hierarchical way.

[1]Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.
[2]Jeannette Wing, “Computational Thinking.” Communications of the ACM 49, no. 3 (March 2006): 33–35.
[3] Bryant, Randal, and David Richard O'Hallaron. Computer Systems: A Programmer's Perspective. Upper Saddle River: Prentice Hall, 2003.
[4] Waite, Mitchell, Stephen Prata, and Donald Martin. C Primer Plus. Sams, 1987.

Call Me a Coder: Some Thoughts about Coding – Jieshu

Learning Python on Codecademy was so interesting that I nearly forgot there was a blog post to write. Through learning the basics of Python, I got a glimpse into how a programming language is designed to specify symbols that mean things and symbols that do things. I also learned why it is inevitable to use programming languages rather than natural languages to interact with machines if we want them to complete specific tasks.

In Python, the symbols that mean things include variables and strings, to which meanings can be assigned by users. For example, we can assign the name of a girl called “Alice” to a variable with code like girl_name = "Alice" and retrieve the second letter of the name on the console with code like print girl_name[1]. But machines don't know the meaning of Alice. They don't know who Alice is; they don't even know whether Alice is the name of a person or a dog. Moreover, we could give the variable holding “Alice” any name other than girl_name, and it would make no difference to the computer. Alice and Bob are indistinguishable to machines, differing only in the order of their letters, while in human eyes they are the names of a girl and a boy who have their own stories. Machines follow predetermined and predictable cascades of actions to process and store these symbols, meaningful to humans, in the form of 0s and 1s.
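The example runs as follows (written here with Python 3's print function):

```python
girl_name = "Alice"      # assign the string "Alice" to a variable
print(girl_name[1])      # indexes count from 0, so [1] is the second letter: l

# Any identifier would serve the machine equally well:
dog_name = "Alice"
print(girl_name == dog_name)   # True: only the stored letters matter
```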

Meanwhile, there are symbols that do things, including the most basic instructions like “print” and “return”. The print statement is responsible for displaying anything behind it on the screen.


print “hello world”

Furthermore, there are symbols that simultaneously mean things and do things. As far as I have learned, functions are one way to combine the two, using symbols that mean things to do things such as calculation and representation.
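A hypothetical illustration of this combination: the function's name and parameter mean something to us, while its body does something for the machine.

```python
def circle_area(radius):
    pi = 3.14159                # a symbol that means something: the constant pi
    return pi * radius ** 2     # symbols that do something: square and multiply

print(circle_area(2))   # 12.56636
```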

Here are some thoughts emerging from my learning.

Programming languages have a lot in common with natural languages. For example, they both have the tripartite parallel architecture mentioned in Ray Jackendoff's Foundations of Language[i]. In the section on language construction in his Introduction to Computing, David Evans notes that the smallest units of meaning in programming languages are called primitives, corresponding to morphemes in natural language. Besides, both have syntax and semantics governed by grammars. Like natural languages, programming languages have recursive grammars, allowing infinitely many new expressions from limited sets of representations[ii].
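That recursion is easy to see in practice: because an expression can contain other expressions, arbitrarily deep nesting is legal. A small illustration of my own:

```python
# Each parenthesized expression is itself built from smaller expressions,
# so a finite grammar can generate unboundedly many new "sentences".
value = (1 + (2 * (3 - 1)))
print(value)  # -> 5

# The same recursive property holds for data structures like nested lists.
nested = [1, [2, [3, [4]]]]
inner = nested[1][1][1][0]
print(inner)  # -> 4
```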

Programming languages like Python use many words and abbreviations from natural languages with their original meanings preserved. For example, "print" means print something on the screen. The three Boolean operators "and", "or", and "not" have the same meanings as in English. The "def" at the head of a function is an abbreviation for "define", not to mention "max()", "min()", and "abs()", three functions named after common abbreviations. This intuitive approach lowers the difficulty of learning.
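A few of those borrowed words in action (my own example, Python 3 syntax):

```python
# Boolean operators read almost like English sentences.
is_sunny = True
is_weekend = False
go_hiking = is_sunny and is_weekend   # False: both must be true
stay_home = not go_hiking             # True

# Built-in functions named after familiar abbreviations.
highest = max(3, 7, 2)   # maximum -> 7
lowest = min(3, 7, 2)    # minimum -> 2
distance = abs(-5)       # absolute value -> 5
print(go_hiking, stay_home, highest, lowest, distance)
```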

The big contrast between the simplicity of programming languages and the complexity of the tasks they can accomplish really impressed me. I'm not saying that coding is easy to learn; their simplicity lies in comparison with natural languages. In his Introduction to Computing, David Evans lists the reasons natural languages are not suitable for coding: complexity, ambiguity, irregularity, uneconomic expression, and limited means of abstraction[ii]. Compared to natural languages, programming languages are really simple. However, the meanings they are designed to express are complex. Programming languages serve as artifacts onto which we offload and distribute our cognitive efforts in order to accomplish complex tasks. For example, according to the Wikipedia entry on Python, Wikipedia, Google, CERN, and NASA make use of Python, and Reddit is written entirely in it. Even machine learning, the most advanced and sophisticated branch of computer science, is rooted in the simple syntax of programming languages.

Although I had never literally coded before, I have long used many programming ways of thinking. That's because a lot of things we use today are rooted in programming; programming thinking has seeped into our daily life. Here are some examples from my experience:

  1. When I was learning Python on Codecademy, one thing I found interesting is that one method for adding a comment is to put three double quotation marks before it, and different parts of the code are displayed in their own colors. This reminds me of a habit of mine. When I want to add comments to my notes, I put three fullwidth periods "。。。" before them and color them blue. The purpose is to tell myself, "These are my comments; do not mistake them for the author's ideas!" Similarly, the purpose of the """ notation in Python is to tell Python, "These are my comments; do not mistake them for code!"

Like the “”” notation in Python, I use three fullwidth periods and blue color to notate comments in my notes.

  2. The second example: I have operated business accounts on several social media platforms, where I set up many auto-response rules. For example, if the system receives a message containing "Interstellar" from a user, an article about the physics in the movie Interstellar is automatically sent back. I did this not by typing programming code but by using simple functions that programmers had already coded and presented through intuitive graphical interfaces, with natural-language instructions easy for laymen to follow.
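Under the hood, a rule like that can be sketched as a simple keyword lookup. The function below is my own guess at the logic, not the platform's actual implementation; the keywords and reply texts are made up for illustration.

```python
# Hypothetical auto-responder: map keywords to canned replies.
AUTO_REPLIES = {
    "interstellar": "Here is an article about the physics in Interstellar.",
    "hello": "Hi! Thanks for following this account.",
}

def auto_respond(message):
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in AUTO_REPLIES.items():
        if keyword in text:
            return reply
    return None  # no rule matched

print(auto_respond("Tell me about Interstellar"))
```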

What I found difficult in learning Python is remembering the syntax, because it is different from natural language. I wonder: are programmers supposed to remember all of that syntax? Another question is whether it is possible to program in natural languages.


[i] Jackendoff, Ray. 2002. Foundations of Language: Brain, Meaning, Grammar, Evolution. OUP Oxford.

[ii] Evans, David. 2011. Introduction to Computing: Explorations in Language, Logic, and Machines. United States: CreateSpace Independent Publishing Platform.