Author Archives: Linda Bardha

The meaning behind computer systems as sign systems, a semiotics perspective

Abstract

Computers are powerful tools, and today we can hardly imagine completing our daily tasks without them. But how did we come to use programming languages in computers? What are the fundamental concepts that led to the idea of artificial languages, and how do they connect to our use of natural language? This paper discusses the adoption of a sign-theoretic perspective on knowledge representation, the applications that underlie human-computer interaction, and the fundamental devices of recognition. It suggests studying computer systems as sign systems from a semiotic perspective, based on Peirce's triadic model, and looks at an application, the General Problem Solver, an early work in computerized knowledge representation.

Introduction

As a computer science student, I found out how powerful the art of programming and coding can be, but what fascinates me is the representation behind all the different programming languages that we use. In digital computers, the user's input is transmitted as electrical pulses with only two states, on or off, represented as either a 1 or a 0, and this sequence of 0's and 1's is the "computer's language." But how did something so simple in concept, just having two states, become something so fundamental? Of course, we have to take a step back and think about all the principles and interactions that led us to this idea, the most important of them being human-computer interaction and the meaning behind different signs and symbols.
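As a small illustration (the word below is arbitrary, chosen only for the example), every character we type is ultimately stored as such a sequence of 0's and 1's; a short Java sketch makes this visible:

```java
// Show how each character of a word is stored as a binary code
// (here printed as an 8-bit pattern of 0's and 1's).
public class BinaryDemo {
    public static void main(String[] args) {
        String word = "Hi";
        for (char c : word.toCharArray()) {
            String bits = String.format("%8s", Integer.toBinaryString(c)).replace(' ', '0');
            System.out.println("'" + c + "' -> " + bits);
        }
    }
}
```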

Computer Systems as Sign Systems

This is where I turn to semiotics, the study of signs and symbols, to understand the meaning behind these representations. Andersen presents semiotics as a framework for understanding and designing computer systems as sign systems. Different semiotic methods can be applied to different levels of computer systems, but I will focus on a particular perspective, one that looks at computer systems as targets of interpretation. I am interested in looking at programming and programming languages as a process of sign creation, and at the semiotic approach behind it. It is interesting to view computer systems as signs and symbols whose main function is to be perceived and interpreted by a group of users. Andersen suggests that when you think of computer systems through the lens of semiotics, they are not ordinary machines anymore. Rather, they are symbolic machines constructed and controlled by means of signs.

The interface of a system is an example of a computer-based sign, and using the system involves the interpretation and manipulation of text and pictures. And underneath the interface, there are other signs. The system itself is specified by a program text or a language, which is itself a sign. The actual execution of the program requires a compiler, whose main function is to translate code written in one programming language into another; that makes the compiler a sign as well. If we continue with this approach, passing through the different layers of the system, we encounter more signs, from the operating system to assembly code to machine code.

Semiotic theories

There are many kinds of semiotic theories when it comes to defining the concepts of a computer-based sign and a computer system.

1. The Generative paradigm

This paradigm was founded by Noam Chomsky in 1957. Generative grammar focuses on the individual language user, not on the social process of communication, and it treats a language as a rule-defined set of sentences. Halliday explains why this is not a good approach:

“A language is not a well-defined system,  and cannot be equated with “the set of grammatical sentences”, whether that set is conceived as finite or infinite. Hence a language cannot be interpreted by rules defining such a set. A language is a semiotic system…-what I have often called a “meaning potential”…Linguistics is about how people exchange meanings by “languaging.” (Halliday 1985).

2. The Logic Paradigm

This paradigm was first founded by Frege, but it counts as a linguistic theory with the logical grammars of Richard Montague (1976). It consists of translating natural-language sentences into logical formulas on which rules of inference can operate, which is why this theory has become an integrated part of computer science. One way to construct such a system is to represent knowledge in terms of logical statements, to translate queries into logical formulas, and to let the system try to prove the query from the knowledge base. Nowadays we can link this idea to that of a "neural network": you build a system to achieve a certain goal, say a program that goes through different image files and selects the images where dogs appear; you feed data to the system, the more the merrier, and train it to find the images you are looking for. But the problem with this kind of approach is that it is not a linguistic approach, and it describes factual rather than linguistic behavior. If we use only logic and facts, we defeat the purpose of understanding the sign representation.
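To make the "prove the query from the knowledge base" idea concrete, here is a minimal sketch of a forward-chaining prover over propositional Horn clauses; the facts, rules, and query are invented for the example and are not taken from any of the systems cited in this paper:

```java
import java.util.*;

// A tiny forward-chaining prover: facts and Horn rules form the knowledge
// base, and a query is "proved" if it can be derived from them. (Java 16+ for records.)
public class TinyProver {

    record Rule(List<String> premises, String conclusion) {}

    static boolean prove(Set<String> facts, List<Rule> rules, String query) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                if (!facts.contains(r.conclusion()) && facts.containsAll(r.premises())) {
                    facts.add(r.conclusion());   // derive a new fact
                    changed = true;
                }
            }
        }
        return facts.contains(query);
    }

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("barks(rex)", "has_fur(rex)"));
        List<Rule> rules = List.of(
            new Rule(List.of("barks(rex)", "has_fur(rex)"), "dog(rex)"),
            new Rule(List.of("dog(rex)"), "mammal(rex)"));

        System.out.println(prove(facts, rules, "mammal(rex)"));  // prints: true
    }
}
```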

3. The Object-Oriented paradigm

There is a relation between the concepts of object-oriented programming and semiotics: the system is seen as a model of the application domain, and its classes and objects correspond to components of the domain and their interactions. These concepts are also characteristic of a semantic analysis that goes back to Aristotle and his idea of a hierarchy of classes, also known as the Porphyrian tree. (Eco, 1984)
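A minimal sketch (the classes here are invented purely for illustration) shows how an object-oriented hierarchy mirrors the Porphyrian tree of genus and species:

```java
// A toy class hierarchy in the spirit of the Porphyrian tree:
// each subclass is a "species" of the more general "genus" above it.
class LivingBeing { }

class Animal extends LivingBeing {
    void move() { System.out.println("moving"); }
}

class Human extends Animal {
    void speak() { System.out.println("speaking"); }
}

public class Hierarchy {
    public static void main(String[] args) {
        Human socrates = new Human();
        socrates.move();   // inherited from Animal
        socrates.speak();  // specific to Human
    }
}
```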

4. Peirce's triadic process

If we are talking about computers and semiotics, there is without doubt one person who has to be mentioned for his incredible work, and that is Charles Sanders Peirce (1839-1914). As Irvine mentions in his paper, Peirce is without question the most important American philosopher of the mid-19th and early 20th century. His background is very interdisciplinary: Peirce was a scientist, mathematician, cartographer, linguist, and philosopher of language and signs. He commented on George Boole's work on "the algebra of logic" and Charles Babbage's models for "reasoning machines." Both of these concepts are fundamental to the logic used today in computing systems.

Peirce's account of meaning-making, reasoning, and knowledge is a generative process, also known as a triadic experience, and is based on human sign systems and all the different levels of symbolic representation and interpretation. This process is explained through Martin Irvine's presentation: "A Sign, or Representamen, is a First which stands in such a genuine triadic relation to a Second, called its Object [an Object of thought], as to be capable of determining a Third, called its Interpretant, to assume the same triadic relation to its Object in which it stands itself to the same Object" (Irvine).

Peirce's triadic model

Irvine’s representation 2016

Although many fundamental computer science principles rely on binary states, Peirce showed that the human social-cognitive use of signs and symbols can never be binary: it is never either science and facts or arts and representations. Rather, the process of understanding symbols and signs covers everything from language and math to scientific instruments, images, and cultural expressions.

The Peircean semiosis

As Irvine suggests, the Peircean semiotic tradition provides an open model for investigating the foundations of symbolic thought and the necessary structures of signs at many levels of analysis:

  • the generative meaning-making principles in sign systems (individually and combined with other systems like in music and cinema/video),
  • the nature of communication and information representation in interpretable patterns of perceptible symbols,
  • the function of physical and material structures of sign systems (right down into the electronics of digital media and computing architectures),
  • the symbolic foundations of cognition, learning, and knowledge,
  • how a detailed semiotic model reveals the essential ways that art forms and cultural genres are unified with scientific thought and designs for technologies by means of different ways we use symbolic thought for forming abstractions, concepts, and patterns of representation,
  • the dialogic, intersubjective conditions of meaning and values in the many lived contexts and situations of communities and societies.

Computational semiotics

Now that we have gone through different paradigms and semiotic theories that help us understand computers as systems, and have been introduced to Peirce's process, we will take a closer look at the field of computational semiotics and its applications today.

Computational semiotics is an interdisciplinary field that draws on research in logic, mathematics, computation, natural language studies, cognitive science, and semiotics. A common theme across these disciplines is the adoption of a sign-theoretic perspective on knowledge representation. Many of its applications lie in the field of human-computer interaction and fundamental devices of recognition.

Tanaka-Ishii, in her book "Semiotics of Programming," makes the point that computer languages have their own interpretative system, external to the interpretative system of natural languages. That is because human beings do not think in machine language, and all computer language expressions are meant for interpretation on machines. Computer languages are the only existing large-scale sign system with an explicit, fully characterized interpreter external to the human interpretative system. Therefore, the application of semiotics to computer languages can contribute to the fundamental theory of semiotics. Of course, computation is different from human interpretation, and the interpretation of artificial languages differs from that of natural languages, but understanding the semiotic problems in programming languages leads to considering the problems of signs.

Many of the concepts and principles of computer programming have derived from technological needs, without explicitly stating the context of human thought. An example of this is the paradigm of object-oriented programming, which we saw earlier.

Let’s take a look at a simple program written in Java that calculates the area of a triangle given three sides:

Java program calculating the area of a triangle
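The original screenshot of the program is not reproduced here; a minimal Java version along the lines described (the class name, prompts, and variable names are my own) might look like this:

```java
import java.util.Scanner;

// Prompt the user for the three sides of a triangle and
// compute its area using Heron's formula.
public class TriangleArea {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter side a: ");
        double a = input.nextDouble();
        System.out.print("Enter side b: ");
        double b = input.nextDouble();
        System.out.print("Enter side c: ");
        double c = input.nextDouble();

        double p = (a + b + c) / 2;   // half the perimeter
        double area = Math.sqrt(p * (p - a) * (p - b) * (p - c));

        System.out.println("The area of the triangle is " + area);
    }
}
```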

At a glance, this is simple code for people whose field of study is computer science, but even if that is not the case, you can understand the code. Why?

Because the mathematical principles and formula for finding the area of a triangle still apply.

Following Heron’s formula to get the area of a triangle when knowing the lengths of all three sides we have:

Let a,b,c be the lengths of the sides of a triangle. The area is given by:

Area = √(p(p − a)(p − b)(p − c))

where p is half the perimeter: p = (a + b + c) / 2

Even if you do not know how to code, just by looking at the code and reading the words, you can guess what the program does. The same formula from above is applied in our code, where the user is prompted to enter the three sides and the area of the triangle is calculated. This shows how powerful the meaning behind symbols and signs is, and how important they are, especially in a field like computing.

Let’s take a look at an early work in computerized knowledge representation.

General Problem Solver by Allen Newell, J. C. Shaw and Herbert A. Simon.

The earliest work in computerized knowledge representation focused on general problem solvers such as the General Problem Solver (GPS), a system developed by Allen Newell, J. C. Shaw, and Herbert A. Simon in 1959. Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses, and that constitutes a directed graph with one or more sources (viz., axioms) and sinks (viz., desired conclusions), can be solved, in principle, by GPS. Proofs in the predicate logic and Euclidean geometry problem spaces are prime examples of the domain of applicability of GPS. It was based on Simon and Newell's theoretical work on logic machines. GPS was the first computer program that separated its knowledge of problems (rules represented as input data) from its strategy of how to solve problems (a generic solver engine).

The major features of the program are:

  1. The recursive nature of its problem-solving activity.
  2. The separation of problem content from problem-solving techniques as a way of increasing the generality of the program.
  3. The two general problem-solving techniques that now constitute its repertoire: means-ends analysis, and planning.
  4. The memory and program organization used to mechanize the program (noted only briefly, since there is no space here to describe the computer language, IPL, used to code GPS-I).

GPS, as the authors explain, grew out of an earlier computer program, the Logic Theorist, which discovered proofs of theorems in the sentential calculus of Whitehead and Russell. It is closely tied to the subject matter of symbolic logic. The Logic Theorist led to the idea behind GPS, which is the simulation of the problem-solving behavior of human subjects in the psychological laboratory. The human data were obtained by asking college sophomores to solve problems in symbolic logic while "thinking out loud" as much as possible as they worked.

The structure of GPS

GPS operates on problems that can be formulated in terms of objects and operators. As the authors explain, an operator is something that can be applied to certain objects to produce different objects. The objects can be characterized by the features they possess and by the differences that can be observed between pairs of objects. Operators may be restricted to apply only to certain kinds of objects, and there may be operators that take several objects as inputs and produce one or more objects as output. (Simon)

Using this idea, a computer program can itself be described as a problem to be solved in these terms. The objects are possible contents of the computer memory, and the operators are computer instructions that alter the memory content. A program is a sequence of operators that transforms one state of memory into another; the programming problem is to find such a sequence when certain features of the initial and terminal states are specified.
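As a rough sketch of "find a sequence of operators from an initial state to a goal state" (the states and operators below are invented for illustration and are far simpler than anything GPS handled; this is a plain breadth-first search, not GPS's means-ends analysis), the idea looks like this in code:

```java
import java.util.*;

// A toy "find a sequence of operators" search: states are strings,
// operators transform states, and we search breadth-first for a
// sequence of operators leading from the initial state to the goal.
public class OperatorSearch {

    interface Operator {
        String name();
        boolean applicable(String state);
        String apply(String state);
    }

    // Build a simple rewrite operator: applicable when `from` occurs in the state.
    static Operator rewrite(String name, String from, String to) {
        return new Operator() {
            public String name() { return name; }
            public boolean applicable(String s) { return s.contains(from); }
            public String apply(String s) { return s.replace(from, to); }
        };
    }

    static List<String> solve(String initial, String goal, List<Operator> ops) {
        Queue<String> frontier = new ArrayDeque<>(List.of(initial));
        Map<String, List<String>> plan = new HashMap<>();
        plan.put(initial, new ArrayList<>());
        while (!frontier.isEmpty()) {
            String state = frontier.poll();
            if (state.equals(goal)) return plan.get(state);
            for (Operator o : ops) {
                if (!o.applicable(state)) continue;
                String next = o.apply(state);
                if (plan.containsKey(next)) continue;        // already reached
                List<String> steps = new ArrayList<>(plan.get(state));
                steps.add(o.name());
                plan.put(next, steps);
                frontier.add(next);
            }
        }
        return null;                                         // no operator sequence found
    }

    public static void main(String[] args) {
        List<Operator> ops = List.of(
            rewrite("A->B", "A", "B"),
            rewrite("B->C", "B", "C"));
        System.out.println(solve("A", "C", ops));            // prints: [A->B, B->C]
    }
}
```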

In order for the GPS to operate within a task environment it needs several main components:

  1. A vocabulary for talking about the environment under which it operates.
  2. A vocabulary dealing with the organization of the problem-solving process.
  3. A set of programs defining the terms of the problem-solving vocabulary by terms in the vocabulary for describing the task environment.
  4. A set of programs applying the terms of the task-environment vocabulary to a particular environment such as: symbolic logic, trigonometry, algebra, integral calculus.

Let's take a look at the executive organization of GPS. With each goal type, a set of methods for achieving that goal is associated.

GPS solved simple problems such as the Towers of Hanoi, a famous mathematical puzzle.

The GPS paradigm eventually evolved into the Soar architecture for artificial intelligence.

Conclusion

When looking at computers from a semiotic perspective, we can understand the meanings behind the system and the different layers that make it up. From the interface of a system down to machine code, we find signs and symbols all the way. It is interesting to see the correlations and dependencies between artificial languages, such as the different programming languages that we use, and our natural language. Humans, as cognitive beings, have the advantage of a symbolic faculty and many sign systems. That helps us make mental relations between perceptions and thought, and using advanced technology today we talk about virtual reality and augmented reality.

Looking at Peirce's model of meaning-making, we can interpret different levels of symbolic representation and understand that the world we live in and the technology we use can never be purely binary; rather, it is a dynamic environment with many interrelated processes.

When looking at computers as systems, we have to keep in mind the technical structure of the system, its design and implementation, and its function. But we also have to look at the semiotic process, and when examining the theoretical framework of a system, we should also consider concepts for analyzing the signs and symbols that users interpret in their context of work. From the semiotic approach, the interfaces of a system are not separate from its functionality.

By approaching our digital and computational life from the semiotic design view, and by understanding that semiotics, in its standard conceptualization, is a tool of analysis, we can see that we live in a media continuum that is always already hybrid and mixed, and that everything computational and digital is designed to facilitate our core human symbolic-cognitive capabilities. By using semiotics as a tool of analysis we can turn computers into a rich medium for human-computer interaction and communication.

References

Andersen, Peter B. “Computer Semiotics.” Scandinavian Journal of Information Systems, vol. 4, no. 1, 1992

Andersen, P.B. . A Theory of Computer Semiotics, Cambridge University Press, 1991

Clark, Andy and David Chalmers. “The Extended Mind.” Analysis 58, no. 1, January 1, 1998

De Souza, C.S., The Semiotic Engineering of Human-Computer Interaction, MIT Press, Cambridge, MA, 2005

Eco, U., A Theory of Semiotics. The Macmillan Press, London, 1977

Gudwin, R.; Queiroz J. (eds) – Semiotics and Intelligent Systems Development – Idea Group Publishing, Hershey PA, USA, 2006

Halliday, M. A. K., Language as Social Semiotic. The Social Interpretation of Language and Meaning, Edward Arnold, London 1978

Halliday, M. A. K., System and Function in Language, Oxford University Press, Oxford, 1976

Hollan, James, Edwin Hutchins, and David Kirsh. “Distributed Cognition: Toward a New Foundation for Human-computer Interaction Research.” ACM Transactions, Computer-Human Interaction 7, no. 2, June 2000

Hugo, J. “The Semiotics of Control Room Situation Awareness”, Fourth International Cyberspace Conference on Ergonomics, Virtual Conference, 15 Sep – 15 Oct 2005

Irvine, Martin, “The Grammar of Meaning Making: Signs, Symbolic Cognition, and Semiotics.”

Irvine, Martin,  “Introduction to Linguistics and Symbolic Systems: Key Concepts”

Mili, A., Desharnais, J., Mili, F., with Frappier, M., Computer Program Construction, Oxford University Press, New York, NY, 1994

Newell, A., A guide to the general problem-solver program GPS-2-2. RAND Corporation, Santa Monica, California. Technical Report No. RM-3337-PR, 1963

Newell, A.; Shaw, J.C.; Simon, H.A., Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, 1959

Norvig, Peter,  Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. San Francisco, California: Morgan Kaufmann. pp. 109–149. ISBN 1-55860-191-0, 1992

Peirce, Charles S. From “Semiotics, Symbolic Cognition, and Technology: A Reader of Key Texts,” collected and edited by Martin Irvine.

Ray Jackendoff, Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003

Renfrew, Colin. “Mind and Matter: Cognitive Archaeology and External Symbolic Storage.” In Cognition and Material Culture: The Archaeology of Symbolic Storage, edited by Colin Renfrew, 1-6. Cambridge, UK: McDonald Institute for Archaeological Research, 1999.

Rieger, Burghard B, A Systems Theoretical View on Computational Semiotics. Modeling text understanding as meaning constitution by SCIPS, in: Proceedings of the Joint IEEE Conference on the Science and Technology of Intelligent Systems (ISIC/CIRA/ISAS-98), Piscataway, NJ (IEEE/Omnipress), 1998

Sowa, J, Knowledge representation: logical, philosophical and computational foundations, Brooks/Cole Publishing Co. Pacific Grove, CA, USA, 2000

Tanaka-Ishii, K.  “Semiotics of Programming”, Cambridge University Press, 2010

Wing, Jeannette “Computational Thinking.” Communications of the ACM 49, no. 3, March 2006

Can a virtual tour replicate feelings, meanings, values?

The Google Art Project platform enables users to virtually tour museums around the world, explore information about artworks, and see high-quality reproductions of famous pieces.

Initially this looks like a great project, giving you the ability to virtually be in any museum and see the artwork available.

But how does this experience actually feel? Is it enough to reproduce the real experience of visiting an actual museum?

These questions relate, for me, to the ongoing debate about books vs. e-books, and how we use technology to our advantage to replicate the feel of a book. No matter how good we are at that, we can never replace the feel and experience of holding a book in your hand and reading it, and let's not forget about technical constraints like battery and electricity: if your iPad or PC runs out of battery, everything that was available to you is gone in seconds.

The same argument can be made for the Google Art Project, but in this case the issue goes a little deeper, because now we are talking about culture, and about the museum itself as a cultural function and cultural institution.

As Dr. Irvine explains, drawing from the insights of Malraux, Bourdieu, Latour, Debray, O’Doherty, and institutional theory, as a cultural institution, the museum (like the school or library) is a social construction, a reproducible function, given visible symbolic form in physical spaces (actual architected environments) as mediums for transmitting and reproducing the function. Rather than thinking about the museum function in some kind of neutral, pre- or non-technological state, we should rethink the museum in its network of functions and mediations, which are implemented in a historical continuum of technical systems (including the text and image technologies used to represent the artefacts organized by a museum and the symbolism of architecture).

As users, when we look at one of the galleries in a museum we can experience different things.

Let’s take a look at the features that are available to us.

It gives us the ability to zoom in, take a virtual tour, and view 360-degree videos, street views, and simulations.

This brings us to Malraux's dilemma, which states: "Any technology for representation, reproduction, transmission, and access will need to be recruited and authorized to mediate the cultural functions of the museum and artworld."

Cultural functions are not determined by specific technologies of mediation; rather, cultural functions (institutions, categories of value: "Art") precede any specific technology of mediation, and the technologies require validation by those cultural functions.

Any artwork that we see is an interface to the system of meaning and values that made it possible. All cognitive, representational interfaces implement semiotic principles.

It is the human interaction with an artifact that makes that experience meaningful and valuable, and there is no virtual technology that can replicate that.

References:

Benjamin, Walter “The Work of Art in the Era of its Technological Reproducibility” (1936; rev. 1939).

Agostino,Cristiano “Distant Presence and Bodily Interfaces: ‘Digital-Beings’ and Google Art Project.” Museological Review – University of Leicester, no. 19 (2015): 63-69.

Irvine, Martin "The Museum and Artworks as Interfaces: Metamedia Interfaces from Velásquez to the Google Art Project"

Proctor, Nancy  “The Google Art Project.” Curator: The Museum Journal, March 2, 2011.

Software lies underneath everything that comes later

As we have learned by studying different concepts throughout this course, there is no "magic" when it comes to the world of technology and computers. But somehow, it is so hard for people to understand how something works and why it works in that specific way. While there are many theories and much research in fields like cognitive science and psychology that offer explanations of the human brain and how we perceive information, I strongly believe that by living in a consumer culture we have lost the sense of participating in the process of building things. We can now just buy what we need, make sure that the things we buy work, and never worry about how they work.

Today, you hear about the iPhone X and the "new amazing features" that the phone can offer its consumers, or maybe you looked at the new Apple MacBook Pro, or Microsoft's Surface laptop with new improvements and more ways to make it interactive. So many new things, and in order to participate in the discussions happening on social media (because who doesn't want to share their personal opinions with the world) and let the world know how "in" they are with new technologies, you have to buy the newest products, because everyone else seems to use them, and you don't want to stay behind, right?

I have made that mistake too, and part of this is because you never actually see what's happening behind the visible layer, behind the black box. To cite Bruno Latour, blackboxing is "the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become."

Everyone knows about the new features, but I doubt that people actually know the history of how these features were invented, or where they came from.

Lev Manovich, in his book “Software takes command” makes the point that industry is more supportive of the new innovative tech & applications than academia is. Modern business thrives on creating new markets, new products, and new product categories.

To analyze his point: new discoveries almost never consist of new content, but rather of new tools to create, edit, distribute, and share content. Adding new properties to physical media requires modifying their physical substance. But since computational media exist as software, we can add new properties, new plug-ins, and new extensions by combining services and data.

Software lies underneath everything that comes later.

So, the next time you hear about the new cool features of a new product, think of the branding and  marketing side of it.

Ted Nelson and his idea of software, as mentioned in his article Way Out of the Box

“In the old days, you could run any program on any data, and if you didn’t like the results, throw them away.  But the Macintosh ended that.  You didn’t own your data any more.  THEY owned your data.  THEY chose the options, since you couldn’t program.  And you could only do what THEY allowed you — those anointed official developers”. This is a quote by Ted Nelson, in his article Way out of the Box.

In his article, Nelson brings to our attention all the possible ways that we could have done things. Just because some companies (Apple, and later Microsoft) took the paper-simulation approach to the behavior of software doesn't mean that it is the only way to do it. They got caught up in the rectangle metaphor of a desktop and used a closed approach. Hypertext was still long rectangular sheets called "pages" which used one-way links.

Nelson recognized computers as a networking tool.

Ted Nelson's network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. Two-way linking would preserve context. It is a small, simple change in how online information is stored, one that could have vast implications for culture and the economy.
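A minimal sketch of the difference (the node names are invented for illustration): with two-way links, adding a link from one node to another also registers the back-link, so each node "knows" what points to it.

```java
import java.util.*;

// Toy illustration of two-way linking between nodes.
public class TwoWayLinks {

    static class Node {
        final String name;
        final Set<Node> linksTo = new HashSet<>();    // outgoing links
        final Set<Node> linkedFrom = new HashSet<>(); // incoming links (the back-links)

        Node(String name) { this.name = name; }

        // Two-way linking: the target also records who links to it,
        // so context is preserved on both ends.
        void linkTo(Node target) {
            linksTo.add(target);
            target.linkedFrom.add(this);
        }
    }

    public static void main(String[] args) {
        Node essay = new Node("essay");
        Node source = new Node("source");
        essay.linkTo(source);

        // The source knows it is linked from the essay:
        System.out.println(source.linkedFrom.contains(essay));  // prints: true
    }
}
```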

This example demonstrates that we should not be captured by the assumptions of the computer industry; software offers plenty of possibilities for new ways to implement things, rather than just one.

Alan Kay's idea of the computer as a "metamedium," a medium representing other media, was groundbreaking. Computational media are open-ended by nature, and new techniques will be invented to generate new tools and new types of media.

Vannevar Bush's 1945 article "As We May Think" discussed the idea of the Memex, a machine that would act as an extension of the mind by allowing its user to store, compress, and add additional information. It would use microfilm, photography, and analog computing to keep track of the data.

You can clearly see the metamedium idea in the Memex. The second stage in the evolution of the computer as metamedium is media hybridization, which, as Manovich explains, is when different media exchange properties, create new structures, and interact on the deepest level.

It was Douglas Engelbart who recognized computers not just as a tool, but as a part of the way we live our lives. The Mother of All Demos demonstrated new technologies that have since become common in computers. The demo featured the first computer mouse, as well as interactive text, video conferencing, teleconferencing, email, hypertext, and real-time editing.

Conclusion

All these examples make you think about different ways that software could behave and interact, and how these pioneers continued to push their tools to new limits to create creative outcomes, even without having access to the technology that we have today.

It really is inspiring to look at their work and understand that sometimes it is we who place limits on our technology, sometimes pushed by the computer industry and other factors. It is crucial to understand that there are no inherent limitations to the development of software and graphical interfaces for creating new ways of human-computer interaction (HCI).

Sources:

Bush, Vannevar “As We May Think,” Atlantic, July, 1945.

Engelbart, “Augmenting Human Intellect: A Conceptual Framework.” First published, 1962. As reprinted in The New Media Reader, edited by Noah Wardrip-Fruin and Nick Montfort, 93–108. Cambridge, MA: The MIT Press, 2003.

Latour, Bruno“On Technical Mediation,” as re-edited with title, “A Collective of Humans and Nonhumans — Following Daedalus’s Labyrinth,” in Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard University Press, 1999, pp. 174-217. (Original version: “On Technical Mediation.” Common Knowledge 3, no. 2 (1994): 29-64.

Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2016. Print.

Nelson, Theodor Holm. "Way Out of the Box." EPrints, 3 Oct. 2009. Web. <http://ted.hyperland.com/TQdox/zifty.d9-TQframer.html>.

Thinking algorithmically

There is no doubt how powerful computers can be, and nowadays we can't really imagine our lives without using computers at school, at home, or at our jobs.

For me, everything started when I decided to study computer science as my undergraduate degree. I didn’t really have experience in programming and coding, but I was very interested to really understand how computers work and understand what can be accomplished using them.

I always loved puzzles as a kid, and to me, programming felt the same way. It was fascinating to see how organized my thinking became, and how you can break down any kind of problem into smaller tasks.

Learning how to code is much like learning a new language: you have to learn the grammar and study the rules in order to become fluent, and as they say, the more you practice, the easier it gets.

The fascinating realization for me was that when you study different programming languages, you understand the differences between them, and you can choose which language will be more efficient for a given application or piece of software.

IEEE ranking sheet of top programming languages of 2017, according to their popularity

Of all the languages, I really like Python. Python can be used for web development, for scientific and numeric computing, for GUIs, and for other software applications, and because it offers so many libraries that you can import into your code, it is usually among the most popular and most used languages.

For my undergraduate thesis, I used Python to do a sentiment analysis of Michelle Obama's speeches from different years, to understand the connotation of each speech, and then I showed the results in different visualization graphs, also by importing Python libraries into my code.

Here is part of the code from my final project:

To explain a little bit of what is happening in the code: I have a list of speeches (8 speeches from 2008-2016), and after I do the sentiment analysis, I also look for five particular traits in each speech (openness, conscientiousness, extraversion, agreeableness, and emotional range).

After the analysis is complete, I show the results using graphs as visuals.

As Jeannette Wing explained in her video, learning how to code and program can really help with computational thinking.

Computer science helps you find solutions to the different problems we face, and not just homework assignments. Thinking "algorithmically" about the world helps you tackle a problem fundamentally, by breaking it down into its simplest parts, studying it, and finding better solutions to possible errors, just like running and debugging a program in the console.

And what is really interesting to me is the fact that nowadays we can combine the power of computing and programming with any other discipline, and the options and opportunities for what can be achieved are limitless. From the social sciences to the humanities, fine arts, engineering, science, and technology, we can expand our curiosity and knowledge, and we can help design efficient solutions to make our tasks easier.

Resources:

Evans, David Introduction to Computing: Explorations in Language, Logic, and Machines. Oct. 2011 edition. CreateSpace Independent Publishing Platform; Creative Commons Open Access: http://computingbook.org/.

Irvine, Martin An Introduction to Computational Concepts

Wing, Jeannette Computational Thinking

Cell phones as part of a socio-technical system

When we think of a cell phone nowadays, we immediately associate it with different things based on the function and the goal we want to achieve, and it is so much more than just being able to call someone.

Keeping in mind media richness theory (sometimes referred to as information richness theory or MRT), cell phones today are designed to offer different options and are able to reproduce visual social cues, such as gestures and body language, through video. This translates into a richer communication medium that has become part of our society.

It is important to understand that these affordances (as Zhang explains the term) are available because they were designed, using modular and combinatorial design principles, as Dr. Irvine explains, and that it takes many iterations to come up with a product that is both functional and practical.

But because we don't see the different layers, this becomes a complex idea, and it is hard to understand each part and see how it fits into the whole product.

After we “de-blackbox” this technology, it is also important to understand that this artifact is part of a socio-technical system.

When we talk about a socio-technical system, we are talking about the interaction of people and technology in an environment.

So, let’s break it up a little more and see these interactions.

(image source: https://skateboardingalice.com/papers/2010_Rogers_Fisk/create_model_small.png)

So now, we can think of the technological features of a cell phone, the tasks that it can accomplish, how we interact with the technology itself, and how the technology is used in the environment.

References:

Latour, Bruno “On Technical Mediation.” Common Knowledge 3, no. 2 (1994): 29-64.

Irvine, Martin “Media, Mediation, and Sociotechnical Artefacts: Methods for De-Blackboxing” (introductory essay)

Zhang, Jiahie and Vimla L. Patel. “Distributed Cognition, Representation, and Affordance.” Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.

Packet-switching across a network

There is so much information around us. As Floridi puts it, information is notorious for coming in many forms and having many meanings. Over the past decades, it has been common to adopt a General Definition of Information (GDI) in terms of data and meaning. That means we can manipulate it, encode it, and decode it, as long as the data comply with the meanings (semantics) of a chosen system, code, or language. There has also been a transition from analogue data to digital data. The most obvious difference is that analogue data can only record information (think of vinyl records), while digital data can encode information rather than just recording it.

But how is the information measured?

Claude Shannon, in his publication "A Mathematical Theory of Communication," used the word bit to measure information; as he put it, the bit is the smallest measuring unit of information. A bit has a single binary value, either 0 or 1.
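As a small worked illustration (the message counts below are made up), distinguishing among N equally likely messages requires log2(N) bits, which is easy to check in code:

```java
// A small check of Shannon's measure: distinguishing among N equally
// likely messages requires log2(N) bits of information.
public class InformationBits {
    public static void main(String[] args) {
        int[] messageCounts = {2, 4, 8, 256};
        for (int n : messageCounts) {
            double bits = Math.log(n) / Math.log(2);   // log base 2
            System.out.println(n + " equally likely messages -> " + bits + " bits");
        }
    }
}
```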

When I think of information, I almost never associate it with data, but rather with meaning. In a way, information to me serves the function of communicating a message. But, when we look at how is the message sent and delivered, is when we can see the data in it.

Shannon's first diagram, a version of which he used for encryption and decryption techniques in World War II, outlines a simple, one-way, linear signal path without the surrounding symbolic and social motivation for the signs and symbols encoded, transmitted, and decoded.

Now let's take a look at how information is sent over the web and how computers exchange data. In 1961, Leonard Kleinrock introduced the packet-switching concept in his MIT doctoral thesis on queuing theory, "Information Flow in Large Communication Nets." His host computer became the first node of the Internet in September 1969, and the first message on the Internet passed over it.

So how does packet switching work?

An animation demonstrating data packet switching across a network.

First, the TCP protocol breaks data into packets or blocks. Then, the packets travel from router to router over the Internet using different paths, according to the IP protocol. Lastly, the TCP protocol reassembles the packets into the original whole, and that’s how the message is delivered.
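As a toy illustration of this idea (not the actual TCP/IP implementation; the message and packet size are invented), we can split a message into numbered packets, let them arrive out of order, and reassemble them by sequence number:

```java
import java.util.*;

// A toy illustration of packet switching: split a message into numbered
// packets, let them arrive in a different order, and reassemble them
// by sequence number. Real TCP/IP is far more involved.
public class PacketDemo {

    record Packet(int sequence, String payload) {}

    static List<Packet> split(String message, int size) {
        List<Packet> packets = new ArrayList<>();
        for (int i = 0, seq = 0; i < message.length(); i += size, seq++) {
            packets.add(new Packet(seq, message.substring(i, Math.min(i + size, message.length()))));
        }
        return packets;
    }

    static String reassemble(List<Packet> packets) {
        StringBuilder out = new StringBuilder();
        packets.stream()
               .sorted(Comparator.comparingInt(Packet::sequence))
               .forEach(p -> out.append(p.payload()));
        return out.toString();
    }

    public static void main(String[] args) {
        List<Packet> packets = split("Hello, this is a message sent across the network!", 8);
        Collections.shuffle(packets);   // packets may take different paths and arrive out of order
        System.out.println(reassemble(packets));
    }
}
```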

When you send a message from your computer to a friend, using the Internet as the means of communication, that message is divided into packets or blocks as we saw earlier; the packets find different paths from the modem to the router, to the Domain Name Server, and then to the appropriate web server using the Internet protocols, and at that point the message is reassembled from the packets into the original whole, which is how your friend receives it. There is a trade-off between complexity and performance in these design principles, but the end goal of this architecture is the effective flow of information, the transmission of the data packets from one end to the other.

As Dr. Irvine explains, information theory contributes to the designs for the physical architectures and the kinds of digital information encoding and decoding that we now use in well-recognized, standardized formats and platforms. So information theory and semiotics together give a more complete picture of meaning-making in our digital electronic environment.

Reference:

Floridi, Luciano. Information: A Very Short Introduction

Gleick, James Excerpts from The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011

Irvine, Martin. Introduction to the Technical Theory of Information

Shannon, Claude E. "A Mathematical Theory of Communication." The Bell System Technical Journal 27 (October 1948): 379–423, 623–656.

Analyzing the song “Zombie” by The Cranberries

By Linda and Yang

For this week's lab we are applying semiotic methods to analyze the song "Zombie" by the band The Cranberries. We chose this song because it was unfamiliar to us.

First we completed the worksheet provided by Dr. Irvine to look at the elements of the song, and after that we give a short description of the context and historical elements of the song.

Music-Analysis-Worksheet (2)

Let’s take a look at the historic context of the song.

The Irish Republican Army (IRA), also called the Provisional Irish Republican Army, was a republican paramilitary organization seeking the establishment of a republic, the end of British rule in Northern Ireland, and the reunification of Ireland.

In 1993, the IRA carried out two bomb attacks in Warrington, England. In the second attack, two small bombs exploded in litter bins outside shops and businesses on Bridge Street; two children were killed and dozens of people were injured.

In protest of this event, the Irish rock band The Cranberries wrote the song "Zombie." "This song's our cry against man's inhumanity to man; and man's inhumanity to child," said Dolores O'Riordan, who both played acoustic guitar in the band and wrote the song.

The song carries the artist's view: the IRA could not speak for Ireland, and she believed it was a group of people filled with hatred who lived in the past.

So the elements of the song (the melody, the tempo, the timbre, the dynamics of the sound) seem to accompany the context and the lyrics. The song feels heavy and dark, and its themes of war and violence are coherent with its musical elements. The use of the instruments (guitar, bass, and drums) and the accompanying vocals form the overall sound of the song.

Applying semiotics methods to analyze a song

For this week's lab I chose to analyze a song by Norah Jones.

She is an American singer and songwriter. A friend introduced me to her music not long ago. Norah Jones is a jazz artist who sold more than 50 million records worldwide during the 2000-2009 decade. Since I don't have a lot of knowledge of this artist and her songs, it would be great to analyze and learn how to interpret them.

As Philip Tagg suggests in his article, we spend so much time listening to music every day, and because of that it is important to understand the semiotics, the meaning, behind the music we listen to.

Here is the link to her music video:

As Dr. Irvine suggests, we need to keep in mind how the features of musical forms work for making meaning, and this makes it easier to analyze and understand a song.

Some questions to analyze the song:

What instruments can we hear?

What is the beat, tempo of the song?

What cultural or subcultural values does this music represent?

Are the melody and the lyrics in sync?

What is the feel of the song?

Are there repetitions in the melody and the lyrics?

Reference:

Irvine, Martin. “Complex Artefacts: Music’s Meanings”. Presentation

Irvine, Martin. “Popular Music as a Meaning System” (Introductory essay).

Tagg, Philip.  “Introductory Notes to the Semiotics of Music.”

Logos hiding symbols

As Dr. Irvine explains in his introduction, a sign or symbol is anything we can express and interpret in physical instances of forms that are used to stand for something else beyond the fact of the physical instance. Signs and symbols are always formed in systems (language, graphical representations, image genres, etc.). Any spoken or written sentence, any graphical or visual representation, any audible composition of structured sounds in a musical genre is a perceptible sign-instance (token) used to take us beyond the information given (beyond sense perception alone) to the meanings, values, symbolic associations, and emotional responses that are activated or enacted by those in a sign-using community.

In order to interpret and study these different signs and symbols we use semiotics, the study of sign processes and meaningful communication, which is a necessary foundation for any kind of media theory. As we have discussed so far, it is important to "de-blackbox" processes or theories in order to study and understand them. Peirce's method is an approach for developing models for testable hypotheses.

We use symbols and signs everyday, and especially today, being connected to the internet and browsing different web pages, we can see different symbols, from network, wireless and internet symbols, to hardware or navigation symbols.

For today’s post I wanted to take some time and take a look at different logos with hidden symbolism, and how people use design to express a meaning behind a logo.

Let’s take a look at the FedEx logo:

The white space between the ‘E’ and the ‘X’ forms a perfect arrow, suggesting a company moving forward and looking ahead. This is a great design element, using white space, to create a “hidden” meaning.

Baskin-Robbins' pink and blue logo depicts a large "BR" that doubles as the number "31." Carol Austin, VP of marketing for Baskin-Robbins, said in an interview that the logo is "meant to convey the fun and energy of the Baskin-Robbins brand" as well as the iconic 31: "The 31 stands for our belief that our guests should have the opportunity to explore a fun, new ice cream flavor every day of the month."

LG used the initials "L" and "G" from the phrase "Life is Good" and also worked those letters into the design of the logo. At first glance the logo looks like a winking face, but it is built from the letters "L" and "G."

This logo appears to be just the Tostitos name, but the two T's in the middle form two people dipping a tortilla chip into a bowl of salsa, on top of the letter "i."

Formula One racing is another organization that took the sport’s core values and applied them to its logo. The red color represents passion and energy, while the black color represents power and determination, according to sportskeeda.com. With another play on negative space, the F1 logo is more than a black “F” with red racing stripes; the space between these two main focal points is the number 1.

The Bronx Zoo, located in NYC, is the largest zoo in North America, and its logo was designed to represent the zoo's location. At first you see two giraffes and birds, but on closer look, the legs of the giraffes trace the famous New York skyline.

Toyota’s representatives said that the three overlapping ovals on American vehicles “symbolize the unification of the hearts of our customers and the heart of Toyota products. The background space represents Toyota’s technological advancement and the boundless opportunities ahead.” And possibly even more impressive, if you look even closer at the overlapping ovals, you’ll see the word “Toyota” spelled out.

The arrow in the Amazon logo represents the idea that the company carries everything from A to Z, and it also forms a smiley face.

The iconic NBC logo contains a peacock in white with five colorful feathers, representing each of NBC's divisions when the logo was first designed. The peacock is also looking to the right, often associated with looking ahead or forward.

The Tour de France logo actually contains the image of a cyclist which can be seen in the letter ‘R’, with the orange circle symbolizing the front tire.

These are some examples of the use of signs and symbols to convey the meaning or message of a company. From a graphic designer's perspective, I really appreciate seeing hidden symbols in logos; it shows me that the person who designed the logo took the time to make sure it represents the company, the brand, and a meaning along with them.

Resources:

Chandler, Daniel. Semiotics: The Basics. 2nd ed. New York, NY: Routledge, 2007.

Giuliano, K. 13 famous logos with hidden messages. Retrieved from https://www.cnbc.com/2015/05/01/13-famous-logos-that-require-a-double-take.html. May 2015

20 Clever Logos with Hidden Symbolism.  Retrieved from http://twistedsifter.com/2011/08/20-clever-logos-with-hidden-symbolism/. July 2013

Irvine, Martin. Introduction to Signs, Symbolic Cognition, and Semiotics: Part I

How do babies learn language?

As Irvine mentions in his presentation, language isn't just a box of words to string together; it has built-in rules for combining words into grammatical phrases, sentences, and complex sequences. Language is a system made of subsystems, and each of these subsystems or layers plays an important role in the whole architecture. Let's take a look at these different subsystems.

Phonology: The system of speech sounds in a language, and the human ability to make spoken sounds in a language.

Morphology: Minimal meaning units as the first-level mapping of spoken sounds onto meaning units.

Lexicon: The "dictionary" or vocabulary of a language: words understood as minimal meaning units marked with syntactic (grammatical) and lexical functions (word class function).

Syntax: Syntax (or, more loosely, grammar) describes the rules and constraints for combining words in phrases and sentences (combinatorial rules) that speakers of any natural language use to generate new sentences and to understand those expressed by others.

Semantics: How speakers and auditors understand words in a sentence unit.

Pragmatics: The larger meaning situation, codes, knowledge, speech acts, and kinds of discourse surrounding and presupposed by any individual expression.

The process of learning language is natural, and we are all born with the ability to learn it. Let's take the example of babies. From the research that has been done, there are three stages in which children develop their language skills.

The first stage is learning sounds. In the first couple of months after a baby is born, they can make and hear different sounds from different languages. Babies then learn which phonemes belong to the language they are learning and which don't. The ability to recognize and produce those sounds is called "phonemic awareness," which is important for children learning to read.

The second stage is learning words. At this stage, children essentially learn how the sounds in a language go together to make meaning. As Dr. Bainbridge explains, this is a significant step because everything we say is really just a stream of sounds. To make sense of those sounds, a child must be able to recognize where one word ends and another one begins. These are called “word boundaries.” However, children are not learning words, exactly. They are actually learning morphemes, which may or may not be words.

Stage three is learning sentences. During this stage, children learn how to create sentences. That means they can put words in the correct order.

Of course, the rate at which language develops is affected by many factors, but what this shows is that from birth our brain is capable of learning a language, and then, by studying it and learning the grammar, we can develop more complex sentences.

It is even more interesting to think about the process of learning more than one language, and studies show that the younger we are, the easier it is to learn additional languages. From my personal experience, I started learning English when I was in the third grade. In order to really understand a language and become fluent in it, there are four skills to practice: listening, speaking, reading, and writing.

Practicing and repeating these different skills helped me to learn the language faster.

To conclude, language is a complex system made of all these different subsystems, and understanding each part helps us use it correctly and meaningfully.

Reference:

Bainbridge, Carol “How Do Children Learn Language?” from https://www.verywellfamily.com/how-do-children-learn-language-1449116. 4 December 2017

Irvine, Martin “Introduction to Linguistics and Symbolic Systems: Key Concepts”

Jackendoff, Ray Foundations of Language: Brain, Meaning, Grammar, Evolution. New York, NY: Oxford University Press, USA, 2003.

Pinker, Steven. "How Language Works." Excerpt from: Pinker, The Language Instinct: How the Mind Creates Language. New York, NY: William Morrow & Company, 1994: 83-123.