Author Archives: Handan Uslu

Apps as Interfaces for Accessing Mobile Software

Apps are integral to the functioning of smartphones and are central to their interface design and functionality. While smartphone users interact with apps intuitively, multiple symbol-making processes take place as one interacts with a smartphone application. To start with, tapping a single button on a smartphone opens an app. This process is made intuitive through a text label bearing the app’s name and a graphic design that further communicates the app’s functionality. Apps are tiled on the smartphone’s interface, a design that suggests a modular structure: apps are modules for accessing software.

In terms of functionality, apps provide access to software that was designed through the employment of human cognition. “Apps,” therefore, are interfaces that allow access. Framing apps as interfaces allows us to further discern the software technology behind them: the interface stands for access to the software. Another dimension of apps is that they are embedded in smartphones regardless of Internet access: they are mobile technologies, characterized by their integration into multiple devices.

Considering the software behind applications, defining the symbolic interaction as one between the mobile phone’s screen and a person’s cognition would be limited, and would not cover all aspects of the symbolic interaction. Peirce mentions an “object” (Chandler, 2007) when he describes the semiotic process, and employs a directional understanding to explore how meaning is generated. Similarly, Saussure mentions a “signified” (Saussure, 2011). Saussure’s idea of the signified is more explanatory in referring to the mental image and abstraction that take place during symbol-making, yet these two models of sign-making are not comprehensive enough to cover all aspects. Apps are interfaces that enable access to employed human cognition, and they extend human cognition by providing cognitive tools for functioning.

De Saussure, F. (2011). Course in general linguistics. Columbia University Press.

Chandler, D. (2007). Semiotics: the basics. Routledge.

Meaning Making in Social Media: Generation of the Social Value of Content through the Interaction of Multiple Agents

Handan Uslu


Initially, social media platforms’ software and the newsfeed are characterized in the context of the current online media structure, namely Web 2.0. An analysis is conducted on two levels: (1) the common architecture of all social media platforms is defined through an analysis of the interface; (2) the constituents of a single social media post and the agents that generate them are identified. Through an analysis of the architecture of social media platforms, as well as an analysis of the constituents of a single Facebook post, the meaning-making process is defined. The analysis reveals that businesses, users, and software are the three main agents that collectively curate the newsfeed and generate the elements of a social media post. These elements provide social value to social media content through contextual cues and the economization of interactions. Finally, the meaning-making process and the user’s interaction with content are modeled by employing Jackendoff’s “Parallel Architecture Model.”


Social media platforms have become widely used online technologies for communication, information exchange, and representation. A multitude of constituents come together to form a social media post. Reducing the symbolic processes that take place during this interaction to a relationship between the “signified” and the “signifier” (Chandler, 2007, p. 14), or using other traditional frameworks to describe meaning-making, however, fails to encompass the totality of the symbolic processes that take place as a user interacts with social media content.


While some scholarly work describes the meaning-making process through two- or three-dimensional frameworks (Chandler, 2007, p. 14), these approaches fail to explain the totality of the symbolic interactions that take place as meaning forms.

In order to provide a comprehensive description of the meaning-making process on social media, the analysis will be conducted on two levels. First, the architecture of social media platforms will be analyzed in order to understand what is common to their software. For this analysis, online content from three different social media platforms will be represented without the actual content, so that the interface itself can be analyzed. After establishing the common features of social media platforms and the nature of users’ engagement with online content, a second analysis will be conducted, looking at the constituents that make up social media content. This analysis aims to illustrate the multitude of human symbolic faculties employed during the meaning-making process.

In order to explain the meaning-making processes as the user simultaneously interacts with the elements of social media content, Jackendoff’s model will be applied. Initially designed to describe meaning-making during language processing, Jackendoff’s “Parallel Architecture Model” is applicable to various meaning-making processes. Through a two-level analysis of social media platforms, first of the medium and second of the constituents of social media content, the aim is to uncloak the role of multiple agents in the generation, representation, and contextualization of online content.

Exploring the symbolic processes that take place as a user interacts with content is significant, considering the high economic profit that this interaction has generated and its cognitive consequences. As viewing online content has become a common phenomenon, the mediums through which online content is viewed have also become marketing channels, and online content has become integrated with advertisements. Secondly, interaction with online content has led to the formation of particular habits by generating a specific type of cognitive engagement. The “checking habit” is a recently emerged habit of checking one’s smartphone without any notification. Similarly, Facebook addiction is a form of addiction in which the informational rewards offered by Facebook affect the brain’s neurotransmitter system, as online content functions as an informational reward.

Current Network Architecture: Scrolling Down as an Information Consumption Process and the Newsfeed as a Personalized and Dynamic Meta-Medium

The newsfeed is the page, in a browser or a social media app, through which content flows on a social media platform. The newsfeed consists of a multitude of media, ranging from text, image, moving image, and sound to a combination of all of these, or a link to media. Considering this variety of content, the newsfeed is a “meta-medium.”

The software, the user, and businesses have collective agency in the curation of content in the newsfeed. First, user interaction is a curating agent: liking a page or following a friend consequently surfaces content from those users. Second, such interactions on Facebook, whether liking somebody else’s post or following a particular company’s Facebook page, function as data. This data is then used by the software to predict and present content that the user is likely to interact with. Third, the demographic information entered on a Facebook page, as well as a user’s interactions with content, allows companies to target people accordingly for marketing purposes. Liking a page related to photography, for example, may cause companies that sell photography-related products to target you. Consequently, the businesses interested in reaching their customers through social media platforms become agents that curate content on social media.
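The three curating agents described above can be illustrated with a toy ranking sketch. This is an entirely hypothetical illustration, not Facebook’s actual algorithm: the weights, scores, and field names are invented for the example.

```python
def rank_feed(posts, followed, past_likes, sponsored_ids):
    """Order feed posts by a toy score combining the three curating agents.

    All weights below are arbitrary, chosen only for illustration.
    """
    def score(post):
        s = 0.0
        if post["author"] in followed:               # agent 1: the user's explicit choices
            s += 2.0
        s += past_likes.get(post["topic"], 0) * 0.5  # agent 2: software predictions from past data
        if post["id"] in sponsored_ids:              # agent 3: businesses paying for reach
            s += 1.5
        return s
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "author": "friend", "topic": "photography"},
    {"id": 2, "author": "stranger", "topic": "news"},
    {"id": 3, "author": "brand", "topic": "photography"},
]
ranked = rank_feed(posts, followed={"friend"}, past_likes={"photography": 3}, sponsored_ids={3})
print([p["id"] for p in ranked])  # → [1, 3, 2]
```

The point of the sketch is only structural: no single agent determines the ordering; the user’s follows, the software’s learned interests, and the sponsoring businesses each shift it.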

This particular representation of content is enabled by the current network architecture, which allows online content to be streamed and instantly accessed. O’Reilly characterizes this architecture as “spanning all connected devices” (O’Reilly, 2005) and allowing the “consuming and remixing data from multiple resources” (O’Reilly, 2005). This architecture of the web allows online content to be remixed from multiple resources, and it is embodied in social media platforms as well, as online content from various resources is streamed into a single interface called the “newsfeed.”

“Scrolling down” refers to the process whereby users interact with a multitude of content sequentially. It can be characterized as a particular form of information digestion, enabled by the software configuration of social media sites. Scrolling down is a mental process in which content streamed into a single newsfeed is continually interacted with: through a finger movement on a smartphone, or by moving the mouse down a browser page, content is accessed.

Considering that vocabulary provides a cognitive framework for perceiving and conceptualizing, the interaction that takes place during scrolling down will be referred to as a “consumption” process. While the word consumption refers to the usage of a material, it also fits social media content, considering (1) the multitude of information digested and (2) the algorithm that renders online content obsolete within a few days: the current network structure allows content to be updated continually (O’Reilly, 2005), and therefore content published previously becomes obsolete and is no longer interacted with.

Mediation of Content in Social Media: Analysis of the constituents of a single social media post

Multiple agents take part not only in the curation of the newsfeed, but also in the symbolic construction of a single social media post. Online content is represented along with other elements, such as a person’s profile photograph, the number of likes, and so on. An analysis of the interface design of a single social media post, together with a description of the agents that take part in its formation, illustrates the agencies contributing to the meaning-making process.

(1) Contextual Cue for Social Media Content: Employing the function of “identity” and Representing Online Content as a Curation

When the user interacts with online content, the photograph and the name are the most prominent features of a social media post. As seen in Figure 1, the primary symbols represented in a social media post are a single square photograph and the name and surname of the person who has shared the content (User name in Figure 1). These two features allow a person’s identity to be mediated online. The “profile photograph” is especially functional here: various theorists mention the role of “image-representations” when discussing the cognitive aspects of meaning-making (Shepard, 1978; Pylyshyn, 2003). The profile photograph therefore facilitates meaning-making by addressing the mental images necessary for cognition. Representing a name along with a photograph is a clever interface design, considering that it facilitates meaning-making by employing text and image, two different forms of media, simultaneously.


Figure 1: Common Features of Social Media Posts


Figure 2: A post on Instagram


Figure 3: A post on Twitter


Figure 4: A post on Facebook

The fact that identity is represented online leads us to reconsider the construct of identity. While traditionally known as a psychological construct, identity is also a “social-cultural structure” (Irvine, 2012, p. 4) that is symbolic and can be represented and employed in various contexts. As Irvine emphasizes: “Human culture and social functions are inseparable from our expansive system of symbolic systems and the daily activation of symbolic functions in every technical form of media and communication.” (Irvine, 2012, p. 1)

The activation of the identity function has even become the sole focus of some software companies, as in the case of Gravatar. Gravatar is a software program that provides plug-ins for personal representation in blogging software, and it has been integrated into WordPress in particular to deploy the function of identity on online platforms. The company develops tools that can be integrated and synchronized with other online platforms, in order to standardize the representation of identity during online interaction.

Presenting an identity along with content enables online content to be represented with its curator. Identity functions as a contextual cue that contributes to the meaning-making process: the user interacts with online content in regard to the person who has shared it. This representation of content as somebody else’s curation is a dynamic of the meaning-making process, and it leads to further associations in cognition that contribute to meaning-making: a user’s relation to the curator of the online content, and the curator’s social capital and legitimacy, are some of the dynamics that play a role as somebody interacts with a particular piece of content.

(2) Collective Representation of Personal Interaction: Standardizing Interactions through Likes, Follows, and Shares buttons

Another feature common to social media platforms is the set of interaction buttons. On Facebook, users interact with online content by liking it, commenting on it, and sharing it through the “Like,” “Comment,” and “Share” buttons. These buttons take the form of “Reply,” “Retweet,” and “Favorite” on Twitter, while Instagram allows only liking and commenting. The “Like” button on Facebook corresponds to the “Like” button on Instagram and the “Favorite” button on Twitter. While different labels are used on different platforms, the modes of interaction are structurally similar (see Figures 2, 3, and 4).

These buttons that enable interaction also provide a framework communicating the possible modes of interaction with the content. The software and the interface design behind these buttons therefore have a cognitive function: they are deterministic in the way they prepare the cognitive ground for interacting with content.

The agency of the software behind this interaction, however, is cloaked: the interface design and the text on the “Like,” “Share,” and “Comment” buttons are intuitive. The text provides the cue necessary to inform users about the buttons’ functionality, which is a crucial dimension of intuitive interface design: “…its highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” (Weiser, 1994) Considering that good design is intuitive and invisible, social media platforms have succeeded in requiring minimal literacy for interaction.

Along with implementing the function of identity, these buttons function to simulate social interaction. There is no direct interaction taking place between people or between people and organizations in the traditional sense (there are no direct e-mails, no direct messages, or any physical interaction), yet the users engage in social interaction by engaging with these buttons.

Aggregation of Interaction: Social Value of Online Content through Economization

A particular aspect of the “Like,” “Reply,” and “Comment” buttons is that they standardize interaction: the Like button, for example, can express any form of positive feedback for online content. This standardization allows interactions that take place on the individual level to be represented in an aggregated manner. If 10 different people like a post, these interactions will be displayed aggregately, through the text “10 people like this photo,” despite the fact that the 10 interactions took place in different time frames by different individuals.
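The aggregation step described above can be sketched in a few lines. This is a minimal illustration of the principle, not platform code; the event format and display wording are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical individual interaction events: (user, post_id, action),
# each occurring at a different time, by a different individual.
events = [
    ("ada", 42, "like"),
    ("ben", 42, "like"),
    ("cem", 42, "like"),
    ("ada", 7, "like"),
]

def aggregate_likes(events):
    """Collapse individual like events into one count per post."""
    counts = defaultdict(int)
    for user, post_id, action in events:
        if action == "like":
            counts[post_id] += 1
    return dict(counts)

def display_text(count):
    """Render the aggregate the way a newsfeed might phrase it."""
    return f"{count} person likes this" if count == 1 else f"{count} people like this"

print(display_text(aggregate_likes(events)[42]))  # prints "3 people like this"
```

Once interactions are standardized into countable events like these, they become quantifiable, which is precisely what makes the economization discussed below possible.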

When software transforms interactions into quantifiable representations, online media content’s value becomes calculable. This process enables online media content to be economized, and consequently gain a particular value. This value prepares the grounds for Facebook to become a digital marketing platform, by providing the structure that enables monetizing impressions, exposures, and interactions.

(3) Presenting Content in a Digestible Form: Employing multiple human cognitive faculties to represent online content

The third aspect of a single social media post consists of the online content itself. While the content may consist solely of text or image, the software also allows users to share content from another website by providing a link. A significant symbolic process takes place as users post a link: the Facebook software retrieves a title, subtitle, and image related to the shared content, and all of these elements are represented together as content. The interface design that places these elements in a Facebook post also provides a standard representation. This standard framework is employed during the remediation process, which extracts the necessary elements from an online link. Extracting elements in this simple manner and presenting them together (see Figure 1) provides cues about the tone of the content and allows digestion of the content as a whole.
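One common mechanism behind such link previews is the Open Graph protocol, in which a page declares its own title, description, and image in `<meta>` tags that the sharing platform extracts. The sketch below, using Python’s standard-library HTML parser, illustrates the extraction step on a made-up page; it shows the general technique, not Facebook’s internal code.

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            prop = d.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = d.get("content", "")

# A hypothetical page head from a shared article (invented for the example)
html = '''
<head>
  <meta property="og:title" content="An Example Article" />
  <meta property="og:description" content="A short subtitle." />
  <meta property="og:image" content="https://example.com/cover.jpg" />
</head>
'''

parser = OGParser()
parser.feed(html)
print(parser.og["og:title"])  # prints "An Example Article"
```

The standard representation of a shared link in the post, title, subtitle, and thumbnail, is thus the visible output of a remediation step that runs entirely in software.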

The analysis above focused on three dimensions of a social media post: (1) the profile photograph and name of the user; (2) the interaction buttons and the quantified representation of interactions; (3) the online content itself. Analyzing these three dimensions modularly is significant, considering that what differentiates one social media platform from another is not only the graphical design, but also the space allowed for text, image, and content. An Instagram post, for example, is similar to a Facebook post; it merely lacks the share button. Beyond this difference, there is nothing structurally different between a Facebook post and an Instagram post. The significant differences in the demographics that use different social media sites, and in the nature of their content, are therefore consequences of variances in these post elements.

Jackendoff’s “Parallel Architecture Model” for Understanding Meaning-Making during Consumption of Social Media Content

Along with the analysis of social media content, O’Reilly’s definition of Web 2.0 was employed to characterize the information consumption process that takes place during interaction with online content. The analysis revealed three agents, namely the user, businesses, and the Facebook algorithm, that collectively curate the newsfeed and provide the contextual basis for meaning generation. Furthermore, modularly analyzing these features has shown that the meaning-making process does not happen in regard to particular elements, but is a function of all the symbolic and cognitive mediations that have taken place prior, together with the software that designs the symbols.

While the agents that contribute to the meaning-making process are defined through a modular analysis, the information consumption process does not happen modularly: a user does not generate meaning by looking at each element separately, but perceives social media content as a whole. The combination of the elements in a post is a “modular combination,” yet these elements are perceived simultaneously during the scrolling-down process.

Jackendoff’s parallel architecture model (Jackendoff, 2007) for understanding language can also be employed to understand how this multitude of content is perceived simultaneously. The parallel architecture model is one of the “features of language that are extensible to other symbolic systems” (Irvine, 2012). The input of a user, the content, and the aggregated interactions all function as contextual cues in the meaning-making process, and all contribute to attaching a singular social and economic value to content.

In this context, the terminology that Jackendoff employs to describe the “Parallel Architecture Model” (Jackendoff, 2007) can be translated into the social media context as well. While a social media post and language are structurally different, Jackendoff proposes a non-directional method of abstract thinking that gives insight for making sense of complex phenomena. Given Jackendoff’s model, the components of a social media post can be considered “generative components” (Jackendoff, 2007, p. 12).

The distinction that Jackendoff makes between parallel and serial processing (Jackendoff, 2007) is also applicable to the context of a social media post. As humans employ the language faculty, language is perceived as a whole structure, and all its sub-structures are perceived collectively. This characterization applies to a social media post as well, considering that all the elements form a post regardless of the completeness of the post or the content inside it.


In this analysis, the newsfeed was first characterized as a meta-medium in the context of the current network architecture. Afterwards, an analysis of social media content revealed the role of multiple agents in the curation and representation of online content. The social value of content through economization was explored with a focus on the software’s agency. Finally, Jackendoff’s “Parallel Architecture Model” was employed to describe the meaning-making process. The analysis reveals that the contextual cues of online content and its embedded interaction functionalities allow online content to gain social and economic value during the mediation process.


Chandler, D. (2007). Semiotics: the basics. Routledge.

Jackendoff, R. (2007). A parallel architecture perspective on language processing. Brain research, 1146, 2-22.

Irvine, M. (2012). Media theory and technologies of mediation: An introduction. Google Docs, 2012-2014.

Pylyshyn, Z. (2003). Return of the mental image: are there really pictures in the brain?. Trends in cognitive sciences, 7(3), 113-118.

Shepard, R. N. (1978). The mental image. American psychologist, 33(2), 125.

Weiser, M. (1994, November). Creating the invisible interface (invited talk). In Proceedings of the 7th annual ACM symposium on User interface software and technology (p. 1). ACM.



Identity as a remediated cultural text

It is a challenge for human beings to understand their own symbolic systems, because we can never be external to our own cognition. Another challenge is the restriction that language brings: words crystallize meaning and precondition our thoughts by providing conceptual frameworks. On the other hand, these two challenges that make it difficult for us to understand our symbolic systems, being human and having a particular language structure, can also constitute starting points for understanding human symbolic systems.

In order to overcome the restrictions that language imposes, the best we can do is gain a “meta” understanding. Secondly, a historical approach can be employed to understand how human cognition functions. Looking back at history, understanding the implications of Claude Shannon’s master’s thesis was the “a-ha” moment for me. Using electricity as a means to deploy human cognition has enabled us to create cognitive structures (not cognizers) for our use.

Understanding these concepts allowed me to think of questions I would not have thought of otherwise: What assumptions do we make as we navigate daily life? How does the language we use restrict our conceptualizations? How do we delegate and expect agency from artefacts? What is the best way to characterize artefacts with deployed cognition? And finally, what are the consequences of not asking these questions?

Considering that the way humans digest content is the topic that academically inspires me, the concepts I have learned throughout the courses further extended my inquiry to the following questions: What is the cognitive basis of remediation? Are there any common patterns in the remediation of media? Can we observe any patterns in the way media forms are remediated?

With this perspective, I propose the following text as a unit of analysis with the conceptual tools at hand: online identity. Approaching online identity as a form of remediation, Bolter and Grusin’s remediation theory can be employed to understand how identity is transferred. Secondly, analyzing a social media platform’s interface in regard to the way it simulates person-to-person interaction can be a useful approach to de-blackbox the cognitive processes that take place. Finally, de-blackboxing the functionalities and symbols embedded in the buttons of online platforms can be useful for understanding how interacting with a computer can substitute for person-to-person interaction.

Istanbul inside the Google Cultural Institute

Even though Walter Benjamin argues that art’s value has become less about its ritualistic value, I remember having a very ritualistic relationship with a particular medium: the Qur’an. Since rooms with a Qur’an in them are said to have more angels, I grew up in rooms with a Qur’an corner: an Arabic book with Arabic text, there only to fulfill its ritual function. The existence of the Qur’an brought other rituals with it; you were not supposed to touch it unless you had a valid religious bath (a ceremonial bath you take every day), and you were supposed to hold it above chest level. The ritualistic dimension of owning a Qur’an is probably more prominent to the Turkish eye, since most Turks cannot read Arabic letters, yet always keep the Arabic version of the holy book as a ritual. I remember being anxious around the book when I felt Muslim at the time.

When Islamic applications emerged, my father downloaded the Qur’an to his iPhone, and this is where the problems started: Was my father allowed to bring his phone into the restroom? Was it okay for him to keep his phone below chest level? Would he be better off carrying and interacting with the Qur’an, or not carrying it and respecting it? My father was able to resolve this dilemma thanks to his engineering background, and told me the following: “Handan, this is not the real Qur’an; what is inside this phone is just 1s and 0s. I can do whatever I want with my iPhone.”

As manifested in the instance above and in Benjamin’s discussion, the technical reproducibility of art has altered what art is, how it is consumed, and the ends it serves. The emergence of photography was particularly significant at this point, considering that it relied not on primitive and intuitive methods of documentation, such as forming a sentence or drawing on walls, but on a particular technique of capturing an image. From my perspective, what all aspiring photographers do is be distracted by the exotic, the sunset, the historic, the poor, the far away. Particular visual narrations are appealing to photographers, and they often serve a political function: the Afghan girl photograph, for example, suggests how primitive Afghanistan is and how it deserves war.

The Google Art Institute, despite a visual design that depicts it as a neutral meta-museum, is likewise biased, considering that it too creates a narration. The following words from Benjamin summarize the curators’ power to reconstruct history: museums are “where the way each single image is understood appears prescribed by the sequence of all preceding images.” Museums are a way of representation; just like the news, they are reconstructions of history through a narration that occurs by aligning particular elements together.

Just like the term “news,” “museum” has a neutral resonance: we are likely to think of the news as neutral, regardless of its context or the agents involved in its formation. This is because what makes the news political is more about what is not in the news than about its content. With this understanding of the museum’s neutral connotation, I would like to comparatively analyze two cases: the “Mural Istanbul Festival” on the Google Art Institute versus “the Don Quixote Occupation House.”

In the Mural Istanbul festival, a municipality with a secular majority had domestic and international artists paint graffiti on the walls of Kadikoy, Istanbul, a city that has been the capital of both the Roman Empire and the Ottoman Empire. The city’s history makes it problematic to regulate and renovate, which consequently leads to complicated streets, hard navigation, and a city that only locals can understand. The walls of old apartments, historic streets, and alaturca homes became walls for graffiti where artists made statements about the Occupy movement. These two elements, the alaturca lifestyle of people in old houses and the politically grounded graffiti culture, blended very well together, considering that blending is what has made Istanbul what it is.

Despite having seen almost all of the graffiti of the “Mural Istanbul Festival,” I learned about the project only yesterday, while navigating the Google Cultural Institute. While the Google Art Institute represents items from institutions that are traditionally museums, the Google Cultural Institute enabled me to observe these graffiti with the function of a museum: the graffiti were visible to me in relation to each other.

The Google Cultural Institute, therefore, has become a means of re-conceptualizing and re-historicizing not only what was formed as a museum, but also what was intentionally constructed not to function as one. Istanbul’s street art culture fits the city’s aura and character, one that enables blending and makes contrast look beautiful. But this time, what was made to be kept undercover (the municipality’s agency and financing of the graffiti) is emphasized through the Google Cultural Institute, turning the graffiti from counter-cultural street demonstrations into elements of a museum.

The framing and constructive power of the Google Cultural Institute is even more obvious when we realize that squatted houses are not present in it. After the Occupy movement, the leftist, socialist, counter-cultural agents in Turkish society found means to strengthen themselves and demonstrate, and an abandoned house was occupied. The Don Quixote Social Center is known and acknowledged only by the Occupy Gezi protestors, some foreigners in the city, and a couple of leftist websites. The link to their video has only 1,400 hits.

The counter-cultural dimension of this art project makes the Don Quixote Social Center impermeable to any ideological attack or appropriation, and most importantly, it is built to be invisible to the Google Art Institute. Its very existence counters the idea of private property, and therefore it does not fit in any way into the concepts of museum, copyright, or ownership.

Just as in the case of the news, what defines the Google Cultural Institute is not the museum elements inside it, but rather what is not inside it. On the surface, one could easily think that the Google Cultural Institute is a meta-museum trying to recreate the experience of a museum. However, considering the agency involved in creating it (it does not function like a social network; not everybody is making their own museum), the Google Cultural Institute is a particular mode of representation. It is far from being meta; it is more like an artwork itself with a particular demonstration: the demonstration that the ritual can be taken out of art.

The new Memex and the Economics of Interaction

As in the case of ideologies and political movements, technological progress was also made possible by inspiring ideas, which provided vision for future engineers and scientists as they made progress. Bush’s “As We May Think”[1] is one of those articles that provide the conceptual grounding for today’s technology. The memex, based on the idea of compression, was conceptualized as an external memory which people could consult. The concept of the memex, however, does not rely on digitization but rather provides a vision.

This conceptualization has manifestations in multiple technology products that we use today. The idea of compressing information relates to USB flash drives, commonly used for storage and data back-up. USB drives are economically available to the public, and they relate to the idea of the memex through their availability and their capacity to store large amounts of data. USB drives, however, hold passive information: there is no interface that renders them devices we can consult, nor do they support hypertext. Google’s search engine, on the other hand, relates more to Bush’s conceptualization of the memex through its association with memory.

That being said, less than 24 hours ago a product was introduced that has the functionalities of both the USB drive and Google (the USB drive’s storage capabilities and mobility, combined with the hypertexted structure of Google’s search engine that relates to our cognition): the Chromebit. Able to be plugged into any HDMI-equipped display, the Chromebit turns every screen into a computer: the functionalities of Google-based laptops, in a device smaller than an iPhone; a USB-like computer. The memex is still inspiring new technological products, since it is more than a technological model.

The fact that the idea of the memex existed before a technical model of Google, the USB drive, or digitization makes me think about the following: did participatory media exist before social media, or the current structure of Web 2.0? Actually, interactive media existed before social media platforms or Web 2.0. Consider the following cases where more than one person interacts with content: second-hand book stores where people seek books with notes, in order to nostalgically relate to the memories of strangers, or a petition that gathers signatures for a cause. These are cases where media is interactive, where people are collectively interacting through a medium.

Interacting with content, however, is now integral to the media consumption process. By rating videos on YouTube, we interact with content. By commenting on comments and rating feedback, we even interact with interaction. This particular structure is not based on a technological innovation, but rather on the way network architecture was modified to accommodate interaction. User experience is the priority, and is facilitated through abstraction. Focusing on user experience is possible through design, the most crucial element in the architecture, considering that it is the interface between human cognition and the blackboxed artifact.

It should be noted, however, that the current structure of the Web as an interactive platform where users engage with content rests on a digital economic model. Interactive design exploits our brain’s structure by providing instant informational and emotional inputs. This structure is economically promoted, because the interfaces we interact with are also marketing channels for brands. Our attachment to interfaces is a demand of the market, not only because brands want to be as close to us as possible, but also because brands want us to interact with them.

In order to describe and characterize the current computational devices through which we access this content, I believe it is necessary to note a convergence of the interface: the functionalities embedded in my computer, phone, and television are becoming more similar day by day. Therefore, in today’s context, it is necessary to differentiate forms of media from the medium. Previously, media forms related directly to the senses: radio made sound, books were for reading, and television was for watching. In today’s context, however, it is easier to discern that media forms are functionalities, and these functionalities are accessible through simple finger movements (such as scrolling, tapping, etc.). Differentiating media functionalities from the devices that enable access leads to the idea of “metamedia.”

[1] Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.


Week 10

Boolean logic relates to the idea of binary, which depends on the idea of a single difference. A difference can be embodied in hardware, for instance by giving a pipe two ends, thereby introducing binary options in mechanical terms. Understanding Boolean logic helps us grasp what information really is. Traditionally, “a fact” comes to mind when we think of information, whereas information is really about difference: as Gregory Bateson puts it, it is the “difference that makes a difference.”
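This idea of a single difference can be made concrete with a few lines of Python (my own illustrative sketch, not from the readings): a bit is nothing more than one difference, Boolean operators combine differences into new differences, and the XOR operator in particular outputs a 1 exactly when its two inputs differ.

```python
# A bit encodes a single difference: 0 or 1.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    # Outputs 1 exactly when the two inputs differ:
    # Bateson's "difference that makes a difference."
    return a ^ b

# Enumerate the full truth table for XOR:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

The same truth tables can be realized mechanically or electrically, which is why a single physical difference (a switch open or closed) is enough to build computation.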

As Shannon demonstrated in his master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits,”[1] electricity can also be used to embody Boolean logic. Irvine describes binary units (bits) as the “interface between logic-mathematics and electronics.” Computation, therefore, builds on our cognition. While the concept of computational thinking is used to describe how humans think in computer science terms, we can go further and describe how computers think in human terms, and speak of cognitive coding to describe how computer programs depend on human cognition to begin with. While software depends on human cognition, however, computers are more powerful than humans: we owe their power to their processing capacity and memory.

The more one gets invested in computing, the more obvious it becomes that good code is more than a good algorithm. The processing capacities of computers form not only their power, but also their limitations. Good code should take into consideration the processing power of the computer, and be as smart as possible.

Consider the following Python algorithm that multiplies any three input variables:

def functionMultiply(a, b, c):
    # (body reconstructed; the original was missing)
    # stores each intermediate product in memory before returning
    first = a * b
    second = first * c
    return second

This algorithm, however, can be improved in the following way:

def functionMultiply(a, b, c):
    # (body reconstructed) a single expression, no intermediate storage
    return a * b * c

The concept of computational thinking becomes more tangible once we take into consideration the limitations of the computer. Wing eloquently describes this perspective by stating that computational thinking involves “making trade-offs between time and space and between processing power and storage capacity.”[2]
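Wing’s trade-off between processing power and storage can be sketched in Python (my own illustration): the same Fibonacci function, written with and without a cache, trades memory for time.

```python
from functools import lru_cache

def fib_slow(n):
    # Recomputes every subproblem: minimal memory, exponential time.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Stores every result it computes: extra memory buys linear time.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(30))  # 832040, computed almost instantly
```

The two functions return identical answers; the only difference is where the cost is paid, in processor cycles or in stored results.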

This quote leads us to think about elegance in coding design: less code, fewer iterations, more algorithmic and human intelligence, and less reliance on memory. “It is judging a program not just for correctness and efficiency but for aesthetics, and a system’s design for simplicity and elegance.”[3]

All software programs are designed to facilitate human thinking. In Java, the following statement prints a string:

System.out.println("Hello World");

In MATLAB, the following command clears the command window:

clc;
And finally, consider the following R (ggplot2) snippet that I wrote last week for my homework:

a <- a + theme(axis.ticks = element_blank(),
               axis.text = element_blank(),
               axis.title = element_blank(),
               panel.background = element_blank())
# (the lines after the first are reconstructed; the original continuation was cut off)


Even if you are not familiar with Java, MATLAB, or R, it is intuitively clear that the command “System.out.println” somehow prints something out, “clc;” clears something, and in the R code I was making something blank or empty. Therefore, the coding process is eased not only through analogies such as “data sets,” but also through linguistic analogies with the English language.

What is really deblackboxed, from my perspective, is the way coding and predictive algorithms are altering our online exposure. The content we encounter online has become a means for us to make sense of the world: we consume news, media, and entertainment, make buying decisions, and communicate through interfaces. Our interactions with computers, however, are used as data that predicts the type of content we might be interested in consuming, the type of product we might look for, or the person we might know. Therefore, our online exposure is personalized for us, which brings us to the intersection of capitalism and computation.

[1] Shannon, C. E. (1938). A symbolic analysis of relay and switching circuits. Transactions of the American Institute of Electrical Engineers, 57(12), 713-723.

[2] Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717-3725.

[3] Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717-3725.

The Home Button as the Cloaking Interface to the Interface of Siri

Before I decided to analyze the iPhone for this week’s writing, I spent a lot of time thinking about what other digital devices I could focus on. While I blamed myself for my lack of creativity in coming up with a digital device, I later realized there is a reason I can’t think of any device other than an iPhone: I don’t own anything other than a Mac and an iPhone. Remembering the boxes I used to have in the corner of my room, full of MP3 players, amateur Canon cameras, Nintendo Game Boys, Nokia phones, cables, and chargers, it is amazing how all I need is inside an iPhone now.

Going through the readings and developing an understanding of how agencies and forces are interfaced, I became curious about the only non-digital component we use to navigate an iPhone, the Home button, and the software embedded in it (or cloaked by it): Siri.

The Home button is the interface that leads us to use the software of Siri, which itself constitutes an interface between human language and the information on the World Wide Web. This double layer of interfaces, however, leads us to perceive the Home button as the only interface to information. Siri’s branding as a “Personal Assistant” also cloaks Siri’s own software. While Siri is embedded in the network of the Internet, facilitated by an Internet connection, digital voice processing, and natural language processing, and requires significant data transmission and a strong algorithm, the way Siri talks and its instantaneous responses cloak the technology and software dimension. Consequently, Siri functions as the ultimate cloaking device for the very artefactness of the iPhone: the iPhone acquires human-like properties through Siri.

While actor-network theory discusses material technology in particular, and Siri is a software program, Siri’s particular place in the iPhone and the way the Home button cloaks it help us handle it from an actor-network perspective. Siri is embedded in the Home button of the iPhone, the only button actively used for navigation. Siri is an inherent quality of the iPhone: while many apps, like Messages and Phone, are integral parts of the iPhone, Siri is unique in its placement. The Home button, on the other hand, is the only interface we perceive as access to information. The Home button is the visible interface, whereas Siri itself constitutes a smart interface.

Zhang’s approach[1] to the external representation of distributed agency is particularly applicable to Siri: Siri has the potential to structurally alter the way we reach information. Through its characterization as a person, Siri doesn’t let us realize the machine-to-machine interaction behind it, the algorithms it uses to gather information, or the natural language processing that takes place. Zhang describes this lack of awareness as a property of external representations, which “anchor and structure cognitive behavior without conscious awareness.”[2]

What is visible to the user, on the other hand, is the affordance of Siri. Analyzing my personal understanding of Siri, I came to the following conclusion: affordances can be channeled from an artifact (a device, an application, or an algorithm) to an individual’s cognition, and be reflected back onto another artifact. With this perspective, I realized that my interactions with Siri were based on my interactions with Google. The “cognitive affordance” of Siri was channeled from my understanding of how Google works. In fact, while we are prompted to interact with Siri as if it were a person, through complete questions, Siri does function like Google, and dropping keywords like “photograph Paris” results in relevant search results. What facilitated my interaction with Siri was my previous experience with Google.

Siri, however, differs from Google in the agency involved: while Google applies significant natural language processing to create a cognitive bridge between the information out there and our minds, Siri takes action. It has the agency to create reservations, pick the most relevant information, and respond. Siri’s software, therefore, is two-dimensional. It translates human intent into action: first analyzing our language through natural language processing, then forming a prediction of our intention, which requires agency, and finally representing particular information or taking action.

Therefore, it is not only the materiality of a technology, but also its software, that suggests combinatoriality in Siri. Social theories should address these cognitive affordances as well, and maybe address the following questions: How do we form an understanding of what we have instant access to, and what we do not? How do we determine what we should store in our long-term memory and what in our short-term memory? How are these perceptions mediated by technology? Extending the tools that actor-network theory offers to the cognitive aspect can help us address these questions.

[1] Zhang, J., & Patel, V. L. (2006). Distributed cognition, representation, and affordance. Pragmatics & Cognition, 14(2), 333-341.


Mediation in Digital Platforms

As McLuhan stated, the content of a particular media form distracts our attention from the materiality of transmission. Separating the materiality of transmission from the media function, however, provides a unique perspective for understanding the impact of media. Understanding the possibility of performing surgery in light of the technology that electricity provides, for example, brings insight into understanding modernism. Modernist critique is, in fact, embedded throughout Debray’s and McLuhan’s writings, in which they let us understand modern activities (like performing surgery, going to the gym, etc.) as cultural practices enabled through mediation technologies. The insight they bring is so revolutionary that it can even be translated into critical theory. Consider the idea of “news,” and how it functions, for example. The immediacy characteristic of the news form allows us to comprehend “news” as objective and neutral facts, even though news items are crafted combinations of text, image, and moving image, chosen through a particular perspective to serve a particular point of view.

Considering that the readings focused on media as a central factor in the expansion and structure of societies, an understanding of Web 2.0 is necessary. With Web 2.0, a completely new infrastructure of the Internet has developed. Google, in particular, went beyond this and provided us with an imagination: we have established a relationship with Google. Through our understanding of how Google works, we are able to ask for precise information by typing a couple of words into a box. In this transmission, the medium is our understanding of Google’s algorithm, which translates our thoughts into a couple of keywords and functions as a filter.

Therefore, post-digital media did not only simulate previous functions of mediums (simulating the book function in the case of e-books, for example); the characteristics particular to them have also led to the generation of unique media forms. The hashtag, for example, despite being a derivation of text, is particular to social media platforms.




How the Transmission Model Works in the Dialogic Communication Ritual

Language and metaphors dictate perception by providing cognitive frames for making sense of the world. The transmission model of communication is one of those conceptualizations, formulated by Shannon to understand communication. This particular understanding has dominated the discussion about communication and information. Shannon stated that “the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”[1] The fundamental problem with Shannon’s model, its exclusion of the semantic dimension of communication, would probably be resolved by using the term “signal” instead of the term “message.”

However, I would like to take a step back from this criticism. Rather than trying to situate meaning in Shannon’s model, and discussing how it doesn’t fit, I would like to understand what the transmission model illuminates about communication. All of the discussion about the transmission model describes how the model does not consider meaning, but there is no discussion about where the meaning lies. Let’s consider the case of an e-mail, and try to locate meaning. The example of e-mail is particularly explanatory here, because a temporal dimension is added to communication in e-mails, unlike face-to-face communication or a telephone call. Communication technologies did not only enable communication across distant places; they also introduced a temporal dimension. Communication is not necessarily instant anymore; I may read an e-mail days after it was sent. Communication, however, is still a dialogic practice regardless of the time and space introduced by new technologies.

In an e-mail, something is transmitted through the Internet network. In the case of e-mail communication, let’s apply Shannon’s model not between me and the person who is e-mailing me, but rather between our e-mail boxes. In this case, the transmission model describes the process between 1) a person sending an e-mail to me and 2) that e-mail appearing in my inbox. As an e-mail arrives in my inbox, the transmission model is completed. My inbox receives a signal. The transmission model, in this case, works well in describing the signal duplication process. A documented “difference” is transmitted, and replicated through interfaces.

The duplication of signals, letting me see a person’s e-mail on my computer’s interface, can be excluded from communication. If we do not let the temporal breaks and spatial distances enabled by communication technologies misguide our understanding, we can still employ Carey’s[2] ritualistic approach to communication, external to the transmission model.

Meaning, on the other hand, lies in the sequence of letters. It is a certain use of the alphabet that determines the meaning, and as long as the sequence of letters in an e-mail is reproduced, Shannon’s challenge, “reproducing at one point either exactly or approximately a message selected at another point,” is accomplished. The transmission model describes a successful transmission process, which has created an interface for a possible dialogic communication process to happen.
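This reading of the transmission model can be sketched as a toy pipeline in Python (my own illustration, not Shannon’s formalism): the channel reproduces a sequence of signals exactly, while meaning never enters the process at all.

```python
def encode(message):
    # Sender's transmitter: letters -> signal (a sequence of bytes)
    return message.encode("utf-8")

def channel(signal):
    # A noiseless channel simply reproduces the signal
    return signal

def decode(signal):
    # Receiver: signal -> the same sequence of letters
    return signal.decode("utf-8")

sent = "Meet me at noon"
received = decode(channel(encode(sent)))

# The sequence of letters is reproduced exactly; whether those words
# mean anything to the receiver is outside the model entirely.
assert received == sent
```

Nothing in the pipeline inspects or interprets the message: the model succeeds as soon as the sequence is replicated, leaving the dialogic, meaning-making process to happen afterward, at the interface it creates.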


[1] Shannon, C. E. (2001). A mathematical theory of communication. ACM SIGMOBILE Mobile Computing and Communications Review, 5(1), 3-55.

[2] Carey, J. W. (1989). A cultural approach to communication. In Communication as Culture: Essays on Media and Society (Rev. ed.). New York and London: Routledge.

Dialogic Domination of the “Afghan Girl” Portrait

This week, the readings made me consider whether there is anything sociological in the meaning-making process, and whether any sociological insight can feed back into the dialogic-generative structure. Is there any relationship between verbal capacity and the cultural encyclopedia? Can we observe communities, or maybe social networks, in the form of political fragments, with cultural encyclopedias particular to them? This analysis of a cultural artifact will, however, try to understand what is political about a cultural encyclopedia, and how the political might affect the dialogic.

I will focus on a photograph in this analysis to address this issue.

While music, for example, is seen as an art form that builds on the human symbolic faculty, there are no minimal constituents in photography that establish an association between language and photography. Therefore, photography is holistic. The intuitive and reflexive moment corresponds to the moment an image is documented.

Pixels, or whatever is observed in an image, can all be considered constituents, but there is no generative principle that functions as a rule to acknowledge or define a photographic genre. Therefore, photography is more likely to be governed by creative principles (like the rule of thirds, forming your own color palette, or having your own way of not having a color palette) than by generative principles. The photographer instead searches for a mental image, an idea, a remix of certain colors, patterns, and elements, in a single capture.

Now let’s consider the Mona Lisa of photography: the “Afghan Girl” portrait, by Magnum photographer Steve McCurry.

[Photograph: “Afghan Girl,” Steve McCurry]

While this photograph has become an icon, alongside images of Che Guevara and Marilyn Monroe, the girl herself is not famous. There is a reason her portrait became such an iconic photograph, however: she looks exactly like our image of an Afghan Muslim girl. The girl looks poor, primitive, veiled, disheveled; everything the West expects an Afghan girl to be.

The construction of the image of the Afghan girl would normally be no different from the dialogic process as described by Bakhtin: a performance of a symbol recreates and transforms that symbol. If I say a word, I am regenerating its meaning with regard to the cultural encyclopedia I inherit.

However, how is this dialogic process affected when the image is globally distributed? The photograph gains such weight in the dialogic system that it distorts the organic direction of meaning. The impact of this photograph is global, and it anchors what an Afghan girl is and can be. Photography, therefore, can be a stereotyping machine, locking a moment from a certain perspective. Considering the photojournalistic landscape that social photography resides in, the question is not why we were exposed to the Afghan girl, but rather why we were not exposed to other photographs. If done successfully, photography has the power to manipulate the dialogic process; in this case, the formation of the image of the “Afghan girl.”