Category Archives: Week 1

Leaving for Paris

Elizabeth-Burton Jones

In order to assess the lifestyle of African Americans in Paris, I am going to Paris.


I will stay in Paris from August 7th until August 20th. During this time, I will go to museums, interview performers, attend jazz concerts, and tour the areas that African Americans called home. I am going to Paris for a number of reasons. First, to see whether the myth of the liberated self in Paris is true. I have heard countless stories about various African Americans who felt an enormous weight lifted off of them as soon as they reached Paris (this “weight” is not confined to any period of time). Also, because I do not know anyone in the Paris Exodus research field, I figured that this would be the most opportune time to begin my research. I am also going to Paris to see how African Americans are being remembered. I will tour a few of the places where African Americans went when they reached Paris and find out whether they truly left their mark. In addition, I will interview diverse Parisians and various musical artists and ask them about their perspectives on African Americans and Paris.

I am very excited about this journey, yet nervous because I am making these connections on my own. I am contacting many people whom I have never met before, and we are meeting for the first time in Paris. However, I am sure that this trip will grant me tremendous insight into what it is like to be an African American in the City of Light.


I am going to Paris to materialize my research.

Further Questions:

How are the African Americans from the Exodus remembered via current branding techniques?

Are the buildings in which African Americans opened establishments (restaurants, etc.) still intact?

Unequal Opulence

Elizabeth-Burton Jones


The Roaring Twenties were supposedly a time to multiply every experience to the max. If you were going to gild the rose, do so with extra panache. If you were going to wear makeup, why not shave your eyebrows off and draw them back on? Paint your lips red. Shellac your hair to your scalp. Wear diamonds and more diamonds.


Everything was plush and in some ways careless. Yet this image of fun times without consequences seldom matches any reality. In fact, not everyone was doing the Charleston without a care; there was still injustice in this time. This injustice is hard to uncover, but it was prevalent.

Conspicuous Consumption:                                                        

At the end of the 1800s, Thorstein Veblen wrote a piece that described the ever-growing culture of conspicuous consumption. Essentially, this culture included wealthy white men who could distinguish their wealth from others’ through food selection, entertainment, and the status of their households.

“Conspicuous consumption of valuable goods is a means of reputability to the gentleman of leisure. As wealth accumulates on his hands, his own unaided effort will not avail to sufficiently put his opulence in evidence by this method. The aid of friends and competitors is therefore brought in by resorting to the giving of valuable presents and expensive feasts and entertainments. Presents and feasts had probably another origin than that of naive ostentation, but they acquired their utility for this purpose very early, and they have retained that character to the present; so that their utility in this respect has now long been the substantial ground on which these usages rest. Costly entertainments, such as the potlatch or the ball, are peculiarly adapted to serve this end” (Veblen 75).

Throwing lavish parties is just one way the author describes the culture of conspicuous consumption, a culture where status trumps all other worries. Veblen also mentions a desire to be honored: “Unproductive consumption of goods is honourable, primarily as a mark of prowess and a perquisite of human dignity; secondarily it becomes substantially honourable to itself, especially the consumption of the more desirable things” (Veblen 69). This honor continued the vicious cycle of consumption, because the more people wanted to be dignified, the more they had to consume to keep up with the Joneses. Veblen also notes that “if these articles of consumption are costly, they are felt to be noble and honorific” (Veblen 70). The consumers thereby create a distinction between their goods and their class, and this distinction is what matters.

Veblen also spends a great deal of time explaining whom these people are trying to distinguish themselves from; the answer is the worker (typically the working class). “In what has been said of the evolution of the vicarious leisure class and its differentiation from the general body of the working classes, reference has been made to a further division of labour, — that between the different servant classes” (Veblen 68). One way to show distinction is through dress and uniforms. Veblen mentions that “the wearing of uniforms or liveries implies a considerable degree of dependence, and may even be said to be a mark of servitude, real or ostensible. The wearers of uniforms and liveries may be roughly divided into two classes — the free and the servile, or the noble and the ignoble” (78). This strengthens the argument about class distinctions. In the rest of Chapter 4, Veblen describes the role of women at the time and their position within the consumption spectrum. For the purposes of this paper, however, it is the vicious culture of consumption that is necessary to review. Even though this piece was written in 1899, it still touches the surface of my study of the African American exodus to Paris.

Opulent Observers:

From the early 1900s to the 1930s there was definitely a culture of spending money without a care. Whenever people summarize the era known as the Roaring 20s, they often call it a careless time, a sinful time, a time for follies, or all of the above. But in this description, a vast majority of people are being marginalized. Not everyone could afford a life where the main goal was to achieve honor. That is why I included the Veblen reading in my studies: I wanted to know about the people on the other side of the class spectrum, who included African Americans.

An interesting depiction of this unequal opulence is revealed in the 2013 remixed version of “The Great Gatsby.” Even though inequality is not the film’s main focus, it is heavily highlighted through the actors’ distinct nonverbal actions. One scene in the 2013 film occurs when Tom Buchanan talks to his guests about the uprising of the minorities. During this scene Tom Buchanan is surrounded by African American butlers, and he unabashedly directs his comments about suppressing the African American race to the waiters, straightening one butler’s tie as he does so. In response the butler does not say anything; rather, the butler wears an expression of suppression.

That scene was particularly memorable because of the way it was acted. In other versions of “The Great Gatsby,” such as the film starring Robert Redford, I do not recall noticing an overt distinction between the races, nor a huge distinction between the classes, particularly between Tom Buchanan and Nick Carraway. But in the 2013 film I continuously noticed the light and the dark. The differences covered a vast terrain (class, race, etc.), but each found its destination in the unattainable dream.


The point of the matter is that the opulence was not evenly spread across the masses and the races in America.

Further Questions:

Could African Americans experience this opulence in Paris?

When looking at the distribution of wealth during the roaring 20’s how did the marginalized live in an overly extravagant world?

How is this different according to race? Native versus immigrant?

Is conspicuous consumption a product of propaganda? Does this consumption work today?

Is conspicuous consumption more than a means to achieve status?

Could it be a side effect of fulfilling an emotional or psychological void?


Veblen, Thorstein. “Conspicuous Consumption.” Chapter 4 in The Theory of the Leisure Class: An Economic Study of Institutions. New York: The Macmillan Company, 1899. 68-101.


AO-Week 1: Re-mediation through Google Art

Jay David Bolter and Richard Grusin are concerned with the reproduction of media objects, a phenomenon which they refer to as remediation.  In Remediation: Understanding New Media (2000), Bolter and Grusin argue that “new media are doing exactly what their predecessors have done: presenting themselves as refashioned and improved versions of other media… what is new about new media comes from the particular ways in which they refashion older media and the ways in which older media refashion themselves to answer the challenges of new media” (14-15).

Bolter and Grusin identify the “double logic of remediation;” that is, “our culture’s [desire] both to multiply its media and to erase all traces of mediation” (5). This double logic rests on two main principles: immediacy and hypermediacy. “Immediacy dictates that the medium itself should disappear and leave us in the presence of the thing represented: sitting in the race car or standing on a mountaintop” (6). As the authors point out, this aspect of remediation is not a novel invention brought about by digital media; painting, photography, and computer systems for virtual reality all “seek to put the viewer in the same space as the objects viewed” (11). Hypermediacy works in opposition to immediacy, revealing the mediation by combining multiple forms of media into a single media object; “hypermediated forms ask us to take pleasure in the act of mediation” (14).

Immediacy can be understood by considering the ubiquity of the graphical user interface (GUI). “Immediacy is meant to make the computer interface ‘natural’ rather than arbitrary… the desktop metaphor, which has replaced the wholly textual command-line interface, is supposed to assimilate the computer to the physical desktop and to the materials (file folders, sheets of paper, inbox, trash basket, etc.) familiar to office workers. The mouse and pen-based interface allow the user the immediacy of touching, dragging, and manipulating visually attractive ideograms” (23). The authors speculate about the emergence of three-dimensional versions of this interface; Google Art fulfills this speculation. Google Art offers a “museum view” allowing audiences to virtually navigate the three-dimensional space of New York City’s Museum of Modern Art (MoMA). View “Starry Night” by Vincent van Gogh in museum view here. Simply click on the area of the floor to which you would like to move and watch as your view changes to reflect your new location within the virtual space. The point of view presented in this virtual environment is meant to reproduce the view that museum visitors experience when standing in the physical gallery. The interface strives to be as natural as possible: you simply point and click in the direction you wish to move and select icons to view information about the artworks on display. Google Art’s museum view provides an excellent example of the immediacy of new media.

The authors contrast immediacy with hypermediacy, saying: “In digital technology, as often in the earlier history of Western representation, hypermediacy expresses itself as multiplicity. If the logic of immediacy leads one either to erase or to render automatic the act of representation, the logic of hypermediacy acknowledges multiple acts of representation and makes them visible. Where immediacy suggests a unified visual space, contemporary hypermediacy offers a heterogeneous space, in which representation is conceived of not as a window on to the world, but rather as “windowed” itself — with windows that open on to other representations or other media. The logic of hypermediacy multiplies the signs of mediation and in this way tries to reproduce the rich sensorium of human experience…. Hypermedia makes us aware of the medium or media and… reminds us of our desire for immediacy” (34).

Google Art’s museum view also exhibits qualities of hypermediacy. Look again at the museum view of MoMA. Notice that in the new tab that opens, the window is split into several sections. Across the top is a menu with hyperlinks to important pages and information, beneath which is the page header with the title of the artwork, the artist’s name, and the date of creation. The main portion of the window is split into three sections: a map of the museum floor plan on the left, an icon toolbar in the center, and a three-dimensional virtual interface on the right. The footer includes yet another menu with hyperlinked information. This one window displays several types of media: text, hypertext, digital graphics, and 3-D virtual reality. Each medium is represented in a way that reflects our cultural desire for immediacy, encouraging us to interact with the digital environment in a natural way. The hypermediacy of the environment is revealed when we consider the entire window, the sum of these media in a single media object (the window interface). No effort is made to conceal the media, only to organize them in a way that is functional and visually appealing; audiences are aware of the media represented within the window.


In Hamlet on the Holodeck, Janet Murray considers the role of digital media in the realm of literature. Murray argues that computers have the potential to completely reshape the way narratives are consumed, and she often points to video games as a strong starting point for the combination of narrative and digital media. One of her most notable observations concerns agency: at what point does the user become the author of a digital work, one where manipulation is more than possible? On page 152 she states, “They build simulated cities, try out combat strategies, trace a unique path through a labyrinthine web, or even prevent a murder, but unless the imaginary world is nothing more than a costume trunk of empty avatars, all of the interactor’s possible performances will have been called into being by the original author.” She is essentially stating that players can manipulate the initial creation only as much as an actor on stage can: an actor can forget their lines, but they will have trouble creating a new character off the top of their head. The author still creates the framework for what is created. Simply put, “The interactor is not the author of the digital narrative, although the interactor can experience one of the most exciting aspects of artistic creation – the thrill of exerting power over enticing and plastic materials. This is not authorship but agency” (Murray 153).

The use of video games to create digital narratives could be incredibly beneficial to the English field, especially for those studying Shakespeare. While many young people find Shakespeare completely inaccessible, breaking his works down into something more familiar, like a video game, could make them easier to understand. The idea of creating educational Shakespeare games has been explored, but with few real results. One webpage that had both stable financial support and a decent product was completely shut down. Why isn’t this working? Are schools slow to accept these practices? Are the games not reaching the target user?

I was able to find one game, not really educational, based on Shakespeare’s works. In “Romeo,” we meet our hero, Romeo, who is trying to save Juliet. The game is set up as a journey story but does not actually create a story: Romeo has to traverse an environment similar to “Super Mario Bros. 3” searching for roses, and at the end of every level you run into a cartoon Shakespeare who tells you, “Congratulations, you have completed another level.” The game is not faithful to the narrative of Romeo and Juliet: in the game, Romeo actually saves Juliet and they live happily ever after, yet the tragic ending is the very aspect of the story that makes it such an iconic piece of literature.


Narratives and Semiotics

Today I have more definitions, this time from Seymour Chatman’s Story and Discourse: Narrative Structure in Fiction and Film. Chatman breaks down structuralist narrative theory as follows: a narrative has two parts, a story and a discourse. The story is the chain of events (actions and happenings) plus the “existents,” which are the characters, the setting, etc. The discourse is the means by which the content is communicated (Chatman). He simplifies it further by describing it as the “what” versus the “how.”

Chatman asks whether narrative can be semiotic, meaning whether it can communicate something on its own apart from the story. For narrative itself to be semiotic, he argues, it must contain a “form and substance of expression” and a “form and substance of content.” In this vein, the story is the content and the discourse is the form of expression.

Signifieds: event, character, detail of setting.

Signifiers: elements in a narrative statement that can stand in for any of those signifieds: any kind of physical or mental action, any person, any evocation of place (corresponding, respectively, to the signifieds above).

Chatman contends that narrative structure imparts meaning on its own by providing the three categories above. By providing eventhood, characterhood, or settinghood, a meaningless text becomes understood. He describes a cartoon animated from lines and dots, which have no meaning on their own; they are just geometric symbols. But by animating them, character starts to emerge, and by adding a setting or a series of events, the meaning can be understood without overt vocal narration. The structure alone provides enough narration.

(A really handy diagram Chatman lays out that I think will be helpful for this research going forward this summer. Pg. 26)

The post structuralist approach casts a wider net. It would require studying not just the narrative structure itself or the story, but also the systems of knowledge that produced that work. I’d like to explore this a little bit more, so I’m looking into a few more sources on post-structuralism.

A method for the quantitative mapping of change

By Eric Cruet

The problem of change [1]. Much of mankind’s preoccupation has been with changes in science, technology, sociology, and economics. More recently, we seem to be concerned with variations in climate, global financial states, the effect of technology on society, and the increasing use of unlawful violence intended to coerce or intimidate governments or societies, i.e., terrorism.

Traditionally, network, graph, and cluster analysis are the mathematical tools used to understand specific instances of the data generated by these scenarios at a given point in time. But without methods to distinguish real patterns from statistical error, which can be significant in large data sets, these approaches may not be ideal for studying change. By resampling the weighted links of individual networks, we can distinguish meaningful structural differences from random fluctuations [3].

Alternatively, a bootstrap technique [2] can be used when there are multiple networks, arriving at an accurate estimate by resampling the empirical distribution of observations for each network. In the case of a single network, resampling can be accomplished by using a parametric model to fit the link weights without undermining the individual characteristics of the nodes. Using this technique, we can determine cluster significance and also estimate the accuracy of the summary statistics (μ, σ, ρ) based on the proportion of bootstrap networks that support the observation.
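As a concrete illustration of the single-network case, here is a minimal Python sketch of one parametric bootstrap replicate. It assumes integer link weights that can be modeled as Poisson counts (an assumption for illustration; the actual fitting procedure in [3] may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def parametric_bootstrap(edges, rng):
    """Draw one bootstrap replicate of a weighted network.

    `edges` is a list of (u, v, weight) tuples. Each resampled weight
    is drawn from a Poisson distribution whose mean is the observed
    weight -- a simple parametric model for count-valued links that
    preserves each node's individual connection pattern.
    """
    return [(u, v, int(rng.poisson(w))) for (u, v, w) in edges]

# hypothetical observed network
observed = [("A", "B", 20), ("B", "C", 3), ("C", "D", 18)]
replicate = parametric_bootstrap(observed, rng)
```

Repeating this draw many times yields the “bootstrap world” of networks against which cluster assignments can be tested.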




The standard procedure for clustering a network is to minimize an objective function over possible partitions (left side of the diagram). By resampling the weighted links of the original network, a “bootstrap world” of resampled networks is created. These are then clustered and compared to the clustering of the original network (second row, right side). This provides an estimate of the probability that a node belongs to a specific cluster; the result is a “significance clustering” [3]. For example, in the diagram above, the darker nodes (bottom of the diagram) are clustered together in at least 95% of the 1,000 bootstrap networks. Several public-domain algorithms exist to automate most of these tasks.

Finally, once a significance clustering has been generated for the network at each point in time, an alluvial diagram is used to reveal the trends in the data. An alluvial diagram (bottom of the picture) orders the clusters by size and reveals changes in network structure over time [3]. Please refer to the diagram below:

As you can see from the alluvial diagram, from time 1 to time 2 the condition scenario represented by ORANGE clustered with the condition scenario represented by PINK. This clustering was the result of some underlying change that was not obvious at time 1. The bootstrap and cluster analysis thus allowed the change to be mapped quantitatively.
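The content of an alluvial diagram reduces to counting how nodes flow between clusters at successive time points. A minimal sketch, with hypothetical cluster labels matching the colors above:

```python
from collections import Counter

def alluvial_flows(labels_t1, labels_t2):
    """Count how many nodes move from each cluster at time 1 to each
    cluster at time 2; these counts are the widths of the streams in
    an alluvial diagram."""
    return Counter((labels_t1[n], labels_t2[n]) for n in labels_t1)

# hypothetical significance-cluster assignments at two time points
t1 = {"a": "orange", "b": "orange", "c": "pink", "d": "blue"}
t2 = {"a": "orange", "b": "orange", "c": "orange", "d": "blue"}
flows = alluvial_flows(t1, t2)
# flows[("pink", "orange")] == 1 captures the pink node merging into orange
```

Plotting libraries only draw these (source cluster, target cluster) counts as ribbons ordered by cluster size.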

The model can be used in a variety of scenarios: to map changes in global weather patterns, US migration flows from state to state based on various factors (employment, housing prices, education, income per capita), variations in the federal funds market in response to major events [3], and global targets of terrorist activity.

But my main area of interest is illustrating the method by applying it to map change in the structure of science [4]. Stay tuned. I conclude with a rather lengthy but appropriate and relevant quote.

From Michel Foucault’s The Order of Things:

The problem of change.  It has been said that this work denies the very possibility of change. And yet my main concern has been with changes. In fact, two things in particular struck me: the suddenness and thorough­ness with which certain sciences were sometimes reorganized; and the fact that at the same time similar changes occurred in apparently very different disciplines. Within a few years (around 1800), the tradition of general grammar was replaced by an essentially historical philology; natural classifications were ordered according to the analyses of comparative anatomy; and a political economy was founded whose main themes were labour and production. Confronted by such a curious combination of phenomena, it occurred to me that these changes should be examined more closely, without being reduced, in the name of continuity, in either abruptness or scope. It seemed to me at the outset that different kinds of change were taking place in scientific discourse – changes that did not occur at the same level, proceed at the same pace, or obey the same laws; the way in which, within a particular science, new propositions were pro­duced, new facts isolated, or new concepts built up (the events that make up the everyday life of a science) did not, in all probability, follow the same model as the appearance of new fields of study (and the frequently corresponding disappearance of old ones); but the appearance of new fields of study must not, in turn, be confused with those overall re-dis­tributions that alter not only the general form of a science, but also its relations with other areas of knowledge. 
It seemed to me, therefore, that all these changes should not be treated at the same level, or be made to culminate at a single point, as is sometimes done, or be attributed to the genius of an individual, or a new collective spirit, or even to the fecundity of a single discovery; that it would be better to respect such differences, and even to try to grasp them in their specificity. In this way I tried to describe the combination of corresponding transformations that char­acterized the appearance of biology, political economy, philology, a number of human sciences, and a new type of philosophy, at the threshold of the nineteenth century.


[1] Foucault, M. (2002). The order of things. Routledge.

[2] Palla, G., Derényi, I., Farkas, I., & Vicsek, T. (2005). Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435(7043), 814-818.

[3] Rosvall, M., & Bergstrom, C. T. (2010). Mapping change in large networks. PLoS ONE, 5(1), e8694.

[4] de Solla Price DJ (1965) Networks of scientific papers. Science 149: 510–515. doi:10.1126/science.149.3683.510.

Note: Dragon Dictate was used as a speech-to-text transcriber for a portion of this document. Although I make every effort to proofread the postings, any unusual syntax, lexicon, or semantic error in language should be attributed to my lack of attention and the immaturity of this technology.

Expectations and Representation

My second background reading, Alone Together: Why We Expect More from Technology and Less from Each Other, tied in with the Hayles reading I discussed in my last post primarily because of the author’s focus on the human aspect of technological development. Sherry Turkle, the author, came at the issue from more of a psychological perspective. Alone Together is an anthropological look at how people interact with machines. It is this perspective that informs the statement, “We are shaped by our tools. And now, the computer, a machine on the border of becoming a mind, was changing and shaping us” (6). The first half of the book focuses on human interactions with robots, while the second half looks more at people’s digital lives and virtual worlds. She looks at these interactions to see how people’s expectations of others and their representations of themselves are changing.

Throughout the book, Turkle analyzes instances of robotic interaction and participation in virtual worlds. One primary focus is the purpose of the interaction. She mentions that “We are on the verge of seeking the company and counsel of sociable robots as a natural part of life. Before we cross this threshold, we should ask why we are doing so” (29). She analyzes the input people give intelligent machines and robots to give themselves the illusion of willful feedback. My research will not focus on robotics, but I mention this because the author makes a point of linking the idea of humans growing emotionally closer to robots to the idea that people paradoxically alienate each other, partially due to networking technologies. She calls these “fearful symmetries” (154). Her focus on this cannot be overstated; it is fundamental to the assumptions she makes in the book about how people psychologically want to engage in what she calls the “digital fantasy” (31).

The second half of this book will inform my research paper more, but she ties her theses together very closely. For example, she looks back at robotics in the second half when she states that “Nurturance was the killer app for robotics. Tending the robots incited our engagement. There is a parallel for the networked life. Always on and (now) always with us, we tend the Net, and the Net teaches us to need it” (142). She provides many examples, but I would imagine that most of us know the feeling of being constantly connected. Closest to my interest was how people present themselves online, and how they collaborate and structure their relationships in virtual worlds. She speaks to that when she says, “When part of your life is lived in virtual places—it can be Second Life, a computer game, a social networking site—a vexed relationship develops between what is true and what is ‘true here,’ true in simulation” (141). Directly related is her statement that “the life mix is the mash-up of what you have on- and offline. Now, we ask not of our satisfactions in life but in our life mix. We have moved from multitasking to multi-lifing” (148). She is very focused on the psychological and cultural reasons for engaging in a kind of virtual life.

I found a lot that will contribute to my research, but I still need to finish the book. I’m interested to see what her conclusions are, because her perspective seemed fairly pessimistic to me at times. I plan to examine and record her psychological analyses more thoroughly for my paper.

Background on Posthumanism

I’ve started my reading with How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics by N. Katherine Hayles. The author broadened my fairly limited perspective on posthumanism. I had previously considered it as necessarily including non-organic cybernetics, but the author argued against that assumption. She claims very early on that posthumanism’s “defining characteristics involve the construction of subjectivity, not the presence of non-biological components” (4). This made it clear to me that cybernetics is only one aspect of the posthuman. In the author’s discussion of the cybernetic aspect of the posthuman she said that “a common theme is the union of the human with the intelligent machine” (2). This was more what I had thought when I’d heard about posthumanism, but I typically focused on the machine aspect of the combination. This was another assumption the author addressed fairly immediately.

Hayles argues for the reincorporation of the body into the cybernetic discussion, saying that “consideration needs to be given to how certain characteristics associated with the liberal subject, especially agency and choice, can be articulated within a posthuman context” (5). Another assumption she dissects is that of information as “a (disembodied) entity that can flow between carbon-based organic components and silicon-based electronic components to make protein and silicon operate as a single system” (2). A third assumption she addressed that I found fascinating relates to the field of artificial life. The example was whether a computer program made to imitate an organic creature, with the capacity to evolve in unexpected ways, could be called a life form. The assumption that would have to be in place for that thinking, she says, is that the universe is composed essentially of information. Related to this is the theory that human consciousness can be entirely transferred to a machine.

I’m focusing on the first chapter because it discusses many assumptions and perceptions I hadn’t thought about before. She outlines what she will discuss later in the book and makes it clear that she is not writing a history of cybernetics; her interests lie more in the direction of the co-existence of the human and the posthuman. One last thing that I think will help me going forward is what the author calls her “strategic definition of virtuality” (13): “Virtuality is the cultural perception that material objects are interpenetrated by informational patterns” (13-14). All of this will further inform my research if I focus on the human-computer interaction aspect. Gaining a basic understanding of some broader assumptions and fields was a good first step.


This week I’m just starting to dive into a discipline I don’t know much about in the academic sense: narratology. While I’m a voracious consumer of books, television shows, films, and plays, I haven’t studied the idea of narrative itself. I’m interested in the way narratives can play out through different mediums, versions, or adaptations, and in how the same basic narrative can be used over and over again in different iterations or by following the same formula. Since I’m still getting my feet wet, I figured I could lay out some of the definitions I read in Mieke Bal’s seminal work, Narratology: Introduction to the Theory of Narrative. These are Bal’s definitions.

Narratology: the ensemble of theories of narratives, narrative texts, images, events, cultural artifacts that tell a “story” – helps to understand, analyze, and evaluate narratives.

Text: finite, structured whole composed of signs. Signs can be linguistic like words or sentences, but also cinematic shots and sequences or painted dots and lines.

I thought this was an interesting distinction that Bal makes: “The finite ensemble of signs doesn’t mean that the text itself is finite, for its meanings, effects, functions, and background are not.” This just means that the text has a beginning and end; a first and last word, or a first and last frame.

Narrative text: a text in which an agent conveys to an addressee a story in a particular medium (words, imagery, sounds, etc.)

Story: the content of the narrative text; it produces a particular manifestation, inflection, or “coloring” of a fabula.

Fabula: a series of logically and chronologically related events that are caused or experienced by actors (agents that perform actions, not necessarily human). Events, actors, and time are all elements of the fabula. Actors are given distinctive traits, which form their character. A choice is made as to whose point of view the events are presented from, which is described as focalization.

From here, the fabula needs to be conveyed through a medium with signs for it to be a narrative text. The agent that relates the signs is considered the narrator.

Bal says that there are three layers of a narrative text: text, story, fabula. Text is what a reader sees first, because the fabula needs to be processed in order to be understood. I’m still trying to work through this a little more to get the terminology right; there are a lot of elements that Bal breaks down throughout her work. The quick searching I’ve done on Bal has her, and this work specifically, labeled as structuralist. Structuralism seems to be a theory that stresses the whole over the sum of the parts, or the idea that everything is interconnected. It also posits that there must be structure in a text, when it comes to literary theory at least, which makes it easier for experienced readers to understand. However, opponents of structuralism claim that it can be too reductive and that structuralists understand stories in too formulaic a way. This would mean that adaptations that are almost completely different stories would be unoriginal. I’m interested to dive a little deeper into structuralism as well as its counterpoint.


What is remix?

Defining Remix

When one is trying to understand what remix is, he or she will find many different interpretations of the word.  Two of the most prominent voices in this argument are Lawrence Lessig and Eduardo Navas. Lessig and Navas seem to agree that there must be an original for there to be a remix.

Lessig speaks in depth about ownership rights over the pieces that are made from the original.  Lessig uses the terms “Read/Write” (RW) Culture and “Read Only” (RO) Culture.  He argues there are positive and negative outcomes to each.  In RW Culture, creativity is encouraged and more freedom is attainable.  In RO Culture, the creator of the original is recognized.  He says the major problem facing remix culture is that the question is no longer how we can nurture creativity, but how profit can be maximized.  Finally, he writes that we should be more concerned with protecting distribution channels than with copyright.

Navas writes, “Today, Remix (the activity of taking samples from pre-existing materials to combine them into new forms according to personal taste) has been extended to other areas of culture, including the visual arts; it plays a vital role in mass communication, especially on the Internet” (Navas).  He goes on to explain remix in reference to music, saying it is a “reinterpretation” of a song which already exists, one that still maintains the aura of the original.  Navas very specifically points out the role technology plays in remix.

It is my understanding that Navas would not consider this comic book a remix because it is not a digital entity. Yet, Navas would say this film is a remix because it is a digital entity.

For the purposes of this course, I will take Lessig’s broader interpretation of a remix to be true.  I believe something is a remix if the original’s aura is still recognizable, yet there has been a distinguishable change.  I think these changes can best be understood using Navas’ extended remix, selective remix, reflexive remix, or regenerative remix. If the item I am studying does not fall into one of these four categories, then it shall not be considered a remix.

The remixes I create will all be textually based, but I will study historical remixes of Shakespeare done in all media. I want to study whether there are any economic challenges to these remixes, like dealing with copyright.

On Thursday, I will further study Murray’s Hamlet on the Holodeck and any historical reactions to Shakespeare’s works.

Navas, Eduardo. “Remix Theory » Remix Defined.” Remix Theory RSS. N.p., n.d. Web. 21 May 2013.
Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin, 2009. Print.
“Sample Pages.” Shakespeare Sample Pages Comments. N.p., n.d. Web. 21 May 2013.
“The Royal Shakespeare Company Presents: Star Wars.” YouTube. YouTube, 04 May 2013. Web. 21 May 2013.

AO-Week 1: Reproduction through Google Art

In his essay “The Work of Art in the Age of Mechanical Reproduction” (1936), Walter Benjamin discusses the implications of technology on art and society, tracing the transition of art from a cult(ural) object created for the contemplative few to a political object distributed to the masses. Authenticity and the concept of “an original” are integral to Benjamin’s argument: “The authenticity of a thing is the essence of all that is transmissible from its beginning, ranging from its substantive duration to its testimony to the history which it has experienced.”  Benjamin attributes a sense of authority to authentic artworks, saying, “that which withers in the age of mechanical reproduction is the aura of the work of art.” Reproductions alter perceptions of space/place/time and can often reveal things about the original that were not visible or noticed with the naked eye. Reproductions also allow for greater audiences to experience a version of the original that would not be possible otherwise.  These two statements can be demonstrated through examining the GoogleArt Project. 

The GoogleArt Project provides a platform for viewing high-resolution reproductions of famous works of art from around the globe. Viewers are often presented with flattened images of multi-dimensional artworks, for example this mural of Anthony and Cleopatra by Rene Antoine Houasse. The original was painted in 1860 on the ceiling of the Venus Salon at the Palace of Versailles in France; the GoogleArt image erases the context of the painting and alters the viewer’s perceptions of space and place. The image as it appears on a computer screen can vary somewhat in size, but it cannot accurately match the nearly 10-foot-wide and 7-foot-high original painting.

Additionally, viewers can zoom in on sections of interest. In his article, Benjamin uses the medical metaphor of the magician and the surgeon to describe the change in the relationship between the artwork and the audience. “The magician heals a sick person by the laying on of hands; the surgeon cuts into the patient’s body. The magician maintains the natural distance between the patient and himself….he greatly increases [this distance] by virtue of his authority. The surgeon does exactly the reverse; he greatly diminishes the distance between himself and the patient by penetrating into the patient’s body.” Traditional artworks such as the ceiling murals at Versailles are the work of the magician, maintaining the distance and authority of the original artwork.  The virtual reproduction of Anthony and Cleopatra allows the viewer to use the zoom tool as a scalpel, mimicking the surgeon and cutting into the artwork.

Finally, GoogleArt expands the reach of the original artwork by providing a digital reproduction that is accessible to viewers around the world through the internet. In the past, technical reproductions relied on creating large quantities of copies to reach such a large audience, so much so that Benjamin suggested that “quantity has been transmuted into quality.” GoogleArt seems to offer a digital reproduction with the goal of preserving a sense of authenticity rather than destroying it.  As museums agree to grant Google unique access to reproduce and distribute their artworks as high-res images, it is likely that these images will come to complement – and, in some cases where great geographic distances prohibit an immediate physical experience, stand in for – the original artwork. GoogleArt offers universal access (substituting quantity) to quality reproductions of revered works of art.

Sentiment Analysis/Appraisal Theory

By Eric Cruet

Opinions are like elbows; everyone has two of them for every topic.  Scientifically, however, opinions are very difficult to examine.  Of late, the computational linguistics community has recognized the value in extracting, mining, and analyzing opinions from bulk text found on social media sites.  Sentiment analysis is the task of having computers use machine learning algorithms to perform these tasks automatically and attempt to classify the opinions into “emotions”.

Computational approaches to sentiment analysis focus on extracting the affective content of a text by detecting expressions of sentiment.  Each expression is assigned a value representing the corresponding positive, negative, or neutral sentiment towards a specific issue.  For example, using information retrieval, text mining, and computational linguistics, one can classify opinions using the Support Vector Machines classification algorithm with a “bag of sentiment words”.  This technique was very popular for movie review classification.  In a bag of words technique, the classifier identifies single-word opinion clues and weights them according to their ability to help classify reviews as positive or negative (the number of times they appear).  So the word “sucked” (as in “the movie sucked”) would have a higher weight than the word “ok” (as in “the movie was ok”).
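
The weighted bag-of-words idea can be sketched in a few lines of Python. Note this is a simplified log-ratio weighting, not the Support Vector Machine classifier described above, and the tiny training “reviews” are invented purely for illustration:

```python
from collections import Counter
import math

# Toy labeled reviews (hypothetical); a real system would train on a
# large corpus of movie reviews.
positive = ["a great and moving film", "the acting was great", "ok but the ending was great"]
negative = ["the movie sucked", "boring plot and the acting sucked", "it was merely ok"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

pos_counts, neg_counts = word_counts(positive), word_counts(negative)
vocab = set(pos_counts) | set(neg_counts)

# Weight each word by the smoothed log-ratio of its frequency in
# positive vs. negative reviews: strong clue words like "sucked"
# get large (negative) weights, weak ones like "ok" stay near zero.
weights = {w: math.log((pos_counts[w] + 1) / (neg_counts[w] + 1)) for w in vocab}

def score(review):
    """Sum the weights of the review's words; > 0 suggests positive."""
    return sum(weights.get(w, 0.0) for w in review.split())

print(score("the film was great"))  # positive score
print(score("the film sucked"))     # negative score
```

An SVM would learn the word weights jointly from labeled examples rather than from simple frequency ratios, but the feature representation (one weight per word, no word order) is the same.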


There are obviously many opinion scenarios that this classification technique will not address.  For instance, it cannot account for the effect of the word “not”, which turns a review of “good” into “not good”, thereby reversing a positive sentiment into a negative one.  It also cannot account for more complicated sentiments, e.g. “I wish the movie was in 3D.”

The tasks described above fall under sentiment classification.  To handle more complicated sentiment tasks, a more appropriate technique is structured opinion extraction.

The goal of structured opinion extraction is not only to extract individual opinions from text, but also to break those opinions into parts so that the subcomponents can be used by sentiment analysis applications.  A typical formulation identifies product features and the opinions expressed about those features.

One way to accomplish this is with an appraisal expression: a basic grammatical structure expressing a single evaluation, based on linguistic analysis of evaluative language, that aims to capture the full complexity of opinion expressions.  Most existing work and corpora in sentiment analysis have considered only three parts of an appraisal expression: evaluator, target, and attitude.  However, Hunston and Sinclair’s [2] local grammar of evaluation demonstrated the existence of other parts of an appraisal expression that can also provide useful information about the opinion when they are identified.  These parts include superordinates, aspects, processes, and expressors.  Consider these annotated examples:

[Evaluator I] [Attitude love] [Target it when she walks to me and smiles].

[Target He] is [Attitude one mean] [Superordinate bastard], said [Evaluator the employee].


Extracting appraisal expressions is an essential subtask in sentiment analysis because it provides sentiment words that can help define the features used by many higher-level applications.  As stated in cognitive appraisal theory, we decide what to feel after interpreting or explaining what has just happened.  Two things are important here: whether we interpret the event as positive or negative, and what we believe is the cause of the event.  The resulting classification of appraisal expressions allows for finer granularity in the application of quantitative methods, so that the results more closely represent what is being measured.
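
As a rough illustration, the parts of an appraisal expression could be carried in a simple record type. The field names below follow the parts named above, but the structure itself is my own sketch, not a standard format from the literature:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppraisalExpression:
    """One evaluation extracted from text, broken into its parts.

    The core three parts (evaluator, attitude, target) are required;
    the additional parts from Hunston and Sinclair's local grammar of
    evaluation are optional, since most expressions lack them.
    """
    evaluator: str                       # who holds the opinion
    attitude: str                        # the evaluative language itself
    target: str                          # what the opinion is about
    superordinate: Optional[str] = None
    aspect: Optional[str] = None
    process: Optional[str] = None
    expressor: Optional[str] = None
    polarity: int = 0                    # +1 positive, -1 negative, 0 neutral

# The first annotated example above, as a structured record:
expr = AppraisalExpression(
    evaluator="I",
    attitude="love",
    target="it when she walks to me and smiles",
    polarity=+1,
)
```

Downstream applications could then aggregate `polarity` per `target` rather than per document, which is the finer granularity the paragraph above describes.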







[1] Asher, N., Benamara, F., & Mathieu, Y. Y. (2009). Appraisal of opinion expressions in discourse. Lingvisticæ Investigationes, 32(2), 279-292.

[2] Hunston, S., & Sinclair, J. (2000). A local grammar of evaluation. Evaluation in Text: Authorial stance and the construction of discourse, 74-101.

[3] Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press, USA.

[4] Taboada, M., Brooke, J., Tofiloski, M., Voll, K., & Stede, M. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2), 267-307.