Category Archives: Week 2

Case Study 1: Eugene Bullard



For the following Case Studies, I will use the blog commons as a repository for information about the people and places. After I have listed some facts, I will write a “takeaway.”




Eugene Bullard is extremely important to my study of the African American exodus to France, because he was notably one of the African Americans who paved the way for other sojourners.

Here are some facts:

  • Most notably, he was the first African American military pilot and “one of only two black combat pilots in World War I.”
  • He was born in Columbus, Georgia. However, his journey truly began when, as a teenager, he stowed away on a ship headed to Scotland to escape racism.
  • “In Paris, Bullard found employment as a drummer and a nightclub manager at ‘Le Grand Duc’ and eventually became the owner of his own nightclub, ‘L’Escadrille’. He married Marcelle Straumann from a wealthy family in 1923, but the marriage ended in divorce in 1935, with Bullard gaining custody of their two surviving children, daughters Jacqueline and Lolita. As a popular jazz venue, ‘Le Grand Duc’ gained him many famous friends, including Josephine Baker, Louis Armstrong, Langston Hughes, and French flying ace Charles Nungesser. When World War II began in September 1939, Bullard, who spoke German, agreed to a request from the French government to spy on Germans frequenting his nightclub.”
  • “After the German invasion of France in May 1940, Bullard fled from Paris with his daughters. He volunteered with the 51st Infantry defending Orléans when he met an officer whom he knew from fighting at Verdun. He was wounded in the fighting but was able to escape to neutral Spain, and in July 1940 he returned to the United States.”
  • “Bullard spent some time in a New York hospital and never fully recovered from his wound. Moreover, he found the fame he enjoyed in France had not followed him to the United States. He worked as a perfume salesman, a security guard, and as an interpreter for Louis Armstrong, but his back injury severely restricted him. He attempted to regain his nightclub in Paris, but his property had been destroyed during World War II. He received a financial settlement from the French government, which he used to buy an apartment in Harlem, New York City.”
  • “In the 1950s, Bullard was a relative stranger in his own homeland. His daughters had married, and he lived alone in his apartment, which was decorated with pictures of his famous friends and a framed case containing his fifteen French war medals. His final job was as an elevator operator at the Rockefeller Center, where his fame as the “Black Swallow of Death” was unknown.”
  • “In 1959 at age 65, he was named Knight of the Legion of Honor in a lavish ceremony in New York City. Dave Garroway interviewed him on the Today Show; still, America did nothing to acknowledge this honor or acknowledge his place in history.”
  • “President-General Charles de Gaulle of France, while visiting New York City, publicly and internationally embraced Eugene Bullard as a true French hero in 1960.”


Eugene Bullard is extremely important to this study because he exemplifies the difference between the French and American perspectives. When learning about his story, I could not believe his transition from being a celebrated man in France to an elevator operator.

During my various Black Paris history tours, Bullard was consistently mentioned as a staple of the Black Paris experience. I believe Bullard’s story is essential because it introduced me to the non-romantic Paris experience. When I chose this study, I was fascinated by the pièce de résistance of Paris. Even though I am still searching for that characteristic, I believe Eugene Bullard’s story presented the reality and necessity of Paris, because it helps my study get down to the brass tacks of the exodus to Paris. The brass tacks are that African Americans were fleeing racism, and in order to do this, some African Americans followed the myth of Paris. From it they created a reality of acceptance, appreciation, and great expectations that were previously not available in their home country.


Chivalette, William I. “Corporal Eugene Jacques Bullard First Black American Fighter Pilot.” Air & Space Power Journal. N.p., n.d. Web.

“Eugene Bullard.” Wikipedia. Wikimedia Foundation, 13 Aug. 2013. Web. 31 Aug. 2013.

Garner, Carla W. “Bullard, Eugene Jacques (1894-1961) | The Black Past: Remembered and Reclaimed.” N.p., n.d. Web. 31 Aug. 2013.

Further research:
Note that he was “publicly acknowledged” by President-General Charles de Gaulle of France.

AO- Week 2- Institutional Power of Museums

In his essay “The Exhibitionary Complex,” Tony Bennett discusses Foucault’s perspective on the institutional creation of knowledge/power. Bennett draws a distinction between the “institutions of confinement” such as prisons (which are Foucault’s focus) and “institutions of exhibition” such as museums. Where Foucault identifies a society of surveillance (panopticon penitentiary), distinct from the society of spectacle found in antiquity (public floggings and executions), Bennett suggests that the two exist simultaneously; the self-disciplining nature which surveillance engenders is reinforced through the spectacle of exhibition.  To illustrate this point, the author refers to Graeme Davison’s description of the Crystal Palace: “The Crystal Palace reversed the panoptical principle by fixing the eyes of the multitude upon an assemblage of glamorous commodities. The Panopticon was designed so that everyone could be seen; the Crystal Palace was designed so that everyone could see.” (Preziosi and Farago, 418)

The Crystal Palace at Sydenham, site of the Great Exhibition of 1851


Bennett sees the Crystal Palace, acting as an early museum, as a powerful institution of knowledge creation. Echoing Walter Benjamin’s discussion in The Work of Art in the Age of Mechanical Reproduction of the importance of artworks shifting from cult value to exhibition value, Bennett describes the “exhibitionary complex”:

comprised of “institutions… [that] were involved in the transfer of objects and bodies from the enclosed and private domains in which they had been previously displayed (but to a restricted public) into progressively more open and public arenas where, through the representations to which they were subjected, they formed vehicles for inscribing and broadcasting the messages of power (but of a different type) throughout society.” (Preziosi and Farago, 414)

 Drawing on the work of Antonio Gramsci, Bennett explores how museums contribute to the negotiation of power within a society in order to maintain the hegemony of the ruling class. The author describes the institutional methods for maintaining the status quo following the industrial revolution, saying:

[The maintenance of social order] consisted not in a display of power which, in seeking to terrorize, positioned the people on the other side of power as its potential recipients but sought rather to place the people – conceived as a nationalized citizenry – on this side of power, both its subject and its beneficiary… this was the rhetoric of power embodied in the exhibitionary complex – a power made manifest not in its ability to inflict pain but by its ability to organize and co-ordinate an order of things and to produce a place for the people in relation to that order.” (Preziosi and Farago, 420)

 By making exhibitions and their organizing institutions (contemporary museums) available to the general public, an illusion of public power over these institutions is created. In an eagerness to participate in this new “democratic” society, the working and middle classes voluntarily submitted to a specific code of conduct while navigating the exhibition space – forming what Gramsci terms a “civil society.”

Museums were responsible for instituting not only a specific set of behavioral standards, but also a unique set of moral standards. Bennett describes “the emergence of a historicized framework for the display of human artifacts” which took place in museums during the period of industrialization:

[A teleological perspective developed through] the lifelike reproduction of an authenticated past and its representation as a series of stages leading to the present…[exhibitions] aimed at the representation of a type and its insertion into a developmental sequence for display to a public.” (Preziosi and Farago, 428)

Bennett also notes the important development of two cultural distinctions – national and universal – which came to be reinforced by exhibitions. Special museums were established to promote the unique cultural history of a nation-state (for example the American History Museum); within these exhibitions, “national materials were represented as the outcome and culmination of the universal story of civilization’s development” (Preziosi and Farago, 429). What resulted was the great divide between “the West” and “the rest,” the marginalization or exclusion of entire ethnic groups (orientalism and neo-colonialism).

Bennett articulates the political component of Foucault’s heterotopias, saying:

“[Exhibitions] constituted an order of things and of peoples which, reaching back into the depths of prehistoric time as well as encompassing all corners of the globe, rendered the whole world metonymically present, subordinated to the dominating gaze of the white, bourgeois, and male eye of the metropolitan powers.” (Preziosi and Farago, 436)

In my final paper, I plan to explore further how the GoogleArt project might function as a virtual museum, contributing to the creation of knowledge and altering or reinforcing the distribution of power within society. I am curious about how GoogleArt maintains or challenges ideas of nationalism and universalism within a digital, supranational space.



You’re off to a great start in this research project! I wanted to give you two references to recent work by CCT students on digital museums and Google Art that you should know and may like to reference:

Alexis Hamann-Nazaroff, “Google Art Project and its role in the Artworld” (from CCTP748):

Alicia Dillon, Mediating the Museum: Investigating Institutional Goals in Physical and Digital Space. CCT Thesis, 2012. (Check the Library archives for theses.)



Story versus Plot


Furthering this week’s thoughts on film and narrative, David Bordwell and Kristin Thompson dive deeper into filmmaking as a whole in Film Art. They consider narrative to be a chain of events with cause-and-effect relationships occurring in a certain time and place. This insistence that time and place are crucial puts narrative into more context: by supplying a setting, a story starts to unfold, as opposed to miscellaneous actions or thoughts. They differentiate story and plot, much in the way others differentiate story and discourse. Some of the other pieces I’ve read thus far have made the distinction in terms of discourse, but I find this model a little easier to grasp, and more palatable to the average consumer of storytelling. By their definition, a story contains the set of all events in a narrative, whether they are explicitly presented or inferred. The plot is everything that is visibly and audibly present in the film, during the actual time period one would watch the film.

Diegetic versus nondiegetic

Diegesis: “recounted story” – this is the whole story, as Bordwell and Thompson define story. This includes things that the audience infers to have happened, people they assume to be offscreen, etc.

Nondiegetic elements are things like the credits, which the audience sees but which come from outside the story. These are elements that are added in editing, like music.

A helpful way they show how plot and story overlap is this diagram:

Story: presumed and inferred events + explicitly presented events

Plot: explicitly presented events + added nondiegetic material

(The explicitly presented events are the overlap between story and plot.)

(p. 77)

Cause and Effect

Cause and effect are also important components of a narrative, and these are usually carried out by characters, who possess certain traits. However, in some films, especially disaster or science fiction films, some outside element comes and wreaks havoc with the characters, which then provides the event for the characters to react to and shows how they will respond or deal with each other under the circumstances.

An interesting way to think about this view of narration was their example about murder mysteries. In a film like this, or a thriller, there tends to be a huge part of the story that occurs offscreen: the murder itself. The rest of the story is the characters trying to solve the crime that happened when the viewer wasn’t watching. This shows the difference between plot and story nicely: when describing the plot itself, one would just recount what they saw onscreen; to describe the story, however, it would be necessary to include what happened offscreen. I also liked that they mentioned the idea that viewers are so used to storytelling – since it is constantly around people at all times in advertising, television, novels, even in personal interactions (how many conversations revolve around telling a story to someone?) – that they expect certain things from a narrative. Typically, the average person would expect that there are going to be characters who have to deal with some sort of situation or conflict, and the ending will be either something that they expected, which would be satisfying, or a twist, which is satisfying in a different way.



This is a good start to cinema narrative description. (BTW, “diegesis” comes from the Greek term for narrative, literally “leading/drawing through” as through time. It’s a useful technical term for describing the movement of plot or narrative, as opposed to scene details, character development, and all other techniques in film that aren’t strictly “narrative.” “Narrative” comes from the Latin word “narratio,” meaning a telling, a discourse.) Chatman, Bordwell, and Metz also lead to considering time-based media in all forms (media requiring duration in time to experience, and genres that represent states of time – past-present-future – in compressed or “real time” forms). Think about how this can be extrapolated to music, comics and graphic novel panels (like movie storyboards), multimedia on a computer (video and games), etc. –MI


Advances in Building the Human Brain

By Eric Cruet

“The making of a synthetic brain requires now little more than time and labour… Such a machine might be used in the distant future… [to] explore regions of intellectual subtlety and complexity at present beyond the human powers… How will it end? I suggest that the simple way to find out is to make the thing and see.”

Ross Ashby, Design for a Brain (1948, 382-83)

The human brain is exceedingly complex and studying it encompasses gathering information across a range of levels, from molecular processes to behavior. The sheer breadth of this undertaking has perhaps led to an increased specialization of brain research.  One of the areas of specialization that has gathered steam recently is the modeling of the brain on silicon.  However, even when considering computing’s exponential growth in processing power, it is still unimpressive as compared with the “specifications” of the human brain.

The average human brain packs a hundred billion or so neurons − connected by a quadrillion (10^15) constantly changing synapses − into a space the size of a honeydew melon.  It consumes a measly 20 watts, about what one compact fluorescent light bulb (CFL) uses.  Replicating this awesome wetware with traditional digital circuits would require a supercomputer 1000 times more powerful than those currently available.  It would also require a nuclear power plant to run it.

Fortunately, the types of circuits needed to model the brain are not necessarily digital.  Currently there are several projects around the world focusing on building brain models that use specialized analog circuits.  Unlike traditional digital circuits in today’s computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can duplicate brain activity as fast as or even faster than it really occurs, while consuming a fraction of the power.  But the drawback of analog chips is that they aren’t very programmable.  This makes it difficult to make changes in the model, which is a requirement, since initially it is not known what level of biological detail is needed in order to simulate brain behavior.

In the race to build the first low power, large scale, digital model of the brain, the leading research effort is dubbed SpiNNaker (Spiking Neural Network Architecture), a project collaboration between the following universities and industrial partners:

  • University of Manchester
  • University of Southampton
  • University of Cambridge
  • University of Sheffield
  • ARM Ltd
  • Silistix Ltd
  • Thales

The design of this machine looks a lot like a conventional parallel processor, but it significantly changes the way the chips intercommunicate.  Traditional CMOS (digital) chips were not invented with parallel computing in mind, which is the way our minds operate.  The logic gates in silicon chips usually connect to a relatively small number of devices, whereas neurons in the brain receive signals from hundreds of thousands of other neurons.  In addition, neurons are always in a “ready” state and respond instantaneously after receiving a signal, while silicon chips rely on clocking to advance computation in discrete time steps, which consumes a lot of power.  Also, the connections between CMOS-based processors are fixed, whereas the synapses that connect neurons are always in flux.
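To make the spiking, event-driven style of computation described above more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. This is a standard textbook model, written here purely for illustration; it is not SpiNNaker’s actual implementation, and the parameter values are typical defaults, not project specifications:

```python
def simulate_lif(input_current, v_rest=-65.0, v_reset=-65.0,
                 v_threshold=-50.0, tau=10.0, dt=1.0):
    """Simulate one leaky integrate-and-fire (LIF) neuron.

    input_current: list of input values, one per time step (arbitrary units)
    tau: membrane time constant (ms); dt: time step (ms)
    Returns the list of time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks back toward rest while integrating input.
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:      # threshold crossed: emit a spike event
            spikes.append(t)
            v = v_reset           # reset the membrane after spiking
    return spikes

# A constant drive strong enough to push the membrane over threshold
# produces a regular train of spike events; zero input produces none.
print(simulate_lif([20.0] * 100))
print(simulate_lif([0.0] * 100))
```

The point of the sketch is the contrast drawn in the text: the neuron sits in a “ready” state and only emits discrete spike events when driven over threshold, which is the behavior SpiNNaker’s packet-switched fabric is built to route efficiently.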

One way to speed things up is to use custom analog circuits that directly replicate brain operation.  Some of the chips under development can run 10,000 times faster than their corresponding part of the brain while being energy efficient.  But as we mentioned previously, as speedy and efficient as they can be, they are not very flexible.

The basic building block of the SpiNNaker machine is a multicore System-on-Chip. The chip is a Globally Asynchronous Locally Synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure.

Each SpiNNaker chip contains two silicon dies: the SpiNNaker die itself and a 128 MByte SDRAM (Synchronous Dynamic Random Access Memory) die, which is physically mounted on top of the SpiNNaker die and stitch-bonded to it.

The micro-architecture assumes that processors are ‘free’: the real cost of computing is energy. This is why we use energy-efficient ARM9 embedded processors and Mobile DDR (Double Data Rate) SDRAM, in both cases sacrificing some performance for greatly enhanced power efficiency.  These are the same type of chips found in today’s mobile electronics.

It is clear that although great strides are being made toward developing a “digital” brain, simply “building” a brain from the bottom up by replicating its parts, connections, and organization fails to capture its essential function—complex behavior. Instead, just as engineers can only construct cars and computers because they know how they work, we will only be able to construct a brain if we know how it works—that is, if we understand the biological and computational details that are carried out in individual brain areas, and how these details are implemented on the level of neural networks.


Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, C., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202-1205.
Pickering, A. (2010). The cybernetic brain: sketches of another future. University of Chicago Press.
Price, D., Jarman, A. P., Mason, J. O., & Kind, P. C. (2011). Building brains: An introduction to neural development. Wiley.


Lessig – hybrid cultures and economies

When I read Lessig’s book in my first semester here, I hadn’t thought about how it related to the interests I’ve been refining through this independent study. However, governance in virtual worlds is dependent on developing social relationships and contributing to a community. Because these are digital communities, issues of intellectual property and copyright are likely to crop up.

This book is written, like Virtual Justice, from the perspective of a law professor. Lessig describes two types of culture that are typically thought to be at odds with each other: Read/Only (RO) and Read/Write (RW). He argues that we need to consider a hybrid culture that doesn’t discount the passive consumption of RO culture and doesn’t restrict the creative and collaborative aspects of RW culture. He suggests different ways to develop copyright law so it can be beneficial to all parties. His claim that this is necessary rests on his argument that current copyright laws are criminalizing those who engage in RW culture, primarily teenagers.

Second Life is one of the case studies for a hybrid business model. It combines aspects of the “Read/Write” culture discussed throughout the book with the “Read/Only” aspect of user consumption. It is mentioned in the section on hybrid economies as well, because Linden Lab is making use of freely shared creative work contributed to its virtual world to make money. Linden Lab has been encouraging the users of Second Life to be as RW as they want to be. The company’s mindset is that the things their members do help add value to the virtual world. Lessig delineates the ways members contribute: by helping each other, adding aesthetic value, contributing code, building institutions, and self-governing (215-217).

I was particularly excited about the description of Neualtenburg, the first democratic republic in Second Life. Lessig states that “the city builds this community through a mix of architecture, culture, law, and politics” and was “designed to be a ‘nexus for progressive social experimentation’” (217). The comparison of this virtual community to a tangible one is very useful to me. The social imperatives discussed are very much what I was interested in researching, particularly analyses like this: “And as with any community, the more people contribute, and see others contribute, the richer everyone feels” (217). This resembles World of Warcraft, in which the gameplay encourages and almost necessitates cooperation. This whole section will be an excellent resource for my final project, and it connects very nicely with Lastowka’s section on copyright, which I didn’t talk about in my last post.

AO-Week 2: Understanding the Contemporary Museum as an Institution

Critical of the discourse of modernity offered by twentieth century scholars, Michel Foucault sought to present a more precise description of his own unique historical moment. In 1967, Foucault delivered a lecture (which was later published as Of Other Spaces in 1984) on the importance of spaces and the ways in which space is considered and discussed. In the past, there were clear distinctions between spaces, a “hierarchical ensemble of places: sacred places and profane places; protected places and open, exposed places; urban places and rural places… It was this complete hierarchy, this opposition, this intersection of places that constituted what could very roughly be called medieval spaces: the space of emplacement” (Preziosi and Farago, 372). Foucault acknowledges that these separations still exist to some extent, but – recognizing the increasingly interconnected, yet often contradictory, nature of contemporary society – suggests two new primary types of spaces: utopias and heterotopias.

“Utopias are sites with no real places. They are sites that have a general relation of direct or inverted analogy with the real space of Society. They present society itself in a perfected form, or else society turned upside down, but in any case these utopias are fundamentally unreal spaces.” (Preziosi and Farago, 374)
“[Heterotopias] are something like counter-sites, a kind of effectively enacted utopia in which the real sites, all the other real sites that can be found within the culture, are simultaneously represented, contested, and inverted. Places of this kind are outside of all places, even though it may be possible to indicate their location in reality.” (Preziosi and Farago, 374)

Foucault concerns himself primarily with heterotopias and goes on to describe five main principles of heterotopias:

  1. Heterotopias exist in all societies (Preziosi and Farago, 375)
  2. Over time, societies can change the function of existing heterotopias (Preziosi and Farago, 375)
  3. Heterotopias are “capable of juxtaposing in a single real place several sites that are in themselves incompatible” (Preziosi and Farago, 376)
  4. “Heterotopias are most often linked to slices in time” which can be either accumulating or fleeting (Preziosi and Farago, 377)
  5. Heterotopias are “not freely accessible like a public space… to get in one must have certain permission and make certain gestures” (Preziosi and Farago, 378)

These principles can be applied to understanding the museum, and even more specifically the Google Art Project, a type of virtual museum, as a heterotopic space. For this discussion, I am using “museum” to mean an institution that collects works of art and displays them for the edification of audiences. As such, museums have existed in all societies, although they were sometimes known by different names – churches, universities, or private domestic collections. Over time, these “museums” were transformed into the institutions we recognize today as museums; we are now witnessing the next transformation of these institutions as they transition into the digital world through projects like GoogleArt. As a type of digital museum, GoogleArt is able to juxtapose in a single virtual space many works of art from across the world which could not otherwise be viewed in one collection, confounding our understanding of space. Museums, both the brick-and-mortar and the virtual versions, accumulate works of art from across the decades (and often centuries), altering our understanding of time. Finally, museums – especially virtual ones – are not accessible to everyone, despite their open appearance and mission to serve the public. Audiences of virtual museums must have access to the technology required to view the artwork, including a computer and high-speed internet access. Audiences of more traditional museums must have the leisure time to visit the museum. Furthermore, in order to fully participate in the museum, both types of audiences must have some amount of training in how to view and discuss works of art.

In his “Introduction to Museum Without Walls,” André Malraux discusses the history of museums and their transition to the type of institution that we recognize as a museum today. The discourse surrounding museums changed as the museum transitioned into a new type of institution; additionally, new discourse was created by these new institutions – the discourse of art history. Malraux argues that museums “are so much a part of our lives today that we forget they have imposed on the spectator a wholly new attitude toward the work of art; they have tended to estrange the works they bring together from their original functions and to transform even portraits into pictures” (Preziosi and Farago, 386). The separation of the artwork from its origin echoes Benjamin’s ideas about the loss of aura.

Malraux is primarily concerned with the use of photography to reproduce art and, by extension, re-mediate real space. Malraux argues that through photography “a museum without walls has been opened to us, and it will carry infinitely further that limited revelation of the world of art which the real museums offer us within their walls; in answer to their appeal, the plastic arts have produced their printing press” (Preziosi and Farago, 371). I think that digital technologies allow this “museum without walls” to expand exponentially.

Foucault also confronts the impact that the contemporary museum has had on art and literature in his essay “Fantasia of the Library” (1977). He says: “Déjeuner sur l’Herbe and Olympia [by Manet] were perhaps the first ‘museum’ paintings, the first paintings in European art that were less a response to the achievement of Giorgione, Raphael, and Velasquez than an acknowledgement… of the new and substantial relationship of painting to itself, as a manifestation of the existence of museums and the particular reality and interdependence that paintings acquire in museums” (Crimp, 47). Manet became famous during the modern era for using his artwork to point out the relationship between a painting and its sources; for example, Manet’s Olympia remixes Titian’s Venus of Urbino. Contemporary, post-modern artists continue this trend using reproductive technologies; this is the topic of Douglas Crimp’s essay “On the Museum’s Ruins” (1980). Crimp uses Rauschenberg as an example of a postmodern artist. In his artwork Crocus (1962), Rauschenberg remixes Manet’s work by simply silkscreening photographs of Olympia onto a canvas, juxtaposed with images of trucks, helicopters, and insects. Artists are aware of the “estrangement” that takes place when a work of art enters the museum and are expressing their reactions to this phenomenon in their artistic creations.

Benjamin, in The Work of Art in the Age of Mechanical Reproduction, was concerned with the impact that mechanical reproductive technologies would have on art, arguing that there was a shift in emphasis from cult value to exhibition value. In contemporary culture it is more important for a work of art to be seen by many and be well-recognized (requiring numerous reproductions in exhibition catalogs, promotional media, etc.) than to be held in high regard by an elite, esteemed few. Aware of the importance of attracting large audiences, curators seek out works of art that are entertaining or shocking; this influences many artists to produce a very specific type of work and limits creativity.



Preziosi, Donald, and Claire Farago. Grasping the World: The Idea of the Museum. Burlington, VT: Ashgate Pub., 2004.

Crimp, Douglas. “On the Museum’s Ruins.” October 13 (1980): 41-57. Cambridge, MA: MIT Press.

Improvising Digital Culture

In “Improvising Digital Culture,” Paul D. Miller (DJ Spooky) and Vijay Iyer discuss two contrasting positions on the definition of improvisation, primarily in the context of digital media, although the two do use music (the saxophone) as a point of reference during various parts of the discussion.

Iyer argues that improvisation should be regarded as “identical with what we call experience.” He further explains that by this definition there is no difference between what we experience as humans and improvisation: we are always improvising. He also explains that some improvisation can be considered good or bad, like saving someone from danger or harming someone. Iyer says, “In other words, you might say that there are degrees, layers or levels to what we call “improvisation.” There’s a primal level at which we learn how to just be in the world, and then there’s another level at which we’re responding to conditions that are thrust upon us.”

My question concerning Iyer’s work: is there any point at which something is not an improvisation? Is an actor reciting lines from a play improvising? It seems as if there is some point where improvisation is not entirely a part of our lives. While I do agree that many parts of our lives are complete improvisation, I question whether acts such as following orders or reading something word for word count as improvisation. Does doing something someone has already predetermined create something other than improvisation?

Paul Miller stated that digital media is “not necessarily about the process per se, it’s about never saying that there’s something that’s finished.  Once something’s digital, essentially you’re looking at versions.  Anything can be edited, transformed, and completely made into new things.” This interpretation of improvisation is easier to embrace because it is more definable.

The following link will lead readers to a clip of the Improvised Shakespeare Company:

The Improvised Shakespeare Company has been in existence since 2005 and performs every Friday night in Chicago. I think this is an interesting example of Miller’s interpretation of improvisation.  The actors are creating new work based upon something old: in this case, the style and speech of Shakespeare. This contrasts with working solely from Shakespeare’s scripts, which leave little room for improvisation; a script may build in some space for improvisation, but not an extensive amount.

I will look into the history of Shakespeare improv and other stand-up improv in Thursday’s post.


Iyer, Vijay, and Paul D. Miller. “Improvising Digital Culture.” N.p., n.d. Web. 28 May 2013.

Obbvideos. “The Improvised Shakespeare Company.” YouTube. YouTube, 23 July 2010. Web. 28 May 2013.

“The Improvised Shakespeare Company: About.” The Improvised Shakespeare Company: About. N.p., n.d. Web. 28 May 2013.


Narration and Film

Narration in the Fiction Film by David Bordwell

Bordwell defines narration as a process: “the activity of selecting, arranging, and rendering story material in order to achieve specific time-bound effects on a perceiver” (xi).

Diegetic theories: think of narration as consisting of a verbal activity, a “telling,” whether literal or not.

Mimetic theories: think of narration as presenting a spectacle, a “showing.”

Henry James, and later Percy Lubbock (novel as spectacle), thought that the novel was a sort of pictorial art. He felt that the inclusion of point of view within a novel was a post-Renaissance perspectival metaphor (8). He felt that people craved the picture, and that a novel provided the most elastic, comprehensive version of that. Lubbock took that a little bit further, by including the idea of drama along with pictorial representation. The perspective painting with point of view was pictorial, but the unfolding of events was like a stage play or drama; Lubbock saw both of these facets in novels.

Bordwell explains that, over the years, the idea of the pictorial element of novels was extended to cinema. In the mimetic tradition, it has become common to compare literary narration to cinematic narration.



Legal Issues in Virtual Worlds

In my reading for this week, I began looking into law and policy issues as they relate to virtual worlds. The source that interested me in the topic initially, Virtual Justice by Greg Lastowka, is the one I looked at most closely. Lastowka has a legal background, and has focused on intellectual property law as well as the intersection of law and technology. He begins the book with a cursory definition of virtual worlds, defining them as “Internet-based simulated environments that feature software-animated objects and events” (9). He distinguishes virtual worlds from other forms of media because they necessitate active engagement, generally through a customized avatar. Lastowka also sets up the relevance and importance of virtual worlds early in the book, claiming that “The social and interactive complexity of virtual worlds can be substantial, making users feel like they are truly ‘present’ somewhere else” (9). Directly related to this is his claim that “because virtual worlds are places, they are also sites of culture” (10). This reasoning is woven into later arguments about legal issues in virtual worlds.

Some of the sources Lastowka mentions are familiar to me. Sherry Turkle, whom I wrote about last week, is one example. He mentions a study by Turkle, T. L. Taylor, and Tom Boellstorff indicating that people use virtual worlds to experiment with the boundaries of their identities, because an avatar is never totally separate from its associated user. I inferred from this conclusion that people, on some level, may take injustices to their avatar very personally. One example of this is the Mr. Bungle case from “A Rape in Cyberspace”. This example doesn’t have to do with the legal implications discussed in the book so much as the identification with an avatar. The legal issues discussed were primarily to do with virtual property and who should handle disputes over the matter.

The book is not a full catalog of cases where the law intercedes (or refuses to intercede) in conflicts over virtual property, but Lastowka does offer quite a few examples to stimulate discussion. The first case he mentions is Bragg v. Linden Research, a dispute over land ownership in Second Life. Linden attempted to reserve the right to deny Mark Bragg access to Second Life and confiscated virtual property that was worth real money. The dispute was over virtual property, but the legal arguments centered around whether Linden Research could enforce their terms of service. Other examples are gold farmers having their accounts closed, virtual Ponzi schemes, and people being defrauded when purchasing virtual items. There was even a man who killed his friend over a very expensive item he stole and then sold. Lastowka brings these issues together under the umbrella of the legal right to retain acquired property, virtual or not.

Lastowka goes into gold farming specifically as a variant on virtual property disputes. He succinctly defines the practice, saying it is “when virtual currency is harvested expressly for resale to other players” (22). The issue he presents based on this is as follows:

If we recognize a legal right to the possession of virtual property, does this necessarily entail a right to sell one’s virtual property to others? What if the owner of that virtual world— and the majority of the community that uses it— object to the practice of gold farming? Can real economies be kept separate, either practically or legally, from virtual economies? (24)

He discusses later in the book Michael Walzer’s ideas about “spheres of justice” in society (103). He addresses the problem of inconsistency in the legal system that makes the intersection of law and virtual worlds an issue in the first place. He states that “the gulf between law and games is not due to the triviality of games, but due to the fact that games constitute a rival regime of social ordering. The rules of games are inherently in tension with the rules of law” (105). Lastowka makes this case by discussing several sports whose rules have been deferred to by the legal system because the intersection “was too difficult a problem for courts to police” (112). In response to this practice, he makes the argument that “Before law can defer to game rules— if it is to defer to game rules at all— we must have some sense of when and how game rules are present in virtual worlds” (118). However, he mentions several instances of the companies that control various virtual worlds shying away from controlling player behavior just as much as the legal system does. The problem with this, in Lastowka’s words, is that “When we defer to the “rules” of EVE Online under the aegis that it is ‘only a game,’ we permit the establishment of a very real and anarchic online frontier” (121).

The Quantitative Mapping of Change in Science

By Eric Cruet

As a follow up to last week’s post (re-posted below), we will consider an application where the quantitative method described [1] will be used to map changes in the sciences.

In this century, the volume of scientific research has become vast and complex, and its ever-increasing size and specialized nature make it difficult for any group of experts to fully and fairly evaluate the bewildering array of material, both accomplished and proposed.

Therefore, a library faced with collection decisions, a foundation making funding choices, or a government office weighing national research needs must rely on expert analysis of scientific research performance. 

One approach is bibliometrics, which uses quantitative analysis and statistics to find patterns of publication within a given field or body of literature.

Through cumulative cycles of modeling and experimentation, scientific research undergoes constant change: scientists self-organize into fields that grow and shrink, merge and split. Citation patterns among scientific journals allow us to track this flow of ideas and how the flow of ideas changes over time [2].

For the purposes of this simplified example [3], the citation data is mined from Thomson Reuters’ Journal Citation Reports circa 1997–2007, which aggregate, at the journal level, approximately 35,000,000 citations from more than 7,000 journals over the past decade.  Citations are included from articles published in a given year referencing articles published in the previous two years [7].
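As a toy illustration of this aggregation step, the sketch below builds a directed, weighted journal-level network from article-level citation records. The tuple layout and journal names are hypothetical, not the actual JCR data format:

```python
# Hypothetical sketch: aggregate article-level citations into a directed,
# weighted journal-level network, keeping only citations from articles
# published in a given year to articles from the previous two years.
from collections import Counter

def journal_citation_network(citations, year):
    """citations: (citing_journal, citing_year, cited_journal, cited_year)."""
    weights = Counter()
    for src, src_year, dst, dst_year in citations:
        if src_year == year and year - 2 <= dst_year <= year - 1:
            weights[(src, dst)] += 1
    return weights  # journals are nodes; weights are citation counts

# Toy records; the real data would supply millions of these.
edges = [
    ("J. Neurosci.", 2005, "Nature", 2004),
    ("J. Neurosci.", 2005, "Nature", 2003),
    ("Nature", 2005, "Science", 2002),  # outside the two-year window
]
net = journal_citation_network(edges, 2005)
# net == {("J. Neurosci.", "Nature"): 2}
```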


  1. We first cluster the networks with the information-theoretic clustering method presented in the previous post [1], which can reveal regularities of information flow across directed and weighted networks.  The method will be applied to the pre-mined citation data.
  2. With appropriate modifications, the described method of bootstrap resampling accompanied by significance clustering is general and works for any type of network and any clustering algorithm. 
  3. To assess the accuracy of a clustering, we resample a large number (n > 1000) of bootstrap networks from the original network [7].  For the directed and weighted citation network of science, in which journals correspond to nodes and citations to directed and weighted links, we treat the citations as independent events and resample the weight of each link from a Poisson distribution with the link weight in the original network as mean. This parametric resampling of citations approximates a non-parametric resampling of articles, which makes no assumption about the underlying distribution.  For scalar summary statistics, it is straightforward to assign a 95% bootstrap confidence interval as spanning the 2.5th and 97.5th percentiles of the bootstrap distribution [4], but different data sets and clusters may require a different approach [5].
  4. To identify the journals that are significantly associated with the clusters to which they are assigned, we use simulated annealing to search for the largest subset of journals within each cluster of the original network that are clustered together in at least 95% of all bootstrap networks. To identify the clusters that are significantly distinct from all other clusters, we search for clusters whose significant subset is clustered with no other cluster’s significant subset in at least 95% of all bootstrap networks [7].  Figure 1 below shows this technique applied to a network at two different time points:
  5. Once we have a significance cluster for the network at each time, we want to reveal the trends in the data by simplifying and highlighting the structural changes between clusters. The bottom of Figure 1 shows how to construct an alluvial diagram of the example networks that highlights and summarizes the structural differences between the time 1 and time 2 significance clusters. Each cluster in the network is represented by an equivalently colored block in the alluvial diagram. Darker colors represent nodes that have statistical significance, while lighter colors represent non-significant assignments. Changes in the clustering structure from one time period to the next are represented by the mergers and divergences that occur in the ribbons linking the blocks at time 1 and time 2.

    Diagram from: [7] Rosvall, M., & Bergstrom, C. T. (2010). Mapping change in large networks. PloS one, 5(1), e8694.

  6. The resulting alluvial diagram for the actual data (above) illustrates, for example, how over the years 2001–2005, urology gradually splits off from oncology and how the field of infectious diseases becomes a unique discipline, instead of a subset of medicine, in 2003. But these changes are just two of many over this period. In the same diagram, we also highlight the biggest structural change in scientific citation patterns over the past decade: the transformation of neuroscience from interdisciplinary specialty to a mature and stand-alone discipline, comparable to physics or chemistry, economics or law, molecular biology or medicine [7].
  7. In their citation behavior, neuroscientists have finally cleaved from their traditional disciplines and united to form what is now the fifth largest field in the sciences (after molecular and cell biology, physics, chemistry, and medicine). Although this interdisciplinary integration has been ongoing since the 1950s [6], only in the last decade has this change come to dominate the citation structure of the field and overwhelm the intellectual ties along traditional departmental lines.
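Steps 3 and 4 above can be sketched in miniature. The clustering below is a deliberately crude stand-in (single-link connected components), not the information-theoretic method of [1], and the network is invented; the point is only the bootstrap machinery: resample each link weight from a Poisson distribution with the observed weight as mean, recluster, and count how often node pairs stay together.

```python
# Toy significance-clustering sketch (stand-in clustering, hypothetical data).
import math
import random
from collections import defaultdict

random.seed(0)

def poisson(mean):
    # Knuth's algorithm; adequate for the small weights used here.
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def bootstrap_network(weights):
    """Resample every link weight with the original weight as the mean."""
    return {link: poisson(w) for link, w in weights.items()}

def toy_cluster(weights):
    """Crude stand-in: nodes joined by any surviving link share a cluster."""
    nodes = {n for link in weights for n in link}
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), w in weights.items():
        if w > 0:
            parent[find(a)] = find(b)
    return {n: find(n) for n in nodes}

# Heavy A<->B links vs. fragile weight-1 C<->D links.
weights = {("A", "B"): 9, ("B", "A"): 7, ("C", "D"): 1, ("D", "C"): 1}
together = defaultdict(int)
n_boot = 1000
for _ in range(n_boot):
    clustering = toy_cluster(bootstrap_network(weights))
    for a, b in [("A", "B"), ("C", "D")]:
        if clustering[a] == clustering[b]:
            together[(a, b)] += 1

# A-B should be co-clustered in well over 95% of bootstrap networks;
# C-D falls apart whenever both weight-1 links resample to zero (~13%).
print(together[("A", "B")] / n_boot, together[("C", "D")] / n_boot)
```

The 95% co-clustering threshold is what separates the "dark" (significant) nodes from the "light" ones in the figure; the paper's simulated-annealing search over subsets is omitted here for brevity.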


Credit for this research belongs to the work performed in [7] Rosvall, M., & Bergstrom, C. T. (2010). Mapping change in large networks. PLoS ONE, 5(1), e8694.

[2] de Solla Price, D. J. (1965). Networks of scientific papers. Science, 149, 510–515. doi:10.1126/science.149.3683.510.
[3] Heimeriks, G., Hoerlesberger, M., & Van den Besselaar, P. (2003). Mapping communication and collaboration in heterogeneous research networks. Scientometrics, 58(2), 391-413.
[4] Costenbader, E., & Valente, T. (2003). The stability of centrality measures when networks are sampled. Social Networks, 25, 283–307. doi:10.1016/S0378-8733(03)00012-1.
[5] Hastie, T., Tibshirani, R., & Friedman, J. (2001). The elements of statistical learning. New York: Springer.
[6] Gfeller, D., Chappelier, J. C., & De Los Rios, P. (2005). Finding instabilities in the community structure of complex networks. Physical Review E, 72(5), 056135.

[7] Rosvall, M., & Bergstrom, C. T. (2010). Mapping change in large networks. PLoS ONE, 5(1), e8694.

A method for the quantitative mapping of change

By Eric Cruet

The problem of change [1].  Much of mankind’s preoccupation has been with changes in science, technology, sociology, and economics.  More recently, we seem to be concerned with variations in climate, global financial states, the effect of technology on society, and the increasing use of unlawful violence intended to coerce or intimidate governments or societies, i.e., terrorism.

Traditionally, network, graph, and cluster analysis are the mathematical tools used to understand specific instances of the data generated by these scenarios at a given point in time. But without methods to distinguish between real patterns and statistical error, which can be significant in large data sets, these approaches may not be ideal for studying change.  By assigning significance weights to individual networks, we can distinguish meaningful structural differences from random fluctuations [3].

Alternatively, a bootstrap technique [2] can be used when there are multiple networks to arrive at an accurate estimate by resampling the empirical distribution of observations for each network.  In the case of a single network, resampling can be accomplished by using a parametric model to fit the link weights without undermining the individual characteristics of the nodes.  Using this technique, we can determine cluster significance and also estimate the accuracy of the summary statistics (μ, σ, ρ) based on the proportion of bootstrap networks that support the observation.
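For a scalar summary statistic such as the mean link weight, the percentile bootstrap can be sketched directly: resample the observations with replacement, recompute the statistic, and take the 2.5th and 97.5th percentiles of the bootstrap distribution as a 95% confidence interval. The link weights below are invented toy data:

```python
# Percentile bootstrap for a scalar summary statistic (toy data).
import random

random.seed(42)

weights = [4, 7, 2, 9, 3, 5, 8, 6, 4, 7]  # observed link weights

def bootstrap_ci(data, stat, n_boot=10_000, alpha=0.05):
    """CI spanning the alpha/2 and 1 - alpha/2 bootstrap percentiles."""
    reps = sorted(stat(random.choices(data, k=len(data)))
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

def mean(xs):
    return sum(xs) / len(xs)

lo, hi = bootstrap_ci(weights, mean)
print(f"mean = {mean(weights):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```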




The standard procedure for clustering networks is to minimize an objective function over possible partitions (left side of diagram).  By resampling the weighted links of the original network, a “bootstrap world” of resampled networks is created.  Next, these are clustered and compared to the clustering of the original network (2nd row, right side).  This provides an estimate of the probability that a node belongs to a specific cluster.  The result is a “significance clustering” [3].  For example, in the diagram above, the darker nodes (bottom of the diagram) are clustered together in at least 95% of the 1000 bootstrap networks.  Several algorithms in the public domain exist to automate the majority of these tasks.

Finally, once a significance cluster has been generated for the network at each point in time, an alluvial diagram is used to reveal the trends in the data. An alluvial diagram (bottom of the picture) orders the clusters by size and reveals changes in network structure over time [3]. Please refer to the diagram below:

As you can see from the alluvial diagram, from time 1 to time 2, the condition scenario represented by ORANGE clustered with the condition scenario represented by PINK.  This clustering was a result of some underlying change, and was not obvious at time 1.  As a result, the bootstrap/cluster analysis allowed the quantitative mapping of the change to take place.
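The ribbons of an alluvial diagram are, at bottom, a contingency table between the two clusterings. A minimal sketch, with made-up node assignments mirroring the ORANGE/PINK merger described above:

```python
# Flows between time-1 and time-2 cluster assignments (hypothetical nodes).
from collections import Counter

t1 = {"a": "ORANGE", "b": "ORANGE", "c": "PINK", "d": "PINK", "e": "BLUE"}
t2 = {"a": "ORANGE", "b": "ORANGE", "c": "ORANGE", "d": "ORANGE", "e": "BLUE"}

flows = Counter((t1[n], t2[n]) for n in t1)
for (src, dst), size in sorted(flows.items()):
    print(f"{src:>6} -> {dst}: {size} node(s)")
# The PINK -> ORANGE flow of 2 nodes is the merger the diagram would draw.
```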

The model can be used in a variety of scenarios:  to map the changes in global weather patterns, US emigration flows from state to state based on various factors (employment, housing prices, education, income per capita), variations in federal funds market in response to major events [3], and track global targets of terrorism activity.

But my main area of interest is illustrating the method by applying it to map change in the structure of science [4].  Stay tuned.  I conclude with a rather lengthy but appropriate and relevant quote.

From Michel Foucault’s “The Order of Things”

The problem of change.  It has been said that this work denies the very possibility of change. And yet my main concern has been with changes. In fact, two things in particular struck me: the suddenness and thoroughness with which certain sciences were sometimes reorganized; and the fact that at the same time similar changes occurred in apparently very different disciplines. Within a few years (around 1800), the tradition of general grammar was replaced by an essentially historical philology; natural classifications were ordered according to the analyses of comparative anatomy; and a political economy was founded whose main themes were labour and production. Confronted by such a curious combination of phenomena, it occurred to me that these changes should be examined more closely, without being reduced, in the name of continuity, in either abruptness or scope. It seemed to me at the outset that different kinds of change were taking place in scientific discourse – changes that did not occur at the same level, proceed at the same pace, or obey the same laws; the way in which, within a particular science, new propositions were produced, new facts isolated, or new concepts built up (the events that make up the everyday life of a science) did not, in all probability, follow the same model as the appearance of new fields of study (and the frequently corresponding disappearance of old ones); but the appearance of new fields of study must not, in turn, be confused with those overall redistributions that alter not only the general form of a science, but also its relations with other areas of knowledge.
It seemed to me, therefore, that all these changes should not be treated at the same level, or be made to culminate at a single point, as is sometimes done, or be attributed to the genius of an individual, or a new collective spirit, or even to the fecundity of a single discovery; that it would be better to respect such differences, and even to try to grasp them in their specificity. In this way I tried to describe the combination of corresponding transformations that characterized the appearance of biology, political economy, philology, a number of human sciences, and a new type of philosophy, at the threshold of the nineteenth century.


[1] Foucault, M. (2002). The order of things. Routledge.

[2] Palla, G., Derényi, I., Farkas, I., & Vicsek, T. (2005). Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435(7043), 814-818.

[3] Rosvall, M., & Bergstrom, C. T. (2010). Mapping change in large networks. PLoS ONE, 5(1), e8694.

[4] de Solla Price, D. J. (1965). Networks of scientific papers. Science, 149, 510–515. doi:10.1126/science.149.3683.510.

Note: Dragon Dictate was used as a speech-to-text transcriber for a portion of this document.  Although I make every effort to proofread the postings, any unusual syntax, lexicon, or semantic error is attributable to my lack of attention and the immaturity of this technology.