Category Archives: Week 5

AO-Week 5: Making Art Available to All through the Museum Commons

In a New York Times article, “Online, It’s the Mouse That Runs the Museum” (2010), Alex Wright discusses how museums are using new technologies to explore new strategies for building collections, inspiring creativity, and facilitating learning. Wright describes how the National September 11 Memorial and Museum crowdsourced the task of building its collection. In a similar way, the Museum of the History of Polish Jews utilized social media sites such as YouTube, Flickr, and Facebook to obtain content for its Virtual Shtetl project.

Wright also points to the idea of a museum commons, citing the Smithsonian Institution as a case study: “That institution recently began an ambitious initiative called the Smithsonian Commons to develop technologies and licensing agreements that would let visitors download, share and remix the museum’s vast collection of public domain assets. Using the new tools, Web users should be able to annotate images, create personalized views of the collection and export fully licensed images for use on their own Web sites or elsewhere.” Unfortunately, I was unable to find a functioning commons site for the Smithsonian; it seems this project is still in development. Wright quotes Michael Edson, the Smithsonian’s New Media Director, who described the initiative as a step in the institution’s larger mission to shift “from an authority-centric broadcast platform to one that recognizes the importance of distributive knowledge creation.” I am interested in comparing how the proposed Smithsonian Commons might function similarly to the Google Art Project (in which several of the Smithsonian museums participate) – both would allow increased public access to art objects and encourage participatory learning through a user-guided experience. What are the unique qualities of each project, and how do they complement or compete with one another? For example, the Smithsonian Commons would make art objects available for use with attribution, encouraging creativity and remix – a feature that is lacking in the current Google Art Project. Google’s advanced platform and global presence encourage the participation of many institutions, increasing the database of art objects available to audiences. Is it important for the Smithsonian to host its own platform as part of its brand continuity? How do these qualities weigh against each other?

The Earliest Remix

When one thinks of remix, one often makes the mistake of considering only the more technologically advanced methods of remix. Yet remixing is ever-present in the humanities. Works are reimagined over and over, creating new pieces which are then, once again, recreated. Shakespeare’s works are no exception. In a later post, Shakespeare’s very own remixes will be discussed.

Charles and Mary Lamb’s “Tales from Shakespeare” is arguably the earliest documented remix of Shakespeare’s works. The brother-and-sister duo worked for Thomas Hodgkins to create children’s books from Shakespeare’s plays. The book was published in 1807 with only Charles credited; it is now known that Mary also wrote many of the stories (Lamb iii-v).

The method the pair used to create these stories is important to understand. They actively avoided language that entered the English vocabulary after Shakespeare’s time. Mary primarily worked on the comedies, while Charles primarily worked on the tragedies. The English histories and Roman plays were left untouched by the two (Lamb v-x).

To see how the two set about remixing Shakespeare, I will review one story adapted by each of the two authors.

Macbeth (Charles Lamb)

One can assume Charles wrote the Macbeth interpretation, as he is credited with writing the tragedies.  In it, C. Lamb writes:

The king entered well-pleased with the place, and not less so with the attentions and respect of his honoured hostess, lady Macbeth, who had the art of covering treacherous purposes with smiles; and could look like the innocent flower, while she was indeed the serpent under it (Lamb 163).

This passage works in keeping with the source Lamb is adapting, in which Lady Macbeth is a treacherous character. The original brings one to this conclusion in far more words, but the point is fully conveyed here.

The entirety of the story is very easy to understand, yet it keeps the verbiage of the Elizabethan period. This is a testament to Charles’s work, as many argue that the difficulty in understanding Shakespeare lies in the language. The main themes remain the same throughout the story.

A Midsummer Night’s Dream (Mary Lamb)

It is stated in the preface that Mary was responsible for completing A Midsummer Night’s Dream. Mary relies more heavily upon dialogue than Charles does. This may help us understand more about the way Shakespeare wrote. Upon reading the two stories, it makes sense that Mary would rely more heavily upon dialogue, because tragedies like Macbeth seem more theme-based, whereas the comedies may be based more upon entertainment. This is not to say the comedies do not contain themes; they are simply not as prevalent within the story.

After reading from “Tales from Shakespeare,” I can say in earnest how useful remixes can be in teaching Shakespeare’s works. This remix makes the stories easier for a younger audience to understand, yet still contains the vocabulary that the creators of school curricula are so insistent upon.

 

Lamb, Charles, and Mary Lamb. Tales from Shakespeare: For the Use of Young Persons, with an Introductory Sketch. Boston: Houghton, Mifflin, 1894. Print.

AO-Week 5: Using Technology to Bridge the Gap

Engineers and developers are constantly trying to innovate ways to bridge the gap between physical space (reality) and virtual space (virtual reality). Overcoming this divide is also of increasing interest to museum professionals as they seek to “join up the museum experience with the online experience, taking the museum beyond the boundaries of the physical building and allowing online visitors into the museum” (Patten). In his essay “Web Lab – bridging the divide between the online and in-museum experience,” Dave Patten, Head of New Media at the Science Museum, London, describes the current Web Lab exhibition, which consists of five Google Chrome experiments: Lab Tag Explorer, Universal Orchestra, Teleporters, Sketchbots, and Data Tracer. The exhibition utilizes several types of technology to bridge the gap, including streaming video feeds from web cameras inside the physical museum, HTML5 and advanced browser capabilities, and robotics to visually represent data. “For example, [visitors] can see how the Data Tracer experiment uses WebGL to generate the 3D map they fly through when following their image search” (Patten).

The Lab Tag Explorer experiment is made up of several parts: the Lab Tag dispenser, the Lab Tag writer, and the Lab Tag Explorer. When you “enter” the exhibition (either online or in the physical museum), you are assigned a Lab Tag, a unique identifier which marks your presence within the exhibition; the Lab Tag also allows you to capture and store information that you wish to return to later. In the physical museum, guests receive a Lab Tag by visiting the Lab Tag dispenser; for online visitors, Lab Tags are assigned automatically by the browser. According to Patten:

“[The Lab Tag Writer] carries the title of the exhibition and a real-time count of the number of users who are currently online in Web Lab… The effect is to help draw physical visitors down to the exhibition and at the same time make them aware they are joining something… The key aims of the Lab Tag Writer are to help physical visitors understand they are about to enter an exhibition that is already being used by lots of people online, and to help them understand the global nature of Web Lab.”  

The Lab Tag Explorer emphasizes the globally-networked nature of the exhibition by allowing users to save and review their own Web Lab creative projects and to share those creations through their existing social media networks. Visitors can also view other visitors’ projects.
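
As a rough illustration of the tagging mechanism (a hypothetical sketch of how a browser might assign and persist a visitor tag, not the actual Web Lab implementation), an online exhibition could store an identifier the first time a visitor arrives and reuse it on return visits:

<script>
// Hypothetical sketch: assign an online visitor a persistent "lab tag"
// the first time they open the exhibition, and reuse it on later visits.
function getLabTag() {
  var tag = localStorage.getItem('labTag');
  if (!tag) {
    // e.g. a random 8-character identifier; the real exhibition presumably
    // uses its own tag format and server-side registration
    tag = Math.random().toString(36).substr(2, 8).toUpperCase();
    localStorage.setItem('labTag', tag);
  }
  return tag;
}
console.log('Your Lab Tag: ' + getLabTag());
</script>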

Each of the other four experiments – Universal Orchestra, Teleporters, Sketchbots, and Data Tracer – reinforces this same central theme: “[museum visitors] are sharing Web Lab with visitors from around the world” (Patten). The Web Lab exhibition explores ways in which museums can integrate physical and virtual museum spaces. Patten describes the variety of lessons his development team learned through this process as ranging “from the way teams can use collaborative tools such as Google Docs and Hangouts during the exhibition development process, to opening exhibitions in beta to allow final testing and development to take place before the big opening event. Web Lab has also given the Museum the confidence that moving its interactive development onto HTML5 is both achievable and desirable.”

While this case study takes place in a science museum, I think a similar approach could be taken in the contemporary art museum. It would be interesting to see Google introduce a similar user experience for visitors to the physical spaces of its partner museums. The Google Art Project already allows visitors to take on a curatorial role by guiding their own experiences – selecting artworks of particular interest, saving them in a “gallery,” and then creating knowledge by analyzing, comparing, and contrasting the collected artworks. Visitors to the physical museum could also be invited to use technology (perhaps through an identifying tag) to collect, compare, and share artworks that are meaningful to them. This approach would also allow visitors to the physical museum to virtually re-visit a work of art and explore it in greater detail using the zoom feature, or to compare it with artworks from around the world – experiences that could not occur in the physical museum alone.

Additionally, art museum professionals should take the lessons learned from Patten’s development team and evaluate their exhibition design process to identify areas where technology could offer great opportunity for collaboration and creativity.

Remixing the classroom

How can remix methods change the classroom?  How can remixing Shakespeare be more effective than standard teaching methods?

The standard teaching methods I experienced during high school were, for the most part, ineffective. The most tedious example I can share is reading an act of a play in a workbook and then answering a series of questions, most of them about vocabulary. While understanding the vocabulary is important, the larger themes were lost on most of the students. Many of the students in this particular class lost interest, and the entire point of studying Shakespeare was lost on them.

The Common Core Standards, which have been adopted in 45 states and the District of Columbia, include Shakespeare in their high school literacy program. According to the standards, by senior year a student should be able to “Determine two or more themes or central ideas of a text and analyze their development over the course of the text, including how they interact and build on one another to produce a complex account; provide an objective summary of the text” (corestandards.org). In Teaching English, Susan Brindley recommends teaching Shakespeare’s works actively, instead of having students learn Shakespeare in a more solitary manner. She recommends that teachers treat his works as scripts instead of mere sheets of text. With Brindley’s recommendations in mind, remixing seems complementary to her ideas.

Lesson plan ideas:

  • Cut-and-paste remixes: After reading the balcony scene of Romeo and Juliet, give the students printed sheets with the text on them. Tell the students to cut out the individual words, put them in a plastic bag and mix them up, and then recombine the selection. Given their knowledge of the vocabulary, can they put together sentences that make sense but are different from the original scene? This is a great way for students to gain a deeper understanding of the vocabulary used.
  • Play within the play: Once students have read the entirety of A Midsummer Night’s Dream, they will work in groups, using what they already know from the play, to create their own play within the play. Not only will they write the mini play, but they will then perform it.

These are just a few of the ways students could benefit from the use of remix in the classroom.  Students should actively participate in their education and teachers should encourage this.

 

Brindley, Susan. Teaching English. London: Routledge, 1993. Print.

“Mission Statement.” Common Core State Standards Initiative. N.p., n.d. Web. 25 June 2013. <http://www.corestandards.org/>.

Remediation

Bolter and Grusin discuss the ideas of remediation, mediation, and immediacy. The main idea behind their writing is that people want to experience media without a sense of the medium – that they just want to experience the story.

“Filmmakers routinely spend tens of millions of dollars to film on location or to recreate period costumes and places in order to make their viewers feel as if they were ‘really’ there” (5).

They refer to web sites that meld many different kinds of media forms, such as animation, video, or graphics, which might also reference a certain time period or art style. Films also often mix media and styles, according to Bolter and Grusin. A big part of their argument about remediation in film lies with advancing technology, which allows for digital possibilities that didn’t exist in the past. Now films can combine live-action footage with computer editing and graphics.

“The desire for immediacy leads digital media to borrow avidly from each other as well as from their analog predecessors such as film, television, and photography” (9).

Immediacy is a big part of their argument. Immediacy is quite similar to the immersion Murray talked about as well: it places the audience inside the story, with the medium fading as far into the background as possible. “…to achieve immediacy by ignoring or denying the presence of the medium and the act of mediation” (11).

“Digital visual media can best be understood through the ways in which they honor, rival, and revise linear-perspective painting, photography, film, television, and print. No medium today, and certainly no single media event, seems to do its cultural work in isolation from other media, any more than it works in isolation from other social and economic forces” (15).

“This ‘naive’ view of immediacy is the expression of a historical desire, and it is one necessary half of the double logic of remediation” (31).

“Sometimes hypermediacy has adopted a playful or subversive attitude, both acknowledging and undercutting the desire for immediacy” (34). I think this is an interesting idea, and one that’s percolating for my final project on The Great Gatsby. It also seems apparent in some of the remix culture that has become so much a part of the cultural encyclopedia. In a lot of cases, works are overtly referencing what came before, sometimes with a kind of wink and a nudge that alludes to the mediation itself.

Bolter and Grusin elaborate, “In the logic of hypermediacy, the artist (or multimedia programmer or web designer) strives to make the viewer acknowledge the medium as a medium and to delight in that acknowledgement. She does so by multiplying spaces and media and by repeatedly redefining the visual and conceptual relationships among mediated spaces – relationships that may range from simple juxtaposition to complete absorption” (41-42).

They also touch on historical works: in the 1990s, filmmakers produced film versions of classic novels set in the past. In a lot of these examples they tried to be historically accurate with costumes and setting and stayed close to the original story. However, these movies typically don’t overtly reference the novel from which they were adapted. If they did, the immediacy would be disrupted, because in this view the audience wants to experience the story in the same seamless way that reading the novel would provide.

“The digital medium can be more aggressive in its remediation. It can try to refashion the older medium or media entirely, while still marking the presence of the older media and therefore maintaining a sense of multiplicity or hypermediacy” (46).

“This tearing out of context makes us aware of the artificiality of both the digital version and the original clip. The work becomes a mosaic in which we are simultaneously aware of the individual pieces and their new, inappropriate setting” (47). This especially reminds me of movies set in historical times that use modern music or modern dress. It’s a blatant mash-up, acknowledging the original source while adapting it in a new way.

Overall this reading focused much more on the medium and mediation than on narration, but I’m starting to see how this can be applied to the broader picture. I’m excited to utilize these ideas to deconstruct some adaptations of The Great Gatsby for my final paper.

Narration on the Holodeck

Janet H. Murray’s Hamlet on the Holodeck considers narration and storytelling devices in the digital realm. At times she makes convincing arguments that digital formats allow for more thorough storytelling with the addition of multimedia forms – or even with the namesake holodeck and virtual reality.

“Eventually all successful storytelling technologies become ‘transparent’: we lose consciousness of the medium and see neither print nor film but only the power of the story itself. If digital art reaches the same level of expressiveness as these older media, we will no longer concern ourselves with how we are receiving the information. We will only think about what truth it has told us about our lives” (26).

“Decades before the invention of the motion picture camera, the prose fiction of the nineteenth century began to experiment with filmic techniques. We can catch glimpses of the coming cinema in Emily Bronte’s complex use of flashback, in Dickens’ crosscuts between intersecting stories, and in Tolstoy’s battlefield panoramas that dissolve into close-up vignettes of a single soldier. Though still bound to the printed page, storytellers were already striving toward juxtapositions that were easier to manage with images than with words” (29).

In these still early days of the “narrative computer”, as Murray puts it, she sees examples of twentieth-century novels, films, and plays that have been pushing the boundaries of linear storytelling. She views the future as multiform stories, “linear narratives straining against the boundary of predigital media like a two-dimensional picture trying to burst out of its frame” (29). In this case, she’s using multiform to describe a narrative that presents a single plotline or situation in multiple versions – her example is It’s a Wonderful Life, but any movie with divergent timelines would work. One example that comes to mind is the Community episode that plays exactly into this construct, when the characters explore several different timelines, including the dreaded darkest timeline.

Another important part of this book for the study of narrative is the idea of immersion. This means the audience feels like a participant in the story, as if being transported into it. There are fictional versions of exactly this, like her titular holodeck from Star Trek, but also in The Matrix. “When we enter a fictional world, we do not merely ‘suspend’ a critical faculty; we also exercise a creative faculty. We do not suspend disbelief so much as we actively create belief. Because of our desire to experience immersion, we focus our attention on the enveloping world and we use our intelligence to reinforce rather than to question the reality of the experience” (110).

“We bring our own cognitive, cultural, and psychological templates to every story as we assess the characters and anticipate the way the story is likely to go” (110).

“Such immersive stories invite our participation by offering us many things to keep track of and by rewarding our attention with a consistency of imagination” (111).

Murray gives an example of a language program in Paris which included a working telephone that students could access by stepping into an apartment through a photographed space. The story was mostly told through pre-recorded video segments, but the inclusion of the telephone where they could call pre-approved numbers became a favorite part of the experience because it was a functional virtual object that offered accomplishment for a specific goal.

“As the digital art medium matures, writers will become more and more adept at inventing such belief-creating virtual objects and at situating them within specific dramatic moments that heighten our sense of immersed participation by giving us something very satisfying to do” (112).

Murray also brings up agency within digital environments; people like to feel that something will happen if they double-click on a folder on their desktop. But agency usually isn’t a big part of narrative in the ways people are used to experiencing it. One form of agency common to digital environments is spatial navigation. This is definitely true of video games, where players can choose their movements through digital landscapes.

An Overview of Google Analytics

By Eric Cruet

Google Analytics is a free service offered by Google that generates detailed statistics about a website’s visitors and traffic. It will show you who is visiting your website, where they came from, and what they searched for to find you [1].

In addition to traffic breakdown, interpretation of Google Analytics can also show you how visitors are engaging with your site by reporting on key areas such as:

  • A measure of your best content, by indicating the most popular pages of your site
  • Visitor traffic over specified periods of time, giving you a feel for how “sticky” your site is and how many visitors come back for future visits
  • The length of time spent on your site

At a high level, the reports break down into three areas:

  • Visitors – characteristics such as browser, new vs. returning user, and originating location
  • Traffic – origins such as keywords, referrers, and pages
  • Content – effectiveness measures such as bounce rate, paths, and navigation summary

To get started, all you need is a Google account. Once you go to the site, you sign up, add the name of the website you want to track, and it will generate a tracking ID and the tracking code you will need to add to each page that you want to generate statistics on [2].

This is what the tracking code looks like in my attempt to track visits to my CCTP 903 blog:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-41934475-1', 'georgetown.edu');
ga('send', 'pageview');

</script>

This code needs to be pasted into the HTML header (the <head> section) of each page you want to track. There is a specific procedure for doing this within WordPress. Unfortunately, when I attempted it, I did not have administrator privileges to copy the code into the correct location (I copied it in the wrong place and was not able to get any statistics). Consult the systems administrator of the website you need to monitor.

Once the code is installed, there are numerous variables that can be set up in the dashboard section of the program. Below you will find links to all the measurements available through the Core Reporting API [3]. Use this reference to explore all the dimensions and metrics available. Besides the ability to call these programmatically from code, the majority can also be set up for real-time monitoring using the dashboard. Click a category name to see its dimensions and metrics by feature:

  • Visitor
  • Session
  • Traffic Sources
  • AdWords
  • Goal Conversions
  • Platform / Device
  • Geo / Network
  • System
  • Social Activities
  • Page Tracking
  • Internal Search
  • Site Speed
  • App Tracking
  • Event Tracking
  • Ecommerce
  • Social Interactions
  • User Timings
  • Exception Tracking
  • Experiments
  • Custom Variables
  • Time
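
Some of these features, such as Event Tracking, map onto simple calls against the ga() object created by the snippet above. The following is a minimal sketch (the element ID and the category/action/label strings are made up for illustration, not taken from my site) of how an extra interaction could be recorded:

<script>
// Assumes the analytics.js snippet shown earlier has already run on this page.
// Record a custom event when a hypothetical "Download syllabus" link is clicked.
var link = document.getElementById('syllabus-link');
if (link) {
  link.addEventListener('click', function () {
    // analytics.js signature: ga('send', 'event', eventCategory, eventAction, eventLabel)
    ga('send', 'event', 'Downloads', 'click', 'CCTP 903 syllabus');
  });
}
</script>

Events recorded this way would then show up under the Event Tracking dimensions and metrics listed above.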

Google Analytics offers a host of compelling features and benefits for everyone from senior executives and professionals in marketing, advertising, and politics to social media and content developers. It’s free and easy to get started. If you want to see what Google Analytics can do first-hand, take the (short) tour.

References:
[1] http://www.google.com/analytics/features/index.html
[2] https://www.udemy.com/getting-started-with-google-analytics/
[3] Ledford, J. L., Teixeira, J., & Tyler, M. E. (2010). Google analytics. Wiley.
 

An Examination of Virtual Communities

The Virtual Community: Homesteading on the Electronic Frontier has proven to be a useful text in fleshing out the background research for my topic. Howard Rheingold has quite a bit to say about what constitutes a virtual community and about its differences from, similarities to, and overlap with tangible communities. I had intended to round this post out with additional material of this sort, since the book is fairly old. However, the relevant points still apply, and the other references I found over the last few days branched a bit too far from my interests. Born Digital by John Palfrey and Urs Gasser and The Human Factor by Kim Vicente were two of the things I looked into, but since I have enough material from The Virtual Community and they don’t quite mesh with it, I’ll hold off on posting about them until next week.

By way of getting into the topic, Rheingold gives a definition of virtual communities in the introduction, saying they “are social aggregations that emerge from the Net when enough people carry on those public discussions long enough, with sufficient human feeling, to form webs of personal relationships in cyberspace” (5). This does overlap somewhat with Kim Vicente’s topic, if only because it brings into focus how people use technology. However, Vicente’s book comes more from an engineering standpoint, while Rheingold takes an anthropological look at virtual communities. He talks about his personal experiences as part of virtual communities to flesh out his descriptions, and they sound very familiar. For instance, he states that “my virtual communities also inhabit my life. I’ve been colonized, my sense of family at the most fundamental level has been virtualized” (10). He uses these anecdotes to make detailed comparisons between virtual worlds and the tangible communities I mentioned before.

I think it is worthwhile to describe some of the specific similarities Rheingold draws between virtual and “real world” communities. He talks about how they create value and discusses some of the things individuals exchange in virtual communities, stating that “Reciprocity is a key element of any market-based culture, but the arrangement I’m describing feels to me more like a kind of gift economy in which people do things for one another out of a spirit of building something between them, rather than a spreadsheet-calculated quid pro quo” (57). This description, like most in Rheingold’s book, makes a quite positive tie-in with offline culture. He also makes this positive comparison to society: “People in virtual communities use words on screens to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, exchange knowledge, share emotional support, make plans, brainstorm, gossip, feud, fall in love, find friends and lose them, play games, flirt, create a little high art and a lot of idle talk” (3). He makes the argument that community is made up of individuals and that virtual communities are simply based in a different medium.

He also discusses the differences between the communities throughout the book. Part of the purpose of the book, in fact, is to look into “the ways virtual communities are likely to change our experience of the real world as individuals and communities” (4). The primary aspect that strikes me as distinct from offline communities is the idea of a groupmind. He writes that using his online community gives him the feeling of “tapping into this multibrained organism of collective expertise” (110). He also makes a very basic comparison, saying that “The places I visit in my mind, and the people I communicate with from one moment to the next, are entirely different from the content of my thoughts or the state of my circle of friends before I started dabbling in virtual communities” (10).

As I mentioned, he doesn’t only discuss black-and-white similarities and differences, but also aspects that represent a crossover between the communities. One thing I found interesting was a type of virtual governance, though not of a virtual community. The example he gives is as follows: “Santa Monica’s system has an active conference to discuss the problems of the city’s homeless that involves heavy input from homeless Santa Monica citizens who use public terminals” (10-11). I read a more recent article about a similar effort in the city of Tallinn, Estonia. One key point from the article is that officials say they had to create an “e-government.” The reason for this, as explained by Jaan Priisalu, director general of the Estonian Information Systems, is that “We are a small nation, and at the same time we have to develop a government that has same functionality as the big countries.” Other areas of overlap are hybrid uses of the networking technology. A specific example the author gives is that “Virtual communities are places where people meet, and they also are tools; the place-like aspects and tool-like aspects only partially overlap” (56). He describes how this would look practically, saying “If, in my wanderings through information space, I come across items that don’t interest me but I know would interest one of my worldwide affinity group of online friends, I send the appropriate friend a pointer or simply forward the entire text” (57). This implies a social contract inherent in virtual worlds, both similar to and distinct from “real life.” Rheingold describes this social contract, saying it is “supported by a blend of strong-tie and weak-tie relationships among people who have a mixture of motives and ephemeral affiliations” (57). He brings the focus back around to value building when he states that accessing the network “is about more than simple fact-finding. It is also about the pleasure of making conversation and creating value in the process” (61).

Now, to rein this all in a bit toward what I’m focusing on, I’ll close by looking at Rheingold’s discussion of MUDs, or multi-user dungeons. These are rather old, but the concepts relate to current massively multiplayer online games. I’m looking into background information on my topic because the gaming communities I’m interested in are just specific types of virtual communities. The author says as much when he claims that “MUDs are living laboratories for studying the first-level impacts of virtual communities – the impacts on our psyches, on our thoughts and feelings as individuals” (146). Part of the goal of this exercise is to see how “the second-level impacts of phenomena like MUDs on our real life relationships and communities lead to fundamental questions about social values in an age when so many of our human relationships are mediated by communications technology” (146). Once again, value is built when he describes how people tend to feel about their avatars, or the characters they create for gaming communities. He says, “More than just your imaginary character is at stake. [Its] fate will influence the virtual lives of other characters who represent real friends in the material world” (145). He also gets down to the most basic fundamentals of what virtual worlds are, human interactions mediated by technology, when he says they are “imaginary worlds in computer databases where people use words and programming languages to improvise melodramas, build worlds and all the objects in them, solve puzzles, invent amusements and tools, compete for prestige and power, gain wisdom, seek revenge, indulge greed and lust and violent impulses” (145). One of the questions brought up by this particular type of virtual community is whether or not the participants “have a life.” The author compares fandom to the communication addiction evidenced by some online gamers, saying that “The phenomenon of fandom is evidence that not everyone can have a life as ‘having a life’ is defined by the mainstream, and some people just go out and try to build an alternate life” (167). A fascinating claim related to how people build alternate lives in virtual worlds is that “latent selves are liberated by technology” (170).

The book also addresses how people interact with technology: “The technology that makes virtual communities possible has the potential to bring enormous leverage to ordinary citizens at relatively little cost. […] But the technology will not in itself fulfill that potential; this latent technical power must be used intelligently and deliberately by an informed population” (4). I mention this at the end of this rather long post because it reminds me of one of my favorite internet activists and authors, Cory Doctorow.

Improving Shakespeareremixed.com

As I have researched remix theory, the culture surrounding Shakespeare, and, consequently, remixed works of Shakespeare, I have compiled a set of changes to make Shakespeareremixed.com more effective. The goal of the website is to create a community of Shakespeare remixers and to be an educational tool for teachers who need to find new ways to teach Shakespeare. Some of the changes I have in mind are purely logistical, but many have come from the research I have completed.

Home Page: The home page should have a clearer description of the purposes of the website. It should also house a running feed of information coming from different Shakespeare social media sites.

Social Media: I will add a social media aspect to the website with a Twitter account, Facebook page, and possibly an Instagram account.  This may be the easiest way to create buzz around the website.

Fanfiction: As I was researching Shakespeare remixes, I came across a rather expansive collection of fanfiction, but there were not any websites dedicated to Shakespeare and fanfiction. Most websites that contained fanfiction were general fanfiction databases covering many different genres, with Shakespeare being just one of them.

Analysis: At this point, the website is mostly dedicated to amateur remixes.  It seems like a huge oversight to forego the addition of professional remixes.  The website may also act as a center for the analysis of what most would consider professional remixes.  This could be very useful for the educational aspect of the website as teachers may use the analysis of remixes to determine the right materials for their classrooms.

Increased Educational Materials: The website only has a few homework assignments and forums for teachers and students to discuss their projects.  I would like to add more to the educational side of the website.  This includes the aforementioned analysis of professional Shakespeare remixes, like big-budget movies, as well as listing places to visit Shakespeare remixes.  This should be a malleable aspect of the website.

Reorganize Remixes: I need to find a more effective way to organize the remixes on the website. There is currently a tab for each piece of literature, but this is a bit overwhelming and does not complement the user experience.

The end goal is for Shakespeareremixed.com to be an easy-to-use website which will both build a community of remixers and act as a tool for educators.

Modeling Neuron Electrokinetics using Markov Models

By Eric Cruet

Ion Channel Kinetics

Continuing from the previous post: the fundamental function of neurons in the brain is detection. They receive thousands of different input signals from other neurons and are tuned to detect the patterns specific to their function. A simplistic analogy is the thermostat in an oven. When you set the oven to preheat at 350 degrees, the sensor samples the temperature until it reaches the specified threshold temperature; then it “fires” a signal and the alarm goes off. In the same fashion, the neuron has a threshold and “fires” a signal to adjacent neurons only if it detects an input significant enough to cross that threshold. This signal is known as the action potential, or spike.

Synapses are the connectors between sending and receiving neurons, dendrites are the “branches” that integrate all the inputs to the neuron, and the part of the axon very close to the output end of the neuron (the axon hillock) is where the threshold activity takes place. The farthest end of the axon branches out and turns into inputs to other neurons, completing the next chain of communication. See the image below.

Neuron Cell Structure

The bottom line is understanding the neuron’s fundamental functionality as a detection mechanism: it receives and integrates inputs and determines whether its threshold has been exceeded; if so, it triggers an output signal.
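
To make the detector analogy concrete, here is a minimal sketch (my own illustration, not from the sources) of a neuron treated as a simple threshold unit: it sums weighted inputs and “fires” only if the integrated total crosses its threshold.

<script>
// Minimal sketch of the neuron-as-detector idea: sum weighted inputs
// and "fire" only if the integrated signal exceeds a threshold.
function neuronFires(inputs, weights, threshold) {
  var net = 0;
  for (var i = 0; i < inputs.length; i++) {
    net += inputs[i] * weights[i];   // integrate the inputs
  }
  return net > threshold;            // detect: threshold exceeded -> fire
}

// Example: three excitatory inputs (positive weights) and one inhibitory input (negative weight)
console.log(neuronFires([1, 1, 1, 1], [0.4, 0.3, 0.5, -0.6], 0.5)); // true
</script>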

Now let’s briefly cover some basic biochemistry, since Markov models simplify and simulate ion channel kinetics. Ion channels are where some of the vital functions involved in triggering the signal occur.

There are three major sources of input signals to the neuron [4]:

    1. Excitatory inputs: these are the more prevalent type of input from other neurons (approximately 85% of all inputs). Their effect excites the receiving neuron, which makes it more likely to exceed its threshold and “fire,” or trigger a signal. These inputs are signalled via a synaptic channel called AMPA, which is opened by the neurotransmitter glutamate. AMPA receptors are non-selective cationic channels that allow the passage of Na+ and K+ and therefore have an equilibrium potential near 0 mV (millivolts).
    2. Inhibitory inputs: comprising the other 15% of inputs, they have the opposite effect of the excitatory inputs. They make the neuron less likely to fire, which makes the integration of inputs much more robust (by keeping the excitation under control). Specialized neurons in the brain called inhibitory interneurons accomplish this function. These inputs are signalled via GABA (gamma-aminobutyric acid) synaptic channels, via the GABA neurotransmitter, which causes ion channels to open and allow the flow of either negatively charged Cl– (chloride) ions into the cell or positively charged K+ (potassium) ions out of the cell.
    3. Leak inputs: technically not considered inputs, since they are always active. However, they are similar to inhibitory inputs in that they counteract excitation and keep the neuron in balance. They receive their signalling via K+ (potassium) channels.

The interaction between these elements in a cell creates what is known as the membrane potential. Membrane potential (also transmembrane potential or membrane voltage) is the difference in electrical potential between the interior and the exterior of a biological cell. Typical values of membrane potential range from –40 mV to –80 mV. These values are a result of differences in the concentrations of ions (Na+/K+/Cl–) on opposite sides of the cellular membrane.
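
For reference (this relation is not quoted in the sources for this post, but it is the standard textbook formula behind these numbers), the equilibrium potential produced by a concentration difference of a single ion species is given by the Nernst equation:

E_\mathrm{ion} = \frac{RT}{zF} \ln \frac{[\mathrm{ion}]_\mathrm{out}}{[\mathrm{ion}]_\mathrm{in}}

where R is the gas constant, T the absolute temperature, z the ion’s charge, and F Faraday’s constant. For K+ at body temperature this works out to roughly –90 mV, which is why the resting membrane potential sits in the negative range quoted above.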

 

So we’ve covered the process by which the neuron “detects” various inputs based on chemistry. These chemical processes generate a difference in potential (charge, in mV) across the cell membrane. In simplified terms, the rate, direction, and amount of change in this potential are what determine whether a neuron will exceed its threshold. A brief overview of mathematical models for neuron ion channel kinetics follows.

Hodgkin-Huxley

The first and most widely used neuron model connected to the Markov kinetic approach comes from Hodgkin and Huxley’s 1952 work [2], which was based on data from the squid giant axon. We note, as before, our voltage-current relationship, this time generalized to include multiple voltage-dependent currents:

C_\mathrm{m} \frac{d V(t)}{d t} = -\sum_i I_i (t, V).

Each current is given by Ohm’s law (derived from the basic relation I = \frac{V}{R}, where I is current, V is voltage, and R is resistance, the inverse of the conductance g) as

I(t,V) = g(t,V)\cdot(V-V_\mathrm{eq})

where g(t,V) is the conductance over time, or inverse resistance, which can be expanded in terms of its constant average \bar{g} and the activation and inactivation fractions m and h, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by

g(t,V)=\bar{g}\cdot m(t,V)^p \cdot h(t,V)^q

and our fractions follow the first-order kinetics

\frac{d m(t,V)}{d t} = \frac{m_\infty(V)-m(t,V)}{\tau_\mathrm{m} (V)} = \alpha_\mathrm{m} (V)\cdot(1-m) - \beta_\mathrm{m} (V)\cdot m

with similar dynamics for h, where we can use either τ and m_\infty or α and β to define our gating fractions.

With such a form, all that remains is to individually investigate each current one wants to include. Typically, these include inward Ca2+ and Na+ input currents and several varieties of K+ outward currents, including a “leak” current. The end result can be, at the small end, 20 parameters which one must estimate or measure for an accurate model [1]. At the time, this could not be computed. This was the starting point for a subsequent series of studies, all attempting to simplify the neuron model.
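
As a toy illustration of these gating kinetics (a simplified sketch, not the full multi-parameter model), the fraction m can be stepped forward in time with a simple Euler integration of dm/dt = α(V)(1 - m) - β(V)m. The rate functions below are the standard textbook Hodgkin-Huxley fits for the sodium activation gate, in the modern convention where rest is near -65 mV:

<script>
// Hodgkin-Huxley rate functions for the Na+ activation gate m
// (V in mV, rates in 1/ms); Euler integration of dm/dt = alpha*(1-m) - beta*m.
function alphaM(V) { return 0.1 * (V + 40) / (1 - Math.exp(-(V + 40) / 10)); }
function betaM(V)  { return 4 * Math.exp(-(V + 65) / 18); }

function simulateGate(V, m0, dt, steps) {
  var m = m0;
  for (var i = 0; i < steps; i++) {
    m += dt * (alphaM(V) * (1 - m) - betaM(V) * m);
  }
  return m;
}

// Example: clamp the membrane at -20 mV and watch m relax toward its
// steady-state value m_inf = alpha / (alpha + beta).
var V = -20;
console.log(simulateGate(V, 0.05, 0.01, 1000).toFixed(3));   // ~0.876
console.log((alphaM(V) / (alphaM(V) + betaM(V))).toFixed(3)); // 0.876
</script>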

In 2008, James P. Keener, a mathematics researcher at the University of Utah, published a paper entitled “Invariant Manifold Reductions for Markovian Ion Channel Dynamics.” He proposed using Markov jump processes to model the transitions between ion channel states [3]. These Markov models had previously been used in conductance-based models to study the dynamics of electrical activity in nerve cells, cardiac cells and muscle cells.

In summary, what Dr. Keener proved is that the classical Hodgkin-Huxley formulations of potassium and sodium channel conductance are exact solutions of Markov models, even though there was no means of demonstrating this computationally when the equations were first written down. This means that the solutions of the Hodgkin-Huxley equations and the solutions of a full Markov model of neuron electrokinetic activity with an 8-state sodium channel and a 4-state potassium channel (after several milliseconds during which initial transients decay) are exactly the same, even though the former is a system of four differential equations and the latter is a system of 13 differential equations.
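
To see the flavor of this correspondence, consider the simplest possible case (a deliberately reduced sketch of my own, with one two-state channel type rather than the 8-state and 4-state schemes discussed in the paper). Simulating many independent channels that jump between closed and open states with rates α and β produces an open fraction that settles onto the same steady state as the deterministic gating equation above:

<script>
// Toy Markov jump simulation: N independent two-state channels, each flipping
// closed->open with rate alpha and open->closed with rate beta (per ms).
// After transients decay, the simulated open fraction hovers around the
// deterministic steady state alpha / (alpha + beta).
function simulateChannels(N, alpha, beta, dt, steps) {
  var open = new Array(N).fill(false);
  for (var t = 0; t < steps; t++) {
    for (var i = 0; i < N; i++) {
      if (open[i]) {
        if (Math.random() < beta * dt) open[i] = false;   // open -> closed
      } else {
        if (Math.random() < alpha * dt) open[i] = true;   // closed -> open
      }
    }
  }
  return open.filter(Boolean).length / N;  // fraction of channels open at the end
}

var alpha = 2.0, beta = 0.5;                                // per-ms transition rates
console.log(simulateChannels(10000, alpha, beta, 0.01, 2000).toFixed(3));
console.log((alpha / (alpha + beta)).toFixed(3));           // deterministic limit: 0.800
</script>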

There are a lot of pieces to the cognitive neuroscience puzzle, and this is only one of many theoretical frameworks for approaching the complex subject of brain function. One drawback of a computational cognitive approach is that it essentially designs the very functionality it tries to explain; it is still very useful, but limited in what it can ultimately explain. It has also been one of the most successful approaches to cognitive function precisely because it models, at a higher level, a system that itself uses mathematics and logic, with those same tools. The ultimate goal of this post is to pique the curiosity of those who are unfamiliar with the subject and to share what I’ve learned with those who have already acquired an interest.

“The larger the island of knowledge, the longer the shoreline of wonder.”

Ralph Washington Sockman (1889 – 1970)

 

References:

 

[1] Goldwyn, J. H., & Shea-Brown, E. (2011). The what and where of adding channel noise to the Hodgkin-Huxley equations. PLoS Computational Biology, 7(11), e1002247.
[2] Hodgkin, A. L., & Huxley, A. F. (1952). Propagation of electrical signals along giant nerve fibres. Proceedings of the Royal Society of London. Series B, Biological Sciences, 140(899), 177-183.
[3] Keener, J. P. (2009). Invariant manifold reductions for Markovian ion channel dynamics. Journal of Mathematical Biology, 58(3), 447-457.
[4] O'Reilly, R. C., Munakata, Y., Frank, M. J., & Hazy, T. E. (2012). Computational Cognitive Neuroscience. Wiki Book.

Closer look at earlier texts

Since beginning to synthesize relevant material last week, I’ve gone back over some of my previous readings more closely. I’ve done this in order to look at the sources as they relate to the history of my broader topic and how that informs my more specific question about what constitutes a virtual community. I’ve been considering Spreadable Media, Virtual Justice, and Remix: Making Art and Commerce Thrive in the Hybrid Economy. Spreadable Media suggests that communities form around shared interests and content. The authors offer traditional social networking sites as examples, but their best example was YouTube, which focuses almost entirely on content. While more comprehensive virtual worlds are not explicitly for sharing created content, they do provide a place where people can engage in shared interests. Also, as Lessig mentioned, some participants build on the code of the virtual world, and others create videos using images from games like World of Warcraft.

Another aspect of Spreadable Media I found interesting but didn’t go into depth on was its assessment of cultural participation. The authors take a broad perspective, stating, “we think audiences do important work beyond what is being narrowly defined as ‘production’ here – that some of these processes marked as ‘less active’ involve substantial labor that potentially provides value according to both commercial and noncommercial logic” (171). Addressing the other side of the argument, the authors cite a Forrester survey whose subjects were U.S. adults online, finding that “52 percent were ‘actual creators’ of so-called user-generated content, Van Dijk and Nieborg conclude, ‘The active participation and creation of digital content seems to be much less relevant than the crowds they attract. […] Mass creativity, by and large, is consumptive behavior by a different name’” (171). I’m inclined toward the authors’ argument, because it seems like the more nuanced perspective.

I believe that in filling in the background on my topic, I will find arguments for why virtual communities are either extremely similar to or very different from offline communities. I’m sure someone argues for the happy medium as well, which is where I think it falls. The important thing, which Spreadable Media addresses, is the human interaction. The authors quote Douglas Rushkoff, who says that “Content is just a medium for interaction between people” (216). They emphasize that the consumer can take on a broader and more discerning role. The authors reference John Fiske, who states that “If the cultural commodities or texts do not contain resources out of which the people can make their own meanings of their social relations and identities, they will be rejected and will fail in the marketplace” (217). They place importance on the favor of the consumer, but they make it clear that these actions don’t have to be earth-shattering to have a place, stating that “many of the choices people make in spreading content, just as described, are not grand and sweeping gestures but rather simple, everyday actions such as ‘liking’ a Facebook status update” (216).

Not only is participation described in varying levels of intensity, but there are also any number of reasons for people to create and/or spread media. One that fits in well with my interest in virtual worlds is fan-made media. The authors describe it as something that “is shared among a community with common passions. […] fans understand their works as a contribution to the community as a whole. Fandom nurtures writers and artists, putting the deepest emphasis on that material which most clearly reflects the community’s core values” (220). While it didn’t directly follow the idea of fan-made media, I found it applicable that the authors mention that people “do not simply pass along static texts; they transform the material through active production process or through their own critiques and commentary, so that it better serves their own social and expressive needs” (311).

This post feels a little bit like I’m blabbing, but I’m going to organize the ideas a little more and get some more background information for next time.