Category Archives: Week 6

To Paris With Love


I went to Paris to study the African American exodus to Paris from 1900 to 1950. However, with every day spent walking the romantic streets of Saint-Michel, reading books about the African American experience in Paris at the Jardin du Luxembourg, and having conversations with history buffs and street musicians, my idea slowly but surely changed.

With every meeting and every tour, more questions emerged. I initially wanted to pluck two people from each discipline (arts, education, athletics, etc.) and examine why they came to Paris. Then I narrowed the study to musicians. I started to fall in love with the idea of the musician community and of remembering the greats by playing their standards. However, I realized that this was too macro a topic for the course, so I went back to the drawing board and sifted through my notes, my reflections, and my personal journal entries.

I went back to my first time in Montmartre and then to my various tours, and I discovered that something was missing: the physical memory manifested in monuments and signs. After discussing this issue with people, I was told that monuments are reserved for citizens of Paris. After mulling that idea over, I got it. I understood the sacred space that is Paris; however, this study made me want to think about other ways of remembering that equally respect both parties and cultures. One answer would be to start my own museum in Paris dedicated to the exodus, or diaspora. Since that might not happen, a more feasible idea would be some sort of music preservation, though I don't yet know how it would work or what effects it would have. I'm excited to talk about this soon (after the holidays), either via e-mail or Google Hangout.

The Louvre

The Paris excursion has too many takeaways to name: actually staying there by myself, meeting up with strangers for discussions and mild debates, finishing the applicable research, and traveling to other countries while abroad. One takeaway was actually experiencing the liberation that I had read so much about. The biggest takeaway, though, was creating a topic for future research. I now know that I can't take a macro idea and make it work in a few months. This final project has laid the foundation for a bigger research approach dealing with culture and musical memory.

Abbey Road



Jamón in Madrid


Reading during brunch


Shakespeare: The Remixer

It should be fairly well known that Shakespeare himself was a remixer.  He took stories which had been passed down for centuries and turned them into plays.  Much of the reason we treat his plays as archives, although they come from earlier sources, is that they have stayed intact for centuries.  Shakespeare's use of language and iambic pentameter created a standard which is still studied today.

The Riverside Shakespeare anthology explains that the best-known piece of Shakespeare's work, Romeo and Juliet, is very much a remix.  It came from a story, which came from a story, which came from a poem, which may (or may not) have come from a real-life situation.

The same anthology explains that Macbeth also has a rich history, similar to that of Romeo and Juliet.  The archive Shakespeare is believed to have worked from is Holinshed's Chronicles (Macbeth). The character of Lady Macbeth is seemingly based upon Seneca.  It is believed that some of the changes in the play can be attributed to appeasing the king at the time.

Although this may seem apparent, Shakespeare’s histories are also remixed.  They come from what are believed to be true historical events, but Shakespeare adds his own spin to every situation, as he likely could not know the exact conversations that took place between the people featured as characters in his plays. Shakespeare became so famous for remixing histories that they are now considered a classification of his plays.

Knowing that Shakespeare was a remixer may help students grasp the idea of remix and also understand why Shakespeare is so remarkable.  With this in mind, I plan to make a portion of my website dedicated to explaining remixes which Shakespeare created.  This can act as another tool for students to use.

Shakespeare, William, G. Blakemore Evans, and J. J. M. Tobin. The Riverside Shakespeare. Boston: Houghton Mifflin, 1997. Print.

History and Meaning

Continuing with some more of White's ideas, meaning becomes a crucial factor in historical narratives.

An important notion that White raised for me, one that will apply to my paper about Gatsby, is that what fills out a mere list of events is the presence of a social center, which charges events with moral or ethical significance. Without a social center, events aren't really ranked by importance, which might stifle any desire to work them into a narrative. Without a social center with moral or ethical implications, in White's perspective, fights are merely fights instead of epic battles or hero journeys. Taking it a step further, White talks about how Hegel thought that a "genuinely historical account had to display not only a certain form, that is, the narrative, but also a certain content, namely, a political-social order" (15).

Hegel wrote,
“…it is the State which first presents subject-matter that is not only adapted to the prose of History, but involves the production of such history in the very progress of its own being” (16).

This brings up the idea of authority, as well. White says, “this raises the suspicion that narrative in general, from the folktale to the novel, from the annals to the fully realized ‘history’, has to do with the topics of law, legality, legitimacy, or, more generally, authority” (17).

"The more historically self-conscious the writer of any form of historiography, the more the question of the social system and the law which sustains it, the authority of this law and its justification, and threats to the law occupy his attention" (17).

“…every historical narrative has as its latent or manifest purpose the desire to moralize the events of which it treats” (18).

"And this suggests that narrativity, certainly in factual storytelling and probably in fictional storytelling as well, is intimately related to, if not a function of, the impulse to moralize reality, that is, to identify it with the social system that is the source of any morality that we can imagine" (18).

"Common opinion has it that the plot of a narrative imposes a meaning on the events that comprise its story level by revealing at the end a structure that was immanent in the events all along…The reality of these events does not consist in the fact that they occurred but that, first of all, they were remembered and, second, that they are capable of finding a place in a chronologically ordered sequence" (23).

"In order for an account of the events to be considered a historical account, however, it is not enough that they be recorded in the order of their original occurrence. It is the fact that they can be recorded otherwise, in an order of narrative, that makes them at once questionable as to their authenticity and susceptible to being considered tokens of reality" (23).

"The authority of the historical narrative is the authority of reality itself; the historical account endows this reality with form and thereby makes it desirable, imposing upon its processes the formal coherency that only stories possess" (23).

"The historical narrative, as against the chronicle, reveals to us a world that is putatively 'finished,' done with, over, and yet not dissolved, not falling apart. In this world, reality wears the mask of a meaning, the completeness and fullness of which we can only imagine, never experience. Insofar as historical stories can be completed, can be given narrative closure, can be shown to have had a plot all along, they give to reality the odor of the ideal" (23).

"I cannot think of any other way of 'concluding' an account of real events; for we cannot say, surely, that any sequence of real events actually comes to an end, that reality itself disappears, that events of the order of the real have ceased to happen. Such events could only have seemed to have ceased to happen when meaning is shifted, and shifted by narrative means, from one physical or social space to another" (26).

"…this value attached to narrativity in the representation of real events arises out of a desire to have real events display the coherence, integrity, fullness, and closure of an image of life that is and can only be imaginary. The notion that sequences of real events possess the formal attributes of the stories we tell about imaginary events could only have its origin in wishes, daydreams, reveries" (27).

White's concluding point is that life presents itself more as annals and chronicles, mere sequences without beginning or end, than as tidy stories that tell us their meaning.

Narration and History

For my last post before my paper, I read a couple of pieces by Hayden White, who focused heavily on historical representation and narration. Both covered similar ideas, but the one I found the most in was "The Value of Narrativity in the Representation of Reality."

“So natural is the impulse to narrate, so inevitable is the form of narrative for any report of the way things really happened, that narrativity could appear problematical only in a culture in which it was absent – absent or, as in some domains of contemporary Western intellectual and artistic culture, programmatically refused” (5).

"…narrative might well be considered a solution to a problem of general human concern, namely, the problem of how to translate knowing into telling, the problem of fashioning human experience into a form assimilable to structures of meaning that are generally human rather than culture-specific" (5).

White continued that while people can't necessarily understand how people from other cultures think, they probably can understand a story that comes from another culture. He quoted Roland Barthes, who said that "narrative…is translatable without fundamental damage" in a way that philosophy or discourse is not.

“Narrative is a metacode, a human universal on the basis of which transcultural messages about the nature of a shared reality can be transmitted” (6).

In talking about history, White broke down the different ways that historians can choose to tell historical accounts, and he really stressed the idea that they do choose how to tell them. They don't necessarily have to choose narrative; to White, non-narrative forms include the meditation, the anatomy, and the epitome. Some historians refused narrative because they felt that the events they wanted to tell weren't suited to representation in narrative form. The key distinction seems to lie in telling a story with a beginning, middle, and end.

"While they certainly narrated their accounts of the reality that they perceived, or thought they perceived, to exist within or behind the evidence they had examined, they did not narrativize that reality, did not impose upon it the form of a story" (6).

To White, there is a difference “between a discourse that openly adopts a perspective that looks out on the world and reports it and a discourse that feigns to make the world speak itself and speak itself as a story” (7).

For a historical narrative, the narrator stays objective by being invisible. In this sense, events are recorded chronologically, and no one seems to speak. The story basically tells itself. Interestingly, he talks about imaginary events, or fiction, and questions whether they can be represented as speaking for themselves, as well. It seems like he views the story as being more attuned to this sort of self-narration, and real events as needing a narrator.

"But real events should not speak, should not tell themselves. Real events should simply be: they can perfectly well serve as the referents of a discourse, can be spoken about, but they should not pose as the tellers of a narrative" (8).

White thinks that the narrativization of history is difficult because it tries to give real events the form of a story. The difficulty lies in the fact that history is fluid and continues; there is no direct beginning, middle, or end to history. He questions what authors choose from historical records, and what gives those choices meaning. Attributing meaning is a big part of his argument.

“What wish is enacted, what desire is gratified, by the fantasy that real events are properly represented when they can be shown to display the formal coherency of a story?” (8).

"In the enigma of this wish, this desire, we catch a glimpse of the cultural function of narrativizing discourse in general, an intimation of the psychological impulse behind the apparently universal need not only to narrate but to give to events an aspect of narrativity" (8).

He views this as a desire to merge the imaginary with the real, and usually in order to make sense of the world or to give life meaning.

He gives three different forms of historical accounts: the annals, the chronicle, and the history proper. The annals lack any narrative component and are basically just a list of events in chronological order. A chronicle, in his view, wants to tell a story and usually starts to tell one, but falls short, ending abruptly or failing to achieve narrative closure.

He shows old historical records that merely list births, deaths, and social events. There are no descriptions of them, just recorded facts. In White's perspective, "Social events are apparently as incomprehensible as natural events. They seem to have the same order of importance or un-importance. They seem merely to have occurred, and their importance seems to be indistinguishable from the fact that they were recorded. In fact, it seems that their importance consists of nothing other than the fact that they were recorded" (12).

This is an example of an annal. To take that further, there must be a plot, according to White. He defines plot as a structure of relationships in which events are given meaning as parts of a whole. The idea that events are part of a whole is a big factor in White's argument. He continues,

"It is this need or impulse to rank events with respect to their significance for the culture or group that is writing its own history that makes a narrative representation of real events possible" (14).

One of the ways that White says cultures build up significance, making things as narrative-like as possible, is by filling in the gaps between big events.

"The presence of these blank years in the annalist's account permits us to perceive, by way of contrast, the extent to which narrative strains to produce the effect of having filled in all the gaps, to put an image of continuity, coherency, and meaning in place of the fantasies of emptiness, need, and frustrated desire that inhabit our nightmares about the destructive power of time" (15).


Approaches to Cognitive Science

By Eric Cruet

The genesis of cognitive science as a collaborative endeavor of psychology, computer science, neuroscience, linguistics, and related fields began in the 1950s; however, its first major institutions (a journal and a society) were established in the late 1970s.

A key contributor to the emergence of cognitive science, psychologist George Miller, dates its birth to September 11, 1956, the second day of a Symposium on Information Theory at MIT. Computer scientists Allen Newell and Herbert Simon, linguist Noam Chomsky, and Miller himself presented work that would point each of their fields in a more cognitive direction.

In the late 1970s, human experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes saw progressive elaboration and coordination. Earlier, in the late 1950s, John McCarthy and Marvin Minsky at MIT had developed a broad-based agenda for the field they named artificial intelligence (AI), and the convergence of all of the above led to the establishment of the multi-disciplinary field we recognize today.

Today, the inclusion of network theory, complexity science, advances in imaging modalities and visualization, and the ability to process entire data sets as opposed to small samples promise to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. Below, we describe a mix of classic and current approaches to cognitive science.

Distributed Cognition

Distributed cognition is a branch of cognitive science proposing that cognition and knowledge are not confined to an individual; rather, they are distributed across objects, individuals, artifacts, and tools in the environment.  Early work in distributed cognition was motivated by the observation that cognition is not only a socially (as well as materially and temporally) distributed phenomenon, but one that is essentially situated in real practices [1].  The theory does not posit some new kind of cognitive process.  Rather, it represents the claim that cognitive processes in general are best understood as situated in and distributed across concrete socio-technical contexts.

Traditional cognitive science theory emphasizes an internalism that marginalizes (some would argue ignores) the role of external representation and of problem solving in cooperative contexts.  Traditional approaches to description and design in human-computer interaction have similarly focused on users' internal models of the technologies with which they interact.  In distributed cognition, by contrast, the theoretical focus is on how cognition is distributed across people and artifacts, and on how it depends on both internal and external representations.

The Cognitive Niche

Humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law.  This is surprising, given that opportunities to exercise these talents did not exist in the hunter-gatherer societies where humans evolved.

The "cognitive niche" theory states that humans evolved to occupy a mode of survival based on manipulating the environment through causal reasoning and social cooperation. In addition, the psychological faculties that evolved to prosper in the cognitive niche can be co-opted to abstract domains by processes of metaphorical abstraction and productive combination like the ones found in the use of human language [2].


This theory claims several advantages as an explanation of the evolution of the human mind. It incorporates facts about the cognitive, affective, and linguistic mechanisms discovered by modern scientific psychology rather than appealing to vague, prescientific black boxes like "symbolic behavior." On this account, the cognitive adaptations comprise the "intuitive theories" of physics, biology, and psychology; the adaptations for cooperation comprise the moral emotions and mechanisms for remembering individuals and their actions; and the linguistic adaptations comprise the combinatorial apparatus for grammar and the syntactic and phonological units that it manipulates [3].



Connectionism

Connectionism is an alternative computational paradigm to the one provided by the von Neumann architecture that has inspired classical cognitive science [4]. Originally taking its inspiration from the biological neuron and neurological organization, it emphasizes collections of simple processing elements in place of the centrally controlled manipulation of symbols by rules that is typical of classical cognitive science. The simple processing elements in connectionism are typically capable of only rudimentary calculations (such as summation).

A connectionist network is a particular organization of processing units into a whole network. In most connectionist networks, the systems are trained using a learning rule to adjust the weights of all connections between processors in order to obtain a network that performs some desired input-output mapping.
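The training idea above can be made concrete with a minimal sketch, not any specific model from the literature: a single processing unit, capable only of summation and thresholding, whose connection weights are adjusted by a delta-style learning rule until the unit realizes a desired input-output mapping. The function name, learning rate, and target mapping are all illustrative assumptions.

```python
def train_delta_rule(patterns, n_inputs, epochs=200, lr=0.1):
    """Train a single threshold unit with a delta-style learning rule.

    patterns: list of (input_vector, target) pairs defining the
    desired input-output mapping.
    """
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in patterns:
            # The unit's only capability is a rudimentary calculation: a sum.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1.0 if activation > 0 else 0.0
            error = target - output
            # Learning rule: nudge each connection weight toward the target.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Illustrative mapping: the logical AND of two inputs.
patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_delta_rule(patterns, n_inputs=2)
```

The weights are not programmed in; they emerge from repeated error-driven adjustment, which is the sense in which connectionist networks are trained rather than built.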

Connectionist networks offer many advantages as models in cognitive science [5]. However, in spite of the fact that connectionism arose as a reaction against the assumptions of classical cognitive science, the two approaches have many similarities when examined from the perspective of Marr’s tri-level hypothesis [6].

There are many forms of connectionism, but the most common forms use neural network models.

Though there are a large variety of neural network models, they almost always follow two basic principles regarding the mind:

  1. Any mental state can be described as an (N)-dimensional vector of numeric activation values over neural units in a network.
  2. Memory is created by modifying the strength of the connections between neural units. The connection strengths, or “weights”, are generally represented as an (N×N)-dimensional matrix.
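The two principles can be illustrated with a toy Hopfield-style associative memory, in which a mental state is an N-dimensional vector of +1/-1 activations and memory lives in an N×N weight matrix built by a Hebbian outer-product rule. This is a sketch under those assumptions, not a model taken from the sources cited here.

```python
def hebbian_store(patterns, n):
    """Store states (N-dim vectors of +/-1) in an N x N weight matrix
    via the Hebbian outer-product rule: units active together wire together."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    """Recover a stored state from a noisy cue by repeated sum-and-threshold."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]   # a "mental state" as an 8-dim vector
W = hebbian_store([stored], n=8)        # memory as connection strengths
noisy = list(stored)
noisy[0] = -noisy[0]                    # corrupt one unit's activation
recovered = recall(W, noisy)            # the weights restore the stored state
```

Nothing in the matrix stores the pattern at any single location; the memory is distributed across all of the connection strengths at once.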

Connectionists are generally in agreement that recurrent neural networks (networks whose connections can form a directed cycle) are a better model of the brain than feedforward neural networks (networks with no directed cycles). Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear dynamical systems approaches.

Theoretical Neuroscience

Theoretical neuroscience is the attempt to develop mathematical and computational theories and models of the structures and processes of the brains of humans and other animals. It differs from connectionism in trying to be more biologically accurate by modeling the behavior of large numbers of realistic neurons organized into functionally significant brain areas. In recent years, computational models of the brain have become biologically richer, both with respect to employing more realistic neurons such as ones that spike and have chemical pathways, and with respect to simulating the interactions among different areas of the brain such as the hippocampus and the cortex. These models are not strictly an alternative to computational accounts in terms of logic, rules, concepts, analogies, images, and connections, but should complement other models to illustrate how mental functions can be translated and performed at the neural level.

Learning is arguably the central problem in theoretical neuroscience. It is possible that other problems, such as the understanding of representations, network dynamics, and circuit function, could be resolved once we know the details of the learning process that, together with the action of the genome, produces these phenomena.

Another tremendous challenge is “the invariance problem”.  Our mental experience suggests that the brain encodes and manipulates ‘objects’ and their relationships, but there is no neural theory of how this is done. We recognize, for example, a cup regardless of its location, orientation, size, or other variations such as lighting and partial occlusion. How do brain networks recognize a cup despite these complicated variations in the image data? How is the invariant part (‘cup-ness’) encoded separately from the variant part?

This is the 'holy grail' problem of the computer vision community, and we aim to tackle it by fortifying our learning algorithms with insights from the mathematics surrounding the concept of invariance. Invariance may also be seen in motor scenarios, cups being a class of things that we can drink from (what J. J. Gibson called an affordance).
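One very simplified way to see how an invariant part might be encoded separately from a variant part is translation-invariant template matching: taking a max over all shifts yields the invariant quantity ('cup-ness': is the feature present?), while the argmax yields the variant one (where it is). This toy sketch is only an illustration of the idea, not a neural theory; the signals and template are made up.

```python
def correlate(window, template):
    """Rudimentary similarity score: sum of element-wise products."""
    return sum(w * t for w, t in zip(window, template))

def invariant_match(signal, template):
    """Translation-invariant recognition via a max over all shifts.

    Returns (best score, best position): the score is invariant to where
    the feature sits; the position captures the variant part separately."""
    n, m = len(signal), len(template)
    scores = [correlate(signal[i:i + m], template) for i in range(n - m + 1)]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return scores[best], best

template = [1, -1, 1]           # stand-in for a 'cup' feature
scene_a = [0, 0, 1, -1, 1, 0]   # feature at position 2
scene_b = [1, -1, 1, 0, 0, 0]   # same feature, shifted to position 0

score_a, where_a = invariant_match(scene_a, template)
score_b, where_b = invariant_match(scene_b, template)
# Equal scores despite different positions: the 'what' is factored from the 'where'.
```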



[1] Wilson, R. A., & Keil, F. C. (Eds.). (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.

[2] Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. Proceedings of the National Academy of Sciences, 107(Supplement 2), 8993-8999.

[3] Whiten, A., & Erdal, D. (2012). The human socio-cognitive niche and its evolutionary origins. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2119-2129.

[4] Bechtel, W., & Abrahamsen, A. A. (2002). Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks (2nd ed.). Malden, MA: Blackwell.

[5] Dawson, M. R. W. (1998). Understanding Cognitive Science. Oxford, UK: Blackwell.

[6] Dawson, M. R. W. (2004). Minds and Machines: Connectionism and Psychological Modeling. Malden, MA: Blackwell.


AO- Week 6: Evaluating Hypermedia Learning Environments

In Multimedia for Learning: Methods and Development (2001), Stephen Alessi and Stanley Trollip identify several types of hypermedia learning environments (including “the museum”) sharing three essential features:

  1. “A database of information
  2. Multiple methods of navigation, including hyperlinks
  3. Multiple media (e.g., text, audio, video) for presentation of the information” (142)

Focusing on the database of information as the foundation for the hypermedia learning environment, Alessi and Trollip examine several key factors: “media types, size and organization of the database, resolution, modifiability, visible and internal structure, platform independence, and language independence” (150). These factors provide useful tools for examining the Google Art Project’s database of information.

The Google Art Project uses several media types, including text, still pictorial images, and zoomable gigapixel images. The project features images of a vast range of art objects; the hypermedia learning environment offers a way for learners to make sense of this large database. The authors argue that "the size of the database is important in that it should impact the design of navigation methods and features to support learning… the more content, the more important it is to provide a variety of flexible navigation features and to provide features to facilitate motivation, memory, comprehension, and other aspects of learning" (152). The Google Art Project database uses several methods of organization, including "collections," "artists," "artworks," and "user galleries." Using multiple organizational methods "can facilitate a learner's efficiency and use of the database" (Alessi and Trollip, 152).

High-resolution images are a point of pride for the Google Art Project, which boasts gigapixel images for many of its shared art objects. Currently, the project does not offer users many options for modifiability. As noted in an earlier post, the Google Art Project does not allow users to annotate artworks; adding this feature would facilitate conversations and the collaborative creation of new knowledge. While users can save objects in their own "galleries" (essentially bookmarking artworks of particular personal interest), users are unable to add their own text (as the authors note, "the equivalent of making marginal notes, underlining, and highlighting"). It is impossible to evaluate the interaction of visible and internal structures because the internal structures of the project are not made available to the public.
Because it is web-based, platform independence is not really an issue for the Google Art Project, though there are several similar issues to consider: how well does the project perform when viewed in different browsers (Chrome, Internet Explorer, and Firefox), and how does its operation change when viewed on a touch-screen device as opposed to traditional point-and-click navigation? As far as I can tell, the project does not allow for language independence; it seems that the site is only available in English.

Further analysis of the Google Art Project as a learning environment will likely be included in my final project.

AO- Week 6: Mediating the Visitor Experience through Gigapixels

In a previous post, I discussed how museums are using technology to bridge the gap between the physical and virtual museum spaces using the Web Lab at the Science Museum London as a case study. I wanted to further explore specific digital technologies museums are using to enhance knowledge production among audiences and their applications for integrating physical and virtual museum visitor experiences. In “Exploring Gigapixel Image Environments for Science Communication and Learning in Museums,” (2013) Ahmed Ansari, Illah Nourbakhsh, Marti Louw, and Chris Bartley describe the Stories in the Rock exhibit – a collaborative project between the Carnegie Museum of Natural History, Carnegie Mellon University, and the University of Pittsburgh. Stories in the Rock uses zoomable user interfaces (ZUIs) to “offer a spatial way to display and organize large amounts of information in a single interface using scroll, pan, and zoom controls; text, images, graphics, audio, and video can be embedded at spatial locations and zoom levels within an image, creating localized sites for commenting and conversation” (Ansari et al.). The authors identify the challenge addressed by this project: “how to develop intuitive interaction spaces that cater to disparate types of users, giving them deeper agency and choice in how to move through content in ways that are personally relevant and support coherent meaning making.”

The article identifies five “promising affordances” of gigapixel image-based platforms: 

1.“Deep looking and noticing in a shared observational space.” The authors cite Nancy Proctor, Digital Editor and Head of Mobile Strategy & Initiatives at the Smithsonian Institution, as she describes in her discussion of the Google Art Project, “the gigapixel scans by which artworks are rendered into digital data streams are enabling intimate encounters with images at visual depths not possible even in the galleries.”

2. "Democratizing a tool of science." According to Ansari et al., "Websites like […] invite gigapixel image makers from all over the world to upload their content to be viewed, annotated, geolocated, commented on, and shared globally." This affordance is not utilized by the Google Art Project, preserving the role of museum curators as gatekeepers. Museum professionals maintain the most traditional "curating" role by continuing to select which pieces will be available for public view rather than allowing users to add their own gigapixel images of artworks which they find interesting. While many of the "old masters" are owned by museums and therefore must be included through the museum, many new forms of contemporary art, such as graffiti and street art, could be considered "open source" and could easily be captured and uploaded by users.

3. "Encouraging participatory learning." While some could argue that the Google Art Project does encourage audience participation in the creation of knowledge by allowing users to guide their own experience, there is great room for improvement in this category. Ansari et al. use the North Carolina State University Insect Museum as a case study to demonstrate how "museum scientists and users could interact and have conversational exchanges about insect biology." Currently, the Google Art Project does not allow users to annotate artworks; adding this feature would facilitate conversations and the collaborative creation of new knowledge.

4.”Offering new visuospatial ways to curate collections and environments.” The authors cite The Nature Valley Trail View as a case study, “enabling users to virtually explore and walk along trails at the Grand Canyon, Great Smoky Mountains and Yellowstone National Parks… along the way contextual “call outs” provide additional, interactive media overlays for a more dynamic experience.” Google Art Project offers a similar experience through its “museum view” available for many of the participating institutions. The screen capture below shows two “call outs” with information about the sculptures on display and their artists.


First floor of the Musee d’Orsay in Google Art Project’s “Museum View”

5.”Enabling context-dependent annotations and mediation.” Ansari et al cite the website for Canadian design firm Castor as an example of how “embedded information can be revealed depending on user interactions and locations within a three-dimensional space, dynamically tying information to user exploration.” Currently, Google Art Project is not making use of this technology. In order to do so, Google would need to encourage curators to include “call outs” on individual aspects of each work of art which appear as the user zooms in on a particular section of the artwork; this approach would still allow users to guide their own experience and select only information that is of interest to them, while providing some structure to aid the learning environment. Such an approach would “help museum visitors notice details, pick out salient features, and make personal connections to topics of interest” (Ansari et al).
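The “call outs revealed on zoom” approach described above can be sketched in code. The sketch below is hypothetical — it is not Google Art Project or Castor code, and all names (`Annotation`, `visible_annotations`, the `min_zoom` parameter) are invented for illustration — but it shows the core logic: annotations are tied to regions of the artwork and surfaced only once the viewport is small enough (i.e., the user has zoomed in far enough) and overlaps the annotated region.

```python
# Hypothetical sketch of context-dependent annotations: call-outs appear
# only when the user zooms into the region of the artwork they describe.

from dataclasses import dataclass

@dataclass
class Annotation:
    label: str
    x: float  # left edge of the annotated region (fraction of image width, 0..1)
    y: float  # top edge (fraction of image height, 0..1)
    w: float  # region width as a fraction of image width
    h: float  # region height as a fraction of image height

def visible_annotations(annotations, view_x, view_y, view_w, view_h, min_zoom=4.0):
    """Return annotations whose region overlaps the viewport, but only
    once the user has zoomed in at least `min_zoom` times."""
    zoom = 1.0 / max(view_w, view_h)  # smaller viewport => deeper zoom
    if zoom < min_zoom:
        return []                     # fully zoomed out: keep the canvas clean
    def overlaps(a):
        return not (a.x + a.w < view_x or view_x + view_w < a.x or
                    a.y + a.h < view_y or view_y + view_h < a.y)
    return [a for a in annotations if overlaps(a)]

notes = [
    Annotation("brushwork detail", 0.10, 0.10, 0.05, 0.05),
    Annotation("signature", 0.90, 0.95, 0.05, 0.04),
]

# Zoomed all the way out: no call-outs. Zoomed into the top-left corner:
# only the call-out for that corner appears.
print(len(visible_annotations(notes, 0.0, 0.0, 1.0, 1.0)))                   # 0
print([a.label for a in visible_annotations(notes, 0.05, 0.05, 0.15, 0.15)])  # ['brushwork detail']
```

This keeps the user in control of their own exploration — information is only offered about the part of the work they have chosen to examine closely.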

In my next post, I plan to examine how the design of Stories in the Rock provides an excellent example of how multimedia can be used for learning. I will also do an assessment of the Google Art Project as a hypermedia learning environment.

Abstraction and Simulation

By Eric Cruet


Complex computational models typically require large amounts of processing power and produce highly detailed output that is difficult for users to understand. Building abstracted simulation and visualization systems that simplify both computation and output can help overcome this barrier.

Furthermore, the output of such simulations, which often consists of an agonizingly detailed trace of system events, can be difficult to understand at a global or intuitive level. These two considerations, economy of resources (time, cycles) and intelligibility, argue for the development of abstracted simulation systems of reduced complexity that ignore certain interactions or collapse over some dimensions. Abstracting a detailed simulation can simplify both computation and output, providing an accurate picture of events and efficient utilization of resources.

Since W. S. Gosset, a brewer at Guinness who published as “Student” and is sometimes considered the “father of statistics,” used simulation to support his derivation of the t-statistic [2], simulation and visualization in scientific research have been driven by interaction between the following:

1. Inspiration, which may be motivated by sheer curiosity as well as specific theoretical or practical problems

2. Intuition, which may guide the search for a problem solution or lead to new discoveries when reasoning alone is insufficient to ensure continued progress

3. Abstraction, which encompasses the modeling and analysis techniques required to build a simulation model, design experiments using that model, and draw appropriate conclusions from the observed results

4. Experimentation, which is computer based and thus differs fundamentally from other empirical scientific work because of the efficiency improvements that are achievable using Monte Carlo methods

Henri Poincaré was a polymath, known in mathematics as “The Last Universalist” because he excelled in every field of the discipline as it existed during his lifetime. In his essay Mathematical Discovery [1], he wrote of inspiration and detailed verification (emphasis added): “I have spoken of the feeling of absolute certainty which accompanies the inspiration; in the cases quoted this feeling was not deceptive, and more often than not this will be the case. But we must beware of thinking that this is a rule without exceptions. Often the feeling deceives us without being any less distinct on that account, and we only detect it when we attempt to establish the demonstration.”

From the opposite perspective, abstraction encompasses both simulation modeling and the simulation analysis required to do the following:

• Build a model

• Design experiments using that model

• Draw appropriate conclusions from the observed results

Simulation-based experimentation differs fundamentally from all other types of empirical scientific work because of the large efficiency improvements that are achievable: we have complete control of the experimental conditions under which each alternative scenario is simulated.
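One concrete payoff of controlling the experimental conditions is the technique of common random numbers: running two alternative scenarios on the same random stream makes their difference far less noisy than two independent runs. The sketch below is an illustration under assumed conditions (a toy single-server queue with invented parameters), not a reference to any system discussed in this post.

```python
# Common random numbers: comparing two scenarios of a toy single-server
# queue with shared vs. independent random streams.

import random
import statistics

def total_wait(service_factor, rng, n=200):
    """Cumulative waiting time of n customers in a single-server queue."""
    t_free = 0.0   # time at which the server next becomes free
    arrival = 0.0
    waited = 0.0
    for _ in range(n):
        arrival += rng.expovariate(1.0)                  # inter-arrival time
        service = service_factor * rng.expovariate(1.2)  # service time
        start = max(arrival, t_free)
        waited += start - arrival
        t_free = start + service
    return waited

def estimate_difference(paired, reps=100):
    """Estimate wait(slow server) - wait(fast server), with either
    common (paired) or independent random number streams."""
    diffs = []
    for r in range(reps):
        rng_a = random.Random(r)
        rng_b = random.Random(r) if paired else random.Random(10_000 + r)
        diffs.append(total_wait(1.0, rng_a) - total_wait(0.9, rng_b))
    return statistics.mean(diffs), statistics.stdev(diffs)

mean_crn, sd_crn = estimate_difference(paired=True)
mean_ind, sd_ind = estimate_difference(paired=False)
print(f"common random numbers: mean diff {mean_crn:.1f}, sd {sd_crn:.1f}")
print(f"independent streams:   mean diff {mean_ind:.1f}, sd {sd_ind:.1f}")
```

Because the paired runs see identical arrival patterns, the scenario difference is estimated with a far smaller standard deviation — the kind of efficiency gain that is impossible in field experiments, where conditions cannot be replayed.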



NEURON is a simulation environment for modeling individual neurons and networks of neurons. As of version 7.3, NEURON can handle reaction-diffusion models and integrate diffusion functions into models of synapses and cellular networks.

NEURON [3] models individual neurons as sections that the program subdivides into individual compartments, instead of requiring the user to create the compartments manually. The primary scripting language used to interact with it is hoc, but a Python interface is also available. Programs can be written interactively in a shell or loaded from a file. NEURON supports parallelization via the MPI protocol, and starting with version 7.0, parallelization is also possible via internal multithreaded routines for use on computers with multiple cores.
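The idea behind those compartments can be illustrated without NEURON itself. The sketch below is plain Python, not NEURON’s API (in NEURON one would create a `Section` and set its discretization), and all parameter values are invented: a passive cable section is split into compartments, each a leaky patch of membrane coupled to its neighbors by an axial conductance.

```python
# Pure-Python sketch of compartmental modeling (not NEURON's API):
# a passive cable section split into nseg compartments, each a leaky
# membrane patch coupled to its neighbors by an axial conductance.

def simulate_cable(nseg=5, steps=2000, dt=0.01,
                   g_leak=0.3, e_leak=-65.0, g_axial=2.0, i_inj=10.0):
    v = [e_leak] * nseg                # membrane potential per compartment (mV)
    for _ in range(steps):
        dv = []
        for i in range(nseg):
            i_m = g_leak * (e_leak - v[i])           # leak current
            if i > 0:                                 # coupling to the left
                i_m += g_axial * (v[i - 1] - v[i])
            if i < nseg - 1:                          # coupling to the right
                i_m += g_axial * (v[i + 1] - v[i])
            if i == 0:
                i_m += i_inj                          # current injected at one end
            dv.append(dt * i_m)                       # explicit Euler step
        v = [vi + dvi for vi, dvi in zip(v, dv)]      # synchronous update
    return v

v = simulate_cable()
print([round(x, 1) for x in v])
# Depolarization is largest at the injected end and decays along the cable.
assert v[0] > v[-1] > -65.0
```

Subdividing the section more finely (a larger `nseg`) gives a smoother spatial profile at a higher computational cost — exactly the accuracy/economy trade-off that motivates abstracted simulation.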

Currently, NEURON is used as the basis for instruction in computational neuroscience in many courses and laboratories around the world.


Generative E-Social Science for Socio-Spatial Simulation (GENESIS) [6]

Generative social science is widely regarded as one of the grand challenges of the social sciences. The term was popularised by Epstein and Axtell of the Brookings Institution in their 1996 book Growing Artificial Societies: Social Science from the Bottom Up, where they define it as simulation that “… allows us to grow social structures in silico demonstrating that certain sets of micro-specifications are sufficient to generate the macro-phenomena of interest”. It is consistent with the development of the complexity sciences, with the development of decentralised and distributed agent-based simulation, and with ideas about social and spatial emergence. It requires large-scale databases for its execution as well as powerful techniques of visualisation for its understanding and dissemination. It provides experimental conditions under which key policy initiatives can be tested on large-scale populations simulated at individual level. It is entirely coincident with the development of e-social science, which provides the infrastructure on which such modelling must take place.
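The “growing” of macro-structure from micro-specifications can be illustrated with the classic Schelling segregation model — used here as a standard stand-in for generative social science, not as code from Epstein and Axtell’s Sugarscape or the GENESIS project, and with all parameter values invented. Each agent follows one micro-rule (relocate when too few neighbors share your type), and clustering emerges at the macro level.

```python
# Schelling segregation: a micro-rule (move when fewer than `threshold`
# of your neighbors match your type) generates macro-level clustering.

import random

def schelling(size=20, fill=0.8, threshold=0.3, steps=30, seed=42):
    rng = random.Random(seed)
    cells = [rng.choice([1, 2]) if rng.random() < fill else 0
             for _ in range(size * size)]

    def neighbors(idx):
        r, c = divmod(idx, size)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr or dc) and 0 <= r + dr < size and 0 <= c + dc < size:
                    yield cells[(r + dr) * size + c + dc]

    def same_frac(idx):
        same = other = 0
        for n in neighbors(idx):
            if n == cells[idx]:
                same += 1
            elif n:
                other += 1
        return same / (same + other) if same + other else 1.0

    def mean_mix():  # average share of same-type neighbors across agents
        fracs = [same_frac(i) for i in range(size * size) if cells[i]]
        return sum(fracs) / len(fracs)

    before = mean_mix()
    for _ in range(steps):
        movers = [i for i in range(size * size)
                  if cells[i] and same_frac(i) < threshold]
        empties = [i for i in range(size * size) if cells[i] == 0]
        for i in movers:
            if not empties:
                break
            j = empties.pop(rng.randrange(len(empties)))   # random empty cell
            cells[j], cells[i] = cells[i], 0               # relocate the agent
            empties.append(i)
    return cells, before, mean_mix()

grid, mix_before, mix_after = schelling()
print(f"mean same-type neighbor share: {mix_before:.2f} -> {mix_after:.2f}")
```

No agent seeks segregation, yet the population-level pattern typically becomes markedly more clustered — the “macro-phenomena of interest” grown from micro-specifications, in miniature.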

In closing, advances in simulation are driven by the continuous interplay of the following:

Our sources of inspiration—both internal and external—for the discovery of solutions to practical problems as well as the theory and methodology required to attack those problems;

The intuition that we acquire from careful experimentation with well-designed simulation models, from intense scrutiny of the results, and from allowing the unconscious to work on the results;

The conscious follow-up work in which the emerging flashes of insight into the problem at hand are expressed precisely, verified completely, and connected to other simulation work.


[1] Poincaré, Henri. (1914) 1952. “Mathematical Discovery.” In Science and Method, 46–63. Translated by Francis Maitland, with a preface by Bertrand Russell. London: Thomas Nelson and Sons. Reprint, New York: Dover Publications.


[3] Brette R., Rudolph M., Carnevale T., Hines M., Beeman D., Bower J., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23, 349–398. doi:10.1007/s10827-007-0038-6.

[4] Drewes R. (2005). Brainlab: A Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NCS Environment. Master’s thesis, University of Nevada, Reno.

[5] Drewes R., Zou Q., Goodman P. (2009). Brainlab: a Python toolkit to aid in the design, simulation, and analysis of spiking neural networks with the neocortical simulator. Front. Neuroinform. 3:16. doi:10.3389/neuro.11.016.2009.