Appendix 1:  Bibliography of Foundational Publications

Ackley, D., G. Hinton & T. Sejnowski 1985. A learning algorithm for Boltzmann machines. Cognitive Science 9.1: 147-169.

Bruer, J. 1997. Education and the brain: A bridge too far. Educational Researcher 26.8: 4-16.

Carey, S. 2009. The origin of concepts.  MIT Press.

Chomsky, N.  1959. Review of B. F. Skinner Verbal Behavior.  Language 35.1: 26-58.

Chomsky, N. & M.P. Schützenberger 1963. The algebraic theory of context-free languages. In P. Braffort & D. Hirschberg, eds. Computer programming and formal systems. North Holland, pp. 118-161.

Dehaene, S. 1997. The number sense: How the mind creates mathematics.  Oxford UP.

Elman, J.L. 1990. Finding structure in time. Cognitive Science 14.2: 179-211.

Gallistel, C. R. 1990. The organization of learning.  MIT Press.

Hebb, D.  1949. The organization of behavior.   Wiley.

Hopfield, J.J. 1987. Learning algorithms and probability distributions in feed-forward and feed-back networks. PNAS 84.23: 8429-8433.

Hubel, D. & T. Wiesel 1962. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal of Physiology 160: 106-154.

Jerne, N.K. 1985. The generative grammar of the immune system. Science 229: 1057-1059.

Lorenz, K. 1981. The foundations of ethology.  Springer.

Marr, D. 1982. Vision.  Freeman.

Maynard Smith, J. 1974. The theory of games and the evolution of animal conflicts.  J. Theoretical Biology 47: 209-221.

McClelland, J.L. et al. 1986. Parallel distributed processing: Explorations in the microstructure of cognition (volumes I & II). MIT Press.

OECD, 2007.  Understanding the brain: The birth of a learning science. Centre for Educational Research and Innovation.

O’Keefe, J. & L. Nadel 1978. The hippocampus as a cognitive map.  Oxford UP.

Saffran, J.R.,  R.N. Aslin & E.L. Newport 1996. Statistical learning by 8-month-old infants. Science 274: 1926–1928.

Shannon, C.E. 1948. A mathematical theory of communication.  Bell System Technical Journal 27.3: 379–423.

Spelke, E.S. 1994. Initial knowledge: Six suggestions.  Cognition 50: 431-445.

Tenenbaum, J. B. & T.L. Griffiths 2001. Generalization, similarity and Bayesian inference. Behavioral and Brain Sciences 24.4: 629-641.

Tinbergen, N. 1951. The study of instinct. Oxford: Clarendon Press.

Vapnik, V.N. 1999.  The nature of statistical learning theory.  Springer.



Appendix 2: The six centers

Centers (a-c) have been funded since 2004 and centers (d-f) since 2006.

a) LIFE: Learning in Informal and Formal Environments – PI Patricia Kuhl, University of Washington.  A central premise of LIFE’s work is that understanding and propelling learning requires attention to both informal and formal (e.g. K-12) learning environments and to how social context influences learning. The center’s goal is to develop and test principles about the social foundations of learning, including how people learn to innovate in modern cultures.

b) CELEST: Center for Learning in Education, Science, and Technology – PI Ennio Mingolla, Boston University.  Human brains adapt and learn rapidly from unexpected environmental contingencies. Combining experimental studies and computational modeling, research at CELEST focuses on brain mechanisms of learning.  Of particular interest is the role of neural dynamics in determining the coding of information, the limits on capacity, and the interactions and plasticity within and among brain regions.    A goal is to develop and design biologically-inspired technology.

c) PSLC: Pittsburgh Science of Learning Center – PI Ken Koedinger, Carnegie Mellon University.  PSLC focuses on teaching strategies and processes that yield robust learning, i.e. learning that transfers to novel circumstances, lasts over long periods, and prepares a learner for future learning.  To yield theoretically sound and useful principles of robust learning, PSLC created LearnLab, an international resource using intelligent cognitive tutors that teach and collect educationally relevant research data in science, mathematics and foreign language classrooms.  These, together with authoring tools available to teachers, facilitate a new level of experimentation, hypothesis testing, data mining, and discovery in classroom-based research.

d) SILC: Spatial Intelligence and Learning Center – PI Nora Newcombe, Temple University. The goal of SILC is to understand and improve human spatial intelligence: how spatial knowledge and reasoning processes are learned, how they interact with symbolic systems (e.g., language, maps, graphs and diagrams), how they contribute to reasoning and learning in non-spatial domains, and how they support learning in STEM disciplines.

e) VL2: Visual Language and Visual Learning Center – PI Thomas Allen, Gallaudet University.  VL2 investigates how humans acquire and use language and literacy when audition is not available.  To better understand how deaf individuals learn to read, VL2 researchers investigate the biological, linguistic, cognitive, sociocultural and pedagogical conditions that influence the acquisition of language and knowledge through the visual modality, and explore how such visually-based learning strategies can benefit others in educational practice.

f) TDLC: Temporal Dynamics of Learning Center – PI Garrison Cottrell, University of California-San Diego.  Research at TDLC aims at an integrated understanding of how time and the timing of events influence learning across multiple time scales, brain systems and social systems. The center’s goals are to create a new science of the temporal dynamics of learning, to use this understanding to transform educational practice, and to create a new collaborative research structure, a network of networks, to transform the practice of science.




Appendix 3:  Participants


David Lightfoot, Communication, Culture and Technology, Georgetown University

Sarah Inman, Communication, Culture and Technology, Georgetown University


Steering Committee:

Ralph Etienne-Cummings, Department of Electrical and Computer Engineering, The Johns Hopkins University

Morton Ann Gernsbacher, Vilas Research Professor and Sir Frederic Bartlett Professor, Department of Psychology, University of Wisconsin

Eric Hamilton, Educational Division, Pepperdine University

Barbara Landau, Department of Cognitive Science, The Johns Hopkins University

Elissa Newport, Department of Neuroscience, Georgetown University

David Poeppel, Professor of Psychology and Neural Science, New York University


David Andrews, School of Education, The Johns Hopkins University

Nitin Gogtay, National Institute of Mental Health

Ranu Jung, College of Engineering and Computing, Florida International University

Linda Smith, Distinguished Professor and Chancellor’s Professor, Psychological and Brain Sciences, Indiana University

Michael Stryker, W.F. Ganong Professor of Physiology, School of Medicine, University of California at San Francisco

Sharon Goldwater, Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh

Soo-Siang Lim, Science of Learning Centers Program, NSF

Gary Cottrell, TDLC, University of California San Diego

Pat Kuhl, LIFE Center, University of Washington

Ken Koedinger, PSLC, Carnegie Mellon University

Nora Newcombe, SILC, Temple University

Laura-Ann Petitto, VL2, Gallaudet University

Barbara Shinn-Cunningham, CELEST, Boston University


Mark Liberman, Christopher H. Browne Distinguished Professor, Departments of Linguistics and Computer and Information Science, University of Pennsylvania

Clifton Langdon, VL2, Gallaudet University

Julie Booth, Department of Psychology, Organizational, and Leadership Studies in Education, Temple University

Heather Ames, Center for Computational Neuroscience and Neural Technology (CompNet), Boston University



Appendix 4: Program

National Science Foundation
4201 Wilson Boulevard, Arlington, VA 22230
Stafford II, Room 555

Thursday, October 4

8:30am: Coffee and pastries

8:45am: David Lightfoot introduction

9am: Michael Stryker, Focusing on connections and signaling mechanisms to dissect cortical plasticity

9:30am: Nitin Gogtay, Normal and abnormal brain development: Insights from neuroimaging studies

10am: Ranu Jung, Co-adaptive learning for sensorimotor therapy

10:30am: Break

11am: Sharon Goldwater, Human and machine learning: A computational perspective

11:30am: Linda Smith, Building functional systems: Components, connectivity and cascades

12pm: David Andrews, Translational research in the educational application of the science of learning

12:30pm: Lunch

1:30pm: Soo-Siang Lim, Building for the future: Ideas, people and tools

2pm: Nora Newcombe, SILC, Creating a science of spatial learning

2:30pm: Pat Kuhl, LIFE, Social brains, human minds, and the science of learning

3pm: Ken Koedinger, PSLC, The knowledge-learning-instruction (KLI) framework: Bridging the science-practice chasm to enhance robust student learning

3:30pm: Break

4pm: Barbara Shinn-Cunningham, CELEST, Learning in complex systems

4:30pm:  Gary Cottrell, TDLC, Toward optimal learning dynamics

5pm: Laura-Ann Petitto, VL2, Innovations in our understanding of the science of learning from the Visual Language and Visual Learning Center’s explorations of the phonological brain and the bilingual mind

5:30-6pm: Discussion


Friday, October 5

8:30am: Coffee and pastries

9am-noon: Structured Discussion

Afternoon: Report Collaboration and Writing (Steering Committee)


Appendix 5:  One-pagers

In the order of presentation at the workshop

Focusing on connections and signaling mechanisms to dissect cortical plasticity

Michael P. Stryker

My thoughts about the science of learning start from the point of view that the engram, the result of learning, must consist of some reasonably specific set of changes in neural connections corresponding to the thing learned.  In the area of my own research, the development and plasticity of the central visual system, we have learned something about how to study problems in which neural activity operates to alter connections.  I believe that some of these approaches should also be pursued in order to understand learning.  Many of my thoughts on the role of neural activity in the development and plasticity of the mammalian central nervous system are outlined in a recent review [1].

Major achievement #1:  Map and connection formation in the developing visual system consists of both wiring and rewiring and may be an appropriate model for studying mechanisms that also underlie learning.  Learning something novel presumably involves the formation of new connections (“wiring”), and the differentiation of learning presumably involves an adjustment of those connections (“rewiring”).  It is spontaneous neural activity, rather than experience, that operates in association with intercellular chemical gradient signals to organize the initial functional and anatomical retinotopic maps.  Interfering with either the chemical signal or the structured pattern of neural activity produces a somewhat messy but basically normal map of azimuth in primary visual cortex (V1); interfering with both completely prevents map formation [2].  The normally orderly connections between brain areas form according to similar rules, although in the case we have studied the activity signal overcomes the chemical signal when they are in conflict [3].

Major achievement #2:  The experience-dependent rewiring of the visual cortex during a critical period in juvenile binocular mammals is perhaps the most dramatic form of activity-dependent plasticity in a circuit that is already fully formed and functional, and in those respects it resembles learning.  Monocular visual deprivation produces a series of changes in responses to the two eyes as well as a substantial rewiring of cortical circuitry.  We have discovered that the characteristic changes in response take place in three temporally distinct phases mediated by at least three distinct signaling mechanisms: an initial calcium- and NMDA-receptor-dependent loss of response to the deprived eye; a subsequent homeostatic synaptic scaling, mediated in part by tumor necrosis factor alpha signaling, that increases response to the open eye [4]; and finally a recovery of response and restoration of connections, mediated by BDNF-TrkB signaling, when normal vision is restored during the critical period [5].  Two-photon microscopy in vivo of genetically labeled neurons and presynaptic and postsynaptic proteins reveals quantitatively as well as qualitatively which changes are due to rewiring and which to changes in the efficacy of existing synapses.  In the study of learning, it seems possible that different experiences that give rise to different patterns of activity may analogously engage distinct mechanisms to regulate the same set of connections.

The biggest challenge for understanding the rewiring underlying most forms of learning is the identification of the neurons whose change in response accounts for the change in behavior.  The connections made and received by these neurons could then be examined, and the signaling mechanisms responsible for these changes elucidated.  One approach to this problem relies on the hope of increased expression of particular, to date mostly immediate-early, genes by cells responsible for learning [6].

Genetically encoded calcium indicators observed with two-photon microscopy in vivo also have the potential to identify responses in large numbers of neurons in surface structures like the neocortex, and to allow the identified neurons to be studied anatomically.  The challenge will be to demonstrate, in behavioral experiments, the causality of the neural changes observed in learning.

Links to all our papers on these topics at

[1] Espinosa, J.S. and Stryker, M.P. (2012) Development and plasticity of the primary visual cortex.  Neuron 75: 230-249.

[2]  Cang*, J.C., Niell*, C.M., Liu, X., Pfeiffenberger, C., Feldheim, D.A. and Stryker, M.P., (2008) Selective disruption of one Cartesian axis of cortical maps and receptive fields by deficiency in ephrin-As and structured activity. Neuron 57: 511-523.

[3]  Triplett, J.W., Owens, M.T., Yamada, J., Lemke, G., Cang, J., Stryker, M.P. and Feldheim, D.A. (2009) Retinal input instructs alignment of visual topographic maps.  Cell 139: 175-185.

[4]  Kaneko, M., Stellwagen, D., Malenka, R.C., and Stryker, M.P.  (2008). Tumor necrosis factor-alpha mediates one component of competitive, experience-dependent plasticity in developing visual cortex. Neuron 58: 673-680.

[5]  Kaneko, M., Hanover, J.L., England, P.M. and Stryker, M.P. (2008) TrkB kinase is required for recovery, but not loss, of cortical responses following monocular deprivation.  Nature Neuroscience 11: 497-504.

[6]  Silva, A.J. et al. (2009). Molecular and cellular approaches to memory allocation in neural circuits. Science 326: 391-395.




Normal and abnormal brain development:  Insights from neuroimaging studies

Nitin Gogtay

Recent advances in neuroimaging methodology have made it possible to gain unprecedented insights into human brain development.  When combined with prospectively acquired scans, these techniques allow mapping of anatomic (as well as functional) brain development across the life span.  This has, for the first time, allowed us to map and understand the maturational pattern of healthy brain development, which in turn can help us understand abnormal brain development in severe neuropsychiatric illnesses such as schizophrenia that are increasingly considered neurodevelopmental in origin. At the same time, it is important to recognize that the current resolution of anatomic neuroimaging modalities does not provide details at the molecular level, nor can it confirm a clinical diagnosis at the individual level; it can only provide a baseline observation for further research.  Two such achievements are considered here, along with two challenges that arise from this research.

Mapping normal cortical gray matter (GM) development from age 4 through 22 revealed, for the first time, that the primary cortical areas (e.g. primary motor or sensory cortices) were already mature by age 4, while more sophisticated higher-order cortical regions such as the dorsolateral prefrontal cortex had yet to mature by 22. This logical sequence of brain maturation had a direct translational impact: it provided strong evidence that the areas of the brain that deal with more sophisticated thinking are not yet fully mature in the early twenties, an argument cited by the Supreme Court in abolishing the juvenile death penalty.

Understanding healthy brain maturation allowed comparison with the maturational pattern in children with schizophrenia.  In these patients, brain development appeared to exaggerate the healthy maturational pattern, suggesting a lack of normal inhibitory controls and resulting in excessive GM loss.

Furthermore, healthy, non-psychotic (and hence un-medicated) full siblings of schizophrenia patients shared the brain abnormalities at early ages, but these ‘normalized’ by late adolescence.

This suggested, first, that the brain abnormalities in schizophrenia are genetically influenced, and second, more interestingly, that some other, as yet unknown, protective factors are at play in the healthy siblings who escape the disease.

This research has led to many further challenges, two of which are highlighted here.  First, what is the mechanism of both gray matter maturation and the excessive loss seen in schizophrenia?  Second, what are the protective factors that help the siblings remain disease-free and simultaneously normalize their genetically influenced brain abnormalities?  These questions are currently being explored at the NIMH and other centers.



Co-adaptive learning for sensorimotor therapy

Ranu Jung

“The most important trend in recent technological developments may be that technology is increasingly integrated with biological systems.  Many of the critical advances that are emerging can be attributed to the interactions between the biological systems and the technology.  The integration of technology with biology makes us more productive in the workplace, makes medical devices more effective, and makes our entertainment systems more engaging.  Our lives change as biology and technology merge to form biohybrid systems. … Some of the key developments in biohybrid systems have been in opening lines of communication between the engineered and the biological systems.” From “Merging Technology with Biology” in Biohybrid Systems: Nerves, Interfaces, and Machines, ed. Ranu Jung, 2011, Wiley-VCH Verlag GmbH & Co. KGaA.

In 2007, a workshop was held under the National Academies Keck Futures Initiative, “Smart Prosthetics: Exploring Assistive Devices for the Body and Mind.”  Among the initial challenges identified was the feasibility of developing smart prosthetic systems that go beyond replacing lost function to promoting repair of neural function by harnessing activity-dependent plasticity in the nervous system.

The paragraphs above present the broad challenges for developing a paradigm of co-adaptive learning for sensorimotor therapy directed at promoting repair or recovery after neurotrauma or neurological disability by enhancing plasticity in the nervous system. The therapy may also help promote healthy aging and prevent or postpone neurological decline.

Perhaps the biggest success story for the use of technology to replace lost sensory function is the cochlear implant.  By the end of 2010, over 200,000 people had received a cochlear implant, including more than 40,000 adults and 28,000 children in the US alone. While this implant was initially designed as a sensory prosthesis to replace lost auditory function, more recently the ability of cochlear implants to promote brain plasticity has become evident.

The enriched sensory environment provided by the device can help to promote plasticity, especially in the developing brain.  With existing cochlear implant technology, the system settings are set by a therapist and then manually adjusted periodically to accommodate the changes in the biological system.  If the device were to automatically adjust its settings, could speech comprehension improve?  That is, could performance be improved if the systems were co-adaptive?

Neural stimulation devices are also being used in other situations to promote plasticity more explicitly.  In the weeks and months after a traumatic injury such as a stroke or incomplete spinal cord injury, current rehabilitation practice seeks to engage mechanisms of activity-dependent plasticity to maximize functional gains.  Electrical activation of sensorimotor circuits can produce activity in structures targeted for adaptation.  In this situation, the device is less concerned with the specific task at hand (e.g. taking a step or picking up a cup) and more concerned with promoting the plasticity required for device-independent function.  For this application, once again, existing technology requires manual adjustment by a therapist.  If the device (stimulator or robot) were to automatically adjust its settings, could motor learning improve?

A primary challenge is to design biohybrid systems that can access and capture the biosignatures of the living system through limited spatiotemporal sampling and interface with the nervous system through sparse inputs. Given the sparseness of any existing or foreseeable interface, we must maximize our ability to interpret data and our ability to alter activity patterns.

A second challenge is to make the biohybrid system co-adaptive. To promote plasticity, the challenge is to influence the core bio-chemical machinery in a desired manner.  In this design, it is important to recognize that interfaces that influence the nervous system at one scale (e.g. molecular) and location effect changes across other scales and locations.  This ill-defined objective in promoting biological plasticity presents major challenges to endowing the technology with effective and efficient adaptive capabilities.



Human and machine learning: A computational perspective

Sharon Goldwater

Since the beginning of AI, there has been a tension between researchers who treat the problem of creating artificial intelligence primarily as an engineering task and those who study it in order to gain insight into human intelligence itself.  As the field matured and it became clear how difficult the goal of general AI was, the two groups of researchers diverged.  Machine learning and data mining researchers now build systems to solve specific AI subtasks that often have little to do with either general intelligence or human learning.  Cognitive modeling and computational linguistics researchers, meanwhile, seek to understand human learning by building computer simulations, but often not at a scale that would be useful as machine learning systems. Nevertheless, there have been some productive recent interactions between these two fields, and to make progress in both areas we should aim to increase interaction, as I argue below.  Note that my own research is primarily concerned with computational language learning, but I also include examples from other domains in describing the successes and challenges of each field and how the two fields have influenced, and could influence, each other.

1. Machine learning

Success: consumer applications.  In the last decade, applications based on machine learning and data mining have become ubiquitous and, for many, indispensable.  While these tools are not perfect, they have gone from research prototypes and toys for techno-geeks to mass-market products.  Examples include:

  • Machine translation (e.g., Google translate).
  • Automatic speech recognition (e.g., Dragon NaturallySpeaking, Siri).
  • Automatic essay grading for standardized tests.
  • Recommender systems (e.g., Netflix).
  • Behavior tracking and personalization (in, e.g., fraud detection, targeted advertising, smart search).

Challenge: remove the human from the loop.  Most of the above systems are based on much earlier methods: either supervised training (learning from human-annotated examples) or simple dimensionality reduction techniques (identifying correlations between dimensions of the data).  Their recent success is mostly due to scaling up the amount of training data and the speed of computers in processing this data.  However, annotated data is time-consuming and expensive to produce, and unavailable for many languages, tasks, and domains.  Dimensionality reduction is an unsupervised method, but inappropriate for tasks involving highly structured data, such as language or visual images.  A primary challenge for the next decade is to develop better unsupervised and semi-supervised learning methods to exploit unannotated data.

A promising way to achieve these goals is to return to a more cognitively inspired approach.  Humans still far outclass computers in learning from naturally occurring data and generalizing from sparse evidence.  To match this performance, we need to understand the inductive biases of the human mind and try to incorporate them into our systems.  Encouraging results in processing speech, text, and images come from methods such as deep neural networks, which assume that human inductive biases arise from the architecture of the brain (see Andrew Ng describing his work in an NPR article and a conference talk), and hierarchical Bayesian models, which define inductive biases at a more abstract mathematical level.  At the same time, we must consider whether novel sources of data (e.g. web/social media) or methods of feedback (e.g. “distant supervision”) could help to replace some of the other factors involved in human learning, such as social cues (see below).

2. Cognitive modeling

Success: Bayesian models.  Early work on human reasoning and induction often focused on humans’ apparent inability to behave consistently with normative logical and statistical principles.  In contrast, by adopting Bayesian probabilistic modeling methods from machine learning, more recent research has shown that human behavior often does conform to optimal probabilistic inference (see the Significance article).  One key insight we have gained from these models is how learning at multiple levels of abstraction can improve generalization and allow predictions based on very few data points, with models that “learn to learn”.  These results help to explain how certain biases that have been proposed to require innate specification (such as the “shape bias” in word learning and the hierarchical structure of syntax) could actually be learned.
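The way a few data points can drive strong generalization in such models can be illustrated with a toy simulation of the “size principle”, in the spirit of Tenenbaum & Griffiths (2001). The hypothesis space, uniform prior, and example numbers below are invented for illustration and are not taken from any of the models discussed:

```python
# Toy Bayesian generalization with the "size principle": hypotheses that
# cover fewer items gain belief faster as consistent examples accumulate.
# The hypothesis space and uniform prior are illustrative assumptions.

hypotheses = {
    "even numbers":    {n for n in range(1, 101) if n % 2 == 0},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
    "powers of 2":     {2, 4, 8, 16, 32, 64},
}

def posterior(data, hypotheses):
    """Posterior over hypotheses: uniform prior, likelihood (1/|h|)^n
    for a hypothesis consistent with all n examples, else zero."""
    scores = {}
    for name, h in hypotheses.items():
        consistent = all(x in h for x in data)
        scores[name] = (1.0 / len(h)) ** len(data) if consistent else 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# One example leaves several hypotheses in play; three examples sharply
# concentrate belief on the smallest consistent hypothesis.
one = posterior([16], hypotheses)
three = posterior([16, 8, 2], hypotheses)
```

After seeing only 16, “even numbers” retains real probability; after 16, 8, and 2, nearly all belief falls on “powers of 2”, mirroring human-like sharp generalization from sparse evidence.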

Challenge: social/emotional/attentional factors and individual differences. Current models tend to consider only the informational content of the input as presented to a generic learner.  A challenge for the future is to understand how to model additional psychological factors known to affect learning, such as social situation, emotion, and attention, as well as individual differences in learner performance.  One difficulty is determining whether these factors simply regulate information uptake (e.g., an inattentive learner just “misses” some of the data) or whether they qualitatively change the computations performed.  New findings will also have implications for machine learning: can social/emotional factors be effectively replaced by more or different kinds of data that are easily available to machines, or will we need to simulate emotions and social interaction for effective machine learning to occur?




Building functional systems: Components, connectivity and cascades

Linda B. Smith

What we know:
The human brain consists of many heterogeneous components.  Performing any task – reading, throwing a ball, making a sandwich – recruits a subset of these components.  In the real-time performance of a task, these components form a functional network.  The robust, inventive, and open-ended nature of human intelligence may be linked to five properties of these functional networks.

1. Coupled systems in a task drive change.  When component systems are coupled in a real-time task, they change as a product of that interaction (1,2). One cogent example concerns visual letter recognition and the writing of letters (3).  In adults, letter recognition, a high-precision skill, is linked to the functional specialization of regions of the visual cortex.  Experimental and functional imaging studies with pre-school children show that children taught to recognize letters by writing them, rather than by merely seeing them, developed more mature visual neural responses to purely visual information, a result showing that the coupling of motor and visual systems in writing letters led to changes in the visual system itself.  The real-time coupling of heterogeneous systems in a task does not just make a system; it changes the components.

2. Creating novel systems.  Remarkably, the human brain is capable of creating novel functional networks that become highly skilled and stable.  Humans read, write, do algebra, invent programming languages, and excel in chemistry only because of the plasticity of the human brain and its ability to build novel, task-specific functional networks (4).

3. A processing cascade. Within these networks, information is shared in a particular way that has been variously characterized as “interactive activation”, “incremental”, or a “dynamic cascade” (5). Components share information incrementally.  For example, in on-line language processing, the fraction of a second of sound that corresponds to “ba” activates a whole neighborhood of word candidates that stand in readiness for further information.  Moreover, the flow of information proceeds in all directions and over nested time scales, so given “ba” and the knowledge that the topic is a child’s party, the candidates “ball” and “balloon” will be activated more strongly than the candidates “ballot” and “ballet”.  Because the flow of information within these complex networks is continuous and interactive, these systems are robust, work even when the available information is less than optimal, and can be inventive. However, it also means that a sluggish component may degrade the performance (and the learning) of the whole network.
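This incremental, context-sensitive activation can be sketched in a few lines. The lexicon and the numeric “topic weights” below are hypothetical, chosen only to mirror the “ba”/child’s-party example; they are not a model from the text:

```python
# Toy "cohort + context" activation: a partial input activates every
# consistent word candidate; topical context then reweights the cohort.
# Lexicon and topic weights are hypothetical, for illustration only.

lexicon = ["ball", "balloon", "ballot", "ballet", "dog", "cake"]
party_topic = {"ball": 2.0, "balloon": 2.0, "cake": 2.0}  # words boosted by context

def activation(partial_input, topic_weights):
    """Normalized activation over candidates consistent with the input so far."""
    cohort = [w for w in lexicon if w.startswith(partial_input)]
    raw = {w: topic_weights.get(w, 1.0) for w in cohort}
    z = sum(raw.values())
    return {w: a / z for w, a in raw.items()}

acts = activation("ba", party_topic)
# In the party context, "ball" and "balloon" are more active than
# "ballot" and "ballet", though all four remain candidates.
```

All four “ba…” words stay in the cohort awaiting further sound, but the context weights tilt activation toward the topically likely candidates, as in the example above.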

4. Automaticity and precision. The perceptual components of these functional networks are strongly changeable by experience – extracting statistical regularities, becoming more sharply tuned, faster, and highly context-sensitive (6,7).  Precision and speed in these component perceptual processes may enable higher-level solutions within a whole functional network, including what we might call “insight” and “concepts”, and lack of precision and speed may place strong constraints on what can be learned from educational experiences.

5. Overlapping integrations.  The component systems in any functional network (say reading) may also play a role in other functional networks (say music, 8). Theory and empirical evidence suggest that the overlapping couplings of component systems in multiple functional networks may play a critical role in building higher-order abstractions (9).

What we do not know:
How do we structure learning experiences to build robust, efficient, and flexible functional networks in specific domains?

(1)  Bullmore, E. T., & Sporns, O. (2009). Complex brain networks: Graph-theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10, 186–198.

(2)  Smith, L. B. & Sheya, A. (2010) Is Cognition Enough to Explain Cognitive Development? Topics in Cognitive Science, 1-11.

(3)  James K.H. (2010) Sensori-motor experience leads to changes in visual processing in the developing brain. Developmental Science. 13:279–288.

(4)  Dehaene S., Cohen, L., Sigman, M.,& Vinkier, F.  (2005) The neural code for written words: a proposal. Trends in Cognitive Science, 9:335–341.

(5)  Price, C.J., & Devlin, J.T. (2011) The Interactive Account of ventral occipito-temporal contributions to reading. Trends in Cognitive Science, 15:246–253.

(6)  Jeter PE, et al. (2009) Task precision at transfer determines specificity of perceptual learning. Journal of Vision. 9, 1–13.

(7)  Samuelson, L., & Smith, L. B. (2000). Grounding development in cognitive process. Child Development, 71, 98–106.

(8)  Anvari, S.H., Trainor, L.J., Woodside, J., & Levy, B.A. (2002). Relations among musical skills, phonological processing and early reading ability in pre- school children. Journal of Experimental Child Psychology, 83, 111–130.

(9)  Smith, L.B. (2010) More than concepts: How multiple integrations make human intelligence. In D. Mareschal, P. Quinn, & S. Lea (eds), The making of human intelligence, New York: Oxford University Press.




Translational research in the educational application of the science of learning

David Andrews

Recent investments in the Science of Learning have resulted in heightened awareness of the underpinnings of learning that can shape future educational practices in both formal and informal settings.  Furthermore, these investments have led to a greater understanding of both etiology and individual differences in learning.  There remains, however, a dearth of rigorous research that translates our increased knowledge about learning into scientifically defensible educational practice. Many popular initiatives framed as “educational neuroscience” or “neuro-education” lack a mature body of translational research to support the practices they recommend.

Achievement #1
Within the last decade, randomized clinical trials (RCTs) have become the gold standard of evidence of “what works” in educational settings. The rigor of the scientific methods and designs expected in these RCTs has increased dramatically. The phrase “scientifically-based instruction” occurs 115 times in the No Child Left Behind (NCLB) Act of 2001 and led to sustained investments in RCTs and subsequent meta-analyses of instructional approaches and educational interventions.  Results from a decade of efforts are transforming the way educators make decisions concerning both general instruction and specific interventions.  Given contemporary policy recommendations and the availability of i3 funding, the body of knowledge about “what works” related to learning is growing quickly.  The following links are to sites cataloging emerging research in this area designed to inform practice.
What Works Clearinghouse –
Best Evidence Encyclopedia –
Campbell Collaboration –

Challenge #1
High quality translational research in education is in its infancy.  The number of educational approaches that meet the highest standards of evidence at every stage of the research continuum is growing, but limited.  While decision makers in education are much more aware of the need to implement “evidence-based” approaches, there remains an incomplete understanding of the issues that influence impact (i.e., fidelity and dosage issues).

Achievement #2
Advances in understanding and diagnosing individual differences combined with new data systems and instructional technology are leading to substantial advances in “personalized education.” While educators have extolled the benefits of differentiated instruction for many decades, only recently have we been able to diagnose differences in learners with enough specificity and regularity to create and implement individual learning plans for all students. Using these plans, expected performance can be compared with actual performance and educational interventions developed to maximize impact for each student.  Personalized approaches to education are being developed and rigorously evaluated in public/private ventures across the country. The following links address some of the major issues associated with this movement.
Next Generation Learning- http//
National Education Technology Plan –

Challenge #2
Translational research on the impact of personalized education is just beginning. Early findings on computer assisted instruction (CAI) at scale are mixed.  Blended models of face-to-face instruction with CAI in school settings are emerging rapidly. Most efforts are not using sophisticated diagnostics to match individual learner needs with the most appropriate evidence-based approaches. Additional research on individual differences and specific educational methods of addressing these differences can drive future investments.




Building for the future: Ideas, people and tools

Soo-Siang Lim

The goals of the SLC Program are to advance fundamental knowledge about learning through integrated, interdisciplinary research; to connect the research to specific scientific, technological, educational, and workforce challenges; to enable research communities to capitalize on new opportunities and discoveries; and to respond to new challenges.

SLCs have produced significant infrastructure and resources vital to achieving the Program’s and the centers’ goals.  These include tools, structures and other resources generated by individual centers, as well as more systemic outcomes facilitated by strategically selected structures and processes initiated by Program staff.  The latter focused on the development and future sustainability of an integrative, interdisciplinary science of learning.  Examples include:

1)    A rich variety of tools and other resources

  • Tools and resources  to enable translational efforts – to educational practice, to industry
  1. PSLC – CTAT Tool:
  2. SILC – Spatial Tests and Instruments:; CogSketch:
  3. TDLC – The Gamelan Project:
  4. Telluride Neuromorphic Cognition Engineering Workshop –
  5. CELEST – The Unlock Project:
  • Education tools and resources
  1. CELEST – Web based outreach about neuromorphic research:
  2. LIFE – Selection of Published Reports:
  3. VL2 – Research Briefs:
  • Tools and resources for outreach
  1. SILC – Ultimate Block Party: & L__rn:
  2. VL2 – Center Newsletters:

2)    A robust collaborative network for research and training – the SLC Network of Centers
has 971 participants (455 students, 118 postdoctoral fellows, 313 faculty, 53 academic institutions, and 220 non-academic partners). This critical mass of ideas, people, and tools has catalyzed the emergence of a new field and community of the Science of Learning and raised awareness of learning as an important topic of investigation involving multiple disciplines, including several not traditionally included in the “learning sciences”.

3)    Outstanding interdisciplinary training for students and postdoctoral fellows and the Trainee Network (i-SLC).
New programs and new courses were developed by centers to make education and training opportunities commensurate with the interdisciplinary research efforts.  Trainees’ understanding of research connections to societal impacts is enhanced by the centers’ partnerships with stakeholders.  A trainee network and annual conference (i-SLC) provide additional research opportunities, as well as opportunities to develop leadership and other career skills.

4)    An emerging Translational Network for collaborations and connections among researchers, educators, and policy makers
The stable funding for the centers has facilitated long-term, two-way partnerships between researchers and educators.  This has enabled better alignment of research agendas with education priorities, engaged teachers in research efforts, and provided additional training and career development for teachers.  Given the enormous challenges in translating research findings to educational practice, further development of this network to integrate efforts across the centers would promote the sharing of knowledge and experience, make more effective use of resources, and benefit from economies of scale.

Infrastructural challenges for the SLC Program include:  1) lack of adequate cyber-infrastructure to support data curation, sharing, and analysis across centers; and 2) continued sustainability of existing infrastructure (especially the 3 networks) when the first cohort of centers “graduate.”




Creating a science of spatial learning

Nora S. Newcombe

A report from the National Academies, Learning to Think Spatially (2006), was a landmark achievement, making a persuasive case for the importance of spatial thinking and its inclusion in K-12 education. The central premises in this case are that spatial ability is malleable and that it is related to STEM achievement.

While there was already support for these ideas in 2006, the case has since been strengthened. A meta-analysis by Uttal, Meadow, Tipton, Hand, Alden, Warren, and Newcombe (2012) showed malleability, as well as generalizability and durability, of spatial education and training.

The premise of a relation to STEM achievement has been supported for high school students by analyses of large representative samples studied longitudinally (Wai, Lubinski & Benbow, 2009) and for young children by two longitudinal studies (e.g., Gunderson, Ramirez, Beilock & Levine, 2012). Gunderson et al. also discovered a possible mechanism for the predictive relationship they observed, namely that understanding the number line (a spatialization of number) mediated the link between early spatial skill and later mathematics achievement (doi: 10.1037/a0027433).
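The logic of a mediation claim like this one can be illustrated with a toy regression-based check on synthetic data. This is an invented sketch, not the analysis used by Gunderson et al.; the variable names, sample size, and coefficients below are all assumptions made for the example.

```python
import numpy as np

# Synthetic data under an assumed mediation structure:
# spatial skill -> number-line understanding -> math achievement.
rng = np.random.default_rng(0)
n = 500
spatial = rng.normal(size=n)                                   # early spatial skill
number_line = 0.7 * spatial + rng.normal(scale=0.5, size=n)    # mediator
math_ach = 0.6 * number_line + 0.1 * spatial + rng.normal(scale=0.5, size=n)

# Total effect of spatial skill on math (simple regression slope).
c = np.polyfit(spatial, math_ach, 1)[0]

# Direct effect once the mediator is controlled for (multiple regression).
X = np.column_stack([spatial, number_line, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, math_ach, rcond=None)
c_prime = coef[0]
# Mediation signature: the direct effect c_prime is much smaller than
# the total effect c, because the mediator carries most of the link.
```

Under these invented coefficients, the total effect `c` shrinks substantially once the mediator enters the regression, which is the signature of (partial) mediation that the study's number-line finding exemplifies.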

One challenge for the future is to evaluate the number-line mechanism and to determine if there are other such mechanisms, e.g., spatial thinking in the utilization of maps, diagrams and graphs.

How can spatial learning be improved? While direct training of spatial abilities is one possibility, we can also use spatial analogy, gesture, sketching, spatial language, maps, and diagrams to improve learning across development. Recent research has shown that these spatial processes can improve domain learning from preschool mathematics to college-level physics, chemistry, and geoscience. For example, spatial analogical comparison allows preschoolers to abstract new relational patterns (Christie & Gentner, 2010).  Additionally, spatial analogies improve children’s understanding of a basic engineering principle in a museum setting (doi:10.1080/15248371003700015).

By middle and high school, teaching the use of graphs and diagrams becomes important, sketching can become a more formal tool, and GIS technology can be utilized. In college students, spatial experience can improve understanding of concepts such as angular momentum in physics, and gesturing can improve understanding of stereoisomers in chemistry. This research has led to a set of spatial learning tools that are readily translatable to education in both formal and informal settings; see Newcombe (2010) for an overview for teachers. An advantage of these spatial learning tools is that they can be incorporated into currently existing curricula.

 One challenge for the future is to delineate what techniques work best in what contexts, and how they best work together. Even more generally, we need to ask about the relation between spatial and non-spatial mechanisms — to explore whether learning mechanisms work in the same way for all concepts or whether spatial skills/mechanisms are better suited to some concepts, non-spatial skills/mechanisms to other concepts. 



Social brains, human minds, and the science of learning

Patricia K. Kuhl

This presentation explores how social factors enhance human learning across age (from infancy to adulthood) and across domains (particularly in STEM and language learning). Understanding how social factors influence learning will contribute to theories of learning, encourage ‘social’ technologies that support learning, and lead to revisions in educational practices and designs.

Achievement #1. NeuroLearning in Infants and Young Children: Understanding the brain mechanisms underlying learning in young infants has been hampered by the lack of neuroscience measures for children under the age of 5 years. We have produced a ‘toolbox’ for neuroimaging young infants to measure structural and functional changes based on learning. To date we have used the neuroimaging toolbox to understand early language learning in monolingual and bilingual children (bilingualism is a critical educational challenge in the USA). These methods are applicable to learning in other domains. The toolbox includes: (a) age-specific average head templates, (b) MRI-based voxel-based morphometry (VBM), (c) diffusion tensor imaging (DTI), (d) magnetoencephalography (MEG), and (e) event-related potentials (ERP). Analysis of the tsunami of data produced by these new neuroscience tools is challenging and requires multidisciplinary teams and theory-driven experiments.

Achievement #2. Social Cognition and Learning: Humans have exquisite sensitivity to others’ goals, intentions, and perceptions, and this information enhances learning from others. Joint attention is an example: we seek joint attention with others by, for example, following the gaze of another person as they scan the ambient field. Joint attention improves learning across ages, domains, and learning contexts. Studies in the domains of language, STEM learning, and social/cognitive learning indicate that humans benefit from following the gaze and multi-modal activity patterns of another human. More specifically, research shows that joint attention is linked to language learning in infancy, to learning from media in preschool and at home, and to collaborative learning in elementary school and college. Humans of all ages appear predisposed to seek joint attention, and social neuroscience is identifying how social cognition improves perception and learning. More deeply understanding the mechanisms of social cognition may improve learning across the lifespan.

Challenge #1: Linking genes, brains, and behavior to understand development and learning is a grand challenge that is now tractable. Scientific advances in the fields of neuroimaging and genetics have raced forward in the last decade, but they are rarely integrated. It is now possible, for example, to more thoroughly understand the acquisition of language by integrating protocols that include brain, behavioral, and genetic measurements. Understanding the genetic and environmental factors that open and close sensitive periods for language could lead to learning programs for second language learning over the lifespan, as well as identification of biomarkers for developmental disabilities involving language such as autism, specific language impairment, and dyslexia.

Challenge #2: Technologies for learning are being created worldwide for use across ages and domains in both informal and formal settings. One challenge in the creation of successful learning technologies will be to utilize the principles derived from basic research on social interaction to enhance devices such as ‘robots,’ screen apps, and videos by including features that rely on humans’ interest in learning from devices perceived as ‘social.’ Creating such devices to work in real-world environments, and across both informal and formal settings, will be essential.




The Knowledge-Learning-Instruction (KLI) Framework: Bridging the Science-Practice Chasm to Enhance Robust Student Learning

By Ken Koedinger and his LearnLab colleagues

Although the last 25 years of progress in the learning sciences has produced many discoveries about human thinking, learning and problem solving, there remains substantial disagreement about the most effective ways to apply those discoveries to educational practice.  One fundamental problem is that any such application requires a coherent and consistent overarching framework that can adequately represent what is known about human learning, while still being able to guide and constrain instructional design, implementation, and assessment.  Our efforts to formulate such a framework have resulted in the Knowledge-Learning-Instruction (KLI) framework, which was published this summer in Cognitive Science.

KLI promotes the emergence of instructional principles of high potential for generality, while explicitly identifying constraints of and opportunities for detailed analysis of the knowledge students may acquire in courses. Drawing on research across domains of science, math, and language learning, KLI suggests that optimal Instructional choices depend on which of many possible Learning processes are needed to achieve which of many possible Knowledge acquisition goals. The exploration of this three-way “KLI dependency” requires a specification of different kinds of knowledge, learning, and instruction. For instance, the framework specifies three broad categories of learning processes: 1) memory and fluency building, 2) induction and refinement, and 3) understanding and sense making. Cognitive psychology and cognitive neuroscience have substantially advanced our understanding of memory and fluency building processes, through experimental results, modeling, and in pursuing instruction implications (e.g., spaced practice and the testing effect). They have made less progress on the last two. In contrast, educational researchers have made most progress on understanding and sense making, but have paid little attention to the first two. Interestingly, machine learning research has focused largely on induction and refinement (e.g., statistical classification algorithms). A challenge for learning sciences is bringing these disparate views together.

To pursue an example, educational psychologists have produced educational recommendations, like the worked example effect, that are at odds with recommendations of cognitive psychologists, like the testing effect. While these two opposing, albeit research based, recommendations suggest incompatible instructional advice, the KLI dependency suggests a way out of the dilemma: Cognitive psychologists have focused on kinds of knowledge (i.e., facts) for which memory is the primary learning process whereas educational psychologists have focused on kinds of knowledge (i.e., general procedures) for which induction is primary.  Testing best enhances memory of facts, but examples best enhance induction of general procedures.

If optimal instructional decisions are highly dependent on domain-specific knowledge characteristics, a practical science of learning must make parallel progress on both across-domain learning theories and within-domain knowledge theories. This challenge may seem daunting, but a tremendous research opportunity is emerging as educational technologies are increasingly supplying Big Data.  Teams integrating machine learning and cognitive science are producing data-driven learner model developments, from fine grain models of learning transfer through models of metacognition and motivation to models of classroom social interaction and learning by dialogue.




Learning in complex systems

Barbara Shinn-Cunningham and Heather Ames

Basic neuroscience research on how brains coordinate learning reveals fundamental principles defining how brains adapt based on experience, including how neural capacity limits affect learning, how the dynamics of neural activity affect information encoded in the brain, and how interactions within and among brain regions enable the acquisition of knowledge and skills. These principles inform learning in machines and technologies that adapt to deal with unpredictable situations or inputs. Such technologies not only provide additional insights into practical considerations that affect learning in real-world systems, but also address real application needs. Studies of adaptive, complex biological and artificial systems provide key insights into the science of learning.

Achievement #1: Learning through interactions between multiple brain areas
Understanding the mechanisms of learning in the brain depends upon understanding how information is encoded in distinct brain regions and how a change of activity in one of those areas can affect processing in other areas. Great strides are being made in this arena, as exemplified by advances in understanding how spatial perception and navigational information are stored in neurons in the hippocampus, and how neural oscillations mediated by the medial septum directly affect information coding as well as learning.1,2

Achievement #2: Technology advances to support real-world systems
Advances in computing have finally caught up to the capabilities of neural models, allowing us to build adaptive machines that help us understand the fundamental mechanisms behind learning. Such achievements include the use of graphics processing units (GPUs) as fast parallel processors as well as advances in computing power, memory, scalability, and data transfer.3,4 Being able to realize the fundamental units of learning (e.g., the synapse) in hardware5,6 and software (e.g., HP’s Cog Ex Machina platform7) is fundamental to this achievement. Advances in the design of Brain Computer Interfaces (e.g., ref. 8) and Neuromorphic Technology (e.g., ref. 9) build on these technological breakthroughs.

Challenge #1: Understanding multisensory learning in real-world settings
Our understanding of systems-level learning in the brain is rapidly advancing (see Achievement #1); however, typical neural studies use carefully controlled tasks that limit and control information as well as potential learning strategies. In rich, real-world settings, learning can take place simultaneously from multiple sources and a combination of strategies. Extrapolating results from simple, controlled learning paradigms to more realistic, open-ended situations is an enormous challenge.

Challenge #2: Human-machine interactions
How will humans learn to interact with new learning machines and vice versa? A number of recent technologies allow users and machines to co-learn, including Brain-Machine Interfaces and Human-Robotic Interactions. The results are more robust and powerful than systems that place the burden of learning either solely on human or solely on machine. As these and related technologies become more usable and more available to the general population, building effective, general approaches to enable co-learning will be critical.

1 Brandon MP, AR Bogaard, CP Libby, MA Connerney, K Gupta, & ME Hasselmo (2011). “Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning,” Science, 332, 595-599.

2 Gorchetchnikov A and S Grossberg (2007).  Neural Networks, 20, 182-193.

3 Int. Solid States Circuits Conf Trends Report (2010).

4 Strukov DB, GS Snider, DR Stewart & RS Williams (2008). “The missing memristor found,” Nature, 453, 80-83.

5 Folowosele F, T Hamilton, & R Etienne-Cummings (2011). “Silicon Modeling of the Mihalas-Niebur Neuron,” IEEE Trans Neural Net, 22, 1915-1927.

6 Folowosele F, RJ Vogelstein, & R Etienne-Cummings (2011). “Towards a Cortical Prosthesis: Implementing a spike-based HMAX model of visual object recognition in silicon,” IEEE Emerg Select Topics Circuits Systs, 4, 516-525.

7 Snider G, R Amerson, D Carter, H Abdalla, MS Qureshi, J Leveille, M Versace, H Ames, S Patrick, B Chandler, A Gorchetchnikov, & E Mingolla (2011). “From synapses to circuitry: Using memristive memory to explore the electronic brain,” IEEE Computer, Feb 2011, 37-44.

8 Brumberg JS, EJ Wright, DS Andreasen, FH Guenther, PR Kennedy (2011). “Classification of intended phoneme production from chronic intracortical microelectrode recordings in speech-motor cortex,” Frontiers Neuroprosthetics, 5, 1-12.

9 Ames H, E Mingolla, A Sohail, B Chandler, A Gorchetchnikov, J Leveille, G Livitz, M Versace (2011). “The animat,” IEEE Pulse, Jan/Feb 2011, 47-50.




Toward optimal learning dynamics

Garrison W. Cottrell, Andrea Chiba, and the Temporal Dynamics of Learning Center

As outlined in a Science article coauthored by members of the TDLC and LIFE centers, transformative advances in the science of learning require collaboration from multiple disciplines, including psychology, neuroscience, machine learning, and education. TDLC has implemented this approach through the formation of research networks, small interdisciplinary teams focused on a common research agenda. By combining approaches from multiple fields, more progress is possible than can be achieved by single-discipline studies. In particular, by combining computational models and experiments, the underlying mechanisms of learning can be elucidated, because the models can be analyzed in ways that brains cannot. This approach has a long history, which we build upon (e.g., Hebb 1949; Machado 1997; Shadmehr & Mussa-Ivaldi 1997; Staddon et al. 2002).

This form of team science is exemplified by the Interacting Memory Systems Network’s discovery of a behavioral function of cell birth (neurogenesis) in the dentate gyrus of the hippocampus of mature rats[1]. Brad Aimone, an IMS graduate student in Rusty Gage’s lab (Salk Inst.), wanted to use a model to understand the role of these neurons. He asked IMS member Jeff Elman whose model of the hippocampus would be best suited for this investigation, and Jeff sent him to IMS member Janet Wiles (U. Queensland). Together, they added neurogenesis to Janet’s model, which then yielded new predictions of the functional role of these newborn neurons, including that newborn neurons would bind together temporally adjacent associations with context. New behavioral tasks to verify this prediction were developed by IMS leader Andrea Chiba, project scientist Laleh Quinn, and graduate student Lara Rangel. The predictions were confirmed. These cells are a new kind of place cell that fires only in a specific place and surrounding context, indicating the coding of space-time in the hippocampus. The Eichenbaum laboratory of CELEST recently discovered “time cells” in the CA1 region of the hippocampus. The existence of temporal coding and contextual encoding at the cellular level in the hippocampus provides a complement to our earlier finding that internally generated sequences of neural activity in the hippocampus are replayed in the absence of external cues (Pastalkova et al. 2008). Thus, the elements and the ensemble of the hippocampus aggregate to create a sequential record of our personal recollections.

A second, quite different application of this approach is to the study of spacing effects. The spacing of study and testing is well known to influence the duration and effectiveness of learning. We extended the understanding of spacing effects to educationally relevant time scales, and found that spacing effects are time-scale invariant, providing coarse but useful guidance for educators (Cepeda et al. 2009). Based on these data, we developed a new computational theory (the Multiscale Context Model, MCM) that successfully predicts the optimal spacing for arbitrary material. We have incorporated MCM into a web-based tool that optimizes study schedules (Mozer et al. 2009); we are evaluating it with 200 Colorado middle school Spanish students.
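The spacing effect itself can be illustrated with a toy forgetting-and-consolidation model. This sketch is invented for illustration and is not MCM: it simply assumes that each review boosts memory "stability" more when more forgetting has occurred, so reviews spread out in time outperform back-to-back reviews at a long retention interval.

```python
import math

# Toy spacing-effect model (illustrative assumptions, not MCM):
# recall decays exponentially with time since the last review, and each
# review multiplies stability by a gain that grows with the amount of
# forgetting at the moment of review.

def recall(elapsed_days, stability):
    """Probability-like recall strength after `elapsed_days` without review."""
    return math.exp(-elapsed_days / stability)

def study(schedule, test_day, stability=1.0):
    """Simulate reviews on the given days, then test on `test_day`."""
    last = 0.0
    for day in schedule:
        r = recall(day - last, stability)
        stability *= 1.0 + 2.0 * (1.0 - r)  # bigger boost after more forgetting
        last = day
    return recall(test_day - last, stability)

massed = study([0, 0.1, 0.2], test_day=30)  # three back-to-back reviews
spaced = study([0, 5, 10], test_day=30)     # the same reviews, spread out
```

Under these invented constants, three reviews spread over ten days leave far higher recall at day 30 than three reviews crammed into a few hours, mirroring the qualitative advantage of spacing that the Cepeda et al. data show at educational time scales.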

There are many more applications of this approach. We have applied various machine learning and modeling techniques to automatically detect perceived difficulty of a lecture from facial expressions (Whitehill et al. 2008), to learn the optimal action to take next in a tutoring context based on examples of human tutoring interactions (Ruvolo et al. 2008), and to analyze children’s facial expressions while problem solving in order to predict periods of uncertainty (Littlewort et al. 2011). Likewise, neural recording and behavioral data can inform modeling – ruling out different models of decision-making (Purcell et al. 2010).

Two remaining challenges are highlighted here. First, we have used many techniques to build models at various levels of the spatial and temporal hierarchy (from neurons and millisecond scales to the person and year-long scale for spacing effects). Many of these approaches – those that share optimality or Bayesian techniques – are compatible with one another, yet the mappings between levels of the temporal and spatial hierarchy remain to be bridged, although progress has been made (e.g., Lerner et al. 2011; Poeppel 2012). Consideration of this problem leads to the insight that interactions between levels depend on the physics of how, for example, molecules (low level) interact at synapses (one level up), and has provided fundamental links between thermodynamics and prediction, showing that in order to be energy-efficient, an organism must be predictive (Still et al. 2012). However, this is still far from fulfilling the promise of what we call the “levels hypothesis” (Bell 2007), which is a search for fundamental principles linking the physical levels between, for example, synapses, cells, and organisms. A second challenge is to bridge a collection of findings indicating that an active EEG brain state is necessary for accurate encoding of sensory temporal patterns (Marguet & Harris 2011; Goard & Dan 2009; Minces, Harris, & Chiba, in prep.) with data showing that EEG brain state in babies predicts linguistic and cognitive development (Benasich et al. 2008; Gou et al. 2011). This will require reverse-engineering human EEG by using animal models, in order to understand the cortical activity and neuromodulatory inputs underlying fast oscillatory activity in human EEG.


[1] And, as the linked paper points out, neurogenesis is enhanced by running – another strong piece of evidence for the crucial role of physical education in K-12 education.



Innovations in our understanding of the Science of Learning from the Visual Language and Visual Learning Center’s explorations of the phonological brain and the bilingual mind

Laura Ann Petitto

A driving question in contemporary neuroscience is how the human brain and human learning are impacted by different sensory experience in early life. Much scientific focus has examined the role of sound and auditory processes in building abstract linguistic, cognitive, and social representations, leaving one of our species’ most critical senses, vision, underspecified regarding its contribution to human learning. Here, we focus on how early experience with a visual language changes the brain’s visual attention and higher cognitive systems, language learning in monolingual and bilingual contexts, and reading and literacy—indeed changes that are distinct and separable from sensory differences (deaf or hearing). How vision impacts learning in these domains constitutes a vital “missing piece” of knowledge in the promotion of productive, successful lives for all humans. A strong revolution in purpose derives from the strength and depth of the involvement of and collaboration with deaf individuals in this research endeavor—individuals who rely significantly on vision, acquire naturally visual signed languages, and learn how to read and write fluently without prior mastery of the spoken form of written languages. The formal properties of visual languages, the enabling learning contexts, and the multiple pathways used to derive meaning from the printed word are leading to a better understanding of how visual language and visual learning are essential for enhancing educational, social, and vocational outcomes for all humans, deaf and hearing individuals alike, consequently transforming the science of learning. Moreover, the identification of specific processing advantages in the young “visual learner” has already provided a significant conceptual challenge to prevailing societal views by offering an alternative to prior “deficit models.” It further provides new approaches to helping all young learners capitalize on visual processes.

Achievement #1: “Visual Phonology.” We have discovered that regardless of modality, the human brain attends to, creates, and uses a “phonological” level of language organization that can be sound-based or visually based. That the human brain creates a visually based “phonological” level of language processing in the absence of sound is remarkable in itself and reveals the centrality of this level of language organization in all human language processing.

Achievement #2: Reading, and a new kind of “Bilingualism.” Both neural and behavioral studies lay bare the brain’s spontaneous development of alternatives to the sound-based phonological gateway typical of, for example, young hearing English readers, who use sound-based phonological representations to access meaning from printed words. There is now growing evidence that deaf visual learners also have – and use – a “visual phonology” when accessing meaning from English printed words, one built up from a complex combination of early visual experience with the sign-phonetic/syllabic rhythmic temporal properties at the nucleus of sign phonological organization, the phonetic rhythmic temporal combinatorial parameters of fluid fingerspelling, and sensitivity to visual orthographic patterning. Early visual language experience also shapes the visual learner’s use of a larger “text processing” window when reading printed text (studied with eye-tracking and neuroimaging technologies). As in the acquisition of reading in spoken languages (e.g., English), visual phonology in the deaf visual learner appears to be especially crucial in early reading acquisition. Somewhat later readers (fluent ASL-English bilingual deaf children) also show the shift to more sign-semantic processing observed in young hearing readers, as well as the classic dual-language activation observed in all bilinguals, here involving signs and words. (Link to sample publication.) This work also boldly extends science and society’s concept of “Bilingualism” to include children who are indeed bilingual, but who have exclusive access to their other language through the printed word (bimodal sign-print bilinguals).

Challenge #1: Effecting conceptual change in Science. The present work has important implications for the nature of the human brain’s structural and functional changes as a result of different sensory experience in early life. It provides important “missing piece” knowledge about the contributions of vision to higher cognition, vital to science’s pursuit of a more complete theory of the nature and origins of knowledge. It further provides a stunning window into the essential nature of human language – identifying properties of human language structure and processing that are so fundamental to the species that they defy modality and, instead, carve themselves into vision/the hands if exposed to signing, or audition/the tongue if exposed to speech. Our biggest challenge is to effect conceptual change in science and to encourage science to cease marginalizing such findings.

Challenge #2: Effecting conceptual change in Society. The present work has important implications for the nature of learning spanning the home and educational settings. We have met the challenge to provide education with the first comprehensive ASL Assessment Toolkit for early language, reading, and higher cognitive knowledge in the young visual learner across early human development. We have met the challenge to provide the public with a rich and comprehensive Parent Information Packet containing scholarly but highly accessible “Research Briefs,” with state-of-the-art information pertaining to home rearing and the education of young visual learners. We have met the challenge to translate research findings to practice by developing engaging Bilingual ASL-English Reading Apps for all young readers. We have met the challenge of building the first web-based Volunteer Participant Sharing and Data Sharing databanks. Our biggest challenge is to effect conceptual change in society and to encourage society (schools, parents, medical practitioners, policy makers) to cease marginalizing such findings and to embrace their significance for all young learners.


Appendix 6:  Achievements in the Science of Learning Centers

The multiyear effort of each SLC has produced extensive self-reporting of accomplishments in the science, the tools, and the human-resource infrastructure of the science of learning.

This appendix furnishes a sampler of accomplishments identified by each SLC.  It is difficult to convey sufficient context for each.  In an effort to maintain readability while permitting some further review, we have excerpted the leading caption of selected accomplishments.  A fuller compendium of the summaries from which these entries are excerpted appears on the site.  To view online the text related to a given accomplishment and its associated references, click on the adjacent icon (»).

This appendix is a sampler only, not a comprehensive self-report of SLC accomplishments.  Accomplishments from the Visual Language and Visual Learning (VL2) SLC at Gallaudet University were not received by the time that this report was finalized, but they do appear at the website.

Center of Excellence for Learning in Education, Science, and Technology (CELEST)

  • Measuring the quantity and quality of memory representations:  CELEST researchers have developed a computational toolbox for measuring the quantity and quality of memory representations.  (»)
  • Why emotions capture our attention: A pathway from the brain’s center for emotion talks directly to a region that controls attention. (»)
  • Using brain signals in response to sound to read the mind: CELEST researchers now can tell what sound source a listener is paying attention to simply by recording electrical signals non-invasively from the scalp. (»)
  • Finding Cancer in Medical Images with Advanced 3D Viewing Software:  CELEST researchers found that displaying a thick section of a 3D lung x-ray can improve observers’ ability to detect cancerous formations. (»)
  • CELEST researchers have developed ultra-small electrodes for Brain-Machine interfaces. Neural recordings from these electrodes are highly stable, opening new avenues in the study of learning in animals. (»)
  • Providing Computers Controlled by Brain Signals to Paralyzed Individuals: Researchers in CELEST have started the non-profit Unlock Project, aiming to deliver “brain-computer interface” technology to paralyzed individuals so they can control a laptop computer in their home or from a wheelchair. (»)
  • Environmental Enrichment and Learning-Related Neuroplasticity: CELEST researchers discovered that giving cocaine-addicted rats several brief exposures to an enriched environment (“rat camp”) changed brain function and improved a type of learning that helps guard against relapse to cocaine use. (»)
  • The Grass is Greener: Visual Foraging and Models of Object Recognition:  Researchers in CELEST are trying to discover how objects are processed differently before and after they have been committed to memory. Using high-quality eye-scanning data to examine patterns of eye movement during visual search (an overt attention task), they are investigating these differences. (»)
  • Why don’t you recognize your mailman in the supermarket? Researchers from CELEST have found that context has a dramatic impact on remembering. (»)
  • Visual Short-Term Memory for Timing Recruits Auditory Brain Network: Researchers from CELEST have found that visual short-term memory recruits structures of the auditory processing network when people are asked to remember the precise timing of the visual elements. (»)
  • Neuromorphic algorithms and hardware for adaptive collision avoidance in land robots and unmanned aerial vehicles:  CELEST researchers have developed neuromorphic (neurally inspired) algorithms and low-power hardware for visually based collision-avoidance learning in mobile robots. (»)

Temporal Dynamics of Learning Center (TDLC)

  • Dynamics of brain states relevant to learning: We discovered that dynamic internal brain state (measured by local field potential or EEG) powerfully regulates neural representations of sensory stimuli and learning in animal models. (»)
  • Universal preprocessing: Many models of unsupervised learning have been developed (Principal Components Analysis, Sparse Coding, Independent Components Analysis, Restricted Boltzmann Machines), and some have been used to explain receptive fields of early visual processing areas. (»)
  • Temporal patterns that guide behavior: It has long been hypothesized that the brain internally generates sequences that guide behavior. We have found cell assemblies in the rat hippocampus that generate temporal sequences in the absence of outside stimulation and that predict later behavioral choices. (»)
  • Synchronization and Attention: Balinese Gamelan is a form of music that involves fast, highly synchronized drumming amongst a large group of musicians. TDLC developed piezo-electric Gamelan instruments that could show correlations between the ability to synchronize and the attentional scores of ADD students. (»)
  • Spacing effects: We extended the understanding of spacing effects to educationally relevant time scales, and found that spacing effects are time-scale invariant, which provides coarse but useful guidance for educators. (»)
  • Temporal properties of brain states that predict learning: Several of our discoveries have bridged the gap between neuronal dynamics and behavioral dynamics. (»)
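The "universal preprocessing" models named in the list above can be illustrated with a minimal Principal Components Analysis sketch. This is an illustration only, not code from any center: the synthetic "image patches," the smoothing step, and all variable names are invented for the example. The learned components play the role of candidate receptive-field-like filters over a patch.

```python
import numpy as np

# Minimal PCA on synthetic "image patches" (a toy stand-in for natural
# image data), illustrating unsupervised preprocessing of visual input.
rng = np.random.default_rng(0)

# Fake data: 1000 patches of 8x8 pixels with mild spatial correlation,
# made by mixing white noise with a shifted copy of itself.
noise = rng.standard_normal((1000, 8, 8))
patches = noise + 0.5 * np.roll(noise, 1, axis=2)  # correlate neighbors
X = patches.reshape(1000, 64)
X -= X.mean(axis=0)  # center each pixel dimension

# Principal components are the eigenvectors of the covariance matrix.
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]           # re-sort: most variance first
components = eigvecs[:, order].T            # rows = components

# Each component can be displayed as an 8x8 filter, analogous to a
# model "receptive field" over the patch.
filters = components.reshape(64, 8, 8)
print(filters.shape)  # (64, 8, 8)
```

Sparse Coding, ICA, and Restricted Boltzmann Machines replace the eigendecomposition step with different unsupervised objectives, but operate on the same kind of centered patch data.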

Pittsburgh Science of Learning Center (PSLC): LearnLab

  • LearnLab produced multiple demonstrations that fine-grain educational technology data streams (e.g. 10-second interactions recorded over weeks) can be used to build accurate and useful models that represent discoveries about human performance, learning, and engagement. (»)
  • LearnLab’s DataShop is the world’s largest open repository of educational technology data.  Our foundational efforts in Educational Data Mining include creation of DataShop, running the first large-scale educational data competition at the 2010 KDD Cup Competition, and the formation and fostering of the new field of Educational Data Mining. (»)
  • The LearnLab course infrastructure has facilitated over 200 cross-domain classroom-based in vivo experiments.  These experiments have revealed insights into the causal mechanisms of implicit and explicit learning processes and the social and motivational conditions that enable them through demonstration of successful instructional interventions and the use of fine-grain process data. (»)
  • Multiple LearnLab-supported studies across Geometry, Algebra, and Chemistry not only extended the external validity of the worked-example effect by demonstrating benefits within classroom contexts but also enhanced instructional theory. (»)
  • Multiple experiments have revealed the critical role of implicit (subconscious/non-verbal) learning and knowledge in academic domains.  A striking result is the demonstration that grammar learning is critical in mathematics, particularly in the key task of formulating problem situations as symbolic models.  Such implicit learning mechanisms (i.e. grammar and production learning) help explain human learning and transfer. (»)
  • We have achieved a corresponding breakthrough in learning theory through a computational model of learning that incorporates a novel approach to representation learning (unsupervised grammar learning from perceptual input) to enhance skill-learning mechanisms (production rule induction). (»)
  • LearnLab language research has shown some robust instructional effects, as indicated by long-term retention. (»)
  • In addition to discoveries regarding implicit learning processes of memory and induction, we have also advanced understanding of the explicit learning processes of sense making and how they can be enhanced through social dialogue and argumentation. (»)
  • LearnLab researchers have demonstrated how intelligent tutoring techniques can be extended to support student learning of metacognitive and self-regulatory strategies. (»)
  • In addition to exploring social and metacognitive factors in learning, LearnLab has also explored motivational factors. (»)

Learning in Informal and Formal Environments (LIFE) Center Achievements

  • Social factors affect language learning: the demonstration that infants’ early computational skills (“statistical learning”) are constrained by social interaction during natural language learning (“social gating”) has influenced theory and practice. (»)
  • Theory development on the science of learning: A Science paper was published that derives from collaborative work across two SLC Centers (LIFE and TDLC). The paper develops principles that form the foundation of a new science of learning. (»)
  • Development of new neuroscience tools: The LIFE Center published the world’s first paper showing the feasibility of using magnetoencephalography (MEG) to study learning in infants and young children. (»)
  • STEM learning and academic identity: LIFE made advances in understanding the origins of academic stereotypes and how they affect STEM learning and influence the underrepresentation of women and racial/ethnic minorities in STEM disciplines. (»)
  • The creation of “social” technologies that aid learning: LIFE work has shown that incorporating particular “social” features into technologies such as robots (creating machines that can imitate, achieve joint attention, and produce vocal responses) results in greater learning across age in the domains of language and social cognition. (»)
  • DIVER Web-based video analysis software for computer-supported collaborative video analysis: The DIVER software system supports nearly 4000 users in conducting collaborative video analysis for research and educational purposes both nationally and internationally. (»)
  • Key tool development through collaborations with non-academic partners: The LIFE Center is collaborating with non-academic partners to improve learning. (»)
  • Changing the landscape regarding informal learning: The LIFE Center’s focus on bridging the gap between informal and formal learning is influencing educational policy in schools and designs for informal learning environments such as museums, clubs, and homes. (»)
  • Translational outreach and networking with the public, educators, business leaders and policymakers: The LIFE Center’s work has been presented at high profile national and international venues with increasing frequency. (»)
  • Creation of a new generation of interdisciplinary SoL scientists: The LIFE Center’s interdisciplinary training has created a new generation of scholars whose sustained interdisciplinary interactions have made them attractive in competition for tenure-line positions. (»)

Spatial Intelligence and Learning Center (SILC) Achievements

SILC is founded on the premises that spatial cognition is central in STEM learning, that spatial ability is malleable through learning, and that spatial tools can be used to increase STEM achievement. Five cross-cutting themes integrate the work of SILC.

  1. Although space itself is continuous, human representations of space are also qualitative, organized into distinct categories, and these qualitative spatial representations are crucial to STEM education.
  2. Spatial skills can be defined by whether representations and processes apply to the intrinsic properties of objects or the extrinsic relations between objects (and/or external reference systems), and by whether these properties and relations are static or dynamic.
  3. External symbol systems (spatial language, diagrams and maps) are vital to spatial learning and provide a major route by which we form articulated representations of space, including the qualitative distinctions needed for STEM learning.
  4. Spatial analogy is a key learning mechanism. Spatial analogies can reveal common spatial patterns that apply across spatial situations, and that can highlight specific differences between them. Analogical processes are also instrumental in applying spatial representations to nonspatial domains.
  5. Human representations of objects and actions are often grounded in sensorimotor interactions with the world.  For example, our studies of spontaneous gesture suggest that these embodied representations remain potent even among STEM experts. Sketching is a common practice among STEM experts, harnessing their visual, motor, and spatial abilities to support their thinking and learning.

It is in this context that SILC accomplishments can be summarized in several areas:

  • Characterization of spatial skills relevant to STEM and successful efforts to chart their development. Spatial learning provides the foundation for a wide range of reasoning skills in STEM-based activities, from solving mathematical problems to designing new products to understanding graphical depictions of complex systems. (»)
  • Investigation of and development of tools for spatial learning. SILC researchers developed a set of powerful tools for spatial learning, honing them into effective, deployable educational techniques and practices for STEM learning, including advanced technology (e.g. intelligent educational software), effective curriculum units (e.g. in elementary school mathematics), engaging activities (e.g. in children’s museums), and spatial assessment instruments (e.g. testing children’s spatial skills, testing adults’ STEM-relevant spatial skills). (»)
  • The improvement of spatial skills and spatial learning in the quest to improve STEM education. SILC took up the challenge of the National Academies’ report, and showed that substantial improvement of spatial learning skills is possible and that such improvement matters to STEM success. (»)
  • The broader application of spatial learning in home and formal schooling environments.  SILC is working in a variety of science museums.  SILC studies have made significant progress in translating new understandings about spatialization competencies and spatialization skill development into tested and testable applications in various contexts such as mathematics and science curricula, science museums, and home schooling. (»)
