The Science of Learning Workshop: Prospects

1.  Introduction

One of the great scientific challenges, on a par with understanding the structure of the universe, is to understand the human mind and brain.  Some argue that understanding the material basis of higher mental functions like language, number, thought, consciousness and more constitutes the hardest problem in science, one that will require new ideas about the nature of matter.  Darwin posed the problem as one of understanding how the brain secretes these higher mental functions.  For all the advances in the neurosciences resulting from recent non-invasive brain imaging techniques, our understanding of the human mind/brain will require innovative collaboration between physicists, neuroscientists, linguists, vision scientists, computer scientists and engineers, social scientists, psychologists, people studying emotion, and others.  Recognizing this reality, it was not a surprise when the White House recently announced a major investment in brain science, the “BRAIN Initiative,” designed to describe brain activity and store the descriptions in a coordinated way.

Central to a comprehensive, theoretically motivated and biologically explicit model of the mind/brain is a deep understanding of learning.  Understanding how biological systems change when learning occurs will reveal foundational aspects of the biological systems themselves.  What and how we learn, in fact, characterizes us as a species.  Spiders develop web-making skills, birds learn to sing, fish swim, humans learn to count and speak in different languages, and nowadays machines learn.  The range of human abilities (from language to juggling) and means of learning (from sleep to argumentation) constitute a remarkable feature of our biological architecture; similarly for the different biological architecture of other species.

The fact that we can learn so flexibly, across so many domains, is profoundly empowering, for individuals and for societies.  That being said, individual differences in learning and systemic differences in access to learning (tools, teachers, information) lead to great inequality, both locally and globally.

In light of the critical importance of learning for so many stakeholders, both scientific and public, the NSF has, over the past several years, made a significant investment in this area of research.  Six multi-institution Science of Learning Centers have investigated how learning works, ranging from the cellular underpinnings to the cognitive mechanisms that teachers might build on in a classroom; the work has construed learning as a broadly biological phenomenon, which has consequences for educational practice but has a wider range than the very different and narrower “learning sciences” that emerged in educational practice and its related disciplines.  There has also been a great deal of work on biological approaches to learning at many other institutions.  To take just two examples, there have been remarkable advances in our understanding of the development of vision and the development of language over recent decades.

As funding for the six NSF-funded Science of Learning Centers comes to an end, it is time to assess how the NSF and other agencies can leverage what has been discovered to create the next, critical steps in this essential scientific and societal endeavor.  That was the focus of this second workshop.


2.  Historical notes

The so-called “cognitive revolution” of the 1950s marked a paradigm shift in research on learning.  Under the earlier period’s prevalent behaviorist approaches, learning was seen to be a matter of responses to environmental stimuli, with the internal biology of learning organisms, humans and non-humans, playing a minor role or even no role at all in some formulations.  The first workshop discussed work that was part of this paradigm shift, now construing learning as a biological phenomenon.  The report of that first workshop, along with the appendices and the compendium on the centers, contains a great deal of information on what has been discovered about learning both through the six Science of Learning Centers and in other labs and universities in the U.S.  All of this is available at

Many exciting discoveries have been made and significant infrastructure has been built.  We pick just a few examples here to illustrate the work.

  • The research of Frank Guenther at Boston University combines computational modeling with behavioral and neuroimaging experiments to characterize the neural computations underlying speech production.  Specifically, Guenther has developed a brain-machine interface that enables patients with locked-in syndrome (i.e., complete paralysis with intact cognition) to learn to communicate again.
  • It has been well established for a long time that there are “critical periods” for the development of aspects of vision, language, and other elements of cognition.  In remarkable recent work, Takao Hensch (Harvard, Boston Children’s Hospital) has achieved direct control over the timing of critical periods for vision in mice.  By manipulating inhibitory transmission in the neocortex, his group can delay the amblyopic effects of visual deprivation (through gene-targeted reduction of GABA synthesis) or accelerate them (through cortical infusion of diazepam, a positive GABA receptor modulator).  See his one-page summary in Appendix 3.
  • Sleep is critical in memory consolidation (Ellenbogen, Payne, & Stickgold 2006).  Beyond sleep, recent work by Lila Davachi (NYU) and colleagues indicates that intermittent rest periods may benefit memory retention.  Neuroimaging results show that functional connectivity between relevant brain regions during post-encoding (learning) awake rest periods correlates with long-term retention.  See her one-page summary in Appendix 3.
  • LearnLab researchers at the Pittsburgh Science of Learning Center study robust learning by conducting in vivo experiments in school math, science, and language courses.  LearnLab, now the world’s largest repository of cross-boundary research related to in situ classroom learning, also supports collaborative primary and secondary analysis of learning data through an open data repository, PSLC DataShop, which provides data import and export features as well as advanced visualization, statistical, and data-mining tools.  LearnLab’s own research studies analyze fine-grained data in service of developing larger-grained models and theories that seek to test causal mechanisms of human learning processes and related socio-affective dynamics.  LearnLab’s facilities dramatically increase the ease and speed with which learning researchers can create the rigorous, theory-based experiments that pave the way to an understanding of robust learning.
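The learning-curve analyses that fine-grained repositories like DataShop make possible can be illustrated with a minimal sketch.  A classic finding in this literature is the power law of practice: error rates fall roughly as a power function of the number of practice opportunities.  The numbers below are invented for illustration, and the fit is a generic least-squares regression, not LearnLab data or DataShop’s own implementation:

```python
import numpy as np

# Hypothetical per-opportunity error rates for one skill (invented for
# illustration; not real LearnLab/DataShop data).
opportunity = np.arange(1, 9)                      # 1st..8th practice opportunity
error_rate = np.array([0.42, 0.31, 0.26, 0.22,
                       0.20, 0.18, 0.17, 0.16])    # fraction of incorrect first attempts

# Power law of practice: error ≈ a * opportunity**(-b).  Taking logs of
# both sides turns the curve into a line, so a linear least-squares fit
# on the log-log data recovers the exponent.
slope, log_a = np.polyfit(np.log(opportunity), np.log(error_rate), 1)
a, b = np.exp(log_a), -slope

print(f"fitted curve: error ≈ {a:.2f} * opportunity**(-{b:.2f})")
```

Fitting such curves per skill and per student is one route from the fine-grained log data to the larger-grained models of learning that the bullet describes.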

These examples, discussed at the two science of learning workshops, represent a tiny sample of what has been learned in recent years.   For more, see section #4 of the report of the first workshop, which includes a more extensive inventory of important discoveries.  Many of the discoveries have been driven by new technologies, some of which are described in section #5 of the first report.  Appendix 6 of that report, along with the accompanying Compendium, gives a fuller sampling of what has been achieved at the science of learning centers, in particular.


3.  Organization of the workshop

A Steering Committee of well-known scholars in work on learning has guided the organization of both workshops and written the reports: Ralph Etienne-Cummings (Johns Hopkins), Eric Hamilton (Pepperdine), Elissa Newport (Georgetown), David Poeppel (NYU and recent member of the SBE Advisory Committee), and current members of the SBE AC, Morton Gernsbacher (Wisconsin) and Barbara Landau (Johns Hopkins).  This Steering Committee has been appointed as a sub-committee of SBE/AC and therefore is empowered to make recommendations to NSF about future support for the science of learning.

For the first morning session of the second workshop, five speakers were invited to address topics in the science of learning where progress could be expected over the coming decade and where either the current centers have not made a major focus or the focus needs to be changed significantly, given what emerged from the first workshop: Takao Hensch (Harvard, Boston Children’s Hospital) on brain plasticity and critical periods, Lila Davachi (NYU) on memory, Yuko Munakata (Colorado) on cognitive development and executive function, Elena Grigorenko (Yale) on the (epi)genetics and (epi)genomics of learning, and Roy Pea (Stanford) on connecting work on the science of learning with education.

The afternoon session was devoted to matters of funding for work on the science of learning and featured talks by Julia Lane (American Institutes for Research) on the science of science and innovation policy, Judith Verbeke (NSF) on national synthesis centers, Susan Winter (University of Maryland) on virtual organizations, Robert Kaplan (National Institutes of Health) on NIH investments in the science of learning, and Neil Albert (Spencer Foundation) on private foundations.

Presenters sent in one-page summaries in advance of the meeting, listing their main points and providing links to significant work (Appendix 3).  There was extensive discussion: 15 minutes after each presentation, 45 minutes at the end of the first day, and then structured discussion for the whole of the morning of the second day.  The list of participants is in Appendix 1 and the program (with slides for the presentations) in Appendix 2.

The report of this, the second workshop, should be read alongside the report of the first workshop, which was devoted to the recent history of work in the science of learning.  The first workshop has shaped the recommendations in this report from the second workshop.

In overarching terms, the Steering Committee recommends that NSF continue and expand its investment in the science of learning, pursuing answers to fundamental questions, exploring new techniques, and developing new lines of inquiry.  It also needs to expand and diversify its funding mechanisms: the center model that has constituted the major component of NSF support over the last decade has succeeded in supporting interesting and important work and, critically, in generating a productive community.  However, that community needs to be expanded beyond the six large centers, to become truly national in scope, and one with an expanded international engagement and profile.  This implies a more diverse portfolio of funding possibilities, with greater access by researchers at many more universities and laboratories.

In particular, we recognize that the community needs to do a better job of connecting work on the science of learning with education.  An improved understanding of the mind/brain is at the heart of the science of learning; an enriched science of learning, in turn, will enable advances in formal and informal education.  This is not only a scientific challenge: advances here will also address a social and political challenge, a profoundly important applied problem.  Global economies require more education and more skill in writing and in science, technology, engineering, and mathematics (STEM).  There are vast global differences in opportunities and in eventual learning.  Early experience has lasting effects, but there is profound variation in the quality of early experience and in learning across the U.S. and around the world.  The goal needs to be to optimize learning for all, a critical scientific and applied problem.  It is appropriate for NSF to take the lead in driving research in this field, while recognizing that the science of learning and the “learning sciences” of the education world represent distinct cultures; we return to this distinction in Section 5, where we propose establishing a National Synthesis Center for the Science of Learning and its Translation.


4.  Scientific recommendations

One challenge for the future will be to address vital elements of learning that the current Science of Learning Centers have not emphasized.  The first report notes four such elements. These are already the subject of important research; they merit and will benefit from greater attention and support in the future:

  • Social/emotional/attentional factors and individual differences:  Current models of learning tend to consider only the informational content of the input to a generic learner.  A challenge for the future is to understand how to model additional psychological factors known to affect learning, such as the social situation, emotion, and attention, as well as individual differences in learner performance.  One difficulty is in determining whether these factors simply regulate information uptake (e.g. an inattentive learner “misses” some of the data) or whether they qualitatively change the computations performed.  New findings will also have implications for machine learning: can social/emotional factors be effectively replaced by more or different kinds of data that are easily available to machines, or will we need to simulate emotions and social interaction for effective machine learning to occur?
  • Childhood acquisition of the structure and variation in language:  There has been a great deal of productive work on how children acquire systematic properties of their native language that differ across languages around the world, both in syntax and phonology.  We are now well positioned to focus more intensely on the acquisition and learnability of these aspects of language under normal childhood conditions.  In many ways, research on the acquisition of a child’s native language provides a model for work on the acquisition and learning of other elements of cognition.  This may also translate to machine learning approaches where the algorithms themselves are learned or developed, rather than imposed by the designer.
  • How groups and societies learn:  Sometimes societies, from university departments to nations, undergo structural shifts in attitudes and political perspectives.  A striking example came around 1990, when many countries shifted from authoritarian and totalitarian regimes to systems where individual citizens exercised more power.  This is a kind of learning, and much insightful work has been conducted, yet mysteries remain.
  • Cultural influences on learning:  Although learning, by its very nature, is a culturally shaped process, as many learning scientists note, ‘the learning sciences have not yet adequately addressed the ways that culture is integral to learning’ (Nasir, Rosebery, Warren, & Lee 2005). Thus, the influence and impact of culture and its variations will be as important to identify as the impact and influence of other environmental features and variations.
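The distinction raised in the first bullet, whether attentional factors merely regulate information uptake or qualitatively change the computation, can be made concrete with a toy simulation of our own devising (not a model from the workshop).  Both learners below estimate the probability of a binary event: one misses a fraction of the data but updates normally, while the other sees everything but scales its learning rate with attention.  Distinguishing the two hypotheses empirically requires examining learning dynamics, not just final accuracy:

```python
import random

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(2000)]  # true rate 0.7

def uptake_learner(data, p_attend=0.5):
    """Hypothesis 1: inattention means missing data, but the update rule
    (a plain running mean) is unchanged on what the learner does see."""
    seen = [x for x in data if random.random() < p_attend]
    return sum(seen) / len(seen)

def rate_learner(data, attention=0.5):
    """Hypothesis 2: every datum is observed, but low attention shrinks
    each incremental update (attention changes the computation itself)."""
    est = 0.5
    for x in data:
        est += 0.1 * attention * (x - est)
    return est

est_uptake = uptake_learner(data)
est_rate = rate_learner(data)
print(est_uptake, est_rate)   # both converge toward 0.7, by different routes
```

That both toy learners can reach similar end states is exactly why, as the bullet notes, it is difficult to tell from outcomes alone whether such factors gate the input or alter the computation.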

Furthermore, much remains to be learned about basic science questions:

  • What are the types, mechanisms, and domains of learning?  Is sensitivity to early experience in the perceptual systems or in language the same thing as learning to navigate through space, learning mathematics, or learning to juggle? What form would a taxonomy of learning types take? How many distinct types of learning would such a taxonomy encompass? Crucial areas to examine in this context are language, number, space, and social interaction.  Addressing these questions would then allow us to learn more about similarities and differences among principles of learning and representation in these domains.
  • What is the optimal timing for different types of learning: is it best slow or fast, early or late?  What are the differences between learning early versus later in life, and between learning within critical periods versus open-ended learning?  What are the effects of provoking learning earlier or later, or of re-opening critical periods in animals (see the work of Hensch referred to above and colleagues whose work he has drawn on, e.g. He et al. 2007 and Levi 2012)?
  • What information does the mind/brain select and store, and how does it represent that information and incorporate it into its computational operations?  When is information consolidated, what are the effects of spacing, sleep, and rest?   How do we find the information we need, and what do we do with it over time?  See Yuko Munakata’s presentation on the development of executive function.  How can these representations be paralleled in machines and “big” databases?
  • How does (epi)genetics affect learning, and how do experience and learning affect the genes?  For example, there are interesting issues about how particular genetic factors influence quite diverse phenotypical properties.  Tom Bever and colleagues (Hancock & Bever 2013) have raised intriguing questions about how handedness correlates with distinct parsing strategies in speech comprehension; twin studies have been used to investigate the role of genetics in cognitive development (e.g. Rice 2012).  Recent advances in epigenetics raise many issues for understanding the complex relation between genetic properties, development, and experience; see Elena Grigorenko’s presentation and Naumova et al. 2013.
  • How can machines help us better understand how humans learn, and how can they better assist learning?  How do machines learn and remember, and how can we improve the capacity of our devices to perform the operations we need?  Are there similarities between the computational approaches of “big data” analysis and biological learning? To what extent might we be able to build a machine model of the brain, returning to an early emphasis of work in artificial intelligence? Learning must play a central role in any attempt to reverse engineer the human brain, perception and cognition.
  • How can we relate our understanding of the cellular and molecular mechanisms of learning to the circuit and systems-level computational operations of mind and brain?  Can we develop a hardware description language for the brain and cortical learning structures?  How can we build better bridges across hierarchies?


5.  Organizational recommendations

NSF made a major investment in nurturing the science of learning, construed broadly as a branch of biology that investigates the cognitive and neural bases of learning.  The complexity of the goals required expertise from various disciplines and integrative research programs that were beyond the capabilities of individual researchers or small groups.  This thinking led to the establishment of the Science of Learning Centers program, funding six centers around the U.S.  The longer durations of funding and the stable environments of centers were intended to provide incentives for committed, long-term interactions among researchers, encouraging them to re-conceptualize their thinking beyond the paradigms of traditional disciplines.

The synergistic focus of the SLC cooperative agreements tasked each center with the challenges of collaboration.  SLC researchers were required to cross boundaries and to seek insights that extended beyond the perspective of individual research communities or teams.  The boundaries that SLC researchers have crossed are varied and include, for example, temporal scales (e.g. microsecond activation patterns to more extended task performance), spatial scales (e.g. cellular to structural to regional to functional), and scientific domains (e.g. from neuroscience to cognitive psychology, to data-mining methods, to behavioral therapies, and education research more broadly).  This kind of boundary-crossing interdisciplinarity will need support beyond the SLCs in the future.

Furthermore, the SLCs have promoted a new generation of students trained in interdisciplinary work, who are gaining traction and success in their research and publications.  The publishing success of these early career researchers suggests an important capacity to formulate and situate strategically valuable questions and to size up their within-boundary and across-boundary dimensions.  The (former) students present at the workshops spoke enthusiastically about what they have learned from this interdisciplinary focus and believe that they have been well equipped to pursue interdisciplinary studies for the rest of their careers.

These successes represent a legacy that NSF will do well to nurture into the future, but in a wider context.  The centers program has been inherently localized and, while the centers have each had a somewhat wide reach, the time has come to reorganize and ramp up support so that it is on a truly national level, with funding more open to researchers at all institutions.

At the same time, the centers are large entities, each incorporating many researchers from several institutions, and that scale brings with it a certain unwieldiness.  They also represent large investments by NSF, and that requires monitoring and management that can be time-consuming, cumbersome, and expensive.  Center representatives expressed frustration at the time and money required for the annual site visits and other reporting requirements, which were not always very productive.  Much of this is inevitable, given the general requirements for NSF centers and cooperative agreements.

A 2009 Committee of Visitors (COV) review found that the SLC program, through its six centers, had succeeded in fostering a scientific community in the science of learning, construed in this broad fashion.  This happened partly through the scale of the centers and through creating a learning network going beyond the individual centers.  The community has been internationalized; the OECD held a workshop in January 2012 on the science of learning, where the NSF initiative played a central role, and the centers model is now being followed and funded in other countries.  However, workshop participants argued that, while the centers have clearly been productive and have succeeded in generating new research, future work in the science of learning will benefit from a more diverse set of funding mechanisms, rather than simply funding a set of new centers.  It will be important to continue to foster the productive interdisciplinarity that has been cultivated through the centers, but to do so in a way that diversifies the avenues for advancing the field and reduces the unwieldy research-management problems that often characterize large center funding.

In light of this, we recommend that NSF establish a focal point for interdisciplinary work in the science of learning: at minimum, a program in the science of learning embedded in the Division of Behavioral and Cognitive Sciences (BCS).  However, it should be an exceptionally wide-ranging program in at least three senses: as noted, the science of learning has a very broad basis; the program will, we recommend below, be responsible for more forms of funding than other programs; and it will need to engage with other programs across all of the NSF directorates (particularly ENG, CISE, EHR, and BIO), with other government agencies, with foundations, and with agencies in other countries.  It is clear from the afternoon presentations that there is considerable interest in supporting the science of learning in government agencies like NIH (see Robert Kaplan’s presentation), DARPA, and DOD more generally, in foundations (see Neil Albert’s presentation), and in international agencies.  Given the nature of NSF, it would be appropriate for it to play a coordinating role among these various units.  Perhaps, given the range of activities that this program will need to engage in and its collaborative work, it should constitute a distinct division of SBE on a par with BCS and Social and Economic Sciences.

Further, we recommend that NSF move beyond the center model it has followed for the last 10 years and toward a funding portfolio that has three important characteristics:  first, a more diverse array of mechanisms that differ in size, type, and length of funding; second, the addition of mid-sized collaborations, including small partnerships and networks across institutions; and third, funding that distributes support across a large number of investigators, including an annual conference to exchange ideas and findings.  Our proposals deal with NSF funding but, given what was said above about the wide interest in the science of learning, this should be coordinated with funding by other agencies and foundations devoting serious attention to the science of learning.

We propose the following funding mechanisms.

  1. Single investigator grants of the usual kind, with access to existing possibilities for supplementary funds for the development of infrastructure and interdisciplinarity.
  2. Problem-focused networks of investigators, who come from different disciplines and different institutions and will work together on an important but specific and focused problem that can be solved, in principle, within five years.  Generally, researchers would be expected to form some kind of virtual organization (see Susan Winter’s presentation here).  These awards would not be renewable, and PIs would be expected to show a solution or at least significant progress on the funded problem.
  3. Planning grants for 1-2 years to develop a proposal for the network mechanism (2).
  4. Small partnerships with two to three investigators working on interdisciplinary collaborations.
  5. Umbrella virtual institutes would provide funding for links between already funded projects.  The main goal would be to provide a venue for cross-project discourse, meetings, and exchanges.  The virtual institutes could support intense hands-on workshops (like the Institute of Neuromorphic Engineering’s annual Telluride Neuromorphic Cognition Engineering Workshop) and mini-symposia at major conferences.  The institutes would support collaboration for intensive focus on strategic research themes, laboratory exchanges of early-career researchers, and efforts to foster the growth of a research community by organizing publications, online discussion groups/meetings, shared tools, and blogging space.  Such structures could provide flexibility both to NSF and to the field for imaginative ways to structure investments, including the engagement of other funding sources and international partners.  They would emulate some benefits of a center but also afford more agility and field-based autonomy in shaping and pursuing intellectually coherent collaboration.  NSF’s current Science Across Virtual Institutes (SAVI) vehicle supporting international collaboration illustrates a structure that could emerge.  NSF currently supports thirteen SAVIs; together they represent a spectrum of imaginative field-initiated approaches to large questions in different fields, connecting research teams synergistically when the core funding for each team has already been provided.
  6. The program/division would have funds to support IGERTs focused on learning each year, beyond the usual IGERT program (which may also happen to support an IGERT on learning); proposals would go through IGERT program review but be funded by the Science of Learning program/division.  Similarly, the program would have substantial funds available to co-fund work on learning (beyond IGERTs) where proposals have come through another program.
  7. Pre-doctoral and postdoctoral fellowships focused on learning, particularly to support young people working in two or more distinct domains of learning.
  8. CAREER grants focused on learning.
  9. National Synthesis Center for the Science of Learning and its Translation (NSCSLT).  We propose a center along the lines of the national synthesis centers managed by the BIO directorate; see Judith Verbeke’s presentation and, for one example, the website of the National Evolutionary Synthesis Center.  While recommendations 1-8 involve mechanisms that are transparent and well known, synthesis centers are not well known, so we spell out what we have in mind in greater detail than in the recommendations above.

The major goal of such a center would be to synthesize work on the science of learning and to facilitate its translation, funding multidisciplinary groups of investigators to work together to bridge between findings on the basic science of learning and their applications in education, industry, and business.  The center will be home to imaginative and far-reaching activities across multiple research communities.  A few of the core activities appear below.

As for synthesis within the science of learning, for all the emphasis on integrative approaches within the current SLC Program, it is not clear that much has been achieved in defining a general science of learning with its own general principles that cross the domains and different levels of analysis in STEM learning, language acquisition, the development of spatial cognition, and so on.  It may be that each of these areas has its own distinct principles.  In that event, NSCSLT would be responsible for an annual meeting for discussing work across the full breadth of the science of learning, helping to consolidate the scientific community that has been developing and drawing people together from the distinct areas of work on learning.

Furthermore, NSCSLT will be prepared to facilitate the development of a society for the interdisciplinary science of learning along the lines of the Society for Neuroscience, which would take over the annual conference.

As for translation, the term is often associated with making research on learning applicable in contexts such as education (i.e. from basic research to practice).  A primary task of the SLCs has been to build translation between basic research communities and broader ranges of activity, including industry and business, partly to generate new ways of viewing research problems.  This has been based at least in part on an expectation that researchers collaborating across boundaries would create different and more advanced landscapes for the entire field.  This perspective on translation would be a function of the national synthesis center.

A major focus of the synthesis center will be to link the science of learning in various areas with formal and informal education.  Many of our society’s most precious assets rely on a learned citizenry.  One of the most obvious applications of the science of learning is to influence the education system.  However, the interface between educators and scientists has not yet reached a productive equilibrium, nor has it formed the common lexicon necessary for productive exchanges.  The applications of the science of learning to learning in formal and informal environments are understood at only primitive levels; a bi-directional flow of inquiry, strategies, and findings between education and the science of learning is still nascent and evolving.

Education is often held to be one of the most change-resistant enterprises of society, immune to influence from outside its own system, professions, and culture.  However, major changes are afoot, and there are reasons to expect that educational change will accelerate as new technologies enable educators to personalize learning experiences and build community, thereby reconciling attention to the individual and to the collective (see Roy Pea’s presentation).  For example, the advent of learning analytics in education research is recent and still in formation but promises changes.  Extending far beyond clickstream data points, the availability of fine-grained and heterogeneous data fundamentally alters the kinds of questions that researchers can pose about the dimensions and variables of learning, as has been clear from work emanating from the Pittsburgh LearnLab.  Complementing the fine-grained data encouraged and exploited by learning analytics is the recognition that there are different types of learning, hence more complex and variegated ways of knowing and learning than have been understood in the past.

A national synthesis center that connects the science of learning with education research is timely, as there are new opportunities and new dangers.  Considerable attention is now being devoted by many universities to Massive Open Online Courses (MOOCs); that alone opens important questions that people engaged in the science of learning should be addressing alongside educational practitioners.  Modern communication technologies are opening many possibilities for MOOCs and for other educational approaches of more modest scale.  For example, the promising Next Generation Learning Incubator (NGLI) being pioneered by some of the universities affiliated with the edX consortium is designed to bring existing and ongoing research in the science of learning to online education, to create new and innovative learning environments that promote the full spectrum of learning and development, and to conduct research into learning in those environments.



There have been considerable advances in the science of learning, many facilitated by work at the SLCs.  We recommend that NSF continue to support work in the science of learning at least at the level of recent years, and at higher levels when possible.  However, the time is now ripe for forms of support different from those of the previous program, elevating support to a broad and widely distributed national level and thus providing funding for many researchers at many research institutions.

This funding should emphasize the interdisciplinarity that has characterized work on the science of learning, encouraging work that crosses boundaries of geography, technology, and discipline, and it should be coordinated with funding provided by other NSF directorates and programs, other governmental agencies, foundations, public-private partnerships, and agencies in other countries.  Among other things, NSF should encourage the building of national infrastructure and look for opportunities for translation to other areas of societal importance: health, national security, equity, justice, and more.

This makes good sense, we believe, particularly at a time when NSF support for work in cognitive science and neuroscience is being enhanced, because we stand to learn about aspects of cognition and their neurological basis through a better understanding of how they develop and change under the external influence of learning.

We also recommend that funding for the new science of learning program/division begin in 2014, so that new work can start immediately as funding for the first cohort of SLCs ends.  The national synthesis center can come later, perhaps as funding for the second cohort of SLCs ends, allowing time for the consultations needed to develop an appropriate solicitation.

We recognize that the national synthesis center will require careful planning, but it is a critical component of our recommendations.  Given NSF's traditional emphasis on broader impacts, going back to the blueprint of Vannevar Bush (1945), it is imperative that work in the science of learning be enlisted to address serious inequities in learning.  Attempts have been made in the past, but they have been inadequate; something more comprehensive is needed.  Educators need help from the research community to address basic issues in education, and scientists often have promising ideas about methods to improve education but are unable to penetrate educational bureaucracies and cultures.  We believe that a national synthesis center could be an important means of addressing the grave political and social issues of educational inequity, with the goal of ensuring that learning is optimized for all.

We appreciate that our Steering Committee has been constituted as a Sub-committee of SBE/AC, enabling us to make these recommendations to NSF.  We stand ready to discuss these matters further and to provide further advice to NSF, as needed.

Submitted on 10 April 2013 by the Steering Committee:

David W. Lightfoot, PI, Georgetown U

Ralph Etienne-Cummings, Johns Hopkins U

Morton Gernsbacher, U Wisconsin

Eric Hamilton, Pepperdine

Barbara Landau, Johns Hopkins U

Elissa Newport, Georgetown U

David Poeppel, New York U



Bush, V. 1945.  Science, the endless frontier: A report to the President.  Washington, D.C.: U.S. Government Printing Office.

Ellenbogen, J.M., J.D. Payne & R. Stickgold 2006.  The role of sleep in declarative memory consolidation: Passive, permissive, active or none? Current Opinion in Neurobiology 16: 1-7.

Hancock, R. & T.G. Bever 2013.  Genetic factors and normal variation in the organization of language. Biolinguistics 7: 75-95.

He, H.Y., B. Ray, K. Dennis & E.M. Quinlan 2007.  Experience-dependent recovery of vision following chronic deprivation amblyopia. Nature Neuroscience 10.9: 1134-1136.

Levi, D.M. 2012.  Prentice Award Lecture 2011: Removing the brakes on plasticity in the amblyopic brain. Optometry and Vision Science 89.6: 827-838.

Nasir, N.S., A.S. Rosebery, B. Warren & C.D. Lee 2005.  Learning as a cultural process: Achieving equity through diversity.  In R.K. Sawyer, ed., The Cambridge handbook of the learning sciences.  Cambridge University Press.

Naumova, O.Y., M. Lee, S.Y. Rychkov, N.V. Vlasova & E.L. Grigorenko 2013.  Gene expression in the human brain: The current state of the study on specificity and spatiotemporal dynamics.  Child Development 84.1: 76-88.

Rice, M. 2012.  Toward epigenetic and gene regulation models of specific language impairment: Looking for links among growth, genes, and impairments.  Journal of Neurodevelopmental Disorders 4: 27.


Appendix 1: Participants


David Lightfoot, Communication, Culture, and Technology, Georgetown University

Tanya Evans, Interdisciplinary Program in Neuroscience, Georgetown University

Patrick Cox, Interdisciplinary Program in Neuroscience, Georgetown University

Steering Committee

Ralph Etienne-Cummings, Department of Electrical and Computer Engineering, The Johns Hopkins University

Morton Ann Gernsbacher, Vilas Research Professor and Sir Frederic Bartlett Professor, Department of Psychology, University of Wisconsin

Eric Hamilton, Educational Division, Pepperdine University

Barbara Landau, Department of Cognitive Science, The Johns Hopkins University

Elissa Newport, Department of Neurology, Georgetown University

David Poeppel, Professor of Psychology and Neural Science, New York University


Lila Davachi, Associate Professor of Psychology, New York University

Elena Grigorenko, Emily Fraser Beede Professor in the Child Study Center and Professor of Psychology, Yale

Takao Hensch, Professor of Molecular and Cellular Biology and Professor of Neurology, Harvard

Yuko Munakata, Professor of Psychology, University of Colorado

Roy Pea, David Jacks Professor of Learning Sciences and Education, Stanford University

Neil Albert, Associate Program Officer, Spencer Foundation

Robert Kaplan, Director of the Office of Behavioral and Social Sciences Research, National Institutes of Health

Julia Lane, Senior Managing Economist, American Institutes for Research

Judith Verbeke, Division Director (Acting), Division of Biological Infrastructure, National Science Foundation

Susan Winter, Lecturer and MIM Program Coordinator, University of Maryland


Clifton Langdon, VL2, Gallaudet University

Tanya Evans, VL2, Georgetown University


Andrea Chiba, TDLC, Associate Professor of Cognitive Science, University of California San Diego

Ken Koedinger, PSLC, Professor of Human-Computer Interaction and Psychology, Carnegie Mellon University

Mark Liberman, Christopher H. Browne Distinguished Professor of Linguistics and Professor of Computer and Information Science, University of Pennsylvania

Mmantsetsa Marope, Director, Division for Basic Learning and Skills Development, United Nations Educational, Scientific, and Cultural Organization (UNESCO)

Nora Newcombe, SILC, Professor of Psychology, Temple University


Appendix 2: Program

National Science Foundation

4201 Wilson Boulevard, Arlington, VA 22230

Stafford I, Room 375

Thursday, February 28th

8:30am: Coffee and pastries

8:45am: David Lightfoot:  Introduction

9:00am: Takao Hensch:  Brain plasticity [slides]

9:40am: Lila Davachi:  Insights into enhancing learning revealed in basic research on memory encoding, consolidation and retrieval [slides]

10:20am: Yuko Munakata:  Interactions between executive functions and learning [slides]

11:00am: Elena Grigorenko:  The (epi)genetics and (epi)genomics of learning [slides]

11:40am: Roy Pea:  Prospects for scientific advances in work on learning from the perspective of work in education and technology [slides]

12:20pm: Lunch

1:30pm: Julia Lane:  The Science of Science and Innovation Policy: Lessons for the Science of Learning Centers

2:10pm: Judith Verbeke:  NSF Funded National Synthesis Centers [slides]

2:50pm: Susan Winter:  Virtual Science of Learning Organizations: Designing the Future [slides]

3:30pm: Robert Kaplan:  NIH Investment in the Science of Learning [slides]

4:10pm: Neil Albert:  Private Foundations [slides]

4:50-5:30pm: Discussion

Friday, March 1

8:30am: Coffee and pastries

9am-noon: Structured Discussion

Afternoon: Report Collaboration and Writing (Steering Committee)


Appendix 3: One-page Summaries

Brain plasticity
Takao K Hensch

Neural circuits are shaped by experience, and the potency of that experience changes dynamically across the lifespan. A focus on the cellular and molecular bases of these developmental trajectories has begun to unravel the mechanisms that control the onset and closure of such ‘critical periods’ for plasticity. This work in animal models offers new insight for tapping the brain’s potential to rewire, both in the clinic and in the classroom.

Achievements: Two important concepts have emerged in the study of critical periods:

1) Excitatory-inhibitory (E-I) circuit balance as a trigger [1]. The classical enduring loss of visual acuity (amblyopia) due to imbalanced visual input early in life fails to occur when inhibitory function is compromised. The maturation of specific GABA circuits underlies the onset timing of plasticity and is shifted across brain regions, consistent with the cascading nature of critical periods. Notably, pharmacological gain-of-function can trigger premature onset, while genetic disruptions lead to a delay. These manipulations are so powerful that animals of identical chronological age may be at the peak of their plastic window, before it, or past it. Thus, the critical period per se is plastic.

2) Molecular ‘brakes’ that limit adult plasticity. While it is possible that plasticity factors are simply more abundant early in life, an emerging view is that the brain is intrinsically plastic, and that one outcome of normal development is to stabilize the neural networks initially sculpted by experience [2]. This is demonstrated most clearly by the late expression, beyond the critical period, of brake-like factors that act to limit excessive circuit rewiring. These include functional brakes, typically acting on neuromodulatory systems, and structural brakes, which physically prevent neurite pruning and outgrowth. Their removal unmasks potent plasticity in adulthood, which can be used to correct neurodevelopmental disorders [3].

Challenges: To leverage these insights for learning, several points need to be explored:

1) Individual variability in critical period timing. The powerful role of E-I circuit balance (in particular, of one class of GABA neuron) and its sensitivity to early exposure to drugs, adversity, sleep, or genetic perturbation predict that optimal plasticity windows will differ across individuals. A striking example may be the mis-regulation of E-I balance in autism or after early-life seizures [4], suggesting that careful mapping of critical period timing is needed in patient populations.

2) Lifting brakes in adulthood. The realization that the brain’s intrinsic potential for plasticity is actively dampened by brake-like factors has overturned the traditional view of a fixed, immutable circuitry consolidated early in life. At the same time, the great biological cost of maintaining multiple brakes throughout life highlights the need for circuit stabilization for proper brain function. Understanding why there are so many brakes, how they interact, and ultimately how to lift them in non-invasive ways may hold the keys to lifelong learning [2].


1. Hensch TK. (2005) Critical period plasticity in local cortical circuits. Nature Reviews Neurosci. 6(11):877-88.

2. Bavelier D, Levi DM, Li RW, Dan Y, Hensch TK. (2010) Removing brakes on adult brain plasticity: from molecular to behavioral interventions. J Neurosci. 30(45):14964-71.

3. Morishita H, Miwa JM, Heintz N, Hensch TK. (2010) Lynx1, a cholinergic brake, limits plasticity in adult visual cortex. Science. 330(6008):1238-40.

4. Le Magueresse C, Monyer H. (2013) GABAergic interneurons shape the functional maturation of the cortex. Neuron. 77(3):388-405.


Insights into enhancing learning revealed in basic research on memory encoding, consolidation and retrieval
Lila Davachi

Emerging data in the field of basic memory research have revealed (at least) three exciting results with the potential to be applied to enhance learning in educational settings.

Accomplishment 1: One of the hallmarks of conceptual learning is having a deep and flexible relational representation of learned information; in other words, a representation of how information learned on one day, in one class setting, relates to information learned at a different time in a different context.  Recent behavioral (Duncan et al., 2012) and neuroimaging (Shohamy and Wagner, 2008; Zeithamova et al., 2012) studies have shed light on the mechanisms that facilitate integrative learning. Specifically, integration across learning episodes is facilitated by the reactivation of previously learned associations during new learning.  Reactivation can occur through effortful retrieval of past representations during new learning, but, importantly, it can also occur automatically with appropriate cueing and context reinstatement. In a recent paper in Science (Duncan et al., 2012), we showed that integration can even be facilitated by putting participants in ‘retrieval mode’, focusing attention on past representations during new learning.  Neuroimaging data have revealed that reactivation during successful integration (1) can be tracked using multivariate analysis approaches to identify which past representations are activated (Rissman et al., 2010; Zeithamova et al., 2012; Kuhl et al., 2011; 2012) and (2) is supported by increased processing within the hippocampus.
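As a toy illustration of the logic behind such multivariate analysis approaches, the sketch below assigns a new activity pattern to whichever previously learned category's mean pattern (centroid) it most resembles. The voxel values and category labels are invented for illustration; real analyses use cross-validated classifiers over preprocessed fMRI data, not this minimal nearest-centroid rule.

```python
from statistics import fmean

def centroid(patterns):
    """Mean activity pattern (one value per voxel) across training trials."""
    return [fmean(voxel_vals) for voxel_vals in zip(*patterns)]

def classify(pattern, centroids):
    """Label of the category whose centroid is nearest (squared Euclidean)."""
    def sq_dist(label):
        return sum((p - c) ** 2 for p, c in zip(pattern, centroids[label]))
    return min(centroids, key=sq_dist)

# Invented training patterns (3 voxels) for two previously learned categories.
centroids = {
    "faces": centroid([[0.9, 0.1, 0.8], [1.0, 0.2, 0.7]]),
    "scenes": centroid([[0.1, 0.9, 0.2], [0.2, 1.0, 0.1]]),
}

# A new pattern during learning can then be read out for evidence of
# which past representation is being reactivated:
reactivated = classify([0.8, 0.2, 0.9], centroids)  # -> "faces"
```

In real studies the readout is graded (classifier evidence over time) rather than a single hard label, but the core idea of matching ongoing activity against stored category patterns is the same.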

Challenge 1: How and when past representations become reactivated is not yet thoroughly understood, and whether reactivation will improve integration or instead cause interference needs further investigation. Further areas for consideration include whether reactivation (1) is time-sensitive, (2) is related to memory consolidation, and (3) can be modulated by contextual cues.

Accomplishment 2: One important way to enhance subsequent learning is to enhance the retention of recently learned information. Sleep has long been known to be critical in memory consolidation, particularly within the first night after new learning (see Ellenbogen et al., 2006). Beyond sleep, recent work suggests that intermittent rest periods may benefit memory retention.  Recent neuroimaging results show that functional connectivity between brain regions of interest during post-encoding awake rest periods is related to long-term retention (Tambini et al., 2010).  Other recent work has demonstrated that low-frequency correlations after an experience are related to perceptuo-motor learning (Daselaar et al., 2010; Baldassarre et al., 2012).  Finally, resting-state connectivity in the human brain, even before any new experience, has been related to performance on a wide variety of cognitive measures.

Challenge 2: This exciting new analytic approach allows for the identification of large-scale brain networks that cohere, change with recent experience, and predict learning. Future work should determine whether the changes in low-frequency fluctuations are indeed a signature of ongoing memory consolidation. Importantly, for educational purposes, there are many questions one could ask to assess application to real-world learning settings: Will memory improve if we simply allow students to rest during the day? What counts as ‘rest’? Is it sufficient for a brain area to be ‘resting’ to allow consolidation of specialized representations?

Accomplishment 3:  It has been known for some time that when learners are put in control of the learning experience, by being able to explore the learning environment, they retain more information about that experience (Voss et al., 2011; for review see Gureckis and Markant, 2012). However, recent work suggests that incidental encoding of information may also be enhanced during active learning. If so, relatively minor shifts in the perceived or actual control given to learners may enhance retention of task-specific, but also incidental, representations.

Challenge 3: It is imperative to identify why active learning improves retention and memory, and whether this changes with the type of learning assessed.  It is possible that exploration in and of itself enhances processing in key memory regions, such as the hippocampus, leading to better memory formation.


  1. Baldassarre A., Lewis C.M., Committeri G., Snyder A.Z., Romani G.L., Corbetta M. (2012) Individual variability in functional connectivity predicts performance of a perceptual task. Proc Natl Acad Sci. 2012 Feb 28;109(9):3516-21. doi: 10.1073/pnas.1113148109.
  2. Daselaar S.M., Huijbers W., de Jonge M., Goltstein P.M., & Pennartz C.M. (2010) Experience-dependent alterations in conscious resting state activity following perceptuomotor learning. Neurobiol Learn Mem. 2010 Mar; 93(3):422-7. doi: 10.1016/j.nlm.2009.12.009.
  3. Duncan K., Sadanand A., Davachi L. (2012) Memory’s penumbra: Episodic memory decisions induce lingering mnemonic biases. Science 27 July 2012: 337 (6093), 485-487. doi:10.1126/science.1221936.
  4. Gureckis T.M. & Markant D.B. (2012) Self-directed learning: A cognitive and computational perspective. Perspectives on Psychological Science 7(5) 464-481. doi: 10.1177/1745691612454304.
  5. Kuhl B.A., Rissman J., Chun M.M., & Wagner A.D. (2011) Fidelity of neural reactivation reveals competition between memories. Proc Natl Acad Sci Apr 5 2011; 108(14): 5903-8. doi: 10.1073/pnas.1016939108.
  6. Kuhl B.A., Rissman J. & Wagner A.D. (2012) Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory. Neuropsychologia 50(4):458-69. doi: 10.1016/j.neuropsychologia.2011.09.002.
  7. Rissman J., Greely H.T., Wagner A.D. (2010) Detecting individual memories through the neural decoding of memory states and past experience. Proc Natl Acad Sci 25 May 2010; 107(21):9849-9854. doi: 10.1073/pnas.1001028107.
  8. Shohamy D., Wagner A.D. (2008) Integrating memories in the human brain: Hippocampal-midbrain encoding of overlapping events. Neuron 60:378-389.
  9. Tambini A., Ketz N., Davachi L. (2010) Enhanced brain correlations during rest are related to memory for recent experiences. Neuron 65:280-290.
  10. Voss J.L., Gonsalves B.D., Federmeier K.D., Tranel D., Cohen N.J. (2011) Hippocampal brain-network coordination during volitional exploratory behavior enhances learning. Nat Neurosci. 2011 Jan; 14(1):115-20. doi: 10.1038/nn.2693. Epub 2010 Nov 21.
  11. Zeithamova D., Dominick A.L., Preston A.R. (2012) Hippocampal and ventral medial prefrontal activation during retrieval-mediated learning supports novel inference. Neuron 75(1):168-79.


Interactions between executive functions and learning
Yuko Munakata

In the field of cognitive development, we now know a great deal about how children develop executive functions (such as inhibitory control) that support goal-directed behavior [1].  We also know that such executive functions have implications for learning.  However, we still have much to learn about the nature of the relationship between executive functions and learning, and the implications for attempts to maximize learning.


Achievements:

– Establishing the importance of executive functions for a range of important long-term outcomes: Executive functions in childhood predict academic achievement, health, and income up to decades later [2,3].  For example, inhibitory control in preschoolers predicts both math and reading ability in kindergarten, independent of general intelligence.

– Establishing that executive functions can be changed through experience: A variety of programs have shown promise in improving children’s executive functions [4], from computer-based training targeting specific executive functions to preschool curricula that may target literacy and/or socioemotional development in addition to executive functions.

These achievements highlight the promise of understanding and improving children’s executive functions in order to improve learning and other outcomes.

Challenges: Capitalizing on this promise requires a deeper understanding of:

– How executive functions impact learning: While the development of executive functions is viewed as adaptive and is associated with positive outcomes, there is an increasing recognition that these relationships are complex.  Executive functions are diverse and can involve trade-offs, whereby being good at one aspect of executive functioning (e.g., maintaining goals in working memory) may come with a cost for another (e.g., reduced flexibility to shift to new goals) [5].  Similarly, while better inhibitory control is associated with better academic performance, some aspects of executive function may impair certain types of learning [6].  Thus, understanding the varied ways in which different aspects of executive function influence learning is critical.

– Effects of experience: While interventions and training programs have shown promise, some of these programs are multifaceted and time-intensive (e.g., incorporated into a year of preschool curriculum), making it difficult to know which components are critical, and limiting their ability to inform theory development and targeted interventions.  Some targeted interventions (e.g., for working memory) have yielded gains but shown limited transfer [7], while others (e.g., for inhibitory control) have been met with limited success [8].

These challenges highlight important opportunities for research on basic mechanisms of executive functions and learning, and on theory-driven work on effects of experience.


1. Munakata, Y., Snyder, H. R., & Chatham, C. (2012). Developing Cognitive Control: Three Key Transitions. Current Directions in Psychological Science, 21(2), 71–77. doi:10.1177/0963721412436807

2. Blair, C., & Razza, R. P. (2007). Relating effortful control, executive function, and false belief understanding to emerging math and literacy ability in kindergarten. Child Development, 78(2), 647–663. doi:10.1111/j.1467-8624.2007.01019.x

3. Moffitt, T. E., Arseneault, L., Belsky, D., Dickson, N., Hancox, R. J., Harrington, H., et al. (2011). A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences, 108(7), 2693–2698. doi:10.1073/pnas.1010076108

4. Diamond, A. (2012). Activities and programs that improve children’s executive functions. Current Directions in Psychological Science, 21(5), 335–341. doi:10.1177/0963721412453722

5. Goschke, T. (2000). Involuntary persistence and intentional reconfiguration in task-set switching. In S. Monsell & J. Driver (Eds.), Attention and Performance XVIII: Control of Cognitive Processes (Vol. 18, pp. 331–355). Cambridge, MA: MIT Press.

6. Thompson-Schill, S. L., Ramscar, M., & Chrysikou, E. G. (2009). Cognition without control: When a little frontal lobe goes a long way. Current Directions in Psychological Science, 18(5), 259–263. doi:10.1111/j.1467-8721.2009.01648.x

7. Melby-Lervåg, M., & Hulme, C. (2013). Is Working Memory Training Effective? A Meta-Analytic Review. Developmental Psychology, 49(2), 270–291. doi:10.1037/a0028228

8. Thorell, L. B., Lindqvist, S., Bergman Nutley, S., Bohlin, G., & Klingberg, T. (2009). Training and transfer effects of executive functions in preschool children. Developmental Science, 12(1), 106–113. doi:10.1111/j.1467-7687.2008.00745.x


The (Epi)genetics and (epi)genomics of learning
Elena L Grigorenko

As the accumulation of knowledge grows exponentially in both neuroscience and genetics/genomics, both fields feed back into psychology, expanding both the spectrum of questions it can ask and the research methods it can employ. The junction of psychology and genetics/genomics has generated findings that have impacted both fields. Although with many caveats, the consensus is that understanding the psychology of learning necessitates understanding the biology of learning, which, in turn, necessitates understanding the genetics/genomics and, even more so, as evidenced by recent research, the epigenetics/epigenomics of learning. Within this context, multiple specific observations have recently been established as cornerstones of the field; three of them are highlighted here.

Accomplishment 1. As envisioned by the earliest quantitative-genetic models of complex behavioral (psychological) traits such as learning, although the development of these traits can be derailed by a single major impact (see Accomplishment 2), the typical “work” of the genetic machinery that lays the brain-based foundation for these traits calls for the engagement of many genetic factors. Each of these factors, individually, may account for little variance in the trait, but, collectively, they may account for a substantial portion of the trait’s population distribution (typically 50-80%). It has become evident that there are numerous genes, with relatively well-known functions, whose specific common alleles (i.e., present in 5% or more of the population; also referred to as genetic variants) seem to contribute ubiquitously to learning in its various forms. Yet it appears that there are specific combinations of these common variants that might underpin the relative specificity of particular types of learning (either by themselves or in combination with some less omnipresent variants). At this point, many of these variants have been documented.

Challenge 1. Given that the portion of variance attributable to each of these variants is rather small (almost by definition, as exemplified in early quantitative-genetic models), the field is plagued with findings that are contradictory and often uninterpretable. The first and foremost problem for studies trying to identify or confirm common genetic variants (and their combinations) underlying learning is statistical power. Only in sufficiently large and well-characterized samples can common variants and their combinations be studied meaningfully.
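The power problem can be made concrete with a standard back-of-the-envelope calculation: for a simple one-degree-of-freedom association test, the sample size needed to detect a variant scales inversely with the fraction of trait variance it explains. The sketch below is a textbook-style approximation, not the method of any particular study cited here; the genome-wide significance threshold used as a default is illustrative.

```python
from statistics import NormalDist

def required_n(variance_explained, alpha=5e-8, power=0.8):
    """Approximate sample size for a two-sided association test of a
    variant explaining `variance_explained` of trait variance.

    Uses the noncentrality approximation: power is reached when
    N * variance_explained >= (z_{1-alpha/2} + z_{power})^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) ** 2 / variance_explained
```

At a genome-wide threshold, variants explaining a tenth of a percent of trait variance require samples in the tens of thousands, which is why single-cohort studies of common variants so often produce the contradictory findings described above.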

Accomplishment 2. Although many variants (scores of which might be interchangeable) are needed to support typical learning, this machinery can be broken in multiple ways, often by a single genetic/genomic event. Because such events are rather deleterious, they are rare. At this point, many such structural events have been documented and related to a number of complex phenotypes (referred to as genomic syndromes); in virtually all of these syndromes, learning is jeopardized.

Challenge 2. Understanding and cataloging such deleterious events is a laborious and complex task. Although detectable in single individuals, they are rare on the population scale. In addition, there appears to be much heterogeneity at the levels of both the genome and the phenome. Only in the context of a very careful and detailed characterization of both the genomic lesion and its phenotypic manifestations can the impact of such rare events be understood and their generalizability for models of learning appraised.

Accomplishment 3. Even though they have been present in the literature for over 50 years, epigenetic mechanisms and their role in learning are only now becoming evident to the field. These mechanisms appear to be the material foundation of the genome’s existence in its environment and can explain or encompass gene-by-environment phenomena (interaction, correlation, co-action, however named). A rapidly accumulating literature (primarily from animal models) marks these mechanisms as the key (epi)genetic/(epi)genomic mechanisms of learning.

Challenge 3. Because (epi)genetic/genomic markers are tissue- and gene- (or gene-region-) specific, most studies today either rely on cell types that are readily available (e.g., saliva or blood), sample postmortem tissue (and so can reflect only past, not ongoing, epi-regulation), or are feasible only in animal models. Although exciting, this field of research is saturated with limitations of understanding and generalizability and has to be navigated with great caution.


Prospects for scientific advances in work on learning from the perspective of work in education and technology
Roy Pea, Stanford University

This presentation begins with a clarifying framework for the aims of education, synthesized in a recent NRC report, Education for Life and Work: what it is that people should be able to know and to do, in terms of three broad areas of competencies: cognitive, intrapersonal (such as intellectual openness, initiative, and metacognition), and interpersonal (such as teamwork, collaboration, leadership, and responsibility).  A central implication of this framework is that we need a science of learning for education that examines the conditions for developing complex performances and competencies, not only memory, problem solving, and solo learning.

Furthermore, how learning occurs, and the affordances for its design, change fundamentally when everyone everywhere has immediate access to the hyperconnected world of smartphones tapping into cloud computing, social media, broadband wireless networks, rich media conferencing, and big-data-informed apps. Many predict that jobs will be transformed far more rapidly in this hyperconnected world, with lifelong learning and re-tooling for new jobs a persistent fact of life as the knowledge and tools necessary for work and life undergo waves of disruption and re-invention.

This tripartite emphasis is important for broadening STEM participation, since more than cognitive processes and strategies are at stake in educational participation: interpersonal issues (such as stereotype threat and difficulties in communication and collaboration) and intrapersonal issues (such as disciplinary identity and self-evaluation) are consequential for selecting and maintaining STEM learning pathways.

It is valuable to contextualize several overarching challenges and opportunities for research, design, tool and theory development in relation to this framing.  Each surfaces substantial needs for scientific advances in work on learning for education.

Challenge #1: Personalized Learning at Scale

The 2010 National Education Technology Plan presents a Grand Challenge Problem of personalized learning in an always-on networked world of educational opportunities (p. 78): “Design and validate an integrated system that provides real-time access to learning experiences tuned to the levels of difficulty and assistance that optimize learning for all learners, and that incorporates self-improving features that enable it to become increasingly effective through interaction with learners.” “Grand challenge problems” are important problems that require establishing a community of scientists and researchers to work with measurable progress toward their solution.

This vision requires open learning maps of the dependency relationships among learning standards, such as the K-12 Common Core State Standards in mathematics and English Language Arts, that are to be achieved during education. Learning resources also require tagging with metadata linking them to learning standards and learning maps. Digital assessments are needed to determine where the learner is in those maps, and a computational model of the learner needs to be continuously informed by data about the learner’s performances and choices. The concept of recommendation engines, familiar from Amazon, iTunes, and Netflix for books, music, and movies, is also applicable in this personalized learning vision, though with greater complexity for learning resources, given the interdependencies of learning progressions as referenced to learning maps. Open-source software has been created to enable broad experimentation with the issues involved in making such a K-12 learning technology ecosystem functional, and public-private partnerships will be valuable. Deeper inquiry is needed to unpack learner preferences for the modality or modalities of their learning resources, and to establish how learners’ interests can be matched to appropriate learning resources.
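The "learning map" idea can be made concrete by modeling prerequisite relationships as a directed acyclic graph; in its simplest form, recommendation then reduces to surfacing skills whose prerequisites are already mastered. The skill names below are invented for illustration, and real standards-referenced maps are far larger and richer than this sketch.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical prerequisite map: each skill maps to the skills it depends on.
learning_map = {
    "fractions": {"whole-number arithmetic"},
    "ratios": {"fractions"},
    "linear equations": {"fractions", "ratios"},
}

def recommend(mastered, learning_map):
    """Skills not yet mastered whose prerequisites are all mastered."""
    return {skill for skill, prereqs in learning_map.items()
            if skill not in mastered and prereqs <= mastered}

# One valid order in which the whole map could be taught.
teaching_order = list(TopologicalSorter(learning_map).static_order())
```

In a deployed system, where the learner sits in the map would come from digital assessments rather than a hand-maintained "mastered" set, and the recommendation step would weigh learner interests and resource metadata, not just graph reachability.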

There has been substantial progress toward a common vocabulary for tagging learning resources by the Learning Resource Metadata Initiative, building on the semantic web and supporting alignment to the Common Core Standards. Yet far more inquiry is needed to establish empirically warranted learning progressions for K-12 STEM domains other than mathematics (such as the Next Generation Science Standards), for courses of study at the college level, and for intrapersonal and interpersonal competencies, not only the cognitive. Intertwined relationships among these three categories of competencies during human development and learning are also likely. For example, personal identification with mathematics or with science domains – what some call 'disciplinary identity' – builds in significant measure on interests that the learner develops, but these interests themselves can be socially constructed, as when a learner aspires to develop the capabilities that an adult role model manifests. How is learning such competencies mediated by technologies, and how could such learning be better supported?

Challenge #2: Multi-modal learning interaction science and technology

An important frontier is to capture, integrate, and make systematic inquiries into large-scale, multimodal data streams of learning interactions in situ: learners with their teacher in a classroom, collaborators in an after-school project-based learning environment, distributed students in a college-level MOOC, or children interacting with 'social robots'.

There is increasing recognition that human learning is a complex, multi-sensory affair of embodied semiotic interactions, and that the production and perception of meaning in context engage the full range of sensory modalities. This matters because many challenges attend inquiry into how learning occurs within and across formal and informal settings, as learners and educational systems exploit increasingly pervasive mobile learning devices and online educational applications and resources such as MOOCs, OER, Wikipedia, web search, and digital curricula, games, and simulations. Yet most research on learning in education involves minimal sensing of the contexts in which learning processes are enacted and learning outcomes are developed, since classroom studies dominate. A variety of technologies now makes new inquiries into these issues possible.

The increasing accessibility of environmental and personal sensors now enables ready capture and review of rich data streams of human interactions in learning contexts, such as audio/video (including panoramic systems), GSR, heart rate, breath rate, body motion, GPS, gesture, EEG, and emotional states. Increasingly, such sensors are embedded in mobile phones. "Sensing" of learning contexts, and recognition of the people, gestures, discourse patterns, and activities in them, will be an important complement to "learning analytics" and "educational data mining" – emerging and closely related multidisciplinary fields that have focused principally on clickstream data from online educational activities and related administrative data.

Less advanced but much needed are technologies and methodologies for integrating diverse data types from sensor streams and human-coded data, and multimodal data stream 'workbenches' incorporating analytic tools for sense-making and pattern detection with rich interactive visualization capabilities for examining inter-relationships among data streams. How can multimodal analytical techniques from computer vision, speech recognition, gesture recognition, and machine learning deepen our understanding of in-situ experiences of learning and teaching in education? Research on the behaviors underlying differential performance of collaborative groups has already exploited these multimodal sensing opportunities. Multidisciplinary research teams will be needed to tackle these big-data challenges, engaging social scientists, learning scientists, computer scientists, statisticians, neuroscientists, and disciplinary domain experts.
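A first step toward such a workbench is aligning heterogeneous sensor streams on a common timeline. The sketch below pairs each event in one stream with the nearest-in-time sample from another; the stream names, timestamps, and values are invented for illustration, not drawn from any real deployment.

```python
# Sketch: aligning two sensor streams by nearest timestamp, a first step
# toward a multimodal data stream 'workbench'. Data below are invented.
import bisect

def align_nearest(base, other):
    """For each (t, v) in base, attach the other stream's value nearest in time."""
    times = [t for t, _ in other]
    aligned = []
    for t, v in base:
        i = bisect.bisect_left(times, t)
        # consider the neighbors on either side and keep the closer one
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        aligned.append((t, v, other[j][1]))
    return aligned

gesture = [(0.0, "point"), (1.0, "nod"), (2.0, "wave")]   # coded gesture events
heart   = [(0.1, 72), (0.6, 74), (1.1, 73), (1.9, 80)]    # irregular heart-rate samples

print(align_nearest(gesture, heart))
# -> [(0.0, 'point', 72), (1.0, 'nod', 73), (2.0, 'wave', 80)]
```

Real multimodal pipelines must also handle clock drift between devices, differing sampling rates, and missing data, which is part of why integrated workbenches remain an open need.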


The Science of Science and Innovation Policy: Lessons for the Science of Learning Centers
Julia Lane

“How much should a nation spend on science? What kind of science? How much from private versus public sectors? Does demand for funding by potential science performers imply a shortage of funding or a surfeit of performers? … A new ‘science of science policy’ is emerging, and it may offer more compelling guidance for policy decisions and for more credible advocacy.” – John H. Marburger III, “Wanted: Better Benchmarks,” Science, May 20, 2005

Alas, despite the promise, we know far more about making wise investments in health, education, finance, and workforce development than we do in science. How should science best be fostered? The answers are not well established, but they are emerging in what Marburger called “a science of science policy.” What has been learned from this new field can be used by the Science of Learning community as it thinks through new investments in the field.

In the United States, the White House established an interagency group of the major science agencies, and the National Science Foundation established a research program, the Science of Science and Innovation Policy (SciSIP). Japan and Norway have begun similar programs. The results include a US roadmap as well as an EU-US roadmap, hard evidence about a variety of different investments and the results of different program structures, and a wealth of additional insights from Science of Science and Innovation Policy researchers.

Figure 1

Getting the right conceptual and empirical framework matters, lest resources and people be squandered because incentives are wrong. The emerging approach recognizes that science is fundamentally about the creation, transmission, and adoption of ideas, not about counting documents. The conceptual and empirical focus is then on identifying and supporting scientists and scientific networks – the Kuhnian trailblazers, pioneers, settlers, sodbusters, ranchers, and developers of science. This is in stark contrast to previous approaches, which treat science as a black box (or a slot machine) where, when large amounts of money are spent, “a miracle occurs” in terms of innovation (Figure 1). The more scientific approach structures measurement to describe who is being funded to collaborate with whom, and where, to do what (Figure 2). This correctly identifies the change agents as researchers and research networks, and it renders current approaches – derived from the reporting requirements of previous decades, which arbitrarily tie publications and patents to a single funding source (the right-hand side of Figure 1) – both misleading and irrelevant.

Working from this framework has important science policy implications. It means that the focus of new investments should be on people, not documents. Funders should identify good researchers as they are developing, fund them to help build and expand their research networks, reward them for training good graduate and undergraduate students, and build the infrastructure necessary to do their science, rather than building massive infrastructures to count publications and patents, as in the United Kingdom. This is all feasible in an era of big data and cyberinfrastructure: existing data can be repurposed and 21st-century technologies applied.

Figure 2

Four core steps are necessary to start the process in any country, including developing countries.

Four key technical steps

The first is to build the capacity to describe all people working on directly funded science projects, including students. There are two reasons for this. One is that identifying promising and productive individuals and their scientific collaborations is critical to fostering good science and technology transfer. The second is that students play an important and under-recognized role: they can be the key to the transfer of technology from universities to the private sector. The experience of both the United States' STAR METRICS and Australia's ASTRA programs is that only fourteen data elements are necessary and that the information can be retrieved from existing payroll systems with relatively low burden and cost. It should be even more straightforward for the many developing countries that are just starting to build their science funding systems; the systems can be built correctly from the ground up.

The second is to build the capacity to describe what research investments are being made, so that research institutions can identify their research strengths, gaps, and changes over time. It is unnecessary to rely on arbitrarily created taxonomies of science that expect researchers to fill out forms to categorize their activities. Google and other companies do not require anyone to fill out forms to tag billions of documents; they use natural language processing techniques to mine massive amounts of text.
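A minimal version of such text mining can be sketched as keyword scoring over abstracts. The topic keyword lists and example abstract below are invented, and production systems would use far richer natural language processing (topic models, trained classifiers) than this toy scorer.

```python
# Sketch: tagging research abstracts against topic keyword lists by text
# mining rather than self-reported forms. Topics and abstracts are made up.
from collections import Counter
import re

TOPICS = {
    "machine learning": {"neural", "classifier", "training", "model"},
    "genomics": {"genome", "sequencing", "gene", "expression"},
}

def tag_abstract(text, topics=TOPICS):
    """Assign the topic whose keywords occur most often; None if no keyword hits."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = {t: sum(words[w] for w in kws) for t, kws in topics.items()}
    return max(scores, key=scores.get) if any(scores.values()) else None

abstract = "We train a neural classifier; the model generalizes well."
print(tag_abstract(abstract))  # -> machine learning
```

The point of the sketch is the workflow, not the scoring rule: categorization is derived from the documents themselves rather than from forms researchers must fill out.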

The third is to describe the results of research, permitting more credible advocacy and better research management. This can be done by matching to existing datasets, such as patent datasets, scientists' curricula vitae, and the new channels through which scientists communicate ideas. New tools can be applied to harvest publications, capture scientific activity, and extract data.

The fourth is to bring the information together in an easy-to-use and intuitive framework. Since the community consists of both researchers and policymakers, the goal is both to create knowledge and to make it usable for policymakers. Examples of how this research is yielding practical results can be seen in the White House's prototype R&D Dashboard and the wireframes developed for the French Institut National du Cancer's HELIOS project.

The results in practice

As Daniel Kahneman has noted, the first big breakthrough in our understanding of the mechanism of association was an improvement in a method of measurement. The approach being used, for example, by the Committee on Institutional Cooperation is to actively engage research administrators and SciSIP researchers in each university to:

  1. Describe, measure, visualize, and influence the impact of research and its progression from bench to practice by:
    • Capturing, at the most granular level, the collaborations and activities of project teams through successive stage gates of the research process
    • Quantifying the impact of access to trained researchers on the employment and economic performance of firms and economies
    • Designing successful public engagements in science and innovation policy
  2. Describe, measure, and visualize the breadth of coverage of science by capturing research collaborations
    • Within and across scientific areas
    • With other institutions within the United States and globally
    • With industry
  3. Describe, measure and visualize the pipeline of, and outcomes for, students likely to feed into the regional and national economy by matching STAR METRICS workforce data with:
    • Topics and fields of research
    • Starting earnings, employing industries and regional networks by matching to Census Bureau data

So the answer to the question of how the science of learning centers might build the next stage of investment is: use a scientific approach. Build a scientific community; create an intellectually coherent, generalizable, and replicable body of measurement and analysis; and use new tools and models to examine the results of the investments from the beginning onward.


NSF Funded National Synthesis Centers
Judith A Verbeke

The NSF funded National Synthesis Centers play a critical role in organizing and synthesizing biological knowledge that is useful to researchers, policy makers, government agencies, educators, and society.  They develop new tools and standards for management of biological information and meta-information, support data analysis capabilities with broad utility across the biological sciences, host workshops that bring together scientists from a variety of disciplines, and host and curate databases.  Synthesis Centers do not support the collection of new data; they add value to data already collected.

The Directorate for Biological Sciences at NSF pioneered Synthesis Centers more than 15 years ago in response to community requests for a Center that would bring scientists together to synthesize the growing body of data in the field of ecology.  The vision was to create a Center that would address new questions, pose new research approaches, and advance existing fields in new directions through the process of synthesis.  The specific questions or problems to be addressed were not to be defined by the Center, or specified in the initiating proposal, but rather proposed by the research community, and thus to change over time.  Over its 15 years of operation, this Center (the National Center for Ecological Analysis and Synthesis, or NCEAS) facilitated synthesis to answer scores of specific questions while advancing a new culture of collaboration in the field of ecology.  Since its inception in 1995, NCEAS has hosted more than 5,000 individuals and supported more than 500 projects; products have included both high-volume and high-profile publications, stellar postdoctoral training in synthesis, and – to a lesser extent – educational and outreach materials.

NSF/BIO now supports a portfolio of National Synthesis Centers that realize synthesis in different ways and focus on different disciplines and approaches.  These Centers capitalize on economies of scale and cutting-edge cyberinfrastructure, and they foster innovation in ways not possible for individual researchers.  The research conducted by scientists who participate in these Centers has become increasingly collaborative, interdisciplinary, and international.  New social challenges have arisen around how scientists work together across disciplines, institutions, and geographic and political boundaries – and how we measure the impact of these shifts.  As a result, the notion of synthesis as advanced by Centers has changed considerably since NCEAS's inception.

The National Evolutionary Synthesis Center (NESCENT) promotes the synthesis of information, concepts and knowledge to address significant, emerging, or novel questions in evolutionary science and its applications.  This Center’s science, informatics, and education and outreach programs serve the science community, educators and the general public.  NESCENT science includes a broad portfolio that spans a wide range of organisms, habitats, methods, and disciplines.  NESCENT’s education and outreach activities include Darwin Day Roadshows, teacher training workshops, and an evolution film festival.  The core NSF funding to NESCENT has enabled the Center to initiate or participate in additional externally funded cyberinfrastructure projects that are aligned with the Center’s mission.

The National Institute for Mathematical and Biological Synthesis (NIMBIOS) supports creative solutions to complex problems at the interface between mathematics and biology.  NIMBIOS enables scientific advances with high impact in areas as diverse as agriculture, the environment, health, and national security.  This Center includes education programs aimed at the mathematics/biology interface, thereby building the capacity of mathematically competent, biologically knowledgeable, and computationally adept researchers needed to address the vast array of challenging questions in this century of biology.  Education and outreach occurs at many levels, including K-12, research experiences for undergraduates, partnerships with minority-serving institutions, postdoctoral training, and education of the general public.

The iPlant Collaborative (iPlant) provides cyberinfrastructure to enable new conceptual advances in plant sciences through integrative, computational thinking.  iPlant focuses on grand challenge questions in the plant sciences, including innovative approaches to education, outreach, and the study of social networks.  This Center involves plant biologists, computer and information scientists and engineers, as well as experts from other disciplines who all work together in integrated teams.  The cyberinfrastructure created by iPlant provides access to world-class physical infrastructure as well as services that promote interactions that advance the use of computational thinking in plant biology.  The iPlant cyberinfrastructure framework includes hardware, software, and support for the multidisciplinary teams.

The Socio-Environmental Synthesis Center (SESYNC) uses synthetic approaches to advance the frontiers of scientific understanding of environmental complexity in order to anticipate and manage environmental challenges. SESYNC is dedicated to creating synthetic, actionable science related to the structure, functioning, and sustainability of socio-environmental systems.  The Center defines “actionable science” as scholarship that has the potential to inform government, business, and household decisions; improve the design and/or implementation of public policies; and influence public- or private-sector strategies, planning, and behaviors that affect the environment.  Workshops sponsored by the Center engage philosophers, sociologists, political scientists, psychologists, anthropologists, environmental biologists, and policy makers to integrate broad disciplines from the outset and to set a precedent for all subsequent activities.


Virtual Science of Learning Organizations: Designing the Future
Susan J Winter

Consistent with its mandate to improve US competitiveness, the goal of NSF's Science of Learning (SoL) solicitation was to extend the frontiers of the SoL and broaden its impact on society; Centers were expected to contribute to learning in all educational settings.  In the US, both education and university-based research are inherently state and local matters carried out through a remarkably fragmented educational apparatus that is difficult to influence, much less control.  Thus, both extending the frontiers of science and contributing to learning represent the kind of complex intellectual challenges that necessitate multi-disciplinary collaboration among diverse scientific teams sharing common resources. Due to globalization, the dispersion of resources, and the inclusion of citizen scientists, scientific teams addressing complex challenges like the SoL are increasingly geographically distributed in virtual organizations (collections of individuals whose resources are dispersed, yet who function as a coherent entity through the use of information and communication technologies; Cummings et al. 2008).

The Virtual Science of Learning Organization (VSLO) Design Challenge:

Like all science, virtual scientific organizations are human endeavors predicated on social conventions and requiring enabling technologies.  They are enabled by transformational advances in networked information infrastructure and by increasingly common computer-mediated human interaction.  The attendant social conventions are evolving to adapt to this new mode of science.  VSLO leaders will need to make informed and reflective choices about a host of fundamental issues, understand how these play out in their specific contexts, and decide which components can be performed virtually. Making good choices about the mix of virtual and face-to-face elements requires understanding the general principles of organizing.  In this presentation, a few of the most essential issues in the design and management of VSLOs are described (see Lutters and Winter, 2011). First, three issues central to organizations are highlighted: asset development and use, governance and decision-making rights, and knowledge flow. Then eight common contextual differences are addressed:  the organization's lifecycle, problem boundedness, scale and scope, task interdependence, actor interdependence, degree of shared context among members, regulatory environments, and technological readiness. Interdependencies, co-evolution, and the impact of virtuality on each of these are discussed.

The Future Organization of the Science of Learning:

Simply put, distributed science is the future of the scientific enterprise in general and the SoL in particular, not because virtual, cross-disciplinary collaboration is easy, but because extending the frontiers of the SoL and broadening its impact on society demand it and the technology enables it. Indeed, it is difficult to see how these goals could be accomplished without VSLOs; to engage these challenges, the SoL community must leverage complementary human, technical, organizational, and social assets that are often sparsely distributed and poorly organized.  The resources that must be brought to bear on this problem cannot be collocated, so the science must be performed in a distributed fashion. As they become more dependent on shared resources such as expensive equipment, big heterogeneous databases, or vast sensor networks to conduct the science of learning, diverse communities must become more closely integrated at all levels, from data collection through theory validation.   This can be done by identifying and partnering with others who share the same vision to assemble the right assets, establish effective governance structures, foster efficient and effective knowledge flows, and reflectively engage contextual factors at the time of VSLO design, and then by continually monitoring and revisiting these as conditions change to maintain optimal alignment.

VSLOs will be created, but hard and persistent organizational problems remain. Society and technology are dynamic, so creating and managing effective VSLOs requires us to embrace the iterative processes of doing and learning, engage in self-reflective scanning for patterns across, not just within, projects as they evolve, and hit a continuously moving target. Ultimately, the social, organizational, and technical arrangements by which the SoL progresses will co-evolve toward greater alignment. Creating, managing, and participating in effective VSLOs is possible if those involved know why they are invested in these endeavors and are prepared to engage over a long time frame. The real question is how quickly we can align the various elements to optimize their progress.


  1. Cummings, J., T. Finholt, I. Foster, C. Kesselman, K. Lawrence and D. Rhoten. 2008. “Beyond Being There: A Blueprint for Advancing the Design, Development, and Evaluation of Virtual Organizations.” Arlington, VA: National Science Foundation.
  2. Lutters, W. & Winter, S.J. (2011) “Virtual organizations”, In W. Bainbridge (Ed.) Leadership in Science and Technology: A Reference Handbook, Sage, Thousand Oaks, CA.


NIH Investment in the Science of Learning
Robert M Kaplan and Sabrina Liao

The National Institutes of Health (NIH) is the world's largest funder of biomedical and behavioral research. The 2012 NIH budget of approximately $31 billion provided support for approximately 50,000 research grants. NIH funds approximately 300,000 research investigators at over 2,500 institutions. This paper offers a preliminary analysis of NIH expenditures relevant to the science of learning.  The analysis uses the RePORT Expenditures and Results system (RePORTER), an electronic tool that can be used to search NIH intramural and extramural expenditures for the past 25 years. The system also provides access to research supported by the Centers for Disease Control and Prevention (CDC), the Agency for Healthcare Research and Quality (AHRQ), the Health Resources and Services Administration (HRSA), the Substance Abuse and Mental Health Services Administration (SAMHSA), and the US Department of Veterans Affairs (VA).

We searched the system for all active grants using the terms learning, social learning, molecular learning, neural processes of learning, neural learning, cognitive learning, social influences, cognitive processes of learning, animal learning, neurological learning, molecular basis of learning, machine learning, social influences on learning, cognitive basis of learning, neurological basis of learning, artificial intelligence, artificial intelligence learning, and human learning. Among these categories, the largest expenditure was in social learning, with 5,515 active grants and an estimated expenditure of $2.1 billion. The next most common category was molecular learning (3,282 projects; $1.3 billion). Neural processes of learning was associated with 3,585 active projects and an expenditure of $1.3 billion. There were relatively few grants in artificial intelligence (35 projects; $16 million) and artificial intelligence learning (22 projects; $9.5 million). There were 1,583 projects on animal learning ($623 million) and 1,894 projects on cognitive processes ($705 million).

Figure 1 offers a word cloud based on these search terms. The size of each printed word reflects the number of active grants in the NIH portfolio.

Among the NIH institutes, the National Institute of Mental Health consistently supports the largest amount of research on learning. The National Institute of Neurological Diseases and Stroke is also a major supporter of research on the neurological basis of learning while the National Institute of Child Health and Human Development and the National Institutes on Aging also have substantial portfolios, primarily focused on cognitive processes.

This portfolio analysis is at a relatively early stage, and these results should be interpreted with great caution. Many grants are captured under multiple categories; as a result, the category totals cannot be summed to estimate overall expenditure.  At the meeting, analyses using other portfolio-analysis tools will also be presented.

Figure 1.  Word cloud representing number of grants in NIH portfolio by science of learning topic
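The double-counting caveat above can be made concrete with a toy calculation. The grant IDs, categories, and dollar amounts below are invented for illustration only; they are not drawn from the NIH portfolio.

```python
# Sketch: why per-category totals cannot simply be summed when grants
# match multiple search categories. All records below are invented.
from collections import defaultdict

grants = [
    {"id": "R01-001", "cost": 500_000, "cats": {"social learning", "animal learning"}},
    {"id": "R01-002", "cost": 300_000, "cats": {"social learning"}},
    {"id": "R01-003", "cost": 200_000, "cats": {"animal learning"}},
]

# Per-category totals: R01-001 is counted under both of its categories.
totals = defaultdict(int)
for g in grants:
    for cat in g["cats"]:
        totals[cat] += g["cost"]

naive_sum = sum(totals.values())               # overstates the portfolio
unduplicated = sum(g["cost"] for g in grants)  # each grant counted once

print(naive_sum, unduplicated)  # -> 1500000 1000000
```

An unduplicated estimate therefore requires deduplicating by grant ID before summing, which is why the category figures reported above are not additive.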


Private Foundations
Neil Albert


Many private foundations support work related to learning, though most of those that support research on learning do so because of an interest in education. For the last 50 years, the Spencer Foundation has sought to improve education by generating new knowledge through research, but it awards only a small portion of the private resources going to education grants. In 2010, private foundations awarded just over $5B in education grants, with the majority of that money supporting programs. Based on the overall proportion of private grants that fund research (just over 15%), that amounts to about $750M of support for education research. But much of that funding supports research focused on issues of practical importance; for example, school leadership's impact on teachers and students, improving graduation rates, or meeting the needs of first-generation, low-SES, or special-needs students. Nonetheless, there are opportunities to build from recent research initiatives of some private foundations. There are also some areas that would benefit greatly from rigorous research and have the attention of private foundations, but for which there is not yet a clear research agenda.

At this time, Spencer has no large-scale proactive efforts focused on learning research per se, but a few topics have been discussed or are the focus of smaller-scale projects that connect to learning research. These include projects on cultivating and measuring cognitive, intrapersonal, and interpersonal skills and habits; deeper learning in the context of the Common Core State Standards; and equity in education. There are many questions in each of these areas that would benefit greatly from the serious deliberations of scholars from the learning research community.

One area of great interest of late, following the publication of Paul Tough's "How Children Succeed" and dozens of related op-ed pieces, is the role that cognitive, intrapersonal, and interpersonal habits and skills play in education (and life). The focus of Tough's book is just the tip of the iceberg, and countless questions remain unanswered, including many for learning researchers. As foundations explore these issues, one question regularly asked is, "which of these skills or habits can impact students' academic performance?" But there are many other important questions, such as, "how can we help students learn the skills and habits that would help them in their personal and professional lives?"

Many educators strive to develop their students’ ability and tendency to direct their own learning in effective ways, and students learning to learn (and becoming learners) is a worthy aim. There is also great interest in a variety of skills and habits (many of which are difficult to measure) and how students learn them and learn to use them, as elaborated in an NRC report from late last year. It is worth noting that this topic has some clear connections to efforts of a number of other foundations, including the Hewlett Foundation’s Deeper Learning initiative, and the Novo Foundation’s focus on Social and Emotional Learning.

There are two major education reforms that are also likely to warrant the attention of those studying learning: Massive Open Online Courses (MOOCs) and the Common Core State Standards. In the case of MOOCs, researchers are likely to find a number of initial directions to build on in the MacArthur Foundation's nearly decade-old program in Digital Media and Learning. And the number and, presumably, diversity of MOOCs provide a valuable opportunity for researchers studying learning. In the case of the Common Core State Standards, there is an increased focus on favoring depth of knowledge over breadth of knowledge, and a focus on habits and skills to that end. The design of the Common Core State Standards may also provide many opportunities for researchers to learn from, and to help, teachers transitioning into this new instructional ideology.

In closing, there is an additional point that I would like to make to this group – one that I always neglected as a researcher. Education is a life-long process. Educators develop grade-level expertise, but educators at all grade levels are looking to research. For learning research to better connect to education, more studies are needed across development. Third-grade teachers may not benefit as much from studies of undergraduates, or even of 6th graders for that matter, as one might hope. Similarly, research is needed to understand individual and group differences so that we can better meet students' varied needs – and to ensure that our schools have the tools necessary to be a catalyst for social equity.
