Active and Engaged: Science Centers and Informal Learning
The Search for Learning Outcomes: Beyond the Deficit Model
By Richard Toon
Do science centers increase public understanding of science? I have always thought so, even though the premise
is hard to prove. Frank Oppenheimer thought so, too. In 1968, he included the following in a Curator article
about his rationale for creating the Exploratorium:
There is an increasing need to develop public understanding of science and technology. The fruits of
science and the products of technology continue to shape the nature of our society and to influence events
which have a worldwide significance. Yet the gulf between the daily lives and experience of most people and the
complexity of science and technology is widening. (Vol. 11, No. 3).
Science centers, Oppenheimer believed, would help bridge this gulf. But have they? Although hundreds of science
centers have opened since the Exploratorium was launched in 1969, the gap still seems wide as we attempt to
communicate about biotechnology, nanotechnology, and other sciences that didn't even exist in the 1960s. The
National Science Board's 2005 report on the public's level of "scientific literacy" (defined as "knowing
basic facts and concepts about science and having an understanding of how science works") shows little
change from the results of the late 1980s (with the possible exception of increased awareness that antibiotics
do not kill viruses).
Are our institutions failing to raise public understanding of science? Or are we not evaluating our performance
correctly? Either way, we need to do better.
What do visitors say?
Most museum professionals believe strongly that science centers have a positive effect on the public's
understanding of science. They just don't like the standard way of measuring it.
To start with, they point out, there's the term "science literacy" itself, which implies that there is some
general level of knowledge to be gained and some general test to apply, on which a passing grade would render
one scientifically literate. Yet despite their objections, testing is the way some researchers have approached
this issue for years, as in the National Science Foundation "Indicators" surveys, which ask the public to
define basic scientific concepts, such as molecules or plate tectonics, and then compile the results.
Eight years of work on education programs in a science center have taught me that this "deficit model" just
isn't helpful. Most people aren't coming to museums to have their science deficits replenished, but they
are coming to learn. To illustrate, I cite some data I collected in 2002 from visitors to Titanic: The
Artifact Exhibition, when I was a researcher at the Arizona Science Center.
Titanic is an extremely popular exhibition that continues to tour many U.S. centers. In Phoenix, it
received 320,000 visitors, 34,000 of whom wrote comments in a loose-leaf (and content-replaceable) binder
stationed prominently at the exhibit exit. This was part of an evaluation that included visitor observations,
an analysis of media coverage, and surveys of visitor experience, satisfaction, and enjoyment.
The comments book had the simple aim of prompting visitors to share their immediate responses. At the top of
each page was the invitation: "Please share your thoughts." It's impossible to give a complete sense here of
what these contained, but one can gain a flavor from a few examples:
Definitely worth the time and money even though we have all been deluged with "Titanic-mania" all
these years. It's great to hear the story and see all the artifacts in this most moving way. A definite tribute
to all who died and lived at that far distant time.
I loved the sound and the sense of actually being on the ship. The grand staircase was amazing! ...Through
gaining an identity of one actually on the ship, I gained an attachment and hope towards my identity living.
Thanks for a great insight!
Wonderful, makes you want to go to the library and find out more about your person. It was like you were
actually there walking through the ship (through time), until you hit the end.
...and I survived the Titanic; it was a great experience, well put together. I was very surprised
to see a brother. I had no idea black passengers were allowed on the ship, white wife or not.
Titanic shall always be the most dramatic manifestation of man's arrogance and defiance to life, fate, and
God's will and sovereignty.
These and thousands of other comments told me that visitors did learn a great deal at the exhibition, but more
often about themselves than about the science or history of the Titanic per se. They demonstrated that
learning is personal, unpredictable, idiosyncratic, context-dependent, and provisional. This kind of data can't
be captured with a multiple-choice survey focused on facts; it requires a more qualitative approach.
Bridging the quantitative/qualitative gap
The attempt to assess science learning in the broader "context" of not only factual knowledge but also social,
psychological, and political factors has been adopted and explored by many researchers and has proved useful in
understanding learning in general, and, one should add, science learning in science centers in particular.
But the "contextualist" approach still hasn't helped to answer the big question: Have science centers had a
generally positive effect on public understanding of science? This is mainly because studies using the model
have been relatively small and limited to a particular exhibit, gallery, or conceptual understanding, while
studies using the deficit model have employed large-scale survey techniques. Hence the dilemma for the field:
The contextualist approach, though richer and closer to what people actually tell us about their learning,
doesn't produce the results we need, while the deficit approach, though broad-based and quantifiable, produces
results we don't accept. Is there any way to marry the two?
Fortunately, some possible escape routes have appeared. In a 2004 article in the journal Public
Understanding of Science, Patrick Sturgis and Nick Allum describe how they used data from the 1996 British
Social Attitudes Survey to test four hypotheses concerning the relationship between the favorability of people's
attitudes toward science and their level of political and scientific sophistication. The results demonstrate
empirically that attitudes toward science are affected more by the context of scientific knowledge than by the
context of political knowledge.
By integrating contextualist and deficit perspectives, the authors write, "we hope to open up a more open and
fruitful dialogue between researchers in the field....We are convinced ... that survey-based approaches are by
no means unsuitable for research into public understanding of science from a 'contextualist' theoretical
perspective." Work done at the University of Leicester, U.K., by professor Eilean Hooper-Greenhill and her team
of researchers suggests another method of quantifying people's experiences in informal learning environments.
Their Generic Learning Outcomes model (see box below) has been adopted by the U.K.'s Museums, Libraries,
and Archives Council as a recommended approach for institutions to use in reporting learning outcomes.
Will the GLOs be successful in crossing the divide from contextualist to empirical? It's not yet clear, and no
large-scale study of the sort I envision is under way. But what we do know is that the time is right to develop
tools that enable industry-wide comparisons and longer-term tracking of outcomes, while remaining sensitive to
new understandings of what counts as learning.
If we can pull this off, we may learn some useful and exciting things at last about science learning outcomes,
and be able to show that Frank Oppenheimer's hope is being fulfilled by the institutions he helped inspire.
Richard Toon is senior research analyst at the Morrison Institute for Public Policy,
Arizona State University, Tempe, Arizona.
Generic Learning Outcomes: An Assessment Framework
Under the direction of professor Eilean Hooper-Greenhill, the Research Centre for Museums and Galleries (RCMG)
at the University of Leicester, U.K., developed a toolkit that uses a model called "Generic Learning Outcomes"
(GLOs) to measure learning in museums, libraries, and archives. Funding for the project was provided by Resource
UK, now the Museums, Libraries, and Archives Council (MLA).
In the GLO model, individual learning experiences of all kinds (creative, intellectual, social) are
organized into the following five major categories, with some further subcategories:
• Knowledge and understanding:
learning facts, making sense of something, deepening understanding, making links and relationships between
things, knowing how museums operate.
• Skills:
being able to do new things, such as reading, thinking critically, making judgments (intellectual); locating,
evaluating, and using information (information management); meeting people, sharing, teamworking (social);
recognizing the feelings of others, managing feelings (emotional); writing, speaking, listening
(communications); running, dancing, making (physical).
• Attitudes and values:
feelings and perceptions, opinions or attitudes about oneself or others, empathy, increased motivation,
positive and negative attitudes in relation to an experience.
• Enjoyment, inspiration, and creativity:
having fun, being surprised, innovative thoughts and/or actions, creativity, exploration and experimentation.
• Activity, behavior, and progression:
what people do, intend to do, and have done; reported or observed actions; a change in the way people manage
their lives.
Because the model is based on a view of learning as personal and context-dependent, diverse methods of
data-gathering can be used. In the U.K. pilot, concluded in 2004, the 15 participating organizations analyzed
user comment cards, held focus groups, employed observational studies, administered surveys, and reexamined
existing data sources for evidence of GLOs. Their results, detailed in an RCMG report, What Did You Learn
at the Museum Today?, have inspired other museums to adopt the model. At least one pilot site (the
Imperial War Museum) attributes its success in gaining new government funding to evidence of visitor learning
provided through the framework.
By creating a common language for talking about learning, the GLO system can enable institutions to report,
aggregate, and compare learning outcomes at the sector level (all science museums, for example). In addition,
the system promises to be useful for those seeking to communicate with schools about the learning impact of
museum programs. For more details, visit www.inspiringlearningforall.gov.uk.
Editor's note: In 2004, using the GLO framework, RCMG evaluated a further 41 museums
and other organizations for the U.K.'s Department for Culture, Media and Sport; the report of that study,
Inspiration, Identity, Learning: The Value of Museums, is available in PDF on the RCMG web site,
www.le.ac.uk/museumstudies. A new
study for the MLA, involving 56 British museums, is under way and will be reported in 2006.