At the 2011 ASTC Annual Conference in Baltimore, a session entitled “Exhibit Evaluation: Useless Bureaucratic Hurdle or Valuable Tool?” sparked a particularly spirited discussion. The session had its origins in a provocative post on ASTC’s listserv (ISEN-ASTC-L) in January.
Held Monday, October 17, the session was moderated by Sam Taylor of Pittsburgh’s Carnegie Museum of Natural History. It began with a series of five-minute presentations addressing the question of evaluation’s value.
First, Dave Ucko, formerly of the U.S. National Science Foundation, highlighted evaluation’s usefulness in providing accountability to the federal government, adding to our knowledge base, continuing to professionalize the field, and strengthening projects.
Next, Charlie Carlson of San Francisco’s Exploratorium (who emphasized that his statements do not reflect the position of his institution) pointed out that there are lots of exhibits that have succeeded without formal evaluation. “[Evaluation] does not directly result in a memorable, positive visitor experience,” he said.
Martin Weiss of the New York Hall of Science in Queens disagreed, calling evaluation “extremely important,” but asserting that “our profession has to strive to make evaluation better and more usable.”
As the sole independent evaluator on the panel, Ellen Giusti remarked, “Charlie says we know when an exhibit is popular. Popularity is not always the key to success.” She stressed the value of evaluation in determining whether an exhibition’s goals have been met, and also reminded the audience that lessons learned from one project can be applied to the next.
Finally, Paul Orselli of Paul Orselli Workshop (POW!) in Baldwin, New York, discussed the importance of having internal capacity for both exhibition development and evaluation, as well as the need to diversify evaluation. “We’ve sort of built up what I would characterize as evaluation monoculture,” he said. “I wonder if we could widen the view of notions of evaluation, so real physical prototyping becomes a more valued part of this, and exhibit people become more truly part of a partnership [with evaluators].”
Next the question was put to the crowd, which included exhibit developers, evaluators, and other professionals from around the world. An impassioned debate ensued. Here are some comments from this discussion:
• “Peer evaluation in some safe setting—a discourse about what’s worked and what hasn’t—would be more useful than professional evaluators’ feedback.”
• “Evaluation has tilted toward how people are changed after seeing an exhibition. There’s not enough emphasis on what people do and see in the exhibition…That’s why people go to museums, not because they [ask themselves], ‘What are the cognitive outcomes our kids will have?’”
• “Evaluation is one way to learn about what we do and the effect of what we do on our visitors—one way to learn about our own practice. I view it as a learning tool, and that keeps me going because I don’t know everything and I never will.”
• “I think evaluation should be like a visit from the health department to a café. They can show up at any time and gather information. It ought to be that the Spanish Inquisition can descend on your exhibition and really give you a bad time. That would be much more exciting.”
• “In evaluations, it’s easy to learn about all the things the project achieved, but you really have to squint to see what failed. We should put a book together of failed projects—that’s how the field advances.”
• “If you want future funding, the only way to get it is to have a positive report. I find that very problematic. [It makes it] difficult to actually get honest evaluations.”
• “It should be a requirement that 10% of exhibit project money is left after the exhibit opens so you can actually go back and do remedial work to make it a truly great exhibit.”
• “You can’t measure everything that matters and everything that matters can’t necessarily be measured.”
• “If you just toss out the first idea you have and it works, congratulations…but it helps to have an outside person to say, ‘Let’s walk through the data. What do we have to do to make that work?’”