When are evaluation and other visitor feedback strategies the most useful for helping advance a science center’s mission? When are such strategies less successful?
August 25th, 2013 - Posted in 2013, Dimensions, Viewpoints by Emily Schuster
This is an extended discussion of the question that appeared in the Viewpoints department of the September/October 2013 issue of Dimensions magazine.
Evaluation is a powerful tool for providing visitors with a voice, which then enables us to stay relevant to their needs and motivations. The goal of any exhibit or program is to engage visitors, perhaps challenging them to think critically about a concept or topic or to reflect on their own understanding. To do this, we need to understand who our visitors are and what knowledge and attitudes they may be bringing to an experience, as well as what would most likely pique their interest and keep them engaged. This strategy is only ineffective if the institution is not ready or willing to hear and act upon that voice.
Joy Kubarek-Sandor, director, learning planning and evaluation
John G. Shedd Aquarium, Chicago
Evaluation is most useful when evaluators are working in collaboration with the team, often exhibit developers and writers. I start a study by asking, “Do the evaluation questions reflect the interests and goals of the stakeholders?” When they do, I find the results are more easily integrated into practice.
Evaluation is most successful when it is part of an iterative process. When evaluation results can be acted upon and folded back into development, evaluation often becomes a valued tool for the team.
Nina Hido, senior project evaluator
Exploratorium, San Francisco
Evaluation can be most useful to a science center when its purpose is clear (e.g., to demonstrate public value, improve a program, meet funder requirements); its design, methods, and instruments align well with gathering and analyzing evidence of appropriate outcomes and impacts (including those unanticipated); its institutional support at all levels is strong; and its implementation helps build organizational capacity.
David Ucko, president
Museums + more, Washington, D.C.
We pondered these questions at a staff meeting and decided that a small but important tweak may be needed to begin addressing them. First, let’s clarify that mission describes what a museum does and impact describes the result of what a museum does—on the audiences it serves. We believe that anything a museum does—collect, exhibit, educate—is meaningless unless it is done in the pursuit of impact. So, when is evaluation most useful for advancing a science center’s mission? When it is done to advance impact, not mission. It’s a little like that old adage: If a tree falls in the forest and no one is around to hear it, does it make a sound? With regard to mission and impact, we take a slightly different angle: if a museum does work or evaluation that does not lead to impact, is it really doing the work?
Evaluators are in the same boat as some museum practitioners. Evaluation is a means to an end, just as a museum’s collections are a means to an end. Unless evaluation is placed in a meaningful context, such as helping a museum pursue impact, evaluation doesn’t serve a purpose. As an evaluator, I suppose I should say evaluation is always valuable. But that’s just not true. I’m a self-proclaimed data nerd. I love the minutiae of evaluation—poring over pages and pages of interview transcripts and pulling out those five key visitor trends. I can get lost in data for days and find myself pulled in many seemingly fruitful directions. “Oh, how interesting!” I will say to no one in particular. I often find myself lost in the visitors’ world, chuckling to myself about a quirky response to an exhibit or wondering who someone is and why he or she responded to a museum experience in a particular way. Getting lost in your work can be fun and, lucky me, happens to those of us who are passionate about what we do. So, while pursuing tangents in evaluation data is fun for me, there is a flip side to this coin—a lack of focus that can be detrimental to the pursuit of a larger goal. This is why we, as evaluators, push our clients to articulate what it is they want to achieve, to keep us (and them) on track.
We consistently find museum practitioners to be among those most passionate about their work. Thus, these moments of losing oneself in one’s work, whether researching or examining an object, designing an exhibition, or creating a program, are frequent occurrences. When it comes to pursuing impact, this passion is both a joy and a burden. It is a joy because most practitioners can easily articulate what they do for their audiences. But they often get lost in what they do and may not think about why they do what they do. A practitioner articulating the “why” is similar to the entire museum articulating its intended impact. Articulating impact provides a laser focus for all the work that museum practitioners do and helps keep them on track toward pursuing that larger goal. So, our response to ASTC’s second question, “When are evaluation strategies less successful in helping advance a science center’s mission?” is this: when a science center and its collective staff have yet to articulate the impact they hope to have on the audiences they serve. Otherwise, we can all do evaluation until we are blue in the face, but those reports will continue to collect dust on hundreds of science centers’ shelves. Of this I am certain—just like death and taxes.
Emily Skidmore, senior associate
Randi Korn & Associates, Inc., Alexandria, Virginia
I think the question creates a bit of a circle.
A carefully crafted mission statement should be the guidepost for work in the museum. It needs to be visibly carried out in all aspects of work, from exhibits to programs to visitor services. Visitor feedback and evaluation are the tools that help a museum know whether the mission is being achieved. If the key stakeholders (visitors, board, staff, etc.) all understand and embrace the mission, evaluation can be key to clearly demonstrating how the museum is carrying it out, thus creating more support. If there is little understanding or acceptance of the mission, evaluation may help some stakeholders understand more about the mission, but may also not make much impact overall.
On a related thought, at one point I was part of a team conducting an internal evaluation at the Museum of Science, Boston, which was focused on learning more about what staff thought about the museum. While conducting interviews, I discovered that there were people who did not see their departments as “mission related.” Various staff had created a classification of which departments were and were not mission focused, and these classifications varied from department to department. For example, people in accounting thought their work was not connected to the mission but that visitor services was; at the same time, there were people in visitor services who thought they were not connected to the mission. In every case, it was people who felt that they were not in mission-related departments who were creating this classification. I, who had always assumed everyone on the staff was part of carrying out the mission, was very surprised by this.
I wonder what effect it has on visitors when not everyone within the institution feels that they are part of carrying out the museum’s mission.
Lynn Baum, principal
Turtle Peak Consulting, Needham, Massachusetts
The above statements represent the opinions of the individual contributors and not necessarily the views of their institutions or of ASTC.
About the image: A young visitor tests an app during a summer program. Photo courtesy Shedd Aquarium