AI-enabled tools and systems are rapidly changing many parts of our lives, yet there is a real gap between AI’s growing societal influence and public understanding or agency. Most of us don’t fully understand what AI is, how it’s shaping our lives, or how to navigate it carefully and thoughtfully. We need to develop our AI literacy. As Digital Promise defines it, AI literacy includes the “knowledge and skills that enable humans to critically understand, evaluate, and use AI systems and tools to safely and ethically participate in an increasingly digital world.”
Science centers have a critical role in helping to build AI literacy.
This week, ASTC is releasing “Building Public Agency in AI: A Typology of Roles for Science and Technology Centers and Museums,” a framework developed through cross-sector collaboration with leaders from science centers, the AI industry, academia, government, and civil society. The framework identifies 13 distinct roles through which science centers can advance AI literacy and public engagement. Its release coincides with National AI Literacy Day (March 27), a coordinated effort led by The Tech Interactive, The EDSAFE AI Alliance, aiEDU, and Common Sense Media to spark conversations about AI literacy across education, community organizations, and public spaces.
What AI Literacy Looks Like in Science Centers
Science centers across the field are already engaging in AI literacy work in varied ways. The Phillip and Patricia Frost Museum of Science welcomed “AI: More Than Human,” curated by London’s Barbican Centre, and the Exploratorium’s now-closed “Adventures in AI” offered interactive installations that helped visitors understand AI technologies through hands-on experience. The Lawrence Hall of Science’s AI K-12 Network goes beyond public programming to work directly with K-12 educators, helping district leaders co-design AI literacy strategies for real classrooms rather than relying on one-size-fits-all commercial solutions. The Museum of Science, Boston is hosting public conversations like “Being Human in the Age of AI” that create space for communities to grapple with AI’s broader implications.
These examples illustrate the range of approaches possible, and ASTC’s new framework is designed to help organizations of all sizes identify appropriate entry points based on their capacity and local priorities.
These diverse approaches share something important in common: they recognize that AI literacy isn’t just about understanding how the technology works. Effective AI literacy must draw on multiple domains—history, ethics, social implications, and yes, the technology itself—to help people understand how AI is developing and impacting different segments of society. This raises a critical question that should guide all our work in this space: AI literacy to what end?
AI Literacy to What End?
AI literacy can mean many different things depending on the audience and the goals, and different approaches aim toward different ends:
- Enthusiasm/adoption: Getting people excited about and willing to use AI tools
- Understanding risks and protections: Learning to identify potential harms and navigate AI safely
- Workforce readiness: Preparing for AI-influenced careers
- Public agency: Understanding and having the capacity to influence how AI develops and gets deployed
- Informed citizenship: Developing the knowledge needed to participate in democratic decisions about AI
- Creative engagement: Exploring AI as a tool for, and subject of, artistic and cultural expression
Understanding AI systems empowers people to make informed decisions about when and how to engage with them. This includes understanding how they work, where they’re embedded in daily life, how to identify AI-generated content, and how algorithmic decision-making functions. People need opportunities to ask critical questions, build confidence, and navigate an AI-integrated world with agency and discernment.
This is where science centers excel. Through hands-on exploration and joyful, curiosity-driven learning experiences, we can help people engage critically and confidently with AI. We are building this capacity for both the communities we serve and our own staff, and this work will require ongoing investment and focus for the foreseeable future.
Fortunately, we’re not alone in this thinking.
Learning from Existing Frameworks
Multiple organizations have developed AI literacy frameworks that can guide our work, including the AILit framework from the Organisation for Economic Co-operation and Development (OECD); AI4K12, developed by the Computer Science Teachers Association (CSTA) and the Association for the Advancement of Artificial Intelligence (AAAI); aiEDU’s AI Readiness Framework; the Department of Labor’s AI Literacy Framework; and Digital Promise’s Framework to Understand, Evaluate, and Use Emerging Technology.
All of these frameworks share important common ground, and they validate what science centers are already doing: treating AI literacy as broader than technical knowledge alone. Each recognizes that effective AI literacy must address not just how systems and algorithms work, but also ethics, social context, and the human dimensions of AI, along with their implications for individuals and society.
Each framework has a different emphasis and intended audience. The OECD framework is the most comprehensive and works across contexts. The Digital Promise framework is designed for educators and builds on established tech literacy competencies. AI4K12 provides educators with big ideas and grade-level progressions for K-12 AI education. aiEDU’s framework focuses on “AI readiness”—building foundational understanding and capacity. The Department of Labor framework centers workforce preparation and economic impacts.
For science centers, these frameworks provide scaffolding and validation. They confirm that our approach, balancing hands-on exploration with critical thinking about implications, aligns with emerging best practices. And they give us language and structure to articulate our goals to partners and funders.
So, if AI literacy is this essential foundation, where does it lead? What else is possible?
Looking Forward: Literacy as a Foundation
ASTC’s new framework, “Building Public Agency in AI: A Typology of Roles for Science and Technology Centers and Museums,” identifies 13 distinct roles science centers can play in AI public engagement, organized into four groups. The framework anchors ASTC’s growing suite of resources to support members in AI engagement, including a Community of Practice for peer learning, professional development opportunities, and implementation tools to come.
Across these groups and roles, AI literacy is both a critical foundation and often the first area where science centers engage, given their expertise in making complex topics in science and technology accessible for diverse audiences.
Group A – Building Awareness and Knowledge about AI. This is where most institutions are currently working. It includes:
- Creating opportunities for exploration and hands-on learning about AI (Role A1)
- Helping people navigate AI’s impact on employment and careers (Role A2)
- Growing community capacity to use AI for their own purposes (Role A3)
- Helping people navigate AI safely and securely (Role A4)
This foundational work is critical to our relevance and to our role at the intersection of science, technology, policy, and society, and it creates opportunities to layer on additional work as capacity and partnerships develop.
AI literacy is not the only role science centers can play—it’s woven throughout a broader set of possibilities.
The typology identifies three additional role groups beyond literacy:
- Group B: Facilitating public input into AI development (bringing community voices into research, policy, and product design)
- Group C: Strengthening AI-related processes and infrastructure (supporting education systems, building coalitions, advocating for public interest)
- Group D: Exploring AI’s social impact and meaning (facilitating dialogue, cultural exploration, generating evidence about AI’s effects)
Literacy will be a component of almost any AI programming we do, but it opens doors to deeper engagement.
By grounding our work in building public agency—not just awareness or enthusiasm—we position ourselves as essential players in ensuring AI develops in ways that serve the public interest. This is how we stay relevant and responsive as trusted community institutions in a rapidly changing technological landscape.
Learn more about the framework and access implementation resources at astc.org/ai. And join us in celebrating National AI Literacy Day on March 27.
