The role of informal science educators in today’s educational landscape is crucial yet often undervalued. These educators, who work in settings such as museums, zoos, aquariums, botanical gardens, and planetariums, occupy a unique niche: they bridge formal scientific knowledge and public engagement without the traditional structures of formal schooling. Megan Ennes, a former educator at the North Carolina Aquarium at Fort Fisher who is now at the Florida Museum of Natural History, has devoted more than a decade to a pervasive challenge faced by this community: the absence of a standardized, reliable framework for self-assessing teaching efficacy.
Informal educators frequently operate under constraints that differ markedly from those of their K-12 counterparts, including irregular employment patterns, high turnover, and a lack of institutionalized feedback mechanisms. Unlike classroom teachers, who engage with a stable group of students over a fixed period, informal educators face continuously changing audiences, from curious children to knowledgeable adults. This variability demands pedagogical agility: the ability to captivate a broad demographic spectrum while maintaining scientific accuracy.
Recognizing these challenges, Ennes set out to develop a comprehensive assessment tool tailored to the nuanced requirements of informal science education. Her work responds to educators’ need to identify their own pedagogical strengths and weaknesses without relying on external evaluations, which are often impractical or nonexistent in informal settings. In collaboration with Bikram Karmakar, a statistician at the University of Florida, Ennes designed a 51-question self-efficacy survey that synthesizes established educational evaluation metrics with custom indicators sensitive to the distinctive realities of informal learning environments.
The survey is grounded in nine broadly recognized dimensions of teaching success, adapted to the multifaceted roles and fluid dynamics of science centers. Factor analysis was used to group responses into coherent clusters that are easier to interpret, much as personality typologies such as the Myers-Briggs framework condense many traits into a few types. This approach lets educators and administrators pinpoint the specific pedagogical areas that need development, optimizing the allocation of scarce professional development resources.
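To make the clustering idea concrete, here is a minimal, self-contained sketch of the kind of exploratory factor analysis described above, using only NumPy. Every specific here (9 items, 3 latent dimensions, the simulated Likert responses) is invented for illustration and does not reproduce the study’s actual instrument or methods:

```python
import numpy as np

# Hypothetical setup: 9 survey items answered on a 1-5 Likert scale, where
# items 0-2, 3-5, and 6-8 are each driven by one of 3 latent efficacy
# dimensions with different strengths (so the factors are distinguishable).
rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 1000, 9, 3

strengths = [2.0, 1.2, 0.7]
loadings_true = np.zeros((n_items, n_factors))
for f, a in enumerate(strengths):
    loadings_true[3 * f:3 * f + 3, f] = a  # block structure: 3 items per factor

latent = rng.normal(size=(n_respondents, n_factors))
raw = latent @ loadings_true.T + 0.5 * rng.normal(size=(n_respondents, n_items))
responses = np.clip(np.round(3 + raw), 1, 5)  # discretize to a 1-5 scale

# Principal-axis-style extraction: eigendecompose the item correlation
# matrix and keep the top n_factors components as (unrotated) loadings.
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
top = np.argsort(eigvals)[::-1][:n_factors]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

# Assign each item to the factor it loads on most strongly.
clusters = np.argmax(np.abs(loadings), axis=1)
print(clusters)
```

In practice, survey researchers typically follow extraction with a rotation step (e.g., varimax) using established software, but the recovered block structure is the core intuition: items that move together across respondents load on a shared factor.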
One pivotal aspect highlighted by this research is the pedagogical complexity informal educators face. They must craft experiences that are cognitively engaging across broad age ranges while remaining both scientifically rigorous and accessible, a delicate balance of content, delivery style, and audience engagement strategy. Ennes’s survey not only measures teaching self-efficacy but also underscores that pedagogy itself is a discipline vital to informal science education.
The lack of systemic evaluation in informal education stems largely from operational and structural factors within science centers. Institutions juggle diverse priorities including education, research, exhibit creation, and animal care, often with minimal personnel dedicated specifically to educational evaluation. Consequently, evaluation becomes sporadic or superficial, further isolating educators from meaningful feedback loops. Ennes’s self-assessment tool offers a practical workaround by empowering educators to independently gauge their instructional competencies.
Beyond feedback scarcity, the transience of audiences poses a significant barrier to obtaining longitudinal insight into educational impact. Unlike traditional classrooms, informal educators engage with visitors for short, episodic interactions, diminishing the feasibility of collecting consistent evaluative feedback. The survey thus fills a critical void by facilitating internal reflection and professional growth based on self-reported efficacy rather than episodic external observations.
Ennes argues that a targeted approach to professional development is desperately needed in informal science education. Many institutions have limited budgets and time to dedicate to staff training. By systematically identifying educators’ specific areas of weakness, her survey tool enables more judicious investments in continuing education, maximizing outcomes both for individual educators and their institutions.
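As a toy illustration of how such targeted triage might work, the sketch below flags an educator’s weakest areas from self-reported scores. The dimension names and the 3.0 cutoff are hypothetical, not drawn from Ennes’s instrument:

```python
# Hypothetical per-dimension mean scores (1-5 scale) for one educator;
# the dimension names below are invented for illustration.
scores = {
    "audience engagement": 4.2,
    "content knowledge": 3.9,
    "adapting across age groups": 2.8,
    "handling misconceptions": 2.5,
    "program design": 3.4,
}

THRESHOLD = 3.0  # assumed cutoff: dimensions below this get flagged

# Flag low-scoring dimensions, weakest first, so limited professional
# development budgets can be directed where they matter most.
flagged = sorted(
    (name for name, s in scores.items() if s < THRESHOLD),
    key=lambda name: scores[name],
)
print(flagged)  # ['handling misconceptions', 'adapting across age groups']
```

A real deployment would aggregate many item responses per dimension and compare against institutional baselines, but the prioritization logic is the same.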
The publication of these findings in the journal Research in Science Education marks a significant advancement in the scientific study of informal education. It bridges the gap between empirical educational research and the practical realities faced by educators outside traditional classroom environments. This research not only provides a validated assessment instrument but also highlights the critical need for professional development infrastructure tailored to the informal science education sector.
Providing free access to this survey tool democratizes educational evaluation for science centers regardless of size or funding. Museums and related institutions can now empower their staff with a scientifically grounded mechanism to assess and enhance their pedagogical skills. This democratization is particularly vital for smaller facilities with limited human and financial resources, which have historically lacked access to rigorous evaluative tools.
Ultimately, Ennes’s work underscores a broader cultural imperative: to recognize and support the indispensable role of informal science educators in public understanding of science. These educators foster curiosity, critical thinking, and lifelong scientific literacy outside the traditional school system, making their professional growth a focal point for science education advancement.
The deployment of this self-efficacy survey sits at the intersection of statistical rigor, educational theory, and practical applicability. It offers a scalable, validated framework for informal educators to engage in self-directed professional development while giving institutions actionable data to guide resource allocation. Its ripple effects could transform informal science education by fostering a culture of continuous improvement amid the sector’s unique challenges.
As informal education environments continue to evolve, integrating such scientifically grounded assessment mechanisms will be essential to ensuring educators remain effective, motivated, and capable of inspiring diverse audiences. Ennes’s decade-long research initiative thus serves as both a blueprint and beacon for the future of informal science pedagogy—promising a more skilled, confident workforce dedicated to enriching public science engagement nationwide.
Subject of Research: Teaching self-efficacy and professional development of informal science educators
Article Title: The Development and Validation of a Teaching Self-Efficacy Assessment for Informal Science Educators
News Publication Date: March 4, 2026
Image Credits: Florida Museum photo by Kristen Grace
Keywords: science education, informal education, museum education, self-efficacy assessment, teaching evaluation, professional development, pedagogical skills, science centers, educational assessment, public engagement, survey validation, educational measurement

