

Assessment Is More Than a Report Card


Survey measures effectiveness of science education outreach programs.
Measuring the effectiveness of a science education outreach program too often means toting up the number of participants and calling it evaluation. A survey designed by the Howard Hughes Medical Institute (HHMI) and reported in the Fall 2004 issue of the journal Cell Biology Education shows a way to change that.

"Many grantees view assessment as merely completing a report card in which the amount of effort expended is the sole measure reported," said Debra A. Felix, precollege science education program officer at HHMI. Felix is the lead author of the report on the results of a study of the outcomes and impact of a group of HHMI science education grants. Her co-authors are Peter J. Bruns, HHMI vice president for grants and special programs; Jill G. Conley, HHMI precollege science education program director; and Mark D. Hertle and Lori B. Washington, formerly on the HHMI precollege science education program staff.

To help its grantees assess their programs more effectively, HHMI designed a survey of four-year grants that the Institute awarded in 1999 to 35 medical schools and biomedical research institutions to conduct science education outreach in their communities. Data on those programs were compared with data from a control group of 18 similar institutions that applied for but did not receive HHMI grants. The control group included institutions that fell just below the funding cutoff in the final stage of HHMI's 1999 grant competition. Like the grantees, they were medical schools, biomedical research institutions, teaching hospitals, or academic health centers, and they had proposed programs of sufficient quality to reach the end of a rigorous review process.

HHMI grantees achieved positive outcomes and measurable impact in each of the eight categories evaluated.
Institutions that did not receive HHMI funding were either unable to implement an outreach program or implemented a less effective program than the average HHMI grantee, based on the same criteria. The outcomes measured included:
  • Ability to secure additional, non-HHMI funding

  • Institutional buy-in measured by gains in dedicated space and staff

  • Enhancement of the program director's career

  • Educational products developed

  • Number of related publications and awards

  • Percentage of programs for which teachers received academic credit

  • Increase in students' science content knowledge

  • Increase in student motivation to study science
Institutional buy-in, for example, refers to the allocation of additional space and staff, both valuable resources. Nearly half of the HHMI grantees surveyed (47 percent) gained space and 44 percent gained staff from their institutions, although HHMI does not require in-kind contributions. Only 11 percent of the control group gained space and 22 percent gained staff. "Buy-in helps ensure the long-term sustainability of the outreach program," said Felix.

Educational products that can be used by large numbers of teachers and students, such as science kits, online labs, and Web sites, were another measure of a grant's impact. Grantees surveyed produced kits used by 1,534 students, online labs used by 1,405 students, and Web sites that generated 3.98 million hits a year. The control group produced kits used by 401 learners and no online labs, and their Web sites received 13,308 hits a year, a small fraction of the traffic recorded by grantee sites.

The outcomes survey, which HHMI welcomes other grant-makers to use, is published as an appendix to the journal article. Felix hopes that grant-makers and grantees alike can learn from the study. "We realize that reliable, quantitative measures of impact are difficult to obtain," she said. "But clearly, program assessment should be planned at the outset, implemented throughout, and inform change long before the program has run its course. Our hope is that by giving grantees a relevant and clear framework for evaluation before their grants begin, they will be able to design and implement meaningful evaluation processes. Program evaluation can be a positive collaboration between grantor and grantee."

For More Information

Jim Keeley
[ 301-215-8858 ]