Comments from The Hope Survey’s Feedback Survey – June 2010
- We like to use multiple data points to measure student growth. Standardized assessments such as the NWEA Measures of Academic Progress and our state proficiency exam capture only a small chunk of the academic piece--and those tests are biased toward a traditional curriculum. We find that the Hope Survey helps us to quantify important life skills and dispositions that aren't measured elsewhere. Habits of mind and work, a student's belief in themselves--those are the measures that determine success. It gives us a measure of how well we are providing supports for growth in those key areas.
- The Hope Survey enables our district's schools to better understand whether students really feel the academy's autonomy and sense of belonging. The results give us a clear vision of the students' perceptions.
- While standardized tests examine "what" a student knows (and how well they take tests), the Hope Survey reveals "why" a student is or is not successful, allowing educators to truly help students learn better.
- It is helpful to share another measure with district officials--a measure that demonstrates growth in otherwise unassessed areas. This measure gives strength to why we do what we do. It is evidence that the climate in our building is healthy and promotes the growth of learning dispositions.
- We set up our PLPs intentionally to address the issues identified in the Hope Survey.
- Comparison of data by advisory led to thoughtful reflection on our current practices and drove us to make the time necessary to discuss, analyze, design, and implement strategies for professional development, greater pedagogical consistency among advisories, and authentic individualization of Personal Learning Plans (i.e., what is truly useful for students and for advisors). For example, we redesigned certain goal-setting tools within the PLP and created a better plan for how, when, and how often PLPs would be utilized in an advisor-led way.
- Examination of individual scores allowed us to target areas of need for low-performing students; instead of using trial-and-error interventions, we were able to select and design authentic, individualized, practical strategies. Students receive worthwhile help in a much more timely way.
- Hope Survey data forced our staff to truly examine our use of privileges and incentives, a practice we had borrowed from a mentor school. The process reaffirmed our commitment to continuous culture-building, reduction of extrinsic motivators, and development of achievement recognition.
- It has changed the way we assess ourselves as a school. We look to this data rather than just to test scores.
- Definitely. We see that students with the greatest "hope" are our highest achievers. We find that our "most improved" students--those who have a real spike in achievement gains--are also most improved in "hope." The two go hand-in-hand. We've always known that in our gut. It's nice to have data to back it up.
- It is very clear, both anecdotally and in looking at our school-wide scores and statistics, that students who are engaged in learning achieve more. Our students score as well as or better than their district- and state-wide peers on standardized tests--not because they had a teacher telling them what facts to memorize, but because they have voice, choice, drive to learn, and a supportive community.