Starting in Winter 2016, we assessed the results of our interventions by collecting students’ responses to the Grit survey and the Math Perceptions Survey. The survey data was collected once at the beginning and once at the end of the term. We also collected anecdotal evidence from instructor observations, both in class and while assessing the rubric tasks throughout the quarter.
When we met at the end of the first iteration, we had some positive anecdotal evidence based on observations of student behavior, attitudes, performance on items, and examples of student work, but we were unsure of where to begin in analyzing the survey data. The group felt that the cultural interventions were succeeding in moving students toward the types of changes we were hoping to see. We did not expect the Grit Scale Survey results to change much, because we hypothesized that changes to Grit would take longer than one quarter; our goal is, when possible, to test students with this tool over multiple quarters. As for the Perceptions Survey, we thought that scores might be more consistent in higher-level classes, since those students may come in with more realistic expectations, but we expected to see increases overall.
In order to analyze the survey data more effectively, our cohort began to collaborate with two institutional researchers from the participating community college. The initial Grit data appeared consistent with our prediction that affecting Grit would take more than one quarter; the pre and post results were not significantly different. Because the Math Perceptions Survey contained 39 questions, the researchers used factor analysis to group the items into three categories (a sketch of this kind of analysis follows the list):
- Like: Attitude towards Math
- Relevance: Feelings about Relevancy
- Try: Belief that Hard work is Rewarding and Rewarded
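We do not know which software the institutional researchers used; for readers who want to reproduce this style of analysis, the sketch below shows one way to run the paired pre/post comparison and a three-factor solution in Python. The file names, column layout, and choice of the `factor_analyzer` package are our assumptions, not details from the study.

```python
import pandas as pd
from scipy.stats import ttest_rel
from factor_analyzer import FactorAnalyzer

# Hypothetical file names and layout: one row per student, one column
# per survey item, with pre and post rows matched by student.
pre = pd.read_csv("grit_pre.csv")
post = pd.read_csv("grit_post.csv")

# Paired pre/post comparison of total Grit scores.
t, p = ttest_rel(pre.sum(axis=1), post.sum(axis=1))
print(f"Grit pre/post: t = {t:.2f}, p = {p:.3f}")

# Exploratory factor analysis of the 39-item Math Perceptions Survey,
# requesting a three-factor solution like the one the researchers found.
items = pd.read_csv("math_perceptions_pre.csv")
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3"])
print(loadings.round(2))  # inspect which items load on which factor
```

Note that the factor labels come from reading the loadings, not from the software; the Like/Relevance/Try names summarize which items clustered together.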
Initial results of the Math Perceptions Survey were discouraging in that students’ measures appeared to have declined over the term. These results contradicted our observations. For example, Jessica’s initial impression was that the cultural intervention was successful: “Students were actively engaging in problem solving tasks and working together on difficult problems. The classroom was a space for inquiry.” Chris noted, “Students seemed to like doing the grit scale and math perception survey…. The classroom is currently a very positive space and for many students a safe place to express ideas,” but also mentioned that some students still did not feel safe expressing their ideas. Molly commented on students’ self-assessments, noting, “students weren’t very honest in their self-assessment on their Grit Scale. Not sure if this is because they are self-unaware or if they are trying to tell me what they think I want to hear.”
Working in conjunction with the institutional researchers, our cohort reflected on the differences between our observations and the survey results. We began to wonder if students’ perceptions and attitudes might decline early in the term, as they adjusted to the expectations of the content and the course, and then rebound later in the term, albeit not enough to exceed the initial measurement. As a result of the collaboration, Chris and Jessica adjusted their plans for the second iteration in Spring 2016 and administered the survey tools a third time, in the middle of the term, to determine whether such a “dip” was occurring. Molly was unable to make changes at this point because she teaches on a semester system.
Other than administering the surveys three times over the quarter, Chris and Jessica continued with the same intervention strategies in the second quarter. The data from the Grit surveys was consistent with the first set of data collected, but the results of the perceptions survey were less clear this time around. The inconclusiveness of the results led to many questions from both the cohort members and the researchers.
Some of our questions included:
- Are the assessment tools measuring what is intended? For example, actual perseverance vs. students’ feelings about needing to persevere.
- What is the history of the assessment tools? How have they been used, and are they reliable? (One common reliability check is sketched after this list.)
- Are there other factors we should be considering such as attendance, academic history, or other risk factors?
- Are there other questions we should be asking that might be more foundational?
- What are the effects, if any, of giving a survey right after a rubric task or test as opposed to before a rubric task or test?
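As a starting point on the reliability question, here is a minimal sketch of an internal-consistency check (Cronbach’s alpha), assuming the same hypothetical data layout as the earlier sketch. Internal consistency is only one facet of reliability, not the whole answer.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability: rows are students, columns are
    items answered on the same scale."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

items = pd.read_csv("math_perceptions_pre.csv")  # hypothetical file name
print(f"alpha = {cronbach_alpha(items):.2f}")    # ~0.7+ is a common rule of thumb
```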
Our cohort just finished analyzing the data from our third iteration (Molly’s second), compiled from Fall 2016. Because our perceptions data was rather inconclusive, we decided to seek help from some of the project leaders and found a new tool for measuring attitudes. At the start of the fall quarter/semester we gave the new “Self-Knowledge” survey in conjunction with the old “Math Perceptions” survey so that we could compare the two attitude measures; we also continued giving the Grit survey. Moving forward, we will only ask students to complete the Grit and Self-Knowledge surveys. In addition, we decided to look more closely at student demographic data and at how factors such as ethnicity, number of previous math classes taken, level of current class, age, grade average, grade in previous math class, gender, attendance, and grade earned in the course affected the survey results. We still used rubric tasks, but in the fall quarter Jessica added a choice component to the tasks and asked students to write a paragraph explaining why they chose the task they did.
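Our write-up does not record what model was used to relate these factors to the survey results and grades; one plausible approach is ordinary least squares regression, sketched below with entirely hypothetical file and column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged dataset: one row per student with survey scores,
# demographics, and course outcomes. Column names are illustrative only.
df = pd.read_csv("fall2016_merged.csv")

# Do demographic and academic-history factors predict the attitude measures?
attitude_model = smf.ols(
    "self_knowledge_post ~ age + C(gender) + C(ethnicity) "
    "+ prev_math_classes + prev_math_grade + attendance_rate",
    data=df,
).fit()
print(attitude_model.summary())

# Do the attitude/belief measures predict the final grade?
grade_model = smf.ols(
    "final_grade ~ grit_post + self_knowledge_post + attendance_rate",
    data=df,
).fit()
print(grade_model.summary())
```

With classroom-sized samples and this many predictors, individual coefficients can easily fail to reach significance, so inconclusive results of this kind are not surprising.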
Unfortunately, the message from this term was just as muddled, and we were not able to detect any significant effect of students’ attitude or belief measures on their final grades. For a more detailed analysis, see the link below.