Teaching Internship – Week 4
Posted: September 23, 2011
This week, Dr. M started class by illustrating what he called the “research timeline.”
Question → Literature Review → Method → Results → Interpretation → Conclusions
He used the illustration as a reminder that research is a process, and he emphasized that the researcher should always go back to the literature as a guide. I liked the message and the reminder… both for myself and for the students. It made me realize that there are certain concepts it pays to go back and re-emphasize each week, so that students retain the big picture while they are learning the details.
He then moved into an introduction to quantitative nonexperimental designs. He began with an Ed Week article that compared test scores for students in urban vs. non-urban charter schools. This opened up the topic of percentile distributions on a normal curve. He drew two normal curves side-by-side on the whiteboard, but he drew them vertically, rather than horizontally. This was good because it made it easy to see the percentile ranks going up, and he could compare scores from one group to another more effectively. I had never seen this done before, but I thought it was a nice visual.
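To make the percentile comparison concrete, here is a small sketch of the same idea in Python (my own illustration, not something we used in class; the two score distributions and the cut score are invented for the example):

```python
from statistics import NormalDist

# Hypothetical score distributions for the two groups in the
# Ed Week comparison (means and SDs are made up for illustration).
urban = NormalDist(mu=62, sigma=10)
non_urban = NormalDist(mu=70, sigma=8)

score = 75  # the same raw score, located within each group's curve

# Percentile rank = proportion of the normal curve below the score.
urban_pct = urban.cdf(score) * 100
non_urban_pct = non_urban.cdf(score) * 100

print(f"Score {score}: {urban_pct:.0f}th percentile (urban), "
      f"{non_urban_pct:.0f}th percentile (non-urban)")
```

The same raw score lands at a different percentile rank in each group, which is exactly what the two side-by-side curves on the whiteboard made visible.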
We then began to talk about two types of hypotheses: (a) research hypotheses [Ha or H1] and (b) null hypotheses [H0]. With respect to research hypotheses, he talked about how you can distinguish experimental from nonexperimental studies by the wording of their research hypotheses:
- effect = experimental
- relationship = nonexperimental
He also clarified that most research hypotheses are directional. With respect to null hypotheses, he pointed out that these are really for the statistical tests, and they aren’t usually stated in articles.
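To make the directional/null distinction concrete, here is a small sketch of my own (not from class, and with invented numbers) using a one-tailed z-test, where the directional research hypothesis predicts that the sample scores higher than the population:

```python
from statistics import NormalDist

# Directional research hypothesis (Ha): the group scores HIGHER
# than the population mean. Null hypothesis (H0): no difference.
# All numbers are invented for illustration.
pop_mean, pop_sd = 100, 15
sample_mean, n = 106, 36

# z statistic for a one-sample z-test
z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)

# One-tailed p-value matches the directional Ha (upper tail);
# a nondirectional Ha would use the two-tailed value instead.
p_one_tailed = 1 - NormalDist().cdf(z)
p_two_tailed = 2 * p_one_tailed

print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.4f}")
```

The null hypothesis only appears implicitly here, as the "no difference" value the test statistic is computed against, which fits Dr. M's point that nulls exist for the statistical test rather than for the write-up.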
The last part of class was spent asking students to formulate their own research hypotheses for their projects. Then students volunteered to share their hypotheses, which we critiqued as a group. I think it helped students understand hypotheses better when they were able to think of their own research questions and apply the concept directly.
During the second half of the class, I gave my first lesson. It was a hands-on introduction to SPSS, built around a step-by-step PowerPoint lesson that had been prepared by a former doctoral student. Students had been provided with the PowerPoint through Blackboard, along with instructions for installing the software ahead of time. They had been asked to bring laptops to class, and we had laptops available for those students without one of their own. Although I felt very prepared to teach, the lesson did not go well (from my perspective). There were three specific stumbling blocks:
- Many students had difficulty installing the software and were unable to use their own computers.
- The laptops owned by the School of Ed had not been updated after their version of SPSS expired.
Both of these issues caused frustration and made it difficult to get started. Several students had to share a computer with their neighbor, and they were unable to do the hands-on exercises themselves.
- The complete lesson was too long to cover in half of a class period, so the inferential statistics covered at the end got short shrift.
I sent out a survey to the students a few days after the class to get their feedback on the content of the lesson and my presentation. Nine (out of 15) students responded. Overall, my impressions were confirmed by the survey responses. There was a great deal of frustration with the installation of SPSS and a feeling that the last part of the lesson was rushed. Dr. M didn't seem bothered by the frustrations, but I didn't think it was the best start to my teaching career! In hindsight, I think I might structure the lesson differently in the future. For example, I might ask students to install the software at home and go through the exercises on their own… maybe one exercise at a time for a few weeks in a row.