Creating Better Student Feedback

Charlie G. showed us his methods for keeping extensive records of student feedback. He recommends that individual faculty members solicit regular student feedback in their courses to supplement the summative evaluations completed at the end of the term. Relying exclusively on summative evaluations is problematic: only a small proportion of students participate, end-of-semester feedback comes too late to help students currently in the course, and fundamental flaws in teaching are not revealed until after the course is completed. At the same time, summative evaluations play an important role in informing the tenure and promotion process. By taking an active role and performing one’s own student evaluations, a faculty member can i) respond to immediate feedback expressed by the class, and ii) build a database of student reports that supplements the reports produced by summative evaluation.

THE APPROACH

Charlie G. has systematically used his feedback process in graduate-level statistics classes of roughly 20 students. The small class size allows him to track each individual’s feedback pattern anonymously on a per-session basis. To carry out the feedback process, he distributes two feedback forms after every 3-hour teaching session. Once the students understand what is expected of them, completing the forms takes about 5 minutes of class time. His approach generates a large quantity of qualitative and quantitative data, so an added benefit is that he can use the data set as a teaching tool for the class, demonstrating the application of statistical principles to a ‘real life’ situation.

STEP 1: QUALITATIVE FORM

The first form asks four qualitative questions (see Fig. 1). Once the forms are filled out, Charlie G. creates a typed summary of all the answers, which is returned to the students at the beginning of the following class. By participating in the feedback cycle, individual learners can:

1) Observe how other students perceive and write about the learning experience, which also reveals the textual variation in how the same class was described.

2) Judge whether they are identifying the same key ideas as their peers, thus providing an opportunity to reflect on their own learning.

3) Express special interest in a topic so that the instructor can recommend additional resources – Charlie G. keeps an extensive database for this purpose.

4) Reveal areas of weakness so that the instructor has an opportunity to clarify the topic. This becomes the starting point of the next class, and the class does not proceed to new material until the point is clarified.

STEP 2: QUANTITATIVE DATA

The second form distributed to students is a quantitative data form (see Fig. 2) in which teaching variables are rated from 0 (unacceptable) to 10 (outstanding). The teaching variables are determined by the students at the start of the course and may include items such as course objectives being met, clarity of instruction, and appropriate workload. Each form is anonymously coded to a specific student so that individual responses can be tracked over time. After the students complete the form, Charlie enters all the scores into a Minitab database. He then sums the scores and creates an overall index that can be used at any time to generate summary reports. In this way he builds large data sets that can be analyzed with statistical approaches to illuminate patterns of student satisfaction in various areas of instruction.
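The tallying step can be sketched in code. The following Python snippet is a minimal illustration, not Charlie’s actual Minitab workflow; the record layout, the variable names, and the simple sum-based index are assumptions made for demonstration purposes only.

```python
from statistics import mean

# Each record: (anonymous student code, session number, {teaching variable: score 0-10}).
# Student codes, sessions, and scores below are hypothetical examples.
records = [
    ("S01", 1, {"objectives_met": 8, "clarity": 7, "workload": 6}),
    ("S02", 1, {"objectives_met": 9, "clarity": 8, "workload": 7}),
    ("S01", 2, {"objectives_met": 7, "clarity": 9, "workload": 8}),
    ("S02", 2, {"objectives_met": 8, "clarity": 8, "workload": 6}),
]

def overall_index(records):
    """Sum every score on every form into a single satisfaction index."""
    return sum(score for _, _, ratings in records for score in ratings.values())

def per_variable_means(records):
    """Average score for each teaching variable across all forms."""
    by_var = {}
    for _, _, ratings in records:
        for var, score in ratings.items():
            by_var.setdefault(var, []).append(score)
    return {var: mean(scores) for var, scores in by_var.items()}

print(overall_index(records))      # total of all scores: 91
print(per_variable_means(records)) # mean rating per teaching variable
```

Because each form is coded to a student, the same records could also be grouped by student code or by session to examine how satisfaction shifts over the term.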

MODIFYING THE APPROACH FOR LARGE CLASSES

In a small class setting, Charlie completes the feedback entry in one hour per session. The logistics of large classes, however, add an administrative burden, and he suggests the following modifications to his system:

DISCUSSION

Charlie’s method is systematic, intensive, and requires considerable commitment on the part of the instructor. This prompted a discussion on the logistics of applying this approach to large class situations and the intrinsic value of the feedback. For example, Barb B. pointed out that the approach evaluates student perception of instruction, as opposed to effective student learning. She suggested that including teaching variables in the quantitative form such as appropriate assessment strategies, access to the instructor’s time, or quality of the materials would better reveal learning. Charlie emphasized that this is a student-driven process and that he did not want to impose his own teaching variables on the students; his course design includes other assessments that test depth of learning.

Broader questions arose with respect to the evaluation process from the perspective of the whole faculty and its individual members. It is clear that many faculty members are unsatisfied with the default summative evaluation approach, suggesting that an alternative should be considered; indeed, many faculty members already conduct midterm evaluations in their classes. Ideally, a new approach would be widely adopted by individual faculty members so that teaching approaches could be compared consistently. However, this would require a substantial investment of time, and it is not clear which approach would work well for most people. In the interim, Charlie encourages individuals to take on systematic evaluation activities to inform their teaching and to generate supporting evidence for career advancement opportunities.

Fig. 1: Sample of a qualitative evaluation form.

Fig. 2: Sample of a quantitative evaluation form.