A course evaluation is a paper or electronic questionnaire that asks for written or selected responses to a series of questions in order to evaluate the instruction of a given course. The term may also refer to the completed survey form or to a summary of the questionnaire responses.
Course evaluations produce feedback that the teacher and school can use to assess the quality of instruction. The process of (a) gathering information about the impact of teaching practice on student learning, (b) analyzing and interpreting this information, and (c) responding to and acting on the results is valuable for several reasons.[1] It enables instructors to see how others interpret their teaching methods. Administrators can also use the information, along with other input, to make summative decisions (e.g., promotion, tenure, salary increases) and formative recommendations (e.g., identifying areas where a faculty member needs to improve).[2] Typically, these evaluations are combined with peer evaluations, supervisor evaluations, and students’ test results to create an overall picture of teaching performance. Course evaluations are implemented in one of two ways: summative or formative.
Course evaluation instruments
Course evaluation instruments generally include variables such as communication skills, organizational skills, enthusiasm, flexibility, attitude toward the student, teacher–student interaction, encouragement of the student, knowledge of the subject, clarity of presentation, course difficulty, fairness of grading and exams, and overall student rating.[3][4]
Summative evaluation
Summative evaluation occurs at the end of a semester, usually a week or two before the last day of class, and is completed by the students currently enrolled in the class. Because course evaluations are confidential and anonymous, students can reflect on a teacher’s instruction without fear of reprisal. The evaluation can be administered either on paper or online. In the paper-based format, the form is typically distributed while the teacher is out of the room, then sealed in an envelope that the teacher does not see until after final grades are submitted. The online version can be identical to the paper version or more detailed, using branching questions to elicit more information from the student. Either way, the feedback is intended to help teachers assess the quality of their instruction, and the information can also be used to evaluate a teacher’s overall effectiveness, particularly for tenure and promotion decisions.[5]
Formative evaluation
Formative evaluation typically occurs during the semester, while changes can still be made, although many institutions also solicit written comments on how to improve the course. Typically, this form of evaluation is performed through peer consultation: other experienced teachers review a colleague’s instruction. The purpose of this evaluation is to provide the teacher with constructive criticism on their teaching. Generally, peer teachers sit in on a few lessons and take notes on the teacher’s methods. The team of peer teachers then meets with the teacher in question and provides useful, non-threatening feedback on their lessons, along with suggestions for improvement that the teacher can choose to implement.
Peer feedback is typically given to the instructor in an open meeting. The peers first reflect on the strengths of the instruction, then move on to areas that need improvement. Finally, the instructor offers their own ideas for improvement and receives feedback on them.
Student feedback can be an important part of formative evaluation. Student evaluations are formative when their purpose is to help faculty members improve and enhance their teaching skills.[5] Teachers may ask their students to complete a written evaluation, participate in ongoing dialogue, or take part in directed discussions during the semester. The ‘Stop, Start, Continue’ format for student feedback has been shown to generate constructive feedback for course improvement.[6]
At the Faculty of Psychology of the University of Vienna, Twitter was used for formative course evaluation.[7]
Criticism of course evaluations as measures of teaching effectiveness
Summative student evaluations of teaching (SETs) have been widely criticized, especially by teachers, for failing to measure teaching effectiveness accurately.[2][8][9][10] Surveys have shown that a majority of teachers believe that raising standards or course content would result in worse SETs, and that students filling out SETs are biased by certain teachers’ personalities, looks, disabilities, gender, and ethnicity.[11] The evidence these critics cite indicates that factors other than effective teaching are more predictive of favorable ratings. Critics argue that, to obtain favorable ratings, teachers have an incentive to pitch content at a level the slowest student can understand, diluting the material.[12] Quantitative fields tend to receive lower student evaluations.[13] Many critics of SETs have suggested that they should not be used in decisions regarding faculty hiring, retention, promotion, and tenure. Some have suggested that using them for such purposes leads to the dumbing down of educational standards. Others have said that the way SETs are typically used at most universities is demeaning to instructors[14] and has a corrupting effect on students’ attitudes toward their teachers and toward higher education in general.[15]
The economics of education literature and the economic education literature are especially critical. For example, Weinberg et al. (2009) find that SET scores in first-year economics courses at Ohio State University are positively related to the grades instructors assign but are unrelated to learning outcomes once grades are controlled for.[16] Others have also found a positive relationship between grades and SET scores, but, unlike Weinberg et al. (2009), they do not directly address the relationship between SET scores and learning outcomes.[17][18] Krautmann and Sander (1999) find that the grades students expect to receive in a course are positively related to SET scores.[19] Isely and Singh (2005) find that the difference between the grades students expect to receive and their cumulative GPA is the relevant variable for obtaining favorable course evaluations.[20] Carrell and West (2010) use a data set from the U.S. Air Force Academy, where students are randomly assigned to course sections (reducing selection problems).[21] They found that calculus students earned higher marks on common course examinations when taught by instructors with high SET scores, but did worse in later courses requiring calculus.[21] The authors discuss several possible explanations for this finding, including the possibility that instructors with higher SET scores concentrated their teaching on the common examinations rather than giving students the deeper understanding needed in later courses.[21][22] Hamermesh and Parker (2005) find that students at the University of Texas at Austin gave attractive instructors higher SET scores than less attractive instructors.[23] However, the authors note that they cannot determine whether attractiveness actually makes an instructor more effective; it may simply be that students pay more attention to attractive instructors.
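The omitted-variable logic behind "unrelated to learning outcomes once grades are controlled for" can be illustrated with a small simulation. This is a minimal sketch on synthetic data, not the authors' dataset or method: SET scores and learning outcomes are both constructed to depend on assigned grades, so SET appears predictive of learning until grades are held fixed.

```python
import numpy as np

# Illustrative simulation (synthetic data): SET scores track the grades
# an instructor assigns, and learning also tracks grades, but SET has
# no direct effect on learning.
rng = np.random.default_rng(0)
n = 2000
grades = rng.normal(3.0, 0.5, n)                       # average grade assigned
set_score = 2.0 + 0.8 * grades + rng.normal(0, 0.3, n) # SET rises with grades
learning = 1.0 + 1.0 * grades + rng.normal(0, 0.3, n)  # learning driven by grades only

def ols(y, *regressors):
    """OLS coefficients for y on an intercept plus the given regressors."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_simple = ols(learning, set_score)              # learning ~ SET
b_controlled = ols(learning, set_score, grades)  # learning ~ SET + grades

print(b_simple[1])      # sizable positive coefficient: SET "predicts" learning
print(b_controlled[1])  # near zero: the association vanishes once grades are controlled
```

The naive regression attributes the grade-driven variation in learning to SET; adding grades as a control removes that channel, mirroring the pattern the cited studies report.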
Meanwhile, in 2017 a lawsuit was filed alleging xenophobic discrimination in course evaluations at the University of Kansas; Peter F. Lake, director of Stetson University’s Center for Excellence in Higher Education Law and Policy, suggested that the incident was not isolated.[24]
The empirical economics literature contrasts sharply with the educational psychology literature, which generally argues that teaching evaluations are a legitimate means of assessing instructors and are unrelated to grade inflation. However, as in the economics literature, researchers outside educational psychology have also reported negative findings on course evaluations. For example, some papers examining online course evaluations have found them to be heavily influenced by the instructor’s attractiveness and willingness to give high grades for very little work.[25][26]
Another criticism of these assessment instruments is that the data they produce are difficult to interpret for purposes of self- or course improvement, given the number of variables that can affect evaluation scores.[27] Separately, paper-based course evaluations can cost a university thousands of dollars over the years, whereas electronic surveys can be administered at minimal cost.
Instructors have also raised the concern that response rates to online course evaluations are lower than those for paper-based in-class evaluations, and that the results may therefore be less valid. The situation, however, is more complex than response rates alone would indicate: student–faculty engagement has been offered as an explanation for the differences, whereas course level, instructor rank, and other variables lacked explanatory power.[28]
See also
- Classroom management
- Educational assessment
- Educational evaluation
- Donald Kirkpatrick, founder of the ‘Four Level Model’ of training evaluation
- Ronald Ferguson (economist), a researcher who studied student evaluation of teachers
- Student behavior report cards
References
- ^ Rahman, K. (2006). Learning from your business lectures: using stepwise regression to understand course evaluation data. Journal of American Academy of Business, Cambridge, 19(2), 272–279.
- ^ a b Dunegan, K. J., & Hrivnak, M. W. (2003). Characteristics of mindless teaching evaluations and the moderating effects of image compatibility. Journal of Management Education, 27(3), 280–303.
- ^ Kim, C., Damewood, E., & Hodge, N. (2000). Professor attitude: its effect on teaching evaluations. Journal of Management Education, 24(4), 458–473.
- ^ Tang, T. L.-P. (1997). Teaching evaluation at a public institution of higher education: factors related to the overall teaching effectiveness. Public Personnel Management, 26(3), 379–391.
- ^ a b Mohanty, G., Gretes, J., Flowers, C., Algozzine, B., & Spooner, F. (2005). Multi-method evaluation of instruction in engineering classes. Journal of Personnel Evaluation in Education, 18(2), 139–151.
- ^ Hoon, A.E., Oliver, E., Szpakowska, K., and Newton, P.M. 2014. Use of the ‘Stop, Start, Continue’ method is associated with the production of constructive qualitative feedback by students in higher education. Assessment and Evaluation in Higher Education. DOI:10.1080/02602938.2014.956282
- ^ Stieger, S., & Burger, C. (2010). Let’s go formative: continuous student ratings with Web 2.0 application Twitter. Cyberpsychology, Behavior, and Social Networking, 13(2), 163–167.
- ^ Emery, C. R., Kramer, T. R., & Tian, R.G. (2003). Return to academic standards: a critique of student evaluations of teaching effectiveness. Archived 2009-09-19 at the Wayback Machine Quality Assurance in Education, 11(1), 37–46. Retrieved 2011-06-16.
- ^ Merritt, D. (2008). Bias, the brain, and student evaluations of teaching. Archived 2008-10-08 at the Wayback Machine St. John’s Law Review, 82, 235–287. Retrieved 2011-06-16.
- ^ J. Scott Armstrong (2012). “Natural Learning in Higher Education”. Encyclopedia of the Sciences of Learning.
- ^ Birnbaum, M. H. (1999). A survey of faculty opinions concerning student evaluations of teaching. Archived 2016-03-04 at the Wayback Machine The Senate Forum (California State University, Fullerton), 14(1), 19–22. Longer version with references. Retrieved 2011-06-16.
- ^ J. Scott Armstrong (2012). “Natural Learning in Higher Education”. Encyclopedia of the Sciences of Learning. Archived from the original on 2012-10-28.
- ^ Uttl, Bob, and Dylan Smibert. “Student evaluations of teaching: teaching quantitative courses can be hazardous to one’s career.” PeerJ 5 (2017): e3299.
- ^ Gray, M., & Bergmann, B. R. (September–October 2003). “Student teaching evaluations: inaccurate, demeaning, misused”, Academe Online, 89(5). Retrieved 2011-06-16.
- ^ Platt, M. (1993). What student evaluations teach. Perspectives on Political Science, 22(1), 29–40. Retrieved 2011-06-16.
- ^ Weinberg, B. A., Hashimoto, M., & Fleisher, B. M. (2009). Evaluating teaching in higher education. Journal of Economic Education, 40(3), 227–261.
- ^ McPherson, M. A., Jewell, R. T., & Kim, M. (2009). What determines student evaluation scores? A random effects analysis of undergraduate economics classes. Eastern Economic Journal, 35(1), 37–51.
- ^ Langbein, L. (2008). Management by results: student evaluation of faculty teaching and the mis-measurement of performance. Economics of Education Review, 27(4), 417–428.
- ^ Krautmann, A. C., & Sander, W. (1999). Grades and student evaluations of teachers. Economics of Education Review, 18(1), 59–63.
- ^ Isely, P., & Singh, H. (2005). Do higher grades lead to favorable student evaluations? Journal of Economic Education, 36(1), 29–42.
- ^ a b c Carrell, S. E., & West, J. E. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3), 409–432. Retrieved 2011-06-16.
- ^ J. Scott Armstrong (2012). “Natural Learning in Higher Education” (PDF). Encyclopedia of the Sciences of Learning. Archived from the original (PDF) on 2011-11-05. Retrieved 2012-03-26.
- ^ Hamermesh, D. S., & Parker, A. (2005). Beauty in the classroom: instructors’ pulchritude and putative pedagogical productivity. Economics of Education Review, 24(4), 369–376.
- ^ Schmidt, Peter (January 13, 2017). “When Students’ Prejudices Taint Reviews of Instructors”. The Chronicle of Higher Education. Retrieved January 13, 2017.
- ^ Felton, J., Mitchell, J., & Stinson, M. (2004a). Web-based student evaluations of professors: the relations between perceived quality, easiness and sexiness. Assessment & Evaluation in Higher Education, 29(1), 91–108.
- ^ Felton, J., Mitchell, J., & Stinson, M. (2004b). Cultural differences in student evaluations of professors. Archived 2010-07-24 at the Wayback Machine Journal of the Academy of Business Education, Proceedings. Retrieved 2011-06-16.
- ^ Marks, P. (2012). Silent Partners: student course evaluations and the construction of pedagogical worlds. Archived 2014-01-19 at archive.today. Canadian Journal for Studies in Discourse and Writing, 24(1).
- ^ Anderson, J., Brown, G., & Spaeth, S. (Aug/Sept 2006). Online student evaluations and response rates reconsidered. Archived 2011-08-18 at the Wayback Machine, Innovate (Fischler School of Education and Human Services, Nova Southeastern University), 2(6). Retrieved 2011-06-16.
External links
- Weinstock, R. B. (2004). Quality control in the course evaluation process.