Sue Daffinrud
LEAD Center
University of Wisconsin-Madison
WHY USE THE SALG?
The SALG instrument can spotlight the elements of a course that best support student learning and those that need improvement. It is a powerful tool: it can be easily individualized, provides instant statistical analysis of the results, and facilitates formative evaluation throughout a course. Instructors often feel that typical classroom evaluations offer poor feedback, and this dissatisfaction is heightened when these instruments are used for promotion decisions. We've found that questions about how well instructors performed their teaching role, or about "the class overall," yield inconclusive results. We believe the SALG addresses all of these shortcomings.
WHAT IS THE SALG?
The SALG is a web-based instrument consisting of statements about the degree of "gain" (on a five-point scale) which students perceive they've made in specific aspects of the class. Instructors can add, delete, or edit questions. The instrument is administered on-line, and typically takes 10-15 minutes. A summary of results is instantly available in both statistical and graphical form.
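As a rough illustration of the customization described here, a minimal sketch follows of how a question grouping might be represented and edited. This is purely hypothetical: the article does not describe the web program's internals, and the two middle scale labels are guesses (the article names only "no gains," "very little," "a great deal," and "not applicable").

```python
# A purely hypothetical sketch of a customizable question set; the
# article does not describe the web program's internals. The labels
# "somewhat" and "a lot" are guesses for the two unnamed middle
# levels of the gain scale.
from dataclasses import dataclass, field

# The five-point gain scale plus the "not applicable" option.
GAIN_SCALE = ("no gains", "very little", "somewhat",
              "a lot", "a great deal", "not applicable")

@dataclass
class QuestionGroup:
    title: str
    questions: list[str] = field(default_factory=list)

group = QuestionGroup("Class and lab activities",
                      ["discussions in class", "group work in class"])
group.questions.append("weekly problem sessions (instructor-added)")  # add a question
group.questions.remove("group work in class")                         # delete a question
```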
WHAT IS INVOLVED?
| Instructor Preparation Time: | Time is needed to clarify and prioritize the learning objectives and related class activities to be evaluated, and to check which existing questions already express these and which need to be edited or added. No instructor time is needed to administer the survey or to collect and analyze the resulting data. |
| Preparing Your Students: | Time should be spent explaining the nature of the instrument to students and showing them how to access and complete it. |
| Class Time: | The instrument can be given in or out of class. The sample instrument takes 10-15 minutes to complete. |
| Disciplines: | Appropriate for all. |
| Class Size: | Appropriate for all. |
| Special Classroom/Technical Requirements: | Students need access to the web. |
| Individual or Group Involvement: | Normally individual, but could also be adapted for use with small groups. |
| Analyzing Results: | Data analysis is performed by the program. Instructors receive summary data, averages, and standard deviations by question or sub-question, as well as cross-tabulations for any pair of questions (a minimal sketch of this analysis follows this table). |
| Other Things to Consider: | To ensure meaningful results, student responses must be guaranteed anonymity. The instrument may be administered as a final classroom evaluation: several chemistry departments have adopted it for this purpose. It may also be used at any point in the semester to guide mid-course corrections to classroom teaching methods. Demographic questions may be included so that responses can be correlated with gender, major, or ethnicity. |
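To make the automated analysis described in the table concrete, here is a minimal sketch of the kind of summary the program reports: per-question averages and standard deviations, plus a cross-tabulation for a pair of questions. This is an illustration only, not the SALG program's actual code; the response coding (1 = "no gains" through 5 = "a great deal," with "not applicable" treated as missing) and the column names are assumptions.

```python
# A minimal sketch of the summary statistics described above, assuming
# gains are coded 1 ("no gains") through 5 ("a great deal") and
# "not applicable" is recorded as missing. Question names are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "Q1A_real_world_focus": [4, 5, 3, 4, None],   # None = "not applicable"
    "Q1D2_class_discussions": [5, 4, 4, 5, 3],
})

# Per-question averages and standard deviations (missing values skipped).
print(responses.mean())
print(responses.std())

# Cross-tabulation for any pair of questions.
print(pd.crosstab(responses["Q1A_real_world_focus"],
                  responses["Q1D2_class_discussions"]))
```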
Description
The Student Assessment of their Learning Gains (SALG) instrument is an on-line instrument that provides information about the specific gains students perceive they have made in any aspects of a course that instructors have identified as important to their learning. The sample instrument is divided into question groupings that address broad aspects of the class or lab, such as students' perceptions of their learning gains from the class and lab activities.
The sample questions in each question grouping can be edited and augmented to reflect any set of learning objectives.
After each section, the student is invited to add write-in comments. (In a forthcoming version of the program, a template will be added to allow instructors to categorize and count these additional comments by type; a sketch of such a tally appears below.) Students complete the instrument on-line, and instructors receive a summary of results in both statistical and graphic form.
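Since the categorize-and-count template is described only as forthcoming, the following is a speculative sketch of the idea: the instructor assigns each write-in comment a type, and the types are tallied. The categories and comments here are invented for illustration.

```python
# A speculative sketch of the forthcoming categorize-and-count template:
# each write-in comment gets an instructor-assigned type, and the types
# are tallied. Categories and comments are invented.
from collections import Counter

coded_comments = [
    ("pacing", "The labs felt rushed."),
    ("group work", "My team helped me understand the lab reports."),
    ("pacing", "More time on the modules, please."),
]

counts = Counter(category for category, _ in coded_comments)
print(counts)  # Counter({'pacing': 2, 'group work': 1})
```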
Q1. HOW MUCH did each of the following aspects of the class HELP YOUR LEARNING? (Each item is rated on the gain scale described above.)

A. The class's focus on answering real-world questions
B. How the class activities, labs, reading, and assignments fit together
C. The pace at which we worked
D. The class and lab activities:
   1. class presentations (including lectures)
   2. discussions in class
   3. group work in class
   4. hands-on class activities
   5. understanding why we were doing each activity/lab
   6. written lab instructions
   7. lab organization
   8. teamwork in labs
   9. lab reports
   *10. specific class activities (list)
   *11. specific labs/activities (list)
   *12. specific lab assignments (list)
The instrument has its origins both in a need expressed by instructors innovating in their classrooms and in the evaluation findings from a five-year, multi-institution initiative to improve learning in undergraduate chemistry through "modular" teaching. Like other instructors implementing classroom changes, modular chemistry instructors seek new forms of assessment that better reflect their revised learning objectives and pedagogy, including more appropriate and accurate tests of student learning and more precise feedback from students on the value to their learning of different aspects of the class.
The basis for a useful form of student feedback to instructors (and their departments) emerged from a student interview study that formed part of the formative evaluation of the modular chemistry consortia. Three hundred and forty-four students were interviewed in a matched sample of modular and more traditionally taught[1] introductory chemistry classes at eight participating institutions. The sample was chosen to represent the range of institutions across the two consortia: two research universities, three liberal arts colleges, one community college, one state comprehensive university, and one Historically Black college. (Two more community colleges and one research university were added to the sample later.)
The focus group interviews were tape-recorded and transcribed verbatim, and the text files were entered into a computer program to assist with the analysis. Student observations were of three types: answers to interviewers' questions, spontaneous observations, and agreements with observations made by other focus group members. There were 12,993 discrete comments of all three types. We analyzed these data in two ways: in terms of student assessment of (1) instructor performance as teachers and (2) their own learning gains. In these analyses, we discovered that although students gave positive or negative ratings to specific aspects of the class or of their teacher's classroom performance (e.g., the quality of the teacher's lectures and demonstrations, or the fairness of their tests), the grand totals for all students' observations on how well instructors performed their teaching role were, for both the modular and the comparative classes, broadly 50 percent positive and 50 percent negative. Thus neither group of instructors got a clear picture of the overall utility of their classroom work when students offered judgments of their performance as professional teachers. This is, arguably, because students lack the knowledge or experience to make such judgments. This finding reflects the common instructor experience that asking students what they "liked" or "valued" about their classes, or how they evaluated their teacher's work (often without offering any criteria for these judgments), tells the teacher little about what students gained from their class.
By contrast, in both the modular and comparative classes, students gave clear indications about what they themselves had "gained" from specific aspects of their classes. When all specifically gain-related student observations were totaled and divided into three types (positive: things gained; negative: things not gained; mixed: qualified assessments of gains), 55 percent of the observations were positive (for both types of class), 33 percent (modular) and 32 percent (comparative) were negative, and 11 percent (modular) and 13 percent (comparative) were "mixed." The strong similarity between the student learning gains evaluation totals for the modular and comparative classes (though not for particular items) is likely to reflect the early stage of development of the modules and the teachers' limited experience in using them at the time of these interviews. The issue here, however, is not the relative merits of modular or more traditional chemistry teaching, but the hypothesis suggested both by our data on reasons for instructor dissatisfaction with traditional course evaluation instruments and by these student interview data: that it is more relevant and productive to ask students about what they have gained from specific aspects of the class than about what they liked or disliked.
The ChemLinks Evaluator, Elaine Seymour, who developed the SALG instrument, first made it available to chemistry consortia participants in the fall of 1997. This first version was tested (originally as a paper-and-pencil instrument) by instructor volunteers in 14 lower-division modular chemistry courses at eight institutions in the spring and fall of 1998. This first part of a two-part test was enabled by a grant from the Exxon Education Foundation. The test gave the ChemLinks evaluation team (at the University of Colorado, Boulder) 14 sets of completed instruments (including students' write-in comments). For comparison, some instructors also provided completed sets of their institutional or departmental classroom evaluations from the same classes.
The original version of the instrument includes questions that express learning objectives of particular importance to the developers and adapters of the chemistry modules. However, a "generic" version of the instrument (that can be adapted for use by instructors in any discipline using any teaching methods) is offered on the web-site. Versions of the instrument created by users in different disciplines are also offered for adaptation and use by other colleagues. The author and web-site developer are considering additions to the site prompted both by their research findings and by feedback from users.
Findings (both about the efficacy of the instrument and about aspects of modular teaching) were offered in technical and substantive reports to the Exxon Foundation (Wiese, Seymour, & Hunter, 1999; Daffinrud, 1999), have been shared with ChemConnections participants, and have been presented at a number of conferences and meetings (including AAHE, June 1999).
A second round of testing, to determine the flexibility of the on-line instrument with instructors and their classes in a variety of science and non-science disciplines, is underway and will include interviews. A comparative analysis of the nature of students' write-in comments offered both in the eight-institution sample of SALG responses and in a sample of more traditional classroom evaluation instruments is near completion. Publication of the findings from the two rounds of tests and the qualitative data analysis is projected for spring 2000, along with their presentation at the American Chemical Society meetings.
[1] It should be noted that the degree to which the matched comparative classes were "traditional" in their pedagogy varied considerably by institutional character. The comparative classes reflected whatever was considered the "normal" way to teach introductory chemistry classes at each institution in the sample.
Elaine Seymour
Douglas Wiese
Anne-Barrie Hunter
Sue Daffinrud
Assessment Purposes
Instructors can discover how much their students see each component of the course as contributing to their learning. This allows instructors to adjust their teaching methods to meet student learning needs more effectively. They also gain a basis on which to discuss specific types of learning difficulty with students. Use of the instrument (especially where it is followed by class discussion of the results) encourages students to reflect upon their own learning processes, and to become aware of what (in their own behavior as well as that of the teacher) enables or deters learning.
Limitations
Students must be guaranteed anonymity: student identification is assigned by the program and is used only to check that all class members have completed the survey. Instructors may add requests for demographic information such as gender, race/ethnicity, and major, and look for correlations across those variables. Correlating student responses with class scores involves additional off-line analysis; students should be explicitly informed if this step is taken.
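As an illustration of the off-line analysis mentioned above, here is a minimal sketch of breaking out perceived gains by a demographic variable and then linking responses to class scores via the program-assigned identifier. Every column name and value is hypothetical, and, as the text notes, students should be told explicitly if responses are linked to scores.

```python
# An off-line sketch of the demographic breakdowns and (optional)
# linkage to class scores described above; every column name and
# value is hypothetical, not part of the SALG program.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],   # program-assigned IDs, not names
    "major": ["chemistry", "biology", "chemistry", "undeclared"],
    "Q1A_gain": [4, 3, 5, 2],
})

# Mean perceived gain broken out by a demographic variable.
print(responses.groupby("major")["Q1A_gain"].mean())

# Linking responses to class scores is a separate off-line step;
# students should be told explicitly if this is done.
scores = pd.DataFrame({"respondent_id": [101, 102, 103, 104],
                       "final_score": [88, 76, 93, 70]})
merged = responses.merge(scores, on="respondent_id")
print(merged["Q1A_gain"].corr(merged["final_score"]))
```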
Instructor Goals
Suggestions for Use
The authors are interested in suggestions from users as to other types of questions or information they would like to collect from students that would be consistent with the overall learning gains format. The option of including gender, ethnicity, major, year in school, and other demographic variables may be offered in a subsequent version of the instrument.
Teachers are advised not to omit questions to which they really want answers out of concern about the instrument's length: even a long survey of 80 items takes no more than 20 minutes to complete.
Step-by-Step Instructions
Emphasize the usefulness of the information the students offer for your teaching, and the seriousness with which their responses and additional comments are taken. (Our research finds a high degree of student cynicism about the value of their feedback to instructors.)
Analysis
The authors are considering the addition of other questions to the sample instrument, additions to the statistical package, and a template for the classification/coding of additional typed-in student responses. User feedback on these and other issues is encouraged.
The scale chosen for the instrument is not a true Likert scale that has a neutral mid-point with two options above and below it. The authors wished to give students the option to distinguish between four possible levels of "gain" from "very little" to "a great deal," as well as a "no gains" and a "not applicable" option. Thus, instructors may regard averages on particular questions that are above 3.0 as "positive," and averages close to 4 or above as indicating a "good" or "very good" level of perceived student gain.
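A minimal sketch of this interpretation rule follows, assuming gains are coded 1 ("no gains") through 5 ("a great deal") and "not applicable" responses are excluded before averaging; the label for averages at or below 3.0 is our own gloss, not the authors'.

```python
# A minimal sketch of the interpretation rule above, assuming gains
# are coded 1 ("no gains") through 5 ("a great deal"); "n/a" stands in
# for the "not applicable" option and is excluded before averaging.
def interpret(responses):
    scored = [r for r in responses if r != "n/a"]
    avg = sum(scored) / len(scored)
    if avg >= 4.0:
        return avg, "good or very good perceived gain"
    if avg > 3.0:
        return avg, "positive"
    return avg, "below the positive threshold"   # our gloss, not the authors'

print(interpret([5, 4, "n/a", 3, 4]))  # (4.0, 'good or very good perceived gain')
```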
Pros and Cons
A fall 1999 faculty tester (in psychology) offered the following comment: "Overall, I think I'm getting a greater volume of analytic, honest, and potentially valuable feedback with this instrument than with any other I've used. I suspect it's partly the medium, and partly the high percentage of tailor-made questions."
Theory and Research
Research has found that effective teachers share several characteristics (Angelo & Cross, 1993; Davis, 1993; Reynolds, 1992; Murray, 1991; Shulman, 1990). Two of these characteristics are especially relevant to this type of instrument.
There is substantial research concluding that classroom instruments based on student perceptions of the efficacy of particular teaching methods can be both valid and reliable (Hinton, 1993). The SALG instrument discussed here is one method for obtaining information of direct utility to the classroom teacher about class content, teaching strategies (and the approach in which they are grounded), student activities, testing and grading procedures, materials and resources, organization, pacing, and workload. This information can be used to adjust aspects of any class so as to increase student learning. It can increase awareness of learning processes in both teacher and students, and form the basis for discussions between teachers and their students, teaching assistants, and colleagues about methods that increase learning.
Links
Access the SALG Instrument
http://www.wcer.wisc.edu/salgains/instructor
References
Angelo, T. A., and Cross, K. P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed. San Francisco: Jossey-Bass.
Daffinrud, S. M. (1999). Work Report for the Student Assessment of Their Learning Gains Web-Site. Report to the Exxon Education Foundation. Madison, WI: LEAD Center, University of Wisconsin-Madison.
Davis, B. G. (1993). Tools for Teaching. San Francisco: Jossey-Bass.
Hinton, H. (1993). Reliability and validity of student evaluations: Testing models versus survey research models. PS: Political Science and Politics (September): 562-569.
Murray, H. G. (1991). Effective teaching behaviors in the college classroom. In J. C. Smart (ed.), Higher Education: Handbook of Theory and Research, Vol. 7 (pp. 135-172). New York: Agathon.
Reynolds, A. (1992). What is competent beginning teaching? A review of the literature. Review of Educational Research, 62: 1-35.
Shulman, L. S. (1990). Aristotle had it right: On knowledge and pedagogy. Occasional Paper No. 4. East Lansing, MI: The Holmes Group.
Wiese, D., Seymour, E., and Hunter, A.-B. (1999). Report on a Panel Testing of the Student Assessment of Their Learning Gains Instrument by Instructors Using Modular Methods to Teach Undergraduate Chemistry. Report to the Exxon Education Foundation. Boulder, CO: Bureau of Sociological Research, University of Colorado.
Selected Bibliography
Braskamp, L., and Ory, J. (1994). Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco: Jossey-Bass.
Centra, J. A. (1973). Effectiveness of student feedback in modifying college instruction. Journal of Educational Psychology, 65(3): 395-401.
Fowler, F. J. (1993). Survey Research Methods. Newbury Park, CA: Sage.
Gamson, Z., and Chickering, A. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39: 5-10.
Gutwill, J., and Seymour, E. (1999). ModularChem and ChemLinks Annual Evaluation Report. Presentation to the ModularChem National Visiting Committee, Berkeley, CA.
Henerson, M. E., Morris, L. L., and Fitz-Gibbon, C. T. (1987). How to Measure Attitudes. Newbury Park, CA: Sage.
National Research Council (1997). Science Teaching Reconsidered: A Handbook. Washington, DC: National Academy Press.
Seymour, E., and Hewitt, N. (1997). Talking About Leaving: Why Undergraduates Leave the Sciences. Boulder, CO: Westview Press.
Shulman, L. S. (1991). Ways of seeing, ways of knowing, ways of teaching, ways of learning about teaching. Journal of Curriculum Studies, 23(5): 393-395.
Theall, M., and Franklin, J. (eds.) (1990). Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning, No. 43. San Francisco: Jossey-Bass.