This project focuses on how fourth- and eighth-grade students with and without disabilities perform on three types of math and science assessments: on-demand knowledge-and-concepts tests that require a student to select a response (i.e., multiple-choice format), on-demand performance assessment tasks that require a student to construct a response (i.e., open-ended written format), and teacher-constructed classroom performance tasks. The researchers are also interested in students' attitudes toward the three types of assessments. In addition, the researchers have been documenting the specific test-administration and procedural accommodations that teachers make for students with disabilities in order to begin understanding what effect such accommodations may have on the validity of an assessment.
The team recently completed Year 2 data collection and analyses for 250 students from nine school districts in Wisconsin and further developed the Assessment Accommodation Checklist (AAC). The AAC was designed to describe the wide array of accommodations that teachers use with students during the various assessments.
Data from Year 1 (256 students) indicated that performance assessments are difficult for students both with and without disabilities; however, as on the knowledge-and-concepts test, students without disabilities scored at least three-quarters of a standard deviation higher than their peers with disabilities. Elliott and Kratochwill learned that both groups of students liked completing the performance assessment tasks slightly better than the multiple-choice items and thought the performance assessments were more revealing of what they actually knew. The accommodations teachers most frequently reported using to support students with disabilities were extending the time to complete the assessment and reading instructions aloud to students.
During Year 2 of the project, 24 of the original 33 participating teachers continued. These teachers and 8 new participants administered a statewide achievement battery to students in October 1996 and followed up with math and science performance assessments during January and February 1997. The teachers then piloted their own performance tasks with the same students in May 1997. Finally, in late May and early June 1997, the researchers interviewed the students and teachers to determine their attitudes toward the different assessment methods. The researchers recommend that results from performance assessments be considered supplemental to traditional assessment results.