Center for the Study of Systemic Reform
   in Milwaukee Public Schools

 

INCLUDING ALL STUDENTS IN ACCOUNTABILITY ASSESSMENT

Jeffery P. Braden

Stephanie W. Cawthon

Wisconsin Center for Education Research

University of Wisconsin-Madison

Milwaukee Public Schools (MPS) has joined other school districts across the country in declaring that student achievement is its primary mission. MPS also embraces the goal of high academic achievement for all MPS students. The dual foci of excellence (high standards for academic performance) and equity (all students performing at high levels) heighten interest in student assessment outcomes. That is, assessment results indicate MPS's success in achieving academic excellence and equity.

Assumptions Behind Accountability Systems

It follows that assessment outcomes are an integral part of MPS district reforms. Assessment outcomes are not just a by-product of the system, but are an engine to drive system reform. Changes that improve test scores are considered good; those that do not are considered off-the-mark; schools with high test scores are considered effective, and schools with low student test scores are identified as ineffective. The use of test scores as indicators of school and system success requires many assumptions, some of which have not been demonstrated (Haertel, 1999). For example, test scores indicate academic excellence only if the test truly taps the academic and cognitive complexity demanded for student learning. Likewise, test scores indicate academic equity only if all students are adequately represented in the sample tested.

Adequate representation does not require universal testing (i.e., testing every student), but it does require nonsystematic exclusions from testing. If students are excluded in a systematic, nonrandom manner from the testing pool, results will not represent the student population. For example, the National Assessment of Educational Progress (NAEP) routinely draws random samples that are presumed to represent state and national progress. However, test data drawn from nonrandom samples inevitably misrepresent the effectiveness of a given classroom, school, or district service delivery. In part to ensure adequate representation, and in part to provide data to every student and family, Wisconsin and MPS adopted testing policies requiring universal participation in state and district assessments (i.e., every student enrolled in a grade targeted for assessment is included in tests).

Students may be missing from assessments (MIAs) by design (i.e., as part of an intentional effort) or by default (i.e., without deliberate intent to exclude). MIAs by design include students who are excluded from assessments through specified exemptions to universal assessment policies. For example, state and district policies require MPS educators to exclude any student whose parent requests exclusion, who speaks English with limited proficiency (a 3 or less on a 5-point scale), or whose disability plan requires alternate assessment. MIAs by default are those students who do not participate in assessment even though they should. That is, some students are not captured in state or district assessments, although they are not formally excluded. MIAs by default are influenced by student attendance, district test practices (e.g., opportunities for make-up testing), and other factors (e.g., parental expectations regarding participation, consequences to students related to participation).

MIAs affect conclusions regarding school and district success. If a significant proportion of any group is MIA, the school or district will be unable to judge its success in achieving educational equity. Exclusion of any student group endangers the equity of educational opportunities for that group. Said another way, "Students who aren't counted don't count." Inclusion in testing and assessment programs means inclusion in reform efforts driven by educational outcomes. For students with exceptional educational needs (i.e., students with disabilities and students with limited English proficiency), participation in accountability assessment is a prerequisite for motivating school- and district-wide reforms.

The SSR-MPS Study

As part of the Study of Systemic Reform project in the Milwaukee Public School district, we examined the participation rates of students on large-scale assessments. MPS tests outcomes at various grades, using on-demand performance assessments, exhibitions, and projects. However, MPS does not currently have the capacity to generate participation data for its district assessments. The only assessment data that are sufficiently comprehensive to estimate participation and MIAs are state test data. The state test is based on the TerraNova, and is known as the Wisconsin Knowledge and Concepts Exam (WKCE). The WKCE assesses Reading, Language Arts, Mathematics, Social Studies, and Science at the 4th, 8th, and 10th grades.

Policy analysis. Because students can be MIA either by design or by default, we examined district testing policies to determine whether policies might inappropriately encourage (or discourage) exclusion (or participation). We compared MPS policies to guidelines suggested by the National Center on Educational Outcomes (NCEO) (Seyfarth, Ysseldyke, & Thurlow, 1998). MPS policies were taken from policies posted on the MPS web site. Our comparison is shown in Table 1 on the following page.

Table 1

Comparison of NCEO Participation Criteria with MPS Participation Policies
(✓ = MPS policy matches the guideline, ? = MPS policy is unclear)

NCEO marker: All students, including SWDs, are to participate in state and district assessments.
MPS policy: ✓ "All students should be assessed in the State and District Wide Assessment Program."

NCEO marker: The decision about participation is made by a person (or a group of people) who knows the student.
MPS policy: ✓ It is implied that all decisions about assessment are made by an IEP team or the student's teacher.

NCEO marker: The decision about participation is based on the student's current level of functioning and learning characteristics.
MPS policy: ? The decision-making process itself is not clearly modeled within the district's stated guidelines.

NCEO marker: A form is used that lists the variables to consider in making participation decisions.
MPS policy: ? A list of DPI questions is included on the website, but it is unclear how the questions are used in planning.

NCEO marker: Reason(s) for exclusion is documented.
MPS policy: ✓ "Teachers must document testing accommodations or exemption in a student's IEP."

NCEO marker: A student must participate in an assessment if the student receives any instruction on the content assessed, regardless of where instruction occurs.
MPS policy: ✓ "Will the student be exposed to material similar to the material on the test?"

NCEO marker: Decision about participation is not based on the program setting.
MPS policy: ✓ "Decisions will be made on a case-by-case basis."

NCEO marker: Decision about participation is not based on the category of disability.
MPS policy: ✓ "Decisions will be made on a case-by-case basis."

NCEO marker: Decision about participation is not based on the percentage of time in the mainstream classroom.
MPS policy: ✓ "Decisions will be made on a case-by-case basis."

NCEO marker: Decision guidelines allow for some students to participate in an alternate assessment or, when appropriate, in part of an assessment or assessment procedure.
MPS policy: ✓ "A student can take all sections or a combination of specific sections (i.e., reading, math, etc.) of the State or District Assessments."

NCEO marker: Decision guidelines recognize that only a small percentage of students with disabilities need to participate in an alternate assessment (e.g., those with severe disabilities, < 1% of all students).
MPS policy: ✓ "On rare occasions it will be necessary to exclude a student from sections of an assessment or the assessment as a whole."

NCEO marker: Parents understand the state/district accountability system.
MPS policy: ? Parents are included in the planning meeting where assessment decisions are made.

NCEO marker: Parents understand participation options and the implications of their child not being included in the assessment.
MPS policy: ? Parents are included in the planning meeting where assessment decisions are made.

NCEO marker: Decisions about participation are documented on the student's IEP or attached to the IEP.
MPS policy: ✓ "Teachers must document testing accommodations or exemption in a student's IEP."

When we judged that MPS policies matched NCEO guidelines, we indicated agreement with a check mark (✓). When we judged MPS policies were ambiguous, we indicated our judgment with a question mark (?). We found no points on which NCEO guidelines and MPS policies disagreed.

Our analysis suggests MPS policies are inclusive at almost every point. The district's policies show a consistent commitment to inclusion of students with disabilities (SWDs) and students with limited English proficiency (LEP). MPS policies are congruent with NCEO guidelines, federal mandates (i.e., IDEA, 1997), and the goal of universal student participation in large-scale assessments. Only rarely are students to be excluded from assessment, and exclusion is recommended only in situations that are legally or ethically appropriate.

However, MPS guidelines often lack explicit guidance for making assessment inclusion decisions. Criteria guiding accommodations and exclusion are less specific than the general participation policy. Parental involvement is explicitly noted, but the mechanisms for ensuring that parents understand IEP criteria and their child's inclusion in or exclusion from assessment are not elaborated. Furthermore, for three of the criteria, MPS states only that "decisions will be made on a case-by-case basis," without providing guidelines for making those case-by-case decisions. Still, it is clear that MPS policies generally conform to NCEO guidelines and that the district intends to encourage students with exceptional educational needs to participate in assessments.

Participation analyses. A summary of the participation rates for MPS in the state tests is shown in the flowchart in Figure 1. The flowchart shows very small percentages of students excluded by design. That is, across all grades, the proportion of students excluded by parental request, limited English proficiency, or because of Individualized Education Plans (IEPs) or Individualized Assessment Plans (IAPs) is less than 5%.

(* All results are for the Spring 1999 Reading Test. Participation rates are nearly identical across other academic disciplines.)

Figure 1. Assessment participation flowchart.

In contrast, the largest category of students not participating in assessments is MIA by default. These students have not been formally excluded through the acceptable means of parent request, English proficiency below 4, or a disability accommodation. The proportion of students not tested by default increases with grade level. Although only 6% of the fourth-grade population remains unaccounted for, more than one quarter (26%) of tenth graders are MIA.

Participation equity. We next asked, "Are students with disabilities or limited English proficiency as likely to be MIA as general education students?" An analysis of MIA rates for students with disabilities, students with limited English proficiency, and regular education students is shown in Figure 2. MIA rates for SWDs and students with LEP are higher than MIA rates for students without exceptional educational needs at every grade level. Students who are MIA are not randomly omitted from assessments. Instead, MIAs substantially over-represent students with disabilities and limited English proficiency. This proportion is alarming at the secondary level, where the majority of 10th grade students with exceptional educational needs are MIA by default (i.e., they are missing from assessment even though they are targeted for inclusion).

 

Figure 2. Students Missing in Assessments (MIA).

Discussion

The rates of MIAs are unusually high, especially when viewed within the context of strong policies that encourage universal inclusion. It is clear that MIA rates are not high because of intentional exclusion. That is, MIAs by design are low; they comprise less than 1% of the student body. In contrast, MIAs by default are high, comprising a substantial portion of general education students and a very large proportion of students with exceptional educational needs.

We considered five reasons why MIA rates might be high for MPS. These are:

    1. Measurement error.
    2. Institutional/logistical factors.
    3. Motivation.
    4. Human resources.
    5. Priority.

Measurement error. Participation rates are calculated by dividing the number of students who took the test (participants) by the number enrolled in a grade (total enrollment). We concluded there were few errors in estimates of the number of students taking the test (i.e., we did not feel participants were under-estimated). However, we wondered whether enrollment estimates were inflated, thus leading to depressed estimates of participation rates. This could occur when emigration out of the district during the school year (i.e., after enrollments are calculated in mid-September) exceeds immigration into the district during the school year. Preliminary analyses suggested that, although MPS shows substantial mobility rates for students, there is no evidence that emigration exceeds immigration—that is, students may change schools, leave, or enter the district after enrollments are calculated, but these changes do not systematically overestimate enrollment nor underestimate participation. Thus, we suspect MIA rates accurately reflect participation in assessments.
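The participation-rate arithmetic described above can be sketched as follows. Note that all counts here are hypothetical illustrations chosen to echo the magnitudes reported for tenth grade, not actual MPS figures.

```python
# Sketch of the participation-rate and MIA-by-default calculations
# described in the text. All counts are hypothetical, not MPS data.

def participation_rate(tested, enrolled):
    """Students who took the test divided by total grade enrollment."""
    return tested / enrolled

def mia_by_default_rate(tested, excluded_by_design, enrolled):
    """Students neither tested nor formally excluded, as a share of enrollment."""
    return (enrolled - tested - excluded_by_design) / enrolled

# Hypothetical 10th-grade counts
enrolled = 5000           # mid-September enrollment count
excluded_by_design = 150  # parent request, LEP, or IEP/IAP exemption (3%)
tested = 3550             # students who actually took the test

print(f"Participation:  {participation_rate(tested, enrolled):.0%}")
print(f"MIA by default: {mia_by_default_rate(tested, excluded_by_design, enrolled):.0%}")
```

Under this framing, the measurement-error question is whether `enrolled` is inflated by students who left the district after the September count; an inflated denominator would depress the participation estimate even if everyone present was tested.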

Institutional/logistical factors. Given that MIA rates reflect a real problem with assessment equity, we wondered what institutional or logistical factors might lead to high MIA rates. We suspected the following might contribute to MIAs:

    1. Student absenteeism during the state testing window (i.e., if students are not in school, they cannot be tested).
    2. School procedures/capacity for capturing absent students (e.g., a student may be absent on one day but present on another, yet there may be no effort or mechanism for administering the test to a student who missed it).
    3. The interaction of absenteeism, assessment procedures/capacity, and exceptional educational needs (e.g., students with exceptional educational needs may have higher absenteeism rates, and their educational needs may demand more flexibility and support than those of their general education peers).

Unfortunately, we do not have adequate data to describe absenteeism, nor do we understand the procedures and capacity of school sites to capture students who miss one or more test days. Previous MPS "audits" of school sites with high rates of MIAs suggest extremely limited procedures, or capacity, to capture MIAs at any point during the testing window (e.g., absentees are not systematically tracked, and many sites lack any procedures for making up missed tests). Another practice that may inadvertently encourage exclusion is MPS's reporting of assessment outcomes (proficiencies) as a proportion of those who took the test, rather than as a proportion of those who were enrolled. This reporting practice discourages inclusion of low-performing students and artificially inflates estimates of academic excellence. We hope to collaborate with MPS to better understand logistical and procedural issues related to MIAs, and how its policies in other arenas (e.g., reporting outcomes) may inadvertently encourage MIAs.
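The effect of the reporting denominator can be illustrated with a small hypothetical example (the counts below are invented for illustration, not MPS data): if MIAs tend to be lower-performing students, dividing by test-takers rather than by enrollment makes the same number of proficient students look like a larger share.

```python
# Hypothetical illustration of how the choice of denominator changes
# reported proficiency. All counts are invented, not MPS data.

enrolled = 100    # students enrolled in the tested grade
tested = 80       # students who actually took the test (20 are MIA)
proficient = 48   # test-takers scoring proficient

rate_of_tested = proficient / tested      # denominator: test-takers
rate_of_enrolled = proficient / enrolled  # denominator: all enrolled

print(f"Proficient among tested:   {rate_of_tested:.0%}")
print(f"Proficient among enrolled: {rate_of_enrolled:.0%}")
```

The gap between the two figures grows with the MIA rate, which is one way a reporting convention can quietly reward schools for leaving low-performing students untested.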

Motivation. Educators’, parents’, and students’ attitudes towards assessments may influence test participation. If they do not see a benefit of these outcome data, either for themselves personally or for the district as a whole, they may have little motivation to increase assessment participation. Informal contact with MPS educators suggests limited enthusiasm for assessment, especially for large-scale tests, but we do not have formal and systematic research to support this impression. Interestingly, middle schools have most enthusiastically embraced assessments; they also show the lowest MIA rates.

Human resources. MPS educators may not have the knowledge and skills to understand assessments, or to effectively include students in assessments. Data regarding participation with accommodations are not available: neither the state nor MPS has the capacity to track the use and effectiveness of test accommodations. Data gathered from other research in MPS and surrounding school districts also suggest very limited knowledge of the state test (WKCE) content, and even less understanding of WKCE results. These data collectively suggest significant deficits in assessment literacy (Stiggins, 1991, 1995), a problem that undermines effective participation and inclusion in testing programs (Plake, 1993; Plake & Impara, 1993).

Priority. Finally, the high number of MIAs may be due to an emphasis on procedural and regulatory compliance, rather than on assessment inclusion and outcomes. MPS critics and state auditors have cited MPS for its failure to comply with special education regulations. These same critics have not emphasized assessment participation and outcomes. Understandably, MPS has made compliance with special education procedures its top priority. The number of students formally excluded from assessments is reasonable at all grade levels, indicating a judicious use of alternate assessments by MPS educators. However, MPS may resist shifting emphasis from regulatory compliance to assessment inclusion until it can be confident that it will not be cited for regulatory and procedural noncompliance.

Conclusions

MPS embraces goals for academic excellence and academic equity. It lacks the capacity for tracking its success in achieving excellence and equity within its district assessment system (i.e., it does not have the ability to know who participated and who did not). However, MPS has sound policies for excluding and excusing students from district and state assessments. State test data (which are adequate for estimating participation) show few students are formally excluded from assessments (i.e., MIAs by design are low, perhaps excessively so). However, MIAs by default are high, and increase by grade. Assuming that assessment outcomes (excellence) and participation (equity) are essential for systemic reform, large MIA rates among eligible students are alarming. MPS has high MIA rates for all but regular education students in the 4th grade. Furthermore, MIA rates for students with disabilities and limited English proficiency are much higher than for other students. These MIA rates undermine MPS goals for academic equity, and consequently distort estimates of academic excellence. We hope to work with MPS to develop its capacity for including students in assessments, so that it can better achieve its goals for academic excellence and equity among all MPS students.

References

Haertel, E. H. (1999). Validity arguments for high-stakes testing: In search of the evidence. Educational Measurement: Issues and Practice, 18(4), 5-9.

Plake, B. S. (1993). Teacher assessment literacy: Teachers' competencies in the educational assessment of students. Midwestern Educational Researcher, 6 (1), 21-27.

Plake, B. S., & Impara, J. C. (1993, April). Teacher assessment literacy: Development of training modules. Report on an NCME-based Kellogg Foundation grant (ERIC Document No. 358 131). Paper presented at the Annual Meeting of the National Council on Measurement in Education, Atlanta, GA.

Seyfarth, A., Ysseldyke, J., & Thurlow, M. (1998). An analysis of perceived desirability, feasibility, and actual use of specific criteria for large-scale assessment and accountability systems (Technical Report 21). Minneapolis: National Center on Educational Outcomes.

Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72 (7), 534-539.

Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77 (3), 238-245.