How Many Schools Made AYP?

This new, multiple-measures system provides a fuller picture of how districts and schools are addressing the needs of their students, while also identifying specific strengths and areas in need of improvement. The new accountability and continuous improvement system was implemented through an online tool known as the California School Dashboard.

For more information on the Dashboard, please visit the California Department of Education's Dashboard web page. Under NCLB, districts that receive Title I funds (as most do) are held accountable for student achievement using the same basic approach as schools, based on subgroup performance and participation data. Districts and county offices of education are identified for Program Improvement (PI) status if they miss AYP in the same content area or on the same additional indicator for two consecutive years.

If a district does not improve after two years in Program Improvement, it faces serious sanctions in the third year. These could include a new curriculum, replacing staff, public supervision of some schools, replacing the superintendent and school board with a trustee, or, at the most extreme, restructuring or abolishing the district. In a state as complex as California, melding the existing accountability structure with a system that meets NCLB requirements for funding has been complicated.

Initial results turned up anomalies and circumstances that required refinements. NCLB has now been in place for more than a decade and is due for reauthorization. As more and more schools fail to meet their targets, several states have applied to the U.S. Department of Education for waivers. Although California applied for a waiver in June, it was denied. In March, the U.S. Department of Education approved a separate testing waiver for federal accountability purposes, allowing California not to make new Adequate Yearly Progress (AYP) determinations for elementary and middle schools.

Instead, elementary and middle schools will receive the same AYP determinations as in the prior year. This waiver did not affect high schools.

The proposed changes in state accountability plans have apparently almost always been in the direction of increased flexibility for states and LEAs, with reductions anticipated in the number or percentage of schools or LEAs identified as failing to make AYP. Issues that have arisen with respect to these changes include a lack of transparency, and possibly inconsistencies (especially over time), in the types of changes that ED officials have approved; debates over whether the net effect of the changes is to make the accountability requirements more reasonable or to undesirably weaken them; concern that the changes may make an already complicated accountability system even more complex; and timing—whether decisions on proposed changes are being made in a timely manner by ED.

The major aspects of state accountability plans for which changes have been proposed and approved include the following: (a) changes to take advantage of revised federal regulations and policy guidance regarding assessment of pupils with the most significant cognitive disabilities, LEP pupils, and test participation rates; (b) limiting identification for improvement to schools that fail to meet AYP in the same subject area for two or more consecutive years, and limiting identification of LEAs for improvement to those that failed to meet AYP in the same subject area and across all three grade spans for two or more consecutive years; (c) using alternative methods to determine AYP for schools with very low enrollment; (d) initiating or expanding use of confidence intervals in AYP determinations, including "safe harbor" calculations; (e) changing (usually effectively increasing) minimum group size; and (f) changing graduation rate targets for high schools.

Accountability plan changes that have frequently been requested but not approved by ED include (a) identification of schools for improvement only if they failed to meet AYP with respect to the same pupil group and subject area for two or more consecutive years, and (b) retroactive application of new forms of flexibility to recalculation of AYP for previous years.

The most recent available compilations of state AYP data are discussed below in two categories: reports focusing on the number and percentage of schools failing to meet AYP standards for one or more years, versus reports on the number and percentage of public schools and LEAs identified for improvement—that is, those that had failed to meet AYP standards for two or more consecutive years.

Table 2 provides the percentage of schools and LEAs failing to make adequate yearly progress, on the basis of the relevant assessment results, for each state, as reported by ED based on Consolidated State Performance Reports.

Table 2. Percentage of Schools and LEAs Failing to Make AYP, by State

ED recently posted data from the Consolidated State Performance Reports on the number of schools identified for improvement, corrective action, or restructuring for the most recent school year, on the basis of assessment results through the preceding school year.

As with the percentage of schools failing to make AYP, the percentage of schools identified varied widely among the states. A theme reflected in these results is a high degree of state variation in the percentage of schools identified as failing to meet AYP standards or as needing improvement. These variations appear to be based, at least in part, not only on underlying differences in achievement levels but also on differences in the degree of rigor or challenge in state pupil performance standards, and on variations in state-determined standards for the minimum size of pupil demographic groups in order for them to be considered in AYP determinations of schools or LEAs.

In general, larger minimum sizes for pupil demographic groups reduce the likelihood that many disadvantaged groups, such as LEP pupils or pupils with disabilities, will be considered in determining whether a school or LEA meets AYP. Although most attention, in both the statute and implementation activities, thus far has been focused on application of the AYP concept to schools, a limited amount of information is becoming available about LEAs that fail to meet AYP requirements, and the consequences for them.

The primary challenge associated with the AYP concept is to develop and implement school, LEA, and state performance measures that (a) are challenging, (b) provide meaningful incentives to work toward continuous improvement, (c) are at least minimally consistent across LEAs and states, and (d) focus attention especially on disadvantaged pupil groups.

At the same time, it is generally deemed desirable that AYP standards should allow flexibility to accommodate myriad variations in state and local conditions, demographics, and policies, and avoid the identification of so many schools and LEAs as failing to meet the standards that morale declines significantly systemwide and it becomes extremely difficult to target technical assistance and consequences on low-performing schools.

Many critics are especially concerned that efforts to direct resources and apply consequences to low-performing schools would likely be ineffective if resources and attention are dispersed among a relatively large proportion of public schools. The remainder of this report provides a discussion and analysis of several specific aspects of NCLB's AYP provisions that have attracted significant attention and debate. As such, it generally does not focus on alternatives to the current statutory provisions of NCLB.

The required incorporation of an ultimate goal—of all pupils at a proficient or higher level of achievement within 12 years of enactment—is one of the most significant differences between the AYP provisions of NCLB and those under previous legislation. Setting such a date is perhaps the primary mechanism requiring state AYP standards to incorporate annual increases in expected achievement levels, as opposed to the relatively static expectations embodied in most state AYP standards under the previous IASA.

Without an ultimate goal of having all pupils reach the proficient level of achievement by a specific date, states might simply establish relative goals. Nevertheless, a goal of having all pupils at a proficient or higher level of achievement, within 12 years or any other specified period of time, may easily be criticized as "unrealistic," if one assumes that "proficiency" has been established at a challenging level. Proponents of such a demanding ultimate goal argue that schools and LEAs frequently meet the goals established for them, even rather challenging goals, if the goals are clearly identified, defined, and established, if they are attainable, and if it is made visibly clear that schools and LEAs will be expected to meet them.

This is in contrast to a pre-NCLB system under which performance goals were often vague, undemanding, and poorly communicated, with few, if any, consequences for failing to meet them. A demanding goal might maximize efforts toward improvement by state public school systems, even if the goal is not met.

Further, if a less ambitious goal were to be adopted, what lower level of pupil performance might be acceptable, and for which pupils? At the same time, by setting deadlines by which all pupils must achieve at the proficient or higher level, the AYP provisions of NCLB create an incentive for states to weaken their pupil performance standards to make them easier to meet.

In many states, only a minority of pupils are currently achieving at the proficient or higher level on state reading and mathematics assessments. Even in states where the percentage of all pupils scoring at the proficient or higher level is substantially higher, the percentage of those in many of the pupil groups identified under NCLB's AYP provisions is substantially lower.

There has thus far been some apparent movement toward lowering proficiency standards in a small number of states. Reportedly, a few states have redesignated lower performance levels as "proficient" for NCLB purposes. Colorado's basic proficiency level on the CSAP is also high in comparison to most states.

It is unlikely that any state, or many schools or LEAs of substantial size and heterogeneous pupil population, will meet NCLB's ultimate AYP goal unless state standards of proficient performance are significantly lowered, or states aggressively pursue statistical techniques, such as high minimum group sizes and confidence intervals (described below), that substantially reduce the range of pupil groups considered in AYP determinations or effectively lower required achievement thresholds.

Some states have addressed this situation, at least in the short run, by "backloading" their AYP standards, requiring much more rapid improvements in performance at the end of the 12-year period than at the beginning.

These states have followed the letter of the statutory language that requires increases of "equal increments" in levels of performance after the first two years, and at least once every three years thereafter.

For example, both Indiana and Ohio established incremental increases in the threshold level of performance for schools and LEAs that are equal in size but scheduled disproportionately in the later school years of the period. As a result, the required increases per year are three times greater in the final years than in the earlier ones.
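To make the arithmetic of backloading concrete, the following Python sketch builds a hypothetical schedule in which the increments are equal in size, as the statute requires, but are concentrated in the final years; the starting point, goal, and effective years are illustrative assumptions, not the actual figures adopted by Indiana, Ohio, or any other state.

start, goal, years = 20.0, 100.0, 12      # hypothetical percent-proficient targets over a 12-year window
effective_years = [3, 6, 9, 10, 11, 12]   # years in which a new, higher threshold takes effect
increment = (goal - start) / len(effective_years)  # equal increments, per the statutory language

threshold = start
schedule = {}
for year in range(1, years + 1):
    if year in effective_years:
        threshold += increment
    schedule[year] = round(threshold, 1)

print(schedule)
# Early in the period, one increment is spread over three calendar years
# (about 4.4 points per year); in the final years a full increment applies
# every year (about 13.3 points per year), three times the earlier rate.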

These states may be trying to postpone required increases in performance levels until NCLB provisions are reconsidered, and possibly revised, by Congress. Many states have used one or both of a pair of statistical techniques to attempt to improve the validity and reliability of AYP determinations. Use of these techniques also tends to have an effect, whether intentional or not, of reducing the number of schools or LEAs identified as failing to meet AYP standards.

The averaging of test score results for various pupil groups over two- or three-year periods is explicitly authorized under NCLB, and this authority is used by many states. In some cases, schools or LEAs are allowed to select whether to average test score data, and for what period (two years or three), whichever is most favorable for them. As discussed above, recent policy guidance also explicitly allows the use of averaging for participation rates. The use of another statistical technique was not explicitly envisioned in the drafting of NCLB's AYP provisions, but its inclusion in the accountability plans of several states has been approved by ED.

This is the use of "confidence intervals," usually with respect to test scores, but in a couple of states also to the determination of minimum group size (see below). This concept is based on the assumption that any test administration represents a "sample survey" of pupils' educational achievement level. As with all sample surveys, there is a degree of uncertainty regarding how well the sample results—average test scores for the pupil group—reflect pupils' actual level of achievement.

As with surveys, the larger the number of pupils in the group being tested, the greater the probability that the group's average test score will represent their true level of achievement, all else being equal. Put another way, confidence intervals are used to evaluate whether achievement scores are below the required threshold to a statistically significant extent. This is analogous to the "margin of error" commonly reported along with opinion polls. While test results are not based on a small sample of the relevant population, as are opinion poll results, since the tests are to be administered to the full "universe" of pupils, the results from any particular test administration are considered to be only estimates of pupils' true level of achievement, or of the effectiveness of a school or LEA in educating specified pupil groups, and thus the "margin of error" or "confidence interval" concepts are deemed by many to be relevant to these test scores.

All other relevant factors being equal, the smaller the pupil group, and the higher the desired degree of confidence, the larger the window surrounding the threshold percentage. In this case, a school would fail to make AYP with respect to a pupil group only if the average score for the group falls below the lowest score in that range.
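As a rough illustration of the mechanics only (states' approved formulas differ, and the function name, group sizes, confidence level, and target used here are assumptions), the following Python sketch applies a one-sided normal-approximation confidence interval around a hypothetical annual objective and shows how a smaller group produces a wider window around the threshold.

import math

def makes_ayp_with_ci(pct_proficient, n_pupils, threshold, z=1.645):
    # Illustrative check: the group fails only if its observed percent
    # proficient falls below the threshold by more than the interval's
    # half-width. z = 1.645 corresponds to a one-sided 95% confidence level.
    if n_pupils == 0:
        return True  # no pupils to evaluate
    half_width = z * math.sqrt(threshold * (1 - threshold) / n_pupils)
    return pct_proficient >= threshold - half_width

# Hypothetical numbers: 51% proficient against a 59.5% annual objective.
print(makes_ayp_with_ci(0.51, 40, 0.595))   # True: small group, wide window
print(makes_ayp_with_ci(0.51, 400, 0.595))  # False: larger group, narrower window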

The use of confidence intervals to determine whether group test scores fall below required thresholds to a statistically significant degree improves the validity of AYP determinations. It addresses the fact that test scores for any group of pupils will vary from one test administration to another, and that these variations may be especially large for a relatively small group of pupils.

At the same time, the use of confidence intervals reduces the likelihood that schools (or, to a lesser extent, LEAs) will be identified as failing to make AYP. Another important technical factor in state AYP standards is the establishment of the minimum size ("n") for pupil groups to be considered in AYP calculations.

NCLB recognizes that in the disaggregation of pupil data for schools and LEAs, there might be pupil groups that are so small that average test scores would not be statistically reliable, or the dissemination of average scores for the group might risk violation of pupils' privacy rights.

Both the statute and ED regulations and other policy guidance have left the selection of this minimum number to state discretion. While most states have reportedly selected a minimum group size between 30 and 50 pupils, the range of selected values for "n" is rather large, varying from as few as five pupils to a substantially larger number under certain circumstances.

One state (North Dakota) has set no specific level for "n," relying only on the use of confidence intervals (see above) to establish the reliability of test results. Although most states have always set a standard minimum size for all pupil groups, some states until recently established higher levels of "n" for pupils with disabilities or LEP pupils. In general, the higher the minimum group size, the less likely that many pupil groups will actually be separately considered in AYP determinations. Pupils will still be considered, but only as part of the "all pupils" group, or possibly other specified groups.

This gives schools and LEAs fewer thresholds to meet, and reduces the likelihood that they will be found to have failed to meet AYP standards. In many cases, if a pupil group falls below the minimum group size at the school level, it is still considered at the LEA level, where it is more likely to meet the threshold.
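A minimal sketch of this filtering logic, assuming a hypothetical minimum group size of 30 and invented enrollment counts (no state's actual figures), is shown below; groups below the minimum are not separately evaluated at the school level, though their pupils still count in the all-pupils total and may be evaluated at the LEA level.

MIN_N = 30  # hypothetical minimum group size for AYP purposes

school_groups = {
    "all_pupils": 420,
    "economically_disadvantaged": 180,
    "lep_pupils": 45,
    "pupils_with_disabilities": 22,  # below MIN_N: not separately evaluated here
}

separately_evaluated = [g for g, n in school_groups.items() if n >= MIN_N]
counted_in_all_pupils_only = [g for g, n in school_groups.items() if n < MIN_N]

print("Separately evaluated at the school level:", separately_evaluated)
print("Counted only within the all-pupils group:", counted_in_all_pupils_only)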

In addition, since minimum group sizes for reporting achievement data are typically lower than those used for AYP purposes, scores are often reported for pupil groups that are not separately considered in AYP calculations. At the same time, relatively high levels for "n" weaken NCLB's specific focus on a variety of pupil groups, many of them disadvantaged, such as LEP pupils, pupils with disabilities, or economically disadvantaged pupils.

There are several ongoing issues regarding NCLB's requirement for the disaggregation of pupil achievement results in AYP standards, namely the requirement that a variety of pupil groups be separately considered in AYP calculations. The first of these was discussed immediately above: the establishment of minimum group size, with the possible result that relatively small pupil groups will not be considered in the schools and LEAs of states that set "n" at a comparatively high level, especially in states that set a higher level for certain groups (e.g., pupils with disabilities or LEP pupils).

A second issue arises from the fact that the definition of the specified pupil groups has been left essentially to state discretion. This is noteworthy particularly with respect to two groups of pupils: LEP pupils and pupils in major racial and ethnic groups. Regarding LEP pupils, many have been concerned about the difficulty of demonstrating that these pupils are performing at a proficient level if this pupil group is defined narrowly to include only pupils unable to perform in regular English-language classroom settings.

In other words, if pupils who no longer need special language services are no longer identified as being LEP, how will it be possible to bring those who are identified as LEP up to a proficient level of achievement? In developing their AYP standards, some states addressed this concern by including pupils in the LEP category for one or more years after they no longer need special language services.

As was discussed above, ED has recently published policy guidance encouraging all states to follow this approach, allowing them to continue to include pupils in the LEP group for up to two years after being mainstreamed into regular English language instruction, and further allowing the scores of LEP pupils to be excluded from AYP calculations for the first year of pupils' enrollment in United States schools.
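A small Python sketch of that flexibility, as described in the paragraph above, follows; the function name and parameters are illustrative assumptions rather than regulatory language.

def lep_ayp_treatment(years_in_us_schools, years_since_exiting_lep):
    # years_since_exiting_lep is None while the pupil is still classified as LEP.
    include_score = years_in_us_schools > 1  # scores may be excluded in the first year of U.S. enrollment
    if years_since_exiting_lep is None:
        in_lep_group = True
    else:
        in_lep_group = years_since_exiting_lep <= 2  # counted in the LEP group for up to two years after exiting
    return include_score, in_lep_group

print(lep_ayp_treatment(1, None))  # (False, True): newly arrived pupil, score may be excluded
print(lep_ayp_treatment(4, 1))     # (True, True): exited LEP status last year, still in the group
print(lep_ayp_treatment(6, 3))     # (True, False): exited three years ago, no longer in the group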

Another aspect of this issue arises from the discretion given to states in defining "major racial and ethnic groups." Some states defined the term relatively comprehensively, while others adopted narrower definitions. A narrower interpretation may reduce the attention focused on excluded pupil groups.

It would also reduce the number of different thresholds some schools and LEAs would have to meet in order to make AYP.

A final, overarching issue arises from the relationship between pupil diversity in schools and LEAs and the likelihood of being identified as failing to meet AYP standards. All other relevant factors being equal (especially the minimum group size criteria), the more diverse the pupil population, the more thresholds a school or LEA must meet in order to make AYP. While in a sense this was an intended result of legislation designed to focus (within limits) on all pupil groups, the impact of making it more difficult for schools and LEAs serving diverse populations to meet AYP standards may also be seen as an unintended consequence of NCLB.

This issue has been analyzed in a recent study by Thomas J. Kane and Douglas O. Staiger, who concluded that such subgroup targets "cause large numbers of schools to fail ... Moreover, while the costs of the subgroup targets are clear, the benefits are not. Although these targets are meant to encourage schools to focus more on the achievement of minority youth, we find no association between the application of subgroup targets and test score performance among minority youth."

However, without specific requirements for achievement gains by each of the major pupil groups, it is possible that insufficient attention would be paid to the performance of the disadvantaged pupil groups among whom improvements are most needed, and for whose benefit the Title I-A program was established. Under previous law, without an explicit, specific requirement that AYP standards focus on these disadvantaged pupil groups, most state AYP definitions considered only the performance of all pupils combined.

And it is theoretically possible for many schools and LEAs to demonstrate substantial improvements in achievement by their pupils overall while the achievement of their disadvantaged pupils does not improve significantly, at least until the ultimate goal of all pupils at the proficient or higher level of achievement is approached.

This is especially true under a "status" model of AYP such as the one in NCLB, under which advantaged pupil groups may have achievement levels well above what is required, and an overall achievement level could easily mask achievement well below the required threshold by various groups of disadvantaged pupils.

One possible alternative to current policy would be to allow states to count each student only once, in net, in AYP calculations, with equal fractions assigned to each relevant demographic category (for example, a pupil belonging to two specified groups would count as one-half of a pupil in each).

As was discussed earlier, concern has been expressed by some analysts since early debates on NCLB that a relatively high proportion of schools would fail to meet AYP standards. Future increases in performance thresholds, as the ultimate goal of all pupils at the proficient or higher level of achievement is approached, may result in higher percentages of schools failing to make AYP.

In response to these concerns, ED officials have emphasized the importance of taking action to identify and improve underperforming schools, no matter how numerous. They have also emphasized the possibilities for flexibility and variation in applying consequences to schools that fail to meet AYP, depending on the extent to which they fall short of those standards.

Further, some analysts argue that a set of AYP standards that one-third or more of public schools fail to meet may accurately reflect pervasive weaknesses in public school systems, especially with respect to the performance of disadvantaged pupil groups. To these analysts, the identification of large percentages of schools is a positive sign of the rigor and challenge embodied in NCLB's AYP requirements, and is likely to provide needed motivation for significant improvement and ultimately a reduction in the percentage of schools so identified.

Others have consistently expressed concern about the accuracy and efficacy of an accountability system under which such a high percentage of schools is identified as failing to make adequate progress, with consequent strain on financial and other resources necessary to provide technical assistance, public school choice and supplemental services options, as well as other consequences.

In addition, some have expressed concern that schools might be more likely to fail to meet AYP simply because they have diverse enrollments, and therefore more groups of pupils to be separately considered in determining whether the school meets AYP standards.

They also argue that the application of technical assistance and, ultimately, consequences to such a high percentage of schools will dilute available resources to such a degree that these responses to inadequate performance would be insufficient to markedly improve performance.

The proportion of public schools identified as failing to meet AYP standards is not only relatively large in the aggregate, but also varies widely among the states.

This result is somewhat ironic, given that one of the major criticisms of the pre-NCLB provisions for AYP was that they resulted in a similarly wide degree of state variation in the proportion of schools identified, and the more consistent structure required under NCLB was widely expected to lead to greater consistency among states in the proportion of schools identified.

It is likely that state variations in the percentage of schools failing to meet AYP standards are based not only on underlying differences in achievement levels, as well as a variety of technical factors in state AYP provisions, but also on differences in the degree of rigor or challenge in state pupil performance standards and assessments. Particularly now that all states receiving Title I-A grants must also participate in state-level administration of NAEP tests in 4th and 8th grade reading and math every two years, this variation can be illustrated for all states by comparing the percentage of pupils scoring at the proficient level or above on NAEP versus state assessments.

Such a comparison was conducted by a private organization, Achieve, Inc. According to this analysis, the percentage of pupils statewide who score at a proficient or higher level on state assessments, using state-specific pupil performance standards, was generally much higher than the percentage deemed to be at the proficient or higher level on the NAEP tests, which employ NAEP's own pupil performance standards.

Of the states considered, the percentage of pupils scoring at a proficient or higher level on the state assessment was lower than on NAEP (implying a more rigorous state standard) for five states out of 32 in math and only two states out of 29 in reading. Further, among the majority of states where the percentage of pupils at the proficient level or above was found to be higher on state assessments than on NAEP, the relationship between the two percentages varied widely—in some cases the percentage was only marginally higher on the state assessment, and in others it was more than twice as high on the state assessment as on NAEP.

More recently, a report by the National Center for Education Statistics mapped each state's standard for a proficient level of performance in reading and mathematics at the 4th and 8th grade levels onto the equivalent NAEP scales.

The report's authors concluded that, in comparison to the common standard embodied in NAEP, state standards of proficiency varied widely and in almost all cases were lower than the NAEP standard. A second issue is whether some states might choose to lower their standards of "proficient" performance in order to reduce the number of schools identified as failing to meet AYP and to make it easier to meet the ultimate NCLB goal of all pupils at the proficient or higher level by the statutory deadline.

In the affected states, this would increase the percentage of pupils deemed to be achieving at a "proficient" level, and reduce the number of schools failing to meet AYP standards. It seems likely that the pre-NCLB variations in the proportion of schools failing to meet AYP reflected large differences in the nature and structure of state AYP standards, as well as major differences in the nature and rigor of state pupil performance standards and assessments.

While the basic structure of AYP definitions is now substantially more consistent across states, significant variations remain with respect to the factors discussed in this section of the report (such as minimum group size or use of confidence intervals), and substantial differences in the degree of challenge embodied in state standards and assessments remain.

While, as discussed above, ED recently published policy guidance that relaxes the participation rate requirement somewhat—allowing use of average rates over two- to three-year periods, and excusing certain pupils for medical reasons—the high rate of assessment participation that is required in order for schools or LEAs to meet AYP standards is likely to remain an ongoing focus of debate.

In recent years, the overall percentage of enrolled pupils who attend public schools on any given day has been high, but consistently below 100 percent.

What might be the major advantages and disadvantages of growth models of AYP, in comparison to status or improvement models? This question is addressed in the following pages. Growth models generally recognize the reality that different schools and pupils have very different starting points in their achievement levels, and they credit progress made at all levels (e.g., gains below, at, or above the proficient level).

They more directly measure the effect of schools on the specific pupils they serve over a period of years, attempting to track the movement of pupils between schools and LEAs, rather than applying a single standard to all pupils in each state.

They have the ability to focus on the specific effectiveness of schools and teachers with pupils whom they have actually taught for multiple years, rather than the change in performance of pupil groups among whom there has usually been a substantial amount of mobility.

They can directly as well as indirectly adjust for non-school influences on achievement, comparing the same students across years and reducing errors due to student mobility. Proponents of growth models often argue that status models of AYP in particular make schools and LEAs accountable for factors over which they have little control, and that status models focus insufficiently on pupil achievement gains, especially gains that remain below the threshold for proficient performance or gains from the proficient to the advanced level.

Status models, such as the current primary model of AYP under NCLB, might even create an undesirable incentive for teachers and schools to focus their attention, at least in the short run, on pupils who are only marginally below a proficient level of achievement, in hopes of bringing them above that sole key threshold, rather than focusing on the most disadvantaged pupils whose achievement is well below the proficient level.

The current status model of AYP also confers no credit for achievement increases above the proficient level, that is, for bringing pupils from the proficient to the advanced level. Although any growth model deemed consistent with NCLB would likely need to incorporate that act's ultimate goal of all pupils at a proficient or higher level of achievement (see below), the majority of such models used currently or in the past do not include such goals, and they tend to allow disadvantaged schools and pupils to remain at relatively low levels of achievement for considerable periods of time.

Growth models of AYP may be quite complicated, and may address the accountability purposes of NCLB less directly and clearly than status (or, to a lesser extent, improvement) models. If the primary purpose of AYP is to determine whether schools and LEAs are succeeding at raising the achievement of their current pupils to challenging levels, with those goals and expectations applied consistently to all pupil groups, then the current provisions of NCLB might more simply and directly meet that purpose than growth model alternatives.
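The contrast between the two approaches can be illustrated with a simple Python sketch; the scale scores, the 59.5 percent objective, the proficiency cut score, and the linear "on track within three years" rule are all invented for illustration and do not correspond to NCLB's pilot rules or any state's model.

THRESHOLD = 0.595            # hypothetical annual objective: 59.5% of the group
PROFICIENT_SCORE = 300       # hypothetical proficiency cut score
YEARS_TO_TARGET = 3          # hypothetical window for reaching proficiency

# (prior-year score, current-year score) for each pupil in a small group
pupils = [(240, 265), (290, 305), (250, 258), (310, 312)]

# Status model: the share of pupils proficient right now.
status_share = sum(cur >= PROFICIENT_SCORE for _, cur in pupils) / len(pupils)

# Growth model: pupils also count if projecting their one-year gain forward
# would bring them to the cut score within YEARS_TO_TARGET years.
def on_track(prev, cur):
    if cur >= PROFICIENT_SCORE:
        return True
    gain = cur - prev
    return gain > 0 and cur + gain * YEARS_TO_TARGET >= PROFICIENT_SCORE

growth_share = sum(on_track(prev, cur) for prev, cur in pupils) / len(pupils)

print(f"Status model: {status_share:.0%} proficient; makes AYP: {status_share >= THRESHOLD}")
print(f"Growth model: {growth_share:.0%} proficient or on track; makes AYP: {growth_share >= THRESHOLD}")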

However, the implications of pupil mobility are multifaceted and do not necessarily favor a particular AYP model. Growth models have the advantage of attempting to track individual pupils through longitudinal data systems. But if they thereby spread responsibility for the achievement of highly mobile pupils across a variety of schools and LEAs, accountability is dispersed.

At the same time, the presence of highly mobile pupils in the groups considered in determining AYP under status and improvement models may seem unfair to school staff. However, the impact of such pupils in school-level AYP determinations is limited by NCLB's provision that pupils who have attended a particular school for less than one year need not be considered in such determinations. It is generally agreed that growth models of AYP are more demanding than status or improvement models in several respects, especially in terms of data requirements and analytical capacity.

For a longitudinal data system sufficient to support a growth model, it is likely that states would need pupil data systems incorporating, at a minimum, unique statewide student identifiers and the capacity to link individual pupils' assessment results across years, schools, and LEAs. Although the available information on state data systems is insufficient to enable one to determine with precision how many states could or could not currently implement such models if they chose to do so, it is very likely that growth models generally require resources and data systems that some states currently lack.
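A minimal sketch of the kind of record such a system would need to maintain, with illustrative field names rather than any state's actual schema, is shown below.

from dataclasses import dataclass

@dataclass
class PupilAssessmentRecord:
    student_id: str       # unique statewide identifier, stable across years and schools
    school_year: str      # e.g., "2006-07"
    lea_id: str
    school_id: str        # allows records to be linked as pupils move between schools
    grade: int
    subject: str          # "reading" or "mathematics"
    scale_score: int      # score on a scale comparable across grades
    subgroups: tuple      # e.g., ("LEP", "economically_disadvantaged")

# Growth is computed by joining a pupil's records across consecutive school
# years on student_id and comparing scale scores.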

This concern is being addressed through an ED program intended to help states design, develop, and implement statewide, longitudinal data systems. Further, the establishment of longitudinal data systems for education is a priority for state participation in the State Fiscal Stabilization Fund and the "Race to the Top" discretionary grant competition under the ARRA.

Thus far, at least 41 states have received awards through three rounds of competition. According to the announcement in the April 15 Federal Register, the program is intended "to enable SEAs to design, develop, and implement statewide, longitudinal data systems to efficiently and accurately manage, analyze, disaggregate, and use individual student data ... Applications from states with the most limited ability to collect, analyze, and report individual student achievement data will have a priority."

Most growth models used before the initiation of ED's growth model pilot, or still used as part of state-specific accountability systems, have not incorporated an ultimate goal such as the one under NCLB—that all pupils reach a proficient or higher level of achievement by a specified date. Growth targets may instead be framed either as a specified amount of annual gain, without reference to an ultimate performance level, or as gains sufficient to bring pupils to a specified level, such as proficiency, by a set date. The first type of growth target has been most common, while NCLB's ultimate goal would represent a growth target of the second variety, with separate paths (and presumably separate starting points) for each relevant pupil cohort.

The models approved thus far under ED's growth model pilot arguably meet the ultimate goal requirement. However, under some of these models, pupils need only be proficient, or on track toward proficiency within a limited number of years.

These consequences, as well as possible performance-based awards, are not discussed in detail in this report.

For additional information on this legislation, see the CRS report Education for Disadvantaged Children: Major Themes in the Reauthorization of Chapter 1, by [author name scrubbed] (out of print, available upon request). There is a variant of the group status model, sometimes called an "index model," under which partial credit would be attributed to performance improvements below the proficient level (e.g., pupil movement from the "below basic" to the "basic" level of achievement).

One state, Massachusetts, has injected a partial growth element into its safe harbor provision. Program regulations as initially published did not require graduation rates and other additional academic indicators to be disaggregated in determining whether schools or LEAs meet AYP standards.

However, regulations published subsequently in October (discussed later in this report) require graduation rates to be disaggregated in AYP determinations.

If the number of pupils in a specified demographic group is too small to meet the minimum group size requirements for consideration in AYP determinations, then the participation rate requirement does not apply. It has occasionally been said that the AYP systems approved by ED for a few states before initiation of the growth model pilot announced in November incorporate "growth" elements.


