The U.S. Department of Education should give states more time to implement innovative assessment programs, and the agency should make awards to projects that can have a broad impact on state testing systems, according to public comments the department received on innovative assessment grants.
In a notice published Jan. 8, the department requested public comments on its proposal to expand the reach of its Competitive Grants for State Assessment program to more states and to provide more grants toward states’ work on innovative assessments.
Under the proposal, grants would be provided to states to pilot and implement previously approved and new innovative assessments.
The Every Student Succeeds Act established the innovative assessment program as a way for states, or groups of them, to experiment with new forms of testing.
Four states have been approved for innovative assessments, according to the department, and in October, the agency invited more to seek approval.
A letter jointly submitted by the Aurora Institute, Center for Assessment, Center for Innovation in Education, Great Schools Partnership, and KnowledgeWorks said the department should lengthen proposed timelines for states to plan and implement grants for innovative assessment.
The letter calls for expanding the anticipated window for planning grants from 12-18 months to as long as 24 months, and for lengthening the proposed period for implementation grants from 36-48 months to 60 months.
“A state receiving an implementation grant at the start of their [innovative assessment] authority will need to administer two assessment systems until ED provides the state permission to transition fully to the new assessment system,” the letter states. “It is hard to imagine this could occur in fewer than five years.”
As proposed, more funding for innovative assessment may entice more states to apply for the authority, as funds will support pilot development and reporting efforts, wrote Stuart Kahl, an independent educational assessment consultant, in comments to the department.
But the innovative assessment program’s strict comparability requirements are a barrier to innovation, Kahl argued. As part of the innovative assessment pilots, states must make yearly determinations of students’ performance on assessments, comparing results across local school districts and against results of existing statewide assessments.
Both of those comparability requirements are posing significant challenges for innovative programs, Kahl wrote.
In an effort to remove comparability barriers, the education department should allow states to add an innovative component to their existing accountability assessments, while not requiring that comparability between the new component and state tests be demonstrated, he wrote.
Another organization submitting comments, the National Center for Learning Disabilities, said grants should be prioritized based on the state’s commitment to accessibility and equity for students with disabilities and other historically disadvantaged groups.
Specifically, grants could be provided based on how states will use funds to improve alternate assessments aligned with state academic standards and alternate academic achievement standards for students with significant cognitive disabilities, the group said.
“Instead of funneling already scarce federal education funding to a limited number of yet-to-be-proven pilot programs, ED should provide funds to assist states needing the most support to improve their assessment systems,” the center said.
While pilots offer innovation opportunities, grant applications should be required to explicitly describe the equity benefits of innovative assessments, the Consortium for Citizens with Disabilities wrote.
“So far, the approved plans do not clearly articulate how these assessments will assist in closing achievement gaps or better measure learning for different populations,” the group wrote.
A major company in the K-12 market, Curriculum Associates, said in its comments that states should keep separate the assessments used to inform instruction and those used for educator accountability. Districts should focus on assessments that guide instruction, the company said, while states should focus on testing for accountability.
The organization also took aim at some states’ plans for “through-year assessments,” an idea to use tests given periodically throughout an academic year to show students’ progress, and to produce year-end accountability results. That idea is being explored by some states and by another testing provider in the market, NWEA.
Such tests will “box out” other innovative learning tools, including diagnostic assessments to inform instruction and shorter-cycle formative assessment tools and performance tasks, Curriculum Associates argued.
“Students will lose out on valuable growth opportunities, as these programs will be edged out by a state-mandated provider whose tools ‘integrate with’ state accountability assessments,” the group said. “A reduction in choice and competition will result in less innovation of benefit to students and teachers.”
The basic concept of one assessment serving both purposes “is seductive, but it is deeply flawed because it begets numerous unintended consequences that directly contradict the organizational structure of our education systems,” argued Curriculum Associates.
NWEA, which develops pre-K-12 assessments and professional learning offerings, pointed out in its comments that a group of districts in Georgia will be piloting through-year assessments as part of the state’s implementation of innovative assessment. The group is also working in Nebraska, all of whose districts will be transitioning to a through-year model in an effort to better connect assessment to teaching and learning.
NWEA predicts that a through-year assessment will measure growth and proficiency throughout the year, providing valuable information educators need to improve learning while producing summative scores for federal accountability. (See EdWeek Market Brief’s recent Q-and-A with NWEA officials about their plans.)
“With through-year assessment, districts and states can measure fall-to-spring growth as well as annual changes in summative performance,” NWEA said on its through-year assessment webpage. “Plus, teachers receive both on- and off-grade level information in the fall, winter, and spring to inform teaching and learning.”