Considerations for Use of Early Grades Reading Assessments

Corey Savage
Annenberg Institute for School Reform, Brown University

EdInstruments catalogs early grades reading assessments1 for use in research and practice as part of its larger library of instruments. The website categorizes early grades reading assessments within Academic Knowledge and Skills > ELA. To guide potential users, this brief outlines early grades reading content, describes the search and selection process we used to identify these instruments, and presents recommendations and considerations for use. We do not endorse individual instruments, because the best instrument often depends on the specific use, but we provide examples of instruments for various uses. Users can navigate our website to explore additional options.

Early grades reading content

Reading is a complex skill that comprises several interconnected sub-skills. These sub-skills develop in a somewhat predictable sequence that begins with pre-language skills in infancy and extends into complex text interpretation and meaning-making skills, which continue to develop through adulthood (Pearson & Hamm, 2005). For K-3 students, educators focus on teaching phonemic awareness, phonics, fluency, vocabulary, and comprehension (National Reading Panel, 2000). Assessments that measure the ability to fluently and accurately connect letters to sounds and segment words into phonemes (e.g., letter sound fluency, phoneme segmentation fluency), decode words (e.g., decoding accuracy), read pseudowords (e.g., nonsense word fluency), read words (e.g., word identification), and read connected text (e.g., oral reading fluency) are important tools for supporting early reading instruction and for research at these grade levels.

Early grades reading assessments can serve a range of purposes for researchers and practitioners. These uses include individual diagnostics, universal screening, progress monitoring, accountability, academic research, and evaluation, among others. The content coverage of a given assessment will vary depending on its intended purpose (Alderson, 2000). Users should carefully consider their needs and ensure they align with the intended purpose of the assessment and the appropriate grade-level learning objectives.

Early years reading assessments on EdInstruments

The EdInstruments team collected early years (PreK-2nd grade) reading assessments for EdInstruments by conducting database searches, reviewing publisher websites, and soliciting expert recommendations. For consistency, one researcher on our team completed the entire coding process.

We used the following protocol during the instrument selection process:

  • We included only formal assessments that could be administered by a researcher or in collaboration with schools/districts for the purpose of a large-scale program evaluation.
  • We excluded tests designed solely to be used by teachers or clinicians for individual diagnostics.
  • We excluded assessments that are soon to be discontinued (e.g., easyCBM).
  • We included only the newest edition when multiple editions of a test were available.

Table 1 lists the 38 instruments identified in this process. For each instrument listed in Table 1, the supplementary material spreadsheet presents detailed information, including external links, reliability, evidence of validity and fairness, technical references, administration time, and cost.

Table 1. EdInstruments: Early Grades Reading Assessments
# Name Grade range
1 aimswebPlus PK-12
2 BRIGANCE IED III PK-2
3 Comprehensive Test of Phonological Processing 2nd edition (CTOPP-2) PK-12+
4 DIBELS 8th edition K-8
5 Early Reading Assessment PK-6
6 ECLS-K:2011 Reading Tests K-5
7 FastBridge CBMreading 1-12
8 FastBridge earlyReading PK-3
9 Gates-MacGinitie Reading Tests (GMRT-4) K-12+
10 Gray Oral Reading Test 5th edition (GORT-5) K-12+
11 Group Reading Assessment & Diagnostic Evaluation (GRADE) PK-12
12 HMH Reading Inventory K-12
13 i-Ready K-12 (diagnostic); K-8 (growth monitoring); 2-8 (standards mastery)
14 Iowa Assessments (Forms E and F) K-12
15 ISIP Early Reading K-3
16 Kaufman Test of Educational Achievement 3rd edition (KTEA-3) PK-12+
17 NWEA MAP Growth K-12
18 NWEA MAP Reading Fluency PK-5
19 Phonological Awareness Literacy Screening (PALS) PK-8
20 Phonological and Print Awareness Scale (PPA Scale) PK-3
21 Peabody Picture Vocabulary Test 5th edition (PPVT-5) PK-12+
22 Predictive Assessment of Reading (PAR) PK-3
23 Stanford Achievement Test Series 10th edition (SAT10) K-12
24 Star Early Literacy K-3
25 Star Reading K-12
26 Test of Early Reading Ability 4th edition (TERA-4) PK-3
27 TerraNova3 K-12 (complete battery); 1-8 (multiple assessments)
28 Test of Preschool Early Literacy (TOPEL) PK
29 Test of Word Reading Efficiency 2nd edition (TOWRE-2) K-12+
30 Test of Reading Comprehension 4th edition (TORC-4) 1-12
31 Test of Silent Word Reading Fluency 2nd edition (TOSWRF-2) K-12+
32 Test of Silent Contextual Reading Fluency 2nd edition (TOSCRF-2) 1-12+
33 Test of Silent Reading Efficiency and Comprehension (TOSREC) 1-12
34 Texas Primary Reading Inventory (TPRI) K-3
35 Wechsler Individual Achievement Test 4th edition (WIAT-4) PK-12+
36 Woodcock Reading Mastery Tests 3rd edition (WRMT-III) K-12
37 Woodcock-Johnson IV PK-12+
38 Wide Range Achievement Test 5th edition (WRAT5) K-12+
Recommendations for using early years reading assessments in research

Researchers can consider a variety of reading assessments for their studies, depending on their specific research questions and design. Additional factors to consider are accuracy, reliability, availability of alternate forms, and time required for test administration. Teachers’ familiarity with the assessment might be beneficial, particularly for studies in partnership with schools or districts.

For comprehensive assessments of reading, the DIBELS 8th edition, the ECLS-K:2011 Reading Tests, the WRMT-III, and the GRADE are appropriate options. The GMRT-4 is also a strong option, although some of its subtests have lower reliability. The TOPEL is a good choice for research focused on pre-K reading. For studies that include both reading and math, researchers might consider the Iowa Assessments (Forms E and F), SAT10, aimswebPlus, Woodcock-Johnson IV, WIAT-4, and WRAT5. The TOWRE-2 is well suited for research focused on word reading efficiency, and the PPVT-5 is an appropriate choice for studies of receptive vocabulary.

Additional considerations for use by school practitioners

Screening assessments are not diagnostic
Using assessments that are aligned with their intended purpose yields more useful information (Stahl, Flanigan, & McKenna, 2019). For example, screening assessments, such as the NWEA Measures of Academic Progress (MAP) and DIBELS, can identify students who may need additional support, while diagnostic assessments, such as the Boehm Test of Basic Concepts 3rd edition (Boehm-3) and the HMH Reading Inventory, can determine specific skill strengths and deficits. Screening assessments are generally not sufficient for determining strengths in specific skills.

Monitoring progress with instructionally relevant measures
Progress monitoring should be done with instructionally relevant measures that support adjustments to instruction based on the data (Rathvon, 2004). Ideally, progress monitoring measures should be linked to screening assessments to allow for the comparison of scores within and across tiers of support over time. However, to support the efficiency and utility of progress monitoring, it is more important that the data from progress monitoring assessments are immediately instructionally relevant and can be used to inform adjustments to the intervention received.

Usability, cost, and context
Factors such as usability and cost can affect which instrument is best suited to a particular context. Ideally, assessments are easy to use, produce automated score reports that are easy to understand, and are cost-effective. The context in which the assessment is used, for example whether students are English learners, can also affect which instruments are best (Afflerbach, 2017). Many assessments are not designed for use with English language learners and may not accurately assess the skills of these students. The availability of assessments and qualified examiners for accurately measuring the skills of English language learners in their native language can be a challenge, and assessments that have been translated from English into other languages may not have been culturally adapted. Users who consider these limitations, choose early reading assessments with caution, and interpret results carefully will obtain more accurate information.

Timing, scope, and “stakes”
School systems often seek to make assessment administration more efficient, such as by using a single assessment system across grade levels (Afflerbach, 2016). However, a single assessment system that spans multiple grade levels can vary in accuracy from grade to grade. It is also important to consider the "stakes" of the decisions derived from assessment data. High-stakes decisions, such as determining eligibility for special education, require assessments with robust technical adequacy evidence, while low-stakes decisions, such as determining whether a student needs more practice with a particular skill, may rely on assessments with weaker technical adequacy evidence.

Accessibility
Not all assessments are appropriate for students with learning differences. Researchers have validated several assessments that support accurate assessment of early literacy skills for students with high-incidence disabilities. However, when assessing students with low-incidence moderate to severe disabilities (e.g., visual impairment, hearing loss, significant cognitive impairment), extensive modifications may be needed to make established assessments accessible. Validated assessments that account for the accessibility needs of students with low-incidence disabilities are needed to accurately assess these students' early literacy skills.

References:

Afflerbach, P. (2016). Reading assessment: Looking ahead. The Reading Teacher, 69(4), 413-419.

Afflerbach, P. (2017). Understanding and using reading assessment, K-12. ASCD.

Alderson, J. C. (2000). Assessing reading. Cambridge University Press.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. National Institute of Child Health and Human Development, National Institutes of Health.

Pearson, P. D., & Hamm, D. N. (2005). The assessment of reading comprehension: A review of practices—Past, present, and future. In S. G. Paris & S. A. Stahl (Eds.), Children's reading comprehension and assessment (pp. 31-88). Routledge.

Rathvon, N. (2004). Early reading assessment: A practitioner's handbook. Guilford Press.

Stahl, K. A. D., Flanigan, K., & McKenna, M. C. (2019). Assessment for reading instruction. Guilford Publications.

Suggested citation:

Savage, C. (2022). Considerations for use of early grades reading assessments. EdInstruments Brief, Annenberg Institute for School Reform, Brown University. www.edinstruments.com


1 I thank Alison Gandhi, Guangming Lin, Susanna Loeb, Samuel Madison, Nancy Nelson, Aizat Nurshatayeva, Nathaniel Schwartz, and Patricia Vadasy for their helpful feedback and expertise. Their participation in this work does not signify endorsement of any individual instrument/assessment. Any errors or omissions are my own.