19 Assessment of Reading and Writing

Different workplaces will divide responsibilities for reading and writing differently, and it is important to avoid duplication of services. However, as language experts, speech-language pathologists should be part of the team when reading and writing are areas of concern. The  following sections detail ways in which speech-language pathologists can evaluate reading and writing.

Testing

A number of standardized tests used by SLPs assess reading and writing, including the Oral and Written Language Scales–Second Edition (OWLS-II; Carrow-Woolfolk, 2011), the Test of Adolescent and Adult Language–Fourth Edition (TOAL-4; Hammill et al., 2007), the Test of Integrated Language and Literacy Skills (TILLS; Nelson et al., 2016), and the Test of Written Language–Fourth Edition (TOWL-4; Hammill & Larsen, 2009). The Clinical Evaluation of Language Fundamentals–Fifth Edition (CELF-5; Wiig et al., 2013) includes supplementary subtests assessing reading and writing. Standardized tests are useful for obtaining norm-referenced scores. However, just as it is critical to use a variety of methods to assess oral language, a variety of methods is needed to obtain a thorough assessment of a student’s writing.

Rapid automatic naming (RAN) tasks are predictors of the development of word reading (Peters et al., 2020; Powell & Atkinson, 2021). RAN tasks involve naming a series of familiar items, such as objects, colors, letters, or numbers, as fast as possible. These tasks are considered implicit, rather than explicit, tasks of phonological processing, as phonological processing is thought to be engaged automatically rather than consciously (Melby-Lervåg et al., 2012). RAN predicts fluency of reading both non-words and irregularly spelled words. RAN tasks have potential as early screeners of reading deficits.

The assessment of reading and writing must take an interdisciplinary approach. Teachers can provide writing samples from the student being evaluated, as well as average and superior examples from the class, in order to highlight how the writing of the student being evaluated differs from the writing of typical and exceptional peers. Teachers can also provide insight into the specific aspects of writing that may be in need of remediation. The CCSS (National Governors Association, 2010) include three genres of written language: narrative, informational, and argument/opinion. Samples of each genre of writing should be reviewed in order to obtain a full picture of strengths and areas of need.

Written Language Sample Analysis

SLPs can use a variety of prompts to evaluate writing in narrative, expository, and persuasive genres. Narrative writing tasks can include writing stories in response to storybook pictures, videos, or oral prompts about given topics or topics of the students’ choosing. Expository writing can be elicited using picture description, descriptions or explanations without pictures, comparing and contrasting, or writing a retelling of a passage presented auditorily or visually (Price & Jackson, 2015). SLPs can elicit persuasive writing by asking students to take a stance on a given topic and write about it. Students can also write persuasive essays based on topics of their own choosing.

The 6+1 Traits Model is often used by teachers to evaluate student writing. It is beneficial for SLPs to use this same model in the assessment and treatment of writing, as solely focusing on analytic measures, such as number of different words, total number of words, or clauses per utterance, may not lead to improved scores on rubrics used by teachers (Koutsoftas & Gray, 2012).

Microstructure, or word- and sentence-level metrics, and macrostructure, or discourse-level metrics, comprise different dimensions of writing. The 6+1 traits total score represents writing quality. Genre-specific measures of macrostructure for narrative, expository, and persuasive writing, such as the Narrative Scoring Scheme (Miller et al., 2016), ExMac (Karasinski, 2022), and Persuasive Scoring Scheme (Miller et al., 2016), represent a separate dimension from the 6+1 traits score (Karasinski, 2022). Microstructure measures comprise the dimensions of accuracy, productivity, and complexity (Hall-Mills & Apel, 2015; Karasinski, 2022; Kim et al., 2014; Puranik et al., 2008; Wagner et al., 2011). Accuracy includes spelling, punctuation, and capitalization. Productivity comprises number of different words (NDW), total number of words (NTW), number of ideas, number of clauses, and number of T-units (main clause + dependent clause(s)). Complexity includes mean length of utterance (MLU) and clausal density.

Many of the metrics listed above are commonly used in the assessment of oral language samples; these are also useful for evaluating writing samples (Scott, 2020; Karasinski, 2022). Assessing the number of words containing derivational bound morphemes, adverbs of magnitude and likelihood, adverbial conjuncts, metacognitive verbs, and abstract nouns is one way to measure growth in written language. Analysis of spelling also is important, as the types of spelling errors can provide information about the child’s knowledge of phonology, morphology, and orthography. Sentence complexity can be assessed by mean length of utterance or clausal density. Grammaticality, which can be measured either as percent of utterances without errors or percent of utterances with grammatical errors, reliably distinguishes children with DLD from those with typical language development (Scott, 2020).
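The productivity and complexity metrics described above are simple counts and ratios once a sample has been segmented. The following is a minimal sketch of how NTW, NDW, and MLU (in words) could be computed for an already-segmented sample; the whitespace tokenization here is a rough illustration, not a clinical standard, and the `sample_metrics` function is a hypothetical helper, not part of any assessment software.

```python
# Minimal sketch: productivity and complexity counts from a written
# language sample. Assumes utterances have already been segmented into
# C-units by the clinician; tokenization is a crude lowercase split,
# not a clinical transcription standard.
import re

def sample_metrics(utterances):
    """Return NTW, NDW, and MLU (in words) for a list of C-units."""
    tokens_per_utterance = [
        re.findall(r"[a-z']+", u.lower()) for u in utterances
    ]
    all_tokens = [t for toks in tokens_per_utterance for t in toks]
    ntw = len(all_tokens)       # total number of words
    ndw = len(set(all_tokens))  # number of different words
    # mean length of utterance in words = total words / total utterances
    mlu = ntw / len(utterances) if utterances else 0.0
    return {"NTW": ntw, "NDW": ndw, "MLU": round(mlu, 2)}

sample = [
    "The dog ran to the park.",
    "He saw a cat and he chased it.",
]
print(sample_metrics(sample))  # → {'NTW': 14, 'NDW': 12, 'MLU': 7.0}
```

Measures such as clausal density or grammaticality cannot be reduced to counts like these; they require clinician judgment about clause boundaries and error status.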

Nelson (2018) provides a tutorial on transcribing and analyzing written language samples using Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2019) software. Nelson recommends transcribing written language samples using the same coding as is used for oral language samples. Two challenges with transcribing written language samples into SALT, which was originally designed for use with oral language samples, include addressing misspellings and utterance segmentation. Nelson recommends transcribing misspelled words as the student spelled them, followed by a vertical bar and the correct spelling, then a word-level code. Utterance segmentation is performed in the same manner as it is performed for oral language samples, using C-units (a main clause with its dependent clauses). However, some components of writing, such as sound effects, could decrease the mean length of utterance, whereas other components, such as lists, could increase the mean length of utterance without adding to the syntactic complexity. To account for this, Nelson recommends establishing rules for including these types of written language within curly brackets or as comment lines.
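The misspelling convention Nelson describes (student spelling, vertical bar, correct spelling, word-level code) is mechanical enough that the annotated pairs can be pulled out automatically. The sketch below illustrates the idea under simplifying assumptions: the `[SP]` word-level code and exact line syntax are invented for illustration and are not the official SALT specification.

```python
# Illustrative sketch of a misspelling annotation in the style Nelson
# (2018) recommends: student spelling, vertical bar, correct spelling,
# then a word-level code. The [SP] code and line format are simplified
# assumptions for this example, not the official SALT conventions.
import re

MISSPELLING = re.compile(r"(\w+)\|(\w+)\[SP\]")

def extract_misspellings(line):
    """Return (student_spelling, correct_spelling) pairs from one line."""
    return MISSPELLING.findall(line)

def intended_text(line):
    """Recover the intended wording by keeping only corrected spellings."""
    return MISSPELLING.sub(r"\2", line)

line = "C no wun|one[SP] wots|wants[SP] to pla|play[SP] wis|with[SP] me."
print(extract_misspellings(line))
print(intended_text(line))
```

Keeping both spellings in the transcript preserves the raw evidence for spelling-error analysis while letting word counts and MLU be computed over the intended words.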

Miscue Analysis

Miscue analysis refers to analyzing errors in spelling and decoding written words. Analyzing miscues provides information about the level at which the breakdown is occurring. For example, as a five-year-old, my son sometimes wrote the following: “no wun wots to pla wis me,” when he wanted his sisters to stop what they were doing and do something that he wanted them to do. This provided insight into his mental representation of each word. He understood the phonology and orthography of no, to, and me, as evidenced by his correct spelling. He understood the phonology of one and play, as evidenced by writing letters that correspond to each sound in the words. He did not understand the orthography of play and with; that is, he did not know the correct spelling of these words. Wants and with provide evidence for him not quite grasping the phonology. He did not represent the /n/ in wants, and he represented /s/ rather than /θ/ in with. Spelling miscues can also provide insight into morphology. For example, spelling backed as bakt would convey knowledge of the phonology, as demonstrated by the use of an appropriate letter to represent each sound in the word, but not the orthography, as evidenced by the use of k for ck, and not morphology, as shown by the use of t for the past tense marker -ed.

Miscues when reading orally provide similar insight. Anna, a thirteen-year-old with DLD, produces reading miscues that illustrate the types of miscues typical of children and adolescents with deficits in decoding. For example, she read monument as moment. This type of miscue is syntactically appropriate, as a noun was substituted for a noun, but it is not semantically appropriate, as it changes the meaning. It is a substitution of an orthographically similar word, which suggests that Anna does have knowledge of phonics. Readers with phonological disorders often rely on whole-word representations rather than decoding at a phonological level when they encounter unfamiliar words, whereas readers with typical phonological development rely on whole-word representations for familiar words and decode based on phonological representations when reading unfamiliar words. Anna’s miscues suggest reliance on whole-word, or lexical, representations. They also suggest that she may not be monitoring her reading to ensure that the text makes sense. Incorrectly decoding words can lead to difficulty forming the cohesive representation of a text needed to comprehend it. A cohesive representation of the text is especially important when inferences are needed to make sense of the text (Kintsch, 1988).

References

Carrow-Woolfolk, E. (2011). Oral and Written Language Scales, Second Edition (OWLS-II). Torrance, CA: Western Psychological Services.

Education Northwest. (2014). 6 + 1 Trait Writing. Retrieved from http://educationnorthwest.org/traits/traits-rubrics

Hall-Mills, S. & Apel, K. (2015). Linguistic feature development across grades and genre in elementary writing. Language, Speech, and Hearing Services in Schools, 46, 242-255.

Hammill, D. D., Brown, V. L., Larsen, S. C., & Wiederholt, J. L. (2007). TOAL-4: Test of Adolescent and Adult Language–Fourth Edition. Austin, TX: Pro-Ed.

Hammill, D. D., & Larsen, S. C. (2009). TOWL-4: Test of Written Language–Fourth Edition. Austin, TX: Pro-Ed.

Karasinski, C. (2022). Microstructure and macrostructure measures of written narrative, expository, and persuasive language samples. Communication Disorders Quarterly, 1-11. https://doi.org/10.1177/1525740122111133

Kim, Y., Al Otaiba, S., Folsom, J., Greulich, L., & Puranik, C. (2014). Evaluating the dimensionality of first-grade written composition. Journal of Speech, Language, and Hearing Research, 57, 199-211.

Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction integration model. Psychological Review, 95, 163-182.

Koutsoftas, A. & Gray, S. (2012). Comparison of narrative and expository writing in students with and without language-learning disabilities. Language, Speech, and Hearing Services in Schools, 43, 395-409.

Melby-Lervåg, M., Lyster, S.A. H., & Hulme, C. (2012). Phonological skills and their role in learning to read: A meta-analytic review. Psychological Bulletin, 138(2), 322–352. https://doi.org/10.1037/a0026744

Miller, J., & Iglesias, A. (2019). Systematic Analysis of Language Transcripts (SALT) [Computer software]. Madison, WI: SALT Software, LLC.

Nelson, N. (2018). How to code written language samples for SALT analysis. Perspectives of Language Learning and Education, 3, 45–55.

Nelson, N. W., Plante, E., Helm-Estabrooks, N., & Hotz, G. (2016). Test of integrated language and literacy skills (TILLS). Baltimore, MD: Brookes.

Nelson, N. W., Plante, E., Anderson, M., & Applegate, E. B. (2022). The dimensionality of language and literacy in the school-age years. Journal of Speech, Language, and Hearing Research, 65(7), 2629-2647.

Peters, J. L., Bavin, E. L., & Crewther, S. G. (2020). Eye movements during RAN as an operationalization of the RAN-reading “microcosm”. Frontiers in Human Neuroscience, 14, 67.

Price, J. & Jackson, S. (2015). Procedures for obtaining and analyzing writing samples of school-age children and adolescents. Language, Speech, and Hearing Services in Schools, 46, 277-293.

Powell, D., & Atkinson, L. (2021). Unraveling the links between rapid automatized naming (RAN), phonological awareness, and reading. Journal of Educational Psychology, 113(4), 706.

Puranik, C., Lombardino, L., & Altmann, L. (2008). Assessing the microstructure of written language using a retelling paradigm. American Journal of Speech-Language Pathology, 17, 107-120.

Scott, C. M. (2020). Language sample analysis of writing in children and adolescents: Assessment and intervention contributions. Topics in Language Disorders, 40(2), 202-220.

Wagner, R., Puranik, C., Foorman, B., Foster, E., Wilson, L., Tschinkel, E., & Kantor, P. (2011). Modeling the development of written language. Reading and Writing, 24, 203-220.

Wiig, E. H., Secord, W. A., & Semel, E. M. (2013). Clinical Evaluation of Language Fundamentals–Fifth Edition. Bloomington, MN: Pearson.

License

Language Disorders In School-Age Children And Adolescents Copyright © by apurvaashok. All Rights Reserved.
