9 Language Sampling
Language sampling is critical for evaluating language in everyday contexts. Some states, such as Michigan, require language samples to establish eligibility for school-based services for language disorders. Because language samples can be repeated frequently, they can demonstrate subtle changes that are not detected on standardized tests.
The major barrier that SLPs report to conducting language sample analysis (LSA) is time. Historically, samples of 15-30 minutes, or 50-100 utterances, were considered necessary to obtain a reliable, valid measure of a child’s language. However, 7-minute samples of about 70 utterances have been found to be comparable to 20-minute samples of about 170 utterances, suggesting that SLPs may not need to spend as much time collecting language samples as previously thought. For reliable measures of mean length of utterance, clausal density, and percentage of maze words, 10-minute samples may be needed, which are still shorter than the 15-30 minutes previously thought to be necessary (Wilder & Redmond, 2022). In addition to eliciting shorter samples, computer software can be used to decrease the amount of time spent transcribing and analyzing language samples.
Language Sampling Contexts
Typically, language samples are elicited in the following contexts: play, conversation, narration, exposition, and persuasion. It is advantageous to elicit language samples in multiple contexts from a client. Samples of the child interacting with a variety of interlocutors are useful for assessing differences in interactions with different communication partners. Narration, exposition, and persuasion are included in the Common Core State Standards (National Governors Association, 2010). The following paragraphs provide further explanations of methods of elicitation for each context.
Play
Assessing language in the context of play is ideal for young children, especially for those who are not yet able to engage in conversation. Toys used in the elicitation of a play sample should be interesting to the child. Toys that can be manipulated work best, such as a dollhouse or a farm set. Clinicians should follow the child’s lead, and use self-talk, parallel-talk, expansions, and extensions. Self-talk is talk about one’s own actions, such as, “I am putting the pink pig in the barn.” Parallel-talk is about what the child is doing, such as, “You are making the horse jump the fence.” Expansions recast a child’s short utterance into a complete sentence. For example, if the child says, “Puppy,” the clinician could say, “That is a puppy.” An extension is similar, but also adds new semantic information, such as, “That is a black and white puppy.” Clinicians should focus on making comments, rather than asking questions, and should avoid yes/no questions, which will only yield a yes, no, or I don’t know response. A general rule of thumb is a ratio of 3:1 comments:questions. It is also important for clinicians to pause and allow the child time to speak. Rushing to fill silence may decrease the opportunities the child will have to speak. Children need more time than adults to process information and formulate a response.
A video-recorded play sample also can be used to assess symbolic play. The Revised Concise Symbolic Play Scale (Westby, 2000) is a tool that assesses theory of mind, episodic memory, and decontextualization in play. Children five to six years of age are expected to demonstrate theory of mind by giving characters multiple roles, such as mother, wife and doctor. They demonstrate episodic memory by incorporating highly imaginative themes and multiple planned sequences. Decontextualization is demonstrated via the use of language to set the scene.
Photo by cottonbro from Pexels: https://www.pexels.com/photo/child-playing-with-green-plastic-toy-3661288/
Conversation
Most of our talk occurs in conversation, making conversational language samples vital to the assessment process in order to determine a child’s functional communication. Typically, conversational language samples can be collected from children kindergarten age or older; however, some preschoolers can engage in conversation, and some school-agers may need the context of play. The same elicitation protocol for a play sample applies to conversational samples, including using expansions and extensions, allowing pause time, avoiding yes/no questions or those which can be answered in a single word, and having a ratio of 3:1 comments:questions. Conversational language samples afford the opportunity to assess pragmatics, such as turn-taking, contingent responding, conversational repair, presupposition, and topic maintenance.
Conversation ties into educational initiatives, such as the Common Core State Standards (CCSS; National Governors Association, 2010). For example, SL.CCR.1 states, “Prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively” (p. 22).
Photo by Cliff Booth from pexels. https://www.pexels.com/photo/two-women-having-a-chat-after-workout-4057861/
Conversation is the most common language sampling context used by SLPs in the schools (Lenhart et al., 2022; Pavelko et al., 2016); however, it does not elicit the language complexity of narrative or expository samples, and does not reliably differentiate the syntactic complexity produced by adolescents with DLD from the syntactic complexity of those with typical language development (Nippold, 2008). However, conversational samples are useful if the focus of the assessment is on pragmatics (Lenhart et al., 2022).
Narration
Narrative language samples may be appropriate for some children as young as four years of age. Narration is part of educational standards, including the CCSS (National Governors Association, 2010). Narration is important for both academic and social success. The Narrative Scoring Scheme (Miller et al., 2016) is a useful protocol for evaluating story structure. The clinician scores the child’s narrative on a scale of 0-5 for each of seven constructs: introduction, character development, mental states, referencing, conflict and resolution, cohesion, and conclusion. A score of 5 indicates proficiency, 3 indicates satisfactory performance, and 1 indicates minimal or immature performance. Scores of 2 and 4 are undefined, and assigned using clinical judgement. Scores of 0 indicate child errors. More information on the Narrative Scoring Scheme can be found here: https://saltclasses.saltsoftware.com/pluginfile.php?file=/273/course/section/155/NSS_Scoring_Guide.pdf
SALT software provides norms for retells of the wordless picture books by Mercer Mayer, and for Pookins Gets Her Way (Lester & Munsinger, 1990), Dr. DeSoto (Steig, 2013), and A Porcupine Named Fluffy (Lester, 2013). There are also norms for “student selected story” retells, which can be used with other stories, as well as norms for the Test of Narrative Language (Gillam & Pearson, 2017) and the Edmonton Narrative Norms Instrument (ENNI; Schneider, Dubé, & Hayward, 2005).
The ENNI is another protocol for eliciting and scoring a narrative language sample. This is a freely available norm-referenced measure, with norms for children aged four through nine years. The ENNI can be accessed here: https://www.ualberta.ca/communications-sciences-and-disorders/resources-for-clinicians-and-researchers/edmonton-narrative-norms-instrument.
Retelling fables has been shown to be a useful narrative task for school-age children (Lenhart et al., 2022) and adolescents (Nippold et al., 2014, 2017). The retelling of two fables takes approximately 6-8 minutes, which makes this a time-efficient task (Lenhart et al., 2022).
Exposition
Expository samples convey factual information. Expository language samples relate to academic performance and are part of educational standards, such as the Common Core State Standards (National Governors Association, 2010). Understanding and producing written and spoken informational language is critical to success not only in English Language Arts, but also in mathematics, science, and social studies. Proficiency with informational text is important for college and career readiness; consequently, the CCSS represent a shift to more focus on this type of text over a focus on fictional literature (National Governors Association, 2010). The Expository Scoring Scheme (ESS; Miller et al., 2016) scores the following components of the Favorite Sport or Game Task: preparations, object of contest, start of play, course of play, scoring, rules, strategy, duration, terminology, and cohesion. A score of 5 indicates proficiency, 3 indicates satisfactory performance, and 1 indicates minimal or immature performance. Scores of 2 and 4 are undefined, and assigned using clinical judgement. Scores of 0 indicate child errors. More information regarding the ESS can be found here: http://saltclasses.saltsoftware.com/pluginfile.php?file=/63/course/section/41/ExpoRDBDoc.pdf.
Because the ESS can only be used with the favorite sport or game task, the ExMac was created to be used with multiple types of expository samples, including how-to and compare/contrast samples (Karasinski, 2022). This measure uses aspects of the NSS, ESS, and Persuasive Scoring Scheme (PSS; Miller et al., 2016). For the ExMac, samples are scored on the same 5-point scale used for the NSS, ESS, and PSS on introduction, referencing, supporting details, cohesion, conclusion, and effectiveness.
A peer-conflict resolution task (PCR) is another expository language sampling task that has been shown to be appropriate for school-age children aged 8-11 (Lenhart et al., 2022) and adolescents (Nippold et al., 2007). In this task, students listen to two scenarios that include a problem, and they retell the scenario and answer problem-solving questions. This task takes approximately 6-8 minutes, which makes it an efficient task (Lenhart et al., 2022).
Persuasion
Persuasive language also is included in the Common Core State Standards (National Governors Association, 2010). Persuasion is necessary for academic success, career success, and social interaction. The protocol for eliciting a persuasive sample can be found at www.saltsoftware.com. The Persuasive Scoring Scheme (PSS; Miller et al., 2016) can be used to assess the child’s use of persuasion. The clinician scores the child’s persuasive language sample from 0-5 on seven measures: issue identification and desired change, supporting reasons, other point of view (counter arguments), compromises, conclusion, cohesion, and effectiveness. A score of 5 indicates proficiency, 3 indicates satisfactory performance, and 1 indicates minimal or immature performance. Scores of 2 and 4 are undefined, and assigned using clinical judgement. Scores of 0 indicate child errors. More information on the PSS can be found here: https://saltclasses.saltsoftware.com/pluginfile.php?file=/8713/course/section/297/PSS%20Scoring%20Guide.pdf
Transcribing and Analyzing Language Samples
In addition to the elicitation of language samples, transcription and analysis are needed in order to obtain a clear picture of the child’s language. Transcription and analysis have been barriers to language sampling in the past. Fortunately, a number of advances in computer software have made it possible to complete this essential work more quickly. In fact, Fox et al. (2021) found that automatic speech recognition software (Google Cloud Speech) was more accurate in transcribing language samples than real-time transcription by SLPs and trained transcribers. Allowing automatic speech recognition software to complete the transcription lets the clinician fully engage in the interaction, rather than focusing on transcribing. Although transcribing from recordings has been shown to be more appropriate than real-time transcription (Miller et al., 2016), it requires extra time and is not always feasible. Making the process more efficient, while maintaining accuracy, is ideal.
Systematic Analysis of Language Transcripts (SALT)
Software programs, such as SALT, make analysis of language transcripts simpler by automatically calculating some of the commonly used metrics, which will be discussed here. Prior to analysis, transcripts must be segmented into utterances. There are multiple methods for utterance segmentation. One commonly used method, used in the SALT databases, is segmentation into Communication Units (C-units). A C-unit is a main clause with all of its dependent clauses. The SALT software contains databases of story retells for children aged 4;4 through 12;8. There are databases for specific stories, as well as a general database for “student selected stories.” SALT also has a database specifically for the ENNI. Databases for expository samples, collected using the Favorite Sport or Game Task (Miller et al., 2012), are available for children 10-18 years of age. The SALT databases include persuasive samples for adolescents in grades 9-12. The protocols used for analysis with the SALT databases are based on Miller, Nockerts, and Andriacchi (2016) and can also be found at www.saltsoftware.com.
Computerized Language ANalysis (CLAN)
CLAN (MacWhinney, 2000) is a freely available program that includes an automatic parser capable of labeling target forms. Software, manuals, and data can be accessed at https://www.talkbank.org. Databanks include the Child Language Data Exchange System (CHILDES), which contains language samples obtained from children. SLABank and BilingBank provide corpora for studying multilingualism. Because CLAN uses an automatic parser, clinicians do not need to hand-code for targets of interest, such as bound morphemes. Overton et al. (2021) provided a useful tutorial on CLAN for SLPs.
Sampling Utterances and Grammatical Analysis Revisited (SUGAR)
Another computerized method of analyzing language samples is SUGAR. The SUGAR website (https://www.sugarlanguage.org/downloads) provides downloadable forms that include the noun phrase and verb phrase elements typically found at a given age. These forms can be helpful in identifying intervention targets for morphosyntax. An advantage of SUGAR is that it uses word processing software that often is included with the purchase of computers and tablets and can be used for other purposes.
Analysis of syntax from language samples
Mean length of utterance
Mean length of utterance (MLU) is a measure of morphosyntactic complexity, and can be calculated in words or morphemes. Although MLU can provide insight into a child’s morphosyntactic complexity, clinicians at times over-rely on this measure, which alone is not highly informative (Moyle et al., 2011; Overton et al., 2021). Historically, Brown’s (1973) rules have been used to calculate MLU in morphemes, and these rules are used in the SALT databases. Per Brown’s rules, count as one morpheme: recurrences of a word for emphasis, ritualized reduplications, compound words, irregular past tense verbs, diminutives, auxiliary verbs, irregular plurals, and proper nouns. Count as two morphemes: possessive nouns, plural nouns, third person singular present tense verbs, regular past tense verbs, and present progressive verbs. Because MLU only provides information regarding utterance length, rather than structure or content, it may not be useful in setting intervention goals. MLU in morphemes also may not capture growth in children whose utterances exceed four morphemes in length (Brown, 1973); for this reason, MLU in words may be a more useful metric for assessing older children (Overton et al., 2021). It is important to consider bias in the use of MLU when assessing children who speak a nonstandard dialect of English, as some inflectional morphemes are optional in some dialects.
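The calculation itself is simple: the total number of words (or morphemes) divided by the number of utterances. A minimal sketch in Python, assuming utterances have already been segmented and their lengths hand-coded under whichever rule set (Brown’s or SUGAR’s) is in use:

```python
def mlu(counts):
    """Mean length of utterance: total units / number of utterances.

    `counts` holds one entry per utterance, giving that utterance's
    length in words or in hand-counted morphemes.
    """
    if not counts:
        raise ValueError("need at least one utterance")
    return sum(counts) / len(counts)

# Morpheme counts for four hand-coded utterances
print(mlu([2, 4, 3, 5]))  # 3.5
```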
Owens and colleagues have presented an alternate method of language sample analysis, Sampling Utterances and Grammatical Analysis Revisited (SUGAR), which calculates MLU differently. In SUGAR, the following changes to Brown’s rules were made. First, count as one morpheme each word in a proper name. Second, count these additional bound morphemes: -ful, -ly, -y (adj.), -en, -th, -ish, -ment, -tion, dis-, un-, re-, -er (comparative), -est (superlative), and -er (person or thing that does some action, unless common, such as teacher). Count wanna, gotta, hafta, and all contractions as two morphemes. Count gonna as three morphemes (Owens, 2013). Using these rules, age-related changes in MLU are seen in older school-age children. Owens and Pavelko (2020) found that, in conversational language samples, children aged 7;0 (years;months) to 8;11 have a mean MLU of 8.59 (standard deviation = 1.40) and children aged 9;0 to 10;11 have a mean MLU of 9.61 (standard deviation = 1.52). Using the SUGAR method of calculating MLU results in better diagnostic accuracy than using Brown’s method in children aged 3-7 (Pavelko & Owens, 2019; Ramos et al., 2022).
Table 1. Morpheme counts per Brown’s (1973) rules and per SUGAR.

| Feature | # morphemes per Brown | # morphemes per SUGAR |
| --- | --- | --- |
| ritualized reduplications (bye-bye) | 1 | 1 |
| compound words (ladybug) | 1 | 1 |
| irregular past tense verbs (ran, went) | 1 | 1 |
| diminutives (doggie, birdie) | 1 | 1 |
| auxiliary verbs (is, are, was) | 1 | 1 |
| irregular plurals (mice) | 1 | 1 |
| proper nouns (Mickey Mouse, Dr. Smith) | 1 | 2 |
| possessive nouns (John’s, Maddie’s) | 2 | 2 |
| plural nouns (cats, chairs) | 2 | 2 |
| third person singular present tense verbs (sings, waits, dances) | 2 | 2 |
| regular past tense verbs (called, laughed) | 2 | 2 |
| present progressive verbs (talking, going, smiling) | 2 | 2 |
| -est (smallest, funniest) | 1 | 2 |
| -ful (beautiful, thoughtful) | 1 | 2 |
| -ish (smallish, greenish) | 1 | 2 |
| -ly (really, happily) | 1 | 2 |
| -ment (announcement, commencement) | 1 | 2 |
| re- (replay, redo) | 1 | 2 |
| -sion (discussion, mission) | 1 | 2 |
| -tion (transportation, meditation) | 1 | 2 |
| un- (undone, unsee) | 1 | 2 |
| -y (lucky, yucky, sticky) | 1 | 2 |
| dis- (dislike, distaste) | 1 | 2 |
| -en (golden, olden) | 1 | 2 |
| -th (sixth, fourth) | 1 | 2 |
| -er comparative (bigger, stronger, smarter) | 1 | 2 |
| -er agentive (runner, baker) | 1 | 2 |
Index of productive syntax (IPSyn)
The IPSyn (Scarborough, 1990) is a measure of morphosyntax with norms for children aged two to six years. This measure assesses noun phrases, verb phrases, questions, negation, and sentence structure. The IPSyn goes beyond MLU, which merely describes utterance length, by providing a detailed analysis of the specific structures a child uses, which can be useful in developing intervention goals (Yang et al., 2022). The IPSyn comprises four subscales: noun phrases, verb phrases, questions/negations, and sentence structures. For each item in a subscale, a child earns 2 points for two exemplars, 1 point for one exemplar, and 0 points if an item is not used. Table 2 displays the IPSyn items by subscale (Scarborough, 1990).
| Noun Phrase | Verb Phrase | Question/Negation | Sentence Structure |
| --- | --- | --- | --- |
| N1 Proper, mass, count noun | V1 Verb | Q1 Question marked by intonation | S1 2-word combination |
| N2 Pronoun, prolocative | V2 Particle or preposition | Q2 Routine do/go or existence/name question or wh-pronoun alone | S2 Subject-verb sequence |
| N3 Modifier (adjectives, possessives, quantifiers) | V3 Prepositional phrase (preposition + noun phrase) | Q3 Simple negation (negative + noun phrase) | S3 Verb-object sequence |
| N4 2-word noun phrase: article or modifier + nominal | V4 Copula linking 2 nominals | Q4 Initial wh-pronoun followed by verb | S4 Subject-verb-object sequence |
| N5 Article before a noun | V5 Catenative before a verb | Q5 Negative morpheme between subject and verb | S5 Any conjunction |
| N6 2-word noun phrase after verb or preposition | V6 Auxiliary be/do/have in verb phrase | Q6 Wh-question with inverted modal, copula, or auxiliary | S6 Sentence with 2 verb phrases |
| N7 Plural suffix | V7 Progressive suffix | Q7 Negation of copula, modal, or auxiliary | S7 Conjoined phrases |
| N8 2-word noun phrase before verb | V8 Adverb | Q8 Yes/no question with inverted copula, modal, or auxiliary | S8 Infinitive without catenative, marked with to |
| N9 3-word noun phrase (determiner/modifier + modifier + noun) | V9 Modal preceding verb | Q9 Why, which, when, whose | S9 Let/make/help/watch introducer |
| N10 Adverb modifying | V10 3rd person singular present tense suffix | Q10 Tag question | S10 Adverbial conjunction |
| N11 Other | V11 Past tense modal | Q11 Other | S11 Propositional complement |
| | V12 Regular past tense suffix | | S12 Conjoined sentences |
| | V13 Past tense auxiliary | | S13 Wh-clause |
| | V14 Medial adverb | | S14 Bitransitive predicate |
| | V15 Copula, modal, or auxiliary for emphasis or ellipsis (uncontractable) | | S15 Sentence with 3 or more verb phrases |
| | V16 Past tense copula | | S16 Relative clause |
| | V17 Other | | S17 Infinitive clause: new subject |
| | | | S18 Gerund |
| | | | S19 Fronted or center-embedded subordinate |
| | | | S20 Other |
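The IPSyn point rule (2 points for two exemplars, 1 point for one, 0 points for none) can be sketched as follows; the item names and tallies here are hypothetical:

```python
def ipsyn_item_score(n_exemplars):
    """IPSyn credit for one item: 0, 1, or 2 points,
    capped at two exemplars per item (Scarborough, 1990)."""
    return min(n_exemplars, 2)

# Hypothetical exemplar tallies for three items from Table 2
tallies = {"N7 plural suffix": 4,
           "V7 progressive suffix": 1,
           "S5 any conjunction": 0}
total = sum(ipsyn_item_score(n) for n in tallies.values())
print(total)  # 2 + 1 + 0 = 3 points
```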
Recent research has revealed that the IPSyn can be computed from 50-utterance language samples, reducing the required number of utterances by 50% and shortening the time needed to complete this measure (Yang et al., 2022). The verb phrases and sentence structures subscales are more informative than the noun phrase subscale (Yang et al., 2022). CLAN can be used to compute the IPSyn, which saves a significant amount of time for the clinician and provides useful information (Overton et al., 2021).
Developmental sentence scoring (DSS)
The DSS (Lee & Canter, 1974) is a measure of syntactic complexity that scores indefinite pronouns or noun modifiers, personal pronouns, main verbs, secondary verbs, negatives, conjunctions, interrogative reversals, and wh-questions. A sentence point is awarded for sentences that are grammatically correct, with no errors. Norms are provided for ages three years through six years, 11 months. The percent of utterances receiving the sentence point has excellent diagnostic accuracy for children aged 3-5 in play and conversation contexts (Eisenberg & Guo, 2016; Guo et al., 2019; Ramos et al., 2022). The DSS can be computed using CLAN, which saves time for clinicians (Overton et al., 2021). Table 3 depicts the scored items for the DSS (Lee & Canter, 1974).
| Points | Indefinite pronouns/noun modifiers | Personal pronouns | Main verbs | Secondary verbs | Negation | Conjunctions | Interrogative reversals | Wh-questions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | it, this, that | 1st and 2nd person | Uninflected verb; copula is or ’s | Five early-developing infinitival complements: I wanna see, I’m gonna see, I’ve gotta see, lemme see, let’s play | it, this, that + copula or auxiliary is, ’s + not | and | Reversal of copula | who, what, what + noun |
| 2 | no, some, more, all, lot(s), one(s), two (etc.), other(s), another | 3rd person | is + verb + ing | Noncomplementing infinitives | can’t, don’t | but | Reversal of auxiliary be | where, how many, how much, what…do, what…for |
| 3 | something, somebody, someone | Plural pronouns | -s and -ed; auxiliary am, are, was, were | Participle, present or past | isn’t, won’t | because | Obligatory do/does/did; reversal of modal; tag question | when, how, how + adjective |
| 4 | nothing, nobody, no one, none | those, these | can/will/may + verb | Early infinitival complements with differing subjects | Any copula-negative or auxiliary-negative contraction; any pronoun-auxiliary contraction + not | so, so that, and so, if | Reversal of auxiliary have; reversal of any 2 auxiliaries | why, what if, how come, how about + gerund |
| 5 | any, anything, anybody, anyone, every, everyone, everything, everybody | Reflexive pronouns | could/should/would/might + verb; obligatory do + verb; emphatic did/does + verb | Passive infinitival complement | Any uncontracted negatives | or, except, only | Reversal of any 3 auxiliaries | whose, which, which + noun |
| 6 | both, few, many, each, several, most, least, much, next, first, last, second (etc.) | Wh-pronouns | must/shall + verb; have + verb + en; have (’ve) got | Gerund | Negatives with have; auxiliary-have negative contraction; pronoun-auxiliary have contraction | where, when, while, why, how, whether (or not), for, till, until, since, before, after, unless, as, as + adjective + as, as if, like, that, than; obligatory deletions; wh-pronouns + infinitive | | |
| 7 | (his) own, one, oneself, whichever, whoever, whatever | | Passive | | | therefore, however, whenever, wherever | | |
| 8 | | | have been + verb + ing; had been + verb + ing; modal + have + verb + en; modal + be + verb + ing | | | | | |
Clausal density
Clausal density is a measure of syntactic complexity, calculated by dividing the total number of clauses by the total number of C-units. It can be calculated automatically using the Subordination Index in SALT software (Miller et al., 2016). Pavelko and Owens (2020) reported a mean of 1.34 clauses per sentence for children aged 7;0-8;11, and 1.37 for children aged 9;0-10;11, in conversational language samples. Clausal density has been found to reliably differentiate children with DLD from children with typically developing language (Wilder & Redmond, 2022).
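The formula is a simple ratio; a minimal sketch (the counts are illustrative only):

```python
def clausal_density(total_clauses, total_c_units):
    """Clausal density (subordination index): clauses per C-unit."""
    return total_clauses / total_c_units

# e.g., 67 clauses across 50 C-units
print(clausal_density(67, 50))  # 1.34
```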
Grammaticality Measures
The percent of utterances earning the DSS sentence point is one measure of grammaticality. Other measures of grammaticality also have good diagnostic accuracy. Both the percent of grammatically correct C-units or T-units and the percent of C-units or T-units with errors have good diagnostic accuracy for children aged 3-10 years (Guo et al., 2019; Guo & Schneider, 2016; Ramos et al., 2022). Guo et al. (2019) suggest the following cutoffs for percent grammatical utterances in narratives:
| Age | Cutoff |
| --- | --- |
| 4 years | 55.04% |
| 5 years | 79.01% |
| 6 years | 83.00% |
| 7 years | 85.40% |
| 8 years | 91.50% |
| 9 years | 88.42% |
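Applying these cutoffs amounts to computing percent grammatical utterances and comparing it to the age-specific value; a minimal sketch, with hypothetical C-unit counts:

```python
# Narrative cutoffs from Guo et al. (2019), as listed above
CUTOFFS = {4: 55.04, 5: 79.01, 6: 83.00, 7: 85.40, 8: 91.50, 9: 88.42}

def percent_grammatical(grammatical_c_units, total_c_units):
    """Percent of C-units that are grammatically correct."""
    return 100 * grammatical_c_units / total_c_units

def below_cutoff(age, grammatical_c_units, total_c_units):
    """True if the sample falls below the age-specific cutoff."""
    return percent_grammatical(grammatical_c_units, total_c_units) < CUTOFFS[age]

# A 6-year-old with 38 of 50 grammatical C-units (76%) falls below 83.00%
print(below_cutoff(6, 38, 50))  # True
```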
Analyzing semantics from language samples
A number of measures have been used to assess semantics from language samples. Although these measures have not been found to have high diagnostic accuracy (Charest et al., 2020; Ramos et al., 2022), they can be useful for providing insight into a child’s areas of need.
The total number of words (NTW) is the count of all words in the sample and is a measure of lexical productivity. Owens and Pavelko (2020) reported a mean NTW in conversational language samples of 379.63 (standard deviation = 59.28) for children aged 7;0-8;11, and of 421.36 (standard deviation = 66.61) for children aged 9;0-10;11. Often, when comparing language samples to a database, the samples are equated for NTW; however, there may be times when it is appropriate to compare NTW on a given task. NTW shows robust growth with age (Scott, 2020).
Lexical diversity
Lexical diversity, or the variety of words used, is frequently used as a general measure of semantics (Charest, Skoczylas, & Schneider, 2020). Low scores on indices of lexical diversity can indicate deficits in word-finding or word-learning (Charest et al., 2020). Low lexical diversity also may be seen in children from environments with decreased language input. Some investigators have suggested that counting instances of specific word types, such as adverbs, subordinate conjunctions, abstract nouns, and metacognitive verbs, as well as use of derivational morphology, may provide more insight into semantic deficits, and that such counts should be used in conjunction with measures of lexical diversity (Charest et al., 2020; Scott, 2020).
The number of different words (NDW) is one measure of lexical diversity. This is the number of unique words in a given sample. For example, the word the may be used several times in a sample, but will count only once in the calculation of NDW. Significant changes in NDW typically are seen across four or more grades, but may not be apparent across one or two grade levels (Scott, 2020). Children with DLD produce fewer different words than those with typically developing language (Wilder & Redmond, 2022).
Type-token ratio (TTR) is another measure of lexical diversity, calculated by dividing the number of different words by the total number of words. Moving-average type-token ratio (MATTR) is an estimate of TTR using a moving window. First, a window size, such as 100 words, is chosen, and the TTR is calculated for the first 100 words. Next, the TTR is calculated for each successive window, that is, words 2-101, then 3-102, and so on until the end of the sample. The TTRs for all windows are averaged to compute the MATTR. An advantage of the MATTR is that, unlike the TTR, it does not depend on sample length (Covington & McFall, 2010).
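For illustration, a small sketch computing NDW, TTR, and MATTR; the tiny sample and 5-word window are for demonstration only (a real analysis would typically use a 100-word window):

```python
def mattr(words, window=100):
    """Moving-average type-token ratio (Covington & McFall, 2010):
    the mean TTR over every window of `window` consecutive words."""
    if len(words) < window:
        raise ValueError("sample is shorter than the window")
    ttrs = [len(set(words[i:i + window])) / window
            for i in range(len(words) - window + 1)]
    return sum(ttrs) / len(ttrs)

words = "the dog saw a cat and the cat ran".split()
print(len(set(words)))                         # NDW = 7
print(round(len(set(words)) / len(words), 2))  # TTR = 0.78
print(round(mattr(words, window=5), 2))        # MATTR = 0.92
```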
Verbal Facility
SALT software provides the opportunity to assess verbal facility, which provides insight into processing and formulation. Mazes refer to false starts, revisions, repetitions, and fillers (e.g., um, uh). SALT calculates maze words as a percentage of total words. Speaking rate can be assessed in SALT using words per minute. Fillers, silent pauses, and abandoned utterances all are calculated in SALT and provide information regarding verbal facility. When reviewing a child’s use of mazes, silent pauses, and abandoned utterances, remember that lower values indicate greater verbal facility.
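The maze measure is a simple proportion; a minimal sketch (the counts are illustrative only):

```python
def percent_maze_words(maze_words, total_words):
    """Maze words as a percentage of total words;
    lower values suggest greater verbal facility."""
    return 100 * maze_words / total_words

# e.g., 23 maze words in a 400-word sample
print(percent_maze_words(23, 400))  # 5.75
```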
One potential cause of mazes is linguistic uncertainty, which may stem from lack of familiarity with a topic or from linguistic ability. Mazes also may be the result of increased linguistic productivity: as the complexity of utterances increases, mazes may be used as place-holders. Speakers of more grammatically complex languages may show increased maze use (Taliancich-Klinger et al., 2021). Filled pauses may serve a pragmatic function; that is, they may be used to indicate that the speaker has not finished a conversational turn while the speaker formulates the utterance. “Content mazes,” including repetitions and revisions, indicate difficulty with linguistic processing (Thordardottir & Ellis Weismer, 2002).
Because language samples can be repeated frequently, assessing the use of mazes over time and comparing use of mazes to linguistic complexity can help to determine how and why a child is using mazes.
Analyzing pragmatics from language samples
Conversational language samples provide the opportunity to evaluate pragmatics. SALT automatically calculates the following measures, which provide information about attention to the partner’s communication: percent of responses to questions, mean turn length in words, percent of overlapping speech, and number of interruptions. Topic initiation and maintenance, contingent responding, presupposition, turn-taking, and conversational repair all can be assessed in the context of conversation. The degree to which semantics, syntax, and verbal facility interfere with the listener’s understanding of the message also can be evident via language sampling. For example, during a session, a speech-language pathology graduate student asked a 12-year-old girl, Hailey (pseudonym), with developmental language disorder (DLD) to describe pizza. Hailey replied, “It has pepperoni. It has anything. It has like um mushrooms, meat, and and it’s it’s um yeah it has I don’t know.” This example shows fillers (um, yeah), repetitions (and, and; it’s it’s) vague terms (anything), and eventual giving up.
Summary of Language Sample Analysis
Language sample analysis is a critical component of a comprehensive language evaluation. A number of advances in technology have helped to decrease the time and energy historically spent on transcription and analysis, which allows clinicians to spend time interpreting the rich information language samples provide about an individual’s language.
References
Bangert, K., & Finestack, L. (2020). Linguistic maze production by children and adolescents with attention-deficit/hyperactivity disorder. Journal of Speech, Language, and Hearing Research, 63, 274-285.
Brown, R. (1973). A first language: The early stages. Harvard University Press.
Charest, M., Skoczylas, M. J., & Schneider, P. (2020). Properties of lexical diversity in the narratives of children with typical language development and developmental language disorder. American Journal of Speech-Language Pathology, 29(4), 1866-1882.
Covington, M. A., & McFall, J. D. (2010). Cutting the Gordian knot: The moving-average type–token ratio (MATTR). Journal of Quantitative Linguistics, 17(2), 94–100. https://doi.org/10.1080/09296171003643098
Eisenberg, S., & Guo, L. Y. (2016, May). Using language sample analysis in clinical practice: Measures of grammatical accuracy for identifying language impairment in preschool and school-aged children. In Seminars in Speech and Language (Vol. 37, No. 02, pp. 106-116). Thieme Medical Publishers.
Fox, C. B., Israelsen-Augenstein, M., Jones, S., & Gillam, S. L. (2021). Methods for school-age children’s narrative language: Automatic speech recognition and real-time transcription. Journal of Speech, Language, and Hearing Research, 64, 3533-3548. https://doi.org/10.1044/2021_JSLHR-21-00096
Gillam, R. B., & Pearson, N. A. (2017). Test of Narrative Language–Second Edition. Pro-Ed.
Guo, L., Eisenberg, S., Schneider, P. & Spencer, P. (2019). Percent grammatical utterances between 4 and 9 years of age for the Edmonton Narrative Norms Instrument: Reference data and psychometric properties. American Journal of Speech-Language Pathology, 28(4), 1448-1462.
Guo, L.-Y., & Schneider, P. (2016). Differentiating school aged children with and without language impairment using tense and grammaticality measures from a narrative task. Journal of Speech, Language, and Hearing Research, 59(2), 317-329.
Karasinski, C. (2022). Microstructure and macrostructure measures of written narrative, expository, and persuasive language samples. Communication Disorders Quarterly. https://doi.org/10.1177/15257401221111334
Lee, L., & Canter, S. (1974). Developmental sentence scoring: A clinical procedure for estimating syntactic development in children’s spontaneous speech. Journal of Speech and Hearing Disorders, 36(3), 315-340.
Lenhart, M. H., Timler, G. R., Pavelko, S. L., Bronaugh, D. A., & Dudding, C. C. (2022). Syntactic complexity across language sampling contexts in school-age children, ages 8–11 years. Language, Speech, and Hearing Services in Schools, 53(4), 1168-1176.
Lester, H. (2013). A porcupine named Fluffy. Houghton Mifflin Harcourt.
Lester, H., & Munsinger, L. (1990). Pookins gets her way. Houghton Mifflin Harcourt.
MacWhinney, B. (2000). The CHILDES Project: Tools for analyzing talk (3rd ed.). Lawrence Erlbaum Associates.
Miller, J., Andriacchi, K., & Nockerts, A. (2016). Assessing language production using SALT software (2nd ed.).
Miller, J., & Iglesias, A. (2019). Systematic Analysis of Language Transcripts (SALT). [Computer software]. Madison, WI: SALT Software, LLC.
Moyle, M.J., Karasinski, C., Ellis Weismer, S., & Gorman, B. (2011). Grammatical morphology in school-age children with and without language impairment: A discriminant function analysis. Language, Speech, and Hearing Services in Schools, 42, 550-560. DOI: 10.1044/0161-1461(2011/10-0029)
Nippold, M. A., Frantz-Kaspar, M. W., Cramond, P. M., Kirk, C., Hayward-Mayhew, C., & MacKinnon, M. (2014). Conversational and narrative speaking in adolescents: Examining the use of complex syntax. Journal of Speech, Language, and Hearing Research, 57(3), 876-886.
Nippold, M. A., Mansfield, T. C., & Billow, J. L. (2007). Peer conflict explanations in children, adolescents, and adults: Examining the development of complex syntax. American Journal of Speech-Language Pathology, 16(2), 179–188.
Nippold, M. A., Mansfield, T. C., Billow, J. L., & Tomblin, J. B. (2008). Expository discourse in adolescents with language impairments: Examining syntactic development. American Journal of Speech-Language Pathology, 17, 356-366.
Nippold, M. A., Vigeland, L. M., Frantz-Kaspar, M. W., & Ward-Lonergan, J. M. (2017). Language sampling with adolescents: Building a normative database with fables. American Journal of Speech-Language Pathology, 26(3), 908–920. https://doi.org/10.1044/2017_AJSLP-16-0181
National Governors Association Center for Best Practices, Council of Chief State School Officers. (2010). Common core state standards. Washington, DC: Author. http://corestandards.org/
Overton, C., Baron, T., Pearson, B. Z., & Ratner, N. B. (2021). Using free computer-assisted language sample analysis to evaluate and set treatment goals for children who speak African American English. Language, Speech, and Hearing Services in Schools, 52(1), 31-50. https://doi.org/10.1044/2020_LSHSS-19-00107
Owens, R., & Pavelko, S. (2020). Sampling utterances and grammatical analysis revised (SUGAR): Quantitative values for language sample analysis measures in 7- to 11-year-old children. Language, Speech, and Hearing Services in Schools. https://doi.org/10.1044/2020_LSHSS-19-00027.
Pavelko, S. L., & Owens Jr, R. E. (2019). Diagnostic accuracy of the Sampling Utterances and Grammatical Analysis Revised (SUGAR) measures for identifying children with language impairment. Language, Speech, and Hearing Services in Schools, 50(2), 211-223.
Pavelko, S. L., Owens Jr, R. E., Ireland, M., & Hahs-Vaughn, D. L. (2016). Use of language sample analysis by school-based SLPs: Results of a nationwide survey. Language, Speech, and Hearing Services in Schools, 47(3), 246-258.
Ramos, M. N., Collins, P., & Peña, E. D. (2022). Sharpening our tools: A systematic review to identify diagnostically accurate language sample measures. Journal of Speech, Language, and Hearing Research, 65(10), 3890-3907.
Scarborough, H. S. (1990). Index of productive syntax. Applied Psycholinguistics, 11(1), 1–22. https://doi.org/10.1017/S0142716400008262
Scott, C. M. (2020). Language sample analysis of writing in children and adolescents: Assessment and intervention contributions. Topics in Language Disorders, 40(2), 202-220.
Steig, W. (2013). Doctor De Soto. Farrar, Straus and Giroux (BYR).
Taliancich-Klinger, C. L., Summers, C., & Greene, K. J. (2021). Mazes in Spanish-English dual language learners after language enrichment: A case study. Speech, Language, and Hearing. https://doi.org/10.1080/2050571X.2021.1877049
Thordardottir, E., & Ellis Weismer, S. (2002). Content mazes and filled pauses in narrative language samples of children with specific language impairment. Brain and Cognition, 48, 587–592. http://europepmc.org/abstract/MED/12030512
Westby, C. (2000). A scale for assessing children’s play.
Wilder, A., & Redmond, S. (2022). The reliability of short conversational language sample measures in children with and without developmental language disorder. Journal of Speech, Language, and Hearing Research.
Yang, J.S., MacWhinney, B., & Bernstein Ratner, N. (2022). The Index of Productive Syntax: Psychometric properties and suggested modifications. American Journal of Speech-Language Pathology, 31, 239-256. https://doi.org/10.1044/2021_AJSLP-21-00084