Motive

The rationale for this chapter is that the methods used to devise this assessment as well as other processes such as the Delphi technique can be applied to different Bantu languages. Speech–language therapists (SLTs) would then be able to assess preschool children to ensure that they receive required language assistance. This assistance needs to be provided as early as possible in a child’s life to prevent the long-term consequences of language disorders.

Research Questions

The aim of the study was to devise an expressive and receptive isiZulu language assessment for preschool children. The sub-aims were the following:

  • To establish the necessary components of language assessment in isiZulu

  • To determine the expressive and receptive language ability of preschool isiZulu children living in Soweto as measured on the language assessment

  • To determine the appropriateness of the assessment tool devised, using quantitative and qualitative analysis

Problem Background

One of the major problems facing the speech therapist in South Africa today is the absence or inadequacy of tests available for use with the black population. Factors such as educational, cultural, linguistic and environmental considerations mean that the tests that are available, generally standardized in England or the United States, standardized on western white middle class populations are found to be inappropriate in the accurate evaluation of the black South African population. Therefore, new tests must be created so as to overcome the limitations of translated imported tests and to fill the need for assessment tools for the black population. (Ballantine et al., 1976, p. 5)

The above plea has been reiterated many times over the last four decades (Bortz, 1995; Southwood & van Dulm, 2015). It has begun to be heeded by some researchers and therapists. Morgan and her team at the Chris Hani Baragwanath Hospital began working on these kinds of material during the 1970s and 1980s. Other South African authors who have devised multilingual and multicultural materials include Maphalala (2012), Gonasillan et al. (2013), Mdladlo (2014), Tshule (2014), and Mazibuko (2018).

Normed African assessments have also been devised. These include the Spoken Language Assessment Profile-Revised (SLAP-R, Kramer & Hartley, 2013) and the Malawi Developmental Assessment Tool (MDAT, English and Chichewa), which has now been translated into Kinyarwanda (Gladstone et al., 2010). In addition, Tchoungui Oyono (2016) has developed a Cameroonian speech and language assessment, the Evaluation du Langage Oral.

Pascoe and Norman (2011) wrote a powerful editorial asking whether there were contextually relevant resources in speech–language therapy (SLT) and audiology in South Africa. In this editorial, they also asked where this research was hiding.

And yet, the plea continues to be made. As recently as 2015, Southwood and van Dulm explained that there “is still an absence of appropriate assessment and remediation material for Afrikaans and African languages” (Southwood & van Dulm, 2015, p. 1). These authors studied the challenge of linguistic and cultural diversity with a sample of 71 South African SLTs with over 20 years of experience (the more experienced group) as well as 79 less experienced therapists with at least 5 years of experience. The results showed that most SLTs, regardless of level of experience, were aware of the need to consider the underlying linguistic base of the assessment instruments they used, but few considered the cultural and linguistic appropriateness of these instruments (Southwood & van Dulm, 2015).

Southwood and van Dulm suggest that standardized tests are possibly not devised for South Africans because of the challenges of translation. They also reported that standardizing a translated instrument on a representative sample is expensive. However, we do not agree with this view. The majority of South Africans and Africans have been deprived of essential assessment materials for far too long. It is hoped that this information will motivate and assist SLT colleagues to devise much-needed methods for their communities.

Therefore, the purpose of this chapter is to provide details about how a standardized isiZulu Expressive and Receptive Language Assessment (ZERLA) was devised (Bortz, 1995).

One of the reasons a standardized language assessment is necessary is that unsuccessful attempts have been made to address the absence of standardized tests by translating existing standardized tests into one or more of the Bantu languages. Masiloane (1983) performed a literal translation of the Reynell Developmental Language Scales (RDLS, Reynell, 1977) into isiZulu. The results revealed difficulties with the translated version, as the children did not know lexical items such as Santa Claus.

The challenge of literal translation is that although it ensures that the basic meaning is retained, it results in a structure which differs in syntactic complexity, semantic form, and pragmatic implications from the original (Paltiel, 1990). Literal translation also ignores cultural and linguistic differences (Hartley, 1986). Pascoe and Norman (2011) report on unsuccessful studies which found that “simply translating the language of a test does not make it appropriate for another population group, as the culture and context of the target population needs to be considered to avoid misinterpretations of the results” (p. 3).

Another approach to addressing the lack of standardized language assessment in South Africa has been to use criterion-referenced tests. Second-year SLT students at the University of the Witwatersrand (2015) completed an assignment in which they adapted standardized tests to suit the South African population. They focused on providing vocabulary items that were more linguistically and culturally appropriate for the South African population. An example was using the item “engine” for “caboose” on the Peabody Picture Vocabulary Test (Dunn & Dunn, 1997). However, ultimately standardized language assessment is the most powerful tool available to assess children effectively and prevent the long-term sequelae of language disorders.

Theoretical Motivation

Importance of Standardized Language Assessment

The use of formal testing instruments in speech and language pathology derives from a simple concern within the profession – to provide an orderly, systematic, and convenient basis for tapping the language capabilities of a population of speakers. (Wolfram, 1983, p. 21)

Standardized language assessments are one of the most effective methods to diagnose language disorders (Lund & Duchan, 1993). However, language assessment is not an easy task.

Weiss et al. (1987) state that the most important aspect of clinical management is assessment. Standardized tests are also important for therapy because “they can be used to guide interventions and measure treatment outcomes” (Weiss & Zureich, 2009, p. 4).

Standardized language tests can take the form of diagnostic tests which detect the nature of the language disorder so that appropriate therapy programs can be devised for a client (Dale & Henderson, 1987). They can also take the form of screening tests. Screening tests provide a sample of broad-based language behaviors for the purpose of selecting children who need further language assessment (McCauley & Demetras, 1990).

Another advantage of standardized SLT tests is that they are objective. These tests can also compare the skills of a child to those of a larger group of similar children, and test administration is “usually efficient,” according to Shipley and McAfee (2015).

Hyter and Salas-Provance (2019, p. 231) recommend developing a new test if the person’s ethnic and/or language background is not represented in the normative sample.

Epidemiology of Language Problems

In the SLT field, prevention is defined as “the elimination of factors which interfere with the normal acquisition and development of communication skills” (American Speech–Language-Hearing Association [ASHA], 1982, p. 425). Prevention occurs at the primary, secondary, and tertiary levels (Gerber, 1990). The ASHA Committee on Prevention of Speech, Language, and Hearing Disorders (1988) strongly recommended increased development and implementation of primary prevention strategies, particularly for low-income populations who are at the greatest risk for conditions that can lead to communication disorders. Secondary prevention is the early detection of a communication challenge and aims at reducing the prevalence of a communication disorder (Gerber, 1990). Tertiary prevention relates to treatment of disorders, which is the traditional focus of attention for SLTs.

From both epidemiological and clinical perspectives, it is imperative to identify language problems as early in a child’s life as possible so that the child will receive therapy in a timely manner and thereby obtain maximal benefit from therapy (Wetherby 1985 in Wetherby et al., 1989). If communication problems are not treated during the critical language learning period (Lenneberg, 1967), then “the deficiencies in communication skills result in academic failure, social maladjustment, and the need for special care programs, often at considerable cost to society” (Ehrlich et al., 1973, p. 522).

Early identification could also prevent more serious and long-term repercussions of language delay, such as problems with education and social or vocational opportunities (Aram & Nation, 1980; Bernstein & Tiegerman, 1993; Wetherby et al., 1989).

Features and Psychometric Criteria Necessary for Standardized Language Tests

McCauley and Swisher (1984) conducted a seminal study which outlined ten essential criteria that are required in any standardized language test. Therefore, when devising a standardized language test, the SLT needs to ensure the following:

  • “A consideration of cultural factors. This factor is extremely important in multicultural South Africa and Africa. There are varying communication rules among different cultural groups… and diagnosis of a person with a communication disorder is more likely to be effective if one uses instruments, interpersonal interaction, testing and interpretation of findings that are consistent with the communication rules of the group from which the person comes” (Taylor & Payne, 1994, p. 164). The New Reynell Developmental Language Scales (NRDLS) includes a Multilingual Toolkit (Edwards et al., 2011). This toolkit assists with concepts and materials of linguistic diversity for children who speak English as an additional language.

  • A specific set of instructions and stimuli to elicit the required behavior (Bernstein & Tiegerman, 1993).

  • A specific set of standards for scoring and interpreting the elicited behavior (ASHA, 1989). Traditionally, language tests such as the RDLS (Reynell, 1977) score responses as correct or incorrect. McCartney (1993) suggests that this form of scoring not be used, because “yes-no” answers are not representative of “normally developing children” (p. 41). McCauley and Swisher (1984) state that test administration must be described in sufficient detail to ensure replication of the administration and scoring procedures on which norms are based. Means and standard deviations are derived from raw scores (Kinsey, 2010; McCauley & Swisher, 1984).

  • A sample test population, encompassing a general geographical area and standardized for a broad range of social class, intelligence, and dialect (Emerick & Hatten, 1979).

  • A reliable and valid measure of language. Reliability and validity are dependent on each other, in that making judgments on whether an assessment is valid depends on whether the assessment is reliable. The opposite also holds true (Beech et al., 1993; Plante & Vance, 1994).

  • Item analysis. This is a method used to identify the best items within a pool of potential items (Hresko et al., 1991). Anastasi (1990, p. 202) states that performing item analysis is an alternative indication of validity and reliability and that “high reliability and validity can be built into a test in advance through item analysis.”

  • Description of tester qualifications. The general background and training required for the people who are to administer and score the test should be described.

McCauley and Swisher (1984) reviewed 30 language and articulation tests against the psychometric criteria described above. They found that only 20% of the tests met at least half of these criteria, while most tests met only two. Plante and Vance (1994) repeated this review a decade later and found that 38% of tests met at least half of the criteria; the modal number of criteria met had increased to four. Most recently, Kinsey (2010) reviewed 15 norm-referenced tests published after 1998. She found that the reliability and validity of tests have improved since McCauley and Swisher’s original study.

Friberg (2009, p. 78) describes the additional criterion of identification accuracy: “Identification accuracy refers to an assessment tool’s ability to accurately diagnose the presence or absence of a speech and/or language disorder.”

Structure of isiZulu

isiZulu was selected as the language in which to devise the standardized language assessment. isiZulu is the most commonly spoken first and second language in South Africa (spoken by 22.4% of the population) (Statistics South Africa, 2011). isiZulu is also the lingua franca of cities like Soweto (Crawhall, 1994), as shown in Fig. 19.1.

Fig. 19.1

Percentages of languages spoken in South Africa. Note: Data on “sign language” were not collected in 2001. Slight differences exist on how the question on language was asked in the two censuses. (Based on Statistics South Africa, 2001, 2011)

The structures described below are those that were included in this standardized language assessment. isiZulu is a partially agglutinating language where words typically consist of more than one morpheme, for example:

  • a-ngi-m-bon-anga

  • Neg-I-him-see-negative

  • “I didn’t see him.”

Typologically, isiZulu and Bantu languages are characterized by noun class systems, extensive agreement, and a suffixal system of verbal derivatives (Doke, 1990).

Noun class prefixes can be singular or plural, “where the nominal stem is invariant” (Demuth, 1992, p. 560). Nouns in isiZulu are composed of two formatives, a prefix and a stem, for example:

umntwana NC1 > um + ntwana “child”

Prefixes vary to express number and the noun class (NC) to which the noun belongs, while the stem remains constant (Taljaard & Bosch, 1988):

abantwana NC2 aba + ntwana “children”

There are six paired singular/plural noun classes, as well as two further noun classes with their own prefixes (Cope, 1984). Noun class prefixes determine the semantic content of noun classes as well as control extensive concordial agreement (Cope, 1984).
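As a minimal illustration of the prefix-plus-stem mechanics described above, the sketch below composes nouns from a (much simplified, two-class) prefix table. The helper names and the restriction to classes 1 and 2 are our own, purely for illustration, and not part of the ZERLA.

```python
# Illustrative sketch: isiZulu nouns as noun-class prefix + invariant stem.
# Only classes 1 and 2 (the "human" classes) are modeled here; the real
# system has many more classes.
NOUN_CLASS_PREFIXES = {
    1: "um",    # singular, e.g. um-ntwana "child"
    2: "aba",   # plural counterpart of class 1, e.g. aba-ntwana "children"
}

# Singular/plural pairing: class 1 pluralizes as class 2.
PLURAL_OF = {1: 2}

def build_noun(noun_class: int, stem: str) -> str:
    """Compose a noun from its class prefix and invariant stem."""
    return NOUN_CLASS_PREFIXES[noun_class] + stem

def pluralize(noun_class: int, stem: str) -> str:
    """Swap the singular prefix for its plural-class prefix; the stem stays constant."""
    return build_noun(PLURAL_OF[noun_class], stem)

print(build_noun(1, "ntwana"))   # umntwana "child"
print(pluralize(1, "ntwana"))    # abantwana "children"
```

The point of the sketch is simply that number is carried entirely by the prefix: the stem -ntwana never changes.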

Semantic Origin

Historically, Bantu noun classes were assumed to be semantically based (Kunene, 1979). The term “semantically based” refers to similar nouns being categorized into corresponding meaning classes. An example is that noun classes 1 and 2 are referred to as the “human classes.”

Agreement

The noun class system determines alliterative agreement that links nouns to other words in the sentence (Taljaard & Bosch, 1988). Adjectives, relatives, possessives, and verbs are inflected to agree with the head noun.

Subject and Object Prefixes

Taljaard and Bosch (1988, p. 30) state that “the subject concord always bears a close relationship to the class prefix of the noun which is the subject of the clause. Object concord has a similar structure to subject concord.”

Adjectives and Relatives

Adjectives are composed of stems and adjectival concordial prefixes and can only be used when there is agreement between the stem and the concord. The adjective follows the noun in the sentence, for example:

  • in-gubo en-khulu

  • NC9-blanket AP9-large

  • “the big blanket”

However, adjectives are not common in isiZulu and Bantu languages. Instead, relative stems are used. An example is that, in English, colors are represented as an adjective but, in isiZulu, colors are represented by relative stems (Doke, 1990).

Verb Morphology

As Table 19.1 shows, verbs in isiZulu may have a very complex structure as a range of tense, aspect, concordial, and derivational affixes occur with the basic verb stem. The meaning of the verb is carried in the radical (Doke, 1990). Verbs can be distinguished from nouns in that their stems end in a consonant and the final vowel is an inflection, for example:

Table 19.1 Aspects of isiZulu verbal morphology (Bortz, 1995, p. 62)
  • u-ya-hamba

  • SP1-tense-go-present tense

  • “he travels”

Tenses are marked with verbal affixes: the prefixes yo- or zo- in the future tense, the suffix -a in the present tense, and -e/-ile in the past tense (Doke, 1990). Passivization is a productive grammatical category in Bantu languages, and isiZulu makes much use of the passive voice (Doke, 1990). The verb is marked with the passive extension -w-.

Varieties of isiZulu

Urbanization, immigration, and migration are all factors in the emergence of urban varieties of isiZulu. Because of these processes, much contact between isiZulu and English and Afrikaans took place (Calteaux, 1992). Similar processes have occurred with other Nguni and Sotho languages.

Standard isiZulu is spoken in the rural areas of KwaZulu-Natal (Doke & Vilakazi, 1958). G.K. Schuring (personal communication, April 1, 1993) states that standard varieties tend to be based on a form of the language that was spoken about five decades earlier, as isiZulu speakers in rural areas are slower to accept innovations.

In contrast to the standard variety, different varieties of isiZulu are spoken by Sowetans. These can be called isiZulu B, Soweto isiZulu, Township isiZulu, or Colloquial isiZulu (G.K. Schuring, personal communication, April 1, 1993).

Methods

Research Design

This study used a mixed design, with emphasis on the quantitative component. The quantitative aspect of the design was cross-sectional, in which, according to Bordens and Abbott (2013), different participants are selected from a number of age groups. Focus groups and in-depth interviews formed the qualitative aspect of the design and were used to determine items for the structures of isiZulu to be examined.

Ethical Clearance

Ethical clearance was obtained from the Human Research Committee (Non-Medical) of the University of the Witwatersrand. Informed consent forms were obtained from parents and the principals of the preschools that the participants attended.

Participants

Table 19.2 shows the number of participants in both phases of the study. Participants were required to be first-language isiZulu speakers who attended preschools. They were aged between 2.6 and 5.5 years for the pre-standardization phase and 3.9–4.3 years for the standardization phase. Regarding the participants’ language, when teachers were asked “what language do the children speak at preschool?,” their answer was “Soweto language.” Soweto language is a mixture of the languages spoken at home together with isiZulu. The participants attended full-day preschools belonging to the African Self Help Association.

Table 19.2 Description of subjects in the standardization phase (Bortz, 1995, p. 98)

Research Assistants

Tester bias can strongly influence a child’s performance on language assessment (ASHA, 2004; Leaders Project, 2013). To ensure that the children would be comfortable interacting with the research assistants, the research assistants were isiZulu first-language speakers. They were also required to live in Soweto to ensure that their sociolinguistic environment was consistent with that of the participants. A total of 12 research assistants participated in this study.

Research assistants’ tasks included administering, scoring, coding, and analyzing data. The researcher did not fit these criteria, and due to a possible difference in the results obtained when a person from a different culture is present, she did not take an active part in testing (Taylor & Payne, 1983).

Data Collection

To devise the standardized language assessment, a pre-standardization phase and a standardization phase occurred, as shown in Fig. 19.2. The standardized language assessment was called the isiZulu Expressive and Receptive Language Assessment (ZERLA).

Fig. 19.2

Pre-standardization and standardization phases of the ZERLA development. (Adapted from Bortz (1995, p. 17))

Procedure

The most important aspect of the pre-standardization phase was to define the principles upon which the ZERLA was based. Vaughn-Cooke (1986) suggested that a test should be based on valid assumptions about language, should provide an adequate description of some aspect of the child’s knowledge of language, and should yield results that provide principled guidelines for language intervention.

The ZERLA was based on an adaptation of Lahey (1988) and Bloom and Lahey’s (1978) model of language. In this model, language consists of three separate but interrelated components: form, content, and use. These components operate together and need to be regarded together when assessing language (Bloom & Lahey, 1978). This model of language also forms the basis for other recognized language assessments such as the Test of Early Language Development—Second Edition (TELD-2, Hresko et al., 1991).

Regarding the use of language, Hresko et al. (1991, p. 2) report that language use was not used as a dimension of the TELD-2, because operationalizing language use is extremely challenging in that the concept does not lend itself easily to standardized test formats.

The ZERLA does not formally assess pragmatics for similar reasons. However, attempts were made to devise the ZERLA in a pragmatically sensitive manner (Lund & Duchan, 1993), through assessing language in context. Examples include adjectives and relatives being assessed within the theme of dressing a doll.

Given the enormous complexity of language, it would be unrealistic for a test to evaluate in detail every aspect of a speaker’s linguistic knowledge. An adequate test should have a clearly defined focus, that is, it should be specifically designed to assess the grammatical, semantic, or pragmatic systems or subcomponents of these systems (Vaughn-Cooke, 1986, pp. 43–44). The ZERLA was, therefore, specifically designed to evaluate children’s knowledge of the morphology and syntax of isiZulu through the use of expressive and receptive subtests. The purpose of each subtest was to provide an indication of the participant’s knowledge of a specific component of her/his language.

Expressive language was assessed using spontaneous language sampling. According to Gallagher (1983, p. 2), “spontaneous language sampling is the centerpiece of child language assessment.” The ZERLA utilized a semi-structured language sample which assessed the participants’ expressive vocabulary. Research assistants asked questions which required responses from the participants. The conversations included descriptions of concrete “here-and-now topics” (Snow, 1981), for example, ugqokani namhlanje “what are you wearing today?” The participants’ knowledge of agreement and relative stems was also elicited in this manner. The subtests of the ZERLA can be seen in Fig. 19.3.

Fig. 19.3

Components of the ZERLA. (Adapted from Bortz (1995, p. 112))

One aspect of Bloom and Lahey’s (1978) model that was not used in the ZERLA was phonology. Tone, an integral part of isiZulu, was also not assessed. This omission is a limitation of the study.

ZERLA Materials

Using objects for ZERLA materials was based on the same principles as the RDLS-2 (Reynell, 1977). However, efforts were made to ensure that these materials were suitable for the Soweto population via an informal focus group of Soweto mothers, who had come to the Chris Hani Baragwanath Hospital for a routine developmental follow-up for their children. They were questioned about what their children played with at various ages. Examples of items provided by the mothers included household items and toys. In addition, stimuli that were commonly seen in Soweto, such as ihashi “horse” (Dellatola, 1990), were used. Items were also selected according to a list that Reynolds (1989) found that Black South African children commonly played with, such as combs and plastic bottles.

An activity mat was designed by an art student living in Soweto and studying at a local college. The mat depicts a “typical” Soweto neighborhood with characteristic “matchbox” houses and a groundini “sports field.” Items not common in Soweto, such as flowers, were thus excluded (Dellatola, 1990).

Treatment of Data: Coding and Analysis

On the ZERLA, correct responses, marked with a tick, were given a score of 1; incorrect responses, marked with a cross, received a score of 0. In the pilot test phase, the research assistants also had to record the response that the child had made.

Statistical Analysis

Descriptive statistics were applied to analyze the results. For each participant, the number of correct responses out of the total possible responses on each subtest of the ZERLA was calculated as a percentage. Frequency counts of all the responses across subtests were then combined to obtain a composite score for each participant.

Norms were obtained by calculating means and standard deviations from the raw scores. Measures of internal consistency, test-retest reliability, and mark-remark reliability were determined, as were different measures of validity such as concurrent and internal validity.
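The internal-consistency step can be sketched for binary-scored (1/0) items. The chapter does not specify which statistic the ZERLA used, so the Kuder-Richardson formula 20 (KR-20), a standard choice for dichotomous items, is shown here purely as an illustration; the function name and data are invented.

```python
# A minimal KR-20 sketch for binary-scored (1/0) items.
# KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / var_total)

def kr20(item_scores):
    """item_scores: list of per-child lists of 0/1 item scores."""
    n = len(item_scores)          # number of children
    k = len(item_scores[0])       # number of items
    totals = [sum(child) for child in item_scores]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # p_i = proportion correct on item i; q_i = 1 - p_i
    pq_sum = 0.0
    for i in range(k):
        p = sum(child[i] for child in item_scores) / n
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 3))   # 0.667
```

Values closer to 1 indicate that the items hang together as a consistent scale.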

Analyses of variance (ANOVAs), t-tests of significance, and Bonferroni t-tests all assume normally distributed scores. Therefore, chi-square goodness-of-fit tests were administered to check this assumption. These tests indicated that the receptive language score and the composite score of the ZERLA showed good fits with the normal distribution.
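A chi-square goodness-of-fit check against a fitted normal distribution can be sketched as follows: observed score frequencies in bins are compared with the frequencies a normal curve with the sample mean and SD would predict. The bin edges, sample scores, and helper names below are invented for illustration; the ZERLA's actual data are not reproduced here.

```python
# Chi-square goodness-of-fit of a score sample to a fitted normal curve.
import math

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a normal(mean, sd)."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

def chi_square_normal_fit(scores, bin_edges):
    """Return the chi-square statistic for scores vs. a fitted normal."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    chi2 = 0.0
    for lo, hi in zip(bin_edges, bin_edges[1:]):
        observed = sum(lo <= s < hi for s in scores)
        expected = n * (normal_cdf(hi, mean, sd) - normal_cdf(lo, mean, sd))
        chi2 += (observed - expected) ** 2 / expected
    return chi2

scores = [52, 55, 58, 60, 60, 61, 63, 65, 68, 71]
print(chi_square_normal_fit(scores, [50, 57, 62, 67, 75]))
```

The statistic is then compared with the chi-square critical value for (number of bins − 1 − 2) degrees of freedom, the −2 reflecting that the mean and SD were estimated from the data; a small statistic, as here, indicates a good fit.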

Qualitative Analysis

As many of the measures of reliability and validity are not empirical, qualitative analysis was used. Qualitative analysis provides a comprehensive description and rationale for the results, particularly reliability (Marshall & Rossman, 1989; Patton, 1988). Participant observation, in-depth interviews, and focus group interviews were used.

Results and Discussion

Item Analysis

Items were analyzed according to discriminating power, which was determined for each item using the point-biserial correlation technique (Howell, 1989). Item difficulty is defined as “the proportion of examinees who got the item correct” (Hresko et al., 1991, p. 40). Good test items should range in difficulty between 15% and 85%. Although this is a fairly wide dispersion, Anastasi (1990) argued that items should have an average percentage of difficulty.
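The two item statistics described above can be sketched in code: item difficulty is the proportion of examinees answering an item correctly, and the point-biserial correlation relates each 0/1 item score to the total score. The data and function names below are invented for illustration only.

```python
# Item difficulty and point-biserial discrimination for one binary item.
import math

def item_difficulty(item_scores):
    """Proportion of examinees who got the item correct (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, totals):
    """Point-biserial correlation between a 0/1 item and total scores:
    r_pb = (M1 - M0) / sd_total * sqrt(p * q)."""
    n = len(totals)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    correct = [t for s, t in zip(item_scores, totals) if s == 1]
    wrong = [t for s, t in zip(item_scores, totals) if s == 0]
    p = len(correct) / n
    m1 = sum(correct) / len(correct)   # mean total of those who passed the item
    m0 = sum(wrong) / len(wrong)       # mean total of those who failed it
    return (m1 - m0) / sd_t * math.sqrt(p * (1 - p))

item = [1, 1, 1, 0, 0, 0]
totals = [9, 8, 7, 5, 4, 3]
print(round(item_difficulty(item), 2))       # 0.5 -> within the 15%-85% band
print(round(point_biserial(item, totals), 2))  # 0.93 -> strongly discriminating
```

An item is retained when its difficulty falls in the 0.15–0.85 band and its point-biserial correlation is acceptably high.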

The results showed that 37% of the ZERLA items met the criterion for discriminating power, and 51% had appropriate item difficulty. Thus, not all items included in the ZERLA met the criteria for appropriate discriminating power or item difficulty, which is a limitation of this study. However, Anastasi (1990) noted that including only items with good discrimination can lower the validity of a test.

Norms

Means, standard deviations, standard scores, and percentile ranks made up the norms for the ZERLA. The means and standard deviations contributed to developmental norms for the ZERLA, while percentile ranks provided within-group norms. These normal distributions provided additional indicators of the validity of the ZERLA.
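The norm components described above can be sketched as follows: z-scores (standard scores) locate a raw score relative to the sample mean and standard deviation, and percentile ranks locate it within the norming sample. The sample data and function names are invented for illustration.

```python
# Standard scores and percentile ranks from a norming sample.
import math

def z_score(raw, mean, sd):
    """Standard score: distance from the mean in SD units."""
    return (raw - mean) / sd

def percentile_rank(raw, sample):
    """Percentage of the norming sample scoring below `raw`,
    counting ties as half (a common convention)."""
    below = sum(s < raw for s in sample)
    ties = sum(s == raw for s in sample)
    return 100 * (below + 0.5 * ties) / len(sample)

sample = [40, 45, 50, 50, 55, 60, 60, 65, 70, 75]
mean = sum(sample) / len(sample)
sd = math.sqrt(sum((s - mean) ** 2 for s in sample) / len(sample))
print(round(z_score(60, mean, sd), 2))   # 0.28 -> slightly above the mean
print(percentile_rank(60, sample))       # 60.0 -> 60th percentile
```

A clinician comparing a new child's raw score against such a table can read off how far from the norming sample's mean the child falls.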

However, the chi-square goodness-of-fit test showed that a poor fit was obtained for expressive scores, as a significant difference from the normal curve was obtained. Poor results on expressive measures of language assessment are not unusual. According to Letts et al. (2010), the expressive scales of the first two versions of the RDLS (Reynell, 1969, 1977) were difficult to administer objectively and yielded uninformative results. This resulted in the development of the Reynell III.

The results of standardization demonstrate that the ZERLA can be utilized to assess the language abilities of isiZulu-speaking preschoolers. SLTs can compare a child’s performance with that of the participants assessed in the standardization sample of the ZERLA, in order to determine if the child has any language challenges. The ZERLA can be used as a standardized language test because it contains sufficient items which have discriminating power and appropriate item difficulty. The ZERLA is also a norm-referenced instrument. The fact that scores obtained on the ZERLA are representative of a normal distribution is an additional verification that the ZERLA can be used for the identification of language difficulties. The standardization process also showed that the ZERLA is a reliable and valid measure (see Table 19.3).

Table 19.3 Z-scores and percentile ranks of the ZERLA (Bortz, 1995, pp. 463–464)

Impact

Language Acquisition

The findings of the ZERLA reflected the universal nature of language; for example, noun classes were found to develop earlier than verbs and receptive language abilities predominated over expressive abilities (Bloom & Lahey, 1978; Lahey, 1988). Studies of isiZulu and Sesotho language acquisition, such as those conducted by Demuth (1989, 1992), Demuth et al. (2010), and Suzman (1985, 1991), indicated that children acquired their noun class systems and concordial morphology, passive, and relatives by age 3 years. The present study verified these findings regarding noun class and agreement. However, it found that the children tested on the ZERLA still had difficulty with their relative structures even at the age of 5 years.

There were also sociolinguistic and cultural findings from this study. All languages undergo continuous change (Aitchison, 1991; Hickey, 2003; Kamwangamalu, 1989). However, in South Africa, change has been accelerated by the transformation of the political situation and by changes in the education system. These changes in education have resulted in a strong influence of spoken English on isiZulu. English is becoming a lingua franca of Soweto, as this is the language which people use to gain employment. Parents also want their children to be educated in English, as they view this as a language of increased opportunities (Jordaan, 2011).

Vaughn-Cooke (1986, p. 38) states that “forms of language used to code concepts can vary as a function of age, sex, social class, ethnicity and geographical region.” The effects of language shift in isiZulu and of geographical region certainly influenced the ZERLA in a myriad of ways; for example, preliminary investigations showed that standard isiZulu agreement using subject prefixes had to be omitted, for example, -thatha aba-ntwana (take NC2-children) “take the children” instead of ba-thatha aba-ntwana (SP2-take NC2-children) “take the children.” The variety and inconsistency of Soweto isiZulu and the code-switched responses that subjects used when naming colors or numbers also showed the influence of language shift. Table 19.4 shows how features of the ZERLA can be compared to criteria of standardized language assessments (McCauley & Swisher, 1984; Plante & Vance, 1994; Vaughn-Cooke, 1986).

Table 19.4 Features of the ZERLA compared to criteria of standardized language tests (Bortz, 1995, p. 267f; derived from Hresko et al. (1991), Lund and Duchan (1993), McCauley and Swisher (1984), Vaughn-Cooke (1983, 1986))

Epidemiological Implications

South Africa has adopted the primary health-care system, which consists of primary, secondary, and tertiary prevention (Gerber, 1990). This assessment can be used at all levels of prevention as can be seen in Fig. 19.4.

Fig. 19.4

The use of the ZERLA for prevention of language disorders. (Adapted from Bortz (1995, p. 275))

At the primary level of prevention, for example, the screening version of the ZERLA can be used. At the secondary level, the ZERLA can be used to detect and identify language problems in individual children, thus fulfilling the criterion of identification accuracy mentioned by Friberg (2009).

The ZERLA can also be used at the tertiary level of prevention (Marge, 1991). Lahey (1988) and Vaughn-Cooke (1986) state that language tests should be able to determine what form of intervention and remediation a child requires. The ZERLA can also be used while the child is receiving therapy and can be readministered to assess progress.

Psychometric Results of the ZERLA

An often-leveled criticism against standardized language assessments is that they “do not report impressive validity data, if any at all” (Lund & Duchan, 1988, p. 289). The ZERLA determined several aspects of validity, as shown in Fig. 19.5.

Fig. 19.5
figure 5

Aspects of validity determined on the ZERLA. (Adapted from Bortz (1995, p. 270))

One of the criticisms traditionally leveled against standardized language assessments is the manner in which they are scored. Scoring is usually binary, using a right-wrong system in which responses are counted as correct (1 point) or incorrect (0 points). Authors such as McCartney (1993) are concerned that this kind of coding is not representative of developing children’s language. To deal with language variation, scoring on the ZERLA was instead done by creating a pool of acceptable responses.
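The pool-of-responses approach can be sketched as follows. This is only an illustration: the items, response pools, and normalization step below are hypothetical assumptions for the sake of the example, not the ZERLA’s actual scoring tables.

```python
# Illustrative sketch of pool-based scoring. Rather than comparing each
# response to a single "correct" form, a pool of acceptable variants
# (gathered during piloting) accommodates dialectal variation and
# code-switching. All items and pools here are hypothetical.

ACCEPTED_RESPONSES = {
    "colour_red": {"bomvu", "-bomvu", "red"},
    "take_children": {"thatha abantwana", "ba-thatha abantwana"},
}

def normalize(response: str) -> str:
    """Lower-case and trim a transcribed response before lookup."""
    return response.strip().lower()

def score_item(item: str, response: str) -> int:
    """Return 1 if the response appears in the item's accepted pool, else 0."""
    pool = ACCEPTED_RESPONSES.get(item, set())
    return 1 if normalize(response) in pool else 0

def score_assessment(responses: dict) -> int:
    """Total score across all administered items."""
    return sum(score_item(item, resp) for item, resp in responses.items())

child = {"colour_red": "Red", "take_children": "thatha abantwana"}
print(score_assessment(child))  # both responses fall in their pools -> 2
```

The design point is that the unit of comparison is a set of attested variants rather than one prescriptive form, so a code-switched or Soweto isiZulu response still earns credit.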

Clinical Implications: Examiner Qualifications

An important consideration during this study, and regarding future use of the ZERLA, is who is qualified to administer and score this assessment. There is a shortage of suitably trained SLT personnel in South Africa as well as in Africa (Tuomi, 1994; Bortz et al., 1996; Barratt et al., 2012; Southwood & van Dulm, 2015; Enwefe, A. and S., personal communication; Wiley, personal communication, October 9, 2016). However, as SLTs who specialize in communication, we need to empower ourselves to deal with the linguistic situation that presents itself during assessment.

In the case of the ZERLA, when no isiZulu-speaking SLT is available, a recommendation is that the SLT work with an isiZulu speaker such as the parent or an interpreter. Using the parent can be an added advantage if the child is shy or uncooperative. The SLT needs to train the isiZulu speaker on the purpose of the test and the method of administration, and must ensure that the test is administered according to the detailed instructions required (McCauley & Swisher, 1984).

A criterion for selecting a person to administer the test is literacy, as this facilitates transcription of the child’s responses. The pool of acceptable responses would then be used for scoring.

Utilizing test personnel in this manner would also be beneficial when using the ZERLA to screen large populations. The facilitator could train a number of examiners on test administration and scoring. She/he could then coordinate the process.

Considering the current composition of the SLT profession in South Africa, another alternative is for an English- or Afrikaans-speaking therapist to administer and score the ZERLA. The major drawback to this alternative is that isiZulu is a tonal language (Doke, 1990), and non-isiZulu speakers often experience difficulty with tones.

Despite these issues, non-isiZulu-speaking therapists would be able to administer the ZERLA, provided that they undertake considerable preparation beforehand, for example by practicing with an isiZulu speaker.

Considerations for Future Work, Research, and Politics

Standardization of Assessments in Bantu Languages

Due to the similarities among the languages of the Southeastern Bantu zone, the ZERLA can be translated into other languages, such as those of the Sotho language group (Setswana, Northern Sotho, Southern Sotho). These assessments should also be standardized and normed so that they can effectively evaluate the languages of all children in South Africa. Similarly, because of the similarity of structures across the Bantu languages, the principles used for devising the ZERLA can be applied to other Bantu language zones. Normed language assessments for the African population would assist in preventing the long-term consequences of language delay.

Rural Areas

There is a need to adapt the ZERLA for use in rural environments, as approximately half the population lives in these areas (Hlophe, 1993). Standard dialects of the language are spoken in rural areas; these linguistic differences would therefore have to be reflected in the standardized tests. In addition, results of preliminary testing in rural environments revealed that vocabulary and activities such as the activity mat would need to be modified. The Delphi technique, which according to Hsu and Sandford (2007) is a method used to build consensus, can be used for this purpose.
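As a rough illustration, Delphi rounds are often operationalized by collecting expert ratings per candidate item and checking an agreement statistic. The consensus rule below (median of at least 4 on a 5-point scale, interquartile range of at most 1) is one common convention, not a prescription from Hsu and Sandford (2007), and the panel ratings are hypothetical.

```python
# Illustrative sketch of a Delphi consensus check for candidate test items.
# Ratings are hypothetical 1-5 appropriateness judgments from an expert panel.
import statistics

def iqr(ratings):
    """Interquartile range of a list of ratings."""
    q1, _, q3 = statistics.quantiles(sorted(ratings), n=4)
    return q3 - q1

def reaches_consensus(ratings, median_cutoff=4, iqr_cutoff=1.0):
    """True if the panel agrees an item is appropriate: high median, low spread."""
    return statistics.median(ratings) >= median_cutoff and iqr(ratings) <= iqr_cutoff

# Hypothetical ratings for two candidate items in a rural adaptation.
panel = {
    "item_goat_picture": [5, 4, 5, 4, 4],     # familiar rural referent
    "item_traffic_light": [2, 4, 1, 5, 3],    # urban referent, panel disagrees
}
for item, ratings in panel.items():
    print(item, reaches_consensus(ratings))
```

Items that fail the check would be revised and fed back to the panel in a subsequent round, which is the iterative core of the Delphi technique.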

Development of a South African Multilingual and Multicultural Database

Many authors have spoken about the urgent need for the development of appropriate speech, language, and hearing materials for a multilingual and multicultural South Africa (Ballantine et al., 1976; Bortz, 1995; Southwood & van Dulm, 2015; Tuomi, 1994). For decades, South African SLTs working in hospitals and universities have devised these kinds of materials. Unfortunately, much of this work is unpublished and not shared among colleagues.

Efforts have been made to compile a list of these resources (Professor Shajila Singh, personal communication, 2010–2014). The South African Speech–Language-Hearing Association (SASLHA) discussed this issue at the 2016 SASLHA Conference (October 2016), where an African Connections partnership using email was set up, together with an African Connections committee. The aim is to partner with and assist speech–language and hearing therapists working in Africa with any challenges and needs they may have. A specific aim is to devise a database listing available resources; this email group already assisted in informing this chapter with regard to the assessments used in Africa, as described previously. Such a database would consolidate the materials and prevent “reinventing the wheel,” as clinicians would know what resources are available. An added benefit of such a library would be that clinicians could comment on their experience of the materials, adding to the materials’ reliability and validity. Such a resource would also help prevent findings such as those of Southwood and van Dulm (2015), who found that SLTs with the most experience persist in using assessments that are not linguistically or culturally appropriate for South Africa.