Background

“After almost two decades of official multilingualism in South Africa, SLTs’ practices remain a poor reflection of the multilingual and multicultural realities of the population. This is especially striking given that the focus of our work is on this very aspect: language.” (Van Dulm & Southwood, 2013, p. 55)

South Africa is a richly diverse, multilingual, multicultural country with a progressive constitution that recognizes 11 official languages: nine indigenous languages from the Bantu group and two West-Germanic languages. The Bantu languages include isiZulu, isiXhosa, Sepedi, Setswana, Sesotho, Xitsonga, siSwati, Tshivenda, and isiNdebele. The West-Germanic languages are English and Afrikaans. In addition to these official languages, people in the region speak a range of other indigenous languages, together with languages spoken by immigrants from the rest of Africa and beyond. South African Sign Language is also used across the country, although it is not yet recognized as an official language. The linguistic diversity of the region is accompanied by similar cultural variety, which, coupled with the country’s often-troubled history and its social and economic challenges, creates a dynamic and complex environment rich in opportunity.

Speech–language therapists (SLTs) and audiologists working in this context are tasked with providing equitable services to all people of the country. They face innumerable challenges in doing so, variously described by authors such as Jordaan and Yelland (2003), Pascoe and Norman (2011), and Van Dulm and Southwood (2013). In this chapter, we focus specifically on challenges faced by SLTs in carrying out reliable and valid clinical assessment of the communication of children and adults in this context. It is widely acknowledged that there is a lack of assessment instruments designed for and standardized on the South African population (Jordaan & Yelland, 2003; Mdlalo et al., 2016; Penn, 1998; Van Dulm & Southwood, 2013). Here, we aim to describe the challenges faced by SLTs in assessment and focus in particular on assessments that have been developed for the context, the methodologies that underpin their development, and future work that needs to be done. Alongside these assessment-related challenges, opportunities abound to study under-researched languages, develop new assessment materials and protocols, and set standards for the clinical training of SLTs and audiologists that speak directly to the multilingual, multicultural environment and place us at the forefront of the profession in terms of embracing this diversity.

There are many different ways in which South African SLTs might increase the relevance of their practice for the people of the country. These have been discussed from epistemological (Kathard & Pillay, 2013; Penn, 2014), clinical training (Singh et al., 2015; Watermeyer & Barratt, 2013), and curriculum (Seabi et al., 2014) perspectives. There is an urgent imperative to recruit and train more multilingual SLTs, specifically speakers of African languages who will ensure that the demographics of the SLT workforce better match that of the population. In this chapter, we limit our focus to just one of the ways in which this increased relevance might be achieved, namely, through the development of culturally and linguistically appropriate assessments.

Current Challenges

Contextually relevant resources have been described as any tools (assessments, intervention programs, or guidelines) used with a specific population in a specific setting that were developed with that population and setting in mind (Pascoe & Norman, 2011). Given the limited range of contextually relevant assessments available for children and adults in South Africa, how do clinicians typically proceed? Van Dulm and Southwood (2013) carried out a national survey of SLTs working with children in South Africa. Results suggested that English-speaking South African children are often assessed with British or American instruments, with or without replacement of inappropriate vocabulary items. For Afrikaans-speaking children, translated versions of some of the British/US-based assessments are used. Informal assessments are also frequently administered, with clinicians sometimes devising their own assessments or making their own translations of tests. Van Dulm and Southwood’s survey indicated that there are a few tests available in some of the Bantu languages (e.g., Sepedi and isiXhosa translations of some tests), but for most of these languages, there is nothing at all.

Simply translating an assessment from one language into another does not make it appropriate. Languages are not always equivalent to each other in terms of lexical items or structure. Furthermore, the culture and context of the target population need to be considered to avoid misinterpretation of results (Van der Merwe & le Roux, 2014). There is a growing drive to develop or adapt assessment tools and procedures to match the needs of populations and take different worldviews into account. Carter et al. (2006) emphasized the need to develop culturally appropriate materials that take cultural variation and potential cultural bias into account. Their work in Kenya led to suggestions for clinicians assessing or treating children from a culture different from their own. These included a focus on the influence of culture on performance, familiarity with the testing situation, the effect of formal education, and picture recognition. Gladstone et al. (2010) used focus groups to identify important local concepts and developmental milestones when creating a developmental assessment for children in Malawi. This was done in preference to simply translating and adapting available tools from other settings. The focus groups highlighted social milestones and social intelligence as important components of development for the community, aspects which existing tests may have emphasized less. It should be noted that for many cultures the idea of assessment or testing is itself peculiar and “…a particularly Western middle-class phenomenon as the manner, content and criteria for evaluation are firmly embedded within middle-class culture and standards” (Solarsh & Alant, 2006, p. 2).

One of the problems with the administration of assessments developed for different populations may be the inaccurate identification of individuals with communication difficulties, leading to either over- or under-referral. Pascoe et al. (2015) assessed English-speaking children in Cape Town, South Africa, using the Diagnostic Evaluation of Articulation and Phonology (Dodd et al., 2002). Before considering the adaptations necessary to account for dialectal differences (as advised by the developers of the test), speech difficulties were noted in 21% of the 3-year-olds sampled. However, this figure fell to 6.66% once dialectal differences were taken into account. South African English is the variety of English spoken in South Africa, and it contains distinguishing vowel and consonantal features that must be considered in a phonology assessment (see section “The Languages of South Africa” for further details). Another example of the dangers of using unadapted materials comes from Wilson and Moodley (1999). These authors showed that use of the Central Institute for the Deaf Test W-22 wordlist (a speech discrimination test developed in the USA and widely used by South African audiologists) is problematic. Their participants spoke South African English as their home language and had hearing thresholds within normal limits, yet still performed more poorly than the US counterparts on whom the norms are based. If an assessment is inappropriate or inaccurate and does not take cultural variation and the potential for cultural bias into account, results will not be accurate, and intervention may be inappropriate or even harmful (Carter et al., 2006).

Similar to an earlier survey by Jordaan and Yelland (2003), Van Dulm and Southwood’s (2013) survey reported that all SLT respondents were fluent in English. However, the proportion able to provide services in an indigenous African language was 15%, lower than the figure estimated by Jordaan and Yelland 10 years before (25%). Less than one-fifth of the SLTs surveyed could serve clients in a Bantu language, although approximately 80% of South Africans speak languages from this group as their home language (Statistics South Africa, 2012). This disparity between the languages in which SLTs can provide services and the country’s demographics is striking and remarked on with increasing frequency (Jordaan & Kunene Nicolas, 2016; Mdlalo et al., 2016; Pascoe & Norman, 2011; Van Dulm & Southwood, 2013).

This issue of language mismatch between clinician and client is, of course, not a uniquely South African problem. At an international level, the profession has increasingly focused on the need to serve multilingual, multicultural populations and ways in which this might be done (Leadbeater & Litosseliti, 2014; Legg & Penn, 2013; Verdon, Wong, & McLeod, 2015; Verdon, McLeod, & Wong, 2015). The International Expert Panel on Multilingual Children’s Speech (McLeod et al., 2016) drafted guidelines for addressing this very issue, irrespective of context. The principles detailed in that document are helpful in framing assessment as a broad information-gathering process comprising case history taking, informal observations, and formal assessment that should ideally include all languages spoken by a child. Although access to formal assessments is mentioned as a key part of the process, the document is a helpful reminder that formal assessment tools are just one part of a larger assessment process. In terms of formal assessment tools, the authors detail a step-by-step process which begins with “1: [SLTs should] familiarize themselves with the language and assessment tool/test” (p. 11). Using this as a guide, we have structured the remainder of the chapter as follows: section “The Languages of South Africa” describes the official languages of South Africa in further detail, section “Assessment” gives an overview of assessment principles in general, and in section “A Review of Current Assessments”, we review assessments that have been developed or adapted for use in the South African context.

The Languages of South Africa

A deeper knowledge of all the local languages is essential if we are to provide the same level of service to all people in South Africa. To reach a greater level of understanding will require multidisciplinary research teams comprising linguists, psychologists, and SLTs (Jordaan & Kunene Nicolas, 2016). In the section that follows, we focus on the official languages of the country and describe them in further detail.

IsiZulu is the most widely spoken language in South Africa, with 22.7% of the population speaking it as a first language. This is followed by isiXhosa (16%), Afrikaans (13.5%), and English (9.6%). Figure 17.1 summarizes the first-language distribution of the population of South Africa (based on Statistics South Africa, 2012).

Fig. 17.1 Percentage of first-language speakers in South Africa (based on Statistics South Africa, 2012)

Although English has one of the smaller home language bases in South Africa, it is generally thought to be the country’s lingua franca and the dominant language of trade and industry, science and technology, politics, and education. English is widely favored as the language of learning and teaching in many South African schools, especially from Grade 4 (Lafon, 2008; Posel & Zeller, 2016). Many children thus have home languages different from their language of instruction in school, which may affect their academic development. The language situation is a complex one where the majority languages of isiZulu and isiXhosa actually have a minority status when considering aspects like education and availability of clinical resources and personnel.

Multilingualism is widespread in South Africa, with most people able to speak more than one language. The many languages (official and unofficial) influence each other widely so that, for example, isiXhosa contains words borrowed from Afrikaans, and most South Africans, irrespective of their first language, will know words like hayibo! (an expression of surprise deriving from isiZulu) and braai (an Afrikaans word meaning barbecue). The distribution of languages varies by region: isiXhosa is the main language spoken in the Eastern Cape, and isiZulu is the dominant language in KwaZulu-Natal. IsiZulu is also the most frequently spoken language in Gauteng, although by a smaller proportion of people. Afrikaans is the most widely spoken home language in the Western Cape.

The Bantu languages of South Africa are divided into two main groups: the Nguni group, comprising isiZulu, isiXhosa, siSwati, and isiNdebele, and the Sotho-Tswana group, comprising Northern Sesotho (Sesotho sa Leboa or Sepedi), Southern Sesotho, and Setswana. Although each is a distinct language in its own right, the languages within the Nguni and Sotho-Tswana families are closely related, especially in terms of syntax and lexicon, and are, for the most part, intelligible to a first-language speaker of another language in the group. They all have a subject-verb-object (SVO) structure and agglutinative verb morphology and are tone languages (Zerbian & Krifka, 2008). One of the most widely known and well-described features of the Bantu languages is the noun-class system, in which each noun is assigned to a specific class (ranging from 12 to 20 classes depending on the language), creating a system of grammatical agreement (for further information, see Demuth, 2000; Smouse et al., 2012). Each of the languages comprises a range of dialects deserving of further study and consideration in clinical contexts. For example, isiXhosa is characterized by a number of dialects described in detail by Gxilishe (1996), including the Thembu, Gcaleka, Cele, and Bhaca dialects. Each dialect is linked to a specific geographical region of the country, and although the dialects are mutually intelligible, the differences between them can be marked.

Turning to the two West-Germanic languages, English and Afrikaans: South African English comprises two main varieties, commonly referred to as L2 Black South African English (BSAE) and L1 English, formerly known as White South African English (WSAE) (De Klerk, 1999; Lass, 2004; Mesthrie, 2017). These well-documented world English varieties are in a state of flux that mirrors the dynamic socio-political environment of South Africa. Mesthrie (2017) describes the way in which “traditional” features of these varieties are changing and how the prestigious former WSAE is no longer the preserve of whites only, as South African society deracializes following the demise of apartheid. BSAE is the dialect spoken by first-language speakers of Bantu languages and may also be a regional dialect for some first-language English speakers (De Klerk, 1999; De Klerk & Gough, 2004; Van Rooy, 2008). Defining features of the variety include reduced contrasts between short and long vowels; the use of fewer central vowels; realization of /θ, ð/ as plosives /t, d/; and palatal fricatives /ʃ, ʒ/ produced as alveolars /s, z/ (De Klerk & Gough, 2004; Van Rooy, 2008). Devoicing processes are frequently reported (Lass, 2004; Van Rooy, 2008). In light of the variation within South African English, and the features that distinguish it from other varieties of English, including those in which the majority of standardized tests are normed, it is essential that assessment and therapy materials are adapted for the South African context. South African English (SAE) is described as morphologically impoverished compared to Bantu languages such as isiXhosa, although it does have subject-verb agreement, making it less morphologically impoverished than Afrikaans, which has no noun classes, noun prefixes, or overtly marked subject-verb/object-verb agreement (Potgieter & Southwood, 2016).

Afrikaans, the third most widely spoken language in South Africa (Statistics South Africa, 2012), derives from Dutch and has been influenced by other languages such as English, Malay, German, Portuguese, French, and some of the Bantu languages. It is a dominant language in the Western and Northern Cape regions of the country and, like all of the languages, has a range of dialects linked to different socioeconomic groups and geographical locations.

Van der Merwe and le Roux (2014) discuss the notion of language-specific symptoms of speech, language, and hearing disorders. They suggest that while there are likely universal, language-independent symptoms associated with specific communication disorders, there will typically also be a set of characteristics specific to a given language. Thus, for example, developmental phonological processes are thought to be universal across all languages, but cluster reduction can apply only to languages that contain consonant clusters, and final consonant deletion only to those in which words end in consonants. Salient features of disorders are often described based only on English-language investigations. We need to view these studies critically, with an awareness of how the difficulty would be experienced by speakers of other languages and cultures. Since English and Afrikaans are both Germanic languages, the difficulties could be similar for these languages but are likely to be very different for the Bantu languages. Van der Merwe and le Roux (2014) describe the idiosyncratic features of the sound systems of some of the indigenous languages, suggesting that the first step for SLTs working in South Africa is to understand the reality of Bantu language-specific symptoms. We should be aware that these languages contain sound characteristics that do not occur in the Germanic languages and that these features are not always represented orthographically. These authors suggest that stimulus selection for assessment and therapy has to be considered carefully. Diemer et al. (2015) provide an excellent example of this careful consideration in their paper on phonological awareness assessment in isiXhosa. They provide a critical review of what is known about phonological awareness in the Bantu languages and note that “In adapting a phonological awareness test from a structurally very different language, such as English, a number of decisions must be made about what tasks to use, what linguistic segments to target, how the test relates to the linguistic structure and how to administer the test” (p. 332).

Assessment

Assessment is a key component of clinical practice in SLT. The use of valid and reliable tools is vital for the accurate identification of speech and language difficulties. Once such difficulties have been identified, assessment of an individual’s strengths and weaknesses is needed to plan appropriate intervention. Dockrell and Marshall (2015, p. 116) note, “Effective targeted interventions and the ability to monitor progress require tools that are reliable, valid and fit for purpose.” Although such steps are fundamental in the training of SLTs, being able to undertake such assessment presupposes that valid and reliable tools are available for the particular setting and that the use of the tools will mean that best practice for management can be adopted. Reviews and critiques of available assessments have been undertaken by Friberg (2010), McLeod and Verdon (2014), and Dockrell and Marshall (2015). These papers highlight the psychometric properties that need to be considered when evaluating assessments and emphasize the complex nature of human language and interactions. Dockrell and Marshall (2015) emphasize the value of dynamic assessment as a means of fairly evaluating the skills of individuals from a range of different language and cultural backgrounds. McLeod and Verdon’s (2014) review focused on speech assessment in languages other than English. They described 30 speech assessments covering 19 languages. Approximately half (53.3%) were norm-referenced, with the number of children in the normative samples ranging between 145 and 2568. Many of the assessments met the psychometric criteria for operationalization, although only a small number provided sensitivity and specificity data. These authors noted that in situations where bilingualism is typical, norms should include data for bilingual children.
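To make the sensitivity and specificity figures mentioned above concrete, the following is a minimal sketch of how they are computed from a validation sample; all counts are invented for illustration and do not come from any test reviewed here.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity: proportion of true cases the test identifies.
    Specificity: proportion of non-cases the test correctly clears."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical validation sample: 40 children with independently
# confirmed speech difficulties and 160 typically developing children.
sens, spec = sensitivity_specificity(tp=34, fn=6, tn=144, fp=16)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.85, 0.90
```

In this invented example, the tool misses 6 of the 40 true cases and over-refers 16 typically developing children, which is precisely the kind of information that most of the assessments reviewed below do not yet report.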

Norm-referenced tests involve a comparison of an individual’s score with the scores obtained by a sample of the population: the individual’s performance is placed in relation to the sample as better than most, poorer than most, or comparable to the average score. Most speech and language assessments use this approach. Kester and Brice (2010) suggest that when evaluating an assessment, the norm group be carefully considered, in particular its representativeness and the extent to which it is characteristic of the population of interest. The factors considered most important are age, grade level, gender, geographic region, ethnicity, and socioeconomic status. Guidelines suggest that at least 100 participants per cell are needed when standardizing a test (Vergouwe et al., 2005).

For some purposes, national norms may be most relevant. In other cases, the norms of a specific subgroup may be more relevant.
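As a worked illustration of the norm-referenced comparison, the sketch below converts a raw score into a standard score and percentile; the normative mean and standard deviation are invented, not values from any of the tests discussed in this chapter.

```python
from statistics import NormalDist

def standard_score(raw: float, norm_mean: float, norm_sd: float) -> tuple[float, float]:
    """Place a raw score relative to a normative sample, assuming the
    norm group's scores are approximately normally distributed."""
    z = (raw - norm_mean) / norm_sd
    percentile = NormalDist().cdf(z) * 100  # share of the norm group scoring lower
    return z, percentile

# Hypothetical example: a child scores 34 on a test whose norm group
# (the relevant age band and region) has mean 42 and standard deviation 5.
z, pct = standard_score(34, norm_mean=42, norm_sd=5)
print(f"z = {z:.2f}, percentile = {pct:.1f}")  # z = -1.60, percentile = 5.5
```

The choice of norm group matters here: the same raw score of 34 would yield a different percentile against national norms than against the norms of a specific subgroup.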

Criterion-referenced assessments are different in that they compare an individual’s performance to a predetermined standard or desirable level. Many educational assessments follow this format, in which learners must meet an acceptable standard to pass an examination or grade. An example of a communication assessment that follows a criterion-referenced approach is the Rossetti Infant-Toddler Language Scales (Rossetti, 1990). As such, it does not matter how learners perform in relation to one another, only whether they meet the criterion (or not).
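By contrast with the norm-referenced sketch above, a criterion-referenced decision compares the individual only to a fixed standard. A minimal sketch, with a hypothetical mastery threshold:

```python
CRITERION = 0.80  # hypothetical standard: at least 80% of items correct

def meets_criterion(items_correct: int, items_total: int) -> bool:
    """Criterion-referenced decision: the child's own performance is
    compared to a fixed standard, not to other children's scores."""
    return items_correct / items_total >= CRITERION

print(meets_criterion(18, 20))  # True: 90% correct meets the 80% standard
```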

Validity is an estimate of whether a test measures what it intends to measure. Reliability refers to how consistently the test measures it. Standardized assessments usually involve large-scale studies providing estimates of validity and reliability. These constructs are linked, since a test that measures what it purports to measure is more likely to yield consistent and reliable results; Kester and Brice (2010) note that a test must have high reliability if it is to achieve high estimates of validity. Estimates of reliability and validity range from 0.0 to 1.0, with 0.6 and above generally considered high. There are multiple types of validity, discussed in detail by Kester and Brice (2010). In brief, content validity refers to whether an instrument samples the full domain it claims to measure; for example, a speech assessment that included only two consonants would not cover the entire domain of phonology. Judgments about content validity are usually made by experts in the field. Criterion-related validity involves a comparison of the tool with another instrument that measures the same thing, typically expressed as the correlation between scores on the two tests. Predictive validity refers to an assessment’s ability to predict later, related outcomes; for example, a high score on a single-word speech assessment may predict high levels of intelligibility. Concurrent validity is an estimate of the ability to distinguish between groups known to differ; for example, when assessing two groups of children (those diagnosed with speech difficulties and those judged to be typically developing), we would expect the assessment to yield very different scores for the two groups.
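Criterion-related validity estimates of the kind just described are usually reported as a correlation coefficient. The sketch below computes one from fabricated scores, purely to show the mechanics; the data and the 0.6 threshold interpretation follow the discussion above.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for ten children on a new tool and on an
# established instrument intended to measure the same construct.
new_tool  = [12, 15, 9, 20, 14, 11, 18, 16, 10, 13]
reference = [14, 17, 10, 22, 15, 12, 19, 18, 11, 15]

r = correlation(new_tool, reference)  # Pearson's r
print(f"criterion-related validity estimate: r = {r:.2f}")  # r = 0.99 for these invented data
```

The same computation applied to two administrations of the same tool (rather than two different tools) would give a test-retest reliability estimate.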

If we are to move forward in the development of assessments appropriate for the different languages and people of South Africa, we need to start by gathering sets of normative data. Writing about sub-Saharan Africa more generally, Alcock et al. (2015, p. 764) note “the current lack of appropriate tools is associated with a dearth of systematic studies of typical development.” The collection and analysis of normative data will lead to a database of what is typical and thus a better understanding of the difficulties that may occur. Since this knowledge of what to expect may inform the way in which assessments are designed, it is reasonable to ask whether the collection of normative data should happen before the development of assessments – or alternatively, is it only possible to obtain data once an assessment has been developed for this purpose? Different researchers have taken different approaches in their work, but in general, it seems as if the two areas need to advance together in parallel. Normative data is typically collected using an early version of an assessment tool. The data collected – and the process of collecting it – then leads to refinement of the tool, and so on, in a spiral process.

A Review of Current Assessments

In this review, we focus specifically on assessments in the official South African languages designed for use by SLTs in South Africa. There are many unpublished studies (e.g., honors and postgraduate student projects; informal assessments by clinicians) that have an important contribution to make and have been included in the reviews by Penn (1998), Mphahlele (2006), and Mdlalo (2013). However, for the purposes of this review, we have limited our focus to published projects in SLT and therefore cannot claim that the review is exhaustive. We have included assessments based on parental questionnaires and screening tools. In the following sections, we analyze the assessments by language, clinical domain, and methodology. This is followed in section “Discussion and Future Plans” by a discussion of the findings and implications for future work.

Description of Assessments by Language

Table 17.1 details the available assessments by language. There are 27 assessments that met the criteria set in our review. Of these, the greatest number (10) was for Afrikaans, followed by five assessments for isiXhosa and four for isiZulu. Sepedi and SAE both have three assessments, whereas Setswana has one. The other languages, Sesotho, Tshivenda, siSwati, Xitsonga, and isiNdebele, do not have any known published assessments, aside from the Intelligibility in Context Scale (ICS), which was adapted for all of the South African languages (see Pascoe & McLeod, 2016). It is interesting to note that although Sesotho does not have any freely available assessments, it is one of the Bantu languages that has been relatively well studied, especially in terms of children’s language acquisition. There is a fairly substantial body of knowledge about the nature of children’s acquisition of this language (e.g., Demuth, 1990, 2007), and although an assessment tool remains to be developed, Sesotho is ahead of some other languages that have early versions of assessment tools but very limited associated normative data.

Table 17.1 South African speech and language assessments by language

Description of Assessments by Clinical Domain

Table 17.2 shows that the language assessments that have been developed or translated cluster in a few specific areas. It is clear that more work has been conducted relating to children than adults. There are just four assessments described here that focus on adults with acquired speech and language difficulties: the adaptations by Mosdell et al. (2010) of the Boston Naming Test and Cookie Theft Test, which form part of the Groote Schuur Neurocognitive Assessment Battery (see also Balchin, 2008); preliminary work around a translated version of the Western Aphasia Battery by Barratt et al. (2012); and Fouche and Van der Merwe’s (1999) speech intelligibility test designed for use with Sepedi adults with dysarthria or other acquired neurogenic speech difficulties. The development process described in these papers is a complex one, detailing work in progress rather than fully validated tools complete with psychometric data. There is much other work being undertaken in the development of assessments for adults with communication impairments in South Africa, such as that by Allie et al. (2015), focusing on apraxia of speech in isiXhosa-speaking adults, and work describing alternative assessment methods that use ethnographic approaches and narratives to more completely grasp the socio-cultural background of individuals (Legg, 2010; Legg & Penn, 2013; Penn, 2014).

Table 17.2 South African speech and language assessments by domain

For children, the assessments in Table 17.2 cluster around the domains of speech (four assessments), literacy and phonological awareness (two assessments), lexical development (eight assessments), language (seven assessments), and general development (including language). Van der Merwe and le Roux (2014) and Van Biljon et al. (2015) note that the development of articulation assessment protocols is needed for each of the Bantu languages and provide some guidelines that might be used in developing these, as well as potential research questions that this type of work might address. Although there is no children’s speech assessment included for isiZulu, Naidoo et al. (2005) have undertaken studies of isiZulu phonology, which have added considerably to our knowledge of isiZulu speech development. Parent-administered scales (such as the ICS, Language Development Survey, Mullen Scales of Early Learning, and Ages and Stages Questionnaires [ASQ]) have not always been included in reviews of assessments because they do not always test children directly and are more likely to be criterion- rather than norm-referenced. We consider that they have a very important place in the assessment of young children and their families, and the growing number of tools of this nature testifies to this. Outside the parameters of our selective review, Abdoola (2015) describes a project translating the ASQ (Squires et al., 1999) into Hindi, one of the local Indian languages.

There are lexical assessments for five of the South African languages (isiXhosa, isiZulu, Sepedi, Afrikaans, and SAE). Earlier projects describe the adaptation of the Peabody Picture Vocabulary Test (revised, Dunn & Dunn, 2007) into Afrikaans and Sepedi. This was followed by the development of the Afrikaanse Reseptiewe Woordeskattoets (ARW, Afrikaans Receptive Vocabulary Test) by Buitendag et al. (1998). This well-validated test is still widely used by clinicians in South Africa (it was used by 30% of the respondents in Van Dulm and Southwood’s survey of SLTs) and has been cited in local research studies focusing on Afrikaans-speaking children (e.g., Southwood & Van Dulm, 2016). More recently, Southwood and her team at the University of Stellenbosch developed lexical assessments for isiXhosa, Afrikaans, and SAE that are linked to each other and to other languages studied as part of a bigger cross-linguistic project. This type of study is effective in showing both the applied value of a newly developed tool and the theoretical value that can be obtained when languages are compared and contrasted with each other. The Picture Naming Game (PiNG, Bello et al., 2012) is a relatively new assessment of children’s lexicons, initially devised for Italian and now adapted for isiZulu (Kunene Nicolas & Ahmed, 2016).

Some assessments have been designed to comprehensively assess children’s expressive and receptive language: Bortz’s (1997) isiZulu Expressive and Receptive Language Assessment (ZERLA) is a comprehensive test battery for isiZulu, and Southwood and Van Dulm’s (2012) Receptive and Expressive Activities for Language Therapy includes isiXhosa, Afrikaans, and SAE activities for informal assessment and therapy. The Diagnostic Evaluation of Language Variation (Seymour et al., 2003) is an assessment developed in the USA to distinguish language disorder from language difference. It does this by focusing on universal aspects of linguistic knowledge, which do not vary across dialects. An Afrikaans version of this assessment was created by Van Dulm and Southwood (2008) with a view to assessing a range of universal language skills in Afrikaans. Solarsh and Alant (2006) adopted an innovative approach in their development of an isiZulu assessment of verbal reasoning entitled the Test of the Ability to Explain. This assessment was designed to assess the verbal reasoning skills of isiZulu-speaking children in a way that is culturally fair but at the same time able to evaluate universal cognitive skills.

Description of Assessments by Methodology

In our final table (see Table 17.3), we look behind the scenes at each of the 27 assessments, focusing on the methodologies used in the work. We analyzed each assessment using four main descriptors, which are now described in turn:

A. Description of the assessment: We considered information about each assessment and aimed to provide an overview of its main characteristics and purpose.

B. Assessment development process: Where the information was available, we attempted to give an overview of the approach taken to test development and to describe the phases or steps taken in developing the material.

C. Pilot(s) overview: Where applicable, pilot studies undertaken with the assessments were described. Where possible, we aimed to include details about the number of pilots undertaken, the number of participants, and any other pertinent characteristics of the participants.

D. Results/psychometric data: Finally, we were interested in the psychometric properties of the assessments and aimed to share these where available, or to provide summary statements about the progress toward achieving validation and documentation of psychometric properties.

Table 17.3 shows that some of these assessments have been adapted and translated from other existing assessments, for example, Mosdell et al.’s (2010) adaptation of the Boston Naming Test and Bornman et al.’s (2010) adaptations of the two parent-administered scales. There are other assessments that have started fresh from a blank page – possibly because there were no appropriate models suitable for adaptation. Diemer et al. (2015) and Mahura and Pascoe (2016) focused on understanding the language structures of isiXhosa and Setswana, respectively, and used this knowledge to shape assessments that are different from the available materials for other languages, as suggested by Van der Merwe and le Roux (2014).

Table 17.3 South African speech and language assessments – description and methodology

A range of different methodologies has been used in the development of these assessments. As shown in Table 17.3, several of the research teams started with focus groups in which the opinions of “experts” or first-language-speaking adults could be obtained as a first step; for example, Kunene Nicolas and Ahmed (2016) solicited the opinions of first-language isiZulu speakers on their preliminary PiNG test items, and Maphalala et al. (2014) convened a focus group with an expert panel required to critique stimulus items against specific criteria pertinent to the language. Other studies drew more heavily on the literature, theoretical models of test design, and stimuli selection. Several of the tests involved a parallel process of test development and pilot work, with multiple versions of an assessment created and updated. Normative samples ranged from small groups (e.g., the 24 participants of Maphalala et al., 2014, and only four participants of Fouche and Van der Merwe, 1999) to the larger studies of Solarsh and Alant (2006), with 292 participants, and Bortz (1997), with 303 children.

Discussion and Future Plans

This chapter has emphasized the importance of assessments for SLTs that are valid and reliable for the given context. This focus on assessments should not detract from the need for the development of culturally and linguistically relevant interventions. Assessment is a means to an end, enabling relevant intervention to be provided if needed. Much of what has been written about therapy strategies and interventions is based on English speakers in high-income countries. A simple example of this is cueing words using initial consonants, as is done for English. This would not work well with languages such as isiXhosa or Sesotho, in which words typically begin with a vowel (Gxilishe, 2004). Intervention is more valid when it is relevant and culturally acceptable, and it must be tailored specifically to the culture of the community in which the individual resides (Hartley et al., 2009).

Tables 17.1 and 17.2 clearly indicate that there are many gaps in the languages covered by assessments as well as in the domains addressed. Table 17.2 includes only a small number of literacy and phonological awareness assessments, which, given the educational crisis in South Africa (Spaull, 2013), must surely be a key focus area to be addressed. Diemer et al. (2015) review a larger set of work (both published and unpublished) that has focused on phonological awareness in the Bantu languages, not limited to South Africa. There are likely many informal assessments of children’s literacy and phonological awareness being used by SLTs and educators, which were not included in our review. Linked to this point is a caution about the limitations of this review. Our review focused on published studies only, but there are many unpublished resources that we did not describe here, and we may have inadvertently omitted assessments that our search strategy did not find.

It is apparent from the summary provided in Table 17.3 that assessment development is a process usually occurring over the long term with multiple phases and iterations. There are some complete assessments included in Table 17.3 (e.g., Bortz’s [1997] ZERLA and Buitendag et al.’s [1998] ARW), but many of the assessments described are in the early stages of development. The authors acknowledge that these materials will change over time, becoming more valid and reliable and growing together with a larger body of normative data against which individual children can be compared.

In section “Assessment” of this chapter, we described the psychometric qualities of assessments and gave an overview of the ways in which validity and reliability can be considered. Table 17.3 shows that we have some way to go in ensuring that the available assessments meet psychometric criteria. There are assessments that present, either through the assessment manual or related publications, psychometric data and rationales underpinning the test design, collection of normative data, and standardization (e.g., Solarsh & Alant, 2006; Bortz, 1997; Buitendag et al., 1998). However, these assessments are the exceptions, and there is much work to be done in strengthening the validity and reliability of the tools presented in this chapter, as well as in developing new tools to plug the gaps in areas (languages and domains) in which there are few or no assessments.

SLTs should be driven by their own day-to-day needs. What is it that is needed to make our roles more relevant? Without doubt, we must expand research leading to linguistically and culturally appropriate assessment and intervention material for which clinician and researcher partnerships are critical. A collaborative national speech and language project could support a coordinated agenda of strengthening research in progress, developing networks of researchers, and acting as a clearinghouse for published materials. In parallel with this project, we also need to address other areas such as the training of SLTs and updating practice frameworks.

“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.” (Nelson Mandela)