World Englishes: Do They Make IELTS Invalid?
Kyle Lachini
Trustee, English Language Testing Society, CA., USA. eltsociety.org; Reviewer, Language Testing Journal; York University Alumnus, Toronto, Canada.
World Englishes and Standardized Tests: Bridging the Gap in Language Assessment
The concept of World Englishes, introduced by Braj Bihari Kachru (1932 – 2016), highlights the diverse ways English is used and adapted across the globe. As English spread beyond its native-speaking contexts, it evolved into multiple varieties influenced by local cultures, histories, and linguistic systems. Indian English, Nigerian English, and Singaporean English, among many others, demonstrate the adaptability of English as a global language. However, this diversity challenges the traditional norms of "Standard English," particularly in the realm of standardized language testing. These tests, which often uphold native speaker norms, raise critical questions about equity, validity, and inclusivity in language assessment.
Standardized tests such as TOEFL, IELTS, and Cambridge exams have long been the gatekeepers for academic, professional, and migration opportunities. However, their design often reflects a bias towards native speaker norms, typically those of British or American English. Test-takers from countries where World Englishes dominate may find themselves at a disadvantage, as the unique grammatical, phonological, or lexical features of their English varieties are frequently treated as errors. For example, features like the omission of articles in Indian English or lexical innovations in Nigerian English are often penalized, even though they are valid within their respective linguistic systems. This approach prioritizes conformity over communicative competence, undermining the authenticity of assessments.
Another pressing concern is the cultural and contextual disconnect present in many standardized tests. Test content often assumes familiarity with cultural norms specific to English-speaking countries, such as references to Western traditions, idioms, or educational practices. For non-native speakers, this cultural irrelevance can obscure their true linguistic abilities, resulting in scores that do not accurately reflect their proficiency. This disconnect also raises questions about the validity of such tests in measuring English proficiency for global contexts.
Economic and educational implications further complicate the issue. Millions of non-native English speakers take these tests to pursue international education or employment. When the assessment criteria marginalize features of World Englishes, they inadvertently reinforce linguistic hierarchies that favor native varieties. This perpetuates inequities and stigmatizes those who use localized Englishes, limiting access to opportunities and undervaluing their communicative competence.
To address these challenges, standardized tests must evolve to better reflect the realities of English as a global language. One approach is to incorporate features of major World Englishes into the assessment process. Linguistic elements that are distinct yet functional in these varieties should not automatically be marked as incorrect if they do not impede understanding. For instance, evaluative rubrics for speaking and writing could be revised to focus on effective communication rather than rigid adherence to native-like norms.
Additionally, language proficiency tests should emphasize contextualized competence. Instead of measuring proficiency by how closely a test-taker's English aligns with native speaker norms, assessments should prioritize the ability to communicate effectively across diverse linguistic and cultural contexts. This shift would ensure that measured proficiency reflects real-world usage, particularly in international settings where English serves as a lingua franca.
Listening and reading materials in standardized tests could also include a range of accents, idiomatic expressions, and lexical choices from various Englishes. By exposing test-takers to these variations, assessments would not only be more inclusive but also more reflective of how English is used globally.
Finally, ethical and transparent testing practices are crucial. Test developers must engage with linguists, educators, and stakeholders from diverse backgrounds to ensure fairness. Providing clear guidelines on how features of World Englishes are treated in assessment criteria would further enhance transparency and trust in the testing process.
The global spread of English demands a shift in how language proficiency is assessed. Standardized tests must move beyond their traditional frameworks to embrace the linguistic and cultural diversity inherent in World Englishes. By doing so, they can foster inclusivity, ensure fairness, and better reflect the realities of English as a global language. Ultimately, this evolution would uphold the principle that language proficiency is not about mimicking native norms but about effective communication in a multilingual world.
English for Specific Purposes Practitioner / PhD candidate in Language Assessment / MSc in Social History / Decolonial Thinking
2 months ago
Spot on! This is such an important topic. One of the main issues with English language tests is that they attempt to pass their constructs off as universal, when we know these constructs merely reflect the English used by those who see themselves as the owners of the language (BANA countries, mostly). We need much more pluriversal constructs so that English language tests stop excluding individuals who effectively communicate in English but whose knowledge is not recognized as valid.
U.S. State Department Fellow, Program Coordinator, Educational Consultant, Writer/Editor, Conference Organizer, Podcast Host
2 months ago
I especially like the way you point out that these tests are exactly the opposite of today's education and business buzzwords of inclusivity, diversity, and accessibility. Good one!
University lecturer, Associate Editor for Research & ERIC Liaison of MEXTESOL Journal, and Trustee at English Language Testing Society
3 months ago
Kyle Lachini, thank you for raising such a controversial issue in applied linguistics, my dear Dr. Kyle! Kazemian et al. (2023) did a similar study, titled "ELT Scholars' Attitudes towards Inclusion of Intercultural Competence Assessment in Language Proficiency Tests," and mentioned "World Englishes and language proficiency tests" in that paper as well (https://tesl-ej.org/wordpress/issues/volume26/ej104/ej104a6/). Moreover, Eric Moore interviewed me and my colleague about the issue in Language Testing in 10 in April 2024: https://eltsociety.org/testing-in-10-dr-fatemeh-khonamri-and-mr-mohammed-kazemian/
Mother / Wife / Lecturer / Language Assessment Consultant / Researcher
3 months ago
I agree that language tests, and psychological tests for that matter, often reflect outdated norms, prioritising native-speaker or Eurocentric standards over the rich diversity of global experiences. This creates unfair barriers for those using valid, localised English varieties or coming from different cultural contexts. It's time to rethink these assessments, shifting the focus from rigid conformity to celebrating effective communication and inclusivity in a multilingual world.
Trustee, English Language Testing Society, CA., USA. eltsociety.org; Reviewer, Language Testing Journal; York University Alumnus, Toronto, Canada.
3 months ago
In a nutshell, standardized tests like IELTS often disregard linguistic diversity, promoting a monolithic "native speaker" version of English, one that even native speakers themselves don't consistently adhere to. This approach turns language testing into more of a business than a true measure of proficiency.

To make language proficiency assessments more inclusive and effective, we need to embrace linguistic and cultural diversity. This means designing tests with varied themes, incorporating different accents, and allowing L1-influenced language variations, as long as they don't hinder meaning or communication. It's time to revolutionize language proficiency testing.

One potential solution is developing tailored tests for speakers of specific first languages (L1). By understanding the common challenges and mistakes of each group, assessors can determine whether errors affect communication or mutual intelligibility. Scores could then reflect the distinction between surface-level errors and significant "Errors" (those that disrupt intended meaning or understanding). What are your thoughts on this approach?