A Bit of Background to the Lexical Syllabus

The sincerity and intellectual curiosity displayed by Harry Waters in his response to my comment on a recent podcast he did about teaching grammar have inspired me to offer a brief look at how developments in corpus linguistics led to a re-assessment of the roles of grammar and lexis in the English language, and to the appearance of the lexical syllabus in ELT. MA TESOL students might find it a useful addition to my earlier post discussing this curious dwarf star in the ELT syllabus firmament.

How did it start?

Corpus linguistics got a sudden shot in the arm in the early 1980s, thanks to the appearance of relatively cheap personal computers and the explosion in the size of RAM (Random Access Memory) and the processing power of computers in general. This made it possible for far greater numbers of people to use concordance software to interrogate “big data” about the English language. Centuries before these technical advances appeared, scores of monks in different monasteries had pored over the different bits of the Bible they'd been allotted by the Vatican, counting the occurrences and observing the surrounding texts of words like "God", "divine", "miraculous" and so on. What took them several years back then could suddenly be done in seconds.

So, the first thing we need to deal with is “What’s a concordancer?” A concordancer is a computer program that will search a text or corpus stored in electronic form for a target item (an item of punctuation, a morpheme, a word, a phrase or a combination of words) and display all the examples it finds with the contexts in which they occur. The program allows the user to use "wild cards" (for example, searching for "*ing" will list all the occurrences of words ending in "ing", while searching for "book*" will list all the occurrences of "book", "books", "booked", "booking", "bookable", etc.), and to specify which corpora will be searched.
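To make the idea concrete, here's a minimal sketch (in Python, and purely illustrative - the tiny token list stands in for a real corpus) of how such a wildcard search can be implemented by translating the wildcard into a regular expression:

```python
import re

def wildcard_search(pattern, tokens):
    """Find tokens matching a concordancer-style wildcard pattern,
    where '*' stands for any sequence of characters."""
    # Escape the pattern, turn the escaped '*' back into '.*',
    # and anchor it so the whole token must match.
    regex = re.compile('^' + re.escape(pattern).replace(r'\*', '.*') + '$',
                       re.IGNORECASE)
    return [t for t in tokens if regex.match(t)]

tokens = "I am booking a room because all bookable rooms are going fast".split()
print(wildcard_search("book*", tokens))  # ['booking', 'bookable']
print(wildcard_search("*ing", tokens))   # ['booking', 'going']
```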

The program can be used to examine these sorts of questions:

  • What words occur in the text?
  • How frequently does each word occur?
  • In how many different types of text (different subject areas, different modes, different mediums) does the word appear?
  • Are there any significant subsets? (For example, in English, the 700 most frequent words account for approx. 70% of all text.)
  • What are the collocations of the target item?
  • What are the contexts in which the word appears?

Thus, taking a word as the unit search item, a concordancer will examine a corpus and list all the different words alphabetically, or in the order they appear; it will count how often each word occurs and rank the words in order of frequency; it will indicate what type of text each word appears in; and it will display the instances of the word in its context in a variety of formats (the most usual being the KWIC (Key Word In Context) format) according to a variety of priorities.

Apart from providing access to large amounts of natural language, perhaps the most impressive feature of a concordancer is its ability to group text in such a way that patterns in the language are clearly visible. For example, a KWIC search for the word "so" in a corpus of English, with the first word to the right of "so" prioritised (occurrences are listed with the words to the immediate right of "so" in alphabetical order), allows patterns such as "so as", "so far", "so on", and "so that" to emerge clearly from a large text where they might otherwise not be visible.
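A toy version of such a KWIC display is easy to sketch; the sorting step below (on the first word to the right of the keyword) is what makes patterns like "so far" and "so that" line up:

```python
def kwic(target, tokens, width=4):
    """Return KWIC (Key Word In Context) lines for a target word,
    sorted on the words to the right of the keyword."""
    hits = []
    for i, tok in enumerate(tokens):
        if tok == target:
            left = ' '.join(tokens[max(0, i - width):i])
            right = ' '.join(tokens[i + 1:i + 1 + width])
            hits.append((left, right))
    hits.sort(key=lambda h: h[1])  # prioritise the right-hand context
    return [f"{left:>25}  {target}  {right}" for left, right in hits]

text = ("she left early so that she could rest and so far nothing has "
        "changed so the plan stands and so on").split()
print('\n'.join(kwic("so", text)))
```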

Texts and corpora

No matter how good a concordancing program may be, the results will only be as interesting as the corpora on which it works. The corpora are made up of different types of texts - novels, reports, newspaper editorials, transcripts of spoken texts such as a speech or a dialogue, scientific reports, newspaper reports, academic papers, school essays, love letters, and so on. To study general features of the language, a large general corpus is needed, consisting of texts from as many different sources, and of as many different types, as possible. Fig. 1 below, from the English-Corpora.org website, lists some of today’s corpora. In 1980, the Brown Corpus, a one-million-word corpus of written American English, and the similarly-sized LOB corpus of British English were the main references.

The combination of concordancers and very large corpora provides a powerful tool for exploring both the language and language learning. The corpora comprise large databases of naturally-occurring discourse (so that analyses are based on naturally-occurring structures and patterns of use rather than on intuitions and perceptions which do not accurately represent actual use), and enable analyses of a scope not previously possible.

Issues raised by concordancer research

Corpus-based analyses can address a range of issues in applied linguistics. I've grouped these under two headings: descriptions of the language, and pedagogical applications.

Descriptions of the language

A. Grammar

Biber (1993), Sinclair (1985, 1991), Willis (1990), Renouf (1988), Fox (1991), Johns (1991), and Lewis (1993) all argued that corpus-based research shows that the actual patterns of function and use in English often differ radically from previous descriptions. They argued that prior research in these areas had been based on intuition, and that these intuitions were not in accordance with the newly-observed facts of usage. New data from the analysis of large corpora led to the discovery of new patterns in the English language, and as Johns (1991) put it:

The evidence thrown up by the data has left no escape from the conclusion that the description of English underlying our teaching, whether home-made or inherited from other teachers and linguists, needs major reassessment.

One of the most dramatic examples given was the use of the word "any". The normal explanation of the word is that it’s the interrogative and negative form of "some". But a search for "any" in a large corpus reveals that the majority of the occurrences of "any" are in fact examples of its use in the affirmative, in the sense of "it does not matter which".

Apart from such "errors" in explanation, Biber and others claimed that there was a more generalised failure to give topics their proper "weighting". Most ESL/EFL pedagogical grammars until the 1990s agreed to a large extent about the core topics of English grammar: they covered broadly the same topics, they had similar organisations, and they gave the topics the same priorities. But, Biber pointed out, this consensus was not based on an empirical analysis of actual patterns of use, and consequently some relatively rare constructions were seen as central, and so received a lot of attention, while more common constructions were seen as peripheral and hence were ignored.

Biber (1993) gave the example of postnominal modifiers. Grammar books, he argued, consistently gave relative clauses extensive treatment, while giving little attention to participial clauses, and even less to prepositional phrases. Text analysis reveals, however, that prepositional phrases are far more common than relative clauses or participial phrases across a range of popular English registers. Biber concluded that differences across registers raise doubts about any "core" linguistic features, and that the core grammatical structures are not necessarily those given prominence in pedagogical grammar.

Corpus-based research sheds new light on some of our most basic assumptions about English grammar, and as a result it offers the possibility of more effective and appropriate pedagogical applications (Biber, 1993).

Willis (1990) pointed to corpus-based research which showed that the passive voice is inadequately treated in most coursebooks, and he was similarly critical of the way in which they insist on the "myth" of the three conditionals. Any search for "if" in a corpus will quickly show that there are in fact far more than three conditionals, and that the three which coursebooks usually focus on are not the commonest.

Many, including Johns (1987) and Yule (1992), have complained of the way in which coursebooks and teachers present reported speech. Corpus-based research, especially of spoken text, indicates that the set of reported speech procedures described in coursebooks is in fact rarely used in natural spoken discourse.

As a final example, Fox (1991) suggested that context-dependent language is an important feature of naturalness, and one that was often ignored in ELT. Rather than see verbs as being either transitive or intransitive, as most teachers do, Fox argued that it was better to concentrate on clauses, looking at the relationships between the verb group and the subject and object groups where these occur. Looked at this way, most verbs cannot be labelled as transitive or intransitive, but can be used in both transitive and intransitive clauses.

B. Lexis

Corpus-based lexicographic research shows that our intuitions about a word often do not match the actual patterns of use. For example, Sinclair (1991) analysed the word "back". While most dictionaries list the human body part as the first meaning, the COBUILD Corpus (developed at Birmingham University in the 1980s) showed this meaning to be relatively rare, and the adverbial sense of "in, to or towards the original starting point" (not usually given prominence) to be the most common.

Biber (1993) reports on his analysis of the word "certain". He observed that the actual patterns of use depart markedly from our intuitions, in that the word "certain" rarely marks "certainty". More commonly it is used to mark a referent - a certain kind, in certain places. Furthermore, the two major senses of "certain" are not at all uniformly distributed across registers. For example, "certain" marking certainty is more common in fiction than in Social Science, while "certain" as a referent marker is more common in Social Science.

I trust these two examples will suffice to illustrate the general claim made by Sinclair, Biber and others that there is a deep and widespread mismatch between our intuitions about individual words and the data about them that emerges from concordance analysis.

A related issue is that of frequency. By examining the corpus, the COBUILD team found that the 700 most frequent words of English accounted for around 70% of all English text. “That is to say around 70% of the English we speak and hear, read and write is made up of the 700 commonest words in the language” (Willis, 1990). The most frequent 1,500 words accounted for around 76% of text, and the most frequent 2,500 for 80%.
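The arithmetic behind these coverage figures is easy to reproduce: rank the word types by frequency and ask what proportion of the running words the top n types account for. A minimal sketch, assuming a plain-text corpus in a file called corpus.txt (the filename is just a placeholder):

```python
from collections import Counter

def coverage(tokens, n):
    """Proportion of all running words accounted for by the
    n most frequent word types in the corpus."""
    counts = Counter(tokens)
    covered = sum(c for _, c in counts.most_common(n))
    return covered / len(tokens)

# 'corpus.txt' is a stand-in for whatever large general corpus is to hand.
tokens = open('corpus.txt', encoding='utf-8').read().lower().split()
for n in (700, 1500, 2500):
    print(f"top {n:>4} types cover {coverage(tokens, n):.0%} of the text")
```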

C. Grammar or lexis? The lexical phrase

Pawley and Syder (1983) drew attention to what they called "lexicalized sentence stems": "chunks" of formulaic language, of clause length or longer, of which a normal competent native speaker has many thousands at his or her disposal.

A lexicalized sentence stem is a unit of clause length or longer whose grammatical form and lexical content is wholly or largely fixed; its fixed elements form a standard label for a culturally recognised concept, a term in the language. ... Many such stems have a grammar that is unique in that they are subject to an idiosyncratic range of phrase structure and transformational restrictions; that is to say, by applying generally productive rules to these units one may produce an utterance that is grammatical but unnatural or highly marked (Pawley & Syder, 1983).

The existence of these lexicalized sentence stems calls into question the traditional compartmentalization of grammar into syntax (productive rules) and lexis (fixed, arbitrary usages). It also presents learners with two problems: first, how to know which of the many possible well-formed sentences are nativelike (the puzzle of nativelike selection), and second, how to produce lexicalised sentence stems (and often multi-clause units) without hesitating in mid-clause (the puzzle of nativelike fluency).

Nattinger and DeCarrico (1992) argued that "the lexical phrase" is at the heart of the English language. Work done by computational linguists (Sinclair 1987, Garside et al. 1987) on collocations uncovered recurring patterns of lexical co-occurrence, involving many function words as well. Kennedy (1989) showed that two generalized frames for prepositional phrases with "at" - "at + (the) + Proper N denoting place" and "at + Personal Pronoun" - account for 63% of the occurrences of this preposition. Strong and weak collocates were identified, and a range of collocations, from the completely fixed (such as many idioms and clichés) to the less predictable, were outlined.
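The raw-frequency basis of such collocation-spotting is simple to sketch: for every occurrence of a node word, count the words falling within a fixed window around it. A minimal illustration (the toy sentence is mine, not Kennedy's data):

```python
from collections import Counter

def collocates(node, tokens, window=3):
    """Count words co-occurring with a node word within +/- window
    tokens; high counts point to candidate strong collocates."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            counts.update(tokens[max(0, i - window):i])   # left context
            counts.update(tokens[i + 1:i + 1 + window])   # right context
    return counts

tokens = "we met at the station and at the hotel at noon".split()
print(collocates("at", tokens).most_common(3))
# [('the', 4), ('station', 2), ('and', 2)]
```

In practice, raw counts like these are weighted by an association statistic such as mutual information or a t-score to separate strong collocates from merely frequent neighbours.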

As a result of all this research, the belief grew among computational linguists that linguistic knowledge cannot be strictly divided into grammatical rules and lexical items; rather, there is an entire range of items from the very specific (a lexical item) to the very general (a grammar rule), and since elements exist at every level of generality, it is impossible to draw a sharp border between them. There is, in other words, a continuum between these different levels of language.

Raining cats and dogs is certainly specific, John saw the giraffe is certainly general. Between these two, however, lies a vast number of phrases like a day/month/year ago, the _____er the _____er, etc., which have varying degrees of generality and cannot efficiently be placed with either of these two extremes (Nattinger and DeCarrico, 1992).

Sinclair (1991) studied the lemma "yield" and demonstrated a striking correspondence between sense and syntax, particularly in the two most frequent senses of "yield", namely "give way" and "produce". He concluded:

This study supports the contention that adjustment of meaning and structure is a regular feature of a language. It can be used to provide valuable evidence for lexicography, suggesting sense divisions, and identifying phrase units and distinctive patterning. Then, by using the same evidence in reverse, the traditional domain of syntax will be invaded by lexical hordes (Sinclair, 1991).

Pedagogical Applications

?1. The "Teach the facts" view

Given the new facts about English which corpus-based research has revealed, Biber, Sinclair and others argue that teaching practice must fit the new, more accurate, description. They go further, and suggest that now teachers have the data available to them, it should form the basis for instruction. One of the most strident expressions of this view is the following (quoted in Widdowson, 1990):

Now that we have the means to observe samples of language which must be fairly close to representative samples, the clear messages are:

a) We are teaching English in ignorance of a vast amount of basic fact. This is not our fault, but it should not inhibit the absorption of the new material.

b) The categories and methods we use to describe English are not appropriate to the new material. We shall need to overhaul our descriptive systems.

c) Since our view of the language will change profoundly, we must expect substantial influence on the specification of syllabuses, design materials, and choice of method (Sinclair, 1985).

Biber sees the teaching implications of corpus-based research as similarly obvious, and agrees with Sinclair that both grammar and vocabulary teaching must adjust to the new facts.

2. The lexical syllabus

Willis (1990), drawing on the work of Sinclair (1987, 1991) and the COBUILD team (COBUILD = Collins Birmingham University International Language Database, a British research facility led by Sinclair, set up at the University of Birmingham in 1980 and funded by Collins publishers), outlined a lexical syllabus which he claimed provided a "new approach to language teaching". Here are its main points:

  • There is a "contradiction" between a grammatical syllabus and a communicative methodology. A grammar syllabus is form-focused and aims at the correct production of target forms, but real communication demands that learners use whatever language best achieves the desired outcome of the communicative activity. There is a dichotomy in the language classroom between activities which focus on form and activities which focus on the outcome and the exchange of meaning.
  • The presentation methodology which regards the language learning process as one of "accumulated entities", where learners gradually amass a sequence of parts, trivialises grammar - learners need insights into the underlying system of language. The method (and the coursebooks employed) oversimplify, and make it difficult for learners to move beyond these entities or packages towards important generalisations.
  • A successful methodology must be based on use not usage, yet must also offer a focus on form, rather than be based on form and give some incidental focus on use.
  • The COBUILD coursebooks are based on data produced with a concordancer which examined the COBUILD corpus of more than 20 million words in order to discover the frequency of English words and, as Willis puts it "to better examine various aspects of English grammar". Word frequency determines the contents of the courses.
  • The COBUILD English Course Level 1 starts with 700 words; Levels 2 and 3 extend the coverage to 1,500 and then 2,500 words.
  • Tasks are designed that allow the learners to use language in communicative activities, but also to examine the corpus and generalise from it.
  • Level 1 is based on a corpus which contextualised the 700 words and their meanings and uses, and provided a range of activities aimed at using and exploring these words.
  • The lexical syllabus does not simply identify the commonest words; it focuses on the commonest patterns too, and it indicates how grammatical structures should be exemplified, emphasising the importance of natural language.

3. The lexical phrase as a key unit for learning

Nattinger and DeCarrico suggest that the distinction between grammar and lexis in descriptions of the English language is too rigid. The suggested application of this is to focus teaching on the lexical phrase. This approach is quite different to Willis' (which takes frequency as the main criterion) and rests on two main arguments. First, some cognitive research (particularly in the area of parallel distributed processing and related connectionist models of knowledge) suggests that we store different elements of language many times over, in different chunks (note: this view is now very widely adopted, though many, including me, don’t buy it). Such models of multiple lexical storage assume that all knowledge is embedded in a network of processing units joined by complex connections, and they accord no privilege to parsimonious, non-redundant systems. Rather, they assume that redundancy is rampant in a model of language, and that units of description, whether they be specific categories such as "word" or "sentence", or more general concepts such as "lexicon" or "syntax", are fluid, indistinctly bounded units, separated only as points on a continuum (Nattinger and DeCarrico, 1992). If this is so, then the role of analysis (grammar) in language learning becomes more limited, and the role of memory (the storage of, among other things, lexical phrases) more important.

The second argument is that language acquisition research suggests that formulaic language is highly significant. Peters (1983) and Atkinson (1989) show that a common pattern in language acquisition is that learners pass through a stage in which they use a large number of unanalyzed chunks of language - prefabricated language. This formulaic speech is seen as being basic to the creative rule-forming processes which follow. Starting with a few basic unvarying phrases, first language speakers subsequently, through analogy with similar phrases, learn to analyze them as smaller patterns, and finally into individual words, thus finding their own way to the regular rules of syntax. It has also been suggested by Skehan (1991) that a further step in the language acquisition process is the "re-lexicalization" of various patterns of words.

The computational analysis of language confirms the significance of patterned phrases as basic, intermediary units between the levels of lexis and grammar. Cognitive research and language acquisition research support the argument that such phrases play an important role in the learning process. In other words, current corpus-based research and research in language acquisition converge in a way that reveals the lexical phrase as an ideal unit which can be exploited for language teaching.

Discussion

My own strongly-held view is that, in the nearly thirty years since the high point of interest in the matters discussed here, a lot of badly-informed, poorly-argued stuff has been talked about a lexical approach by Hugh Dellar and his co-author Andrew Walkley. Leo Selivan has, again in my personal opinion, added to the confusion. Since I’ve written a number of posts about their work on my blog, including a review of Dellar & Walkley’s Teaching Lexically and a review of Selivan’s Lexical Grammar, I will say nothing more here, focusing instead on my own reactions to the work of the pioneers.

1. Describing language

First, we must be clear about the limitations of the kind of descriptions concordancers offer us of the language. Concordancing tells us a lot about text that is new and revealing, but we must not be blinded by it. Although corpus analysis provides a detailed profile of what people do with the language, it does not tell us everything about what people know. Chomsky, Quirk et al. (1972, 1985), and Greenbaum (1988) argue that we need to describe language not just in terms of the performed (as Sinclair, Biber, Willis, and Lewis suggest) but in terms of the possible. Sinclair and Biber argue that “What is not part of the corpus is not part of competence”, and this is surely far too narrow a view, harking back to 1950s behaviorism. I think most people today see language as a cognitive process, and would agree that when Hymes argued for the need to broaden our view of competence, he wasn’t arguing that we look only at attested behaviour.

Widdowson (1991) uses Chomsky's distinction between Externalized language (E-language), a description of performance, the actualized instances of attested behaviour, and Internalized language (I-language), competence as abstract knowledge or linguistic cognition, to suggest that we need to group the four aspects of Hymes' communicative competence (possibility, feasibility, appropriateness and attestedness) into two sets. I-language studies are concerned with the first two of Hymes' aspects, and E-language studies deal with the other two. Discourse analysis deals with one of these E-linguistic aspects (appropriateness), and corpus-based linguistics with the fourth (attestedness). The limitations of corpus-based research are immediately evident, and thus we should not restrict ourselves to its findings. As Greenbaum observes:

We cannot expect that a corpus, however large, will always display an adequate number of examples.... We cannot know that our sampling is sufficiently large or sufficiently representative to be confident that the absence or rarity of a feature is significant (Greenbaum, 1988).

Significant, that is, of what users know as opposed to what they do. Widdowson points out that in discourse analysis there is increasing recognition of the importance of participant rather than observer perspective. To the extent that those engaged in discourse analysis define observable data in terms of participant experience and recognise the psycho-sociological influences behind the observable behaviour, they too see the actual language as evidence for realities beyond it.

But how do we get at this I-language, this linguistic cognition, without having to depend on the unreliable and unrepresentative intuitions of the analyst? Conceptual elicitation is one answer. Widdowson cites Rosch (1975), who devised a questionnaire to elicit from subjects the word which first sprang to mind as an example of a particular category. The results of this conceptual elicitation showed that subjects consistently chose the same hyponym for a particular category: given the superordinate "bird", "robin" was elicited; the word "vegetable" consistently elicited "pea"; and so on. The results did not coincide with frequency profiles, and are evidence of a "mental lexicon" that concordancers cannot reach. I’ll return to this point shortly.

2. From description to prescription

Quite apart from the question of the way in which we choose to describe language, and of the limitations of choosing a narrow view of attested behaviour which can tell us nothing directly about knowledge, there is the wider issue of what kinds of conclusions can be drawn from empirically attested data. The claim made by Biber, Sinclair and others is that, faced with all the new evidence, we must abandon our traditionally-held beliefs about language (which are partly based on intuition) and change not just our description of the language, but also language teaching materials and language instruction too. In short, now that we have the facts, we should describe and teach the facts (and only the facts) about English. Widdowson (1990) points out that the relationship between the description of language and the prescription of language for pedagogical purposes "cannot be one of determinacy." This strikes me as so obvious that I am surprised that Sinclair, Biber and others seem not to have fully grasped it. No description has any necessary prescriptive implications: one cannot jump from statements about the world to judgements and recommendations for action as if the facts made the recommendations obvious and undeniable. Thus, as Widdowson points out, descriptions of language cannot determine what a teacher does. Descriptions of language tell us about the destinations that language learners are travelling towards, but they do not provide any directions about how to get there. Only prescriptions can do that.

While Sinclair is justified in expecting corpus-based research to influence syllabus design, there is no justification for the assumption that it must necessarily do so, and much less that such research should determine syllabus design. A case must be made for the approach, which he seems to regard as somehow self-evident. When Sinclair says that the categories and methods we use to describe English are not appropriate to the new material, we need to know by what criteria appropriateness is being judged. Similarly, when Biber says "Consensus does not mean validity", and when he claims that corpus-based research offers the possibility of "more effective and appropriate pedagogical applications", we need to ask by what criteria (pedagogical, presumably) validity, effectiveness and appropriateness are to be judged. When he talks of data from frequency counts "establishing" the "inadequacy" of discourse complexity, he is presumably once again relying on assumptions and criteria which are not made explicit. When he suggests that the evidence of corpus-based research indicates that there is something special about the written mode, in that it enables a kind of linguistic expression not possible in speech, he is once again drawing an inadmissible conclusion.

3. Pedagogical criteria

Facts do not "support" prescriptions, but our view of language will influence our prescriptions about how to teach and learn it. If we view language as attested behaviour, we are more likely to recommend that frequently attested items of lexis form the core vocabulary of a general English course. Willis appreciates that his approach to syllabus design is in no way "proved" by facts, but he still takes a very narrow view. To return to the discussion above about Rosch's "prototype words" (the mental lexicon): I don’t think that such words should be ignored simply because they are not frequently attested, and it could well be argued that they should be one of the criteria for identifying a core vocabulary.

Widdowson takes the case further. He suggests that Chomsky's idea of "kernel sentences" indicates the possibility that there are also prototype sentences which have an intuitive role. They do not figure as high-frequency units in text, but they do figure in descriptive grammars, and their presence there can be said to be justified by their intuitive significance, their psychological reality, as prototypes. Furthermore, they are the stock in trade of language teaching. Teachers may all be wrong about the significance of such kernel sentences, but we cannot simply dismiss the possibility of their prescriptive value on the grounds that they do not occur frequently in electronically-readable corpora.

More evidence of the limitations of sticking to frequently attested language forms comes from the research which led to the specification of the core language to be included in Le Français Fondamental (Gougenheim et al., 1956). The research team began with frequency counts of actual language, but they felt that some words were still missing: French people had a knowledge of words which the researchers felt intuitively should be included despite their poor showing in performance. So the researchers carried out an exercise in conceptual elicitation. They identified categories like furniture, clothing and occupations, and asked thousands of school children which nouns they thought it would be most useful to know in these categories. Once again, the lists did not correspond to frequency counts, and they gave rise to the idea of "disponibilité", or availability. As Widdowson says, the difference between the French research and Rosch's is that availability is a prescriptive criterion: the words are prescribed as useful not because they are frequently used but because they appear to be readily available in the minds of the users.

Widdowson (1990) suggests that there are more direct pedagogical criteria to consider than those of frequency and range of language use. In terms of the purpose of learning, he cites “coverage” as a criterion, described by Mackey:

The coverage of an item is the number of things one can say with it. It can be measured by the number of things it can displace (Mackey, 1965).

Most obviously, this criterion will prevail where the purpose of learning is to acquire a minimal productive competence across a limited range of predictable situations. The process version of coverage is what Widdowson calls valency - the potential of an item to generate further learning. He gives the example of the lexical item "bet" as described in the COBUILD dictionary (1987). Analysis reveals that the canonical meaning of the word, "to lay a wager", is not as frequently attested as its informal occurrence as a modal marker, as in "I bet he's late". It doesn’t follow, however, that the more frequent usage should be given pedagogical preference. First, the informal meaning tends to occur mainly in the environment of first person singular and present tense, and is idiomatic, and it is thus limited in its productive generality. Second, the modal meaning is derivable from the canonical lexical meaning but not the other way round. In this sense the former has a greater valency and so constitutes a better learning investment. Widdowson proposes a general principle: high valency items are to be taught so that high frequency items can be more effectively learned.

Pedagogic prescription should, suggests Widdowson, specify a succession of prototypes - simplified versions of the language, each of which is a basis for later, improved models. The process of authentication through interim versions of the language has to be guided by other factors as well as those of frequency and range of actual use: factors to do with usefulness rather than use. Words and structures might be identified as pedagogically core because they activate the learning process, even if their actual occurrence in contexts of use is slight.

4. The lexical phrase

Notwithstanding the discussion above, the work of Nattinger and DeCarrico strikes me as an important development which was radical and far-reaching, and which is still relevant. While Sinclair, Biber, Willis and others take too narrow a view of language competence, lexical phrases (more carefully described and better analysed units than earlier descriptions of formulaic language) occupy a crucial place in the continuum between grammatical rules and lexical items.

In Knowledge of Language and Ability for Use (1989), Widdowson, having argued that Chomsky's and Hymes' views of competence are not commensurate (since one is interested in an abstract system of rules, and the other in using language), suggests that there are eight, not four, aspects to Hymes' competence: knowledge of each aspect, and ability in each one. He then reformulates these as grammatical competence (the parameter of possibility) and pragmatic competence (the rest), and characterises knowledge in terms of degrees of analysability, and ability in terms of accessibility. Although both analysability and accessibility are necessary components, analysability has its limits. Nattinger and DeCarrico (after Pawley and Syder) draw attention to lexical phrases which are subject to differing degrees of syntactic variation. It seems that a great deal of knowledge consists of these formulaic chunks, lexical units completely or partially assembled in readiness for use, and if this is true, then not all access is dependent on analysis. Gleason (1982) suggested that the importance of prefabricated routines, or "unopened packages", in language acquisition and second language learning has yet to be recognised.

If we accept this view, then communicative competence can be seen in a fresh way.

Communicative competence is a matter of knowing a stock of partially pre-assembled patterns, formulaic frameworks, and a kit of rules, so to speak, and being able to apply the rules to make whatever adjustments are necessary according to contextual demands (Widdowson, 1989).

Communicative competence is a matter of adaptation, and rules are not generative but regulative and subservient (Widdowson, 1989).

Competence consists of knowing how the scale of variability in the legitimate application of generative rules is applied - when analysis is called for and when it is not. Ignorance of the variable application of grammatical rules constitutes incompetence (Widdowson, 1990).

The suggestion is that grammar's role is subservient to lexis. If, as Widdowson thinks, we should provide patterns of lexical co-occurrence for rules to operate on, so that they are suitably adjusted to the communicative purpose required of the context, then Nattinger and DeCarrico's work, which identifies lexical phrases and then prescribes exposure to and practice of sequences of such phrases, can play a key role. They suggest a programme of teaching based on leading students to use prefabricated language in a similar way to first language speakers, and which, they claim, avoids the shortcomings of relying too heavily on either theories of linguistic competence on the one hand or theories of communicative competence on the other.

I’m not persuaded by the programme itself, and I should quickly say that my support for these ideas is conditioned by a complete rejection of coursebook-driven ELT. I reject the use of a synthetic syllabus (including the syllabuses used in the Outcomes coursebook series, which its authors claim are “lexical” syllabuses, and, of course, the more obviously “grammar-based” syllabuses used in popular General English coursebooks like Headway, English File, Speakout, and English Unlimited, and all those used at a national level in different parts of the world to implement Ministry of Education policies regarding ELT), the use of the CEFR scale, and the use of assessment tools like the IELTS.

The COBUILD project at Birmingham University made a big, very positive contribution to corpus linguistics, but the COBUILD English Course, comprising three levels, written by Jane and David Willis, never really took off. I think it was one of those “design by committee” efforts, which had some interesting components that somehow failed to gel. Jane and David went on, sometimes collaborating, sometimes separately, to develop an approach to task-based language teaching which was very popular and still has a big following.

Conclusion

Despite the criticisms I have made of some of the more strident claims made by researchers using concordancers, and despite the limitations of text analysis and of frequency as a pedagogical criterion, there is no doubt that corpus-based research has thrown valuable light on the way English and other languages are actually used. The new information helped build a better, more accurate description of English, and helped teachers, materials writers, and learners escape from the intuitions and prejudices of previous "authorities". Furthermore, this line of research led to the re-thinking of the traditional distinction between grammar and lexis, and to a general recognition of the importance of the role of formulaic language. An impressive legacy by any standard.

……………………………………………

See my review of Dellar & Walkley’s Teaching Lexically and my review of Selivan’s Lexical Grammar on my blog.


References

Biber, D. (1993). Corpus-based approaches to issues in applied linguistics. AAAL colloquium on Discourse Analysis.

Fox, G. (1991). Context dependency and transitivity in English. In Johns, T. & King, P. (Eds.), Classroom Concordancing. ELR Journal, Vol. 4.

Gougenheim, G., et al. (1956). L'Élaboration du français élémentaire. Didier.

Greenbaum, S. (1988). Good English and the Grammarian. Longman.

Johns, T. (1988). Whence and whither classroom concordancing? In Bongaerts, T., et al. (Eds.), Computer Applications in Language Learning. Foris.

Johns, T. (1991). Should you be persuaded: Two examples of data-driven learning. In Johns, T. & King, P. (Eds.), Classroom Concordancing. ELR Journal, Vol. 4, 1-16.

Johns, T. (1991). From printout to handout. In Johns, T. & King, P. (Eds.), Classroom Concordancing. ELR Journal, Vol. 4, 27-46.

Johns, T. & King, P. (Eds.) (1991). Classroom Concordancing. ELR Journal, Vol. 4.

Lewis, M. (1993). The Lexical Approach. LTP.

Mackey, W. F. (1965). Language Teaching Analysis. Longmans.

Nattinger, J. & DeCarrico, J. (1992). Lexical Phrases and Language Teaching. OUP.

Pawley, A. & Syder, F. (1983). Two puzzles for linguistic theory: nativelike selection and nativelike fluency. In Richards, J. & Schmidt, R. (Eds.), Language and Communication. Longman.

Renouf, A. (1987). Corpus development. In Sinclair, J. (Ed.), Looking Up. Collins.

Rosch, E. (1975). Cognitive representations of semantic categories. Journal of Experimental Psychology, 104.

Sinclair, J. (Ed.) (1987). Looking Up. Collins.

Sinclair, J., et al. (1987). Collins COBUILD Essential English Dictionary. Collins.

Sinclair, J., Fox, G., et al. (1991). Collins COBUILD English Grammar. Collins.

Sinclair, J. (1991). Corpus, Concordance, Collocation. OUP.

Skehan, P. (1991). Individual differences in second language learning. Studies in Second Language Acquisition, 13(2), 275-298.

Widdowson, H. G. (1989). Knowledge of language and ability for use. Applied Linguistics, 10(2).

Widdowson, H. G. (1990). Aspects of Language Teaching. OUP.

Widdowson, H. G. (1993). Response to Biber. AAAL colloquium on Discourse Analysis.

Willis, D. (1990). The Lexical Syllabus. Collins.
