Communicative Language Testing is intended to evaluate learners’ ability to use the target language in real-life situations. It is now ten years since Communicative Language Teaching (CLT) was introduced into the secondary English curriculum of Bangladesh. The test of English at the SSC level is therefore facing the challenge of assessing learners’ communicative skills. This study looks at the current version of the SSC English test and explores the possibilities of incorporating a more communicatively based test format. The study is carried out on the basis of an analysis of the test items on writing skills set in the SSC test papers.
It also explores the views of Bangladeshi secondary English teachers and internationally renowned language testing experts. The paper argues that, although secondary English education in Bangladesh entered a communicative era ten years ago, the current SSC test is not in accordance with the curriculum goals. It finds that the test items on writing lack both validity and reliability.
Suggestions for improving the present SSC test include: defining the purpose of communication in English for SSC-level learners, drafting test specifications, setting test items that are more relevant to a communicative purpose, and developing a marking scheme for the subjective items.
Introduction
The idea of Communicative Language Teaching (CLT) has had considerable influence on English language teaching, curriculum and test design. Since the 1970s there have been considerable developments in the field of language testing, and various theories and practical testing models have evolved from the concept of communicative competence.
Bangladesh has introduced a communicative English curriculum in its secondary education sector.
However, the aims and objectives of the communicative curriculum can never be achieved without a testing system that assesses the communicative ability of learners. This paper looks at the current Secondary School Certificate (SSC) English examination to identify the elements of communicative testing in it and examines the suitability of this testing system to the curriculum objectives. The study involves a critical analysis of the present SSC test. It also explores the views of Bangladeshi secondary English teachers and two internationally renowned language testing experts on the SSC test and investigates ways of making it more communicatively based.
Background of English Language Teaching (ELT) in Bangladesh
The teaching of English in Bangladesh has a long history that traces back to the colonial era. British models of teaching English continued to shape ELT in post-colonial Bengal even after colonial rule ended in 1947, and the grammar-translation method remained the dominant teaching approach in the Indian subcontinent. After the independence of Bangladesh (1971), several attempts were made to redesign the ELT sector, with little success.
In 1990, a four-year ELT project called Orientation of Secondary School Teachers for Teaching English in Bangladesh (OSSTTEB) was jointly launched by the Government of Bangladesh and DFID, UK, to improve English language teaching and learning at the secondary level. This project revised, adapted and revamped the secondary English curriculum (Hoque, 1999). In 1997, a major step was taken with the introduction of the English Language Teaching Improvement Project (ELTIP), which began working with a view to enhancing the communicative competence of secondary-level learners. Under this project, a communicative curriculum, revised textbooks and newly written Teachers’ Guides (TGs) were developed, and some 30 thousand English teachers, test administrators and markers were trained.
The SSC examination
The SSC is the first public examination in Bangladesh, which learners sit after 10 years of schooling. English is a compulsory subject at this level. The examination is administered countrywide through the seven Boards of Intermediate and Secondary Education (BISE). The question papers are set independently by each BISE, following the national curriculum and syllabus of the National Curriculum and Textbook Board (NCTB). The NCTB syllabus document explicitly recommends a testing system that is in keeping with the spirit of CLT. The syllabus document for classes 9-10 (NCTB 1999: 135) states, “Until and unless an acceptable public examination is devised that tests English language skills rather than students’ ability to memorise and copy without understanding, the aims and objectives of the syllabus can never be realised.” Moreover, sample question papers were provided in the TGs, and teachers were encouraged to follow these test models.
Research Questions
This study is concerned with the following research questions: 1. How are students’ writing skills tested by the present SSC English examination? 2. To what extent are these test items communicatively based? 3. What do Bangladeshi teachers and international testing experts think of the current SSC English examination? 4. How can the SSC examination be improved to reflect the goals stated in the national curriculum and syllabus document?
Research methodology
The approach to this research belongs to the interpretative epistemology, which holds that knowledge in social research is concerned not with generalisation, prediction and control but with interpretation, meaning and illumination (Usher, 1996: 12). The approach here is guided by the belief that reality is a complex phenomenon which does not admit of orderly events or simple cause-effect relationships. The data used are concerned not only with facts but also with values.
In looking at a testing system which is comparatively new in the context of Bangladesh, it is accepted that reality is a human construct. The purpose here is to explore perspectives and shared meanings (Wellington, 2000: 16), and the data used are qualitative.
The research procedure uses three different sources for collecting data and involves three steps: a) a critical analysis of the SSC English test format, b) collecting the views of Bangladeshi English teachers through questionnaires, and c) interviewing two Australian testing experts based at Melbourne University. The analysis of the SSC examination involves a close examination of the existing SSC test papers, syllabus document and marking criteria. The questionnaire attempts to explore the values and attitudes of secondary English teachers in relation to the SSC English testing system. The interviews with the language testing experts are intended to generate valuable ideas that might be applicable in improving the SSC testing system.
The development of modern language testing
The development of modern language testing occurred in three historical phases prior to and during the 1970s. These three periods are the pre-scientific era, the psychometric-structuralist era and the integrative-sociolinguistic era (Spolsky, 1978: 5). According to Spolsky, the pre-scientific period was characterised by a lack of concern for statistical matters or for such notions as objectivity and reliability in language testing, whereas the psychometric-structuralist period was concerned with tests that focus on discrete items. In fact, the psychometric-structuralist approach provided the basis for the flourishing of the standardised language test, with its emphasis on discrete structure-point items. However, discrete-point tests were also criticised as inadequate indicators of language proficiency (Oller, 1979: 212). Language testing turned towards global tests in the 1970s, which opened up the psycholinguistic-sociolinguistic era (Weir, 1988: 3). This format of global and integrative tests (such as cloze) gained theoretical support from many researchers.
Davies distinguishes four essential kinds of language tests on the basis of their function or use: achievement tests, proficiency tests, aptitude tests and diagnostic tests (Davies and Allan, 1977: 46-7). While achievement tests are concerned with assessing what has been learned of a known syllabus, proficiency tests assess learning based on either a known or an unknown syllabus.
The concept of communicative competence
The idea of communicative language teaching emerged in the 1970s following Hymes’ theory of communicative competence, which greatly emphasised learners’ ability to use language in context, particularly in terms of the social demands of performance (McNamara, 2000: 116). Hymes believed that knowing a language is more than knowing its rules. Once Hymes proposed the idea of communicative competence, it was expanded in various ways over the following twenty years, and the term competence was interpreted differently by different researchers. To some it simply means the ability to ‘communicate’; to others it means the social rules of language use; and to yet others, it refers to a set of abilities including knowledge of linguistic, sociolinguistic and discourse rules (Bachman & Palmer, 1984: 34). However, the essential idea of communicative competence remains the ability to use language appropriately, both receptively and productively, in real situations (Kiato et al., 1996: 1).
The development of communicative language testing
The concept of communicative testing was developed on the basis of Hymes’ two-dimensional model of communicative competence, which contains a linguistic and a sociolinguistic component. Davies et al. provide the following definition of communicative language tests:
Communicative tests are tests of communicative skills, typically used in contradistinction to tests of grammatical knowledge. Such tests usually claim to operationalise theories of communicative competence, although the form they take will depend on which dimension they choose to emphasise, be it specificity to context, authenticity of materials or the simulation of real-life performance. (Davies et al., 1999: 26)
Harrison mentions three features which distinguish a communicative language test from other tests. He argues:
1. A communicative test should assess language used for a purpose beyond itself. 2. A communicative test should depend on the bridging of an information gap. It has to suggest a language-using purpose which can be fulfilled by the communicative ability so far acquired by the learners. 3. A communicative test should represent an encounter. The situation at the end of it should be different from what it was at the beginning, and this means there has to be some sequence within the test.
(Harrison, 1983: 77-8)
Competence vs. performance
There have been debates among researchers regarding the nature and function of communicative tests. One issue of controversy was how to specify the components of communicative competence and relate them to the measurement of performance. Another complication arose because the terms ‘competence’ and ‘performance’ were used differently by various researchers, suggesting important distinctions between them. Chomsky (1965) claimed that ‘competence’ refers to the linguistic system which an ideal native speaker has internalised, whereas ‘performance’ is mainly concerned with the psychological factors involved in the perception and production of speech.
Later, Hymes (1972) explicitly, and Campbell and Wales (1970) implicitly, proposed a broader notion of communicative competence in which they included grammatical competence as well as contextual or sociolinguistic competence. They nonetheless retained the distinction between communicative ‘competence’ and ‘performance’. According to Canale and Swain (1980: 3), ‘competence’ refers to knowledge of grammar and other aspects of language, whereas ‘performance’ refers to actual use.
For language testing researchers it was difficult to determine an ideal test model that would be valid and reliable enough to test communicative competence. They were concerned with what performances or task-based activities should be devised in order to assess learners’ communicative competence. The most discussed answer to this question is the one offered by Canale and Swain (1980) who, in their influential work ‘Approaches to Second Language Testing’, specified four aspects of knowledge or competence: grammatical competence, sociolinguistic competence, strategic competence and discourse competence.
What makes good communicative tests?
Though a communicative language test intends to measure how students use language in real life, it is difficult to set a task that can measure communicative competence in real contexts. Ellison (2001: 44) argues that testing by its very nature is artificial, and unless we are to follow an examinee around all the time noting how he/she deals with the target language in all situations, we necessarily have a less than real situation. However, it should be the aim of the test setter to try to approximate real situations as much as possible. Referring to the problem of determining the elements of communicative testing, Morrow (1991) states:
The important question which a communicative test must answer is whether (or how well) a candidate can use language to communicate meanings. But ‘communicate meanings’ is a very elusive criterion indeed on which to base judgment.
(Morrow, 1991: 112)
There have been attempts to develop a model of communicative competence and valid tests of its components. Bachman and Palmer (1984: 35) describe three approaches to specifying what language tests measure: the skill-component approach, the communicative approach and the measurement approach. Offering a detailed interpretation of the Canale-Swain communicative approach, Bachman and Palmer specify some factors (trait factors, modal factors, method factors) that should be considered while designing a performance test. Having examined the construction of a model which encompasses these three factors, Skehan (1991: 9) regarded it as ‘being of pivotal importance in influencing the language testing theories and practices throughout the 1990s.’ Later, Bachman went further in offering important distinctions between task-based and construct-based approaches to test design. He explained:
The procedures for design, development, and use of language tests should incorporate both a specification of the assessment tasks to be included and a definition of the abilities to be assessed.
(Bachman, 2000: 456)
Task-based language assessment gave rise to two questions: a) how real-life task types are identified, selected and characterised, and b) how pedagogic or assessment tasks are related to these (Bachman, 2000: 459).
Discussions of different approaches to language testing are concerned with their strengths and limitations in terms of the criteria of validity and reliability. Validity in language testing is about whether a test measures what it is intended to measure. Other arguments about test validity include questions of content relevance and representativeness, task difficulty, etc. Reliability refers to the extent to which test scores are consistent.
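As a brief illustration (a standard psychometric formulation, not drawn from the SSC documents or the sources cited here), the consistency of scores is often expressed as a reliability coefficient. For a test made up of k separately scored items, Cronbach’s alpha is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
\]

where σ_i² is the variance of scores on item i and σ_X² is the variance of total test scores. For subjectively marked writing tasks of the kind discussed below, reliability is more often reported as the agreement or correlation between two independent markers’ scores.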
Assessing second language writing
Assessment of second language writing has been discussed on the basis of two different approaches: objective tests of writing and direct tests of writing. Objective tests claim to test writing through verbal reasoning, error recognition and other measures that have been shown to correlate fairly highly with measured writing ability (Lyons, 1991: 5). In direct tests of writing, actual samples of students’ writing are assessed. Direct tests of writing have received the support of many researchers because they engage students with more communicative and creative task types. However, this approach has also been criticised for lacking reliability. Despite their problems with reliability, direct tests remain extremely popular in many academic settings throughout the world.
Kiato et al. (1996: 2) refer to some typical problems of testing writing. They point out that testing writing objectively may not necessarily reflect the way writing is used by students in the real world. On the other hand, testing writing in a way that reflects how students use writing in the real world is difficult to assess objectively, and the test setters have less control over the writing tasks. They argue that the ability to write involves six component skills: grammatical ability, lexical ability, mechanical ability, stylistic skills, organisational skills and judgment of appropriacy. Among the writing tasks they find useful are gap filling, form completion, making corrections, and letter and essay writing.
Weir (1988: 63-4) presents an elaborate discussion of both indirect (objective) and direct tests and distinguishes the two types. He argues that writing can be divided into discrete components such as grammar, vocabulary and punctuation, and that these components can be tested separately by means of objective tests. He suggests that both productive and receptive skills can be broken down into levels of grammar and lexis according to a discrete-point framework, and that objective tasks such as cloze, selective deletion and gap filling can be designed for testing reading together with writing. Weir describes the direct test of writing as a more integrative test which assesses a candidate’s ability to perform certain of the functional tasks required in the target situation.
Research on writing involving both native speakers and second language learners is also concerned with basic study of the nature of the writing process in order to relate it to the validity of writing test tasks. Some of the questions involved are:
1. To what extent is performance influenced by the amount of prior knowledge that writers have about the topic they are asked to write about in a test? 2. Does it make a difference how the writing task is specified on the test paper? 3. Do different types of tasks produce significant differences in the performance of learners in a writing test? (Read, 1991: 77) Johns (1991: 171) suggests three criteria for academic testing of writing: (1) use of reading for writing assessment: testing for audience awareness, (2) exploitation of common writing genres: argumentation and problem-solution, and (3) testing of content, conceptual control and planning. He insists that reading and writing be combined to give a more authentic context for testing writing for academic purposes. He says:
Because reading and writing are interconnected at all academic levels, it seems unprofessional and certainly unacademic to test writing without the real interactivity that reading provides.
(Johns, 1991: 176)
The literature on testing has suggested different ways of dealing with the problems of setting direct writing tasks. The difficulty with these tasks is that they are very hard to mark, as the marking of such tasks is rather subjective. One solution suggested by many testing experts is to use an analytical marking scheme to help make the marking consistent. Murphy (1979: 19) outlined the nature of the marking scheme required by the Associated Examining Boards: “A marking scheme is a comprehensive document indicating the explicit criteria against which candidates’ answers will be judged; it allows the examiners to relate specific marks to answers of specified quality.”
There have been discussions of two forms of marking for free writing tasks: impressionistic and analytic. However, there are arguments over what valid and reliable measures of writing can be used and what the relationship of those measures might be to an overall impressionistic quality score. The TOEFL examination included a direct writing measure (Connor, 1991: 216) in 1986, the Test of Written English, which was marked holistically (TOEFL Test of Written English Guide, 1989).
A great deal of research was carried out by the Educational Testing Service into the development and validation of a measure to assess communicative competence in writing (Bridgman & Carlson, 1983; Carlson et al., 1985). A holistic scoring guide was developed to mark two common topics, comparison/contrast and describing a graph; it had six levels and included syntactic and rhetorical criteria. The Test of Written English Scoring Guidelines (1989) identified the following criteria for a written task.
An essay in the highest category is well organised and well developed, effectively addresses the writing task, uses appropriate details to support or illustrate ideas, shows unity, coherence and progression, shows consistent facility in the use of language, and demonstrates syntactic variety and appropriate word choice.
(The Test of Written English Scoring Guidelines, 1989)
The marking scheme suggested by ELTIP to help teachers assess written compositions is based on five criteria: grammar, vocabulary, mechanical accuracy, communication and content. A marking scheme like this shows how developments in language testing research are providing models for dealing with the challenges of marking writing tasks.
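As a minimal sketch of how such an analytic scheme might be operationalised, the example below totals the marks awarded under the five ELTIP criteria; the per-criterion maxima and the sample marks are hypothetical illustrations, not the official ELTIP allocations:

```python
# Minimal sketch of analytic marking under five criteria.
# The maxima and the sample marks are hypothetical, not the official ELTIP values.
CRITERION_MAXIMA = {
    "grammar": 5,
    "vocabulary": 5,
    "mechanical_accuracy": 5,
    "communication": 5,
    "content": 5,
}

def total_mark(awarded: dict) -> float:
    """Sum the marks awarded per criterion, never exceeding each criterion's maximum."""
    total = 0.0
    for criterion, maximum in CRITERION_MAXIMA.items():
        mark = awarded.get(criterion, 0.0)  # unmarked criteria count as zero
        total += min(mark, maximum)
    return total

# One examiner's (hypothetical) analytic marks for a single composition.
sample = {"grammar": 3.5, "vocabulary": 4, "mechanical_accuracy": 3,
          "communication": 4.5, "content": 4}
print(total_mark(sample))  # 19.0 out of a possible 25
```

Recording marks criterion by criterion in this way also makes it straightforward to compare two markers on each criterion, which supports the consistency that analytic schemes are intended to provide.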
The SSC Curriculum, syllabus and the test
The SSC is the school-leaving public examination for grade 10 students. English is a compulsory subject at this level, and the test of English is an achievement test in kind. The test is designed to assess reading and writing skills only, as there is no provision for testing listening and speaking skills.
The NCTB syllabus of English focuses on the development of the four skills through learner-centred activities within meaningful contexts. It gives importance to choosing contexts which reflect real social situations outside the classroom and make the learning of English ‘relevant, interesting and enjoyable’. The syllabus expects students to achieve an ‘elementary to intermediate command of the four language skills’ by the end of the secondary level. The curriculum document specifies the aims and purposes of learning English when it states:
English must be recognised as an important work-oriented skill that is needed if the employment, development and educational needs of the country are to be met successfully. Increased communicative competence in English, therefore, constitutes an essential skill for learners at this stage.
(SSC Syllabus Document, 1999, NCTB: 136)
Terminal competencies in the four skills are specified in the NCTB syllabus. The competencies in writing skills for grade 10 are outlined as follows:
Students should be able to:
a) write simple dialogues, formal and informal letters including letters of application, and reports. b) demonstrate imagination and creativity in appropriate written forms. c) fill in forms (e.g. job applications) and write a curriculum vitae. d) plan and organise the above tasks effectively in order to communicate ideas and information clearly, accurately and with relevance to the topic. e) take notes and dictation
f) use different punctuation and graphological devices appropriately.