Assessing to Teach: Writing in an ESL Classroom
By Nupur Samuel
“Did you read my essay? It’s not good, no?” asked Shefali. Taken aback by this self-evaluation, I looked at her closely and realised she was asking the question in all earnestness. Shefali was a second-semester student of French enrolled in an Intermediate-level English language proficiency course. She was bright, cheerful and took the initiative to engage actively in classroom discussions. Yet she was a bundle of nerves and self-deprecation when it came to writing. It made me wonder what reduced an otherwise confident learner to such a state of low self-esteem. The answer probably lies in the manner in which we teach and assess writing. In India, academic writing is most often the primary site of assessing students’ performance across disciplines and levels. Paradoxically, while students’ content knowledge is assessed through their writing skills, they are seldom taught writing in class. I argue that part of the problem – the inability to consider writing as a cognitive, critical, original exercise – lies in our understanding of assessment as a discrete, objective and stand-alone activity. In contrast is the idea that teaching and assessment are interdependent processes; that not only teaching but assessment too can allow learners’ abilities to develop, at the same time enabling the teacher-assessor to arrive at a more nuanced understanding of those abilities.
Assessment of writing is closely dependent on what we want to assess – content, language, or structure. At the primary and secondary school level, English language assessment in India consists primarily of achievement tests in which reading and writing tasks are based on prescribed texts, and the aim is to test knowledge and understanding of these set texts. Classroom teaching mirrors testing: the focus of every lesson is to help learners prepare for assessment. Since English is taught as a content subject and not as a tool of communication, learners are unable to use it in different situations in different ways or to produce any original piece of written or spoken text. The washback of such testing is that even after years of education through the English medium, most graduates are unable to use the language for any meaningful purpose in real-life contexts. When these learners enter higher education after years of schooling, they are woefully ill-prepared to engage in meaningful discourse in any discipline, especially since most institutes of higher education have English as the medium of instruction.
What are the implications of such assessment practices? How do we address the obvious lacunae these case studies reveal? I argue that the design of assessment prevents teaching from encouraging the production of new knowledge. By reducing the process of writing to a finished product that quantifies learning and rewards rote-memorisation at the cost of any originality or criticality of thought, the system ensures that learners only regurgitate knowledge but never produce it. More importantly, assessment continues to act as a powerful gatekeeper. In Assessment Culture and National Testing, Audrey M. Kleinsasser points out that it is the element of mystery that makes traditional testing powerful; there is evidence for this in the majority of assessment exercises, where the only indication of a learner’s performance is a number or a grade.
Policy documents on education or assessment reforms keep the mystery intact by dealing with it in surreptitious terms, employing such vague jargon that even after decades the implementation of those reforms remains unclear. The National Curriculum Framework (NCF 2005), in its focus group discussion on examination reforms, traced the current problematic assessment system to the colonial era, which encouraged mastery of prescribed content. The NCF asks teachers to maintain portfolios of learner performance and to have an ongoing system of assessment, but does not mention how this exercise would be undertaken when class sizes are large or teachers’ own language proficiency is inadequate. The NCF was not saying anything new as far as the cry for reform goes. In 1948, the University Education Commission had noted the dismal situation going as far back as 1902, calling for examinations to be made transparent and for bias and subjectivity in assessment to be removed. It asserted that exams should be reliable and measure what they are expected to measure. Almost four decades later, the National Policy on Education (NPE, 1986) reiterated this, stating that it was imperative to have an assessment system that improves teaching and learning in a powerful manner. Another thirty years have elapsed, but the narrative does not seem to have changed much. The draft of the latest National Policy on Education (2016) highlights the same issues plaguing the education system. Like others before it, this document also suggests that assessment should be continuous, avoiding simple labels of pass or fail that are invalid indicators of learning.
Though the policy documents lament the inefficacy of assessment and suggest reforms, they provide neither clear directives nor a theoretical underpinning on the basis of which any revision of assessment could be undertaken. Hardly any space is given to the discussion of a pedagogy of writing, nor is there any direction about appropriate ways of teaching and learning writing and the role that teachers of English can play in this. The role of other discipline teachers – those not identified as English teachers – in helping students develop their writing does not find any place even in the imagination of curriculum or policy developers. Universities seldom address this gap, and documents at the university level most often restrict themselves to laying out guidelines for examination schedules and other practical issues, such as the weighting awarded to each assessment exercise and when students could retake them. This observation holds for state- and centrally funded universities as well as private ones, though some universities are setting up writing centres to address the growing concern that students are unable to articulate themselves through writing and need dedicated support.
Why are the policy documents silent on the pedagogy and assessment of writing? Why is there no discussion of how to teach writing, which is crucial since it is through writing that students’ learning is assessed? Since Vivian Zamel introduced the idea of writing as a process into second language studies in 1976, highlighting similarities between writing in first and second languages, emphasis has shifted from a text-oriented approach to writer-oriented research. This process approach to writing draws attention to writing as a cognitive process that is recursive and non-linear, as well as a social and cultural phenomenon; the writer is at the core of this process. So when documents dismiss subjectivity, they take the teacher out of the teaching and ignore the learner’s learning. They dismiss Shefali’s feelings of inadequacy and Payal Singh’s experiential learning process, which she describes with candour and self-reflexivity in this anthology. What do we learn from our students’ voices, and how do we implement the suggestions that all policies have been repeating since independence? There are ways in which we can provide ongoing assessment that measures what it is expected to measure, is objective because it is based on transparent rubrics, and is enabling because it assists learning even during the process of assessment. I argue that this is possible if we acknowledge the organic, interdependent relationship between teaching and assessment.
One way of demystifying the assessment or grading process is to help our learners decode the grades or scores that characterise learning outcomes. Payal Singh shares how her writing teacher advised her to overcome the fear of grades, and I suggest that one way of doing so is to have an in-class discussion on rubrics to help learners understand what a B Minus or a score of 4 out of 10 means. The second logical step would be to teach strategies and skills to achieve a better grade. This becomes an even more powerful tool when learners themselves arrive at the rubrics in a language they understand and which does not alienate them. What better way can there be to achieve the objectivity that policy documents value so highly? In institutes where rubrics already exist, asking learners to rephrase or rewrite them in a language that is more meaningful to them would be empowering. For instance, in my class of a Basic English Proficiency course, students were struggling to give feedback to each other, so I decided to work with what they knew: grades and scores. I asked them to assign a score out of 10; later they were asked to defend why they had assigned that score to their peer’s essay. As they struggled to justify the scores, the discussion moved to developing a rubric with factors such as ideas, organisation of ideas, grammar and punctuation as separate heads that must be considered while evaluating a piece of writing. This reflected the participants’ growing awareness of different aspects of writing as well as of the process of writing. The rubrics developed through interaction with learners were more meaningful to them since they had developed them in their own language. The exercise helped them decode the grades and comments they had been receiving from teachers and to understand how they could improve their scores.
For instance, they realised that if they gave supporting ideas along with main ideas, they would receive a better score than if they did not develop the main ideas properly. Thus, the students described a score of point 4 (for organisation of ideas) as, “Half organisation is well and half is here and there” – indicating that some parts were organised though others were not. This exercise not only gave them the linguistic repertoire to discuss writing, it enabled them to give specific support to their peers on their writing tasks.
Peer feedback has many benefits, especially when class sizes are large. Moreover, it paves the way for self-assessment; student-writers have often reflected that it was easier to first comment on their classmates’ writing than on their own. Students often complain that they cannot review their own writing and insist that the teacher should do it. Yet for peer assessment too, they need structured guidance, since schooling has trained them to focus only on surface errors such as errors of spelling and grammar. It takes guided peer review for them to start reading for organisation, evidence and ideas. Here again, Payal’s writing teacher’s caution to avoid comparison with others is worth noting. When students see each other not as competitors but as collaborators and co-constructors of knowledge, the language of feedback becomes more promising and enabling. For instance, a common template for a peer review worksheet – asking for two things that the student liked about the text and one suggestion that would help the peer improve their writing – allows for more positive engagement with the peer’s text.
None of the above suggestions will be possible unless the writing space is safe and welcoming, a space where feedback is given with care, with the intention of helping others grow and not of pointing out their shortcomings. This allows writers to be open and to share their writing with confidence – something Payal Singh did when she shared her draft for this anthology with me – with the implicit faith that the reader or collaborator would be sensitive and appreciate that no writing is devoid of the writer. It was this sense of a safe haven that transformed Shefali, who began the course by disparaging her own writing but soon grew into a woman with a strong personal voice, confidently defending her writing during discussions. It pulls writing out of the lonely confines of one’s home into a shared space where we learn and develop through mutual support. Writing itself is transformed and ceases to be the lonely, agonising activity that Tricia Hedge describes in her book Writing. I witnessed this transformation when a student spent time discussing a draft before submission and requested extra time to revise the paper, showing me that in a successful teaching-assessing scenario, students too begin to see assessment as part of the learning process.
Nupur Samuel is Assistant Professor at Ambedkar University Delhi. Her research interests include English language assessment, writing pedagogy, inclusive education and teacher education. Nupur develops teaching-learning materials and tests for English as a Second Language (ESL) students and ESL teachers. She can also be found baking, doing crochet, chasing after her dog and occasionally cooking in unfamiliar kitchens.
For more stories, read Café Dissensus Everyday, the blog of Café Dissensus Magazine.