Some years ago now, I wrote a book entitled Resisting qualifications reforms in New Zealand: The English Study Design as constructive dissent. I notice that it’s still available from Sense Publishers. The book reports on the biggest action research project I ever led, the outcome of which was the development of the English Study Design, a senior secondary-school qualification in English that was available to students in Years 12 and 13. The development of this qualification between 1997 and 2004 was one helluva ride (and the book tells the story). Its relationship to critical literacy resides in the fact that it developed a particular version of subject English, designed a programme for this particular construction of English, and developed assessment and moderation processes that were commensurate with this construction.
As the book relates, the ESD became an NCEA “alternative”, owned as a qualification by the University of Waikato. For two years (2003–2004), we actually got away with this challenge to NZQA’s domination of the discursive space, but eventually this agency made things so difficult for us that we withdrew the qualification and put it on ice.
One of my collaborators on this project was Professor Cedric Hall, now Emeritus Professor of Education at Victoria University in Wellington. He has actively opposed the introduction of separate-standards-based assessment regimes in educational settings, first in universities (where he was successful) and then in schools (where he was not). Along with Warwick Elley, he has been the most balanced and informed of NCEA critics, even attempting to work collaboratively with NZQA on a number of their review committees.
He still has a bee in his bonnet about the NCEA, and a few days ago I received from him a draft paper in which he reiterates a number of major concerns with the NCEA, in particular the problem of coherence. In a summary of his argument, he writes:
My basic argument is that NCEA, by the very nature of its design, fosters the breakdown of the curriculum into learning chunks (I have no problem with this), but does not foster anything like as well the knowledge and skills that students need for understanding how the chunks relate to each other. In other words, the more you break down a subject (course) into separate components for assessment purposes, the more you need to address the assessment of the knowledge and skills that show that students understand the important relationships and connections within the curriculum. In a high stakes assessment environment, if you don’t address through assessment the “integration” of the parts, then the knowledge and skills that underpin the links and connections within and between subjects will be given limited or even no coverage in course design and teaching.
I agree with this and with other points made by Hall in his paper. Those reading this post may well be interested in checking out an article on the NCEA in the Dominion Post this year (2015) on the subject of grade inflation (a separate issue, but a good example of problems in the NCEA with reliability and moderation).
Meanwhile, my NCEA focus here is much more on the micro level. It addresses the issue of construct validity, that is, the manner in which an assessment task or rubric or schedule actually constructs the skills/knowledge being assessed. Here is my example for this particular post. It is based on a recent article where I argued that Literary Studies, for its own health, needs to move from “English” into the Arts curriculum. My critical target here, then, is NCEA Achievement Standards that are concerned with literary texts. All assessment technologies construct in some way the object of evaluation, be it close reading of literary texts or some form of creative writing. However, high-stakes assessment technologies in a context characterised by workload intensification are likely to ensure that teachers tailor their teaching to the way assessments are designed. That is why construct validity matters.
By way of example, let’s look at the 2014 examination for AS91098: “Analyse specified aspect(s) of studied written text(s), supported by evidence”. This standard is assessed by external examination and has 4 credits attached to it. Its specification indicates that to achieve this standard, students can study fiction, non-fiction, poetry or print media. (Yes, the need to study literature at all is being steadily eroded in English.) The examination requires candidates to write an essay on at least one text from the following categories: novel, non-fiction, print media, drama, short story, poetry/song lyric, digital/online text. Regardless of their choice of genre, students are expected to choose one of eight questions:
- Analyse how shifts in power were used to illustrate one or more themes in the written text(s).
- Analyse how language features were used to reveal the attitudes of one or more characters or individuals in the written text(s).
- Analyse how structure was used to reveal the writer’s purpose in the written text(s).
- Analyse how setting was used to develop your understanding of one or more themes in the written text(s).
- Analyse how language features were used to shape your reaction to one or more ideas in the written text(s).
- Analyse how one or more significant events were used to comment on an aspect of society in the written text(s).
- Analyse how cruel or kind behaviour was used to show one or more ideas in the written text(s).
Note: “Ideas” may refer to character, theme, or setting.
- Analyse how the ending created a satisfying outcome in the written text(s).
I trust the ungentle reader will be perceiving a degree of predictability in the task construction. The key verb is “analyse”, while the key preposition is “in”. These discursive markers locate the construction of reading firmly in the New Criticism construction of literary reading. (The examination formula is identical for 2012 and 2013 also.) Reading here (and it’s not necessarily literary reading) is reduced to a form of evidence-hunting to be exhibited in an essay that is likely to be as formulaic as the question itself. Textual meaning is being constructed here as “in” the text, thus liberating students from the difficult knowledge that textual meaning may reside elsewhere. There is scant invitation to bring an authentic personal response to proceedings (as per Reader Response criticism). As for critique, for example, via a critical literacy approach to reading – that is not even on the horizon.