Abstract
Teachers come across errors not only in tests but also in their mathematics classrooms virtually every day. When they respond to learners’ errors in their classrooms, during or after teaching, teachers are actively carrying out formative assessment. In South Africa the Annual National Assessment, a written test under the auspices of the Department of Basic Education, requires that teachers use learner data diagnostically. This places a new and complex cognitive demand on teachers’ pedagogical content knowledge. We argue that teachers’ involvement in, and application of, error analysis is an integral aspect of teacher knowledge. The Data Informed Practice Improvement Project was one of the first attempts in South Africa to include teachers in a systematic process of interpreting learners’ performance data. In this article we analyse video data of teachers’ engagement with errors during interactions with learners in their classrooms and in one-on-one interviews with learners (17 lessons and 13 interviews). The schema of teachers’ knowledge of error analysis and the complexity of its application are discussed in relation to Ball’s domains of knowledge and Hugo’s explanation of the relation between cognitive and pedagogical loads. The analysis suggests that diagnostic assessment requires teachers to focus their attention on the germane load of the task, which in turn requires awareness of error and the use of specific probing questions in diagnostic reasoning about learners’ errors. Quantitative and qualitative findings show the difficulty of this activity. For the 62 teachers who took part in this project, the demands made by diagnostic assessment exceeded their capacity, resulting in many instances (mainly in the classroom) where teachers ignored learners’ errors or dealt with them only partially.
Introduction
Teachers come across errors in the mathematics classroom virtually every day. When they respond to learners’ errors in their classrooms, during or after teaching, teachers are actively carrying out formative assessment (Black & Wiliam, 2006). Responding to learners’ errors is a specialised activity of formative assessment, which relies on teachers’ deep knowledge of content and requires teachers’ professional judgement about how to respond to learners’ needs when teaching that content. Working with learners’ errors diagnostically in context implies that the ‘cognitive architecture’ (Hugo, 2015, p. 81) of teachers’ mathematics knowledge is strong and that their knowledge is stored in the form of ‘networked schemas’ in their long-term memory, ready to be selected economically, for example in the form of principles, representations and other symbolic forms. Yet in South Africa there is empirical evidence showing that teachers’ mathematical and pedagogical content knowledge is weak (Taylor, Van der Berg & Mabogoane, 2013).
Teachers have always had to assess learners’ work and recognise the errors present in this work, but prior to the introduction of the Annual National Assessment (ANA), a written test under the auspices of the Department of Basic Education, there were no specific demands on teachers to use learners’ errors as building blocks for teaching and learning. With the introduction of ANA, teachers are now required ‘to interpret their own learners’ performance in national (and other) assessments’ (Departments of Basic Education & Higher Education and Training, 2011, p. 2) and to develop better lessons on the basis of these interpretations. This requirement implies that teachers are expected to use learner data diagnostically, which places a new and complex cognitive demand on their pedagogical content knowledge (for example, deciding what to focus on, how to do so, and what to leave out or postpone).
In a previous article (Shalem, Sapire & Sorto, 2014), we developed analytical criteria for teachers’ explanations of learners’ errors in standardised mathematics assessments, following the framework of Mathematics Knowledge for Teaching (Ball, Hill & Bass, 2005; Ball, Thames & Phelps, 2008; Hill, Ball & Schilling, 2008). In this article we operationalise those criteria for an analysis of teachers’ engagement with errors during interactions with learners in their classrooms and in one-on-one interviews with learners. The empirical data for this article were collected during the first two phases of the Data Informed Practice Improvement Project, a three-year teacher development project run as a collaboration between the School of Education of a Johannesburg university and the Gauteng Department of Education.
The specific problem we want to bring to light, in relation to the diagnostic work required for this specialised activity as part of formative assessment, is that when teachers respond to learners’ errors in context, their ‘pedagogical load’ increases in complexity because of the ‘cognitive load’ placed on them (Hugo, 2015, p. 83). When working with learners’ errors, teachers need to increase learners’ ‘germane load’, which involves reflecting on and making meaning of the patterns underlying errors. By creating connections between concepts and the related errors, teachers increase the learners’ germane load. This pedagogic activity is essential if learners are to generalise learning from errors. In Hugo’s (2015) terms, generalising learning means shifting information ‘from the limited world of working memory into knowledge networked within the infinite world of long-term memory’ (p. 85). At the same time as increasing germane load, teachers need to reduce the ‘extraneous load’ on learners by limiting the factors that increase it: incorrect mathematical explanations, misleading statements or examples, all of which can lead to incorrect generalisations. The trade-off between increasing germane load and reducing extraneous load places cognitive load on teachers, which they need to manage at the same time as they manage their pedagogical load.
The research questions addressed in this article are:
- How did teachers engage with learners’ errors in mathematics classes and in one-on-one interviews with learners?
- What does this reveal about the relationship between the pedagogical and cognitive loads involved in using errors for teaching?
Our first step is to build a conceptual framework which shows what constitutes the schema of teachers’ knowledge of error analysis.
Teacher knowledge of error analysis
Studies on teacher knowledge in the field of mathematics education agree that there is a professional knowledge of mathematics for teaching (Adler, 2011; Anderson & Clark, 2012; Ball et al., 2005; Bertram, 2011; Grossman, 1990; Rowland & Turner, 2008; Shalem, 2013; Shulman, 1986). This knowledge is ‘tailored to the work teachers do with curriculum materials, instruction and students’ (Ball et al., 2005, p. 16). These studies maintain that teachers need specialised knowledge of what they teach, a broad sense of diverse methods of teaching and, most importantly, ways of explaining and representing the content they teach, with a view to imparting it to learners of a specific age and cognitive level of development. Shulman (1986) was the first to articulate this idea, coining the term ‘pedagogical content knowledge’ (PCK) to describe the unique specialisation involved in teaching a given subject. PCK, he says, is ‘that special amalgam [blend] of content and pedagogy that is uniquely the province of teachers, their own special form of professional understanding’ (p. 8). Many theorists have followed Shulman’s innovative idea and developed different categorisations of teachers’ knowledge of mathematics for teaching.
Elaborating on and extending Shulman’s work, Ball et al. (2008) explain that teachers’ knowledge consists of six core domains. Domains one and two elaborate the specialisation of subject matter knowledge (common content knowledge and specialised content knowledge). This refers to knowing subject matter in ways that are specific to teaching (e.g. using mathematical language precisely but also age-appropriately, justifying the use of specific representations), which mathematicians in general do not necessarily need to focus on. The knowledge of what counts as a correct solution, taking into account the age and cognitive development of learners, is included in these two domains. The next four domains elaborate the specialisation of PCK from the perspective of learners, curriculum and pedagogy. Two of the four refer to teaching subject matter knowledge from the perspective of curriculum demands (knowledge of content and curriculum and horizon content knowledge). The other two refer to mediating content in the light of what learners of a specific age are likely to know about the concept being taught, as well as of misconceptions arising during learning (knowledge of content and students and knowledge of content and teaching). Teachers’ awareness of errors and their diagnostic activities concerning learners’ reasoning in relation to errors are included in these domains, albeit in a particular sequence. Ball et al. (2008) emphasise that teachers’ awareness of errors and their diagnostic activities build on the first two specialised domains:
Recognizing a wrong answer is common content knowledge (CCK), whereas sizing up the nature of an error, especially an unfamiliar error, typically requires nimbleness in thinking about numbers, attention to patterns, and flexible thinking about meaning in ways that are distinctive of specialised content knowledge (SCK). In contrast, familiarity with common errors and deciding which of several errors students are most likely to make are examples of knowledge of content and students (KCS). (p. 401)
The interdependence between these domains has serious implications for the expectation that teachers should work diagnostically with learners’ errors. Studies of teaching that deal with learners’ errors show that teachers’ interpretive stance is essential for the process of remediation of error, without which teachers simply reteach without engaging with the mathematical source of the error or with its metacognitive structure (Brodie, 2014; Gagatsis & Kyriakides, 2000; Peng, 2010; Prediger, 2010). According to Ball et al. (2008), teachers need to judge whether there is a pattern in student errors. They also need to size up ‘whether a nonstandard approach would work in general’ (p. 400). When teachers size up a learner’s error or interpret the source of its production, they are working diagnostically with the subject matter being taught, for which they need to recruit different ‘networked schemas of knowledge’ of specific aspects of error analysis corresponding to the concept or procedure they teach. Recruiting different aspects of error analysis places simultaneous cognitive and pedagogical demands on teachers and thus makes for a challenging form of PCK. Building on the work of cognitive load theorists such as Sweller, Kirschner and Clark (2007), Hugo (2015) shows that because the capacity of working memory is limited, structuring one’s knowledge along ‘networked schemas’ rather than by ‘tiny elements [of information] at a time’ is essential. Conceptual network schemas (connections between concepts) are developed through systematic formal learning and are stored in long-term memory. They include tiny and contingent elements, but because these elements are ordered in a schema, over time the conceptual frame stored in long-term memory becomes the organising tool for processing new contingent elements.
This is the nub of the challenge in the diagnostic work of formative assessment: many aspects of a lesson can be planned and extensive preparation can be done for every lesson, but learners’ errors can raise unanticipated questions, for which teachers cannot necessarily prepare. In these situations, teachers need to make quick decisions as to how to conduct their general pedagogy in the course of a lesson and how to attend to learners’ errors specifically. The cognitive load of these situations, which can only be inferred, consists of the work of synthesising and making decisions about aspects drawn from curriculum knowledge (what of the actual content knowledge to focus on), curriculum coverage (how to deal with pressures such as lack of knowledge or time constraints) and pedagogical knowledge (what to anticipate in learners’ responses, how to listen to learners and how and when to respond to them). The pedagogical load of focusing the response to a learner’s error on the germane load while controlling the extraneous load, together with the attendant cognitive load, is managed with more ease by teachers whose mathematics knowledge is strong, since their knowledge of mathematical errors and misconceptions is structured in networked schemas. Teachers with weak mathematics knowledge, however, experience a high extraneous cognitive load in recognising and interpreting errors, thinking about them and responding to them in the context of the engagement. These teachers may be unable to fully grasp the learner’s position; they may be hesitant or even unable to adapt their own knowledge in order to respond appropriately, and they are more likely to avoid dealing with errors.
In the sections that follow we describe the project and the methodology we followed in order to study how a sample of teachers engaged with learners’ errors during lessons and in one-on-one interviews with learners. Our analysis of the findings examines what they reveal about the relationship between the pedagogical and cognitive loads involved in using errors for teaching.
The project
Working with teachers on the interpretation of learner standardised assessment data was the central goal of the project, which provided a context for professional conversations in which 62 Grade 3–9 mathematics teachers from a variety of Johannesburg schools discussed mathematical assessment data. In this pioneering community-of-practice research project, teachers were organised into groups by grade level, forming eight Grade 3–6 groups and six Grade 7–9 groups. The groups worked together in weekly meetings, mapping mathematics test items onto the curriculum (Shalem, Sapire & Huntley, 2013), analysing learners’ errors, designing lessons, teaching and reflecting on their instructional practices, preparing and conducting interviews with learners and constructing test items. The teaching and interview activities were intended to give teachers an opportunity to apply error analysis when interacting with learners, and hence to develop their understanding of the role of errors and misconceptions in the learning of mathematics.
The sample data for this article – lessons and learners’ interviews
The teachers all took part in planning the lessons, which were taught by a group representative in each round. Over the course of the project 39 teachers were filmed teaching planned lessons on behalf of their groups. We selected 17 lessons across all of the groups as our sample for analysis. Interviews were also planned collectively but carried out individually. Each teacher selected one, two or three learners to interview about an interesting error identified in a test that the group had drawn up collectively, administered individually in their own classes and then marked and analysed together in their project groups. For the analysis of teachers conducting learner interviews, the sample comprised 13 interviews, corresponding to the classroom lessons sample. Consent was obtained from all participants in the lessons and interviews (teachers, learners and parents of learners) prior to the videoing of the activities.
The lessons and interviews were not all equal in length. The total number of minutes that formed the sample for the quantitative analysis is summarised in Table 1. A lesson requires that time be spent teaching and engaging with learners on more than just errors, and it is not always appropriate to address every error in the context of a full lesson. For the analysis of teachers engaging with errors during teaching, only the ‘error episodes’ were therefore coded minute by minute. Error episodes were identified as intervals of time starting when a learner expressed an error in the context of the lesson and ending when engagement with the error terminated. A total of 129 minutes were coded as ‘error episodes’ in the classroom teaching sample, out of 906 minutes of teaching time (roughly 14%). For the analysis of teachers engaging with learners’ errors during interviews, the full length of each interview was coded minute by minute, since the aim of the interview was to engage with learners’ errors. The total time of the coded interviews was 223 minutes.
Operationalising the analytical criteria
To analyse how the teachers engaged with learners’ errors during teaching and in one-on-one interviews we used four criteria: teachers’ procedural explanations in relation to the error (Proc), teachers’ conceptual explanations in relation to the error (Con)1, teachers’ awareness of the error (Awa) and teachers’ diagnostic reasoning when engaging with the learner in relation to the error (Diag). To capture variability in the quality of the teachers’ explanations of the errors, each criterion was divided into four categories: full, partial, inaccurate and not present. The classroom and interview activities had independent coding sheets, which contextualised the criteria and categories to the teaching and interview contexts (see Appendix 1 for the category descriptors for the interview activity). The criteria were applied to each minute and were methodologically differentiated in the coding process. Nonetheless, it is the relationship between the teaching activities, as they develop across the lesson or the interview, that leads to a successful error analysis engagement. The four coding criteria are demonstrated below through full, partial and inaccurate exemplars relating to classroom teaching and learner interviews.
Procedural understanding of the error
Teachers need to recognise and be able to explain the steps needed to get to the correct answer, the sequence of the steps and the appropriate conceptual links between the steps. Because this knowledge underlies recognition of error, we include it under the first of Ball et al.’s (2008) domains, common content knowledge. Procedural explanations of errors need to be given with sufficient clarity and accuracy if learners are to grasp correct procedures and become competent in performing them. A Proc code was assigned to the aspects of the teachers’ utterances that demonstrate an attempt by the teacher to unpack a mathematical procedure while probing the learner’s error.
In this excerpt from a Grade 6 lesson the teacher probes to expose the error in the learner’s expression of a decimal number. The teacher noticed the flaw in the learner’s use of terminology relating to place value: she was confusing tens and tenths, and hundreds and hundredths. By giving simple prompts such as ‘in the place values of the decimals what do I have? What words?’ the teacher worked with the learner procedurally until the learner used the correct expression (‘one tenth’ and ‘two hundredths’). A partial Proc code was assigned to this excerpt. The teacher’s prompts did result in technical corrections, hence the inference made is that the teacher managed the pedagogical load (she got the learner to use the correct mathematical expressions). However, she did not address the number concept related to this terminology; she only made procedural corrections to the learner’s expression. The inference made here is that the teacher did not manage the cognitive load posed by the episode. In this example the germane load of the learner is not increased to the extent to which it could be. A further complication in this episode is that the teacher uses incorrect language herself (she says ‘zero comma twelve’, as she does in several other instances in the lesson), which may increase the extraneous load of the learners.
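For reference, the place-value reading that the episode could have reached, written here in our own notation rather than quoted from the lesson, is:

\[ 0{,}12 = 0 \text{ units} + \frac{1}{10} + \frac{2}{100} \]

that is, no units, one tenth and two hundredths. Reading the numeral as ‘zero comma twelve’ obscures this structure by suggesting a whole number, twelve, after the comma. The excerpt follows: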
Teacher: Now what do I have there? Yes, Thembi?
Learner: Ma’am, it’s zero comma twelve.
Teacher: Zero comma twelve. Now explain to me why zero comma twelve? [Learner writes 0,12 on board.] I’m waiting.
Learner: Ma’am, because there are no units and there are tens and hundreds.
Teacher: There’s no units, good.
Learner: And there are tens and hundreds.
Teacher: And then the other part after the fraction is a decimal … I mean, after the comma is a decimal, right? And in the place values of the decimals what do I have? What words?
Learner: One.
Teacher: One what?
Learners: One tenth.
Teacher: One tenth and …?
Learners: Two hundredths.
Teacher: And two hundredths.
Conceptual understanding of the error
Whereas teachers’ knowledge of what counts as the explanation of the correct answer enables them to recognise the error, looking for explanations that will enable them to interpret learners’ solutions and evaluate their plausibility points to a teacher’s conceptual knowledge of errors. In Ball et al.’s (2008) words, the key aspect here is that teachers are looking for patterns in student errors and ‘sizing up whether a nonstandard approach would work in general’ (p. 400). Conceptual aspects related to the recognition of whether a learner’s answer is correct or not could thus span two domains: the first common content knowledge and the second specialised content knowledge. A Con code was assigned to the aspects of teachers’ utterances that demonstrate an attempt by the teacher to unpack a concept, or a conceptual feature of a procedure, while probing the learner’s error.
In a Grade 7 lesson on balancing equations the teacher asked the class probing questions to ascertain learners’ conceptual understanding of the meaning of the equal sign. This is taken as an error episode because, while the teacher’s pedagogical load consists of asking for the meaning of the equal sign, the explanations given by learners were limited to the operational (find the answer) rather than the relational (balance the equation) meaning of the equal sign. At the end of this interaction, in which learners expressed their thoughts about the meaning of the equal sign, the teacher moved on, even though the explanations had not touched on the relational meaning. The inference we make in this episode is that the teacher managed the cognitive load, though not optimally. A partial Con code was assigned to this excerpt since the teacher engaged conceptually in discussion with learners about the meaning of the equal sign but did not bring this discussion to completion. In this example no distraction was created in the form of extraneous load, but the germane load of the learners is not increased to the extent to which it could have been. The teacher did not clarify the different understandings of the equal sign, nor did she explain that the core conceptual understanding is one of balance and equality between the two sides of an equation.
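The distinction at issue can be made concrete with a constructed example (ours, not drawn from the lesson). Under the operational meaning the equal sign signals ‘now compute’; under the relational meaning it asserts that two expressions have the same value:

\[ 3 + 4 = \square \quad \text{(operational: write the answer, so } \square = 7\text{)} \]

\[ 3 + 4 = \square + 2 \quad \text{(relational: both sides must balance, so } \square = 5\text{)} \]

Learners who hold only the operational meaning typically answer 7 in the second case as well. The limitation of the learners’ explanations is evident in the following dialogue: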
Teacher: What we’re going to be doing today is discussing the equal sign. You’ve often used it in maths since you were tiny so now we’re going to try and find out what this sign actually means to you. Okay who can tell me – what does this sign actually mean? [Points to equal sign on board.] Put up your hands if you know the answer. Right Isabella?
Learner: It means after the equals sign you put your answer. [Teacher repeats her answer – writes something on board.]
Teacher: Right, what else does it mean? To somebody else, what does it mean? Okay so you say that after the equal sign you put your answer. What else does it mean? Aandile?
Learner: It means that two numbers are the same, are equal.
Teacher: Ok two numbers can be equal. [Writes on board.] Alright what else does it mean to somebody?
Learner: It means that you’ve got your answer.
Teacher: So it means that once you’ve got your answer sign you’ve finally got your answer. Perfect, so you’ve got your answer. [Teacher writes ‘got answer’ on board.] Right anything else, does it mean anything else to somebody? Mandla?
Learner: The equal sign means that it’s the end of that sum.
Teacher: The end of the sum?
Learner: Ja.
Teacher: Ok. So it’s the end of that sum. [Teacher writes ‘end of sum’ on board.] Jeff?
Learner: It’s also the end of the equation.
Teacher: The end of the equation. Ok perfect. Anything else? No? Ok right, let’s carry on.
Awareness of error
Sizing up the source of the error, in particular recognising common misinterpretations of specific topics (Olivier, 1996) or learners’ levels of development in representing a mathematical construct, is an aspect of PCK related to teachers’ knowledge of errors. A conversation in which the nature of the error is not made explicit or elaborated has very little educational value. From the point of view of error analysis, this knowledge domain involves teachers explaining specific mathematical content primarily from the perspective of how learners typically learn the topic or ‘the mistakes or misconceptions that commonly arise during the process of learning the topic’ (Hill et al., 2008, p. 375). Knowledge of content and students enables teachers to explain and provide a rationale for the way the learners were reasoning when they produced the error. Because contexts of learning (such as age and social background) affect understanding, and because in some topics learning develops through initial misconceptions, teachers need to develop a repertoire of explanations with a view to addressing differences in the classroom. An Awa code was assigned to the aspects of teachers’ utterances that demonstrate an attempt by the teacher to identify the error around which the conversation is focused. The emphasis of this code is on the teacher’s discussion with the learner of what the error is about, in response to what is verbalised by the learner in the course of discussion.
In a Grade 9 interview the teacher engaged with the way in which a learner had plotted points using a scale, in particular the way in which the learner had chosen to deal with numbers that did not fit into the scale he had chosen for his axes. The teacher’s pedagogical load consists of giving the learner a possible explanation of how he had adjusted the values using a mathematical method (‘multiplied by ten’), even though doing this was not appropriate in the context. The learner rejected the explanation, asserting that he had ‘ignored the zero’ and ‘removed the comma’. Neither of these responses is appropriate, but the teacher did not engage with them; she echoed what he said and moved on. An inaccurate Awa code was assigned to this excerpt since in her discussion the teacher did not refer to ‘scale’ at all and did not unpack the erroneous ways in which the learner adjusted his coordinate values in order to fit them into his chosen scale. This is an example where the learner’s extraneous load is increased. The error relates to the use of a scale, and so by not touching on this topic the teacher does not establish a conversation around the error in focus, creating further difficulty for the learner. The inference made here is that the teacher did not manage the cognitive load posed by the episode.
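The mathematics the teacher would have needed to make explicit can be sketched as follows (our illustration, not part of the interview). Fitting a value to a chosen scale means dividing by the interval size, not deleting digits or commas. On an axis marked in intervals of 100, for example:

\[ \frac{250}{100} = 2{,}5 \text{ intervals} \]

so 250 is plotted halfway between the 200 and 300 gridlines. By contrast, ‘removing the comma’ turns 2,5 into 25, a value ten times larger, and ‘ignoring the zero’ turns 0,5 into 5, likewise ten times larger.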
Learner: As in for twenty-five [points to number on horizontal axis], I did a two point five, to, two hundred and fifty [points to number on vertical axis], because there was no twenty-five. So I used these variables [points to numbers on horizontal axis] just to say this is five, even though it’s zero comma five, I said this is five, and like that…
Teacher: Ok, so you ignored the zero [points to zero] and…
Learner: Yes.
Teacher: Actually you multiplied by what? Multiplied by ten or the…?
Learner: No, I didn’t multiply it, I just ignored the [crosses out a number on horizontal axis]…
Teacher: You removed the zero.
Learner: Yes.
Teacher: Ok.
Learner: Just like the two point five, I removed the comma [draws a comma]. Yes. I did it like that.
Diagnostic reasoning when engaging with the learner in relation to the error
The pedagogical work of probing learners’ thinking, by taking them through the error and supporting them with examples, representations and, when appropriate, an everyday example that enables them to understand the concept, forms another aspect of PCK related to teacher knowledge of errors. In terms of teacher knowledge of error analysis, the key idea that this criterion puts forward is that teachers go beyond stating the actual error by using probing questions to try to follow (with the learner) the way the learner is reasoning about the error. This criterion could also span two of Ball et al.’s (2008) knowledge domains: knowledge of content and students and knowledge of content and teaching. This kind of knowledge enables a teacher to seek out the learner’s mathematical reasoning behind the error. In response to the error the teacher probes further and asks the learner to explain the steps of their reasoning. A Diag code was assigned to the aspects of teachers’ utterances that demonstrate an attempt by the teacher to probe the learner’s reasoning behind the error. The emphasis of this code is on the questions the teacher uses to probe the learner to explain the steps in their reasoning.
A Grade 8 teacher set up an interview to probe the learner’s thinking in relation to an error that arose in an activity where learners had to generate equations using a given set of symbols (see Figure 1). The learner had not been able to make correct equations using the symbols given in the task.
FIGURE 1: Grade 8 test item with learner’s working.
The teacher managed the pedagogical load presented by this error in the following way. She asked several probing questions to follow the thinking of the learner, both broad (‘just explain to me your answer’) and specific (‘so that’s what you meant when you wrote here that the envelope and the heart must have a bigger value’). A full Diag code was assigned to this excerpt since the teacher used systematic probes to identify the learner’s error. Based on the learner’s responses, and in order to follow through the diagnosis, the teacher then chose to introduce a supportive example in which the learner had to use numbers (rather than symbols) to make equations. This allowed for alternative explanations because it offered a different representation, hence increasing the germane load of the learner by setting the stage for following up on the error. The inference made here is that the teacher fully managed the cognitive load posed by the episode.
Teacher: Ok, alright, so that’s what you meant when you wrote here that the envelope and the heart must have a bigger value?
Learner: Yes.
Teacher: Ok I see. Alright. And can you just explain to me your answer here, or your working out for me?
Learner: Well I thought that if the envelope would be a bigger value, I’d have to add a smaller value with it so that’s why I added just the smiley face to think that maybe that would equal that answer.
Teacher: Ok.
Learner: And with the heart, I thought that you need two of these symbols [points at the smiley face and envelope] in order to equal a bigger value with another bigger value in order to get the answer.
Teacher: Ok, I see, I see, alright. Now what I’m going to do is I’m going to give you a sum with numbers.
Learner: Ok.
Teacher: And we’re going to work with that sum of numbers a little bit and then we’re going to see if we can link it in somewhere to this question [points to question being discussed]. Ok so I’m just going to put that aside for the moment [pushes away first paper and brings forward another paper].
Findings: Engaging with errors in classroom teaching and learner interviews
In answering the research questions, the evidence presented below suggests, firstly, that the teachers shied away from engaging with learners’ errors during teaching and, secondly, that when they did engage with learners’ errors, both during teaching and in interviews, they were not always successful in coming to grips with the nature of the error, nor did they enable learners to clarify their own thinking and develop a deeper understanding of the mathematical concepts underpinning the error. In other words, they did not increase the germane load of the learners. We argue that this is evidence of teachers not coping with the cognitive load they face when errors arise in discussions, which impacted on the way they managed their pedagogical load. This can be seen both in the time spent engaging with errors and in the quality of activity when engaging with errors.
Figure 2 shows the time identified as error episodes (time actively spent engaging with a learner error over the course of a normal lesson) in relation to the total amount of teaching time in a lesson during the three rounds of teaching. Despite the noted increases between rounds, the percentages of time during which teachers engaged with errors remained low. Across the three rounds the Grade 7–9 groups spent more time engaging with learners’ errors than the lower grade groups, and the time they spent increased in successive rounds. The interviews (only one round) were focused on errors (as planned), and so teachers were actively engaged with errors throughout the interviews.2
FIGURE 2: Time spent engaging with errors in classroom lessons.
The cognitive difficulty (which we use to infer cognitive load) of the activity for the teachers can be shown by a comparison between the amount of time teachers spent engaging with errors using procedural and conceptual explanations. The graphs (Figure 3 and Figure 4) show the differences between the occurrence and the quality of the procedural and conceptual activities, when teachers interact with learners on an error, in the classroom and during learner interviews.
FIGURE 3: Procedural explanations in relation to the error (averages across all rounds).
FIGURE 4: Conceptual explanations in relation to the error (averages across all rounds).
Both in the classroom and in the interviews, the teachers’ engagement with errors was predominantly procedural, and its quality was low: in both contexts there were more not present and inaccurate codes and fewer partial and full codes. The frequency of not present codes in the lessons (36% Proc, 65% Con), compared to 0% (Proc and Con) in the interviews, suggests that many of the teachers glossed over errors in the course of a lesson, which may reflect a pedagogical choice or an inability to engage meaningfully with wrong answers. Examples where wrong answers were not probed or explained can be found in the majority of lessons. It is further notable that during lessons the teachers gave more inaccurate procedural explanations of learners’ errors (36% in lessons, 13% in interviews), whereas in the interviews they gave more inaccurate conceptual explanations (24% in lessons, 33% in interviews). Notwithstanding this, the conceptual explanations were more consistent in quality, and more evident, in the interviews than in the classroom. This suggests that the teachers felt more confident addressing conceptual issues in the interviews (for which the interviews had been planned) than in the course of a lesson.
Qualitative analysis also sheds light on the cognitive difficulty of the activity for the teachers. Importantly, it shows how the teachers coped with the pedagogical and cognitive loads presented by the error episodes. The four examples discussed earlier in this article gave insight into how teachers managed the loads in different ways, some more successfully than others. Teachers found it very difficult to home in on the error underlying the learners’ statements made during conversations, both in classroom lessons and during interviews. This is evidence of teachers not coping with the cognitive load they face when errors arise in discussions.
In Figure 5, which shows the differences between the occurrence and the quality of the Awa activity, we see very few partial and full codes, indicative of poor awareness of the nature of the mathematical errors in both the classrooms and the interviews, although the interviews do show slightly better quality on this criterion. The not present code is highly frequent both in the lessons and during the interviews (55% and 54% respectively), showing that teachers found it difficult to express mathematically the nature of the error arising in the conversation.
FIGURE 5: Awareness of the error (averages across all rounds).
Teachers often used incorrect or sloppy mathematical language, resulting in inaccurate explanations (more so in the classroom): further evidence of an inability to express an awareness of the mathematical error correctly. When teachers use inaccurate mathematical language when attempting to address an error, their ability to identify the error around which the conversation is focused is compromised. For example, in a Grade 9 lesson a learner answered ‘eighteen and twenty-one’ (referring to a coordinate point); in this lesson both the teacher and the learners used this incorrect, vague language. In a Grade 6 lesson a teacher said ‘one comma twenty-three’ for 1,23. This poor expression undermines the concept of decimal place value that he was trying to teach: saying ‘one comma twenty-three’ confuses the relationship between the places before and after the decimal comma. Precision in mathematical expression (both verbal and written) is vital since it supports a deeper understanding of the concepts under discussion; unclear or imprecise language provides further evidence of teachers not coping with cognitive load.
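The place-value point being undermined, stated here in our own terms, is that the digits after the decimal comma name successively smaller fractional parts:

\[ 1{,}23 = 1 + \frac{2}{10} + \frac{3}{100} \]

so a reading such as ‘one comma two three’ preserves the positional structure, whereas ‘one comma twenty-three’ invites learners to hear the whole number twenty-three after the comma.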
Figure 6 shows the differences between the occurrence and the quality of the Diag activity. Here the teacher’s skill in probing, listening to the learner and developing the conversation is under scrutiny. In both the lessons and the interviews, the data show that the teachers do not engage with the learners’ language of expression, evidence that teachers struggle to cope with both the cognitive and pedagogical loads presented by errors voiced by learners. The category inaccurate for this criterion (the most highly evidenced in both classrooms and interviews) captures times when teachers probed the error broadly but did not seek the learner’s mathematical reasoning behind it. This is different from inaccuracy in the other criteria (Proc, Con and Awa), where mathematical content may be compromised. Teachers often asked broad questions such as ‘why did you say that?’ or ‘tell me about what you did here’. On many occasions they followed up the broad probe with the question ‘so is it correct/right/wrong/incorrect?’ In so doing, teachers were merely asking learners to make a judgement call. Broad probes of this kind did not lead to exploration of, or elaboration on, issues related to errors. For example, in one Grade 9 lesson the teacher used the word ‘correct/correctly’ 23 times, usually as part of a probe. When a teacher asks ‘is that correct?’ the learner generally realises it is not and answers ‘no’, but often seemingly without understanding. Diagnostic conversations tend to terminate, or to continue at the same level, without focusing on or moving closer to the misconception underlying the error. This is evidenced in the low level of partial and full diagnostic explanations in both classrooms and interviews, although the interviews showed slightly better quality on this criterion.
FIGURE 6: Diagnostic reasoning in relation to the error (averages across all rounds).
In the classroom, teachers’ probing frequently involves calling on a learner to redo work or calling on another learner to correct it, often guided by leading questions from the teacher. When learners respond they often repeat the same error. Sometimes the ‘correcting’ learner makes another mistake, which may or may not be identified. In their haste to move on, teachers often do not probe further; they reteach using the same or a similar example, without engaging with the content of the learners’ response. No explanations are given; work is simply corrected and the conversation moves on. The error, or limitation, in this type of interaction is noted and sometimes even identified, but is not worked through. Diagnosis is thus rare and seldom adequately acted upon; in all of these cases teachers have not coped with the cognitive and pedagogic load created by the need to engage with learners’ mathematical errors.
Discussion
The above findings point to the cognitive difficulty involved in applying error analysis when teachers engage with learners, more so in teaching than in interviews. We use this to infer that cognitive load impacts on pedagogical load. To cope with the pedagogical load, teachers may choose to ignore errors: at times they ignore errors completely, sometimes accepting incorrect work, or they continue teaching or asking questions along a planned route, not acknowledging the error but trying to teach or reteach the ‘right stuff’. In other instances, teachers do acknowledge errors. Acknowledgement might involve indicating recognition of an error vocally (for example, saying ‘no’) without addressing or questioning the error. In limited attempts at engaging with errors, teachers may indicate recognition of an error vocally and request a correction from another learner or offer the correct answer themselves. Finally, teachers may engage diagnostically with errors, asking probing questions that range from broad to more error-focused, or using open-ended exploratory questions.
Classroom and interview responses might follow a different set of ‘rules’ according to their contextual nature. Table 2 summarises the key differences between the two contexts.
TABLE 2: Contextual differences for engagement with errors.
The comparison shown in Table 2 might imply that application of error analysis in an interview is more straightforward, but, as we have shown, teachers found the activity difficult in both contexts. Although there were examples of meaningful interaction on the part of teachers with their learners’ errors in the lesson and interview activities, such evidence was sparsely scattered in the data set. We expected to find stronger and more consistent evidence of diagnostic reasoning in the context of a learner interview, yet the interview activity, undertaken during the last six months of the project, highlighted teachers’ difficulty in dealing with learners’ mathematical errors in conversation. Teachers conducted these interviews after more than two years in the project, working with colleagues and district officials under the guidance of mathematics education experts (university staff members or postgraduate students). These could be considered ideal circumstances for optimal performance, and yet the teachers struggled to engage meaningfully with their learners about errors they had made. This might lead one to ask whether teaching through errors is too much to expect of teachers.
When teachers engage with their learners’ errors they use their diagnostic reasoning in the activity of formative assessment. This kind of pedagogical content knowledge (Shulman, 1986) is a higher and more difficult level of teacher knowledge, which largely depends on teachers’ ability to unpack a mathematical procedure, a concept or a conceptual aspect of a procedure while probing the learner’s error. If this argument is accepted, then Hugo’s (2015) analysis, which shows that cognitive load economises or increases pedagogic load depending on the strength of the ‘cognitive architecture’ (p. 81) of a teacher’s mathematics knowledge for teaching, explains why teachers find working with learners’ errors during teaching a complex task.
Conclusion
The schema of teachers’ knowledge of error analysis and the complexity of its application were discussed in relation to Ball et al.’s (2008) domains of knowledge and Hugo’s (2015) explanation of the relation between cognitive and pedagogical loads. Evidence of the difficulty of this activity has been shown quantitatively as well as through qualitative examples from the classrooms and interviews. The evidence highlights that the cognitive and pedagogic loads of applying error analysis in context exceeded the capacity of the teachers. Some teachers did engage in discussions that could increase the learners’ germane load, but in most instances teachers actually increased the extraneous load when they engaged with errors, making it more difficult for learners to absorb what was being taught. The difficulty the teachers experienced in responding meaningfully to errors in context could be related to gaps in their mathematical knowledge, to linguistic ability or to lack of experience in focusing on what the learner says and responding directly to what has been said.
Teachers’ involvement in activities such as analysing learners’ errors on standardised tests, engaging with learners’ errors when planning and teaching a lesson, discussing errors with colleagues and probing learners’ reasoning in interviews has some merit. The data presented from the three rounds of teaching indicate that teachers can learn to engage with learners’ errors over time, but that such learning is very slow. Poor overall demonstration of awareness of error and poor use of probing questions (in diagnostic reasoning) suggest the need for caution in advocating the development of teacher competence in addressing learners’ errors through informal group-guided discussion. Further research is needed into the relationship between teachers’ knowledge of mathematical content and their ability to engage diagnostically with learners’ errors and demonstrate an awareness of mathematical errors. Nevertheless, the findings suggest that teachers need formal and structured opportunities to improve their mathematical content knowledge, to inform the diagnostic work required by this kind of formative assessment.
Acknowledgements
We acknowledge the funding received from the Gauteng Department of Education and in particular would like to thank Reena Rampersad and Prem Govender for their support of the project. The views expressed in this article are those of the authors.
Competing interests
The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.
Authors’ contributions
I.S., the lead author, was also involved in the coding and analysis of the data. Y.S. was involved in the conceptualisation, analysis of the data and the writing of the article. B.W.-T. was involved in the coding and analysis of the data and contributed to the writing of the article. R.P. was involved in the coding and analysis of the data and contributed to the writing of the article.
References
Adler, J. (2011). Knowledge resources in and for school mathematics teaching. In G. Gueudet, B. Pepin, & L. Trouche (Eds.), From text to ‘lived’ resources (pp. 3–22). Dordrecht: Springer. http://dx.doi.org/10.1007/978-94-007-1966-8_1
Anderson, D., & Clark, M. (2012). Development of syntactic subject matter knowledge and pedagogical content knowledge for science by a generalist elementary teacher. Teachers and Teaching: Theory and Practice, 18(3), 315–330.
Ball, D.L., Hill, H.C., & Bass, H. (2005). Knowing mathematics for teaching: Who knows mathematics well enough to teach third grade, and how can we decide? American Educator, 22, 14–22, 43–46. Available from http://hdl.handle.net/2027.42/65072
Ball, D.L., Thames, M.H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. http://dx.doi.org/10.1177/0022487108324554
Bertram, C. (2011). What does research say about teacher learning and teacher knowledge? Implications for professional development in South Africa. Journal of Education, 52, 3–26.
Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J. Gardner (Ed.), Assessment and Learning (pp. 206–230). London: Sage.
Brodie, K. (2014). Learning about learner errors in professional learner communities. Educational Studies in Mathematics, 85, 221–239.
Departments of Basic Education & Higher Education and Training. (2011). Integrated strategic planning framework for teacher education and development in South Africa 2011–2025: Frequently asked questions. Pretoria: DOE.
Gagatsis, A., & Kyriakides, L. (2000). Teachers’ attitudes towards their pupils’ mathematical errors. Educational Research and Evaluation, 6(1), 24–58.
Grossman, P.L. (1990). The making of a teacher: Teacher knowledge and teacher education. New York, NY: Teachers College Press.
Hill, H.C., Ball, D.L., & Schilling, S.G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39(4), 372–400.
Hugo, W. (2015). Boundaries of the educational imagination. Cape Town: African Minds.
Kieran, C. (2013). The false dichotomy in mathematics education between conceptual understanding and procedural skills: An example from algebra. In K.R. Leatham (Ed.), Vital directions for mathematics education research (pp. 153–171). New York, NY: Springer.
Olivier, A. (1996). Handling pupils’ misconceptions. Pythagoras, 21, 10–19.
Peng, A. (2010). Teacher knowledge of students’ mathematical errors. Available from http://www.scribd.com/doc/54223801/TEACHER-KNOWLEDGE-OF-STUDENTS%E2%80%99-MATHEMATICAL
Prediger, S. (2010). How to develop mathematics-for-teaching and for understanding: The case of meanings of the equal sign. Journal of Mathematics Teacher Education, 13, 73–93.
Rowland, T., & Turner, F. (2008). How shall we talk about ‘subject knowledge’ for mathematics? In M. Joubert (Ed.), Proceedings of the British Society for Research into Learning Mathematics (Vol. 28(2), pp. 91–96). Chichester: BSRLM. Available from http://www.bsrlm.org.uk/IPs/ip28-2/BSRLM-IP-28-2-16.pdf
Shalem, Y. (2013). What binds professional judgement? The case of teaching. In M. Young, & J. Muller (Eds.), Knowledge, expertise and the professions (pp. 93–106). London: Taylor and Francis Books.
Shalem, Y., Sapire, I., & Huntley, B. (2013). Mapping onto the mathematics curriculum – An opportunity for teachers to learn. Pythagoras, 34(1), Art. #195, 10 pages. http://dx.doi.org/10.4102/pythagoras.v34i1.195
Shalem, Y., Sapire, I., & Sorto, M.A. (2014). Teachers’ explanations of learners’ errors in standardised mathematics assessments. Pythagoras, 35(1), Art. #254, 11 pages. http://dx.doi.org/10.4102/pythagoras.v35i1.254
Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
Sweller, J., Kirschner, P.A., & Clark, R.E. (2007). Why minimally guided teaching techniques do not work: A reply to commentaries. Educational Psychologist, 42(2), 115–121.
Taylor, N., Van der Berg, S., & Mabogoane, T. (2013). What makes schools effective? Report of the National Schools Effectiveness Study. Cape Town: Pearson.
Appendix 1:
Footnotes
1. We see procedural and conceptual explanations as activities that can be characterised distinctly, while acknowledging that they essentially occur simultaneously and cannot be split into a false dichotomy (Kieran, 2013).
2. There was only one set of learner interviews and so no progression over time can be analysed.