Article Information

Authors:
Yael Shalem1
Ingrid Sapire1
M. Alejandra Sorto2

Affiliations:
1School of Education, University of the Witwatersrand, South Africa

2Mathematics Department, Texas State University, United States of America

Correspondence to:
Ingrid Sapire

Postal address:
Private Bag 3, WITS 2050, South Africa

Dates:
Received: 31 Jan. 2014
Accepted: 06 Aug. 2014
Published: 04 Nov. 2014

How to cite this article:
Shalem, Y., Sapire, I., & Sorto, M.A. (2014). Teachers’ explanations of learners’ errors in standardised mathematics assessments. Pythagoras, 35(1), Art. #254, 11 pages. http://dx.doi.org/10.4102/pythagoras.v35i1.254

Copyright Notice:
© 2014. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Teachers’ explanations of learners’ errors in standardised mathematics assessments
In This Original Research...
Abstract
Introduction
   • Teachers’ knowledge of learners’ errors
   • Studying teachers’ explanations of learners’ errors in standardised mathematics assessment
      • Criterion 1: Procedural understanding of the correct answer
      • Criterion 2: Conceptual understanding of the correct answer
      • Criterion 3: Awareness of error
      • Criterion 4: Diagnostic reasoning of learners’ thinking in relation to error
      • Criterion 5: Use of everyday links in explanations of error
      • Criterion 6: Multiple explanations of error
Teacher development research project
   • Operationalising the criteria
Findings: Usability of the measurement criteria
Discussion: The teachers’ knowledge of error analysis
Conclusion
Acknowledgements
   • Competing interests
   • Authors’ contributions
References
Footnotes
Appendix 1
Abstract

With the increased use of standardised mathematics assessments at the classroom level, teachers are encouraged, and sometimes required, to use data from these assessments to inform their practice. As a consequence, teacher educators and researchers are starting to focus on the development of analytical tools that will help them determine how teachers interpret learners’ work, in particular learners’ errors in the context of standardised and other assessments. To detect variation and associations between and within the different aspects of teacher knowledge related to mathematical error analysis, we developed an instrument with six criteria based on aspects of teachers’ knowledge related to explaining and diagnosing learners’ errors. In this study we provide evidence of the usability of the criteria by coding 572 explanations given by groups of mathematics educators (teachers and district officials) in a professional development context. The findings consist of observable trends and associations between the different criteria that describe the nature of teachers’ explanations of learners’ errors.

Introduction

Reporting on data from standardised mathematics assessments that provide information about what learners can or cannot do is becoming a common practice in many countries. The reports set out to provide managers and teachers with reliable data, in the form of statistical averages, to be used to inform broad policy and classroom teaching. The Elementary and Secondary Education Act in the United States specifies that standardised assessment allows teachers ‘to get meaningful information about their practice, and support them in using this information to ensure that all students are getting the effective teaching they deserve’ (U.S. Department of Education, Office of Planning, Evaluation and Policy Development, 2010, p. 15). In South Africa, teachers are required ‘to interpret their own learners’ performance in national (and other) assessments’ (Departments of Basic Education & Higher Education and Training, 2011, p. 2) and to develop better lessons on the basis of these interpretations. This requirement implies that teachers are expected to use learner data diagnostically and therefore, we argue, teachers’ involvement in error analysis of standardised and classroom assessment is no longer a professional right but a responsibility, an integral aspect of teacher knowledge. Research has only recently begun to engage with the question of how to use learner data beyond that of a statistical indicator of quality, that is, beyond benchmarking for external accountability (Boudett, City & Murnane, 2005; Cohen & Hill, 2001; Katz, Earl & Ben Jaafar, 2009; Katz, Sutherland & Earl, 2005). Attempts to strike a better balance between external and internal uses of performance data include Shavelson, Li, Ruiz-Primo and Ayala (2002), Black and Wiliam (2006) and Nichols, Meyers and Burling (2009). In South Africa, Reddy (2006), Dempster (2006), Long (2007) and Dempster and Zuma (2010) have each conducted small case studies on test-item profiling, arguing that this can provide useful data that can be used by teachers for formative and diagnostic purposes. Notwithstanding these important contributions, there is very little research on how to analyse teacher knowledge of error analysis, or on what criteria can be used to assess teachers’ explanations of learners’ errors in standardised mathematics assessments. We hope that by analysing teachers’ explanations of learners’ errors, and through this their knowledge of error analysis, this article will advance this area of research and will also contribute to Black and Wiliam’s (1998) well-established argument about the potential positive impact of formative assessment.

To analyse teacher knowledge of error analysis we developed an instrument with six criteria and compiled evidence of its usability as an analytical tool. This we did as part of our work with 62 mathematics teachers over a three-year period in the Data Informed Practice Improvement Project (DIPIP, see more below). Our central aim in developing the instrument was to detect variation and associations between and within the different aspects of teacher knowledge related to mathematical error analysis. With this in mind, we investigated the following research questions:

1. What is the nature of the teachers’ explanations of learners’ errors on standardised mathematical assessments?

2. What variability in the quality of the teachers’ explanations of learners’ errors can be identified using the criteria?

3. What relationships between the aspects that inform the teachers’ explanations of learners’ errors are the criteria descriptors able to detect?

The article proceeds as follows: in the first section we examine the idea of error analysis within the literature of teacher knowledge, focusing on its value for mathematics teaching. The next section describes the conceptual background on which we drew to develop the six criteria with a view to studying teachers’ explanations of learners’ errors. More specifically, we examine the aspects of error analysis included in three ‘domains of teacher knowledge’ (Ball, Hill & Bass, 2005) and list the relevant criteria for studying teachers’ explanations of learners’ errors on an international standardised assessment test. In the third section we provide detail about the DIPIP project, explain the methodology we used to operationalise the criteria for our study of teachers’ explanations of learners’ errors and present exemplars of coding. In the last two sections, we assess the extent to which the criteria capture key error analysis aspects and use this to draw inferences about the nature of teachers’ explanations, their quality and the relationship amongst the different aspects that make up the six criteria.

Teachers’ knowledge of learners’ errors
Research in mathematics education has shown that a focus on errors, as evidence of mathematical thinking on the part of learners, helps teachers to understand learner thinking, to adjust the ways they engage with learners in the classroom and to revise their teaching approach (Adler, 2005; Borasi, 1994; Brodie, 2014; Nesher, 1987; Smith, DiSessa & Roschelle, 1993; Venkat & Adler, 2012). Some work has begun on developing broad classifications of learners’ errors (Brodie & Berger, 2010; Radatz, 1979). Studies of teaching that deal with learners’ errors show that teachers’ interpretive stance is essential to the remediation of error; without it, teachers simply re-teach without engaging with the mathematical source of the error or with its metacognitive structure (Gagatsis & Kyriakides, 2000; Peng, 2010; Peng & Luo, 2009; Prediger, 2010). Nevertheless, there is hardly any literature that analytically examines the different aspects that make up teacher knowledge of error analysis and its relation to subject matter knowledge (SMK) and to knowledge about teaching.

Knowledge of errors can be shown to incorporate both the substantive and syntactic dimensions of teacher subject matter knowledge. Following the famous work of Schwab (1978) and Shulman (1986), Rowland and Turner (2008) propose the following definition of the substantive and syntactic dimensions of teacher subject matter knowledge:

Substantive knowledge encompasses the key facts, concepts, principles, structures and explanatory frameworks in a discipline, whereas syntactic knowledge concerns the rules of evidence and warrants of truth within that discipline, the nature of enquiry in the field, and how new knowledge is introduced and accepted in that community – in short, how to find out. (p. 92)

This distinction is important. It suggests that subject matter knowledge includes the explanations of facts and concepts central to the discipline but also the rules of proof and evidence that a discipline community considers legitimate to use when making knowledge claims. Both of these refer to aspects of teacher knowledge of error analysis. The substantive dimension foregrounds teachers’ explanations of what is erroneous and why, taking into account what a learner is expected to know, given the learner’s age and level of cognitive development. The syntactic dimension foregrounds teachers’ explanations of the process that needs to be followed to construct a truth claim, resolve a problem, get a correct solution, and so on. Error analysis is an integral part of teacher knowledge, and the specific aspects informing teachers’ explanations of learners’ errors are the subject of this article.

Particular to the field of mathematics education, Hill and Ball (2009) see analysing learners’ errors as one of the four mathematical tasks of teaching ‘that recur across different curriculum materials or approaches to instruction’ (p. 70). Peng and Luo (2009) and Peng (2010) argue that the process of error analysis includes four steps: identifying, addressing, diagnosing and correcting errors. In South Africa, Adler (2005) sees teachers’ knowledge of error analysis as a component of what she calls mathematics for teaching. She asks:

What do teachers need to know and know how to do (mathematical problem solving) in order to deal with ranging learner responses (and so some error analysis), and in ways that produce what is usefully referred to as ‘mathematical proficiency’, a blend of conceptual understanding, procedural fluency and mathematical reasoning and problem solving skills? (Adler 2005, p. 3)

In this study we take up Ball et al.’s (2005) idea of six domains of teacher knowledge and show the specific aspects of error analysis included in the first three domains (see next section). It is important to emphasise that underlying the attempts to classify the tasks involved in error analysis is an understanding that error analysis requires professional judgement to recognise exactly what learners do not understand, their reasoning behind the error, how that may affect their learning and which instructional practices could provide affordances (or constraints) for addressing learner difficulties (Shepard, 2009, p. 37). We offer the idea of ‘diagnostic reasoning’ to point to the delicate work of judgement involved in error analysis. Appropriate judgement of how close or far a learner is from what is correct is core to teachers providing appropriate assessment and feedback to learners. Prediger (2010, p. 76) uses the notion of ‘diagnostic competence’ to distinguish reasoning about learners’ errors from merely grading their answers. Her view is consistent with Shepard’s (p. 34) work on formative assessment, in particular with the idea of using insights from learners’ work formatively to adjust instruction.

Making sound judgements, then, is key to formative assessment. The formative aspect lies in teachers’ explanations of learners’ errors, particularly when they are faced with misunderstandings exhibited by a large group of learners or when they need to scaffold further explanations in order to enable learners to cope with the complexity of a given task (City, Elmore, Fiarman & Teitel, 2009).

Studying teachers’ explanations of learners’ errors in standardised mathematics assessment
For the purpose of studying teachers’ explanations of learners’ mathematical errors we used Ball’s classification of knowledge domains to develop six criteria for teachers’ explanations of learners’ errors in standardised mathematics assessments, within three of the domains of knowledge that define mathematics knowledge for teaching (Ball et al., 2005; Ball, Thames & Phelps, 2008; Hill, Ball & Schilling, 2008). Ball et al. (2008) explain that the first two domains elaborate the specialisation of subject matter knowledge (common content knowledge and specialised content knowledge). The next two domains elaborate the specialisation involved in teaching mathematics from the perspective of learners, curriculum and pedagogy. These domains (knowledge of content and learners and knowledge of content and teaching) elaborate Shulman’s (1986) notion of pedagogical content knowledge (PCK). Hill et al. (2008) argue that the first two domains are framed, primarily, by subject matter knowledge. This is very important from the perspective of examining teachers’ explanations of errors. As Peng and Luo (2009) argue, if teachers identify learners’ errors but interpret them with wrong mathematical knowledge, their assessment of learner performance and their plan for a teaching intervention are both meaningless. In other words, the tasks that teachers engage with in error analysis, such as sizing up the error or interpreting the source of its production, are possible because of the mathematical reasoning with which these domains of teacher knowledge equip them.

In what follows we foreground key aspects of teachers’ explanations of learners’ errors relevant to each of the three domains we drew on specifically for the purpose of this analysis1.

Under the first domain, common content knowledge, we map aspects related to the recognition of whether a learner’s answer is correct or not. Teachers need to recognise and be able to explain the crucial steps needed to get to the correct answer, the sequence of the steps and their conceptual links. Because this knowledge underlies recognition of error, we include it under content knowledge. This analysis gives rise to two criteria in this domain:

Criterion 1: Procedural understanding of the correct answer
The emphasis of the criterion is on the quality of the teachers’ procedural explanations when discussing the solution to a mathematical problem. Teaching mathematics involves a great deal of procedural explanation, which should be done fully and accurately for learners to grasp the procedures and become competent in working with them.

Criterion 2: Conceptual understanding of the correct answer
The emphasis of the criterion is on the quality of the conceptual links teachers make in their explanations when discussing the solution to a mathematical problem. Teaching mathematics involves conceptual explanations, which should be made with as many links as possible and in such a way that learners can generalise and apply the concepts.

The difference between procedural and conceptual understanding of the correct answer used in this study is similar to the categorisation of conceptual understanding and procedural fluency in the strands of mathematical proficiency proposed by Kilpatrick, Swafford and Findell (2001). Conceptual understanding refers to ‘comprehension of mathematical concepts, operations, and relations’ and procedural fluency refers to ‘skill in carrying out procedures flexibly, accurately, and appropriately’ (Kilpatrick et al., 2001, p. 116).

It is also in line with the difference between levels of cognitive demand, exemplified by Stein, Smith, Henningsen and Silver (2000) as ‘procedures with connections’ and ‘procedures without connections’ in the context of analysing mathematical tasks (p. 13).

For example, in the text below, the explanation of the correct answer notes both the procedural and the conceptual aspects of understanding required in order to answer this question. (See Table 1 for further possible levels of explanation.)

TABLE 1: Domains of teacher knowledge and related error analysis criteria.

Question: Which row contains only square numbers? (Correct answer, C)

Explanation: 1² = 1; 2² = 4; 3² = 9; 4² = 16; 5² = 25; 6² = 36; 7² = 49; 8² = 64. Therefore the row with 4, 16, 36 and 64 only has square numbers. To get this right, the learner needs to know what ‘square numbers’ mean and to be able to calculate or recognise which of the rows consists only of square numbers. (Grade 8 teacher group.)
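
To make this procedural check concrete, the short sketch below (written in Python as our own illustration, not part of the study) verifies that every entry in the row named in the explanation is a perfect square; the helper function is hypothetical.

    import math

    def is_square_number(n: int) -> bool:
        """Return True if n is a perfect square (a 'square number')."""
        root = math.isqrt(n)
        return root * root == n

    # The row that the teachers' explanation identifies as containing only square numbers.
    row_c = [4, 16, 36, 64]

    # The row qualifies if every entry passes the check.
    print(all(is_square_number(n) for n in row_c))  # prints True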

The relationship between procedural and conceptual mathematics knowledge is complex and recent research insists that the two need to be seen as integrated rather than polarised when thinking about mathematical ideas (Baroody, Feil & Johnson, 2007; Long, 2005; Star, 2005). Notwithstanding this, some mathematical problems lend themselves more to procedural explanations whilst in others the procedural and the conceptual are closely linked. There is a progression in mathematical understanding of concepts: what may be conceptual for a Grade 3 learner (for example, basic addition of single-digit numbers) is procedural for a Grade 9 learner who will have progressed to operations at a higher level. The two criteria are thus closely aligned and yet they can be differentiated.

Under the second domain, specialised content knowledge, we map aspects related to mathematical knowledge required for the recognition of the nature of the error. In Ball et al.’s (2008) words, the key aspect here is teachers looking for patterns in student errors, ‘sizing up whether a nonstandard approach would work in general’ (p. 400). Whereas teachers’ knowledge of what counts as the explanation of the correct answer enables them to spot the error, looking for patterns in learners’ errors enables them to interpret learners’ solutions and evaluate their plausibility. Knowledge of this domain enables teachers to ‘size up the source of a mathematical error’ (p. 397) and identify what mathematical steps would produce a particular error. We added the following criterion under this domain.

Criterion 3: Awareness of error
This criterion focuses on teachers’ explanations of the actual mathematical error and not on learners’ reasoning. The emphasis in the criterion is on the mathematical quality of teachers’ explanations of the actual mathematical error.

Under the third domain, knowledge of content and students, we map aspects related to teachers’ mathematical perspective on errors typical of learners of different ages and social contexts in specific mathematical topics. This knowledge includes common misconceptions of specific topics (Olivier, 1996) or learners’ levels of development in representing a mathematical construct (e.g. Van Hiele levels of geometric thinking; Burger & Shaughnessy, 1986). From the point of view of error analysis, this knowledge domain involves teachers explaining specific mathematical content primarily from the perspective of how learners typically learn the topic or ‘the mistakes or misconceptions that commonly arise during the process of learning the topic’ (Hill et al., 2008, p. 375). The knowledge of this domain enables teachers to explain and provide a rationale for the way the learners were reasoning when they produced the error. Since it is focused on learners’ reasoning, this aspect of teacher knowledge of errors includes the ability to provide multiple explanations of the error. Because contexts of learning (such as age and social background) affect understanding and because in some topics learning develops through initial misconceptions, teachers need to develop a repertoire of explanations, with a view to addressing differences in the classroom. We included three further criteria under this domain:

Criterion 4: Diagnostic reasoning of learners’ thinking in relation to error
Teachers’ explanation of error goes beyond identifying the actual mathematical error (‘awareness of error’): the aim here is to understand how teachers explain the way learners were reasoning when they produced the error. The emphasis in this criterion is on the quality of the teachers’ attempt to provide a rationale for how learners were reasoning mathematically when they chose a distractor. This aspect aligns with one of the knowledge of content and students categories studied by Hill et al. (2008), which they call common student errors; this refers to ‘providing explanations for errors, having a sense for what errors arise with what content, etc.’ (p. 380).

Criterion 5: Use of everyday links in explanations of error
Teachers sometimes explain why learners make mathematical errors by appealing to everyday experiences that learners draw on and confuse with the mathematical context of the question. Drawing on the work of Walkerdine (1982), Taylor (2001) cautions that:

familiar contexts provide essential starting points for teaching young children to reason formally. … [But] not just any everyday example provides a suitable jumping off point for higher levels of conceptual development. (p. 3)

The emphasis in this criterion is on the quality of the use of everyday knowledge in the explanation of the error, judged by the links made to the mathematical understanding that the teachers attempt to advance. For example, in the error explanation below, which concerns learners’ confusion between units of measurement of capacity (litres and millilitres), the use of the ‘everyday’ enables mathematical understanding: ‘He draws on his frame of reference of how he perceives a litre to be e.g. a 1.25l of cold drink or a 1l of milk or a 2l of coke, etc.’

Criterion 6: Multiple explanations of error
One of the challenges in the teaching of mathematics is that learners might need to hear more than one explanation of the error. This is because some explanations are more accurate or more accessible than others and errors may need to be explained in different ways for different learners. This criterion examines the teachers’ ability to offer alternative explanations of the error when they are engaging with learners’ errors, which is aligned with Shulman’s (1986) aspect of PCK related to ‘the ways of representing and formulating the subject that make it comprehensible to others’ (p. 9) in the context of error explanations.

The set of six criteria thus spans the first three of Ball’s knowledge domains, providing evidence of the rich nature of error analysis activities. In the next section we describe the teacher development project from which the data for this analysis is taken and show how we operationalised the criteria.

Teacher development research project

The Data Informed Practice Improvement Project (DIPIP), a teacher professional development project, was one of the first attempts in South Africa to include teachers in a systematic process of interpretation of learners’ errors on a standardised mathematics test (Shalem, Sapire, Welch, Bialobrzeska & Hellman, 2011). The 3-year (2007–2010) research and development programme2 included 62 mathematics teachers from Grades 3 to 9 from a variety of Johannesburg schools. Schools were initially selected on the basis of their participation and results in the International Competitions and Assessments for Schools (ICAS3) 2006 round of testing; later, proximity to the university campus also became a priority for practical reasons. Teachers were organised into groups of three by grade level, forming eight groups of Grade 3–6 teachers and six groups of Grade 7–9 teachers. The groups consisted of a group leader (a mathematics specialist: staff member or postgraduate student who could contribute knowledge from outside the teaching workplace), a departmental subject advisor and two or three teachers. In this way groups were structured to include different authorities and different kinds of knowledge bases. Over a period of three years, during term time, the groups met once a week, sharing ideas, learning from each other and exposing their practice to each other. Six different activities were designed to provide a set of learning opportunities for the groups to reason about assessment data in the context of a professional learning community. In the project teachers mapped the ICAS 2006 and 2007 mathematics test items onto the curriculum, analysed learners’ errors, designed lessons, taught and reflected on their instructional practices and constructed test items. Item-based statistics provided to the teachers for the analysis were based on the 55 000 learners in Gauteng who wrote the ICAS tests. In this analysis we report on the groups’ analysis of the learners’ errors related to 332 test items.

For each test item analysed, the groups were requested to fill in an error analysis task template. The template was designed to guide the error analysis of learners’ choices of correct and incorrect answers and consisted of four parts. The first part required the group to map each test item to the national curriculum expectations and grade level. The second part required the groups to anticipate learners’ achievement and comment on any particular distractor before checking the actual item achievement. The last two parts required the groups to analyse the correct answer and learners’ errors. The groups wrote up their explanations of how they thought the learners had reasoned when they selected the correct answer and each of the distractors. They were requested to write several explanations.

Operationalising the criteria
The sample of explanations for the analysis reported on in this article related to 140 items (20 items per grade) covering all of the content areas in the mathematics curriculum. A total of 572 texts were collected and coded (for the purpose of coding, each one of the groups’ explanations was called a ‘text’). Of these, 320 texts related to the correct answer (‘answer texts’) and 252 texts related to the most common distractor selected by learners (‘error texts’). The texts were a product of small group discussions and not of particular teachers; inferences made about teachers’ explanations of learners’ errors should take this into account.

The first two criteria (henceforth ‘procedural’ and ‘conceptual’) were used to analyse the answer texts. The remaining four criteria (henceforth ‘awareness’, ‘diagnostic’, ‘everyday’ and ‘multiple explanations’) were used to analyse the error texts. To capture variability in the quality of the teachers’ explanations of the correct answers and of the errors, each of the six criteria was divided into four categories: full, partial, inaccurate and not present. Category descriptors were developed for each criterion (see Appendix 1 for the error analysis coding template) and exemplars were developed to operationalise the criteria (Shalem & Sapire, 2012).

The answer and error texts were entered into Excel spreadsheets to facilitate the coding process. Two coders were given the Excel spreadsheet to record their coding decisions for each text. The coders were requested to enter one of the above four categories next to each text, according to each criterion’s category descriptors (see Box 2 and Box 3). Consensus discussions between the coders were held on certain items in order to improve agreement between them. The final set of codes used in the analysis was agreed on in discussion with and through arbitration by a third expert (a member of the evaluation analysis team). Agreement between the coders was 57% before the review, 71% after the review and 100% after arbitration.
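
As a rough illustration of this agreement check, the sketch below (Python, our own; the coder labels are invented toy data rather than the study’s actual codes) computes simple percentage agreement between two coders over the four categories.

    # The four quality categories used in the study.
    CATEGORIES = ("full", "partial", "inaccurate", "not present")

    def percentage_agreement(coder_a, coder_b):
        """Percentage of texts to which both coders assigned the same category."""
        assert len(coder_a) == len(coder_b), "coders must rate the same set of texts"
        matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
        return 100 * matches / len(coder_a)

    # Toy data for five texts (illustrative only).
    coder_a = ["full", "partial", "partial", "not present", "inaccurate"]
    coder_b = ["full", "partial", "inaccurate", "not present", "partial"]

    print(f"{percentage_agreement(coder_a, coder_b):.0f}% agreement")  # 60% for this toy data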

BOX 2: Exemplars of coded error texts in relation to a number patterns item.

The data was analysed quantitatively to identify observable trends and relationships in the sample, and summarised for descriptive analysis. The correlation between the two criteria for the answer texts (‘procedural’ and ‘conceptual’) was calculated, as was the correlation between two of the criteria for the error texts (‘awareness’ and ‘diagnostic’); both were calculated using Pearson’s r coefficient.
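
The correlation step can be sketched as follows, assuming the four ordinal categories are first mapped to numeric scores before Pearson’s r is computed; the mapping, the toy codes and the use of Python’s statistics.correlation are our own illustrative assumptions, not the study’s actual procedure.

    from statistics import correlation  # Pearson's r (Python 3.10+)

    # Assumed ordinal mapping from quality category to numeric score.
    SCORE = {"not present": 0, "inaccurate": 1, "partial": 2, "full": 3}

    # Toy codes for the same six answer texts on the two criteria (illustrative only).
    procedural = ["full", "partial", "partial", "inaccurate", "full", "not present"]
    conceptual = ["full", "partial", "inaccurate", "inaccurate", "partial", "not present"]

    x = [SCORE[c] for c in procedural]
    y = [SCORE[c] for c in conceptual]

    print(f"Pearson's r = {correlation(x, y):.2f}")  # approximately 0.90 for this toy data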

Exemplars of answer and error texts, the codes and the coding justification are given below in two boxes: Box 1 relates to answer texts (‘procedural’ and ‘conceptual’ criteria) and Box 2 relates to error texts (‘awareness’ and ‘diagnostic’ criteria).

BOX 1: Exemplars of coded answer texts in relation to a number patterns item.

Findings: Usability of the measurement criteria

In terms of the first research question, the nature of the teachers’ explanations, we found that in about 70%–80% of all the explanation texts (that is, the answer and error texts) groups drew primarily on mathematical knowledge and much less so on other discourses. Figure 1 shows that only about 15% of the answer texts and closer to 25% of the error texts had no mathematical content (see not present). This means that the groups did not often resort to common-sense talk about learners’ errors, such as test-related explanations (e.g. the learners did not read the question well, or the learners guessed), learner-related explanations (e.g. the question is not within the learners’ field of experience) or curriculum-related explanations (e.g. the learners have not learned this work). This is consistent with a further finding: despite the recommendation in the national curriculum at the time to make links to everyday experiences when explaining mathematical concepts, 95% of the error texts included no links to learners’ everyday experiences (Criterion 5) (see Figure 1). Only a small number of error texts demonstrate teachers’ explanations that focus on the link between an everyday phenomenon and the mathematical content of the item. These two findings are consistent with Hill et al. (2008), who found in cognitive interviews with teachers responding to multiple-choice items measuring knowledge of content and students that teachers’ reasons for their choices were more often related to knowledge of learners’ errors or mathematical reasoning than to test-taking skills.

FIGURE 1: Percentage of texts by error analysis aspect.

In terms of the second research question, variability in the quality of the teachers’ explanations, we found that the distribution of levels of quality of explanation within the domain of subject matter knowledge (‘procedural’, ‘conceptual’ and ‘awareness’) was similar across the three criteria (see Figure 1). In comparison, the distribution of levels of quality of explanation within the domain of pedagogical content knowledge (‘diagnostic’, ‘everyday’ and ‘multiple’) varied across the three criteria. This suggests that, in teachers’ explanations of learners’ errors, matters of content knowledge are consistent in quality (most explanations are partial) whilst matters of pedagogical content knowledge are inconsistent (no pattern of quality can be seen).

Most of the explanation texts (that is, the answer and error texts) were partial, missing crucial steps in the analysis of what mathematics is needed to answer the question. Overall, across the 14 groups, 50% of the answer texts on Criterion 1 (‘procedural’), 42% of the answer texts on Criterion 2 (‘conceptual’), 43% of the error texts on Criterion 3 (‘awareness’) and 33% of the error texts on Criterion 4 (‘diagnostic’) were partial. More full answer texts were found on Criterion 1 (34%) than on Criterion 2 (28%) (see Figure 1). The evidence that teachers gave predominantly partial explanations on Criterion 1 is particularly interesting: partial procedural explanations of the mathematics involved in solving a particular problem may impede teachers’ capacity to identify mathematical errors, let alone to diagnose learners’ reasoning behind the errors.

The limited extent to which the groups were able to describe learners’ reasoning behind mathematical errors (Criterion 4) indicates that the teachers struggled to think diagnostically about learners’ errors. Figure 1 shows that 27% of the error texts did not attempt to explain learners’ reasoning behind the error (not present) and another 28% described learners’ reasoning without homing in on the error (inaccurate). Altogether, more than 50% of the error texts demonstrated weak diagnostic reasoning. A further 33% of the texts homed in on the error but the description of learner reasoning was partial. This means that in close to 90% of the error texts groups either offered no explanation of learners’ reasoning or offered one that was inaccurate or partial. Only 12% of the error texts were systematic and homed in on the learners’ reasoning about the error. According to Ball et al. (2008), explanation of learners’ reasoning implies ‘nimbleness in thinking’ and ‘flexible thinking about meaning’. This is the criterion on which the groups’ performance was weakest, proportionally more so in the Grade 3–6 groups (see Figure 2). The weakness in explaining learners’ reasoning is consistent with the groups’ inability to produce more than one explanation of the error (Criterion 6). The four category descriptors for Criterion 6 (multiple explanations) indicate a numeric count of the number of mathematically correct explanations (see Table 1-A1); the code inaccurate for Criterion 6 means that only one mathematically correct explanation was given. Overall, 73% of the error texts provided only one mathematically feasible or convincing explanation.

FIGURE 2: Procedural explanations of learners’ correct answers by grade groups.

A comparison between the explanations of correct answers recorded by the two grade bands gives insight into some differences between them. The Grade 3–6 groups were a little weaker than the Grade 7–9 groups (see Figure 2 and Figure 3), as evidenced by the higher percentages of full explanations on both Criterion 1 and Criterion 2 in the Grade 7–9 groups.

FIGURE 3: Conceptual explanations of learners’ correct answers by grade groups.

In terms of the third research question, the use of the criteria enabled us to identify two interesting correlations. The first is between Criterion 1 and Criterion 2 (‘procedural’ and ‘conceptual’), which was high (r = 0.73). This suggests that when teachers are able to provide a full explanation of the steps to be taken to arrive at a solution, their explanations also cover the conceptual links that underpin the solution, and vice versa: the weaker the procedural explanations, the weaker the conceptual explanations. The correlation confirms that there is interdependence between the procedural and conceptual aspects in teachers’ explanations of the mathematics that underlies a mathematical problem. The second correlation is between Criterion 3 and Criterion 4 (‘awareness’ and ‘diagnostic’), which was also high (r = 0.67). This suggests that when groups demonstrate high awareness of the mathematical error (SMK) they are more likely to give a better diagnosis of the learner thinking behind that error (PCK). When teachers can describe the error mathematically well (SMK) they are more likely to be able to delve into the cognitive process taken by the learners and describe the reasoning that led to the production of the error (PCK). Furthermore, given that the teachers’ answer texts were mostly partial, it is to be expected that they struggled to describe the mathematical way in which the learners produced the error.

Discussion: The teachers’ knowledge of error analysis

Much research in South Africa suggests that teachers use more procedural and not enough conceptual explanations of mathematics and that this may explain learners’ poor performance (Baroody et al., 2007; Carnoy, Chisholm & Chilisa, 2012; Long, 2005; Star, 2005). The present study was able to examine the procedural–conceptual relationship quantitatively and found a strong correlation. More research is needed to differentiate between strong and weak explanations in answer texts, both in general and in different mathematical content areas. This is important for building a database for teachers of crucial steps in explanations, particularly in mathematical topics that underpin conceptual understanding at higher levels, such as place value. Efforts in teacher education are needed to improve the quality of teachers’ procedural explanations, making sure that teachers are aware of which steps are crucial for addressing a mathematical problem and what counts as a full procedural explanation.

The dominance of the partial category in all the texts, the groups’ difficulty with explaining the rationale for the ways in which the learners were reasoning and their inability, even in a group situation, to provide alternative explanations despite being requested to do so are noteworthy. These findings suggest that teachers struggle to explain the mathematical content covered by an item, particularly when asked to explain it from the perspective of how learners typically learn that content. Teachers seem to draw on different kinds of knowledge when explaining correct answers or errors and when providing reasons (single or multiple) behind learners’ errors. Providing full procedural and conceptual explanations of correct answers and explanations of the actual mathematical error depends on teachers’ knowledge of mathematics, whilst diagnostic reasoning depends not only on mathematical knowledge but also on the degree of teachers’ familiarity with learners’ common ways of thinking when choosing an incorrect answer. That is, teachers recognise when learners’ answers or reasoning are incorrect mainly through their own understanding of the mathematics, but this does not necessarily translate into an understanding of learners’ ways of thinking.

Evidence of different patterns in the distribution of levels for criteria grouped in the two different knowledge domains (SMK and PCK) highlights the difference between these domains of knowledge. Error analysis aspects within the mathematical knowledge domain (Criteria 1, 2 and 3) show similar patterns of distribution, which implies that these three aspects of error analysis can be interpreted and studied as a single construct. In contrast, the variation of distributions within the PCK domain (Criteria 4, 5 and 6) is an indication of the multidimensionality of this construct even in the specific context of error analysis. In effect, the varied distribution means that providing a rationale for how learners reason mathematically is not related to the ability to provide multiple explanations of the error, nor to explaining the error by linking the rationale to everyday experiences. More development is needed for error analysis aspects in this domain of knowledge if they are to be used as a single measurement scale. For those interested in using error analysis tasks and the proposed analytical tool to educate or develop teachers, this is an important and useful finding.

Conclusion

This article shows that assessment data can be used as an artefact to stimulate discussion and can provide a learning opportunity for teachers, instead of being used against them as a naming-and-blaming tool. The study also shows that the six criteria and their category descriptors can be used to evaluate the variation in quality of teachers’ explanations of learners’ errors in mathematics assessments and can detect relationships between some of the aspects that inform those explanations.

Acknowledgements

We acknowledge the funding received from the Gauteng Department of Education (South Africa), and in particular would like to thank Reena Rampersad and Prem Govender for their support of the project. We would also like to acknowledge Professor Karin Brodie for her role in the conceptualisation of the DIPIP project. The views expressed in this article are those of the authors.

Competing interests
The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions
Y.S. (University of the Witwatersrand), project director, was involved in the analysis of the data for the writing of the article. I.S. (University of the Witwatersrand), project coordinator, was involved in the coding and analysis of the data and the writing of the article. M.A.S. (Texas State University) was involved in the coding and analysis of the data and she also contributed to the writing of the article.

References

Adler, J. (2005). Mathematics for teaching: What is it and why is it important that we talk about it? Pythagoras, 62, 2–11.

Ball, D.L., Hill, H.C., & Bass, H. (2005). Knowing mathematics for teaching: Who knows mathematics well enough to teach third grade, and how can we decide? American Educator, 22, 14–22; 43–46.

Ball, D.L., Thames, M.H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407. http://dx.doi.org/10.1177/0022487108324554

Baroody, A.J., Feil, Y., & Johnson, A.R. (2007). An alternative reconceptualization of procedural and conceptual knowledge. Journal for Research in Mathematics Education, 38, 115–131.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London: King’s College Department of Education & Professional Studies.

Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J. Gardner (Ed.) Assessment and learning (pp. 206–230). London: Sage.

Borasi, R. (1994). Capitalizing on errors as ‘springboards for inquiry’: A teaching experiment. Journal for Research in Mathematics Education, 25(2), 166–208. http://dx.doi.org/10.2307/749507

Boudett, K.P., City, E., & Murnane, R. (Eds.). (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.

Brodie, K. (2014). Learning about learner errors in professional learner communities. Educational Studies in Mathematics, 85, 221–239. http://dx.doi.org/10.1007/s10649-013-9507-1

Brodie, K., & Berger, M. (2010). Toward a discursive framework for learner errors in mathematics. In V. Mudaly (Ed.), Proceedings of the 18th annual meeting of the Southern African Association for Research in Mathematics, Science and Technology Education (pp. 169–181). Durban: University of KwaZulu-Natal.

Burger, W.F., & Shaughnessy, J.M. (1986). Characterising the Van Hiele levels of development in geometry. Journal for Research in Mathematics Education, 17(1), 31–48. http://dx.doi.org/10.2307/749317

Carnoy, M., Chisholm, L., & Chilisa, B. (Eds.). (2012). The low achievement trap: Comparing schooling in Botswana and South Africa. Pretoria: Human Sciences Research Council.

City, E.A., Elmore, R. F., Fiarman, S.E., & Teitel, L. (2009). Instructional rounds in education: A network approach to improving teaching and learning. Cambridge, MA: Harvard Education Press.

Cohen, D.K., & Hill, H.C. (2001). Learning policy: When state education reform works. New Haven, CT: Yale University Press. http://dx.doi.org/10.12987/yale/9780300089479.001.0001

Dempster, E. (2006). Strategies for answering multiple choice questions among South African learners: What can we learn from TIMSS 2003? Proceedings of the 4th Sub-Regional Conference on Assessment in Education, June 2006. Pretoria: UMALUSI.

Dempster, E., & Zuma, S. (2010). Reasoning used by isiZulu-speaking children when answering science questions in English. Journal of Education, 50, 35–59.

Departments of Basic Education & Higher Education and Training. (2011). Integrated strategic planning framework for teacher education and development in South Africa 2011–2025: Frequently asked questions. Pretoria: DoE.

Gagatsis, A., & Kyriakides, L. (2000). Teachers’ attitudes towards their pupils’ mathematical errors. Educational Research and Evaluation, 6(1), 24–58. http://dx.doi.org/10.1076/1380-3611(200003)6:1;1-I;FT024

Hill, H.C., & Ball, D.L. (2009). The curious – and crucial – case of mathematical knowledge for teaching. Phi Delta Kappan, 91(2), 68–71. http://dx.doi.org/10.1177/003172170909100215

Hill, H.C., Ball, D.L., & Schilling, S.G. (2008). Unpacking pedagogical content knowledge: Conceptualizing and measuring teachers’ topic-specific knowledge of students. Journal for Research in Mathematics Education, 39(4), 372–400.

Katz, S., Earl, L., & Ben Jaafar, S. (2009). Building and connecting learning communities: The power of networks for school improvement. Thousand Oaks, CA: Corwin.

Katz, S., Sutherland, S., & Earl, L. (2005). Towards an evaluation habit of mind: Mapping the journey. Teachers College Record, 107(10), 2326–2350. http://dx.doi.org/10.1111/j.1467-9620.2005.00594.x

Kilpatrick, J., Swafford, J., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.

Long, C. (2005). Maths concepts in teaching: Procedural and conceptual knowledge. Pythagoras, 62, 59–65. http://dx.doi.org/10.4102/pythagoras.v0i62.115

Long, C. (2007). What can we learn from TIMSS 2003? Available from http://www.amesa.org.za/AMESA2007/Volumes/Vol11.pdf

Nesher, P. (1987). Towards an instructional theory: The role of learners’ misconceptions. For the Learning of Mathematics, 7(3), 33–39.

Nichols, P.D., Meyers, J.L., & Burling, K.S. (2009). A framework for evaluating and planning assessments intended to improve student achievement. In S. Brookhart (Ed.), Special issue on the validity of formative and interim assessment, Educational Measurement: Issues and Practice, 28(3), 14–23.

Olivier, A. (1996). Handling pupils’ misconceptions. Pythagoras, 21, 10–19.

Peng, A. (2010). Teacher knowledge of students’ mathematical errors. Available from http://www.scribd.com/doc/54223801/TEACHER-KNOWLEDGE-OF-STUDENTS’-MATHEMATICAL

Peng, A., & Luo, Z. (2009). A framework for examining mathematics teacher knowledge as used in error analysis. For the Learning of Mathematics, 29(3), 22–25.

Prediger, S. (2010). How to develop mathematics-for-teaching and for understanding: The case of meanings of the equal sign. Journal of Mathematics Teacher Education, 13, 73–93. http://dx.doi.org/10.1007/s10857-009-9119-y

Radatz, H. (1979). Error analysis in mathematics education. Journal for Research in Mathematics Education, 10(3), 163–172. http://dx.doi.org/10.2307/748804

Reddy, V. (2006). Mathematics and science achievement at South African schools in TIMSS 2003. Cape Town: Human Sciences Research Council.

Rowland, T., & Turner, F. (2008). How shall we talk about ‘subject knowledge’ for mathematics? Proceedings of the British Society for Research into Learning Mathematics, 28(2), 91–96.

Schwab, J.J. (1978). Education and the structure of the disciplines. In I. Westbury & N.J. Wilkof (Eds.), Science, curriculum and liberal education (pp. 229–272). Chicago, IL: University of Chicago Press.

Shalem, Y., & Sapire, I. (2012). Teachers’ knowledge of error analysis. Johannesburg: Saide.

Shalem, Y., Sapire, I., & Huntley, B. (2013). Mapping onto the mathematics curriculum – An opportunity for teachers to learn. Pythagoras, 34(1), 11–20.

Shalem, Y., Sapire, I., Welch, T., Bialobrzeska, M., & Hellman, L. (2011). Professional learning communities for teacher development: The collaborative enquiry process in the Data Informed Practice Improvement Project. Johannesburg: Saide. Available from http://www.oerafrica.org/teachered/TeacherEducationOERResources/SearchResults/tabid/934/mctl/Details/id/38939/Default.aspx

Shavelson, R., Li, M., Ruiz-Primo, M.A., & Ayala, C.C. (2002). Evaluating new approaches to assessing learning. Centre for the Study of Evaluation. Report 604. Available from http://www.cse.ucla.edu/products/Reports/R604.pdf

Shepard, L.A. (2009). Commentary: Evaluating the validity of formative and interim assessment. Educational Measurement: Issues and Practice, 28(3), 32–37. http://dx.doi.org/10.1111/j.1745-3992.2009.00152.x

Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14. http://dx.doi.org/10.3102/0013189X015002004

Smith, J.P., DiSessa, A.A., & Roschelle, J. (1993). Misconceptions reconceived: A constructivist analysis of knowledge in transition. Journal of the Learning Sciences, 3(2), 115–163. http://dx.doi.org/10.1207/s15327809jls0302_1

Star, J.R. (2005). Reconceptualizing procedural knowledge. Journal for Research in Mathematics Education, 36, 404–411.

Stein, M., Smith, M., Henningsen, M., & Silver, E. (2000). Implementing standards-based mathematics instruction: A casebook for professional development. New York, NY: Teachers College Press.

Taylor, N. (2001, February). ‘Anything but knowledge’: The case of the undisciplined curriculum. Paper presented to a seminar hosted by the Gauteng Institute for Curriculum Development, Johannesburg, South Africa.

U.S. Department of Education, Office of Planning, Evaluation and Policy Development. (2010). ESEA blueprint for reform. Washington, DC: US DE. Available from http://www2.ed.gov/policy/elsec/leg/blueprint/

Venkat, H., & Adler, J. (2012). Coherence and connections in teachers’ mathematical discourses in instruction. Pythagoras, 33(3), 1–8. http://dx.doi.org/10.4102/pythagoras.v33i3.188

Walkerdine, V. (1982). From context to text: A psychosemiotic approach to abstract thought. In M. Beveridge (Ed.), Children thinking through language (pp. 129–155). London: Edward Arnold.

Footnotes

1. Ball et al.’s (2008) work includes six domains. The fourth domain, knowledge of content and teaching, is relevant in lesson design and teaching. Having completed a round of error analysis, the teachers were tasked to plan a set of lessons around the errors identified. In their groups the teachers planned practical examples of how to engage with learners’ errors in classroom situations. They then reflected on the way they engaged with learners’ errors during teaching. A different set of criteria was operationalised for this domain. The curriculum mapping activity exposed the teachers in their groups to certain dimensions of the fifth and sixth domains, knowledge of curriculum and knowledge of the mathematical horizon. Its analysis is reported in Shalem, Sapire and Huntley (2013).

2. There is currently a third phase of DIPIP, located in certain schools, which follows a similar process with teacher groups in those schools.

3. ICAS is conducted by Educational Assessment Australia, University of New South Wales. Learners from over 20 countries in Asia, Africa, Europe, the Pacific and the United States of America participate in ICAS each year.

APPENDIX 1

TABLE 1-A1: Error analysis coding template.


 
