About the Author(s)


Carol A. Bohlmann
Centre for Educational Testing for Access and Placement, University of Cape Town, South Africa

Robert N. Prince
Centre for Educational Testing for Access and Placement, University of Cape Town, South Africa

Andrew Deacon
Centre for Innovation in Learning and Teaching, University of Cape Town, South Africa

Citation


Bohlmann, C.A., Prince, R.N. & Deacon, A. (2017). Mathematical errors made by high performing candidates writing the National Benchmark Tests. Pythagoras, 38(1), a292. https://doi.org/10.4102/pythagoras.v38i1.292

Original Research

Mathematical errors made by high performing candidates writing the National Benchmark Tests

Carol A. Bohlmann, Robert N. Prince, Andrew Deacon

Received: 16 Mar. 2015; Accepted: 27 Feb. 2017; Published: 25 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

When the National Benchmark Tests (NBTs) were first considered, it was suggested that the results would assess entry-level students’ academic and quantitative literacy and mathematical competence; assess the relationships between higher education entry-level requirements and school-level exit outcomes; provide a service to higher education institutions with regard to selection and placement; and assist with curriculum development, particularly in relation to foundation and augmented courses. We recognise there is a need for better communication of the findings arising from analysis of test data, in order to inform teaching and learning and thus attempt to narrow the gap between basic education outcomes and higher education requirements. Specifically, we focus on identification of mathematical errors made by those who have performed in the upper third of the cohort of test candidates. This information may help practitioners in basic and higher education.

The NBTs became operational in 2009. Data have been systematically accumulated and analysed. Here, we provide some background to the data, discuss some of the issues relevant to mathematics, present some of the common errors and problems in conceptual understanding identified from data collected from Mathematics (MAT) tests in 2012 and 2013, and suggest how this could be used to inform mathematics teaching and learning. While teachers may anticipate some of these issues, it is important to note that the identified problems are exhibited by the top third of those who wrote the Mathematics NBTs. This group will constitute a large proportion of first-year students in mathematically demanding programmes.

Our aim here is to raise awareness in higher education and at school level of the extent of the common errors and problems in conceptual understanding of mathematics. We cannot analyse all possible interventions that could be put in place to remediate the identified mathematical problems, but we do provide information that can inform choices when planning such interventions.

Purpose of the article

The National Benchmark Tests (NBTs), as with all high-stakes testing, are at times viewed with anxiety and scepticism. Parents, test candidates and teachers appear to want more information, even though a large amount is readily available on the National Benchmark Tests Project (NBTP) website (www.nbt.ac.za). Particularly in relation to mathematics, the website aims to give all stakeholders the information they need to allay anxiety and dispel some myths (see specifically the section ‘Preparing your learners for the Mathematics (MAT) test’). There are also requests for test exemplars and information regarding special preparation classes. We do not support ‘extra lessons’ in preparation for writing the NBTs, since comprehension does not necessarily increase with coaching, and once school leavers are admitted to university they will not have access to coaching.

If an analysis of the results of the NBTs can raise awareness of problems that learners face, and if these can then be remediated, the result will be an improvement in the standard of mathematics in general and, in particular, in performance in mathematically demanding first-year courses. If this also translates to improved NBT results, it would indicate that the school sector has been responsive to the message the NBTP has sought to give. Higher education, in turn, needs to be more responsive to the needs of the students it admits by providing ‘responsive curricula’. Examples of such interventions include extended programmes, bridging courses, tutorials provided by lecturers or specially appointed tutors, and referral to external sources of additional support such as online mathematics programmes offering particular assistance in areas not well covered at school, for example trigonometry.

We provide background to the NBTs together with an analysis of the results of the mathematics tests written in 2012 and 2013. The results are considered at a national level, and we cannot analyse specific interventions that could be put in place. This is more properly the field of teachers and the academics who are involved in teacher education who can hopefully be more fully informed by the results presented here.

Background

The NBTP was commissioned in 2005 by Higher Education South Africa (HESA), now called Universities South Africa, with the following objectives (Griesel, 2006, p. 4):

  • To assess entry-level academic and quantitative literacy and mathematics proficiency of students.
  • To assess the relationship between higher education entry-level requirements and school-level exit outcomes.
  • To provide a service to higher education institutions requiring additional information to assist in admission (selection and placement) of students.
  • To assist with curriculum development, particularly in relation to foundation and augmented courses.

At the end of Grade 12 all school leavers write the National Senior Certificate (NSC); those wishing to enter higher education also write the NBTs if required to do so by the institutions to which they intend applying. All NBT candidates must write the Academic and Quantitative Literacy (AQL) test; those who intend to study in an area requiring mathematics need to write the Mathematics (MAT) test as well.

The norm-referenced NSC mathematics exam necessarily attempts to reflect the entire school mathematics curriculum. While the criterion-referenced NBT MAT tests do not test anything outside the school curriculum, they are not constrained to include all NSC mathematics topics, and thus focus on those aspects of the school curriculum that have a greater bearing on performance in first-year mathematics courses. Clearly, the NSC mathematics exams and the MAT tests should be regarded as complementary forms of assessment. The two assessment regimes are complementary in the sense that the NSC attempts to answer the question ‘To what extent do NSC candidates meet the curriculum statement expectations as expressed in the subject assessment guidelines?’ while the NBTP attempts to answer the question ‘To what extent do students aiming to enter higher education meet the core academic literacy, quantitative literacy and mathematics competencies required by school leavers on entry to higher education study?’

Whereas the AQL tests are intended as tests of generic skills in the domains of academic and quantitative literacy, the MAT tests are explicitly designed to measure the mathematical preparedness of candidates for mathematically demanding curricula in higher education. The Curriculum and Assessment Policy Statement (CAPS), as was also the case with Curriculum 2005, emphasises the ability of mathematics to provide the necessary conceptual tools for analysing, making and justifying decisions (Department of Basic Education (DBE), 2008a, 2011a), important competencies in higher education. The MAT tests assess the degree to which learners have achieved the ability to manipulate numbers, synthesise a number of different mathematical concepts and draw strictly logical conclusions in abstract symbolic contexts. Lecturers agree that these higher-order skills underlie success in higher education mathematics.

The NBTs have an important role to play in the South African educational landscape (in both basic and higher education) by providing useful information additional to that provided by the NSC. Initially there was scepticism, as expressed in 2009 by the then Department of Basic Education director-general Duncan Hindle:

We would need to be convinced about the need for additional testing. We need to be shown where the NSC is not adequate, and we need to be convinced that the NBT is a credible test. In the end we run the NSC at huge expense. … Is it really justifiable to introduce something else? (Paton, 2009)

The then deputy director-general Penny Vinjevold questioned their purpose: ‘What will they be used for?’ she asked (Paton, 2009). An important use has been in the provision of additional diagnostic information with which lecturers in higher education and teachers in schools can engage.

Spaull and Taylor (2015) highlight the growing evidence of exceedingly low levels of learning in many developing countries, including India, Indonesia, Malaysia, Mexico, Pakistan, Thailand, Turkey and South Africa. They also remark that not only are the levels of learning typically low, but the actual learning associated with a year of schooling differs widely across countries. Comparing NSC results across schools and across provinces, it appears that within South Africa there are wide differences in the learning demonstrated by Grade 12 learners. The NBTs are tools that enable us to measure levels of learning relevant to higher education.

A Council on Higher Education (CHE) study (Scott, Yeld & Hendry, 2007) analysed the performance patterns of the 2000 intake into higher education. Comments by Professor Jill Adler further substantiate these results. Under the auspices of Marang (the Wits Centre for Maths and Science Education), Prof. Adler commented in Wits Maths Connect on 5 May 2011 on ‘the implications for universities of the new CAPS particularly with respect to admissions requirements’. She attached to this document additional information emanating from a 2010 Academy of Science of South Africa (ASSAf) forum, called ‘Mind the Gap’, which noted the high attrition rate in first-year life and physical sciences degrees followed by low graduation rates, with only a small group completing in regulation time (Adler, 2011). These results, which are confirmed in the analyses of the 2006 intake cohort in the CHE (2013) publication, indicate that most students at South African universities take more than the specified three or four years to complete their studies.

In the period after the introduction of the NSC in 2008 up to 2012, less than a quarter of Grade 12 learners writing mathematics achieved more than 50%. In the same period the proportion of learners achieving between 70% and 100% fell from 8.3% in 2008 to 5.9% in 2011 and increased to 7.0% in 2012. In 2012, only 15 800 learners achieved between 70% and 100% for mathematics compared with 24 900 in 2008 (Snyman, 2013, p. 510).

With challenging school conditions and changing school curricula (Curriculum 2005, examined in Grade 12 from 2009 until 2013, and then the CAPS, examined for the first time in 2014), teachers find it hard to meet the challenges of changed content, changed emphases and different forms of assessment. In many cases aspects of the mathematical curriculum are not taught, or are poorly taught, leaving learners less well prepared for higher education study. Experience in first-year courses over many years reflects a lack of alignment between school outcomes and higher education expectations. Any information that can provide insight into this lack of alignment should be given consideration.

Table 1 (compiled from Table 10.1 in Chapter 10 of the 2013 NSC Diagnostic Report, and Table 10.1 of the 2014 NSC Diagnostic Report) gives the mathematics achievement rates over a five-year period (DBE, 2013a, 2014a). The table shows that from 2010 to 2014 the percentage of those who passed at 40% or more increased by just over 4%.

TABLE 1: National senior certificate mathematics achievement rates 2010–2014.

While the increase in the proportion of students achieving 40% or above is encouraging, it is sobering to consider the low proportion of candidates achieving results that enable them to be admitted to degree programmes requiring mathematics. Even those who are admitted may ultimately take longer than expected to complete their degrees, or drop out altogether.

The DBE is concerned about the problem (DBE, 2014b). It notes a lack of algebraic skills and states that increased attention should be given to higher-order thinking skills. Many errors have their origins in poor understanding of the basics and foundational competencies taught in the earlier grades. Interventions to improve learners’ performance should thus also focus on knowledge, concepts and skills learnt in earlier grades and not only on the final year of the Further Education and Training (FET) phase. Teachers are also encouraged to ensure that mathematical terminology is well taught. Understanding terminology is linked to the understanding of the language of mathematics, which is in turn linked to competence in the language of instruction. Performance in the NBT Academic Literacy tests suggests that there are problems with the comprehension of even simple English – see the reference to proficiency in Academic Literacy in the 2013 NBTP National Report (NBTP, 2012–2013).

Broad references to algebraic skill, the language of mathematics, knowledge of basic competencies and foundational competence provide insufficient information. Below we identify specific skills and competencies that may require particular attention. Analysis of MAT test results identifies problem areas, and their extent, among otherwise high performing learners. Teachers, and mathematicians involved in teacher education programmes, are best able to design strategies that will remediate aspects of mathematics that are barriers to success in mathematically demanding programmes in higher education. The findings presented here could also assist higher education in providing appropriate support, since such support is clearly necessary: even though those participating in higher education are the highest achieving school leavers, representing 16% of the age cohort, for the majority of students the curriculum structures are clearly not working (Adler, 2011).

Literature review

Mathematical assessment

Assessment is an important tool in informing teaching and learning. The DBE requires teachers and district officials to monitor learner performance and report progress. Several regional and international evaluations that include mathematical performance have taken place, such as the Trends in International Mathematics and Science Study (TIMSS, 1995, 1999, 2003), and the Annual National Assessments¹ (ANAs). Analysis of the 2013 ANA results has been referenced against the goals set in the Action Plan to 2014 (DBE, 2015a). There has been some criticism of the purpose and quality of the ANAs, for example by the South African Democratic Teachers’ Union (SADTU, 2014), and also of the interpretation and use of the results (Spaull, 2014). There is also the danger of ‘collateral damage’ (Long, 2015), that is, the unintended consequences such as undermining teachers and distorting the focus of teaching. The ANAs are systemic tests, unlike the NBTs which are criterion-referenced tests. However, it is noteworthy that the overall mathematical performance of the sampled learners was at the ‘Not achieved’ level (less than 15% in both 2012 and 2013) (DBE, 2014b). The reason the ANAs are mentioned in this context is the fact that in Grade 9, in 2012 and in 2013, only 2% of the candidates achieved 50% or more. Of these candidates, not all would plan to enrol in higher education. However, some would do so, and it is then less surprising that the proportion of candidates in the ‘Proficient’ band for the MAT test is around 10%. (A Proficient score indicates that candidates who are admitted to higher education should be able to cope with regular programmes of study.)

Educational goals and assessment goals are linked: learning involves the acquisition of skills and knowledge; assessment identifies the level of knowledge or skill acquired. If assessment is to be meaningful, it needs to advance learning and not simply record its status. If teachers can engage with clear examples of the problems their learners exhibit in assessment tasks, they will be better able to communicate to the learners the underlying mathematics that would facilitate better comprehension. For example, if a teacher knows the theory of addition (to give a very trivial example) and can help learners understand what it means to add ‘like’ terms – that the denominator relates to the name (i.e. identifies ‘like’ terms), the numerator relates to ‘how many’, and so on – then fewer learners would think that 1/2 + 1/3 = 2/5.

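To see how the ‘like terms’ idea blocks this error, consider the following worked line (the specific fractions are ours, chosen for illustration):

  1/2 + 1/3 = 3/6 + 2/6 = 5/6

Rewriting both fractions over the common denominator 6 creates ‘like’ terms (sixths), so only the numerators – the ‘how many’ – are added; the common error of adding numerators and denominators separately never arises.
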
The importance of foundational knowledge

Mathematics should be a gateway, and not a gatekeeper, to success in higher education. Students entering science, technology, engineering and mathematics fields need to be proficient in the requisite mathematics. At university the prior domain knowledge and previous learning experiences that students bring to their studies are acknowledged as significant factors influencing student success (Crawford, Gordon, Nicholas & Prosser, 1998). Teachers who are able to equip their learners with appropriate domain knowledge will help them move through the gateway, rather than run up against the barriers of poor comprehension and competence. Analyses of MAT test results indicate areas where prior knowledge is in fact lacking.

Teachers who are pressed for time, and teachers who need or want to ensure specific test results, tend to teach to the test. Unfortunately, teaching (and possibly learning) may be driven by the extent and type of assessment involved (Jennings & Bearak, 2014; Kahn, 2002). Aspects of the specified NSC curriculum that are not examined may be neglected. Aspects of mathematics that are tested in the NBT MAT tests may be paid more attention as a result of increased awareness of what is deemed to be important. Raising awareness of specific conceptual gaps will hopefully result in greater attention to these areas at school. Increased understanding should lead to a greater chance of success in mathematically demanding first year courses.

Errors and misconceptions

The advent of the ANAs has given rise to additional expectations from teachers. See for example the following DBE statement (2011b):

ANA is intended to provide regular, well-timed, valid and credible data on pupil achievement in the education system. Assessment of pupils’ performance in the GET Band (Grades 1–9) has previously been done at school level. Unlike examinations that are designed to inform decisions on learner promotion and progression, ANA data is meant to be used for both diagnostic purposes at individual learner level and decision-making purposes at systemic level. At the individual learner level, the ANA results will provide teachers with empirical evidence on what the learner can and/or cannot do at a particular stage or grade and do so at the beginning of the school year. Schools will inform parents of their child’s ANA performance in March.

The above statement suggests that data from external assessments are intended to be used diagnostically. Shalem and Sapire (2012) suggest that the idea of informing local knowledge using a systemic set of evidence, diagnostically, is not without problems in terms of application. Teachers have always needed to recognise learners’ errors, a skill without which they would not have been able to assess learners’ work. The difference now is that teachers are required ‘to interpret their own learners’ performance in national (and other) assessments’ (DBE & DHET, 2011, p. 2) and develop better lessons on the basis of these interpretations. Diagnosing errors is a necessary first step, but it is not always clear to teachers how they should use the available information. Shalem and Sapire (p. 12) pose the following questions: ‘How does teachers’ tacit knowledge about learners’ errors (which they have acquired from years of marking homework and tests, encountering learners’ errors in teaching, or through hearing about errors from their colleagues) inform or misinform their reasoning about evaluation data? In what ways should teachers work with proficiency information when they plan lessons, when they teach or when they design assessment tasks?’

Smith, diSessa and Roschelle (1993) note that assessment research reflects the importance of student misconceptions. The analyses of National Assessment of Educational Progress (NAEP) mathematics results and the open-ended California Assessment Program tests both showed that misconceptions have a strong influence on how student learning is currently evaluated. Whereas researchers may previously simply have separated correct responses and errors, it is now more common, even in large-scale assessments, to actively search for misconceptions to explain frequent student errors (Smith et al., 1993).

The MAT diagnostic information can highlight some errors and misconceptions and at the same time help teachers use the information. Since methods of assessment should enable learners to demonstrate what they know rather than what they do not know, not all MAT test items present candidates with possible misconceptions as options, that is, not all MAT items have options that include identifiable error types or misconceptions. Doing so would ‘trap’ many candidates into selecting the apparently obvious option; if it is not provided they are more likely to try to solve the problem and find an answer. To give a trivial example, candidates could be asked to choose the correct answer for the following item:

The volume of the cylinder in the diagram, in cm³, is

(A) 200

(B) 201

(C) 202

(D) 203

If the correct answer is 202, then none of the others would reflect any misconception such as using diameter instead of radius, or calculating area instead of volume. The decision to limit the number of questions probing misconceptions minimises to some extent possible diagnostic opportunities, but yields more accurate results in terms of a candidate’s mathematical proficiency.
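
For the cylinder item, the two misconceptions mentioned would correspond to the following calculations (shown symbolically, since they do not appear among the options):

  Correct: V = πr²h, with r the radius
  Diameter error: using the diameter d = 2r in place of r gives πd²h = 4πr²h, four times the correct volume
  Area error: calculating the curved surface area 2πrh instead of the volume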

Methodology

Obtaining data

NBT data are obtained from the NBTP at the University of Cape Town, while the NSC data are obtained from DBE reports. The data are analysed by year, for candidates writing in each DBE province as well as under the Independent Examinations Board (IEB). The total number of candidates who wrote both the NSC and the NBTs increased by 16 453 over the period 2009 to 2013, to a total of 182 156; there were 41 314 and 45 245 candidates in 2012 and 2013 respectively. The MAT tests take three hours and comprise 60 multiple choice questions. Items are scored dichotomously: a correct response to an item is given a score of ‘1’ and an incorrect response a score of ‘0’. The total raw score is obtained by summing the scored item responses. Test items² are analysed using item response theory (IRT) and classical test theory. A three-parameter (a, b, c) IRT model is used to analyse and score item responses, where a = discrimination, b = difficulty and c = guessing/pseudo-chance (Yen & Fitzpatrick, 2006, p. 114). Many different tests are written in each testing cycle, all adhering to the same specifications. Test equivalence is assured through an equating process. A psychometric report for each test provides measures of test reliability, item behaviour and test behaviour. The psychometric data show, for each item, the option choices of candidates ranked in three groups: the lower, middle and upper thirds. More information on the tests themselves can be obtained from the 2013 NBTP National Report (NBTP, 2012–2013).
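
For reference, the standard three-parameter logistic model gives the probability that a candidate of ability θ answers a given item correctly as

  P(correct | θ) = c + (1 − c) / (1 + e^(−a(θ − b)))

so that c is the lower asymptote (the chance of a correct response by guessing alone), b is the ability level at which the curve rises most steeply, and a governs how sharply the item discriminates between candidates just below and just above that level.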

To determine whether learners are able to make the transition between mathematics at secondary and tertiary level, the competencies that are required, but not necessarily made explicit, by higher education need to be assessed. The choice of competencies was until 2013 influenced by the four Learning Outcomes (LO1, LO2, LO3 and LO4) that appeared in the Learning Programme Guidelines of the National Curriculum Statement for Mathematics for Grades 10–12 (DBE, 2008a). In 2014, Grade 12 learners were assessed in terms of the CAPS (DBE, 2011a) and the MAT tests for 2014 were adapted accordingly.

The MAT tests are embedded in the NSC curriculum, but cut across the different learning areas. This means that whereas the NSC Grade 12 exam assesses separately different learning areas such as algebra, trigonometry and Euclidean geometry and measurement, a MAT test may include a geometry question that will be solved using trigonometry and algebra. The MAT test specification spreads questions into six clusters: algebra, functions, transformations, trigonometry, spatial reasoning and data processing.

The NSC Subject Assessment Guidelines previously specified a taxonomy of categories of mathematical demand, which indicated that learners needed to perform at the levels of knowing (recall or basic factual knowledge), performing routine procedures, performing complex procedures and problem-solving (DBE, 2008b), carrying weights of approximately 25%, 30%, 30% and 15%, respectively. In the CAPS (DBE, 2011a) the following taxonomy is proposed: knowledge 20%, performing routine procedures 35%, performing complex procedures 30% and problem-solving 15%. The MAT tests are also cognitively differentiated, starting with lower-order questions to facilitate an easy introduction into the test and then progressing to questions with greater cognitive demand. The highest level (counting for about 8%) comprises items that involve greater insight; about 45% of the items comprise knowledge, recall and application of straightforward procedures.

Test results place candidates into three benchmark categories: Basic, Intermediate and Proficient (determined during standard setting, which took place in 2009, 2012 and again in 2015) (for further details see NBTP, 2012–2013). For convenience and additional clarity the Intermediate category is further divided into Upper Intermediate and Lower Intermediate bands. When the benchmarks were first set in 2009 it became clear that a significant proportion of applicants to higher education would be in need of support.

Less than 10% of all candidates in the cohort analysed attained the Proficient benchmark level in MAT tests (i.e. obtained scores of 62% or more using the 2009 benchmarks, which were in place for test candidates in 2012). Furthermore, Table 2 shows that less than 15% of all candidates were in the Upper Intermediate group (i.e. obtained scores of between 48% and 61%) and 36% of all candidates were in the Lower Intermediate group (i.e. obtained scores of between 34% and 47%).

TABLE 2: Distribution across benchmark levels for 2012 national benchmark test mathematics candidates.

Benchmarks were reset in 2012, resulting in the following distribution for candidates in 2013 (see Table 3). The percentage falling in the Basic category was higher in 2013 than in 2012.

TABLE 3: Distribution across benchmark levels for 2013 national benchmark test mathematics candidates.

Figure 1 compares performance in the MAT tests for the 2012 and 2013 cohorts. The percentage in the Basic category increased from 41.4% to 49.0%; however, there was also a small increase (2.1%) in the Proficient category.

FIGURE 1: Proportion of learners within NBT Mathematics performance levels for NBT 2012 and 2013 intake cycles.

Figure 1 shows that in 2012 and 2013, respectively, the MAT candidates falling into the Intermediate Upper and Proficient benchmark bands (those students for whom universities may expect to have to provide minimal or no additional support) constituted 22.5% and 24.0% of the total writer cohorts. Clearly, institutions will need to accept candidates from the remaining bands, and the support needs of those candidates will be substantial. This suggests that the schooling system does not adequately prepare students for the mathematical rigours of higher education. While higher education needs to provide support for incoming students, possibly by providing differentiated curricula, it is important that basic education attempts to address the problem at the level at which it occurs so that students can be better prepared for the demands of university.

Diagnostic information

For this article, we considered the results of prospective students, nationally, who wrote the MAT tests between May and November of 2012 (38 730 candidates) and 2013 (48 318 candidates) and who achieved scores in the top third band of that particular cohort (this is possible since the psychometric test data report option choices of candidates according to lower third, middle third and upper third bands). When 20% or more of the top third of all the candidates selected a particular incorrect option, we considered possible causes for their choice: whether the choice could be attributed to a specific misconception or to an identifiable flaw in reasoning.
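
The flagging rule is simple enough to state as a short computational sketch. The code below is our illustration only – the data layout, names and values are hypothetical and do not reflect the NBTP’s actual analysis pipeline:

    # Hypothetical sketch of the distractor-flagging rule described above.
    # Each response is a (band, option) pair; bands are 'lower', 'middle', 'upper'.

    def flag_distractors(responses, correct_option, threshold=0.20):
        """Return incorrect options chosen by at least `threshold` of upper-third candidates."""
        upper = [option for band, option in responses if band == 'upper']
        flagged = {}
        for option in set(upper):
            if option == correct_option:
                continue  # only incorrect options can point to a misconception
            rate = upper.count(option) / len(upper)
            if rate >= threshold:
                flagged[option] = rate
        return flagged

    # Illustrative data: 'B' is correct; distractor 'C' attracts 25% of the upper third.
    responses = ([('upper', 'B')] * 9 + [('upper', 'C')] * 4 +
                 [('upper', 'A')] * 3 + [('lower', 'C')] * 5)
    print(flag_distractors(responses, correct_option='B'))  # {'C': 0.25}

Each option flagged in this way was then examined for a plausible underlying cause.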

All data relate to candidates who applied to register for courses requiring mathematics, not necessarily to those who were actually admitted. All these candidates are representatives of learners aspiring to study courses in which mathematics is required. If mathematical topics that have been identified appear to create problems for even the ‘good’ test candidates, they are most likely problematic for all others as well. Teachers need to become aware of the errors and misconceptions in order to assist learners.

The difficulties or misconceptions experienced by these NBT candidates indicate the need for various forms of intervention that could be undertaken to address the problems outlined here. Item responses from different mathematical topics that reflect similar misconceptions or errors in reasoning have been grouped together. Suggestions for possible interventions are noted, but in-depth analysis of such interventions is beyond the scope of this article. Teachers, and academics involved in teacher training, would be best placed to consider at what grade (even at primary school) and in what depth various interventions should be introduced. They will be aware, for example, of the work of Parker and Leinhardt (1995) on understanding percentages and of Davis (2013) on understanding what cancelling means, among much other relevant work.

It may be common knowledge that, as noted in the 2014 NSC Diagnostic Report (DBE, 2014a), there is limited understanding of basic exponential laws, but it is sobering to note that in the NBT MAT tests as well, errors such as a³ + a⁵ = a⁸ are evident in the very group of candidates aspiring to enter university. While these common errors and misconceptions may be familiar to teachers, their extent and character within the cohort of students entering university have been far less well quantified.

Results

Interpreting the data

The purpose of Tables 4 to 8 is to show how common errors and misconceptions in one conceptual area (such as an algebraic operation) also occur in many other clusters of items in the MAT tests. In the tables, items are listed according to their unique identity numbers (for example A5 refers to a specific item in one of the algebra subsections). In each row of each of the following tables, we give an indication of the proportion of candidates who demonstrate incomplete or incorrect understanding in relation to a specific item. An item may appear in several tests, and the table indicates the number of tests in which a particular error was identified. For convenience, items in which the same type of error has been made have been reported together. For the different error categories there are brief suggestions regarding intervention approaches that might be considered in response to the problems identified. In-depth discussion of possible strategies is beyond the scope of this article.

TABLE 4: Algebraic processing.

Implications of mistakes made in algebraic processing

Algebraic processing skills are fundamental to all aspects of mathematics in higher education and Table 4 shows that even the better writers demonstrate problems in understanding algebra.

If a student has, for example, understood differentiation but is unable to apply the necessary algebraic processing procedures correctly, further application and problem-solving are undermined. Analysis of the MAT test results shows that when solving algebraic equations, test candidates have forgotten the difference between an expression and an equation, and the need to apply identical procedures to both sides of an equation in order to find its solution. They have difficulty dealing with signs when subtraction is involved. It is necessary to revise expansion of brackets preceded by a minus, so that learners understand that, for example, −(2 − 3) = (−1)(2 − 3). It may be necessary to revise all operations involving negative integers.
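
A short worked contrast, using the numbers from the example above, makes the point:

  Correct: −(2 − 3) = (−1)(2 − 3) = −2 + 3 = 1
  Common error: applying the minus only to the first term, −(2 − 3) → −2 − 3 = −5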

A number of other algebraic concepts are also problematic. ‘Cancelling’ is poorly understood (it appears to be the same as ‘crossing out variables or numbers that are the same’). This also relates to solving equations, where cross-multiplication is applicable (because the same procedure is in effect applied to both sides) whereas it is not applicable when simplifying a mathematical expression. Factorisation, fractions and equivalent fractions must be clarified, along with what ‘cancelling’ actually means. Ratio and proportion are poorly understood. Learners do not remember (from earlier grades) or know that a proportional statement is an equation involving two ratios.
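
To illustrate the distinction (the examples are ours): cancelling is legitimate only when it amounts to dividing numerator and denominator by a common factor, and cross-multiplication is legitimate only because an equation allows both sides to be multiplied by the same quantity:

  Valid cancelling: (3x)/(3y) = x/y, since 3 is a factor of both numerator and denominator
  Invalid ‘crossing out’: (x + 3)/(y + 3) ≠ x/y, since here 3 is a term, not a factor
  Cross-multiplication: x/4 = 3/2 ⇒ 2x = 12 ⇒ x = 6, because both sides of the equation are multiplied by 8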

All learners know that (a + b)² = a² + 2ab + b² but do not necessarily understand that 2ab means twice the product of the first and last term. When either of these terms is slightly different from a single number or algebraic term they do not know what to do. Binomial expansion needs to be taught using many different types of terms in the binomial. Exponential laws are also often memorised and not understood. It is necessary to ensure that learners know the terminology of power, base and exponent, and understand how the exponential laws have been derived. Revision of all operations in which powers are involved is needed.
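
Two worked contrasts of the kind intended here (the specific terms are our illustrations):

  Binomial: (x + 3y)² = x² + 2(x)(3y) + (3y)² = x² + 6xy + 9y², whereas the common error omits twice the product and gives x² + 9y²
  Exponents: a³ · a⁵ = a³⁺⁵ = a⁸, because the law applies to a product of powers of the same base; a³ + a⁵ is a sum and cannot be combined into a single power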

Implications of mistakes made with functions

The concept of a function is fundamental to all first-year mathematics courses, whether they are pure mathematics courses or mathematics courses in other disciplines such as commerce or statistics. Table 5 shows that many learners have problems understanding the function concept.

TABLE 5: Functions and the equations that define them.

The definition of a function, different representations of functions and the terminology associated with functions need to be revised. Ideally, generic graphs should be used to clarify function terminology such as domain, range, function graph (where the graph lies above, on or below the x-axis), function value (which is then positive, zero or negative), turning points, asymptotes, intercepts (of a graph) and roots (when the equation representing the graph is equal to zero), period and amplitude.

Because the range of the function defined by y = sin x is [−1, 1], the assumption is easily made that any sine function has the same range, unless the concept of range is properly understood. Because the maximum value of a quadratic function occurs at the turning point of the graph, the assumption is made that any maximum must be related to the turning point. Similarly, because the y-intercept of a linear graph is the constant (in the equation y = mx + c) there is a tendency to assume that the y-intercept is also the constant in any other graph, such as y = aˣ + k. Various combinations of graphs should be shown with the distances between them at different points, to demonstrate that the maximum distance can occur at a point other than the turning point. Different types of graph should demonstrate different y-intercepts.

Generic graphs can also be used to illustrate transformation principles such as horizontal shrink or stretch, vertical shift (upwards or downwards) and reflection in the x-axis, y-axis, or other lines such as y = x. The effect of the parameters involved in different algebraic representations of functions needs to be demonstrated graphically, so that learners understand how the numerical value and the sign of the parameters affect the shape of the graph, rather than just depend on calculators to draw specific graphs. If learners understand the graphical meaning of parameter change and the link between algebraic and graphical representation of transformed graphs they may be less likely to memorise (incorrectly) various transformation rules.
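
Two concrete checks of the kind described above (the functions are our illustrations):

  Range: y = 3 sin x + 1 has range [−2, 4], not [−1, 1], since sin x lies in [−1, 1], so 3 sin x lies in [−3, 3] and 3 sin x + 1 lies in [−2, 4]
  y-intercept: for y = 2ˣ + 3, setting x = 0 gives y = 2⁰ + 3 = 4, so the y-intercept is 4, not the constant 3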

Implications of mistakes made with trigonometric functions

Trigonometric ratios are initially taught in terms of right triangles. This understanding is often not successfully extended to trigonometric functions in the Cartesian plane, as can be seen from Table 6. Learners have remembered the ratios, but cannot move beyond these to a context where the function value is not necessarily positive.
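
A typical illustration (our example): for the point (−3, 4) on the terminal arm of an angle θ in the second quadrant, r = √((−3)² + 4²) = 5, so sin θ = y/r = 4/5 but cos θ = x/r = −3/5. The right-triangle ratios alone, in which all sides are positive lengths, cannot produce the negative value.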

TABLE 6: Trigonometric functions.

Implications of mistakes made with geometric concepts

Table 7 shows that by the end of high school, many geometric concepts are not yet well understood or remembered.

TABLE 7: Basic geometric concepts.

It is important to reinforce the link between circles, pi, radius and diameter. It should not be assumed that learners have remembered or understood basic geometric concepts, such as perimeter, area, surface area and volume, especially for objects shown from a perspective other than the standard one, or where composite shapes are involved. The terminology of geometry is possibly problematic: are learners familiar with the meaning of geometrical terms such as face, vertex, rhombus and so on? This needs to be addressed in the earlier grades.

Implications of mistakes made in number sense

Calculator dependence has resulted in a limited understanding of the number system. It is important to teach the structure of the number system, especially numbers (natural, integers, rational, etc.) in relation to one another. Table 8 indicates some of the problems that have arisen when the number system, and several other number concepts such as percentage, are not well understood. There is a lack of awareness of how big or small a negative number is. Surds and fractions are similarly misunderstood. Where a number is represented algebraically, test candidates appear not to use a test case to determine its possible magnitude.
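
A test case of the kind candidates appear not to use (our example): if 0 < x < 1, substituting x = 1/4 gives

  x² = 1/16 < x = 1/4 < √x = 1/2

showing that squaring such a number makes it smaller while taking its square root makes it larger – the opposite of the behaviour of numbers greater than 1.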

TABLE 8: Number sense.

Multiplication and division of numbers are not always well taught in primary school, and the results are still evident much later. Learners need to understand what the process of division actually means, in order to understand why division by zero is impossible. Multiplication and division by negative numbers need to be revised, to enable learners to see that the relative positions of numbers on the number line change, which would clarify their understanding of < and >.
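
For example, multiplying both sides of 2 < 3 by −1 gives −2 > −3: on the number line, −2 and −3 sit in the reverse order of 2 and 3, which is why the inequality sign changes direction when multiplying or dividing by a negative number.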

It should not be assumed that percentage is a well understood concept. For many candidates, ‘percentage’ appears to exist in isolation and there is no attempt to associate it with the quantity to which the percentage is applied. The meaning of percentage, and how percentage can be applied, needs to be revised. This concept is taught in earlier grades but has apparently been forgotten.
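
A minimal illustration of anchoring a percentage to its quantity (the numbers are ours): 15% of 60 is (15/100) × 60 = 9, while 15% of 200 is 30; ‘15%’ on its own has no magnitude until the quantity to which it is applied is specified.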

Conclusion

This article presents some common errors made by high performing candidates in a large-scale study and indicates problems in the conceptual understanding and mathematical skills of these candidates. While teachers would anticipate some of these, it is important to note that the problems are exhibited by the top third of prospective applicants to higher education who wrote the NBT for Mathematics. This group constitutes a large proportion of first-year students in mathematically demanding programmes. The purpose of this article is to raise awareness in both higher and basic education about the type and extent of the problem. It is not the intention to engage in an analysis of all possible interventions that could be put in place.

The diagnostic information provided identifies problems MAT candidates demonstrate regarding some of the essential mathematical concepts and procedures deemed necessary by higher education mathematicians. The interventions suggested in response to the diagnostic information from MAT tests can be of use to the school sector in foregrounding areas where mathematical comprehension is weak. These topics, as well as the related terminology and language, need to be given greater attention in the classroom. It is not the intention of this article to be prescriptive with respect to the suggested interventions; teachers should themselves determine how best to use the diagnostic information from the MAT tests to create learning environments which could be more responsive to the needs of higher education.

Policymakers rather than teachers need to consider that some topics may possibly need to be excluded from the school curriculum, without necessarily detracting from its value, in order to achieve greater understanding of key concepts relevant to higher education.

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships that might have inappropriately influenced them in writing this article.

Authors’ contributions

R.P. and A.D. were responsible for compilation and analysis of all data from the NBTs referred to in the document. C.B. was responsible for the purpose, background, literature review, methodology and analysis of the MAT tests to provide diagnostic information.

References

Adler, J. (2011). Academy of Science of South Africa. Statement issued by the ASSAf Science, Technology, Engineering and Mathematics (STEM) Committee. Addendum to Comment for HESA: The implications for universities of the new mathematics CAPS particularly with respect to admission requirements. Johannesburg: University of the Witwatersrand.

Council on Higher Education. (2013). A proposal for undergraduate curriculum reform in South Africa: The case for a flexible curriculum structure. Pretoria: Council on Higher Education. Available from http://www.che.ac.za/sites/default/files/publications/Full_Report.pdf

Crawford, K., Gordon, S., Nicholas, J., & Prosser, M. (1998). Qualitatively different experiences of learning mathematics at university. Learning and Instruction, 8(5), 455–468.

Davis, Z. (2013). The use of the idea of coherence in descriptions and analyses of school mathematics curricula, textbooks and pedagogy. In Z. Davis, & S. Jaffer (Eds.), Proceedings of the 19th Annual Congress of the Association for Mathematics Education of South Africa (Vol. 1, pp. 35–46). Cape Town: AMESA.

Department of Basic Education. (2008a). Learning programme guidelines for the national curriculum statement for Mathematics for Grades 10–12. Available from http://www.education.gov.za/Portals/0/CD/LPGs2007/LPGMATHEMATICS.pdf

Department of Basic Education. (2008b). Subject assessment guidelines for the national curriculum statement for Mathematics for Grades 10–12. Available from http://www.education.gov.za/Portals/0/CD/SAGs2008/SAGMATHEMATICS.pdf

Department of Basic Education. (2011a). Curriculum and Assessment Policy Statement (CAPS) FET Band Mathematics Grades 10–12. Available from http://www.education.gov.za/Portals/0/CD/NationalCurriculumStatementsandVocational/CAPSFET_MATHEMATICS_GR10-12_Web_1133.pdf

Department of Basic Education. (2011b). Media statement issued by the Department of Basic Education on the Annual National Assessments (ANA). Available from http://www.education.gov.za/Newsroom/MediaReleases

Department of Basic Education. (2013a). National Senior Certificate Examinations 2013 Diagnostic Report, Chapter 10, Mathematics.

Department of Basic Education. (2013b). Report on the Annual National Assessment of 2013. Available from http://www.shineliteracy.org.za/wp-content/uploads/2015/11/Annual-National-Assessment-Report-2013.pdf

Department of Basic Education. (2014a). National Senior Certificate Examinations 2014 Diagnostic Report, Chapter 10, Mathematics.

Department of Basic Education. (2014b). The Annual National Assessment 2014 Diagnostic Report Intermediate and Senior Phases, Mathematics. Available from http://www.education.gov.za/Portals/0/Documents/Reports/2014ANADiagnosticIntermediateandSeniorPhase MathematicsReport.pdf

Department of Basic Education. (2015a). Action plan to 2014: Towards the realisation of schooling 2025. Available from http://planipolis.iiep.unesco.org/upload/South%20Africa/South_Africa_Action_Plan_To_2014.pdf

Department of Basic Education. (2015b). Annual National Assessments. Available from http://www.education.gov.za/Curriculum/AnnualNationalAssessments(ANA).aspx

Department of Basic Education & Department of Higher Education and Training. (2011). Integrated Strategic Planning Framework for Teacher Education and Development in South Africa 2011-2025: Frequently Asked Questions. Available from http://www.education.gov.za/Portals/0/Documents/Reports/ISPFTEDBooklet_FrequentlyAskedQuestions.pdf

Griesel, H. (2006). The context of the National Benchmark Tests Project. In H. Griesel (Ed.), Access and entry level benchmarks: The National Benchmark Tests Project (pp. 1–6). Pretoria: Higher Education South Africa. Available from http://www.universitiessa.ac.za/files/2006_HESA_Access and EntryLevelBenchmarks.pdf

Jennings, J.L., & Bearak, J.M. (2014). ‘Teaching to the test’ in the NCLB era: How test predictability affects our understanding of student performance. Educational Researcher, 20(10), 1–9.

Kahn, P. (2002). Designing courses with a sense of purpose. In P. Kahn, & J. Kyle (Eds.), Effective learning & teaching in mathematics & its applications (pp. 92–105). London: The Institute for Learning and Teaching in Higher Education; The Times Higher Education Supplement, Kogan Page Ltd.

Long, C. (2015, April 24). Beware the tyrannies of the ANAs. Mail & Guardian. Available from http://mg.co.za/article/2015-04-23-beware-the-tyrannies-of-the-anas

NBTP. (2012–2013). National Benchmark Tests Project results – National Report number 1/2013. Cape Town: University of Cape Town.

Parker, P., & Leinhardt, G. (1995). Percent: A privileged proportion. Review of Educational Research, 65(4), 421–481. Available from http://www.jstor.org/stable/1170703

Paton, C. (2009, August 28). University tests. Blueprint for a benchmark. Financial Mail. Available from http://www.financialmail.co.za/fm/2009/08/28/university-tests

South African Democratic Teachers Union (SADTU). (2014). The flaws of national assessments. Draft document. Available from http://www.sadtu.org.za/docs/disc/2014/ana.pdf

Scott, I., Yeld, N., & Hendry, J. (2007). Higher Education Monitor No. 6: A case for improving teaching and learning in South African Higher Education. Pretoria: Council on Higher Education. Available from http://www.che.ac.za/sites/default/files/publications/HE_Monitor_6_ITLS_Oct2007_0.pdf

Shalem, Y., & Sapire, I. (2012). Teachers’ knowledge of error analysis. Johannesburg: SAIDE.

Smith, J.P., diSessa, A.A., & Roschelle, J. (1993). Misconceptions reconceived: A constructivist analysis of knowledge in transition. The Journal of the Learning Sciences, 3(2), 115–163.

Snyman, J. (2013). South Africa Survey 2013. Johannesburg: South African Institute of Race Relations. Available from http://irr.org.za/reports-and-publications/south-africa-survey/south-africa-survey-online-2012-2013/downloads/2013-survey-education.pdf

Spaull, N. (2014, December 12). Assessment results don’t add up. Mail & Guardian. Available from http://mg.co.za/article/2014-12-12-assessment-results-dont-add-up/

Spaull, N., & Taylor, S. (2015). Access to what? Creating a composite measure of educational quantity and educational quality for 11 African countries. Comparative Education Review, 59(1), 133–165. Available from http://www.jstor.org/stable/10.1086/679295

Yen, W., & Fitzpatrick, A.R. (2006). Item response theory. In R.L. Brennan (Ed.), Educational measurement (4th edn., pp. 111–153). Westport, CT: Greenwood/Praeger.

Footnotes

1. The ANAs are standardised national assessments for languages and mathematics in the senior phase (Grades 7–9), intermediate phase (Grades 4–6) and in literacy and numeracy for the foundation phase (Grades 1–3). The question papers and marking memoranda (exemplars) are supplied by the national DBE and the schools manage the conduct of the tests as well as the marking and internal moderation (DBE, 2015b).

2. In the NBTs, questions are referred to as ‘items’.


 
