Original Research

Mathematical errors made by high performing candidates writing the National Benchmark Tests

Carol A. Bohlmann, Robert N. Prince, Andrew Deacon
Pythagoras | Vol 38, No 1 | a292 | DOI: https://doi.org/10.4102/pythagoras.v38i1.292 | © 2017 Carol A. Bohlmann, Robert N. Prince, Andrew Deacon | This work is licensed under CC Attribution 4.0
Submitted: 16 March 2015 | Published: 25 April 2017

About the author(s)

Carol A. Bohlmann, Centre for Educational Testing for Access and Placement, University of Cape Town, South Africa
Robert N. Prince, Centre for Educational Testing for Access and Placement, University of Cape Town, South Africa
Andrew Deacon, Centre for Innovation in Learning and Teaching, University of Cape Town, South Africa

Abstract

When the National Benchmark Tests (NBTs) were first considered, it was suggested that the results would assess entry-level students’ academic and quantitative literacy, and mathematical competence; assess the relationships between higher education entry-level requirements and school-level exit outcomes; provide a service to higher education institutions with regard to selection and placement; and assist with curriculum development, particularly in relation to foundation and augmented courses. We recognise that there is a need for better communication of the findings arising from analysis of test data, in order to inform teaching and learning and thus attempt to narrow the gap between basic education outcomes and higher education requirements. Specifically, we focus on the identification of mathematical errors made by those who performed in the upper third of the cohort of test candidates. This information may help practitioners in both basic and higher education.
The NBTs became operational in 2009, and data have been systematically accumulated and analysed since then. Here, we provide some background to the data, discuss some of the issues relevant to mathematics, present some of the common errors and problems in conceptual understanding identified in data collected from the Mathematics (MAT) tests in 2012 and 2013, and suggest how these findings could be used to inform mathematics teaching and learning. While teachers may anticipate some of these issues, it is important to note that the identified problems are exhibited by the top third of those who wrote the Mathematics NBTs. This group will constitute a large proportion of first-year students in mathematically demanding programmes.
Our aim here is to raise awareness, in higher education and at school level, of the extent of these common errors and problems in conceptual understanding of mathematics. We cannot analyse all possible interventions that could be put in place to remediate the identified mathematical problems, but we do provide information that can inform choices when planning such interventions.

Keywords

National Benchmark Tests (Mathematics); diagnostic information; teaching and learning; analysis of test results; identification of errors
