About the Author(s)


Surette van Staden
Department of Science, Maths and Technology Education, Faculty of Education, University of Pretoria, South Africa

Puleng Motsamai
Northern Cape Education Department, Curriculum and Assessment Services, South Africa

Citation


Van Staden, S., & Motsamai, P. (2017). Differences in the quality of school-based assessment: Evidence in Grade 9 mathematics achievement. Pythagoras, 38(1), a367. https://doi.org/10.4102/pythagoras.v38i1.367

Original Research

Differences in the quality of school-based assessment: Evidence in Grade 9 mathematics achievement

Surette van Staden, Puleng Motsamai

Received: 19 Feb. 2017; Accepted: 21 July 2017; Published: 31 Oct. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This non-experimental, exploratory and descriptive study, using a qualitative case study approach, aims to investigate whether there is evidence of variation in the quality of school-based assessment (SBA) in Grade 9 mathematics. Participants were purposefully selected from five schools in a district in the Northern Cape in South Africa. After questionnaires were completed, individual face-to-face semi-structured interviews were conducted with participants from the participating schools. Documents were collected and analysed to corroborate or contradict data from the questionnaires and interviews. Lack of adherence to policy, variation in classroom practice and inconsistent monitoring and moderation practices were identified as themes pointing to possible sources of variation in SBA. An analysis of the interviews and document analysis revealed that most of the Heads of Department and principals lacked in-depth knowledge and understanding of their roles and functions in making SBA reliable, credible and valid. This was due not only to a lack of capacity to perform such functions, but also to a lack of effective induction and training by the district and provincial offices. Findings from the current study point to the necessary role that a periodic evaluation of SBA may play to ensure its effectiveness, credibility and reliability as part of successful assessment practices in a mostly developing context.

Introduction

Assessment is at the heart of the teaching and learning process (Chisholm, 2004). At the dawn of democracy in South Africa, the Department of Education (DOE) replaced traditional assessment methods such as tests, examinations and year marks with continuous assessment in order to redress the focus on traditional examinations of the past. Continuous assessment comprises school-based assessment (SBA) and examinations. SBA encompasses all forms of assessment conducted by the teacher, using assessment tasks that teachers develop themselves (Black & Wiliam, 2010; Poliah, 2010). Gipps (1994) is of the view that SBA has the potential to be a more valid form of assessment as it covers a wide range of curricular outcomes. However, the subjective nature of SBA weakens its design, exposing it to lower levels of reliability and reducing the validity and credibility of learner performance (Poliah, 2010; Reyneke, Meyer & Nel, 2010).

South African learners across all grades continue to perform poorly in mathematics when compared to their counterparts globally, nationally and regionally. In international studies such as the Trends in International Mathematics and Science Study, South Africa performs among the poorest of the participating countries in mathematics. Similarly, national assessments such as the Annual National Assessment (ANA) show very low performance in Grade 9 mathematics specifically.

The main research question that guided this study was:

What evidence is there in teachers’ classroom assessment practices that points to possible variation in the quality of SBA?

In order to address this question, issues of adherence to policy, classroom practice, monitoring and moderation practices and learner performance in SBA and external assessments will be discussed.

Problem statement

Post 1994, South Africa made major changes to assessment policies and practices. Traditional assessment practices such as tests and examinations were re-conceptualised to accommodate types of continuous assessment. Continuous assessment therefore currently comprises SBA and internal examinations. Yet, Fleisch (2008) points out that assessment was initially underdeveloped and did not form a key element of the initial training and support within education when the new curriculum was implemented. Kanjee (2007) further elaborates that assessment was the most neglected aspect of government’s efforts to transform the education system, and was the area that received the most criticism. The South African DOE then presented assessment policies and practices in the form of guidelines. Because these policies take the form of guidelines, assessment is likely to be interpreted and applied differently by teachers of the same subject and the same grade, which in this case is Grade 9 mathematics. Additionally, there are currently no common external assessments in grades below Grade 12 in the South African education system.

The problem with the weighting given to SBA lies in its quality, reliability, validity and credibility. Long, Dunne and De Kock (2014) confirm that there are no measures and systems in place in the South African education system to ensure that SBA is reliable, valid and credible in the General Education and Training (GET) band. Although a statutory body (UMALUSI) exists to ensure quality assurance, it does so at Grade 12 level only. There are no agreed standards across provincial DOEs, across districts within the same provincial DOE, or across schools within the same district (Poliah, 2003). From the work of Poliah (2010), it is evident that there is room for variation in the scoring of assessment tasks among teachers, particularly when the assessment tasks are not the same.

Mathematics education in the South African context

A significant amount of research has taken place in mathematics content and teaching, internationally and in South Africa (Dunne et al., 2002; Mullis et al., 2011; Setati, 2002; Shalem, Sapire & Sorto, 2014). However, mathematics education research in South Africa has mainly focused on curriculum and pedagogy, and has been dominated by a cognitive focus on how learners acquire mathematical understanding. Post 1994, the introduction of Curriculum 2005 saw mathematics being replaced with the learning area Mathematical Literacy, Mathematics and Mathematical Sciences (DOE, 2002). Mathematical Literacy, Mathematics and Mathematical Sciences represented a major shift in the philosophy of mathematics and mathematics education, and thus demanded a major philosophical shift of both teachers and learners (Graven, 2002). Graven (2002) identified three major shifts:

  • The approach to teaching mathematics: emphasis is placed on a constructivist, learner-centred and integrated approach to the teaching and learning of mathematics. This way of teaching moves away from the performance-based approach to the competence-based approach.
  • The nature and content of mathematics.
  • The role of mathematics education.

The rationale for Mathematical Literacy, Mathematics and Mathematical Sciences is focused on constructing mathematical meaning in order for learners to understand and make use of that understanding. Specific outcomes (SOs) for Mathematical Literacy, Mathematics and Mathematical Sciences indicate changes in the content of school mathematics. However, Vithal and Volmink (2005) argue that Mathematical Literacy, Mathematics and Mathematical Sciences poses a serious challenge in terms of both content and pedagogy, which are essential foundational competencies. The ongoing implementation challenges in the Revised National Curriculum Statement (NCS) (DOE, 2002) resulted in the development of the Curriculum and Assessment Policy Statements (CAPS). The rationale for the implementation of CAPS addressed four main concerns, namely: (1) complaints about the implementation of the NCS, (2) teachers who were overburdened with administrative duties, (3) different interpretations of curriculum requirements, and (4) the underperformance of learners (Moodley, 2013).

Moreover, learning areas are now known as subjects. In mathematics for the Senior Phase (Grades 7 to 9), there is too much content, combined with a reduced time allocation. In the NCS, the time allocated for mathematics for Grades 7 to 9 was five hours of contact time; however, this has been reduced to four and a half hours in CAPS (Department of Basic Education [DBE], 2012). A conspicuous feature is the ‘linear progression’ of the curriculum, which means that certain topics and concepts must have been dealt with in previous grades before teachers can teach new concepts in the present grade. This approach suggests that sequencing and pacing pose a threat in the classroom should learners not have been taught those concepts in previous grades. It also means that the educator has to teach the specific content that was supposed to have been taught previously before proceeding with what has been prescribed for that particular lesson or week.

Several studies have reported a number of shortcomings in the teaching and learning of mathematics in South Africa. One of the challenges, according to Makgato and Mji (2006), is that not all schools in the South African education system offer mathematics in the Further Education and Training (FET) band. Moreover, many of those schools that offer mathematics do not have the necessary facilities and equipment to provide effective mathematics teaching and learning. The current picture depicts a South Africa where success in school mathematics is not randomly distributed across the population, with some groups systematically doing better than others (Reddy et al., 2012). Adler (2002) explains that mathematics needs to become more meaningful for learners, and one way of establishing meaning is by embedding mathematical problems in real world contexts. This practice would invite more learners to continue with mathematics, and thus reduce the inequalities in mathematics performance that we currently see when comparing learners from varying socioeconomic backgrounds.

There are a number of long standing, unresolved and unaddressed questions where mathematics instruction and assessment are concerned, as stated by Schoenfeld (1992). These challenges may arise for the following reasons:

  • Learners do not know which needs are met by the mathematics topics introduced or how these are linked to known concepts.
  • Links to the real world are weak, generally too artificial to be convincing, and applications thereof are stereotypical.
  • There are few experimental practices and modelling activities provided.
  • Learners have little autonomy in their mathematical work and often merely reproduce activities. (Adapted from United Nations Educational, Scientific and Cultural Organisation, 2012, p. 21).

There is a body of evidence that suggests that one of the challenges in mathematics education is that mathematics teachers teach mathematical concepts in isolation. Simply put, mathematical concepts are regarded as ‘stand-alone’ concepts and are taught separately from each other. More than two decades ago, Schoenfeld (1992) recommended to policymakers that lessons should come in large coherent chunks, and take between two and six weeks to teach. Furthermore, lessons should be motivated by meaningful problems and be integrated with regard to subject matter, for instance the simultaneous use of algebra and geometry, rather than having geometry taught separately from algebra. This strategy would dissuade teachers who do not feel comfortable teaching certain topics and concepts from skipping such topics and concepts. Geometry in the GET band, in particular, as indicated by Usiskin (2012), is a section of the curriculum that mathematics teachers do not feel confident teaching. There is a small body of research that suggests that learners in the Senior Phase (Grades 7–9) are not taught the geometry needed for the FET (Grades 10–12) mathematics curriculum. In the FET phase, geometry was previously optional, and higher institutions of learning, universities for instance, did not include this section when calculating admission points. Currently, geometry is a compulsory component of mathematics in the FET band, and, as such, learners in the Senior Phase are introduced to the content area Space and Shape in order to prepare them for the FET band.

School-based assessment

School-based assessment is a process, conducted by the teacher, of measuring learners’ achievements against defined outcomes (Maile, 2013). Many researchers refer to SBA as classroom assessment, formal assessment or formative assessment. As SBA is an ‘engine of educational change’ that should inform teaching, it forms an integral component of teaching and learning in the classroom. SBA is practised in many countries; however, UMALUSI found that teachers the world over experience challenges in finding their roles in assessment (UMALUSI, 2010).

School-based assessment has its own challenges, such as the fact that different schools are not equally effective and that teachers’ subjective judgements are frequently accused of being biased. In the South African context, the weighting of SBA varies considerably across the education system (as stipulated in the National Protocol for Assessment for Grades R to 12, DBE, 2011a), which poses additional challenges, as will be discussed in later sections.

School-based assessment is further made up of informal and formal assessments (DBE, 2011a). Informal assessments are mainly formative and prepare learners for formal assessment. SBA’s informal assessment role is to ensure, among others, that basic mathematical concepts are mastered to improve teaching and learning. Regular informal activities such as homework and classwork, coupled with regular feedback, provide information to learners and teachers, and may help the teacher to gauge what learners’ performance will be in the formal assessment. Learners should be familiar with the type of tasks used for formal assessment and should also be given the opportunity to master mathematical concepts (Davison, 2007).

Quality assurance in school-based assessment

School-based assessment, when defined as teachers’ own assessment tasks in the classroom, is an important tool, but when it serves as a component of national educational benchmarking, it needs to be rigorously controlled and quality assured (Poliah, 2014). Quality assurance in SBA can be conceptualised as all of the quality control measures put in place in keeping with the required standards (Adler, 2002). Maxwell, Field and Clifford (2006) explain that these quality control measures are important to address issues of validity, reliability, fairness and authenticity, as well as the quality of the marking of these assessment tasks. In Grade 9 mathematics, the forms of assessment available are tests and internal examinations, investigations, assignments and projects (DBE, 2013; World Bank, 2008). The latter three of these assessment tasks are completed by learners under uncontrolled conditions, for example at home or even at a library.

According to the European Network for Quality Assurance (Daniel, Kanwar & Uvalić-Trumbić, 2009), institutions should have policies and procedures in place for quality assurance; South Africa is no exception. In the South African context, the DOE developed mechanisms to address quality assurance in SBA after the reliability and validity thereof were questioned. It has to be noted that while efforts are made to put policies and acts in place, these do not ensure compliance or standardisation across the system. The DOE promulgated a number of policies and acts, such as the National Protocol on Assessment Grades R–12, the General and Further Education and Training Quality Assurance Act No. 58 of 2001, Curriculum 2005, the Revised NCS (Grades R–9), the Assessment Guidelines in the GET band (Grades R–9), Common Assessment Tasks in Grade 9 of the GET band, and the CAPS. However, these documents provide inadequate guidelines and are silent on the internal quality assurance processes that schools need to apply to ensure standardisation among schools (DBE, 2011a; Maile, 2013; Wilmot, 2005). Thus far the focus in the system has been on UMALUSI, as a statutory body, to ensure that assessments are quality assured at the exit points of the system. In the South African education system, the exit points are at the end of the GET and FET bands, namely Grades 9 and 12 respectively. UMALUSI (cited in Poliah, 2014) reports that there is huge disparity in the quality of SBA from one school to another across education districts at Grade 12 level. The significance of the current study is that it could point to similar disparities in Grade 9 mathematics, thereby extending UMALUSI’s findings beyond the evidence found for Grade 12 learners. Findings from the current study aim to inform the need for a periodic evaluation of SBA to ensure its effectiveness, credibility and reliability as part of successful assessment practices in a mostly developing context.

Adler (2002) finds that a lack of assessment guidelines leads to variations, which may include:

  • The marking standards of teachers (which may be too high or inflated) (Maile, 2013; Poliah, 2010).
  • Types of uncontrolled assessment tasks such as investigations, assignments and projects in mathematics. Poliah (2010) highlights the fact that some teachers use homework as part of SBA.
  • The degree of guidance and assistance given to learners. Torrance and Pryor (1998) are of the opinion that learners are strategically guided with instructions and assistance for deeper understanding and discussion. This is done to close the gap between their current level of understanding and the desired goal.

Research design and methodology

This study was exploratory, non-experimental, descriptive and interpretative in nature and formed part of a larger study (Motsamai, 2017). The approach to empirical research adopted for this study was that of a qualitative case study. This approach was chosen because the aim was to capture the in-depth views of the participants in order to make meaning and draw conclusions (Guba & Lincoln, 1994; Onwuegbuzie & Leech, 2007).

The participants’ questionnaires were mainly used to ascertain participant profiles, backgrounds and experience. In some cases, the questionnaires were incorrectly completed; however, these were corrected together with the participants. Face-to-face individual semi-structured interviews were conducted, recorded and analysed. Documents collected, such as the Grade 9 mathematics SBA tasks with their memoranda, and moderation and monitoring reports, were used to triangulate the data obtained from the questionnaires and interviews in order to corroborate or contradict those data.

Participants and study context

The larger study (Motsamai, 2017) was conducted in five different schools that offer Grade 9 mathematics in the John Taolo Gaetsiwe district in the Northern Cape province of South Africa. The schools were drawn from rural, semi-rural, township and former Model C schools offering Grade 9. Schools are sparsely scattered and distant from one another, an important characteristic of the Northern Cape. In each of the five schools, a Grade 9 mathematics teacher, the mathematics Head of Department (HOD) and the school principal were selected for participation.

Tables 1–3 summarise the profiles of the teachers, HODs and school principals as obtained from the questionnaire data.

TABLE 1: Profiles of the participating mathematics teachers.
TABLE 2: Profiles of the participating mathematics Heads of Department.
TABLE 3: Profiles of the participating school principals.

The sample included five teachers, five HODs and five school principals from the participating schools. Here, the authors will additionally report on the qualifications and mathematics teaching experience of the participants. Schools were named School A to E, with teachers A to E, and HODs A to E.

All participants appear to be qualified to be appointed as teachers. However, the data show that only two teachers are adequately qualified to teach mathematics in Grade 9. The remaining three teachers hold three-year Senior Primary Teachers’ Diploma and Secondary Teachers’ Diploma qualifications. The Senior Primary Teachers’ Diploma does not include specialisation in any school subject. According to Spaull (2011), the Senior Primary Teachers’ Diploma is a ‘Primary phase’ qualification with specialisation in either mathematical literacy (40%) or mathematics (36%). However, these teachers obtained an Advanced Certificate in Education with specialisation in natural science, technology and mathematics. The teacher with a Secondary Teachers’ Diploma qualification specialised in physical science.

One HOD has a Secondary Teachers’ Diploma with specialisation in mathematics and teaches mathematics in Grades 10 to 12. A striking feature that emerged from the data is that of an appointed HOD who does not possess any mathematics qualification and has never taught mathematics in his teaching career. Another HOD possesses a Senior Primary Teachers’ Diploma qualification only and did not teach mathematics at the time of data collection. In School E, the HOD obtained an honours degree with a management qualification. According to the data presented above, only one HOD reported being highly qualified in mathematics. This HOD teaches Grade 12 mathematics and additionally conducts afternoon mathematics classes for learners in Grades 4 to 12.

Data on the mathematics qualifications of school principals were not sought as these data were not relevant to this study. Teachers’ experience of teaching mathematics ranges from 5 months to more than 10 years. It should also be noted that most of the management staff are acting in their positions and do not hold them permanently. There were exceptions: in School C, for example, the permanently employed principal and HOD have more than 10 years of managerial experience.

Methods of qualitative data collection

The research data in this investigation are drawn from three main sources: questionnaires, semi-structured interviews and document analysis.

Questionnaires

Questionnaires were chosen specifically for this study because the responses would determine whether the participants’ biographical data had any association with their implementation of SBA and assessment policy and practice. The questionnaires were completed by all of the participants in their own time prior to the interviews and were used to obtain biographical data. Biographical data included information such as gender and age, languages used for assessment, the primary language of participants and learners, participants’ mathematics qualifications and experience, and training on assessment principles, policies and practice, among others. There were three sets of questionnaires: one for the teachers, one for the HODs and one for the school principals.

Interviews

Face-to-face semi-structured interviews were conducted after the questionnaires had been completed. Interviews were conducted mainly in English; however, the use of participants’ primary language was allowed to ensure that participants could fully express their ideas and opinions. The procedures to be followed were explained to participants, and all indicated that they had no objection to being recorded.

In the interviews, participants were probed to explain their interpretations, experiences and insights with regard to the concepts of SBA, quality, quality assurance, moderation and learner performance in SBA. Heads of Department and principals were asked about their roles in ensuring quality and credible SBA tasks and learner performance. All interviews were audio-recorded, transcribed and stored in a safe place.

Document analysis

Document analysis was required in this study so that the data in the key documents could be compared, examined and interpreted in order to elicit meaning and gain understanding (Creswell, Hanson, Clark & Morales, 2007; Taole, 2013). Documents such as the Grade 9 mathematics SBA tasks with their assessment tools (marking tools) and moderation reports were collected and analysed in order to corroborate or contradict data obtained from the questionnaires and interviews (McMillan & Schumacher, 2010; Mouton, 2001). Table 4 illustrates the different data sources that are applicable to each of the themes that are discussed for the purposes of this study.

TABLE 4: Themes and supporting data sources.
Ethical considerations

Ethical clearance (SM14/05/01) and permission to conduct the study were obtained from the university’s ethics committee. Permission to conduct the research was also granted by the Northern Cape Department of Education and the schools where the research was conducted. Since one of the authors is a district official, the study was conducted in a different district in order to minimise power over the participants. All the participants were informed of the purpose and rationale of the study, namely that we wanted feedback in order to understand their views, experiences and perceptions of the quality of SBA in Grade 9 mathematics. Participants were also informed that participation was voluntary and that they could withdraw from the study if they wished to do so. Participants who agreed to take part in the study were assured of anonymity and confidentiality. Schools’ and participants’ anonymity was maintained by the use of pseudonyms such as School A to E, Teacher A to E, HOD A to E and Principal A to E. All data collected were kept in a secure place.

Findings and discussion

In order to address the research question, evidence of variation in SBA will be described in terms of adherence to policy, classroom practice, monitoring and moderation, and learner performance, as themes that strongly emerged from the semi-structured interviews.

School-based assessment in terms of adherence to policy

Assessment in the South African context comprises SBA and the end of year formal examinations. The National Protocol on Assessment, the National Policy Pertaining to the Programme and Promotion Requirements of the NCS and the CAPS further state that for the grades below Grade 12, the end of year examinations are to be set internally. The National Protocol on Assessment requires every subject teacher to submit an annual assessment plan to the HOD and the school management team in order to draw up a school assessment plan (DBE, 2013). The assessment plan should assist in the smooth running of the assessment activities and also in regulating SBA. In addition, the National Protocol on Assessment requires that learners and their parents receive the term’s assessment plan at the beginning of each term to improve parental involvement. However, evidence from the interviews points to the fact that none of the participating schools had assessment plans, except one school, which appeared to have cycle tests in place. Great variation in adherence to this policy is observed across the participating schools in this study.

The weighting of SBA across the grades and subjects is stipulated in the National Protocol on Assessment for Grades R to 12 (DBE, 2011a). This protocol takes the form of guidelines, which are open to varied interpretations. The policy states that SBA in the GET band carries more weight than in the FET band. The policy further divides the weighting within the GET band: for Grades 1 to 8 the SBA weighting is 100%, while for Grade 9 the SBA weighting is 75% and the examination weighting is 25%. The weighting also varies across subjects. Mathematics and home language carry the most weight, as a learner has to obtain a minimum of 40% (Level 3) in mathematics (DBE, 2011b) in order to be promoted to the FET band. Although the assessment policy provides clear guidelines regarding the number of assessment tasks and the forms of assessment to be used, it is silent on the quality of these tasks. The subject educator determines what content, skills and knowledge to assess in mathematics, and how to assess them. The quality of these assessment tasks therefore depends on how each individual Grade 9 mathematics educator interprets the guidelines. Given this weighting and the variable quality of the tasks, the resulting mathematics percentage or level may paint a misleading picture for parents and learners, as it may not be a true reflection of mathematical knowledge, skills and understanding.
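To make the implication of this weighting concrete, the following purely illustrative calculation uses the Grade 9 weighting stated above (SBA 75%, examination 25%) with hypothetical marks that do not come from the study:

  Final mark = (0.75 × SBA mark) + (0.25 × examination mark)
  For example: an SBA mark of 60% and an examination mark of 20% give (0.75 × 60) + (0.25 × 20) = 45 + 5 = 50%

Under this weighting, a learner who fails the examination component outright can still comfortably exceed the 40% minimum required in mathematics, which illustrates how a generous or poorly controlled SBA mark can dominate the reported result.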

In terms of the National Policy Pertaining to the Programme and Promotion Requirements (DBE, 2011b), where the promotion and progression requirements of learners are stipulated, there is evidence of variation in interpretation and implementation. This policy stipulates that learners should achieve a minimum of Level 3 (40% to 49%) in mathematics and a minimum of Level 4 (50% to 59%) in home language in order to be promoted to the next grade. These levels are made up of the SBA mark (40%) and the end of year examination mark (60%). This study reveals that the focus in schools is more on learners’ mathematics mark than on their home language mark (Motsamai, 2017). According to the National Policy Pertaining to the Programme and Promotion Requirements, learners who do not meet the minimum levels for promotion should be progressed to the next grade on condition that such learners have spent four years in the phase, which is known as ‘the age cohort’. Progressions should only be approved by the circuit manager; however, the evidence presented in this study shows that, prior to the circuit manager progressing learners who did not meet the minimum requirements, the mathematics teachers had already inflated the learners’ scores. The recording of assessment scores is, in many cases, inflated. One participant acknowledged that none of the previous Grade 8 learners, who at the time of the study were in Grade 9, had achieved Level 3. The participant further explained that the Grade 8 mathematics scores were adjusted to a Level 3 by the Grade 8 mathematics teacher. This practice amounts to non-adherence to the National Policy Pertaining to the Programme and Promotion Requirements. Some of the participating school principals admitted that they did not fully understand the National Policy Pertaining to the Programme and Promotion Requirements; as a result, they had varying interpretations and implementations of the policy. This practice could give learners and their parents the false impression that the learners have met the minimum promotion requirements.
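A similar hypothetical calculation, using the 40:60 composition of the final mark described in this paragraph and marks invented purely for illustration, shows how inflating an SBA mark can move a learner across the Level 3 threshold in mathematics:

  Final mark = (0.40 × SBA mark) + (0.60 × examination mark)
  Recorded SBA mark 45%, examination mark 30%: (0.40 × 45) + (0.60 × 30) = 18 + 18 = 36% (below Level 3)
  Inflated SBA mark 60%, same examination mark: (0.40 × 60) + (0.60 × 30) = 24 + 18 = 42% (Level 3 reached)

In other words, a relatively modest adjustment to the internally assessed component is enough to change a promotion outcome, which is consistent with the inflated scores reported above.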

Variation of classroom practice

Teachers are given greater responsibility for designing quality SBA tasks. However, the guidelines on how to develop quality, reliable, credible and valid SBA mathematics tasks are problematic, as they are largely generic in nature with limited specification for mathematics. From their responses, it appeared that the participants were not adequately trained to develop quality SBA tasks. Coupled with inadequate academic qualifications, it stands to reason that the development of assessment tasks is well beyond their capabilities, especially in the absence of support. As the guidelines are open to interpretation and implementation, evidence that emerged from the data suggested that different teachers in different schools developed Grade 9 mathematics SBA tasks that varied in quality.

According to CAPS, the different forms of assessment in mathematics include a test and an assignment (Term 1), and a test, a mid-year examination and an investigation (Term 2). However, most teachers and HODs admitted that they do not know the difference between the different forms of assessment in mathematics. Teacher A said he finds it very challenging to develop an assignment and an investigation. As a result, there was no evidence in any of the participating schools of any assignment or investigation being administered. Learners in the five schools were assessed through only one form of assessment, namely tests. This evidence could point to teachers’ lack of knowledge in devising alternative forms of assessment, or to a lack of adequate in-service training and support to empower teachers to develop a repertoire of assessment skills.

Another finding that emerged from the study was the difficulty of interpreting and implementing Bloom’s revised taxonomy of cognitive levels. This finding is in line with Long et al. (2014), who note that Bloom’s taxonomy of cognitive levels is problematic to interpret and implement. Analysis of the SBA tasks that were collected showed that most teachers could only test learners on Level 1 and Level 2 questions. Teacher A, Teacher B and Teacher C added a few Level 3 and Level 4 questions from past ANA question papers; however, such questions were taken verbatim. Varying explanations emerged from the responses of the participants. For instance, Teacher A explained that he could not differentiate between Level 3 and Level 4 questions, while Teachers B and C believed that adding questions from past ANA question papers would standardise their SBA tasks. However, the lack of expertise in the development of SBA tasks was demonstrated by the low levels of cognitive demand and poor questioning.

The CAPS does not provide clear SBA task specifications. As such, the policy is restricted to prescribing the uniformity and weighting of the forms of assessment.

Monitoring and moderation

Monitoring and moderation are two processes that run concurrently to ensure quality assurance. Monitoring has always been done by the HODs in the form of class visits to ensure curriculum coverage, as well as to ensure that the assessment programme is unfolding according to plan. However, when probed about how monitoring is done and how frequently, all participants admitted that monitoring is not conducted at their schools. Although the teacher unions have placed a moratorium on class visits, informal and impromptu class visits are conducted at School C. HODs also cited the many roles they play and their heavy workloads as factors that inhibit the process of monitoring. This study therefore revealed that monitoring is, in effect, non-existent.

Moderation is one of the most important processes in ensuring the quality, credibility, reliability and validity of assessment, which in turn results in improved learner performance. Heads of Department should use a moderation protocol obtained from the provincial DOE. Evidence from the documents obtained and analysed suggests that HODs were confused in their use of the monitoring and moderation tools. While some HODs seem to use both the monitoring and moderation tools to moderate SBA tasks, others use either the monitoring or the moderation tool for moderation. All HODs claimed that they never received any training on moderation. They added that the Northern Cape DOE district officials gave them the moderation protocol document without any training. HOD B seemed frustrated and confused about the origin of the moderation protocol, as she received it from a colleague who also did not know its origin. The moderation reports of School D could not be collected, as the teacher who had been assigned to moderate mathematics SBA tasks was not available at the time of data collection due to his studies, and Teacher D did not possess a copy of such a report. In School C, no moderation reports were collected as the process unfolds differently there. One HOD admitted to not conducting pre-moderation, in which the SBA tasks are moderated before they are written; post-moderation is conducted by means of the marking. In one case, the HOD leads mathematics teachers through a marking process, as she has rich experience in marking and moderating NCS mathematics. In Schools A, B and E, the moderation protocol is used as a checklist, to check spelling and grammatical errors only. When perusing some of the SBA tasks, glaring errors were found, such as Grade 7 content being covered and the incorrect use of mathematical symbols. There were neither constructive comments nor follow-up on any verbal comments. HODs’ lack of expertise and experience may have contributed to the lack of guidance in terms of producing quality assessment tasks.

Moreover, the monitoring tool had been used as a substitute for the moderation protocol. HOD A claimed that pre-moderation is done hurriedly: teachers request his signature and the school’s stamp without him going through the SBA tasks. Most HODs agreed that SBA tasks are submitted without a memorandum. There were also inconsistencies in terms of post-moderation. Teachers claimed that they themselves select the marked scripts to be moderated. In some instances, teachers confessed that, due to exhaustion, pressure and large classes, they do not mark all learners’ SBA tasks. The study found that moderation is not rigorous and is inconsistent. The study also revealed that learner performance is not a true reflection of learners’ potential: evidence in the recording sheets shows that learner SBA marks were tampered with and inflated (Motsamai, 2017, p. 165).

Learner performance in school-based assessments

School-based assessments are developed and marked by subject teachers at school or classroom level. In almost all of the schools selected for the study, the learners had been performing relatively well in their SBA compared to external assessments. When probed about the reason for the higher performance of learners in the SBA, the participants gave varied reasons. It would appear that teachers often explained questions to their learners during tests, which may have led the learners to the answers. The HOD at School B said, ‘teachers are explaining questions … telling them what the question wants’. Teacher A recounted a similar experience: ‘teachers explain questions to the learners in class’ (Motsamai, 2017, p. 126).

The forms of classroom-based assessment associated with mathematics made it appear that the learners performed well. Most of the forms of assessment, such as assignments, investigations and projects, were done under uncontrolled conditions and, in some cases, in groups. The principal at School E expressed his views in saying, ‘good performance because of group work like assignments, assistance and all the like’ (Motsamai, 2017, p. 126). Principal E further elaborated that, ‘with the help of the parents, because some of the work learners are doing at home and parents will assist and that it’s sometimes higher’ (Motsamai, 2017, p. 126).

It seems that the high learner performance was often due to limited curriculum coverage, with teaching and assessment confined to certain topics with which the teacher and learners may have felt comfortable, and to the fact that a concept that learners proved to understand well was repeatedly asked during tests. The HOD at School B, for instance, stated that:

The teachers are asking the same questions. You find that question 1 is the same as question 2 and is based only on one concept. A lot of marks come from one concept. (Motsamai, 2017, p. 126)

The teacher at School C attributed the high learner performance at her school to the fact that her learners were familiar with her style of questioning.

At School A, this familiarity with the teacher’s style of questioning did not apply, as SBA was handled differently. The teacher at School A reported that he drew many of his test questions from past ANA question papers and refused to explain questions to his learners. In School C, the school principal scrutinised all of the SBA marks and compared them to the examination marks. According to the principal, if there was a wide variation between the SBA mark and the examination mark, the teacher was called in to explain how the wide variation had occurred (Motsamai, 2017, p. 127). The principal further elaborated that the variation between the two sets of marks was usually 5% or less. As a result, although the learners’ SBA performance was higher than their examination performance, this gap was kept to a minimum. This practice might be associated with the fact that a few of the staff members at the school, including the mathematics HOD, were involved in the NCS marking processes and were therefore able to filter this knowledge down to other grades (Motsamai, 2017, p. 127).

There was overall agreement among the participants that their learners performed better in SBA because the standard of the SBA, as well as the quality of the questions, was much lower than that of external assessments.

The participants hold the view that good performance in mathematics at school level is a result of SBA questions being of a lower quality than those in external assessments (such as the ANAs). Based on this evidence, the reliability, credibility, validity and quality of SBA are questionable. In 2014, only 9.6% of Grade 9 mathematics learners in the Northern Cape achieved acceptable levels in the ANAs. In the John Taolo Gaetsiwe district, where this study was conducted, 9.3% of Grade 9 learners achieved acceptable levels, which is below the national benchmark of 10.4%. When asked about their learners’ performance, the participants admitted that their learners were not performing well in the ANAs (Motsamai, 2017, p. 130). While percentage comparisons across the ANA results are not recommended, the patterns observed in the Northern Cape provide some indication that good performance in SBA tasks can be misleading.

The ANA was introduced as a national measurement tool by the DBE in 2011 and 2012 for Grades 1 to 6 and Grade 9 respectively, as outlined in the education sector plan, Action Plan to 2014: Towards the Realisation of Schooling 2025 (DBE, 2012). The main purpose of the ANA is to enable a systemic evaluation of educational performance, through which learners’ skills and achievement may be measured. These nationally standardised assessments measure the skills and knowledge that learners are expected to have acquired as a result of teaching and learning based on the mathematics and languages curricula.

It would appear that most of the participants shared Pournara’s (2015) observation regarding the varying difficulty of the ANA question papers over the years. The HOD at School C confirmed this, stating that the ‘2013 ANA question paper was a bad, bad one, but 2014, it was a little bit better’ (Motsamai, 2017, p. 130). The principal of School A, in contrast, complained that the 2014 mathematics ANA ‘was a disaster. In English they are performing, but in maths… it was horrible’ (Motsamai, 2017, p. 128). Moreover, the teacher at School E expressed her view that ‘ANA 2014 was the easiest’ (Motsamai, 2017, p. 128).

The teacher at School A reported that only one learner passed the 2014 mathematics ANA, which was corroborated by the HOD at School A, who expressed his anger in saying, ‘no learner passed; 0.1% … round it off, it is 0%!’ (Motsamai, 2017, p. 128). Additionally, at School D, the teacher lamented the fact that ‘with ANA, it was very, very bad … no one passed. It was 0%’ (Motsamai, 2017, p. 128).

The teachers in Schools A and D only found out after the fact that learners’ performance in mathematics was dismally low, because one had not been teaching at the school at the time and the other had been on sick leave for a long duration. When asked about the reasons for the poor performance, the participants offered varied explanations for the poor mathematical performance in the Grade 9 ANAs. However, all of the participants were unanimous that the standard of the questions in the ANA was too high. The HOD at School C had strong feelings about the ANAs: ‘ANA is too difficult. There’s a question that is, according to my knowledge, is not part of the syllabus … they are asking them about exponents of Grade 11’ (Motsamai, 2017, p. 129). The teacher at School B further added, ‘our learners are scared of any papers with the departmental logo’ (Motsamai, 2017, p. 129).

In four of the schools, the general challenge in answering ANA questions was language. The participants found that learners who did not speak the language in which they were tested tended to have problems interpreting the mathematics questions. The principal at School B stated, ‘there is nothing wrong with ANA, it’s just that our learners cannot interpret the questions’ (Motsamai, 2017, p. 129). The HOD at School B expressed his frustration:

It is the language problem … the standard of language is too high … learners do not understand the language. Reading is a problem. With the word sums, out of 30 learners, at least two will get 30%. (Motsamai, 2017, p. 129)

Difficulties arising from differences between learners’ home language and the language of the test seem to be exacerbated by the complexity of mathematical language, which Grade 9 learners have not mastered either.

The teacher at School A gave this account:

Performance is lower … only two people passed mathematics in Grade 8 last year. I have 174 Grade 9 learners; it means 172 of them can’t do mathematics. They are in Grade 9 because of departmental policy. There are only five Level 7 learners in my class … there are a lot of learners in my class who cannot have the ability to do maths. (Motsamai, 2017, p. 130)

This study revealed that, according to the responses from the interviews, poor curriculum coverage added to the poor performance of learners in the ANA. The ANA is written during Term 3 and, according to the participants, by that stage only Term 1 and Term 2’s work had been covered instead of the required curriculum for Terms 1, 2 and 3 (Motsamai, 2017). An examination of the ANA paper lends support to this claim. Teachers’ views on this issue further speak to time that is wasted on revising work and drilling learners to obtain higher scores without ensuring that learners understand the work.

Conclusion

This study sought to analyse evidence of variation in the quality of SBA from the perspective of principals, HODs and teachers. This is an important topic, as the management, monitoring, moderation and implementation of SBA filter down from the principal through to the teachers and, eventually, to the learners. Using a small case study sample, this study was able to confirm what has long been suspected in the education system: SBA is not as effective as it could be. Themes highlighted in the current study that point to possible sources of variation include lack of adherence to policy, variation in classroom practice and inconsistent monitoring and moderation practices, together with differences in learner performance when SBA tasks are administered compared to national, external assessments. While the results of this study are not generalisable, they provide insight into this topic and a starting point for further research on the matter.

An analysis of the interviews and the document analysis revealed that most of the HODs and principals lacked in-depth knowledge and understanding of their roles and functions in making SBA reliable, credible and valid. This was due not only to a lack of capacity to perform such functions, but also to a lack of effective induction and training by the district and provincial offices. SBA is supposed to be used as formative assessment: it should be used throughout the year as assessment for learning and should provide feedback to teachers to inform and guide their teaching. School-based assessment has been deeply problematic, since teachers vary in how they construe mathematical concepts. Findings from the current study confirm the view of Stiggins (2004) that current assessment systems are harming learners due to a failure to balance the use of standardised tests and classroom tests. Poliah (2010) posits that learners obtain high marks due to the quality of question papers at schools: teachers set papers that are not of the required standard, and these pass through the hands of the HODs without being properly moderated. The absence of proper moderation is problematic in itself and could undermine further attempts to ensure valid and reliable assessment (Maile, 2013). Moreover, Fleisch (2008) argues that many GET mathematics teachers are uncertain of what is expected of them.

Any change in the curriculum and assessment policies would require intensive training to be made available to all of the stakeholders: school principals, HODs and teachers. Sufficient time for training and exposure to SBA should be provided to all teachers. The feedback gathered from stakeholders such as teachers and HODs should provide the ministry with the relevant information to make the necessary changes and modifications to the existing assessment policies and guidelines. According to Talib, Naim, Ali and Hassan (2014), the cascade model is not always the best model to use, as information withers and is lost during training. The cascade model has failed to prepare district officials, school principals, HODs and teachers for the complexity involved in implementing the assessment policy, particularly the SBA component (Dichaba & Mokhele, as cited in Talib et al., 2014).

In a developing context, the main challenge in assessment is to find strategies that will be fair to all learners from diverse backgrounds and to provide quality, reliable, credible and valid results. Findings from the current study clearly point to the fact that the effectiveness of SBA depends on a variety of issues pertaining to teachers and learners. With constant curricular changes being made, it is imperative for SBA to be evaluated from time to time.

Acknowledgements

The authors acknowledge the National Research Foundation for providing bursary funding for this study to be undertaken.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

S.v.S. was responsible for the introduction, problem statement, literature review, discussion of results and conclusions. P.M. was responsible for the compilation and analysis of all data referred to in the article.

References

Adler, J. (2002). Lessons from and in curriculum reform across contexts. The Mathematics Educator, 12(2), 2–4. Available from http://tme.journals.libs.uga.edu/index.php/tme/article/view/111

Black, P., & Wiliam, D. (2010). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 92(1), 81–90. https://doi.org/10.1177/003172171009200119

Chisholm, L. (Ed.). (2004). Changing class: Education and social change in post-apartheid South Africa. Cape Town: HSRC Press.

Creswell, J., Hanson, W., Clark, V., & Morales, A. (2007). Qualitative research design: Selection and implementation. The Counselling Psychologist, 35(2), 236–264. https://doi.org/10.1177/0011000006287390

Daniel, J., Kanwar, A., & Uvalić-Trumbić, S. (2009). Breaking higher education’s iron triangle: Access, cost, and quality. Change: The Magazine of Higher Learning, 41(2), 30–35. https://doi.org/10.3200/CHNG.41.2.30-35

Davison, C. (2007). Views from the chalk face: English language school-based assessment in Hong Kong. Language Assessment Quarterly, 4(1), 37–68. https://doi.org/10.1080/15434300701348359

Department of Basic Education. (2011a). National protocol for assessment Grades R–12. Pretoria: DBE.

Department of Basic Education. (2011b). National policy pertaining to the programme and promotion requirements of the National Curriculum Statement Grades R–12. Pretoria: DBE.

Department of Basic Education. (2012). Action plan to 2014: Towards the realization of schooling, 2025. Pretoria: DBE.

Department of Basic Education. (2013). Curriculum and assessment policy statement Grade 7–9 (Mathematics). Pretoria: DBE.

Department of Education. (2002). Revised national curriculum statement R–9 (Schools). Pretoria: DOE.

Dunne, T., Gumedze, F., Tawodzera, G., Ensor, P., Galant, J., Jaffer, S., et al. (2002). Textbooks, teaching and learning in primary mathematics classrooms. African Journal of Research in Mathematics, Science and Technology Education, 6(1), 21–35. https://doi.org/10.1080/10288457.2002.10740537

Fleisch, B. (2008). Primary education in crisis: Why South African schoolchildren underachieve in reading and mathematics. Cape Town: Juta.

Gipps, C. (1994). Beyond testing, towards a theory of educational assessment. Washington, DC: The Falmer Press.

Graven, M. (2002). Coping with new mathematics teacher roles in a contradictory context of curriculum change. The Mathematics Educator, 12(2), 21–27. Available from http://tme.journals.libs.uga.edu/index.php/tme/article/viewFile/113/104

Guba, E.G., & Lincoln, Y.S. (1994). Competing paradigms in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 105–117). Thousand Oaks, CA: Sage.

Kanjee, A. (2007). Improving learner achievement in schools: Applications of national assessment in South Africa. Cape Town: HSRC Press.

Long, C., Dunne, T., & De Kock, H. (2014). Mathematics, curriculum and assessment: The role of taxonomies in the quest for coherence. Pythagoras, 35(2), a240. https://doi.org/10.4102/pythagoras.v35i2.240

Maile, S. (2013). School-based quality assurance of assessment: An analysis of teachers’ practices from selected secondary schools located in Tshwane North District. International Journal of Humanities and Social Science Invention, 2(10), 15–28. Available from http://www.ijhssi.org/papers/v2(10)/Version-3/C021003015028.pdf

Makgato, M., & Mji, A. (2006). Factors associated with high school learners’ poor performance: A spotlight on mathematics and physical science. South African Journal of Education, 26(2), 253–266. Available from http://www.sajournalofeducation.co.za/index.php/saje/article/viewFile/80/55

McMillan, J., & Schumacher, S. (2010). Research in education. Evidence-based inquiry. (6th edn.). Upper Saddle River, NJ: Pearson.

Moodley, G. (2013). Implementation of the curriculum and assessment policy statements: Challenges and implications for teaching and learning. Unpublished doctoral dissertation. University of South Africa, Pretoria, South Africa. Available from http://hdl.handle.net/10500/13374

Motsamai, P. (2017). Differences in the quality of school-based assessment: Evidence for Grade 9 mathematics achievement. Unpublished master’s thesis. University of Pretoria, Pretoria, South Africa. Available from http://hdl.handle.net/2263/60965

Mouton, J. (2001). How to succeed in your master’s & doctoral studies. A South African guide and resource book. Pretoria: Van Schaik.

Mullis, I., Martin, O., Gonzalez, E., Ruddock, G., O’Sullivan, C. & Preuschoff, C. (2011). TIMSS 2011 International Mathematics report. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College.

Onwuegbuzie, A.J., & Leech, N.L. (2007). A call for qualitative power analyses. Quality & Quantity, 41(1), 105–121. https://doi.org/10.1007/s11135-005-1098-1

Poliah, R. (2003). Enhancing the quality of assessment through common tasks for assessment. Pretoria: UMALUSI.

Poliah, R. (2010). The management of quality assurance of school based assessment at national level in South Africa. Unpublished doctoral dissertation. University of Johannesburg, Johannesburg, South Africa. Available from http://hdl.handle.net/10210/3688

Poliah, R. (2014, May). Twenty years of democracy in South Africa: A critique of the examinations and assessment journey. Paper presented at the 40th Annual Conference of the International Association for Educational Assessment, Singapore. Available from http://www.iaea.info/documents/paper_371f2a264.pdf

Pournara, C. (2015, March 27). Divergent views on national assessments. Media Centre, Wits University. Available from https://www.wits.ac.za/news/latest-news/general-news/2015/2015-03/divergent-views-on-national-assessments.html

Reddy, V., Prinsloo, C., Arends, F., Visser, M., Winnaar, L., Feza, N., et al. (2012). Highlights from TIMSS 2011: The South African perspective. Cape Town: HSRC Press. Available from http://www.hsrc.ac.za/en/research-data/ktree-doc/12417

Reyneke, M., Meyer, R., & Nel, C. (2010). School-based assessment: The leash needed to keep the poetic ‘unruly pack of hounds’ effectively in the hunt for learning outcomes. South African Journal of Education, 30(2), 277–292. Available from http://www.scielo.org.za/pdf/saje/v30n2/v30n2a07.pdf

Schoenfeld, A. (1992). Handbook for research on mathematics teaching and learning. New York, NY: Macmillan.

Setati, M. (2002). Researching mathematics education and language in multilingual South Africa. The Mathematics Educator, 12(2), 6–20. Available from http://math.coe.uga.edu/tme/issues/v12n2/v12n2.Setati.pdf

Shalem, Y., Sapire, I., & Sorto, M.A. (2014). Teachers’ explanations of learners’ errors in standardised mathematics assessments. Pythagoras, 35(1), a254. https://doi.org/10.4102/pythagoras.v35i1.254

Spaull, N. (2011). A preliminary analysis of SACMEQ III South Africa. Stellenbosch: Stellenbosch University.

Stiggins, R.J. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22–27. https://doi.org/10.1177/003172170408600106

Talib, R., Naim, H.A., Ali, N.S.M., & Hassan, M.A.M. (2014, August). School-based assessment: A study on teacher’s knowledge and practices. Paper presented at the Fifth International Graduate Conference on Engineering, Humanities and Social Science, University of Technology, Johor Bahru, Malaysia. Available from http://www.researchgate.net/publicatiom/277562401

Taole, M. (2013). Teachers’ conceptions of the curriculum review process. International Journal of Educational Sciences, 5(1), 39–46. Available from http://krepublishers.com/02-Journals/IJES/IJES-05-0-000-13-Web/IJES-05-1-000-13-ABST-PDF/IJES-05-1-039-13-221-Taole-M-J/IJES-05-1-039-13-221-Taole-M-J-Tt.pdf

Torrance, H., & Pryor, J. (1998). Investigating formative assessment: Teaching, learning and assessment in the classroom. UK: McGraw-Hill Education.

UMALUSI. (2012). Annual report 2011/2012. Pretoria: UMALUSI. Available from http://www.umalusi.org.za/docs/annual/2012/annualreport1112.pdf

United Nations Educational, Scientific and Cultural Organisation. (2012). Challenges in basic mathematics education. Paris: UNESCO.

Usiskin, Z. (2012, July). What does it mean to understand school mathematics? Paper presented at the 12th International Congress on Mathematical Education, COEX, Seoul, Korea.

Vithal, R., & Volmink, J. (2005). Mathematics curriculum research: Roots, reforms, reconciliation and relevance. In R. Vithal, J. Adler, & C. Keitel (Eds.), Researching mathematics education in South Africa: Perspectives, practices and possibilities (pp. 3–27). Cape Town: HSRC Press. Available from http://www.hsrcpress.ac.za/product.php?productid=2034

Wilmot, A. (2005). Designing sampling strategies for qualitative social research: With particular reference to the Office for National Statistics’ Qualitative Respondent Register. Survey Methodology Bulletin-Office for National Statistics, 56, 53.

World Bank. (2008). Curricula, examinations, and assessment in secondary education in sub-Saharan Africa. World Bank Working Paper No. 128. Africa Human Development Series. Washington, DC: World Bank. Available from http://hdl.handle.net/10986/6372


 
