Abstract
Changing an educational assessment program represents a challenge to any organization. Such a change usually aims to achieve reliable and valid tests and an assessment program that shifts from individual assessment methods to an integral program intertwined with the educational curriculum. This paper critically examines recent developments in assessment theory and practice and establishes practical advice for redesigning educational assessment programs. Faculty development, availability of resources, administrative support, and competency-based education are prerequisites to effective, successful change. Various elements should be considered when redesigning an assessment program, such as curriculum objectives, educational activities, standard setting, and program evaluation. Assessment programs should be part of the educational activities rather than a separate objective on their own, linked to students' high-quality learning.

Keywords: Assessment, Constructive alignment, Formative assessment, Standards setting
How to cite this article: Al Kadri HM. Redesigning an educational assessment program. Saudi J Kidney Dis Transpl 2009;20:476-80
Introduction
Selection of an assessment program should depend on reliability, validity, educational impact, acceptability, and cost. [1] Any group assigned to assess a curriculum should aim for reliable and valid tests; each individual item is meaningless on its own and becomes meaningful only in relation to the others. [2] The validity of these items should emanate from their intrinsic meaning, the content of the elements, and the task posed to the examinees. [3] Furthermore, there should be a shift from individual methods to an integral assessment program intertwined with the educational curriculum. [4] The revision group should not neglect the tremendous impact that the assessment program has on the learners [5] and on the final quality of the output. Test developers should use this phenomenon strategically to reinforce desirable learning behavior.
The aim of the new design is to improve on the commonly used models and to add new methods where needed, making them more reliable and better able to meet the educational objectives. This paper critically examines recent developments in assessment theory and practice and establishes practical guidelines for redesigning an educational assessment program.
Prerequisites to Redesigning the Assessment Programs
Efforts to introduce alternative assessment into large-scale, high-stakes testing programs have had mixed results due to high costs, logistic barriers, and political ramifications. [6] Issues to be addressed as prerequisites to redesigning the assessment programs include:
Faculty Development
Staff development is an essential prerequisite to achieving the desired results of the change. For an assessment system to operate properly, the assessing teachers must grow professionally and become more experienced in order to master the system. [6] The teachers' levels of preparation and acceptance have been shown to play a significant role in the success or failure of newly implemented assessment programs. [6] The reviewing group should ensure that faculty development and the preparation of educationalists proceed in parallel with the redesign of the assessment programs. Required training, cost, and administrative support should be presented and approved prior to any change; otherwise, the redesign may be unsuccessful.
Resources and Political Evaluation
A comprehensive evaluation of available and required resources is important to achieve the desired outcome of the change. The reviewing group should evaluate the allocated budget and the assessment methods acceptable in their educational environment. There is no point in redesigning an assessment program that will not be implemented due to financial or political constraints.
Administrative Support
A supportive administrative system and staff are essential to sustain the change phase and facilitate the process.
Competency Based Education
As a prerequisite to the redesign, we should scrutinize the curriculum objectives and decide which forms and methods are suitable for assessing competence in an integrated manner. Our approach to assessment should combine knowledge, comprehension, problem solving, technical skills, attitudes, and ethics. [7]
The redesigned system should allow data to be gathered systematically through observation and decision-making, so that the assessment program performs reliably.
We should not focus only on the outcome, but also examine the process whenever appropriate. [7]
Redesigning an Assessment Program
In redesigning an assessment program, one should take into consideration the following elements:
Curriculum Objectives and Constructive Alignment
In his work on constructive alignment, Biggs [8],[9],[10] explicated the principles underlying all good assessment policy and described constructive alignment: a curriculum in which teaching methods and assessment are aligned with the learning activities stated in the objectives. Accordingly, we should first formulate clear objectives for the educational exercise and align them with the educational activities and the assessment program. All the core objectives of the curriculum and the different instructional formats should then be linked to the assessment tools used, and should reflect and cover all the layers of Miller's pyramid [Figure 1]. [11] Every instructional unit should contribute to students' progress variables. The overall picture is completed when students work through the instructional units and their related assessments, along with their educational impact. A properly aligned assessment that covers all the objectives will help students pass progress exams easily and perform well on them. Furthermore, there must be a match between what is taught and what is assessed; this principle represents the basic tenet of content validity. [6]
Educational Activities/Teachers' Management and Responsibilities
Such integration will require a paradigm shift in the learning and teaching processes. Peer and self-assessment can be very beneficial in improving students' performance. [12] More formative assessment should be added to the program to bridge the existing gap between actual and reference levels of performance, with feedback used to narrow this gap. [13] The effectiveness of formative assessment depends on the students' accurate perception of the gap and their motivation to address it. [13] Formative assessment with feedback from a trained educator should be stressed. For example, this could be addressed in the out-patient tutorial setting and during general practice visits, which may help the students identify their weaknesses and improve their performance.
Setting Standards
A clear standard needs to be set below which a doctor would be judged unfit to practice. [14] Some clinical competencies are essential and require very high standards, while for others minimum standards can be accepted; arbitrary standards might result in passing unskilled and dangerous students. Norm referencing is unacceptable for clinical competency licensing tests. [14] Selection of the standards of the assessment program should follow a systematic process that decides on the types and methods of the standards and the selection of the referees (examiners). Several meetings should be held to reach a consensus and to calculate the cut-off points. Furthermore, the referees should decide overall how to pass or fail candidates. [14],[15] The review group might recommend a relative standard-setting method for written examinations, while an absolute method can be used for assessing clinical competencies. Among the various methods, Angoff's or Hofstee's are recommended for clinical assessments such as the Objective Structured Clinical Examination (OSCE) and the mini clinical evaluation exercise (mini-CEX), since they are easy to use and supported by a sizable body of research. [15] A single examiner is frequently used to assess the performance of students; to achieve a reliable result, six to eight judges should be considered instead. [15] The feasibility of this should be discussed, and the redesigning group should insist on the necessary budget and trained personnel. More importantly, an expert in assessment and standard setting should calculate the standards based on the methods used. After each test, the credibility of the results should be discussed, and the pass rate should be compared against parameters of competence to ensure that they have the expected relationship.
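The arithmetic behind Angoff's method can be sketched briefly. In the standard Angoff procedure, each judge estimates, for every item, the probability that a borderline (minimally competent) candidate would answer correctly; a judge's implied cut score is the sum of those estimates, and the panel cut score is the mean across judges. The following Python snippet is an illustration only, using hypothetical ratings that are not from this paper:

```python
# Illustrative Angoff standard-setting calculation (hypothetical data).
from statistics import mean

# rows = judges, columns = items; each value is the judged probability
# that a borderline candidate answers that item correctly
ratings = [
    [0.70, 0.55, 0.80, 0.60, 0.45],  # judge 1
    [0.65, 0.60, 0.75, 0.55, 0.50],  # judge 2
    [0.75, 0.50, 0.85, 0.65, 0.40],  # judge 3
]

# Each judge's implied cut score is the sum of their item ratings;
# the panel cut score is the mean of those sums.
judge_cut_scores = [sum(judge) for judge in ratings]
cut_score = mean(judge_cut_scores)

n_items = len(ratings[0])
print(f"Cut score: {cut_score:.2f} / {n_items} "
      f"({100 * cut_score / n_items:.1f}%)")
```

With these hypothetical ratings the panel cut score works out to 3.10 of 5 items (62%); in practice the panel would discuss discrepant ratings across several meetings before the final calculation, as described above.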
Assessment
To promote learning, assessment should be an educational tool and not just an examination tool. Formative assessment allows students to learn from tests, receive feedback, and acquire knowledge and skills. Furthermore, to assure that the doctors are competent, we need a summative assessment. [14] There are no inherently bad or good assessment methods; they are all relative. What matters is that the assessment program should be an integral part of the curriculum. [4]
Whenever we review an assessment program, we should not aim to disturb the general structure of the existing assessment, but should instead edit it so that it fits the changed objectives. We should have adequate sampling across judges, instruments, and contexts to ensure validity and reliability, [4] and coverage of all of Miller's pyramid components.
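The reliability gain from broader sampling can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result (not taken from this paper): if a single judge yields reliability r, averaging over n independent judges projects a reliability of nr / (1 + (n - 1)r). A minimal sketch, assuming a hypothetical single-judge reliability of 0.4:

```python
# Spearman-Brown prophecy formula: projected reliability when a score is
# averaged over n independent judges (or cases), given the reliability
# of a single judge.

def spearman_brown(single_judge_reliability: float, n_judges: int) -> float:
    r = single_judge_reliability
    return n_judges * r / (1 + (n_judges - 1) * r)

# Hypothetical single-judge reliability of 0.4:
for n in (1, 2, 6, 8):
    print(f"{n} judge(s): projected reliability {spearman_brown(0.4, n):.2f}")
```

Under this assumption, projected reliability rises from 0.40 with one judge to 0.80 with six and 0.84 with eight, which is consistent with the recommendation of six to eight judges in the standard-setting section.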
Program Evaluation and Quality Assurance (QA) Process
Ideally, continuous follow-up and evaluation should be structured within the assessment program. The QA system relies mainly on process indicators for quality assessment and improvement. Furthermore, the internal QA system should not ignore external monitoring or accreditation standards. The periodically gathered data should be utilized to enhance learning in the educational program; hence, a periodic analysis of these data should be performed and recommendations prepared.
Currently, there is a shift of emphasis from reliability to validity. The information gathered using different formats should be examined, and the students' performances should be mapped. The structural elements of the accountability system should be described, and uniform levels of system functioning for quality assurance indices should also be established. [6]
Quality assurance can be provided at the end of each unit at an acceptable frequency, together with standards for interpreting the results. Conducting mid-unit evaluation, particularly at the initial stage of implementation, will help in the early identification and correction of problems and in defining the absolute and relative standards for data interpretation. After the quality assurance evaluation of each unit, relevant recommendations should be established, and a special committee should be entrusted to monitor and implement them. An audit every 3 to 6 months, using students' results together with focused interviews, should be performed to monitor the success of the changes.
In conclusion, designing an assessment program is a complicated process. It should start at the design stage of the curriculum, based on constructive alignment. We should develop curricula in which teaching methods and assessments are aligned with the learning activities stated in the objectives. Assessment should be part of the educational activities rather than a separate objective on its own. The effective use of formative assessment may help bridge the existing gap between the actual and reference levels of performance.
Program evaluation and quality assurance should be a cyclic process of continuous measurement, judgment, and development. Success can be achieved if the evaluation activities of the assessment program are approached in a systematic and integrated fashion. Finally, despite the extensive research on assessment, we still lack criteria that can help us determine the adequacy of sampling; qualitative and quantitative methods should be combined in a meaningful way. The issue, then, is not whether one uses old-fashioned or modern methods of assessment, but rather why and when one selects a given method in a given situation.
References
1. van der Vleuten CP. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
2. Schuwirth LW, van der Vleuten CP. A plea for new psychometric models in educational assessment. Med Educ 2006;40:296-300.
3. Ebel RL. The practical validation of tests of ability. Educ Measurement: Issues Pract 1983;2:7-10.
4. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ 2005;39:309-17.
5. Frederiksen N. The real test bias: Influence of testing on teaching and learning. Am Psychol 1984;39:193-202.
6. Wilson M, Sloane K. From principles to practice: An embedded assessment system. Appl Meas Educ 2000;13:181-208.
7. Hager P, Gonczi A, Athanasou J. General issues about assessment of competence. Assess Eval Higher Educ 1994;19:3-16.
8. Biggs J. Aligning teaching and assessing to course objectives. In: International Conference on Teaching and Learning in Higher Education: New Trends and Innovations, University of Aveiro; 2003.
9. Biggs J. Aligning the curriculum to promote good learning. Paper presented at the Constructive Alignment in Action: Imaginative Curriculum Symposium, LTSN Generic Centre; 2002.
10. Biggs J. Enhancing teaching through constructive alignment. Higher Educ 1996;32:347-64.
11. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(Suppl):S63-7.
12. Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: A qualitative study. Adv Health Sci Educ 2007;12:169-86.
13. Rushforth HE. Objective structured clinical examination (OSCE): Review of literature and implications for nursing education. Nurse Educ Today 2006.
14. Wass V, van der Vleuten CP, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
15. Norcini JJ. The metric of medical education: Setting standards on educational tests. Med Educ 2003;37:464-9.

Correspondence Address: Hanan M.F. Al Kadri, College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
PMID: 19414957 
[Figure 1]