Saudi Journal of Kidney Diseases and Transplantation
SPECIAL ARTICLE
Year : 2009  |  Volume : 20  |  Issue : 3  |  Page : 476-480
Redesigning an educational assessment program


College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia

 

   Abstract 

Changing an educational assessment program represents a challenge to any organization. Such change usually aims to achieve reliable and valid tests and an assessment program that shifts from individual assessment methods to an integral program intertwined with the educational curriculum. This paper critically examines recent developments in assessment theory and practice, and establishes practical advice for redesigning educational assessment programs. Faculty development, availability of resources, administrative support, and competency-based education are prerequisites to an effective, successful change. Various elements should be considered when redesigning an assessment program, such as curriculum objectives, educational activities, standard setting, and program evaluation. Assessment programs should be part of the educational activities rather than a separate objective on their own, linked to students' high-quality learning.

Keywords: Assessment, Constructive alignment, Formative assessment, Standards setting

How to cite this article:
Al Kadri HM. Redesigning an educational assessment program. Saudi J Kidney Dis Transpl 2009;20:476-80


   Introduction


Selection of an assessment program should depend on reliability, validity, educational impact, acceptability, and cost. [1] Any group assigned to assess a curriculum should aim for reliable and valid tests; each individual item is meaningless on its own and becomes meaningful only in relation to the others. [2] The validity of these items should emanate from their intrinsic meaning, the content of the elements, and the task posed to the examinees. [3] Furthermore, there should be a shift from individual methods to an integral assessment program intertwined with the educational curriculum. [4] The revision group should not neglect the tremendous impact that the assessment program has on the learners [5] and on the final quality of the output. Test developers should use this phenomenon strategically to reinforce desirable learning behavior.

The aim of the new design is to improve on the commonly used models and to add new methods where needed, in order to make them more reliable and better able to reach the educational objectives. This paper critically examines recent developments in assessment theory and practice and establishes practical guidelines for redesigning the educational assessment program.


   Prerequisites to Redesigning the Assessment Programs


Efforts to introduce alternative assessment into large-scale, high-stakes testing programs have had mixed results due to high costs, logistic barriers, and political ramifications. [6] Issues to be addressed as prerequisites to redesigning the assessment programs include:

Faculty Development

Staff development is an essential prerequisite to achieving the desired results of the change. For an assessment system to operate properly, the assessing teachers must grow professionally and become more experienced in order to master the system. [6] The teachers' levels of preparation and acceptance have been shown to play a significant role in the success or failure of newly implemented assessment programs. [6] The reviewing group should ensure that faculty development and the preparation of educationalists proceed in parallel with the redesign of the assessment programs. The required training, cost, and administrative support should be presented and approved prior to any change; otherwise, the redesign may be unsuccessful.

Resources and Political Evaluation

A comprehensive evaluation of available and required resources is important to achieve the desired outcome of the change. The reviewing group should evaluate the allocated budget and the assessment methods acceptable in their educational environment. There is no point in redesigning an assessment program that will not be implemented because of financial or political constraints.

Administrative Support

A supportive administrative system and staff are very important to support the change phase and facilitate the process.

Competency Based Education

As a prerequisite to the redesign, we should scrutinize the curriculum objectives and decide which forms and methods are suitable for assessing competence in an integrated manner. Our approach to assessment should combine knowledge, comprehension, problem solving, technical skills, good attitude, and ethics. [7]

The redesigned system should allow data to be gathered systematically through observation, and decisions to be made on that basis, so that the assessment program performs reliably.

We should not focus only on the outcome, but also examine the process whenever appropriate. [7]


   Redesigning an Assessment Program


In redesigning an assessment program, one should take into consideration the following elements:

Curriculum Objectives and Constructive Alignment

In his work on constructive alignment, Biggs [8],[9],[10] explicated the principles underlying all good assessment policies and described constructive alignment: a curriculum in which teaching methods and assessment are aligned to the learning activities stated in the objectives. Accordingly, we should initially formulate clear objectives for the educational exercise and align them with the educational activities and the assessment program. All the core objectives of the curriculum and the different instructional formats should then be linked to the assessment tools used, and should reflect and cover all layers of Miller's pyramid [Figure 1]. [11] Every instructional unit should contribute to students' progress variables. The overall picture is completed when students pursue the instructional units and their related assessment along with their educational impact. A properly aligned assessment that covers all the objectives will help students pass progress exams easily and perform well on them. Furthermore, there must be a match between what is taught and what is assessed; this principle represents the basic tenet of content validity. [6]
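The alignment principle above can be expressed as a simple coverage test: every core objective should be linked to at least one teaching activity and at least one assessment tool, or the match between what is taught and what is assessed is broken. A minimal sketch of such a check follows; the objective names, activities, and assessment items are invented for illustration.

```python
# Constructive-alignment coverage check: flag curriculum objectives that
# lack either a linked teaching activity or a linked assessment tool.

def unaligned(objectives, activities, assessments):
    """objectives: iterable of objective names; activities/assessments:
    dicts mapping objective name -> list of linked items."""
    return [o for o in objectives
            if not activities.get(o) or not assessments.get(o)]

# Hypothetical curriculum fragment:
objectives = ["take history", "interpret ECG", "counsel patient"]
activities = {"take history": ["bedside tutorial"],
              "interpret ECG": ["lecture", "skills lab"]}
assessments = {"take history": ["OSCE station 2"],
               "interpret ECG": ["MCQ paper"],
               "counsel patient": ["OSCE station 5"]}

# "counsel patient" is assessed but never taught, so it is flagged.
print(unaligned(objectives, activities, assessments))  # → ['counsel patient']
```

Running such a check at the curriculum design stage surfaces objectives that are assessed without being taught (or taught without being assessed) before any examination is delivered.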

Educational Activities/Teachers' Management and Responsibilities

Such integration will require a paradigm shift in the learning and teaching processes. Peer and self assessment can be very beneficial in improving students' performance. [12] More formative assessment should be added to the program to bridge the gap between actual and reference levels of performance, with feedback used to close this gap. [13] The effectiveness of formative assessment depends on the students' accurate perception of the gap and their motivation to address it. [13] Formative assessment with feedback from a trained educator should be stressed. For example, this could be addressed in the out-patient tutorial setting and during general practice visits, which may help the students identify their weaknesses and improve their performance.

Setting Standards

A clear standard needs to be set below which a doctor would be judged unfit to practice. [14] Some clinical competencies are essential, and very high standards are required for them, while for others minimum standards can be accepted; arbitrary standards might result in passing unskilled and dangerous students. Norm referencing is unacceptable for clinical competency licensing tests. [14] The selection of standards for the assessment program should follow a systematic process that decides on the types and methods of the standards and the selection of the referees (examiners). Several meetings should be held to reach a consensus and to calculate the cut-off points. Furthermore, the referees should decide overall how to pass or fail candidates. [14],[15] The review group might recommend using a relative type of standard setting for written examinations, while an absolute type can be used for assessing clinical competencies. Of the various standards, Angoff's or Hofstee's methods are recommended for clinical assessments such as the Objective Structured Clinical Examination (OSCE) and the mini clinical examination (mini-CEX), since they are easy to use and have sizable supporting research. [15] One examiner is frequently used to assess the performance of students; to achieve a reliable result, six to eight judges should be considered instead. [15] The feasibility of this should be discussed, and the redesigning group should insist on the necessary budget and trained personnel. More importantly, an expert in assessment and standard setting should calculate the standards based on the methods used. After each test, the credibility of the results should be discussed and the pass rate compared against parameters of competence to ensure that they have the expected relationship.
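In Angoff's method, each judge estimates, for every item or station, the probability that a borderline (minimally competent) candidate would succeed, and the cut-off score is the mean of these estimates across items and judges. A minimal sketch of that calculation follows; the panel size and the ratings are invented example values.

```python
# Angoff standard setting: each judge rates, per station, the probability
# (0-1) that a borderline candidate would succeed on that station.
# The cut-off is the mean of all ratings, expressed as a percentage.

def angoff_cutoff(ratings_by_judge):
    """ratings_by_judge: one inner list per judge, each holding that
    judge's per-station borderline-candidate probability estimates."""
    per_judge_means = [sum(r) / len(r) for r in ratings_by_judge]
    # Averaging the judges' means gives the panel's cut-off score.
    return 100 * sum(per_judge_means) / len(per_judge_means)

# Hypothetical panel of 3 judges rating 4 OSCE stations:
panel = [
    [0.60, 0.70, 0.50, 0.80],  # judge 1
    [0.55, 0.65, 0.45, 0.75],  # judge 2
    [0.65, 0.75, 0.55, 0.85],  # judge 3
]
cutoff = angoff_cutoff(panel)
print(f"Angoff cut-off: {cutoff:.1f}%")  # → Angoff cut-off: 65.0%
```

In practice the panel would also meet to compare ratings and re-rate discrepant stations before the final averaging, which is why the consensus meetings described above matter.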

Assessment

To promote learning, assessment should be an educational tool and not just an examination tool. Formative assessment allows students to learn from tests, receive feedback, and acquire knowledge and skills. Furthermore, to assure that doctors are competent, we need summative assessment. [14] There are no inherently good or bad assessment methods; they are all relative. What matters is that the assessment program is an integral part of the curriculum. [4]

Whenever we review an assessment program, we should not aim at disturbing the general structure of the existing assessment, but should instead edit it so that it fits the change of objectives. We should have adequate sampling across judges, instruments, and contexts to ensure validity and reliability, [4] and coverage of all of Miller's pyramid components.
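The gain from sampling across more judges can be estimated with the Spearman-Brown prophecy formula, which predicts the reliability of a score averaged over k judges from the reliability of a single judge's ratings. A minimal sketch follows; the single-judge reliability of 0.45 is an invented example value, not a figure from this paper.

```python
# Spearman-Brown prophecy: predicted reliability of the mean of k judges'
# scores, given the reliability r1 of a single judge's ratings.

def spearman_brown(r1, k):
    return (k * r1) / (1 + (k - 1) * r1)

# Hypothetical single-judge reliability:
r1 = 0.45
for k in (1, 2, 4, 6, 8):
    print(k, "judges ->", round(spearman_brown(r1, k), 2))
```

With these assumed numbers, moving from one judge to eight raises the predicted reliability from 0.45 to about 0.87, which is consistent with the recommendation above to use six to eight judges rather than one.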

Program Evaluation and Quality Assurance (QA) Process

Ideally, continuous follow-up and evaluation should be structured within the assessment program. The main interest in the QA system lies in process indicators for quality assessment and improvement. Furthermore, the internal QA system should not ignore external monitoring or accreditation standards. The periodically gathered data should be used to enhance learning in the educational program. Hence, a periodic analysis of these data should be performed and recommendations prepared.

Currently, there is a shift of emphasis from reliability to validity. The information gathered using different formats should be examined, and the students' performances should be mapped. The structural elements of the accountability system should be described, and uniform levels of system functioning for quality assurance indices should also be established. [6]

Quality assurance can be provided at the end of each unit, at an acceptable frequency and with standards for interpreting the results. Conducting a mid-unit evaluation, particularly at the initial stage of implementation, will help in the early identification and correction of problems, and in defining the absolute and relative standards for data interpretation. After the quality assurance evaluation of each unit, relevant recommendations should be established, and a special committee should be entrusted to monitor and implement them. An audit every 3 to 6 months, using students' results together with focused interviews, should be performed to monitor the success of the changes.

In conclusion, designing an assessment program is a complicated process. It should start at the design stage of the curriculum, based on constructive alignment. We should develop curricula in which teaching methods and assessments are aligned to the learning activities stated in the objectives. Assessments should be part of the educational activities rather than a separate objective of their own. The effective use of formative assessment may help bridge the gap between the actual and reference levels of performance.

Program evaluation and quality assurance should be a cyclic process of continuous measuring, judging, and developing. Success can be achieved if the evaluation activities of the assessment program are approached in a systematic and integrated fashion. Finally, despite the extensive assessment research, we still do not have criteria that can help us determine the adequacy of sampling; qualitative and quantitative methods should be combined in a meaningful way. The issue, then, is not whether one uses old-fashioned or modern methods of assessment, but much more why and when we select this or that method in a given situation.

 
   References

1. van der Vleuten CP. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ 1996;1:41-67.
2. Schuwirth LW, van der Vleuten CP. A plea for new psychometric models in educational assessment. Med Educ 2006;40:296-300.
3. Ebel RL. The practical validation of tests of ability. Educ Measurement: Issues Prac 1983;2:7-10.
4. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ 2005;39:309-17.
5. Fredriksen N. The real test bias: Influence of testing on teaching and learning. Am Psychol 1984;39:193-202.
6. Wilson M, Sloane K. From principles to practice: An embedded assessment system. Appl Measurement Educ 2000;13:181-208.
7. Hager P, Gonczi A, Athanasou J. General issues about assessment of competence. Assess Eval Higher Educ 1994;19:3-16.
8. Biggs J. Aligning teaching and assessing to course objectives. In: International Conference on Teaching and Learning in Higher Education: New Trends and Innovations, University of Aveiro; 2003.
9. Biggs J. Aligning the curriculum to promote good learning. Paper presented at: Constructive Alignment in Action: Imaginative Curriculum Symposium, LTSN Generic Centre; 2002.
10. Biggs J. Enhancing teaching through constructive alignment. Higher Educ 1996;32:347-64.
11. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(Suppl):S63-7.
12. Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: A qualitative study. Adv Health Sci Educ 2007;12:169-86.
13. Rushforth HE. Objective structured clinical examination (OSCE): Review of literature and implications for nursing education. Nurse Educ Today 2006.
14. Wass V, van der Vleuten CP, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
15. Norcini JJ. The metric of medical education: setting standards on educational tests. Med Educ 2003;37:464-9.

Correspondence Address:
Hanan M.F Al Kadri
College of Medicine, King Saud Bin Abdulaziz University for Health Sciences, Riyadh
Saudi Arabia


PMID: 19414957



    Figures

  [Figure 1]



 
