Training in the medical profession originated as an apprenticeship model. The learner observed, assisted and performed in the actual clinical setting, and improved through the mentor's feedback on his performance. With time, we moved to a model where the initial years of training were confined within the walls of lecture halls and demonstration rooms, and trainees were only subsequently exposed to real patients. The training years were compartmentalized as pre-clinical, para-clinical and clinical. The assessment methods also conformed to this curricular plan.
The contemporary trends in medical education demonstrate an effort at dissolving the boundaries between the three stages of training, with early exposure to patients and clinical practice areas (‘early clinical exposure’) and a ‘learning while doing’ (the student doctor) approach. Similarly, postgraduate trainees develop different competencies at seamlessly blended stages of training, with the overall goal of being able to deliver specialist health care. Clearly, the emphasis is on the performance of the trainee rather than only on his competence. Simply put, while competence is the ability to do a certain task, performance refers to the overall output in a real situation, based on the ability, context and judgment of the trainee. Development of many competencies leads to a certain level of performance in an actual situation.
In India, the Postgraduate Medical Education Regulations 2000 (PGMER) [1] of the Medical Council of India (MCI) state that PG training should be competency based, and also suggest the use of a logbook for monitoring the learning process. However, these regulations do not provide any details of in-training assessments. While there is a provision for internal assessment through periodic assessments during the MBBS program, there is no such requirement for PG courses. Postgraduate training is directed not merely at the attainment of knowledge, attitudes and skills but also at observable responsiveness and appropriate functioning in real-life situations. It follows that even the most ideal of conventional assessments conducted in examination settings will fall short of measuring these outcomes. There is indeed a need to observe and assess trainees in real situations so that necessary mid-course corrections can be provided to them. Workplace based assessment (WPBA) is being increasingly used to assess trainees by direct observation and to shape their learning.
The Postgraduate Medical Education and Training Board (PMETB) of the UK defines WPBA as the assessment of working practices based on what doctors actually do in the clinical setting, carried out predominantly in the workplace itself [2]. The two cardinal components of WPBA are thus ‘direct observation’ and ‘conduct in the workplace.’ A third, indispensable aspect is the provision of feedback.
The current deficiencies in our assessment system stem from a failure to conceptualize assessment as a process for continuous improvement and learning, leading to the non-utilization of many available tools. This article discusses the rationale for using WPBA for in-training assessment, its advantages over traditional assessment methods, its educational utility, the tools used for WPBA, and the Indian scenario with respect to the challenges that need to be addressed for optimal inclusion of WPBA in medical training.
Rationale and Advantages of WPBA
The practice of medicine may be described as a
‘performing art based on science and judgment’. This means that while
there is a necessity to assess the scientific knowledge base, the
assessment is essentially incomplete without an assessment of
performance and judgment. And, there is no better place to do so than in
the workplace itself, in real context. Several supporting arguments can
be put forth in favour of adopting WPBA [Box 1].
Box 1: Rationale for Adopting WPBA
• Conforms to the highest level of Miller’s pyramid
• Focus on clinical skills including the necessary soft skills (communication, behavior, professionalism, ethics, attitude)
• Observation (in real situation) and feedback
• Context and content specificity
• Compensates for some shortcomings of the traditional assessment methods
• Seamless blending of purpose and ideology with that of in-training assessment
• Alignment of learning with actual working
• Encourages reflective practice
Conforms to the highest level of Miller’s pyramid: Miller’s pyramid [3] is a simple and useful model for the assessment of clinical competence/performance. The base of the pyramid is rightfully formed by the knowledge base (‘Knows’), assessed by simple knowledge tests. The next level, ‘Knows how’, measures understanding and application of knowledge, and can be assessed using patient management problems, short essay questions, etc. The third level, ‘Shows how’, or competence, is amenable to measurement by methods such as the OSCE. Until recently, this appeared sufficient to make a judgment about the outcome of training. However, the performance of doctors in controlled examination situations correlates poorly with what they do in actual practice [4]; hence the need to assess at the highest level, i.e. the ‘Does’ level. WPBA assesses the optimal and judicious use of competencies in authentic settings.
Focus on clinical skills including the necessary soft skills: The development of clinical skills is at the very centre of medical training. The importance of a good history and physical examination in making a correct diagnosis cannot be overemphasized. This is substantiated by studies reporting that, across different clinical settings, the correct diagnosis can be established in more than 75% of patients on the basis of history and clinical examination alone [5,6].
The backbone of clinical skills lies in several soft skills such as communication skills, professionalism and ethics, also referred to as the non-cognitive component of clinical skills [7]. It is this non-technical component of one’s abilities that determines how well a person uses his/her clinical skills for health care delivery [8]. Therefore, it is important to include in the medical curriculum not only formal training for developing these soft skills alongside the technical clinical skills, but also an effective plan for assessing them. Unfortunately, this is not done despite their perceived importance. Moreover, these non-cognitive skills are not easily amenable to assessment by traditional methods. There is some effort to assess them with methods that assess competence, such as the OSCE, but these remain confined to the examination situation and the results may not generalize to actual performance in real life [4].
Many of the tools for WPBA, such as the Mini-Clinical Evaluation Exercise (mini-CEX) and Direct Observation of Procedural Skills (DOPS), inherently include an assessment of communication skills. The subjective judgment of the patient-trainee interaction by the assessor in a variety of situations (recorded on a global rating scale) allows for contextual feedback. This also overcomes the problem of relatively rigid checklist-based assessment of communication skills, since professional behavior and communication patterns may vary with context, clinical situation, country and region.
Observation (in real situation) and feedback: Carrying the earlier argument forward, WPBA not only provides the opportunity to observe and assess in the real-life situation but also to provide feedback for improvement at the most appropriate time. Hattie’s landmark synthesis of educational research established feedback as a major contributor to learning [9]. Feedback is most effective when given for specific tasks. Despite clear evidence in its support, the power of observing actual clinical work and providing feedback remains grossly underutilized in medical education. While no such data are available from India, studies from western countries suggest that less than one third of clinical encounters are actually observed during training [10,11]. At the postgraduate level, up to 80% of students may have only one observed clinical encounter [12]. These facts make it amply clear that there is not only a limitation in the number of opportunities available for direct observation and feedback, but also gross underutilization of these sparse opportunities.
Context and content specificity: Context is important in any learning situation. It is a major determinant of how a physician will perform in a certain clinical setting [13]. The content areas in a curriculum are also developed based on larger/local needs and context. However, when selecting cases to be included in an examination, assessors are liable to select cases that are unusual or catch their fancy rather than the cases that a student doctor must essentially master; one example is including a complicated case with neurological problems in the practical examination rather than a ‘simple’ case of anemia or malnutrition. This weakens the interpretations drawn from such an assessment, as generalizability to real-life performance is lost. WPBA inherently maintains a certain level of context and content specificity of assessment, as the workplace is best suited for sampling the situations that a student will actually encounter in clinical practice after qualifying.
Compensates for some shortcomings of the traditional assessment methods: As already discussed, traditional assessment methods have largely focused on assessing competence. The assessment methods in use are more concerned with measuring the outcome than the learning process. It is well accepted that for assessment to be meaningful, it should follow a longitudinal plan (rather than a one-time exercise at the end or mid-term); it should sample multiple areas of work (representative of actual work); and it should focus on the process of learning as much as on its outcome [14]. Assessment is put to its best use when it is also used to modulate the learning process by providing directional feedback to the learner for improvement.
WPBA encompasses all the above desirable components by: (i) its potential for being included as a longitudinal plan spread over the course of study; (ii) real-life situations providing a good sample of the situations that the trainee will actually encounter after completion of training; (iii) the absence of the artificiality of the examination situation, so that the trainee is likely to be more at ease and system-related influences such as facilities and infrastructure are likely to be minimal; and (iv) providing several opportunities for contextual feedback and improvement, thereby keeping the trainee on a proper course of learning.
Alignment of learning with actual working: The use of problems as a trigger for learning utilizes the principle of contextual learning. This has been the basis of teaching methods such as case-based learning or, on a larger scale, problem-based learning curricula. Literature suggests that learning in the workplace is triggered by specific problems encountered in the course of work [15]. This difference between on-the-spot learning and planned learning is well described by Hoffman and Donaldson [16]. It calls for a definite and deliberate effort at recognizing and exploiting the learning opportunities at the workplace.

WPBA is an assessment method and also a learning method that is capable of responding to this call. It encourages deliberate observation and feedback at the workplace and therefore has the potential to promote on-the-spot and problem/context-specific learning.
Encourages reflective practice: For assessment-based feedback to function as a tool for learning, it necessitates reflection by the recipient (student) as well as the provider (teacher). Feedback is more effective when provided around a specific task. Contextual feedback likely provides a powerful trigger for reflection: the recipient is compelled to think back on how he performed and how he could have done better, based on the feedback and his own thoughts. The teacher is likewise prompted to reflect on what kind of feedback he gave, how he gave it, and what effect it produced in the learner. WPBA functions on the foundation of feedback; direct observation at the workplace is made useful only by the accompanying feedback and its ability to trigger reflection. It therefore provides a ready-made system for encouraging reflective practice, and hence enhances learning.
Tools for WPBA
It may be emphasized here that WPBA is not being recommended as a replacement for the conventional assessment system but as a complement to it, for best benefit. The tools in use for WPBA are best deployed in a judicious combination as per local feasibility and context. They may be grouped under some broad categories, as under:
• Documentation of work by the trainee through logs, e.g. the Logbook and Clinical Encounter Cards (CEC)
• Direct observation of the trainee’s performance during clinical encounters, such as the mini-Clinical Evaluation Exercise (mini-CEX), Direct Observation of Procedural Skills (DOPS), Acute Care Assessment Tool (ACAT), and Clinical Work Sampling (CWS)
• Discussion of individual clinical cases, such as Chart Stimulated Recall (CSR; also referred to as Case-based Discussion or CbD in the UK)
• Feedback on routine performance during clinical work from peers, coworkers and patients (multisource feedback), using tools such as the mini-Peer Assessment Tool (mini-PAT) and Patient Satisfaction Questionnaires (PSQ)
• A longitudinal compilation of the above assessments and one’s own reflections or learning from other sources into a Portfolio.
Web Table I gives a brief overview of some of the common tools in use [17,18].
Providing Feedback after WPBA
The strength of WPBA lies in direct observation and the provision of contextual feedback. The crucial factors that determine the effectiveness of feedback include its timing, the method of giving it, a focus on alterable behaviors, and an environment of confidentiality and mutual trust. The assessor compares the trainee’s performance to standards (where available) or to expected norms based on his own professional judgment. The onus is then on the assessor to present this to the trainee in an acceptable form, with a doable action plan for improvement. The trainee, on his part, is expected to have an open approach, with willingness to reflect upon his own performance and the feedback provided.
Fig. 1 Workplace based assessment.
Various methods have been described for providing effective feedback [19]. The simplest of these is the ‘sandwich method’, wherein criticism is delivered between ‘layers’ of praise. Pendleton’s framework [20] is another common model in use: the trainee first states what went well, then what could have been done better to improve the performance, after which the assessor provides suggestions. This model is sometimes criticized as being too rigid, and more flexible modifications have been developed by educationists.
Utility of WPBA for Assessment Program
The utility of any assessment is conceptualized as a product of validity, reliability, feasibility, acceptability and educational impact [21]. It follows that there may be tradeoffs between these key elements in various assessment plans. The corollary is that an assessment may be designed to have overall utility even if it scores suboptimally on any one aspect.
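Van der Vleuten’s model [21] is often expressed as a multiplicative index. A schematic rendering using the elements listed above (the symbols are ours, for illustration only):

$$\text{Utility} = V \times R \times A \times F \times E$$

where $V$ is validity, $R$ reliability, $A$ acceptability, $F$ feasibility and $E$ educational impact. The multiplicative form makes the tradeoff explicit: a near-zero score on any single element drags the overall utility towards zero, whereas moderate scores on all elements can still combine into a useful assessment.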
WPBA inherently scores high on validity by virtue of being set in real clinical situations at the workplace. It provides for the observation of a wide sample of clinical work in authentic settings. Studies have shown a consistent correlation with other measures of clinical competence.
The reliability of WPBA is not as much an issue of debate as its generalizability. Since most tools used for WPBA involve many encounters with many assessors, spread over a period of time, reliability builds up to a reasonable extent; six to eight encounters in a year are considered optimal for acceptable reliability.
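The gain in reliability with repeated sampling can be illustrated with the Spearman-Brown prophecy formula (an illustrative approximation; formal estimates in the WPBA literature are usually derived from generalizability theory):

$$R_n = \frac{n\,R_1}{1 + (n-1)\,R_1}$$

where $R_1$ is the reliability of a single observed encounter and $R_n$ that of the average across $n$ encounters. Assuming, purely for illustration, a single-encounter reliability of $R_1 = 0.3$, eight encounters would give $R_8 = (8 \times 0.3)/(1 + 7 \times 0.3) \approx 0.77$, in keeping with the six to eight encounters suggested above.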
The acceptability of WPBA depends on the sensitization of students and faculty, fostering an environment of mutual trust, and the training of assessors in providing feedback. These are all modifiable factors that can be improved with some deliberate effort in this direction. Some reports from India have shown the acceptability of some of the methods [22,23].
The feasibility of WPBA may even be better than that of traditional assessment methods, as it is carried out in the course of routine work. Though it requires initial faculty training, some extra time and student sensitization, there is hardly any requirement for additional infrastructure. In India, where clinical work is abundant and most trainees are actually overburdened with work, this may be the most appropriate developmental learning modality.
The educational impact of the WPBA is high on account
of its being based on developmental and contextual feedback. The
significant impact of feedback on learning has already been discussed
earlier in the article.
Indian experience with WPBA and its potential role
Some WPBA tools, such as the mini-CEX and DOPS, or tools similar to those described, are being used at a few institutes in our country; however, they are employed as isolated methods rather than as part of a planned WPBA program [22-25]. These initial reports are encouraging in terms of acceptance by faculty and students, as well as feasibility. There is also unpublished work by the first author and others on including workplace based methods in UG and PG training.
It is important to point out that using these tools per se may not produce the desired outcomes. What matters is incorporating them as part of a larger assessment program that already has traditional methods in place. Not all tools may be usable in all situations, and an optimal mix and number of WPBA encounters (6-8 in a year) may balance reliability with feasibility.
Limitations of WPBA
WPBA is not a replacement for traditional methods of assessment but an add-on, particularly suited to in-training or formative assessment. Students who perform well in initial encounters may become overconfident, and this may impede the motivation to improve [26]. Weaker trainees, on the other hand, may be discouraged by the first few encounters and may avoid seeking feedback. Since WPBA puts a demand on time, there is a tendency for trainees to seek less senior assessors. There is evidence to suggest that more senior and expert staff may give lower but more accurate ratings of performance [27]. Given the important role of subjective evaluation in WPBA, this becomes an important consideration. It is also important to remember that most of the tools for WPBA are ‘un-standardized’ by conventional psychometric standards. In a standardized tool, such as multiple choice questions, reliability is built into the tool; with WPBA, it depends on how the tool is used. This may require faculty training to make the best use of these tools. Trainees also need to be sensitized and shown the beneficial effects of feedback to make these tools more acceptable.
Challenges to Implementation
Some of the challenges to introduction and
integration of WPBA in the present curricular plan are discussed below,
along with possible suggestions for overcoming them.
Sensitization of students and faculty: "The eyes do not see what the mind does not know." Even with ample opportunities for direct observation of trainees’ work every day, we as trainers let these go, essentially because most of us are not consciously aware of the immense teaching-learning potential of this simple act. We wait to hold a formal examination to evaluate the students (and, hopefully, to provide feedback). Sensitization programs on WPBA may be an initial step in making everyone aware of the opportunities available at hand every day. Introduction of and training in the actual methods may then follow.
Faculty training: This is perhaps the biggest challenge to the implementation of WPBA. The training of assessors is important in two main areas: (1) clarity on what to assess and the norms to expect; and (2) the art of giving effective feedback.
The former will reduce the chances of suboptimal performance or an essential skill being missed by assessors, especially those who are less experienced, and will also contribute to the standardization of the assessment. The latter is essential for any trainer, given that feedback is a significant contributor to learning; the benefit of the entire exercise may be lost if feedback is not delivered in an appropriate, positive manner with suggestions for improvement. An additional issue could be a potential conflict in the role of faculty members as teachers as well as assessors. This may surface as an unwillingness to record negative evaluations, and therefore a likelihood of failing to identify residents in difficulty. This barrier may, at least in part, be overcome by appropriate sensitization and training of faculty.
Demonstrating feasibility: Introducing a new method into the assessment plan requires much enterprise and planning. This inertia is partially overcome when one sees the method being introduced in other institutions. The key here may be for interested educational leaders at various institutes in India to come together and introduce WPBA as a planned program at their respective institutions. This may not only prove to be a motivational step for others but also demonstrate feasibility in our setting.
Creating an environment of mutual trust: This calls for a change in thinking and culture, and is understandably a big challenge. Assessments innately imply some degree of competition, which can make people wary of assessment and suspicious of efforts at feedback and improvement. It is therefore important to bring the two key stakeholders (student and teacher) onto a common platform and create an environment of nurture, professional educational support and mutual trust rather than competition [26]. Mutual trust is essential for any fruitful feedback session.
Inclusion in the regulations: There is global agreement on the benefit of including this modality in the assessment plan in medical education, and many countries have well-established guidelines for its implementation. Efforts at integrating it into Indian medical training will certainly get a boost if it is included and recommended as part of the standard regulations of the MCI, along with clear guidelines for implementation. The literature also confirms the important role of external regulations in the feedback process in WPBA [28]; the authors suggest that possible ways to enhance implementation might include stipulating a mandatory frequency of observation and feedback and conducting quality reviews, in addition to providing instructions and training to assessors and trainees.
We have argued that assessment provides us with an opportunity not only to tell what the trainees have learnt but also the quality of their learning. In-training assessments are intricately related to competency based training, and unless the ‘formative’ function of assessment is invoked, it may be difficult to ensure that trainees acquire the required competencies. WPBA provides us with tools and techniques, and is similar to the internal assessments so commonly used for undergraduates in medicine and other educational streams. In essence, WPBA is to clinical skills what class tests are to knowledge.
In summary, ‘competencies are developmental’, and so must be their assessment. The utility of assessment is further enhanced by conducting it in authentic settings. WPBA has both elements, i.e. a developmental trajectory as well as authenticity, and is therefore strongly recommended for inclusion in the in-training assessment program of any competency based PG training.
1. Medical Council of India. Post Graduate Medical
Education Regulations 2000. Available from:
http://www.mciindia.org/RulesandRegulations/PGMedicalEducationRegulations2000.aspx.
Accessed 8 December, 2012.
2. PMETB Assessment Committee. Developing and Maintaining an Assessment System - a PMETB Guide to Good Practice. London: Postgraduate Medical Education and Training Board; 2007. Available from: http://www.gmc-uk.org/Assessment_good_practice_v0207.pdf_31385949.pdf. Accessed 1 January, 2013.
3. Miller GE. The assessment of clinical
skills/competence/performance. Acad Med. 1990;65:S63-7.
4. Rethans J, Norcini J, Baron-Maldonado M, Blackmore
D, Jolly B, La Duca T. The relationship between competence and
performance: implications for assessing practice performance. Med Educ.
2002;36:901-9.
5. Hampton JR, Harrison MJG, Mitchell JRA, Prichard
JS, Seymour C. Relative contributions of history-taking, physical
examination, and laboratory investigation to diagnosis and management of
medical outpatients. BMJ. 1975;2:486-9.
6. Peterson MC, Holbrook JH, Hales DV, Smith NL,
Staker LV. Contributions of the history, physical examination and
laboratory investigation in making medical diagnoses. West J Med.
1992;156:163-5.
7. Norman G. Non-cognitive factors in health sciences education: from the clinic floor to the cutting room floor. Adv Health Sci Educ. 2010;15:1-8.
8. Lane IF. Professional competencies in health sciences education: from multiple intelligences to the clinic floor. Adv Health Sci Educ. 2010;15:129-46.
9. Hattie JA. Influences on Student Learning. Inaugural professorial address. 1999. Available from: http://www.education.auckland.ac.nz/webdav/site/education/shared/hattie/docs/influences-on-student-learning.pdf. Accessed 1 January, 2013.
10. Daelmans HE, Hoogenboom RJ, Donker AJ, Scherpbier
AJ, Stehouwer CD, van der Vleuten CPM. Effectiveness of clinical
rotations as a learning environment for achieving competences. Med
Teach. 2004;26:305-12.
11. Kogan JR, Hauer KE. Brief report: use of the
mini-clinical evaluation exercise in Internal Medicine core clerkships.
J Gen Intern Med. 2006;21:501-2.
12. Day SC, Grosso LG, Norcini JJ, Blank LL, Swanson
DB, Horne MH. Residents’ perceptions of evaluation procedures used by
their training program. J Gen Intern Med. 1990;5:421-6.
13. Regehr G. The persistent myth of stability: on the chronic underestimation of the role of context in behavior. J Gen Intern Med. 2006;21:544-5.
14. Singh T. What to assess? In: Principles of
Assessment in Medical Education. 1st Ed. New Delhi: Jaypee Brothers
Medical Publishers; 2012. p. 14-24.
15. van de Wiel MWJ, van den Bossche P, Janssen S, Jossberger H. Exploring deliberate practice in medicine: how do physicians learn in the workplace? Adv Health Sci Educ. 2011;16:81-95.
16. Hoffman KG, Donaldson JF. Contextual tensions of the clinical environment and their influence on teaching and learning. Med Educ. 2004;38:448-54.
17. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach. 2007;29:855-71.
18. Kogan JR, Holmboe ES, Hauer KE. Tools for direct
observation and assessment of clinical skills of medical trainees. JAMA.
2009;302:1316-26.
19. London Deanery. Models of Giving Feedback. Available from: http://www.faculty.londondeanery.ac.uk/e-learning/feedback/models-of-giving-feedback. Accessed 23 December, 2012.
20. Pendleton D, Schofield T, Tate P, Havelock P. The
Consultation: an approach to teaching and learning. Oxford: Oxford
University Press; 1984.
21. van der Vleuten CPM. The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ Theory Pract. 1996;1:41-67.
22. Singh T, Sharma M. Mini-clinical examination as a
tool for formative assessment. Natl Med J India. 2010;23:100-2.
23. Kapoor H, Tekian A, Mennin S. Structuring an internship program for enhanced learning. Med Educ. 2010;44:501-2.
24. Butterworth K, Acharya S, Bajracharya S, Shrestha A, Yadav B. Work-based assessment - piloting of a simple feedback tool. In: Abstract Book, South East Asian International and National Conference on Health Professions Education; Coimbatore. 5th-8th September, 2012. p. 54.
25. Ravi Shankar SL, Ramalingam S, Chacko TV. Ensuring a competent intern: sharing a working model of a monitoring and evaluation system. In: Abstract Book, South East Asian International and National Conference on Health Professions Education; Coimbatore. 5th-8th September, 2012. p. 112.
26. General Medical Council. Workplace Based Assessment: A Guide for Implementation. A GMC/AoMRC guidance paper. 2010. Available from: http://www.gmc-uk.org/Workplace_Based_Assessment___A_guide_for_implementation_0410.pdf_48905168.pdf. Accessed 25 December, 2012.
27. Wilkinson J, Crossley J, Wragg A, Mills P, Cowan
G, Wade W. Implementing workplace-based assessment across medical
specialties in the United Kingdom. Med Educ. 2008;42:364-73.
28. Pelgrim EAM, Kramer AWM, Mokkink HGA, van der
Vleuten CPM. The process of feedback in workplace-based assessment. Med
Educ. 2012;46:604-12.