needed both to gain this skill and to allow programs to accurately assess and monitor ileal intubation rates in an ongoing manner.
Ongoing assessment
Suggestions as to the ideal method for training and assessment have been made for each of the cognitive and motor skills addressed in this chapter (Table 6.2). In each section, reference was made to the need for ongoing assessment. Such assessment makes it possible to document when a fellow has reached the threshold of competence and, more importantly, helps define which factors equate to competence. This continuous monitoring is rarely done at most institutions. For more than 15 years, experts in endoscopy education have been calling for continuous measurement of fellows' skills as they progress toward competency [42, 43]. In recent years, greater emphasis has been placed on the documentation of competence by professional organizations such as the ASGE and training regulatory bodies such as the Residency Review Committee (RRC) of the Accreditation Council for Graduate Medical Education (ACGME). This section will focus on the development and use of one such method of ongoing skills assessment, applied from the beginning of training until graduation.
Many institutions use a global evaluation system in which, toward the end of training, supervising staff make subjective recommendations as to the preparedness of the trainee to practice independently. Though commonly used, this type of assessment is profoundly subjective and prone to significant biases. It also does not allow early identification of learners who fall below the average learning curve, nor does it leave ample time to remediate these fellows as graduation draws close. Professional societies have instead tried to develop recommendations to standardize training. Unfortunately, these recommendations are based on a few publications of learning curve data that include very small numbers of subjects, are predominantly retrospective, and focus on very narrow definitions of competency based primarily on cecal intubation rates [42–45]. On the basis of these data, it has been suggested that a trainee should perform a minimum of 140 colonoscopies before competency can be assessed [46]. However, these competency guidelines offer only one benchmark that defines competence (cecal intubation rate). The ASGE suggests that a “90% technical success” rate at reaching the cecum is needed to be deemed minimally competent [47]. Though better defined, this is still only one parameter of performance and does not take into account all of the other motor or cognitive skills. The RRC guidelines state, “Assessment of procedural competence should not be based solely on a minimum number of procedures performed, but on a formal evaluation process.” What is needed is an ongoing, formalized assessment that covers a broad range of both motor and cognitive skills (Table 6.3).
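For programs that keep their own procedure logs, the two benchmarks mentioned above (a minimum of 140 colonoscopies and roughly 90% technical success at reaching the cecum) can be checked with a few lines of code. The sketch below is purely illustrative; the function names and log format are hypothetical and are not drawn from any guideline or software described in this chapter.

```python
# Illustrative sketch only: checking a trainee's logged colonoscopies against
# the ~90% cecal intubation benchmark and the suggested 140-procedure minimum.

def cecal_intubation_rate(procedures):
    """procedures: list of dicts with a boolean 'cecum_reached' field."""
    if not procedures:
        return 0.0
    reached = sum(1 for p in procedures if p["cecum_reached"])
    return reached / len(procedures)

def ready_for_competency_review(procedures, min_procedures=140, min_cecal_rate=0.90):
    """True only when both the volume threshold and the technical-success
    threshold cited in the text are met."""
    return (len(procedures) >= min_procedures
            and cecal_intubation_rate(procedures) >= min_cecal_rate)

# Example: 150 logged colonoscopies, 138 of which reached the cecum (92%)
log = [{"cecum_reached": i < 138} for i in range(150)]
print(ready_for_competency_review(log))  # True
```

Even this toy check illustrates the chapter's point: procedure counts and cecal intubation rate alone say nothing about the other cognitive and motor skills listed in Table 6.3.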
One such evaluation process is the Direct Observation of Procedural Skills (DOPS) form. This form focuses on six broad motor skill parameters and was developed by researchers as a means of assessing these skills following an intensive hands-on training course [27, 48]. Another standardized skill evaluation form has been developed at the Mayo Clinic (Rochester, MN) in conjunction with the ASGE, which grades a spectrum of both cognitive and motor skills of trainees during colonoscopy. These component skills of colonoscopy were identified by a panel of expert endoscopists and educators and are based on the general training and competency recommendations outlined by professional societies [49]. From this, a blueprint for the evaluation tool was created, and after refinement by the ASGE training committee, the Assessment of Competency in Endoscopy (ACE) tool was developed (Table 6.4). Some version of this survey has been in use for over a decade at Mayo and is completed for every colonoscopy from the first day of a fellow's training until graduation. The supervising staff enter these data directly into the institution's endoscopy database during the procedure, so that the performance data are linked to other procedural data such as cecal intubation and withdrawal times as well as therapies applied, polyp detection, and complications. This assessment tool was recently made available as part of the ProVation endoscopy reporting software package used by a large number of endoscopy centers. Linking the evaluation to the procedural database spares staff from duplicating data already generated automatically as part of the procedure. Admittedly, the goal of completing a skills assessment for every procedure is labor-intensive and daunting, but it can certainly be accomplished. Alternatively, assessment forms such as the DOPS or ACE could be completed on a periodic basis, giving instructors a "snapshot" in time of how a trainee compares to the expected learning curve.
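To make that linkage concrete, the record below sketches how a single colonoscopy entry might combine the staff-graded skills with the procedural data described above (cecal intubation and withdrawal times, therapies, polyp detection, complications). The field names and structure are hypothetical illustrations only, not the actual Mayo or ProVation database schema.

```python
# Hypothetical per-procedure record pairing an ACE-style skills grade with the
# procedural data the text says is captured automatically. Illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ColonoscopyAssessment:
    fellow_id: str
    procedure_number: int                       # nth colonoscopy of this fellow's training
    # Data generated automatically by the endoscopy reporting system
    cecal_intubation_time_min: Optional[float] = None
    withdrawal_time_min: Optional[float] = None
    polyps_detected: int = 0
    therapies_applied: List[str] = field(default_factory=list)
    complication: Optional[str] = None
    # Skills graded by the supervising staff during the procedure
    motor_skill_scores: Dict[str, int] = field(default_factory=dict)      # e.g. {"loop_reduction": 3}
    cognitive_skill_scores: Dict[str, int] = field(default_factory=dict)  # e.g. {"pathology_recognition": 4}
```

Keeping the graded skills in the same record as the automatically captured metrics is what allows learning curves to be reported without asking staff to re-enter procedural data.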
Table 6.3 Competency metrics for continuous assessment
Cognitive skills:
  Appropriate use of initial sedation
  Continuous monitoring and management of patient comfort and depth of sedation
  Identification of landmarks/awareness of scope location
  Accuracy and sophistication of pathology recognition
  Selection of appropriate tools and settings for therapy
Motor skills:
  Safe scope advancement
  Loop reduction techniques
  Depth of independent scope advancement
  Cecal intubation time
  Success/failure at TI intubation
  Mucosal inspection during withdrawal
  Application of tools for therapy
The data collected from such an ongoing evaluation system allow reporting of a fellow's performance at any given point in training as well as comparison of the individual's scores to the average learning curve of his/her peers at the same points in training (Figure 6.35). In the examples shown, progress reports graphically depict the learning curves for any of the various competency parameters. In this example, the progress of three different first-year fellows, A, B, and C (above, at, and below average, respectively), is shown. This allows program directors to identify early those trainees who are falling off the learning curve and to intervene earlier in training. Additionally, these metrics and average learning curves are valuable data for establishing criteria that define what performance equates to "competence" in a reliable, reproducible, and generalizable manner. The concept of "competence" at present is rather limited in scope. Methods that can formally define and assess competence, such as the ACE tool or DOPS, can be powerful tools. Each teaching institution should be encouraged to adopt some form of continuous assessment, whether using a comprehensive form such as the one described here or a more limited ongoing skills assessment. Eventually, professional organizations and regulating bodies will require some type of direct measure of competence for all training portfolios as the parameters of cognitive and motor competence become increasingly better defined [50, 51].
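As a minimal illustration of how such a comparison might be computed, the sketch below builds a cumulative cecal intubation rate curve for one fellow and flags the procedure numbers at which that fellow trails the cohort average; the functions, the 0/1 outcome encoding, and the flagging margin are all assumptions made for illustration, not part of the ACE tool or any reporting software.

```python
# Minimal sketch of a learning-curve comparison: at each procedure number n,
# compare one fellow's cumulative cecal intubation rate with the cohort mean
# at the same point in training. All names and thresholds are illustrative.
from statistics import mean

def cumulative_rate(outcomes):
    """outcomes: chronological list of 0/1 values (cecum reached or not).
    Returns the running success rate after each procedure."""
    curve, successes = [], 0
    for n, outcome in enumerate(outcomes, start=1):
        successes += outcome
        curve.append(successes / n)
    return curve

def flag_below_average(fellow_outcomes, cohort_outcomes, margin=0.10):
    """Flag procedure numbers at which the fellow's cumulative rate trails the
    cohort mean by more than `margin` (an arbitrary illustrative threshold)."""
    fellow_curve = cumulative_rate(fellow_outcomes)
    peer_curves = [cumulative_rate(o) for o in cohort_outcomes]
    flags = []
    for n, fellow_rate in enumerate(fellow_curve, start=1):
        peer_rates = [curve[n - 1] for curve in peer_curves if len(curve) >= n]
        if peer_rates and fellow_rate < mean(peer_rates) - margin:
            flags.append(n)
    return flags
```

The same comparison could be run for any of the metrics in Table 6.3, which is what makes early identification of a fellow who is falling off the expected learning curve feasible.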
Table 6.4 Assessment of Competency in Endoscopy (ACE) tool. This survey is completed by staff during each colonoscopy and grades a fellow's various cognitive and motor skills. The form is meant to augment the data already collected by the procedural database (such as medications administered, cecal intubation, and withdrawal times). Such a form allows continuous assessment of a fellow's individual skills as they progress toward competence. (Copyrighted and used with permission of Mayo Foundation for Medical Education and Research.)
Modified from the Mayo Colonoscopy Skills Assessment Tool (© Mayo Foundation for Medical Education and Research).