
Prospective, Randomized Trial Comparing Simulator-based versus Traditional Teaching of Direct Ophthalmoscopy for Medical Students

Open Access. Published: November 18, 2021. DOI: https://doi.org/10.1016/j.ajo.2021.11.016

      ABSTRACT

      Objective

      To compare results of simulator-based versus traditional training of medical students in direct ophthalmoscopy.

      Design

      Randomized controlled trial.

      Methods

      First-year medical student-volunteers completed 1 hour of didactic instruction regarding direct ophthalmoscopes, fundus anatomy, and signs of disease. Students were randomized to an additional hour of training on a direct ophthalmoscope simulator (n=17) or supervised practice examining classmates (traditional method, n=16). After 1 week of independent student practice using assigned training methods, masked ophthalmologist-observers assessed student ophthalmoscopy skills (technique, efficiency, global performance) during examination of five patient-volunteers, using 5-point Likert scales. Students recorded findings and lesion location for each patient. Two masked ophthalmologists graded answer sheets independently using 3-point scales. Students completed surveys before randomization and after assessments. Training groups were compared for grades, observer- and patient-assigned scores, and survey responses.

      Results

      The simulator group reported longer practice times than the traditional group (p=0.002). Observers assigned higher technique scores to the simulator group after adjustment for practice time (p=0.034). Combined grades (maximum points=20) were higher for the simulator group (median 5.0, range 0.0-11.0) than for the traditional group (median 4.0, range 0.0-9.0), although the difference was not significant. The simulator group was less likely to mistake the location of a macular scar in one patient (odds ratio 0.28, 95% confidence interval, 0.056-1.35, p=0.013).

      Conclusions

      Direct ophthalmoscopy is difficult, regardless of training technique, but simulator-based training has apparent advantages, including improved technique, the ability to localize fundus lesions, and a fostering of interest in learning ophthalmoscopy, reflected by increased practice time.


      TABLE OF CONTENTS STATEMENT
      In this controlled trial, first-year medical students without direct ophthalmoscopy experience were randomized to training with a virtual-reality, direct ophthalmoscope simulator or training by traditional methods, in which they practiced examinations on each other or on family and friends. Examination techniques were significantly better among those who practiced on the simulator, based on a standardized, masked assessment during subsequent examination of patient-volunteers. Simulation appears to be a valuable adjunct to the teaching of direct ophthalmoscopy.
Direct ophthalmoscopy allows clinicians, regardless of specialty, to visualize fundus findings, including diabetic retinopathy and papilledema, that are critical for the evaluation of patients with systemic disorders. The Association of University Professors of Ophthalmology recommends that all medical students learn to appreciate anatomic landmarks and signs of disease through direct ophthalmoscopy.1 Nevertheless, studies report that medical students, non-ophthalmology residents, and practicing physicians lack confidence in their ophthalmoscopy skills.2-4 Both educators and trainees attribute the lack of skill, in part, to inadequate training during medical school.5,6 In a survey of residency program directors in primary care specialties, respondents felt that medical schools were not adequately training students to perform direct ophthalmoscopy by the time they entered residency.6
Traditionally, teaching direct ophthalmoscopy skills involves student-on-student (or student-on-patient) practice; however, such instruction involves many challenges that limit its educational value, including an inability to provide students with accurate feedback about what they have seen in the fundi of examinees.7 Studies have evaluated alternative teaching techniques, including the use of "teaching" ophthalmoscopes, which provide instructors with images of what students see; fundus photography; and direct ophthalmoscope simulators.8-12 Simulation has been shown to improve surgical skills among residents,13-15 but evidence is limited regarding the value of simulation for teaching direct ophthalmoscopy to students.12 In this article, we describe the results of a randomized controlled trial that compared simulator-based versus traditional teaching of direct ophthalmoscopy to first-year medical students.
      METHODS
This prospective, randomized, controlled trial was conducted at the David Geffen School of Medicine at UCLA. First-year medical student-volunteers were recruited in May 2019 to participate in the study via e-mail and social media announcements. Students were not eligible for participation in the trial if they had had more than an introductory demonstration of the direct ophthalmoscope or had practiced use of a direct ophthalmoscope independently. Meals were provided at study sessions and students were given $50 Amazon gift cards upon study completion. Five patient-volunteers with various fundus findings were recruited from faculty practices to participate in the assessment phase of the study. Patients were given $100 Amazon gift cards for participation. Seven ophthalmologists with experience in medical student instruction volunteered to assess student performance; five evaluated students as they examined patients (see Acknowledgements) and two (PAQ, SA-H) graded answer sheets on which students recorded their findings. Fundus imaging of the study eye of each patient had been performed with photography or optical coherence tomography to assist in grading student answer sheets (Figure). All evaluators were masked to each student's training group. The study was approved by the UCLA Institutional Review Board prior to participant recruitment and data collection.
Figure. Clinical images illustrating pertinent findings in the fundi of four patient-volunteers who were examined by first-year medical students randomized to simulator-based versus traditional training of direct ophthalmoscopy. A. Patient 1; enlarged optic disc cup, left eye. B. Patient 2; toxoplasmic retinochoroidal scar inferotemporal to the fovea, right eye. C. Patient 4; red-free image of a large macular disciform scar, left eye. D. Patient 5; age-related macular degeneration with central pigmentary changes and drusen, left eye. An additional patient-volunteer (Patient 3) had no retinal or optic disc lesions and was considered to have a normal fundus.
Data Collection and Study Procedures. The study was conducted in three phases: training, practice, and skills assessment. Students completed a pre-training survey that collected the following information: prior direct ophthalmoscopy experience (yes vs. no); prior study of eye disease (yes vs. no); and presence or absence of pre-existing eye conditions (refractive error corrected by glasses or contact lenses, color-blindness, amblyopia). Investigators not involved in training or assessments randomized (FY) and enrolled (GC) the student-volunteers.
      Training was conducted in three identical sessions on consecutive days. The session in which each student could participate was dictated by other school responsibilities. Training began with a 1-hour didactic session on fundus anatomy, signs of retinal and optic disc disease, and basic principles of direct ophthalmoscopy (ophthalmoscope controls, proper technique, and common mistakes made by students learning to use the ophthalmoscope). Signs of retinal disease included examples similar to those in study eyes of patient-volunteers, but were not limited to those signs; examples included papilledema, large optic disc cups, chorioretinal scars, drusen, signs of vasculopathies and ischemia (hemorrhage, cotton-wool spots, exudates), and nevi. Students in each session had been randomized 1:1 into simulator and traditional training groups using block-randomization, but assignments for each day were revealed after didactic instruction. All participants on a given day attended the same didactic session, regardless of randomization assignment. Didactic sessions were conducted by the same instructor each day (GNH). Training groups ranged in size from 4 to 8 students.
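Block randomization of this kind is simple to script. The following minimal Python sketch illustrates the idea; the block size of 4, the arm labels, and the use of Python are illustrative assumptions, as the trial does not report these implementation details.

    import random

    def block_randomize(n_students, block_size=4, seed=None):
        # Assign "simulator" or "traditional" 1:1 within balanced blocks
        # (block size of 4 is an assumption; the trial does not report it).
        rng = random.Random(seed)
        assignments = []
        while len(assignments) < n_students:
            block = ["simulator", "traditional"] * (block_size // 2)
            rng.shuffle(block)  # permute arms within each block
            assignments.extend(block)
        return assignments[:n_students]  # truncate any partial final block

    # Example: one training session with 7 attendees
    print(block_randomize(7, seed=1))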
The simulator group was introduced to the Eyesi Direct Ophthalmoscope Simulator (Model EDO491 #03 × 0127, Platform 2.1, Software v1.8.0.113443, VRmagic GMBH, Mannheim, Germany) during an additional hour by one faculty instructor (CAM) and a manufacturer's representative. The simulator contains four "modules," each with multiple "cases" that are scored on the operator's ability to complete tasks.16 The first two modules focus on finding geometric shapes superimposed on a simulated fundus; the simulator registers a shape as having been "seen" when a crosshair indicating the position of gaze is held over it for 1 second. The simulator also records and displays a map of the total fundus area that has been viewed, thereby promoting efficient scanning of the entire posterior pole. Each student in the simulator group used the first module during the training hour. The third and fourth modules display simulated pathologic retinal and optic disc lesions. The fourth (examination) module evaluates the operator's ability to identify lesions and was not available to students during the study. The simulator also addresses some aspects of direct ophthalmoscopy technique: some cases require operators to adjust "lenses" within the ophthalmoscope hand piece to obtain a clear view of the simulated fundus; another penalizes operators unless the fundus is viewed through a simulation of an undilated pupil.
      During the same hour, the traditional training group examined fellow students with their personal direct ophthalmoscopes under the guidance of two faculty instructors (SF, GNH). Pupillary dilation of examined students was optional. After training, all students were encouraged to practice ophthalmoscopy independently, using only their assigned training method (practice on the simulator versus practice on family members, friends, or fellow students).
      All students were assessed at a single session, 4 days after the final training day. Students were placed in random order to begin assessments. Patient-volunteers were placed in 5 examination lanes. Study eyes were dilated with tropicamide 1% and phenylephrine 2.5%. Prior to examinations, students were oriented to an answer sheet on which they were to record (1) abnormal findings in the posterior pole; and (2) location of those findings on fundus diagrams printed on the answer sheet. Students were asked to describe lesion characteristics (size, color, shape) in their own words. Diagnoses were not required. Two additional options were “normal fundus,” if they found no abnormalities, and “could not visualize fundus.”
      Students examined all five patient-volunteers in the same order; after the first student examined the first patient and moved to the second patient, the second student in line examined the first patient, and so forth until each patient had been examined by every student. A single masked ophthalmologist-observer was assigned to each examination lane for the entire assessment period. Upon entry into the examination lane, students were told which eye was to be examined, but no further instructions were given. Students were told not to reveal their training technique to observers or patients, and patients had been told not to reveal their eye problems to students. Both observers and patients were told not to provide students with feedback during the examination. Students had 1 minute to perform each examination, during which observers assigned them three scores (technique, efficiency, and global, as described below). Following each examination, students exited the examination lane and had 1 minute to complete answer sheets before moving to the next patient. During the same minute, patients assigned a single global score for student performance, as described below. Students were asked not to discuss their answers with other students until all examinations were completed and answer sheets submitted. At the end of the assessment session, observers confirmed that no students had revealed their training techniques.
      After examining the fifth patient and submitting the answer sheet, each student completed a standard post-assessment survey. The following information was collected: independent practice time (none, <30 minutes, 30-59 minutes, 1-3 hours, >3 hours); perceived advantages and disadvantages of the training method to which the student was assigned (open-field question); and confidence in direct ophthalmoscopy (five categories ranging from “not at all comfortable” to “very comfortable”).
      The graders (SAH, PAQ) reviewed all examination answer sheets and independently assigned one of three grades (0, 1, or 2 points) for answers to the two questions for each study eye (location, description of findings); grading criteria are described below. After grading, they met to adjudicate discrepant results and achieve a single, final grade for each question. The graders had had no contact with students during the course of the study, and students’ training assignments were not listed on the answer sheets.
      Conventions and Definitions. Rubrics were created prospectively to guide evaluators (ophthalmologist-observers, patient-volunteers) in assigning scores using 5-point Likert scales (Table 1). The global score assigned by an ophthalmologist-observer was based on a gestalt of the student's overall performance. The single score assigned by patients was based on their perceptions of a student's performance in three areas: professionalism; technique; and confidence. Ophthalmologist-graders were also provided with guidelines for assigning grades to answer sheets (Table 1).
Table 1. Prospective Rubrics to Guide Evaluators in Assigning Scores and Grades to Medical Student Participants During a Randomized Controlled Trial Comparing Simulator-based versus Traditional Training Techniques for Direct Ophthalmoscopy.

Score or Grade | Factors Related to Student Performance that were Appropriate for Assigning Specific Scores and Grades

For ophthalmologist-observers (a)(b)

Technique

Score 1 (poor)
• Fumbles with the direct ophthalmoscope.
• Positions incorrectly (too far from the eye, wrong angle to the patient, uses the left eye to examine the patient's right eye or vice versa).
• Has difficulty keeping light directed into the eye.
• Constantly shifts position.
• Unable to find the optic disc.

Score 5 (excellent)
• Holds the direct ophthalmoscope correctly.
• Properly distances from the patient.
• Initially positions 15 degrees to the patient.
• Remains stable (ophthalmoscope against the brow, hand on shoulder or forehead, does not lose alignment).
• Scans the macula correctly.

Efficiency

Score 1 (poor)
• Is hesitant during the entire examination.
• Needs to "start over" multiple times.
• Does not finish the examination.

Score 5 (excellent)
• Displays confidence.
• Proceeds methodically throughout the examination.
• Finishes the examination in the allotted time.

For patient-volunteers (a)(c)

Score 1 (poor)
• Professionalism characterized by roughness and heavy-handedness, poor communication, and use of an uncomfortable level of light.
• Technique characterized by fumbling with the ophthalmoscope, inability to keep the light directed into the eye, and constant shifting of position.
• Lack of confidence characterized by hesitancy and the appearance of being unsure of examination procedures.
• Failure to complete the examination.

Score 5 (excellent)
• Professionalism characterized by a gentle touch, demonstrated consideration of the patient's comfort, and awareness of the patient's tolerance of light intensity.
• Technique characterized by being methodical, steady, and efficient.
• Confidence characterized by the appearance of knowing correct examination procedures.
• Finished without being told to stop.

For ophthalmologist-graders (d)

Location

Grade 0
• No response or incorrect fundus region.

Grade 1
• A generally correct region of the fundus, but no detail regarding the relationship of lesions to fundus landmarks.

Grade 2
• Provides detailed information regarding correct lesion location.

Clinical Findings

Grade 0
• Student could not visualize the fundus; or
• Noted a "normal" fundus despite the presence of abnormalities; or
• Listed incorrect findings.

Grade 1
• Answers described a lesion in general terms, but with no specific details (e.g., "white spot" or "weird disc").

Grade 2 (e)
• A lesion was described accurately with greater detail than for Grade 1 (e.g., "white scar with surrounding dark pigment" or "increased cup-to-disc ratio"); or
• A non-specific description was provided with a correct diagnosis (e.g., "lesion consistent with ocular toxoplasmosis").

a. Ophthalmologist-observers and patient-volunteers assigned scores using Likert scales with values from 1 through 5; for all scales, scores were defined as follows for the factor being evaluated: 1 = poor performance; 2 = below-average performance; 3 = average performance (neither good nor bad); 4 = above-average performance; 5 = excellent performance.
b. Guidelines were distributed to ophthalmologist-observers only for scores 1 and 5. Scores could be assigned for any of the factors listed, a combination of those factors, or similar factors that, in the opinion of the evaluator, warranted a given score.
c. A single score was based on the patient-volunteer's perception of a student's performance in three areas: professionalism, technique, and confidence.
d. Ophthalmologist-graders assigned separate grades of 0, 1, or 2 to students' written descriptions of lesion location and of the characteristics of the fundus lesions that they observed, guided by the listed factors.
e. A grade of 2 points was also assigned to students who correctly identified the normal fundus of patient-volunteer no. 3.
      Data Analysis and Statistical Techniques. Primary outcome measures were examination grades and ophthalmologist-observer scores. Secondary outcomes included patient-reported scores, student-reported practice time, and perceived advantages/disadvantages of each training method. Pre-training survey results were compared between training groups, to identify any differences despite randomization. Each component of the examination grades (location, clinical findings) and ophthalmologist-observer scores (technique, efficiency, global) was compared between groups for each patient individually. The sum of each student's three observer scores for all five patients, and the sum of each student's two examination grades for all five patients were compared between groups. Post-assessment survey results were compared between groups.
Statistical analysis was performed using SAS software version 9.3 (SAS Inc, Cary, North Carolina, USA). The Fisher exact test was used to calculate differences in pre-training and post-assessment survey results and to compare proportions of specific grades assigned for individual patients. Summary grades and scores were compared using the Kruskal-Wallis test. Group differences after adjustment for practice time were calculated using multivariable linear regression models. P-values <0.05 were considered statistically significant. We evaluated agreement between grades assigned by the two ophthalmologist-graders using a simple kappa statistic; values <0.4 were considered no or mild agreement, 0.41-0.6 moderate agreement, and >0.6 good or excellent agreement.
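For readers who wish to reproduce this style of analysis, the sketch below shows roughly equivalent tests in Python using scipy and statsmodels. The study itself used SAS; all numbers below are hypothetical placeholders, not trial data, and the regression specification is an assumption based on the description above.

    import numpy as np
    from scipy.stats import fisher_exact, kruskal
    import statsmodels.api as sm

    # Fisher exact test on a hypothetical 2x2 table (training group x binary outcome)
    odds_ratio, p_fisher = fisher_exact([[4, 13], [9, 7]])

    # Kruskal-Wallis test comparing summary grades between the two groups
    sim_grades = [5, 3, 7, 11, 4, 6]   # hypothetical totals, simulator group
    trad_grades = [4, 2, 6, 9, 3, 4]   # hypothetical totals, traditional group
    h_stat, p_kw = kruskal(sim_grades, trad_grades)

    # Linear regression of grade on group, adjusting for practice time (>=1 hour vs. <1 hour)
    grades = np.array(sim_grades + trad_grades, dtype=float)
    group = np.array([1] * 6 + [0] * 6)                         # 1 = simulator, 0 = traditional
    practiced = np.array([1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0])  # hypothetical indicator
    model = sm.OLS(grades, sm.add_constant(np.column_stack([group, practiced]))).fit()

    print(p_fisher, p_kw, model.pvalues[1])  # adjusted p-value for the group effect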
      RESULTS
      The study adhered to guidelines of the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE).17 A total of 39 first-year medical students volunteered to participate and were randomized to simulator (n=20) or traditional (n=19) training groups (Supplemental Figure, available at AJO.com). Prior to training and announcement of randomization assignments, 4 students (1 in the simulator group; 3 in the traditional group) decided not to participate. An additional 2 students in the simulator group did not attend the assessment session due to conflicts with other student activities. Table 2 describes characteristics of the 33 students (17 in the simulator group; 16 in the traditional group) who completed training, assessment, and surveys. No significant differences were found between groups. Only one student noted limited prior exposure to direct ophthalmoscopy. Among 17 students in the simulator group, 16 (94.1%) accessed the third module with simulated pathologic fundus lesions while practicing; 7 completed the entire module.
Table 2. Characteristics of 33 First-year Medical Student Participants in a Randomized Controlled Trial Comparing Simulator-based versus Traditional Training Techniques for Direct Ophthalmoscopy.

Characteristic | Simulator Group (n=17) | Traditional Group (n=16) | P-value (a)
Prior ophthalmoscopy experience (n [percentage]) | | | 0.485
  No | 17 (100) | 15 (93.75)
  Yes | 0 | 1 (6.25)
Prior study of eye disease (n [percentage]) | | | 0.656
  No | 15 (88.24) | 13 (81.25)
  Yes | 2 (11.76) | 3 (18.75)
Pre-existing eye condition (n [percentage]) | | | 0.728
  None | 8 (47.06) | 6 (37.50)
  Routine use of glasses (b) | 5 (29.41) | 3 (18.75)
  Routine use of contacts (b) | 4 (23.53) | 5 (31.25)
  Other (c) | 0 | 2 (12.50)

a. Fisher exact test.
b. Students were asked to identify the presence of refractive errors and the correction they would wear during the direct ophthalmoscopy assessment.
c. One participant with strabismus since childhood, with best corrected visual acuity less than 20/20, both eyes; another participant alternated between glasses and contact lenses.
Table 3 summarizes grades assigned to students in each group. Grades for both groups were low overall. The simulator group had higher grades than the traditional group for all measures, although the differences were small and not statistically significant, and they remained nonsignificant after adjustment for practice time. The highest total grade was 11 (55%) of 20 points, assigned to a student in the simulator group.
Table 3. Summary Grades Assigned by Masked Ophthalmologist-graders to 33 First-year Medical Student Participants in a Randomized Controlled Trial of Training Techniques for Learning Direct Ophthalmoscopy, Based on Examination of Five Patient-volunteers.

Grade | Simulator Group (n=17) | Traditional Group (n=16) | P-value (a) | Adjusted P-value (b)
Location (c) | | | 0.768 | 0.692
  Mean ± SD | 2.5 ± 1.7 | 2.2 ± 1.2
  Median (range) | 2.0 (0.0-6.0) | 2.0 (0.0-4.0)
  IQR | 1.0, 3.0 | 2.0, 3.0
Clinical Findings (c) | | | 0.386 | 0.407
  Mean ± SD | 2.9 ± 1.7 | 2.4 ± 1.3
  Median (range) | 3.0 (0.0-5.0) | 2.0 (0.0-5.0)
  IQR | 2.0, 4.0 | 2.0, 3.0
Total (d) | | | 0.513 | 0.524
  Mean ± SD | 5.4 ± 3.3 | 4.6 ± 2.4
  Median (range) | 5.0 (0.0-11.0) | 4.0 (0.0-9.0)
  IQR | 3.0, 7.0 | 4.0, 6.0

IQR = interquartile range; SD = standard deviation.
a. Kruskal-Wallis test.
b. Multivariable linear regression models, adjusting for practice time (≥1 hour vs. <1 hour).
c. A grade of 0, 1, or 2 was assigned to each student for each patient-volunteer examined (maximum grade = 10 points).
d. A combination of Location and Clinical Findings grades (maximum grade = 20 points).
When grades were considered on an individual-patient basis, the simulator group had significantly higher location and clinical findings grades for Patient 4, who had a large disciform scar (Table 4; Figure, image C). Differences remained significant when adjusted for practice time. A common mistake in both groups was to label the scar as a large, pale optic disc; however, simulator-trained students were less likely to mistake the location of the lesion (odds ratio 0.28; 95% confidence interval [CI], 0.056-1.35; p=0.013).
Table 4. Proportions of 33 First-year Medical Student Participants Assigned Each Grade Level during Examination of a Patient with a Macular Disciform Scar in a Randomized Controlled Trial of Training Techniques for Learning Direct Ophthalmoscopy, with Comparison between Training Groups.

Grade (a) | Simulator Group (n=17) | Traditional Group (n=16) | P-value (b) | Adjusted P-value (c)
Location (n [percentage]) | | | 0.013 | 0.027
  0 | 5 (29.41) | 13 (81.25)
  1 | 7 (41.18) | 2 (12.50)
  2 | 5 (29.41) | 1 (6.25)
Clinical Findings (n [percentage]) | | | 0.018 | 0.044
  0 | 5 (29.41) | 12 (75.00)
  1 | 4 (23.53) | 3 (18.75)
  2 | 8 (47.06) | 1 (6.25)

a. A grade of 0, 1, or 2 was assigned to each student for Location and for Clinical Findings.
b. Fisher exact test.
c. Multivariable logistic regression models, adjusting for practice time (≥1 hour vs. <1 hour).
When all 330 grades assigned to students (33 students × 5 patients × 2 questions per patient) were compared, there was good agreement between the two graders: 270 (81.8%) of the 330 assigned grades were identical between graders (simple kappa estimate 0.613 [95% CI, 0.532-0.694]). Agreement was similar when comparisons were made between grades for location only and for findings only (data not shown).
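For illustration, a simple (unweighted) kappa of this kind can be computed as follows; this minimal Python sketch uses scikit-learn, and the paired grade vectors shown are hypothetical, not the trial's data.

    from sklearn.metrics import cohen_kappa_score

    # Paired grades (0, 1, or 2) assigned independently by the two graders
    grader_1 = [0, 1, 2, 2, 0, 1, 1, 2, 0, 2]
    grader_2 = [0, 1, 2, 1, 0, 1, 2, 2, 0, 2]

    kappa = cohen_kappa_score(grader_1, grader_2)  # unweighted ("simple") kappa
    print(round(kappa, 3))  # values >0.6 would be interpreted here as good agreement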
Table 5 shows ophthalmologist-observer and patient-volunteer scores assigned to students. Observer scores assigned to simulator-trained students were consistently higher than those assigned to traditionally trained students for technique (p=0.011), efficiency (p=0.007), and global performance (p=0.004). Technique (p=0.034) and global performance (p=0.025) scores remained statistically significant after adjustment for practice time. Patients assigned higher scores to students in the simulator group than to those in the traditional group, although the difference was not statistically significant (p=0.178). Scores did not improve in either group as students progressed from patient to patient (data not included).
Table 5. Summary Scores Assigned by Masked Ophthalmologist-observers to 33 First-year Medical Student Participants, Based on Examination of Five Patient-volunteers, and a Summary Score Assigned to the Students by those Patient-volunteers in a Randomized Controlled Trial Comparing Simulator-based versus Traditional Training of Direct Ophthalmoscopy.

Scores (a) | Simulator Group (n=16) (b) | Traditional Group (n=16) | P-value (c) | Adjusted P-value (d)
Ophthalmologist-observer scores
Technique | | | 0.011 | 0.034
  Mean ± SD | 19.1 ± 2.0 | 16.4 ± 2.8
  Median (range) | 19.0 (16.0-23.0) | 17.0 (11.0-21.0)
  IQR | 17.5, 21.0 | 14.0, 18.0
Efficiency | | | 0.007 | 0.066
  Mean ± SD | 19.8 ± 2.1 | 17.1 ± 3.0
  Median (range) | 20.0 (16.0-22.0) | 17.5 (12.5-23.0)
  IQR | 18.5, 22.0 | 15.0, 18.5
Global Performance | | | 0.004 | 0.025
  Mean ± SD | 19.8 ± 1.8 | 17.2 ± 2.4
  Median (range) | 20.0 (17.0-23.0) | 18.0 (12.5-22.0)
  IQR | 18.3, 21.0 | 15.8, 18.8
Patient-volunteer score
Global Performance | | | 0.178 | 0.207
  Mean ± SD | 21.7 ± 1.3 | 20.9 ± 1.8
  Median (range) | 22.0 (19.0-24.0) | 21.0 (18.0-24.0)
  IQR | 21.0, 23.0 | 19.0, 22.0

IQR = interquartile range; SD = standard deviation.
a. Each score was based on a 5-point Likert scale. The maximum possible total for each student for each score was 25 (5 points for each of 5 patient-volunteers examined).
b. One participant did not receive a score from an ophthalmologist-observer and was not included in the analysis.
c. Kruskal-Wallis test.
d. Multivariable linear regression models, adjusting for practice time (≥1 hour vs. <1 hour).
      Table 6 summarizes student responses in the post-assessment survey. The simulator group reported longer independent practice time than the traditional group (p=0.002). With respect to students’ perceptions of their assigned training method versus their understanding of the alternative technique, the number of students within each group who felt that their method was strictly advantageous was similar to the number who felt it was strictly disadvantageous; however, substantially more students in the simulator group specifically felt that there were both advantages and disadvantages associated with simulator training (9 students [52.94%] versus 2 students [12.5%] in the traditional group, p=0.026, Fisher exact test).
Table 6. Results of a Post-assessment Survey of 33 First-year Medical Student Participants in a Randomized Controlled Trial Comparing Simulator-based versus Traditional Training of Direct Ophthalmoscopy.

Survey Responses | Simulator Group (n=17) | Traditional Group (n=16) | P-value (a)
Self-reported estimate of independent practice time (b) (n [percentage]) | | | 0.002
  None | 1 (5.88) | 1 (6.25)
  Less than 30 minutes | 0 | 3 (18.75)
  30-59 minutes | 2 (11.76) | 8 (50.00)
  1-3 hours | 14 (82.35) | 4 (25.00)
Perception of assigned training technique (c) (n [percentage]) | | | 0.128
  Advantageous | 4 (23.53) | 6 (37.50)
  Disadvantageous | 4 (23.53) | 7 (43.75)
  Neither an advantage nor a disadvantage | 0 | 1 (6.25)
  Both advantages and disadvantages | 9 (52.94) | 2 (12.50)
Confidence in ability to perform direct ophthalmoscopy (n [percentage]) | | | 0.516
  Not at all comfortable | 1 (5.88) | 2 (12.50)
  Somewhat uncomfortable | 5 (29.41) | 8 (50.00)
  Uncertain | 5 (29.41) | 2 (12.50)
  Somewhat comfortable | 6 (35.29) | 4 (25.00)
  Very comfortable | 0 | 0

a. Fisher exact test.
b. No student reported practice time of more than 3 hours.
c. The survey included an open-field option to provide further explanation of perceptions.
Based on open-field responses, the major themes among stated advantages of simulator training, reported by students in the simulator group, were the ability to visualize simulated fundus lesions (10 [76.9%] of 13 respondents) and the ability to practice without bothering a patient or classmate (4 respondents [30.8%]). Students in the traditional group viewed the lack of these factors as disadvantages of practicing on classmates with normal fundi or on patients who might be bothered by extended examinations. The major themes among stated disadvantages of simulator training were unfamiliarity with actual direct ophthalmoscope controls prior to examination of patients (8 respondents [61.5%]), the unrealistic training environment (5 respondents [38.4%]), and lack of experience in positioning relative to, and interacting with, a live person (3 respondents [23.1%]). Conversely, students in the traditional group considered working with live individuals, using a real ophthalmoscope, to be an advantage. Lack of real-time feedback was viewed as a disadvantage by one student in the traditional group.
      With regard to comfort in performing direct ophthalmoscopy during assessments, no student in either training group felt “very comfortable.” When training groups were compared, more students in the simulator group felt “somewhat comfortable” while more students in the traditional group felt “somewhat uncomfortable” or “not at all comfortable,” although the differences were not statistically significant.
      DISCUSSION
      Direct ophthalmoscopy is a difficult skill for many medical students to learn. This study sought to determine the effect of a direct ophthalmoscope simulator on the early stages of learning ophthalmoscopy. We identified better performance among students in the simulator-trained group on three measures: independent practice time; technique (ability to handle a direct ophthalmoscope and conduct an examination); and, based on the results of one patient examination, the ability to locate fundus lesions accurately.
      Longer practice time among students in the simulator group is likely attributable to the novelty of the device and the fact that students did not have to depend on others to practice. The exercises are engaging and the simulator provides immediate feedback. One can hypothesize that longer practice time will ultimately result in better skills, but practice time alone did not explain better technique scores or ability to localize lesions by the simulator group.
With regard to technique, handling of the direct ophthalmoscope was taught to both groups in the didactic session, but proper technique is reinforced by the simulator; for example, correct positioning of the hand piece is required to view the simulated fundus. By mapping the portion of the fundus viewed by the operator, the simulator also reinforced didactic instruction about how to scan the posterior pole efficiently, which might explain why the simulator group was better able to identify the location of the lesion in one patient's eye.
In a computerized literature search using PubMed, we found only one previous study that investigated the effect of training with the Eyesi Direct Ophthalmoscope Simulator. Boden and associates randomized 34 German medical students to "classic" and "simulator" training groups during an ophthalmology rotation.12 All students received the same 5-minute introduction, after which the classic group practiced on fellow students whose pupils had been dilated and looked at pictures of fundus lesions, while the simulator group looked at representations of fundus lesions in the simulator. Following the 45-minute training session, a course instructor assessed each student using a 23-point scale based on a combination of examination milestones (the ability to see various fundus landmarks) and examination procedures not specifically related to the ophthalmoscope itself. During the assessment, students used either a direct ophthalmoscope or the simulator, according to each student's randomization group. Examination milestones were self-reported by students in the classic group and reported objectively by the simulator for students who used it. Based on the assessment scale, the reported "learning success" for the simulator group (91%) was higher than that for the classic group (78%; a statistical comparison was not reported). A student survey about the learning experience yielded favorable responses, with no statistical differences between the two groups. Students in the simulator group also responded positively to three additional questions about the simulator.
      In our study, the overall ability of students to perform direct ophthalmoscopy on patients was poor in both groups, with the majority in both groups reporting low confidence in ophthalmoscopy skills. Lack of confidence during ophthalmoscopy has been described previously at all levels of medical training.2-4 Our purpose was not to achieve high levels of proficiency in direct ophthalmoscopy among students, but to identify any influence of simulator training during the early phase of learning. Students in the traditional group most likely practiced on people whose pupils were not dilated, while students in the simulator group had the option of looking through simulations of either dilated or undilated pupils. Although looking through undilated pupils requires examiners to position carefully, we feel that, during early training, it is easier for students to develop proper direct ophthalmoscope technique and procedures (such as scanning of the macula) while examining eyes with dilated pupils. This distinction would favor practice on the simulator.
There are many advantages to using simulation in medical training in general.18 Simulation allows the learner to develop proper motor skills before performing those skills on patients, and it facilitates learning through repetition with real-time feedback and objective tracking of performance. Independent practice, spaced out over time, results in more permanent memory encoding of motor skills. Simulation has limitations, however: operators receive no feedback from human examinees, and simulation does not fully duplicate the positioning of clinicians in relation to patients' bodies. With regard to student perceptions, those in the simulator group recognized both advantages and disadvantages of simulator training; the most commonly reported disadvantage was lack of exposure to a real ophthalmoscope prior to use on a patient. Thus, simulation is likely most appropriate as a complement to traditional training, not as a replacement. We propose that learning is best achieved by a combination of didactic instruction and initial supervised practice, on both people and a simulator, followed by independent practice and refinement of skills with the simulator. Initial supervision can reinforce techniques and clinician-patient interactions that are not specifically addressed by the simulator.
Strengths of our study include the use of masked assessments and the fact that all students underwent the same assessment. Evaluation of performance on real patients helps to determine the transferability of skills from the simulator to people. Limitations of our study include use of a non-validated scoring system and lack of a reference standard against which medical student skills could be compared (e.g., performance of ophthalmology residents on the same assessment). The use of open-field questions required grader interpretation of student responses, which might limit the accuracy with which skills were assessed; we addressed this limitation by comparing grades assigned by two graders and adjudicating discrepancies. Despite this potential problem, we chose not to use multiple-choice answer sheets, to reduce the risk of guessing by students. The power of the study to detect differences was limited by the relatively small number of student and patient volunteers for the trial. It is unlikely, however, that patients would have tolerated a substantially larger number of examinations. We did not perform a statistical correction for multiple comparisons during analyses, as we considered our study to be exploratory, looking for various ways that simulation influences the learning of ophthalmoscopy. While we may have identified false relationships because of the many questions asked, a correction for multiple comparisons might also have prevented us from identifying factors that should be investigated in larger, future studies. We sensed that many students were overwhelmed by the assessment experience, which may have led to poor performance. The long-term effect of simulation-based training on skills is unknown, but will be the subject of a future study.
In summary, simulation appears to enhance the early teaching of direct ophthalmoscopy. Students who practiced on the simulator demonstrated better technique with a direct ophthalmoscope during patient examinations and, in at least one patient examination, localized a fundus lesion more accurately, consistent with the simulator's emphasis on complete scanning of the fundus. The simulator also appeared to motivate students to practice longer, although duration of practice did not account for the other training benefits within the timeframe of this study. We propose that simulation will be most useful as a complement to traditional, supervised training methods.
ACKNOWLEDGEMENTS a. Financial Support: David Geffen School of Medicine at UCLA Office of the Vice Dean for Education (Dr. Braddock); unrestricted funds from the UCLA Stein Eye Institute (Dr. Straatsma); the Skirball Foundation (New York, NY, Dr. Holland); and Research to Prevent Blindness, Inc. (New York, NY) through an unrestricted grant to the UCLA Stein Eye Institute for research. Funding organizations had no role in the design or conduct of the study. b. Disclosures: Dr. McCannel is an unpaid consultant to VRmagic GMBH (Mannheim, Germany), manufacturer of the Eyesi Direct Ophthalmoscope Simulator; VRmagic GMBH had no role in the design or conduct of this research; other disclosures: Howell, no financial disclosures; Chavez, no financial disclosures; Quiros, no financial disclosures; Al-Hashimi, no financial disclosures; Yu, no financial disclosures; Fung, no financial disclosures; DeGiorgio, no financial disclosures; Huang, no financial disclosures; Straatsma, no financial disclosures; Braddock, no financial disclosures; Holland, no financial disclosures. c. Other Acknowledgments: Mr. Marshall Dial, North American representative of VRmagic GMBH, helped to instruct medical students who had been randomized to the simulator group about how to operate the simulator; he had no further role in the conduct of the randomized controlled trial or the analysis and reporting of data. The following ophthalmologists from the UCLA Department of Ophthalmology served as observers during student examinations of patients: JoAnn A. Giaconi, M.D., Lynn K. Gordon, M.D., Ph.D., Ralph D. Levinson, M.D., Tania Onclinx, M.D., and Sandip Suresh, M.D.
REFERENCES
      1. Graubart EB, Waxman EL, Forster SH, et al. Ophthalmology objectives for medical students: revisiting what every graduating medical student should know. Ophthalmology. 2018;125(12):1842-1843.
      2. Gupta RR, Lam WC. Medical students' self-confidence in performing direct ophthalmoscopy in clinical training. Can J Ophthalmol. 2006;41(2):169-174.
      3. Mottow-Lippa L. Ophthalmology in the medical school curriculum: reestablishing our value and effecting change. Ophthalmology. 2009;116(7):1235-1236, 1236.e1231.
      4. Wu EH, Fagan MJ, Reinert SE, Diaz JA. Self-confidence in and perceived utility of the physical examination: a comparison of medical students, residents, and faculty internists. J Gen Intern Med. 2007;22(12):1725-1730.
      5. Shuttleworth GN, Marsh GW. How effective is undergraduate and postgraduate teaching in ophthalmology? Eye (Lond). 1997;11 (Pt 5):744-750.
      6. Stern GA. Teaching ophthalmology to primary care physicians. The Association of University Professors of Ophthalmology Education Committee. Arch Ophthalmol. 1995;113(6):722-724.
      7. Mackay DD, Garza PS, Bruce BB, Newman NJ, Biousse V. The demise of direct ophthalmoscopy: A modern clinical challenge. Neurol Clin Pract. 2015;5(2):150-157.
      8. Kelly LP, Garza PS, Bruce BB, Graubart EB, Newman NJ, Biousse V. Teaching ophthalmoscopy to medical students (the TOTeMS study). Am J Ophthalmol. 2013;156(5):1056-1061.e1010.
      9. Akaishi Y, Otaki J, Takahashi O, et al. Validity of direct ophthalmoscopy skill evaluation with ocular fundus examination simulators. Can J Ophthalmol. 2014;49(4):377-381.
      10. Ricci LH, Ferraz CA. Ophthalmoscopy simulation: advances in training and practice for medical students and young ophthalmologists. Adv Med Educ Pract. 2017;8:435-439.
      11. Souza PHLde, Nunes GMN, Bastos JMGdeA, Fonseca TM, Cortizo V. A new model for teaching opthalmoscopy to medical students. MedEdPublish. 2017; article no. 28, DOI https://doi.org/10.15694/mep.2017.000197.
      12. Boden KT, Rickmann A, Fries FN, et al. [Evaluation of a virtual reality simulator for learning direct ophthalmoscopy in student teaching]. Ophthalmologe. 2020;117(1):44-49.
      13. Ferris JD, Donachie PH, Johnston RL, Barnes B, Olaitan M, Sparrow JM. Royal College of Ophthalmologists' National Ophthalmology Database study of cataract surgery: report 6. The impact of EyeSi virtual reality training on complications rates of cataract surgery performed by first and second year trainees. Br J Ophthalmol. 2020;104(3):324-329.
      14. Jacobsen MF, Konge L, Bach-Holm D, et al. Correlation of virtual reality performance with real-life cataract surgery performance. J Cataract Refract Surg. 2019;45(9):1246-1251.
      15. McCannel CA, Reed DC, Goldman DR. Ophthalmic surgery simulator training improves resident performance of capsulorhexis in the operating room. Ophthalmology. 2013;120(12):2456-2461.
      16. Borgersen NJ, Skou Thomsen AS, Konge L, Sørensen TL, Subhi Y. Virtual reality-based proficiency test in direct ophthalmoscopy. Acta Ophthalmol. 2018;96(2):e259-e261.
      17. Cheng A, Kessler D, Mackinnon R, et al. Reporting guidelines for health care simulation research: extensions to the consort and strobe statements. Simul Healthc. 2016;11(4):238-248.
18. Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27(1):10-28.

      Appendix. Supplementary materials

      Biography

      Grant Logan Howell, B.S. is a fourth-year medical student at the David Geffen School of Medicine at UCLA. He plans a career in ophthalmology. He is interested in medical education, and has participated in several teaching and educational research activities during medical school, including this study of direct ophthalmoscopy simulation. His career plans include teaching medical students and ophthalmology residents. He has also participated in several studies of ocular toxoplasmosis with Gary N. Holland, M.D.