
Update from the European Board of Ophthalmology

Comprehensive European Board of Ophthalmology Diploma Exam

The European Board of Ophthalmology Diploma (EBOD) exam was the original exam run by the EBO for over 20 years. The exam is intended to further the mission of the EBO to harmonize the knowledge and training of ophthalmologists across Europe.

This communication outlines an update to our scoring and standard setting, which will change how candidate results are determined. These changes standardise the approach taken by the EBO Onsite and EBO Online exams, ensuring that all cohorts of candidates (whether taking the exam online or onsite) are treated in the same way. The Education Committee will watch closely as this updated approach is delivered, to determine whether any further adjustments are needed based on the outcomes seen in both onsite and online exams.

What has changed?

Specifically, the manner in which candidates are scored and the passing requirements have been updated. The EBO Education Committee has developed these changes to ensure both exams require the same standard to pass.

  • Candidates must now score 6 or greater in Part I (Written / MCQ section) to pass the section and to pass the entire exam
  • Part I (Written / MCQ section) will consist of 30 “traditional” EBO MCQs (Multiple True / False statements) and 30 SBA (Single Best Answer) questions
  • There will be no negative marking in Part I
  • In Part II (oral exam Viva Voce / Online Clinical Cases), standardised cases will be used, and candidates will be marked on the number of questions they answer correctly
  • Scores from the standardised cases will be rescaled in line with the rescaling of the MCQ (the score equal to the average minus one standard deviation will be assigned a 6, with scores above and below this rescaled accordingly)
  • Exams will be delivered via the online exam platform (rather than paper-based), unless circumstances prevent this.
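The rescaling described above can be sketched as follows. This is an illustrative sketch only: the function name and the clamping of grades to the 0–10 range are assumptions, not the EBO's actual implementation.

```python
from statistics import mean, stdev

def rescale(raw_scores):
    """Map each raw score to a 0-10 grade. The cohort mean minus one
    standard deviation is anchored to a grade of 6, and each further
    standard deviation shifts the grade by one point.
    Clamping to the 0-10 range is an illustrative assumption."""
    mu, sd = mean(raw_scores), stdev(raw_scores)
    anchor = mu - sd  # this raw score is assigned the grade 6
    return [max(0.0, min(10.0, 6 + (s - anchor) / sd)) for s in raw_scores]
```

For example, in a cohort with mean 50 and standard deviation 10, a raw score of 40 would map to a grade of 6 and a raw score of 60 to a grade of 8.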

What has not changed?

  • The format of the exam: A written section followed by a series of cases based on the exam syllabus
  • The exam is still weighted 40% to the MCQ and 60% to the Viva Voce/ Clinical Cases
  • Candidates must score a 6 overall to pass
  • Candidates may score less than a 6 in ONE viva voce/ clinical case station, if their overall score is still above a 6
  • Candidates will sit the 2 parts of the exam in one day, with the results released on the next day.

Please read the detail regarding these changes below.

 

Historically, the European Board of Ophthalmology comprehensive exam was organised into two sections:

The MCQ section was composed of questions structured as a stem (a context-setting phrase or term) followed by 5 statements, each of which candidates would mark as “True” or “False”. In 2016, negative marking was introduced to dissuade candidates from “wild guesses”. This was offset by a “Don’t know” option, which attracted a score of 0 – so candidates who did not know an answer could mark “Don’t know” and avoid the penalty for an incorrect answer.

The second section was a “viva voce” – a series of four 15-minute interviews with experts. Up until 2019, examiners would bring cases on PowerPoint slides to discuss with candidates. The slides would display medical history and diagnostic data, based on which candidates were expected to outline how they would handle the case. Candidates would receive a score on a scale from 4 to 10, reflecting the examiners’ estimation of how well the candidate would manage the case. A 6 was considered a pass mark. If an examiner awarded a score below 6, they were asked to provide comments as to why the candidate failed. These comments could be used in the deliberation of borderline candidates, and as feedback.

In practice, candidates would answer MCQ questions in pencil on OMR (Optical Mark Recognition) score sheets. Examiners would likewise score candidates, and provide comments for those who failed, on OMR-readable score sheets. These sheets were designed and provided by Speedwell Software (UK).

The candidates’ outcome was determined in five steps:

  1. the MCQ section scores for each candidate were added to provide a total (simple addition of points)
  2. a formula was used to convert the MCQ score into a numerical grade between 0 and 10. The average score minus one standard deviation was assigned the numerical grade of 6. Further grades were defined based on the number of standard deviation(s) above or below this score
  3. each candidate was awarded this numerical grade of 0 – 10 for their MCQ (“MCQ-10 score”)
  4. the MCQ-10 score was used in a formula that weighted the MCQ score at 40% and the Viva Voce score at 60% (determined by adding 40% of the MCQ-10 score and 15% of each viva voce station score)
  5. Candidates had to achieve an overall score of 6 to pass. Furthermore, they could have at most one section of the exam (the MCQ or one viva voce station) with a score lower than 6; scoring lower than 6 in two sections was considered a fail even if the overall score was above 6. A reminder that this is a comprehensive exam, so it went against the purpose and philosophy of the exam to allow someone to pass based on extremely good knowledge of just one area.
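Steps 2 and 4 above amount to simple arithmetic, which can be sketched as follows. The function and parameter names are illustrative assumptions, not the EBO's actual implementation.

```python
def mcq_to_grade(raw_mcq, cohort_mean, cohort_sd):
    # Step 2: the average raw score minus one standard deviation is
    # assigned the grade 6; each standard deviation above or below
    # shifts the grade by one point.
    return 6 + (raw_mcq - (cohort_mean - cohort_sd)) / cohort_sd

def weighted_total(mcq10, station_scores):
    # Step 4: 40% of the MCQ-10 score plus 15% of each of the
    # four viva voce station scores (4 x 15% = 60%).
    return 0.4 * mcq10 + sum(0.15 * s for s in station_scores)
```

For example, a candidate with an MCQ-10 score of 6 and station scores of 7, 6, 5 and 8 would have an overall score of 0.4×6 + 0.15×(7+6+5+8) = 2.4 + 3.9 = 6.3.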

In 2018, the EBO reviewed the exam content and decided to develop a more standardised approach for the viva voce section. This entailed preparing and using standardised cases for all candidates in a specific round. In 2019, a “pilot” exam was run with a small subgroup of candidates, who were given standardised cases – i.e. all candidates were given the same case to discuss in any given hour of examination. Different cases were used in different hours to prevent candidates from informing peers of the subject matter / cases covered; however, the purpose of each case (its level of difficulty, approach, subject etc.) was kept similar across cases.

In 2018, there was also agreement to run a second, Autumn, exam at the DOG meeting in October 2020. Discussion at that point focused on whether to have two exams per year (one in May, at the French Society of Ophthalmology (SFO) conference in Paris, and one in October, at the German Ophthalmological Society (DOG) conference in Berlin), or to alternate the hosting of the exam over the years.

In 2020, COVID pandemic restrictions prevented a large onsite exam from taking place. However, the Swiss Ophthalmology Society received special permission to host an exam in Interlaken. This exam assessed only Swiss candidates, who sat the usual MCQ and viva voce. The Speedwell eSystem (already used as a question bank by the EBO) offered an online exam platform to deliver the MCQs, which candidates answered on laptops / iPads. A novel approach was used for the Viva Voce, whereby one Swiss examiner was joined on Zoom by an international examiner to interview and mark candidates together; again, examiners could input scores for the candidates they interviewed using the Speedwell eSystem. A review of this exam noted difficulty in establishing and maintaining connections for “remote” examiners.

In 2021, COVID pandemic restrictions were in place across Europe, and there was little confidence about when they would be lifted. Several countries saw multiple lockdowns, and travel was severely disrupted. It was decided to run the EBOD examination fully online, so that candidates could sit the exam from home. At this time, several “online proctoring” services were available that could monitor candidates taking exams online. The MCQ section was updated with the introduction of Single Best Answer questions, a more complex question type. This section could otherwise remain the same: whether candidates were online or onsite, they would identify a correct answer to each question item they were presented. However, for the Viva Voce, a key issue from the previous year remained: if videoconferencing was used to connect candidates to examiners, there was a chance people could drop out or miss the exam. Therefore, EBO consulted with Speedwell about options for running an online version of the viva voce.

A “Clinical Case” was designed to replicate the classical Viva Voce. It has the same approach and purpose: to present candidates with a case in which they must identify key diagnostic information, determine a diagnosis and identify what should be done next. Candidates type very short (up to 100 characters), free-text responses to each question. Model answers were provided, against which the candidates’ answers could be compared; if they matched, a pre-defined score was automatically awarded. Scoring was aligned with the standardised viva voce – the candidate would progress through a case over around five questions, and for each correct answer they were awarded 1 point. There was no negative marking. In some questions, a candidate might be asked for multiple suggestions, in which case the marks awarded would be a fraction based on the number of items sought (e.g. each of “two diagnoses” may be worth 0.5 points; each of “three key features” may be worth 0.33 points). In practice, the system required human intervention to review and update the automatically awarded scores: often, a difference in spelling or description meant the system did not recognise the match between a candidate’s answer and the model answer. Therefore, examiners manually reviewed and updated the scoring awarded by the system.

On review of the results of Clinical Cases, EBO discovered a difference in the overall scores achieved by candidates for Viva Voce cases vs Clinical Cases. Scores achieved in the Clinical Cases section were significantly lower than those achieved in the classical Viva Voce. Several reasons for this disparity were proposed:

  1. In Clinical Cases, candidates could only answer the question as presented. If they misunderstood, they would answer incorrectly. In an interview situation, an examiner had the opportunity to ask the question again in a different way to better understand the candidate’s level of knowledge.
  2. Also, as candidates get no feedback as they go through Clinical Cases, if they do not recognise the appropriate clinical features in a case, they may get a diagnosis wrong and then get the management/ treatment wrong – like a series of falling dominoes, one incorrect answer could lead to more.
  3. Furthermore, the scoring systems differed. In the classical Viva Voce, candidates were scored from 4 to 10, based on the examiner’s assessment of candidate knowledge. In Clinical Cases, candidates were scored from 0 to 10, and could only score a point if they answered a question correctly. This difference could be somewhat equalised, as standardised Viva Voce cases will now be scored in this way.

In light of these differences, and on review of the exam outcomes, the EBO decided to make some adjustments, both to allow for the differences and to account for the fact that, in the first online exam, there were several additional stressors that could affect candidate performance. It was therefore agreed to reduce the passing score to 5 (which yielded a similar pass rate to previous years). It was also agreed that candidates could score less than 6 in more than one section and still pass. In subsequent exams, EBO added further nuance, requiring that candidates score at least a 5 in the MCQ section (to confirm a good theoretical knowledge base). In 2023, scaled scoring was introduced in the Clinical Cases, whereby Clinical Case scores were rescaled from 4 to 10 to more closely match the classical Viva Voce scores.

For 2024 and beyond, EBO are considering holding one onsite exam and one online exam per year, alternating the onsite exam between Paris in May (in association with the SFO) in one year and Berlin in October (in association with the DOG) the next. The online exam will take place in October when the onsite exam is in May, and in May when the onsite exam is in October. Surveys have found that roughly one third of candidates prefer an onsite exam, while two thirds prefer online exams. This preference can be due to the convenience of online exams, the expense of travelling to attend an onsite exam, and the busy work schedules of younger colleagues, who may also be juggling young families.

This raises a fundamental question on harmonising the exam scoring systems. One step has already been taken, in that the onsite Viva Voce will use standardised cases (much like the “Clinical Cases”). EBO consulted with CESMA (Danny Mathysen, University of Antwerp, Belgium, who worked on the original scoring system for the EBO exam) and the statistician who has been assisting with scoring since 2018 (David Young, University of Strathclyde, Glasgow, Scotland).

Their advice was considered, and EBO will now apply the following system to both onsite and online exams.

Exam responses (for MCQ and Viva Voce) will be input directly into the online Speedwell eSystem. For the onsite EBO Comprehensive exam, EBO will provide iPads / tablets for candidates taking the exams. For the online exam, candidates will need to ensure that their equipment at home is compatible with the examination systems used (a test for this purpose will be provided a few weeks in advance of the exam).

With regard to each section:

Written / MCQ & SBA (Part I)

  1. Will be composed of 30 traditional MCQs (Multiple True / False Questions) and 30 Single Best Answer questions
  2. Negative marking will be removed
  3. Duration will be 2 hours (reduced from 2.5 hours, as candidates no longer have to “transfer” their answers from the question sheet to an answer sheet). If a technical problem means candidates must revert to paper answer sheets, the duration will be increased back to 2.5 hours.
  4. It will now be mandatory to score a 6 in this section to pass the exam.

Viva Voce and Clinical Cases (Part II)

  1. Will use standardised cases
  2. In onsite situations, examiners will score candidates based on how well they match model answers
  3. In online situations, examiners will review free-text input to compare candidate answers with model answers
  4. The onsite exam will be 60 minutes to cover 8 cases (in 4 stations), whereas the online exam will be 80 minutes to cover 8 cases (in 4 stations), as candidates will need to type the answers, which naturally takes longer than talking

Final outcome

The “raw scores” of MCQs and Viva Voce / Clinical Cases will be scaled from 1 – 10.

For each part (MCQ, each of the four Viva Voce/ Clinical Case Stations) the rescaled score of 6 is the passing mark.

The MCQ and Viva Voce/ Clinical Case rescaled scores will be put through the original algorithm, which gives 40% weight to the MCQ score and 60% weight to the Viva Voce/ Clinical Cases: 0.4*MCQ + (0.15*Station A Score) + (0.15*Station B Score) + (0.15*Station C Score) + (0.15*Station D Score)

To pass, the result of this algorithm must be 6 or above.

Candidates should score at least 6 in all sections, but may score lower than 6 in one Viva Voce / Clinical Case station if their MCQ score is 6 or above and their overall score is 6 or above.
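Taken together, the final-outcome rules can be combined into a single check. The sketch below uses the weighting formula given above; the function and variable names are illustrative assumptions, not EBO's actual code.

```python
def passes(mcq, station_scores):
    """Decide pass/fail from rescaled (0-10) scores.
    mcq is the Part I grade; station_scores are the four
    Part II (Viva Voce / Clinical Case) station grades."""
    # Overall weighted score: 40% MCQ, 15% per station
    overall = 0.4 * mcq + sum(0.15 * s for s in station_scores)
    if mcq < 6:  # Part I must itself be passed
        return False
    if sum(1 for s in station_scores if s < 6) > 1:
        return False  # at most one station may score below 6
    return overall >= 6  # overall weighted score must reach 6
```

For example, a candidate scoring 7 in the MCQ and 5, 7, 7 and 7 in the stations still passes (one weak station, overall 6.7), whereas a candidate scoring 5 in the MCQ fails regardless of station scores.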

For onsite exams, final outcomes will be released the next day. Unfortunately, some candidates may fail as a result of scoring lower than 6 in Part I (MCQ section). However, these candidates will not be identified until Saturday, as there is no way to process the results before Part II (Viva Voce) commences on Friday.

For online exams, final outcomes will be released after 2 weeks. This will allow time for review of all responses in the Clinical Cases section (free-text input, for which experts will review and update the machine-assigned scores for each response).

Summary

In this way, EBO hope to maintain a high-quality assessment of theoretical knowledge and clinical acumen for comprehensive ophthalmology. Furthermore, the harmonisation of scoring methodology and standard setting will ensure both onsite and online exams are recognised as being of equal quality, despite their differing delivery methods.




