
Mimic Technologies Blog

The Impact of Residents (and Training) on Patients

Like any new technology, robotic surgery has drawn a lot of focus on ensuring that new users are adequately trained, and simulation has played a large part in this. As the technology has become more mainstream, training requirements have expanded from existing surgeons to residents and fellows, who must develop the skills needed to adapt to the new technologies used in their practice.

Earlier this year we discussed a paper published by the EAU describing a curriculum designed so that, by its end, fellows would be deemed safe and competent to operate on patients independently. As with many approaches to teaching surgery, the procedure is broken into specific steps that the trainee must master before being allowed to carry out the whole procedure.

A typical prostatectomy is divided into the following 7 steps: bladder takedown, endopelvic fascia, bladder neck, seminal vesicle/vas deferens, pedicle/nerve sparing, apex, and anastomosis. Typically, a trainee will be given a maximum time, of say 30 minutes, to complete one of these tasks during a procedure. Once they have shown that they have mastered the task, they are allowed to move on to another task and eventually to the whole procedure. This is obviously easier to accomplish on parts of the anatomy and procedures that can be standardized.
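To make the idea concrete, here is a minimal sketch in Python of how such stepwise sign-off could be tracked. The step names come from the list above, but the class, the 30-minute cap as a hard rule, and the sign-off flag are purely illustrative assumptions, not Mimic's or the EAU's actual software.

```python
# A minimal, hypothetical sketch of stepwise sign-off: the trainee must
# complete each step within a time cap (30 minutes here, as in the example
# above) and get attending sign-off before moving on to the next step.
from dataclasses import dataclass, field

PROSTATECTOMY_STEPS = [
    "bladder takedown", "endopelvic fascia", "bladder neck",
    "seminal vesicle/vas deferens", "pedicle/nerve sparing",
    "apex", "anastomosis",
]

TIME_CAP_MIN = 30  # illustrative cap; programs set their own limits


@dataclass
class TraineeProgress:
    mastered: set = field(default_factory=set)

    def record_attempt(self, step, minutes, attending_signoff):
        """Mark a step as mastered if completed within the cap and signed off."""
        if minutes <= TIME_CAP_MIN and attending_signoff:
            self.mastered.add(step)

    def next_step(self):
        """Return the first step not yet mastered, or None if ready for the whole case."""
        for step in PROSTATECTOMY_STEPS:
            if step not in self.mastered:
                return step
        return None
```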

Until recently, there have not been many studies examining this practice to see what the potential impact on patients could be when comparing surgeries performed by the attending surgeon alone with those where parts of the case were handed over to a resident.

Dr. Thiel from the Mayo Clinic in Jacksonville, Florida, has published a paper on just this topic, comparing 140 cases where only an attending was involved in the surgery to 232 cases where a resident took over part of the case.

There were no differences in key clinical outcomes such as positive margins, length of stay, catheter days, readmissions, or re-operations when comparing surgeon-only to resident-involved cases. There was, however, a difference in mean operative time between surgeon-only and resident-involved procedures (190.4 min vs. 206.4 min, P = 0.003).

[Table: clinical outcomes, surgeon-only vs. resident-involved cases]

The researchers also noted that residents were more likely to be involved with at least 1 procedural step after the purchase of the dV-Trainer.

Mimic believes in this way of training residents, which is why the Maestro AR procedural curricula we have developed are divided into the procedural steps that a resident will be required to learn. We have been able to marry narrated 3D video content with didactic exercises that test a student’s knowledge. At the appropriate point, the corresponding psychomotor exercise is inserted to make sure that the student has the skills required for the procedural step.

Mimic currently has the following available:

  • Right Partial Nephrectomy, Dr. Inderbir Gill, USC
  • Hysterectomy, Dr. Arnold Advincula, Columbia University
  • Inguinal Hernia Repair, Dr. Rick Low, John C. Lincoln Hospital
  • Prostatectomy (Si), Dr. Henk van der Poel, Antoni van Leeuwenhoek Hospital/Netherlands Cancer Institute in Amsterdam
  • Prostatectomy (Xi), Dr. Vip Patel, Florida Hospital
  • In Development for Q4 ‘16 Release:
    • Lower Anterior Resection, Dr. Eduardo Parra, Florida Hospital

Click here for more information

 


Things to Consider When Looking for a Robotic Surgery Simulator

There are many aspects of a training simulator to consider when making an initial investment in simulation training. For robotic surgery, we believe the top factors to consider are:

  • Validation studies conducted on and using the simulator
  • Fidelity of the controllers
  • Accessibility of the simulator
  • Data, data, data!


Since Mimic launched its first version of the dV-Trainer in 2007, there has been a growing number of new robotic surgery simulators entering the market. The real impetus for simulation training was made clear in 2010 when Intuitive Surgical decided to launch their own Skills Simulator, a backpack-like addition for the da Vinci® Si platform.

Intuitive Surgical chose to license 27 exercises that Mimic had already developed or was in the process of developing especially for ISI. This was made possible by the new design of the system, which allowed the console to operate independently of the patient-side cart and core. Since 2010, both the ROSS Simulator from Simulated Surgical Systems and the Robotix Mentor from Simbionix (now 3D Systems) have entered the playing field.

The installed base of da Vinci® surgical systems is now over 3,500 systems around the world, and close to 2,000 simulators have been installed and used to support it. The majority of these are da Vinci® Skills Simulators (running Mimic’s licensed software), and close to 12% of robotic surgery simulators are Mimic’s dV-Trainers.

[Table: installed base of da Vinci® surgical systems and robotic surgery simulators]

Our estimate is that over 70% of institutions performing robotic surgery have access to a simulator of some form or another and that close to 90% of robotic surgeons will at some point have tried a simulator. In fact, since 2007 we believe that between the dV-Trainer and the da Vinci® Skills Simulator over 6.25 million exercise sessions have been completed.

So has all of this simulation training activity been valuable, you may ask? One way to assess simulation training is through validation studies. There are currently five different ways of determining validity. Starting with the basics of Face, Content, and Construct validity and moving to more valuable forms such as Concurrent and Predictive validity, the definitions are:

Face validity:  Does the simulator have a realistic look and feel, compared to the actual surgical system?

Content validity: Is the simulator useful as a training tool for the surgical system?

Construct validity:  Does the simulator have the ability to distinguish between Novice and Expert users?

Concurrent validity:  How does the simulator compare to a similar or related construct (Dry Labs, Tissue Lab, etc.) carried out on the real robotic surgical system?

Predictive validity:  Can the simulator be used to predict actual performance in the O.R.?

Face and Content validity are of relatively low value as they are subjective; the most highly valued validation studies are those for Construct and Predictive validity. The table below shows the number of papers that have been published on the various types of validation. As you can see, there have been over 30 papers published on Mimic software, on either the dV-Trainer or the da Vinci® Skills Simulator platform.

[Table: published validation studies by simulator and validity type]
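To illustrate how construct validity is commonly tested in practice, the sketch below compares novice and expert simulator scores with a non-parametric test. The scores are made up and the choice of SciPy’s Mann-Whitney U test is just one common option; individual validation studies use a variety of statistics.

```python
# Hypothetical illustration of a construct-validity check: do novice and
# expert score distributions differ on the simulator? A non-parametric
# Mann-Whitney U test is a common choice; the scores below are made up.
from scipy.stats import mannwhitneyu

novice_scores = [52, 61, 58, 47, 65, 55]   # hypothetical overall scores (%)
expert_scores = [88, 91, 84, 93, 87, 90]

result = mannwhitneyu(novice_scores, expert_scores, alternative="two-sided")
print(f"U = {result.statistic}, p = {result.pvalue:.4f}")
# A small p-value suggests the simulator separates novices from experts,
# i.e. evidence of construct validity.
```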

Recently, simulation was a large part of the discussion at the FDA town hall meeting in Washington. Roger Smith from Florida Hospital presented a comparison of the different simulators that he led (the table above is adapted from his presentation). The data made clear that most of the research focus on the simulators was on the controllers and how closely they emulate the real robotic surgeon’s console. Obviously, the da Vinci® Skills Simulator, which uses the real console, is the real thing. For the other simulators, however, this is where concurrent validity becomes extremely important, as you are essentially replicating on the simulator the same activity a surgeon would perform on the real robotic surgical system.

A direct head-to-head study between Mimic’s dV-Trainer and the da Vinci® Skills Simulator was done by Prof. Jacques Hubert and his team at the STAN Institute in Nancy, France. During the study, participants completed the same exercises on both systems, and the researchers found that on average there was only about a 3% difference in overall score between the two (89.9% vs. 86.8%). This varied by the type of exercise but remained consistent with internal benchmarking carried out by Mimic. No studies have been done to the same extent on the ROSS and Robotix Mentor systems.

Another component to take into consideration when choosing a robotic surgery simulator is accessibility to the system. While the great thing about the da Vinci® Skills Simulator is that it uses the real console, that same fact can also be a significant drawback. Very few hospitals can afford to have a dedicated console outside the OR that is used purely for training and simulation. If an institution is lucky enough to have a dual-console system, the simulator will run on the second console, but that console is still kept in the OR. The value of the second console is in allowing programs with residents to keep training new surgeons without interrupting the flow and efficiency of the OR. Data shows that simulation systems in the OR are used less than systems outside the OR, for the simple reason that as robotic programs become more successful and utilization increases, there is just not enough time left for training.

All things considered, any learning experience is only as good as the objectives and goals that are being set for the student and how well they are being tracked. The MScore system allows tailored pass marks, proficiency levels and curricula to be set for the students based on their learning objectives. A multitude of metrics and data can be reviewed to allow a student to learn from their mistakes and improve their psychomotor skills.

So when looking for a simulator, make sure to find one that is validated, has high-fidelity controllers, can be accessed 24/7 outside the OR, and has a flexible management and scoring system that can be tailored to meet your learning objectives. In the Tanaka study referred to in Roger Smith’s presentation at the FDA meeting, the observation was made that while the majority of study participants preferred the usability of the da Vinci® Skills Simulator, 70% felt the dV-Trainer was the best value for the money when taking all things into consideration.

Click here for more validation studies on Mimic and the dV-Trainer

 


What’s in a Score? (and What is the Data Telling You)

by: Christopher Simmonds

Data, data, data. That is all we seem to hear about today in healthcare. One consequence of the Affordable Care Act has been that hospitals, physicians, surgeons and nurses have become obsessed with data and information to an extent like never before. By looking at information across large data pools, trends can be identified, and the behaviors that drive those trends can be discovered and, if needed, modified. Robotic surgery is one of the areas where a lot of this analysis is occurring.

Robotic surgery is truly a misnomer, as in reality it is computer-assisted surgery in which a computer has been placed between the surgeon and the patient, enhancing the surgeon’s capabilities compared to other surgical techniques. If the robot were compared to a superhero, its role would be to turn the surgeon into Iron Man, whose everyday actions are enhanced by the power of computing.

The fact that there is a computer between the surgeon and the patient means that a lot of data can be captured, something the FDA specifically noted at its town hall meeting in July 2015. In addition, a main focus of that meeting was training and simulation, which is also computer-based and captures a lot of information, including a surgeon’s actions, which can then be translated into a scoring system. So what can these scoring systems for robotic surgery training tell us?

If you study surgeons long enough, you can see that some surgeons are very precise in their motions and others less so. When training new surgeons, there are also certain good habits you would like them to develop, such as keeping their instruments in view at all times and making sure they do not use too much force or drop things. For these reasons the MScore system, which underpins all the scoring on the dV-Trainer, looks at both efficiency and good-habit metrics when calculating overall scores.


Typically, you should be rewarded for efficiency and penalized for bad habits.

When Mimic initially developed the MScore system, it was a percentage-based scoring system. The scores were based on the weighted average of all individual metrics as compared to an expert baseline. While this provided a simple and easy way to display the score, it may not have been the best at helping an individual focus on specific areas of improvement: a high percentage in one area could compensate for a low percentage in another area while still producing an acceptable overall percentage. Mimic refers to this as the classic scoring system.
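As a rough sketch of that idea (not Mimic’s actual MScore code), a classic score can be modeled as a weighted average of per-metric percentages, which is exactly why one strong metric can mask a weak one:

```python
# Illustrative only (not Mimic's MScore code): a classic percentage score as
# a weighted average of per-metric percentages, each already expressed
# relative to an expert baseline.
def classic_score(metric_pcts, weights):
    total_weight = sum(weights.values())
    return sum(weights[m] * pct for m, pct in metric_pcts.items()) / total_weight

# Hypothetical numbers: strong efficiency metrics mask a weak blood-loss metric,
# yet the overall percentage still looks acceptable.
pcts = {"economy_of_motion": 95, "time_to_complete": 90, "blood_loss": 40}
weights = {"economy_of_motion": 1.0, "time_to_complete": 1.0, "blood_loss": 1.0}
print(round(classic_score(pcts, weights), 1))  # 75.0
```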

After being challenged by educators, Mimic decided to take inspiration from FLS and develop what it now refers to as its proficiency-based scoring system.

Like the classic scoring system, the revised MScore system is based on expert user benchmarks; however, proficiency is measured as being within one standard deviation of the mean score of those experts. As an example, if five surgeons’ results have been pooled to produce the benchmark, you roughly have to perform better than at least one of those surgeons in order to pass. Instead of the overall result being a combination of the scores, you have to become proficient at each individual metric before you can pass. The example below shows an individual who has passed in all other areas but failed in the area of blood loss. The number shown is a weighted addition of all the metrics together. The user would likely have passed in a percentage-based system, as their superior scores in all the other metrics would have compensated for their lower score in blood loss.

[Figure: scoring comparison, classic vs. proficiency-based MScore results]
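The logic can be sketched as follows, using hypothetical metric names and expert data purely to illustrate the concept; it is not the MScore implementation:

```python
# Illustration of the proficiency concept (hypothetical metrics and expert
# data, not the MScore implementation): every metric must fall within one
# standard deviation of the expert mean, or the attempt fails.
from statistics import mean, stdev

def proficiency_threshold(expert_values, lower_is_better):
    mu, sd = mean(expert_values), stdev(expert_values)
    return mu + sd if lower_is_better else mu - sd

def attempt_passes(attempt, expert_pool, lower_is_better):
    """Return True only if every metric meets its expert-derived benchmark."""
    for metric, value in attempt.items():
        limit = proficiency_threshold(expert_pool[metric], lower_is_better[metric])
        within = value <= limit if lower_is_better[metric] else value >= limit
        if not within:
            return False  # e.g. proficient everywhere except blood loss -> fail
    return True

# Hypothetical example: good time, but blood loss misses the benchmark -> fail.
experts = {"time_sec": [300, 320, 310, 330, 305], "blood_loss_ml": [10, 12, 8, 11, 9]}
lower_is_better = {"time_sec": True, "blood_loss_ml": True}
print(attempt_passes({"time_sec": 315, "blood_loss_ml": 25}, experts, lower_is_better))  # False
```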

The other difference between the classic scoring system and the proficiency-based scoring system is that you can set proficiency thresholds. In FLS, for example, for students to pass they need to complete the same exercise successfully twice consecutively and ten times non-consecutively. The same principle has been introduced into MScore, defaulting to two consecutive and five non-consecutive passes, though this can be modified by the end user.
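One reasonable reading of that default rule, sketched with a hypothetical helper rather than MScore’s actual code, is shown below:

```python
# One reasonable reading of the default rule (illustrative helper, not
# MScore code): proficiency needs at least `total_required` passing attempts
# overall and a streak of `consecutive_required` passes in a row.
def is_proficient(attempt_results, consecutive_required=2, total_required=5):
    total_passes = sum(attempt_results)
    best_streak = streak = 0
    for passed in attempt_results:
        streak = streak + 1 if passed else 0
        best_streak = max(best_streak, streak)
    return total_passes >= total_required and best_streak >= consecutive_required

# Passes on attempts 2, 3, 5, 6, 7: five passes total, longest streak of three.
print(is_proficient([False, True, True, False, True, True, True]))  # True
```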

Mimic realized early on that it did not have all the answers and therefore ensured that the scoring system was developed with an open-architecture approach. Expert-level benchmarks can be input from peer-reviewed literature as well as from scores posted by surgeons within specific institutions, and weighting and proficiency levels can be modified to meet specific needs. That said, established curricula and benchmarks such as the Morristown protocol are often used and have been implemented across many systems.

Overall, both the classic scoring system and the proficiency-based scoring system are helping surgeons improve their performance, which is a good thing, though it will probably take someone longer to pass a proficiency-based curriculum than a percentage-based one. In some instances this data is being used as part of annual certification programs, but that will be the subject of another blog post, another day.


 

 
