Simulator training for image-guided surgical interventions would benefit from intelligent systems that track the evolution of task performance and regulate individual speed–precision strategies through effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. As shown here, this may compromise the evolution of their precision scores, sometimes irreparably, unless it is controlled for as early as possible. Artificial intelligence could help ensure that a trainee reaches his or her optimal individual speed–accuracy trade-off by monitoring individual performance criteria, detecting critical trends at any given moment, and alerting the trainee as early as necessary to slow down and focus on precision, or to focus on getting faster. For effective benchmarking, it is suggested that the individual training statistics of novices be compared with those of an expert surgeon. The speed–accuracy functions of novices trained over a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be detected and controlled automatically before further training on specific surgical task models, or clinical models, is envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained.
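The monitoring-and-alerting scheme described above can be sketched in a few lines: compare a trainee's recent trial statistics against an expert benchmark and advise whether to prioritize precision or speed. The benchmark values, the window size, the z-score threshold, and the `feedback` function itself are all illustrative assumptions, not parameters reported in this work.

```python
from statistics import mean

# Hypothetical expert benchmark: (mean, SD) of trial time in seconds
# and precision error in millimetres, taken from an expert surgeon's
# sessions. These numbers are placeholders for illustration only.
EXPERT = {"time": (12.0, 1.5), "error": (1.0, 0.3)}

def z(value, benchmark):
    """Standard score of a trainee statistic against the expert benchmark."""
    m, sd = benchmark
    return (value - m) / sd

def feedback(times, errors, window=5, z_limit=2.0):
    """Advice from the last `window` trials: flag precision first, so a
    trainee who is getting faster at the cost of accuracy is alerted
    before the precision trend becomes hard to reverse."""
    t = mean(times[-window:])
    e = mean(errors[-window:])
    if z(e, EXPERT["error"]) > z_limit:
        return "slow down: focus on precision"
    if z(t, EXPERT["time"]) > z_limit:
        return "focus on getting faster"
    return "on track"
```

Checking precision before speed encodes the point made above: early speed gains must not be allowed to mask a degrading precision trend.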
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.