Improvement of design of a surgical interface using an eye tracking device
© Barkana and Açık; licensee BioMed Central Ltd. 2014
Published: 7 May 2014
The Erratum to this article has been published in Theoretical Biology and Medical Modelling 2014 11:48
Surgical interfaces help surgeons interpret and quantify patient information, and present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces that are safe, accurate, satisfying and comfortable. One such approach, user-centered design, is used here to develop a surgical interface for kidney tumor cryoablation. An eye tracking device is used to obtain the best configuration of the developed surgical interface.
A surgical interface for kidney tumor cryoablation has been developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated cryoablation-of-a-tumor task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and numbers of fixations at informative regions of the surgical interface have been analyzed, and these data have been used to modify the surgical interface.
Eye movement data have shown that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results have shown that the overall mental workload of the surgeons related to the surgical interface was low, as intended, while their overall situational awareness scores were considerably high.
This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best interface configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability.
Increasing the speed of the surgical process and decreasing the level of invasiveness are two important objectives of a surgical procedure, along with successful treatment of the diseased lesion. Minimally invasive surgical procedures have evolved to reduce hospitalization time and surgical complexity. In a conventional minimally invasive surgery, however, a surgeon operates on deeply located lesions without directly seeing or touching them. Thus, an easy-to-use surgical interface (SI) is needed that benefits maximally from the surgeon's skills while providing all necessary information that can be perceived and processed by the surgeon during the intervention in the operating theatre. Surgical interfaces are designed to improve surgical treatment at all stages of a clinical workflow, ranging from preoperative diagnosis and planning of the surgical intervention to postoperative evaluation. In this work, the SI is used for the interpretation and quantification of patient information, and for the presentation of an integrated workflow in which all available data are combined to enable optimal treatment. Recently, several SIs that provide functions for the identification of liver segments and the planning of liver surgery have been developed.
CAScination is a well-known commercial system that integrates stereotactic technology into complex liver interventions and surgery. Its real-time visualization of surgical tools shows the surgeon where to cut or ablate. A preoperative surgical simulator has been designed to allow surgeons to plan surgical interventions for liver surgery in the PAtient Specific Simulation and PreOperative Realistic Training (PASSPORT) project. Within PASSPORT, a dynamic liver model can be developed with the help of a preoperative surgical planning simulator. Preoperative liver images and the surrounding anatomical and pathological structures are used to extract anatomical (from CT, MRI or US images), mechanical (from elastographic imaging) and biological (from biopsy and blood analysis) information about the liver. The LiverAnalyzer™ (MeVis Medical Solutions AG, Germany) and Synapse Vincent™ (FUJIFILM Co., Japan) surgical interfaces provide functions to segment the liver, vessels, biliary system and tumors, compute the volumetry of the remnant and/or graft, evaluate vascular territories, and plan the surgery. Additionally, Pathfinder™ (Pathfinder Technologies, USA) provides computer-assisted navigation and 3D visualization for surgeons and interventional radiologists, enabling accurate and efficient delivery of cancer therapeutics in soft-tissue organs. E-simulation and planning from radiological exams to surgery (E-SPress3D, Italy) has also been developed to plan and simulate surgical interventions using the information contained in CT, MRI and 3D-US images. HEPAPLAN (LabHuman Human Centered Technology, Spain) is a software package developed to help surgeons make better decisions about oncological patients in the hepatic field. Note that not only the inclusion of functions for surgery (such as navigation, 3D views, etc.), but also the systematic incorporation of these functions into the interface design by considering the surgeon's requirements, is an essential issue.
Thus, a user-centered virtual liver surgery planning system called Dr. Liver has recently been developed, which considers human factors research, usability and time efficiency issues. Human factors research describes how much and what kind of information a person can use effectively, and how information should be organized and presented to make it usable. The general objectives of human factors research are to maximize user and system efficiency, increase safety, improve system performance and increase ease of use.
Human factors research has previously been used to provide design solutions in medicine, psychology and ergonomics, disciplines in which human-machine interactions affect performance and usability. Various approaches in human factors research have been developed to design interfaces, including traditional, sociotechnological-systems, user-centered design, computer-supported design and ecological interface design approaches. In this work, we use the user-centered design (UCD) approach to develop the SI for kidney tumor cryoablation. The foundational principles of UCD are to focus initially on users and tasks, to perform empirical measurements by collecting user feedback and reactions to designs and prototypes, and to apply iterative design. One way to improve the design of a visual SI, such as the one developed here, is to investigate the eye movements of users interacting with the interface.
Even though the employment of eye-tracking data in usability research dates back to the 1950s, the slow development of reliable eye tracking technology and the difficulties arising from multidimensional gaze data analysis limited its use; only recently has there been an increase in the number of eye tracking studies. Together with research on the experience of pilots training in a simulator and on users inspecting webpages, there have been a few studies profiting from eye movement data in medical contexts [14–17]. Nevertheless, those few studies have focused on the visual search strategies of surgeons looking for medically interesting spots in scans, or have investigated similarities and differences between novices and experts performing a simulated surgical intervention. What is relevant for our design purposes is the possibility of tracking people's eye movements in order to modify and improve an interface in development. It has long been accepted that aspects of user gaze data are informative in revealing the difficulty of the task at hand. One can address the factors that influence the usability of interfaces by deducing users' visual and display-based information processing from eye movement patterns. To our knowledge, no previous study on surgical interface design has benefited from eye tracking to improve the arrangement of interface elements displayed on the screen.
The difficulty that arises while working with eye movements is the multidimensional nature of the data, which can be analyzed and interpreted in several different ways. Eye trackers typically sample the position of the eye(s) in Cartesian coordinates at successive time points. From these data one can usually extract fixation and saccade intervals and obtain several metrics, such as fixation durations, the number of fixations in a given region, and the amplitudes of saccades. Thus, an important trade-off must be considered while analyzing eye tracking data obtained from users interacting with a purpose-designed system. While there is a vast number of fixation- and eye-movement-related measures one can extract from raw eye position data, comparability across studies requires agreement on a few important measures. Nevertheless, unlike free-viewing of natural images, where each scene is one representation of the same underlying world, most user interfaces are very different from each other since they are designed for unique tasks. This requires that researchers define the particular measures that are best suited to understanding the experience of the user interacting with one particular interface. For instance, if users fixated some regions of the interface significantly more often than others, while ignoring other elements completely, the design of the interface can be improved accordingly. The fixation duration analysis, on the other hand, reflects both the informative-region search efficiency of the user and the amount of information extracted from the regions that are fixated. Thus, while analyzing the data recorded during interaction with a human-computer interface, the analysis must employ common gaze measures in a creative and innovative way that serves the purposes of the given interface-based task as well as possible.
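To make the step from raw gaze samples to fixation intervals concrete, here is a minimal sketch of a dispersion-threshold (I-DT) fixation detector. The sample format, pixel threshold and minimum duration are illustrative assumptions; this is not the proprietary algorithm used by commercial eye tracker software.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration_ms=100, rate_hz=500):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze positions in pixels, sampled at rate_hz.
    A window of samples counts as a fixation if its bounding-box dispersion
    (width + height) stays under max_dispersion pixels for at least
    min_duration_ms. Returns a list of (start_idx, end_idx, duration_ms).
    """
    min_len = int(min_duration_ms * rate_hz / 1000)
    fixations, i, n = [], 0, len(samples)
    while i + min_len <= n:
        j = i + min_len
        win = samples[i:j]
        xs = [p[0] for p in win]
        ys = [p[1] for p in win]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # grow the window while dispersion stays below the threshold
            while j < n:
                xs.append(samples[j][0])
                ys.append(samples[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j - 1, (j - i) * 1000.0 / rate_hz))
            i = j
        else:
            i += 1
    return fixations
```

Everything between two detected fixations is then treated as a saccade, from which saccade amplitudes can be derived.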
How can one use eye movement data in conjunction with task completion time and accuracy to evaluate whether a set of changes in the visual appearance of an interface has led to improved user experience? Studies employing ecological tasks such as preparing a cup of tea, as well as studies relying on natural scene presentations on a screen in the laboratory, have demonstrated that every second humans perform on average two to three fast eye movements - saccades - each of which is followed by a fixation lasting around 200-300 ms. These eye movement characteristics thus describe the default mode of active vision, and deviations from these default values while dealing with a task might reflect suboptimal performance. Accordingly, the recorded eye movement properties are compared with the default values to improve the design of our developed SI. In this study, we investigate whether the modified version of the SI results in faster and more accurate task completion in the presence of more natural gaze patterns, which would qualify as an objective improvement in usability. Thus, surgeons have been asked to conduct experiments on a simulated cryoablation using the SI together with an eye tracker in order to see whether their eye movements remain natural during this demanding task.
Surgeons are asked to find the tumor on the left kidney (target point) displayed on the SI, and to determine a suitable entry point to start the ablation. Visual information acquisition, quantified by the fixation durations and numbers of fixations at different interface elements (such as the menu, 3D images, etc.) obtained with the eye tracker, is used to improve the design of the SI configuration.
In this article, the Materials and Methods section presents the development of the SI following the four phases of the UCD approach, along with the experimental apparatus, procedure, participants, data collection and analysis. The results of the experiments are discussed in the Results section. The Discussion and Conclusion presents the potential contributions of this work. Possible directions for future work are given in Future Work.
Materials and methods
In this section, the development of the SI using the UCD approach is presented first. Then, details of the experimental apparatus, procedure, participants, data collection and analysis are provided.
Development of SI using User-Centered Design (UCD) approach
Six participants were recruited from the Faculty of Medicine, Istanbul University. Given that we are primarily interested in the gaze patterns of our participants, this small subject pool was enough to obtain a large set of fixations and eye movements. Three are surgeons (all male) from the Urology Department (Participant A (PA), Participant B (PB) and Participant C (PC)), and three are radiologists (two female and one male) from the Radiology Department (Participant D (PD), Participant E (PE) and Participant F (PF)). All participants are right-handed and have normal vision. All the urologists have experience with laparoscopic surgery, and the radiologists have experience with the kidney biopsy process. No participant had any prior experience using an eye tracker.
Participants were briefed on the nature of the study. The task was to find the tumor on the left kidney (target point) displayed on the SI, and to determine a suitable entry point to start the ablation. The participants first took part in a practice trial, during which they executed simple tasks several times to get a basic understanding of the SI. Before data acquisition was initiated, participants were seated with their heads resting on a chin-rest and were calibrated to ensure the accuracy of the recording.
Once the cryoablation task was completed, the mental workload and situation awareness of the participants were measured. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) was used to measure the mental workload of the surgeons. Mental workload measures play an important role in task allocation, and the SI design and development should ensure that the surgeon is not overloaded. NASA-TLX has been used extensively in a variety of projects to assess the cognitive load experienced while performing both open and laparoscopic operations [26–29]. The NASA-TLX requires surgeons to rate their perceived levels of mental, physical and temporal demand associated with the cryoablation-of-a-tumor task on a 20-point scale, as well as their effort, performance, and frustration during task execution. Additionally, a subjective technique, the Short Post-Assessment Situational Awareness (SPASA) questionnaire, is used to measure situation awareness (SA). It contains 11 statements with which the user expresses agreement or disagreement on a 4-point Likert scale. The sub-dimensions of SA - mission awareness (MA), spatial awareness (SpA), time awareness (TA), perception (L1), comprehension (L2) and prediction (L3) - are also captured using these 11 statements.
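As an illustration of how the six NASA-TLX subscale ratings are combined into one workload figure, the following sketch applies the standard pairwise-comparison weighting. The subscale labels and example values are assumptions for illustration; only the weighting arithmetic reflects the published procedure.

```python
def nasa_tlx_score(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings: dict mapping each of the six subscales to a 0-100 rating
    (ratings on a 20-point scale are conventionally multiplied by 5).
    weights: dict of pairwise-comparison tallies for the same subscales
    (each subscale is chosen 0-5 times; the tallies sum to 15).
    Returns the overall workload (0-100) and each subscale's share (%).
    """
    total_weight = sum(weights.values())          # 15 pairwise comparisons
    weighted = {s: ratings[s] * weights[s] for s in ratings}
    overall = sum(weighted.values()) / total_weight
    total_weighted = sum(weighted.values())
    contribution = {s: 100.0 * weighted[s] / total_weighted for s in ratings}
    return overall, contribution
```

The `contribution` dict is what lets one say, as in the Discussion, that a single factor such as mental demand accounts for a given percentage of the overall workload.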
Data collection and analysis
The remote Sensomotoric Instruments (SMI) 500 eye tracker sampled the participants' eye position and pupil size at 500 Hz. These data were saved to a log file during task execution. SMI's control and analysis software, Behavioral and Gaze Analysis (SMI BeGaze™), was used to extract the gaze-related metrics.
We defined task-relevant regions, labeled areas of interest (AOIs), on the display, and employed three metrics extracted from the eye data for the analysis. The AOIs on the SI provide a summary of task-related regions, including the CT images. In the early version of the SI we defined a total of 11 AOIs, eight of which corresponded to the CT scans. The remaining three AOIs were task-related buttons, and their data were not analyzed in this study. In the modified version of the SI we defined only two AOIs, since the number of CT scans was reduced to two after analyzing the results obtained with the early version. Three eye movement metrics were used to characterize the gaze patterns. Dwell time, the total amount of time spent in an AOI including both fixations and saccades, was used for visualization purposes only. Median fixation durations and AOI-specific fixation counts were employed for the statistical analysis. Standard permutation-based bootstrap methods were preferred since they are robust owing to the lack of a priori assumptions about the data.
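A permutation test of the kind referred to here can be sketched as follows. This is a generic difference-of-medians permutation test in pure Python, not the authors' exact analysis script; sample sizes and durations are illustrative.

```python
import random

def permutation_test_median(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference of medians.

    a, b: fixation-duration samples (ms) from two interface versions.
    Returns (observed difference of medians, p-value). The test shuffles
    the pooled data and counts how often a random split yields a
    difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)

    def median(xs):
        s = sorted(xs)
        m = len(s) // 2
        return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

    observed = median(a) - median(b)
    pooled, na = a + b, len(a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = median(pooled[:na]) - median(pooled[na:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm
```

Because the test makes no distributional assumptions, it is well suited to fixation durations, which are typically skewed.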
Evaluation of early SI results
Taking the above results into account, the SI was modified and the number of CT images displayed for entry and target point selection for kidney cryoablation was reduced to two.
Evaluation of modified SI results
[Figures: total time each participant spent during task execution; relative time on CT images; median fixation duration]
A surgical interface design may increase the effectiveness of a surgeon's performance during a surgical operation. In this work, a surgical interface (SI) for the kidney tumor cryoablation task, which places the surgeon at the center, has been designed, developed and evaluated. A user-centered design (UCD) approach was used to design the SI. Usability and functionality factors were considered during the design of the SI; these factors were selected and classified on the basis of a literature review and the personal judgement of experts. Possible configurations of the SI, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed.
Eye tracking technology is used to improve the design of the SI and obtain the optimum configuration. An eye tracking device, which provides quantitative data, is used to understand visual and display-based information processing and the factors that may impact the usability of the SI. Eye trackers provide a comprehensive approach for examining interaction processes like the ones in our study. Eye tracking studies have previously been used to highlight a surgeon's vigilance level during an operation, and to assess skill and situation awareness. Additionally, it has been shown that workload assessment can be obtained via an eye tracking experiment. Nevertheless, to the best of our knowledge, the current study is the first to use eye tracking data to introduce changes in a surgical interface, and below we discuss the significance of our findings in relation to previous studies.
Before discussing the significance of our findings, we would like to address one possible source of criticism of the present results. In our study we had a small number of participants - three of them interacted with the early SI, and another three with the modified SI. Even though we accept that a larger participant sample is needed for more conclusive statistical analysis, we note that in the case of eye-movement data this sample is sufficiently informative, because our data points consist of fixations and not of subjects. Since each subject contributes hundreds of fixations to the data pool, the analyses return reliable results. However, we intend to invite more participants for future studies in order to see whether the results obtained from the surgeons tested here generalize to others.
Analysis and interpretation of eye-tracking data is not straightforward. Selecting both the metrics that characterize gaze patterns and the statistical tools to investigate them can be a daunting task. An analysis of the gaze patterns of radiographers inspecting x-rays revealed nearly 90,000 different associations between gaze patterns, stimulus statistics, task performance and the expertise of the user. It is impossible to make sense of such an enormous data set without subjective manual filtering of the associations. In the present study, we have used only two simple fixation metrics - fixation duration and the number of fixations in task-relevant regions - with a clear goal in mind: the modification of our SI configuration such that surgeons interacting with it solve a simulated cryoablation task faster and with more natural eye movements. Using the spatial distributions of fixations, we were able to conclude that the presence of eight scans in the early SI version was redundant. This allowed us to present two larger scans at much higher spatial resolution, and consequently the task completion time decreased substantially. Moreover, as a result of this modification, the fixation durations of the participants became similar to those observed under natural viewing conditions. This latter point is especially interesting from a visual information acquisition perspective. Whereas long fixation durations might indicate difficulties in extracting information from an image region, many successive fixations of relatively short duration are associated with inefficient visual search. We suggest that the durations recorded during interaction with the modified SI correspond to optimal levels. Thus, visual interface designs can be improved by considering natural eye movement properties.
The modified SI configuration also led to increases in total fixation time on the CT scans that are crucial for determining the entry and target points during the simulated cryoablation task. Both studies of everyday behaviors such as playing cricket and research conducted in surgery training environments demonstrate that expertise in a task correlates with increased fixation durations at task-relevant regions of the environment. Even though we did not perform a comparison between novices and experts, the increase in fixation durations at the CTs might suggest that the modified SI allowed our participants to channel their task-relevant expertise more effectively.
We used the NASA-TLX and SPASA questionnaires to measure the surgeons' cognitive load and situation awareness, respectively, during the execution of a kidney tumor cryoablation task with the help of the SI. The questionnaire results showed that the overall mental workload of the surgeons related to the SI was low, as intended. However, mental demand formed nearly 29% of the overall workload among all factors. To decrease the workload further, changes to the SI design such as different screen layouts, graphic/text mixes, color combinations and icons should be considered. Furthermore, the overall SA scores of the surgeons were considerably high, although there was some need to improve SA in terms of L3 (prediction). Thus, the SI should present the necessary knowledge and information about the process to the surgeons to meet their objectives during task execution. In this study, it is possible that participants had unanswered questions about the SI and could not predict their future movements.
This preliminary study highlights the design of an SI using a user-centered design (UCD) approach and the evaluation of the developed SI using subjective (questionnaire) and objective (eye tracking) methods to obtain the optimum SI configuration. The present results provide a proof-of-concept that user interface designs configured according to eye movement characteristics lead to improved usability.
In the future, we plan to continue revising the developed SI, to include the real-time cryoablation protocol, and to collect eye tracking data in more difficult surgery-like tasks such as cryoneedle control. We believe varying surgery-like tasks will yield richer eye tracking data. We plan to add saccade properties and computational attention modeling to our eye tracking analysis. Interface designs considering eye movement characteristics might lead to improved user comfort by avoiding saccade directions and amplitudes uncommon in everyday behaviour. Computational attention models can predict the fixation locations of human subjects by relying on bottom-up saliency computed from low-level visual properties such as contrast and color. Based on these computational attention models, the saliency of informative regions may be increased to transmit the relevant information to the surgeon as quickly as possible. Moreover, it will be interesting to characterize the contributions of stimulus-driven saliency and the top-down expertise of surgeons to their gaze behavior. Thus, the investigation of the eye movements of surgeons interacting with our SI will be important from both usability and basic attention research perspectives. Furthermore, one can compare the fixations of surgeons performing the task with fixation locations generated by computational models of visual attention to improve the visual design of the developed SI.
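To illustrate the idea of bottom-up saliency, the following toy sketch scores each pixel of a grayscale image by its local center-surround contrast. Real models such as Itti-Koch operate on multiple feature channels (intensity, color, orientation) across spatial scales, so this is only a crude single-channel stand-in.

```python
def saliency_map(img, radius=1):
    """Toy bottom-up saliency: local center-surround contrast.

    img: 2-D list of grayscale intensities. Each pixel's saliency is the
    absolute difference between its value and the mean of its neighborhood,
    a minimal stand-in for the contrast channel of Itti-Koch-style models.
    """
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            # average over the (2*radius+1)^2 neighborhood, clipped at edges
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        cnt += 1
            sal[y][x] = abs(img[y][x] - acc / cnt)
    return sal
```

Regions of the interface whose saliency under such a model is low despite being informative are candidates for visual emphasis.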
We also plan to investigate the use of an eye tracking system to measure the surgeons' mental workload and situation awareness during execution of the kidney tumor cryoablation task with the SI. Eye tracking observations such as pupil size, fixation duration and saccade amplitude can be used to assess mental workload and SA. Eye tracking and questionnaire (NASA-TLX and SPASA) results can both be used to obtain the optimum SI configuration that decreases the mental workload and increases the situation awareness of the surgeon.
Publication of this article has been funded by European Union Seventh Framework Programme FP7/2007-2013 under grant agreement n. 270396 (I-SUR).
This article has been published as part of Theoretical Biology and Medical Modelling Volume 11 Supplement 1, 2014: Selected articles from the 1st International Work-Conference on Bioinformatics and Biomedical Engineering-IWBBIO 2013. The full contents of the supplement are available online at http://www.tbiomed.com/supplements/11/S1.
- Peterhans M, Vom Berg A, Dagon B, Inderbitzin D, Baur C, Candinas D, Weber S: A navigation system for open liver surgery: design, workflow and first clinical applications. The International Journal of Medical Robotics and Computer Assisted Surgery. 2011, 7: 7-16. 10.1002/rcs.360.
- Fasquel JB, Waechter J, Nicolau S, Agnus V, Soler L, Marescaux J: An XML based component oriented architecture for image guided surgery: illustration for the video based tracking of a surgical tool. Workshop on Systems & Architecture for Computer Assisted Intervention, Proceedings of the 11th International Conference on Medical Image Computing and Computer Assisted Intervention. 2008, New York, September.
- Yang X, Lee W, Choi Y, You H: Development of a user-centered virtual liver surgery planning system. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2012, 56: 1772-1776. 10.1177/1071181312561356.
- Klatzky RL, Kober N, Mavor A: Safe, Comfortable, Attractive, and Easy to Use: Improving the Usability of Home Medical Devices. Committee on Human Factors, National Research Council (U.S.). 1996, Academic Press, Washington, D.C.
- Salvendy G (Ed): Handbook of Human Factors and Ergonomics. 2006, Wiley.
- Barkana-Erol D, Erol D: Applying a user-centered design approach to develop a surgical interface for cryoablation of a kidney tumor. Proceedings of the International Workshop on Human-Machine Systems, Cyborgs and Enhancing Devices (HUMASCEND). 2013.
- Barkana-Erol D, Duru-Goksel D, Duru AD, Acik A, Ozkan M: Development and evaluation of an interface for pre-operative planning of cryoablation of a kidney tumor. Proceedings of the 3rd Joint Workshop on New Technologies for Computer/Robot Assisted Surgery (CRAS). 2013.
- Preece J, Sharp H, Rogers Y: Interaction Design. 2002, Wiley.
- Norman DA: Cognitive engineering. User Centered System Design: New Perspectives on Human-Computer Interaction. Edited by: Norman DA, Draper SW. 1986, Lawrence Erlbaum Associates, Hillsdale, NJ, 31-61.
- Fitts PM, Jones RE, Milton JL: Eye movements of aircraft pilots during instrument-landing approaches. Aeronautical Engineering Review. 1950, 9 (2): 24-29.
- Jacob RJK, Karn KS: Eye tracking in human-computer interaction and usability research: ready to deliver the promises. The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research. Edited by: Hyönä J, Radach R, Deubel H. 2003, Elsevier, Amsterdam, 573-605.
- Flemisch FO, Onken R: Detecting usability problems with eye tracking in airborne battle management support. Proceedings of the NATO RTO HFM Symposium on Usability of Information in Battle Management Operations. 2000, 1-13.
- Betz T, Kietzmann T, Wilming N, König P: Investigating task-dependent top-down effects on overt visual attention. J Vis. 2010, 10 (15): 1-14. 10.1167/10.15.1.
- Anders B, Shyu C: Studying visual behaviors from multiple eye tracking features across levels of information representation. AMIA Annu Symp Proc. 201, 72-79.
- Chetwood AS, Kwok KW, Sun LW: Collaborative eye tracking: a potential training tool in laparoscopic surgery. Surg Endosc. 2012, 26: 2003-2009. 10.1007/s00464-011-2143-x.
- Mello-Thoms C, Nodine CF, Kundel HL: What attracts the eye to the location of missed and reported breast cancers?. Proceedings of the Eye Tracking Research and Applications Symposium. 2002, 111-117.
- Zheng B, Tien G, Atkins SM, Tanin H, Meneghetti AT, Qayumi AK, Panton ONM: Surgeon's vigilance in the operating room. The American Journal of Surgery. 2010, 201 (5): 673-677.
- Poole A, Ball LJ: Eye tracking in human-computer interaction and usability research: current status and future prospects. Encyclopedia of Human Computer Interaction. Edited by: Ghaoui C. 2005, Idea Group, Hershey, PA, 211-219.
- Açık A, Sarwary A, Schultze-Kraft R, Onat S, König P: Developmental changes in natural viewing behavior: bottom-up and top-down differences between children, young adults and older adults. Frontiers in Psychology. 2010, 1: 207.
- Land MF, Hayhoe M: In what ways do eye movements contribute to everyday activities?. Vision Research. 2001, 41: 3559-3565. 10.1016/S0042-6989(01)00102-X.
- Kowler E: Eye movements: the past 25 years. Vision Research. 2011, 51: 1457-1483. 10.1016/j.visres.2010.12.014.
- Sparks DL: The brainstem control of saccadic eye movements. Nature Reviews Neuroscience. 2002, 3: 952-964. 10.1038/nrn986.
- Rayner K, Castelhano M: Eye movements. Scholarpedia. 2007, 2 (10): 3649. 10.4249/scholarpedia.3649.
- Barkana-Erol D, Ozkan M, Calisir F, Goksel-Duru D, Duru AD: Development of a surgical interface for cryoablation of kidney tumors. Proceedings of the International Work-Conference on Bioinformatics and Biomedical Engineering. 2013, 601-611.
- Nielsen J: Usability Engineering. 1993, Academic Press, San Diego.
- Gerhardt-Powals J: Cognitive engineering principles for enhancing human-computer performance. International Journal of Human-Computer Interaction. 1996, 8 (2): 189-211. 10.1080/10447319609526147.
- Carswell CM, Clarke D, Seales WB: Assessing mental workload during laparoscopic surgery. Surg Innov. 2005, 12: 80-90. 10.1177/155335060501200112.
- Stefanidis D, Haluck R, Pham T: Construct and face validity and task workload for laparoscopic camera navigation: virtual reality versus videotrainer systems at the SAGES Learning Center. Surg Endosc. 2007, 21: 1158-1164. 10.1007/s00464-006-9112-9.
- Berguer R, Smith W: An ergonomic comparison of robotic and laparoscopic technique: the influence of surgeon experience and task complexity. J Surg Res. 2006, 134: 87-92. 10.1016/j.jss.2005.10.003.
- Endsley MR: Theoretical underpinnings of situation awareness: a critical review. Situation Awareness Analysis and Measurement. Edited by: Endsley MR, Garland DJ. 2000, Lawrence Erlbaum, Mahwah, NJ, 3-32.
- Experiment Center BeGaze Manual V3.0. SensoMotoric Instruments GmbH. 2011, June.
- Efron B, Tibshirani R: An Introduction to the Bootstrap. 1993, Chapman & Hall, New York.
- Tien G, Zheng B, Swindells C, Atkins MS: Measuring situation awareness of surgeons in laparoscopic training. Proceedings of Eye Tracking Research and Applications (ETRA). 2010, 149-152.
- Schulz CM, Schneider E, Fritz L, Vockeroth J, Hapfelmeier A, Wasmaier M, Kochs EF, Schneider G: Eye tracking for assessment of workload: a pilot study in an anaesthesia simulator environment. Br J Anaesth. 2011, 106 (1): 44-50. 10.1093/bja/aeq307.
- Goldberg JH, Kotval XP: Eye movement-based evaluation of the computer interface. Advances in Occupational Ergonomics and Safety. Edited by: Kumar SK. 1998, IOS Press, Amsterdam, 529-532.
- Land MF, McLeod P: From eye movements to actions: how batsmen hit the ball. Nature Neuroscience. 2000, 3: 1340-1345. 10.1038/81887.
- Law B, Atkins MS, Kirkpatrick AE, Lomax AJ: Eye gaze patterns differentiate novice and experts in a virtual laparoscopic surgery training environment. Proceedings of the Eye Tracking Research and Applications Symposium. 2004, ACM Press, New York, 41-48.
- Foulsham T, Kingstone A, Underwood G: Turning the world around: patterns in saccade direction vary with picture orientation. Vision Research. 2008, 48 (17): 1777-1790. 10.1016/j.visres.2008.05.018.
- Hayhoe M, Ballard D: Eye movements in natural behavior. Trends in Cognitive Sciences. 2005, 9: 188-194. 10.1016/j.tics.2005.02.009.
- Itti L, Koch C: Computational modelling of visual attention. Nature Reviews Neuroscience. 2001, 2: 194-203. 10.1038/35058500.
- Henderson JM: Human gaze control during real-world scene perception. Trends in Cognitive Sciences. 2003, 7 (11): 498-504. 10.1016/j.tics.2003.09.006.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.