Evaluating visual analytics for health informatics applications: a systematic review from the American Medical Informatics Association Visual Analytics Working Group Task Force on Evaluation
Wu DTY, Chen AT, Manning JD, Levy-Fix G, Backonja U, Borland D, Caban JJ, Dowding DW, Hochheiser H, Kagan V, Kandaswamy S, Kumar M, Nunez A, Pan E, Gotz D. Evaluating visual analytics for health informatics applications: a systematic review from the American Medical Informatics Association Visual Analytics Working Group Task Force on Evaluation. J Am Med Inform Assoc. 2019 Apr 1;26(4):314-323. doi: 10.1093/jamia/ocy190. PubMed PMID: 30840080.
This article reports results from a systematic literature review related to the evaluation of data visualizations and visual analytics technologies within the health informatics domain. The review aims to (1) characterize the variety of evaluation methods used within the health informatics community and (2) identify best practices.
A systematic literature review was conducted following PRISMA guidelines. PubMed searches were conducted in February 2017 using search terms representing key concepts of interest: health care settings, visualization, and evaluation. References were also screened for eligibility. Data were extracted from included studies and analyzed using a PICOS framework: Participants, Interventions, Comparators, Outcomes, and Study Design.
After screening, 76 publications met the review criteria. Publications varied across all PICOS dimensions. The most common audience was healthcare providers (n = 43), and the most common data-gathering methods were direct observation (n = 30) and surveys (n = 27). About half of the publications (n = 36) focused on static, concentrated views of data. Evaluations were heterogeneous in both setting and the measurements used.
A variety of approaches have been used to evaluate data visualizations and visual analytics technologies. Usability measures were used most often in early (prototype) implementations, whereas clinical outcomes were most common in evaluations of operationally deployed systems. These findings suggest opportunities both to (1) expand evaluation practices and to (2) innovate with respect to evaluation methods for data visualizations and visual analytics technologies across health settings.
Evaluation approaches vary widely. New studies should adopt commonly reported metrics, context-appropriate study designs, and phased evaluation strategies.