
Your Data Looks Good on a Dashboard


Abstract

The advent and widespread adoption of electronic health records (EHRs) have provided a unique opportunity to leverage collected data to improve patient outcomes. Answering four important questions can provide context for how data are used and displayed in clinical practice. Decisions about which data should be collected, and what those data will be used for, should be individualized to specific institutions, use cases and resource levels. Data frequently collected relate to process, financial and outcome measures. Care should be taken to measure items that will be useful for regulatory reporting, internal compliance and clinical quality improvement. Dashboards are a useful answer to the question of how to display data. While the literature lacks a consensus definition of a dashboard, it can generally be thought of as any vehicle used to display data to a target end user. Dashboards often take table or chart form; however, many advanced platforms can be made interactive. The question of how to obtain data to feed the dashboard largely depends on fiscal, privacy and infrastructure constraints. Options range from manual chart review to electronic data warehouse sampling. The most important question to answer is what will be done with the displayed data, as measurement for the sake of measurement is wasteful and cumbersome. Data have a story to tell, and quality improvement processes should be in place to respond to identified opportunities for improvement.

Introduction

The widespread adoption of electronic health records (EHRs) has produced large amounts of data that can be queried to elucidate patterns and trends that may previously have gone unrecognized (Stifter et al., 2015). Many professional societies and government entities sponsor practice-specific databases that receive data from practitioners around the United States and the world. There are numerous challenges in managing the data being collected. Logistically, issues such as data storage, privacy and data integrity are among the top concerns facing the healthcare industry today (Vanderweele et al., 2018). While these concerns are valid and timely, the question of how to meaningfully use the collected data to change and improve care is of paramount interest to clinicians and practice leaders (Clancy & Reed, 2016).

In everyday practice, data are recorded on everything from wait times and screening rates to death and readmission rates. Payers and accrediting bodies often require the tracking and disclosure of this information to maintain accreditation and, in certain cases, funding. Tracking and measuring these metrics can be expensive and administratively burdensome. This burden was acknowledged by the Institute of Medicine (IOM) in 2015 with the release of its report “Vital Signs: Core Metrics for Health and Health Care Progress” (IOM, 2015). The report detailed 15 core measures that are outcomes-oriented at the patient and systems levels and that represent current issues in the U.S. health system. With this in mind, responsible and sensible use of the data collected requires a strategy and a clear purpose.

The ultimate purpose of measuring clinical data should be for the improvement of patient outcomes. According to Blumenthal and McGinnis (2015), “if something cannot be measured, it cannot be improved” (p. 1901). Taking this one step further, if providers and staff do not have access to outcomes data, then they will not understand what needs to be improved. Adler and colleagues argued that if providers are measured on specific outcomes, they should have the authority and capacity to affect the performance measure (Adler, Hamdan, Scanlon, & Altman, 2018).

One strategy for getting those data into the hands of the people who can act on them is a dashboard. While the literature offers no single definition of exactly what a dashboard is, we generally define a dashboard as a tool allowing for the visualization of selected measures of interest. Dashboards take many forms but operate on control theory, in both its original (Carver & Scheier, 1982) and adapted (Roos-Blom et al., 2017) versions. This theory posits that individuals continually operate on a feedback loop that compares personal performance to a goal. If a feedback mechanism leads an individual to perceive that the goal is not met, then a behavior is modified in an effort to achieve the goal. Dashboards are useful for many tasks, including continuously measuring performance, detecting outliers, and analyzing acceptable performance (Chazisaeidi et al., 2015).

Do dashboards actually improve practice? It appears they may, although it is not yet known which specific characteristics of dashboards improve practice (Dowding et al., 2015). In a collaboration of eight federally qualified health centers, Fischer and colleagues found that the implementation of a provider-specific feedback tool led to significant improvement in patients’ target glycemic control (Fischer et al., 2011). This tool illustrated patient-specific outcomes data such as glycosylated hemoglobin and LDL levels. Looking at adherence to opioid prescribing guidelines in a primary care practice, Anderson and colleagues noted a significant increase in signed opioid agreements, functional assessments and urine toxicology studies after implementation of a dashboard (Anderson, Zlateva, Khatri, & Ciaburri, 2015). A study in the Stanford pediatric intensive care unit sought to increase compliance with central venous catheter insertion and care bundles through the implementation of a checklist. The team created a dashboard that displayed compliance with this bundle and found that the dashboard contributed to decreased infection rates (Pageler et al., 2014). Another study examined a large dashboard project in Sweden (Stattin et al., 2016). This dashboard was aimed at displaying process measures for urologic cancer outcomes, and researchers saw a significant improvement in six of nine performance indicators, including active surveillance for very low-risk prostate cancer and procedures performed with nerve-sparing intention.

It is important to note that in most of the literature demonstrating the value of dashboards, the dashboard was not the sole intervention taking place. Many of these studies used dashboards to monitor quality improvement initiatives and to visualize where deficiencies lay in order to determine which items required improvement. Simply presenting a dashboard to relevant stakeholders does not guarantee practice change. Improvement in outcomes at all levels requires a deliberate strategy for the use of the presented data. While a full list of accepted strategies to support quality improvement is beyond the scope of this article, our clinical practice employed the Institute for Healthcare Improvement’s (IHI) Model for Improvement methodology (IHI, 2018) when implementing a performance dashboard for concurrent improvement efforts. This framework for quality improvement prescribes small tests of change through Plan-Do-Study-Act cycles, so that the efficacy of interventions can be measured before a great deal of time and resources is expended on interventions that may not be effective.

Dashboards have evolved significantly over the past decade with the advent of interactive technologies and conglomerated data. Their use is not unique to healthcare and therefore has produced innovation from multiple business sectors. A quick search for “dashboard software” on the internet will reveal numerous free and proprietary services available for corporations of all sizes, budgets and technological prowess. So, what is right for your practice?

Implementing a Dashboard

First, what is important for you to measure? Most practices have a series of best practice targets to meet as prescribed by payers, funders, Healthy People 2020 or other national guidelines. Examples of these targets are cervical cancer screening rates, systolic blood pressure treated to target, colorectal cancer screening rates, readmission rates, etc. These clinical metrics are important to measure as they have the potential to affect patient outcomes most readily. Process measures relating to the day-to-day operations of the practice may be of interest, as some of these metrics can be associated with patient satisfaction. Examples of these measures include patient wait times, as they have been shown to influence patient satisfaction (Xie & Or, 2017; Bleustein et al., 2014), the time it takes for a provider to close an encounter, and patient no-show rates. Financial measures may also be of interest and can include the cost per encounter, cost of supplies per procedure, number of patients seen at selected acuity levels, and other related items specific to individual settings of care. Care should be taken to include measures that will be useful to improve quality, compliance, or regulatory reporting.
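As a concrete illustration of how such measures might be tabulated, the following is a minimal R sketch that computes one process measure and one clinical measure from a hypothetical encounter-level extract; the file and column names are assumptions for illustration only, not taken from any particular EHR.

```r
# Minimal sketch: tabulating one process measure (no-show rate) and one
# clinical measure (cervical cancer screening rate) from a hypothetical
# encounter-level extract. File and column names are illustrative only.
encounters <- read.csv("encounters.csv", stringsAsFactors = FALSE)

# Process measure: proportion of scheduled visits marked as no-shows
no_show_rate <- mean(encounters$status == "no_show")

# Clinical measure: screening completion among eligible patients
eligible <- subset(encounters, cervical_screen_eligible)
cervical_screen_rate <- mean(eligible$cervical_screen_completed)

round(c(no_show = no_show_rate, cervical_screening = cervical_screen_rate), 3)
```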

Second, how many people will be using this dashboard, and what is the panel size for the practice? These are important questions, as they will determine how the dashboard should be hosted. If you have a practice with a small number of end users and a smaller patient panel, using common spreadsheet software such as Microsoft Excel, Google Sheets or LibreOffice Calc to create tables and graphs that can be distributed via presentation or electronic delivery methods may be sufficient. Should you have a practice in which a large number of end users must view the information, commercial dashboard software may be more appropriate. These products are highly customizable and can be made available to end users via a secure access portal or a distributed hyperlink. Examples of these products include Tableau (Tableau, 2018), Google Cloud Business Intelligence (Google, 2018) and Microsoft Power Business Intelligence (Microsoft, 2018), to name a few. In addition, some EHRs include functionality to integrate dashboards into providers’ electronic workspaces. Each of these systems has variable pricing schemes and drill-down features that allow users to scale their experiences as they see fit. The trouble that practices may encounter when storing data to feed a dashboard using a hosted solution (that is, raw data reside outside the clinic or hospital server) is HIPAA compliance. The Office of the National Coordinator for Health Information Technology and the Health and Human Services Office of the General Counsel have mandated, as part of the HIPAA security rule, that all covered entities be HIPAA compliant and conduct a risk assessment of their organizational practices (The Office of the National Coordinator for Health Information Technology (ONC), 2018). Often, software can be licensed so that these products may be hosted from an institutional server and the data never leave a protected environment. This provides additional layers of security, as only individuals with institutional intranet access are able to access the information. Hosting dashboard software on institutional servers is complex and usually requires a dedicated information technology or analytics department, which is generally found at larger institutions.

Third, how will you obtain the data to feed the dashboard? The heart of a usable dashboard is the data behind it. Data collection strategies have variable levels of feasibility based on factors such as data availability, information technology infrastructure and sample size. Simple strategies like chart reviews are relatively inexpensive and low maintenance in comparison to large enterprise data warehouse solutions. We created a dashboard designed to display colorectal cancer and cervical cancer screening rates in a three-provider urban federally qualified health center (Figure 1).

Figure 1: Example Organization Level Dashboard using manual data abstraction with Microsoft Excel spreadsheets and R to generate reports

Our dashboard included “screening ordered” and “screening completed” rates for the specified outcomes and was presented to providers on a monthly basis. The screening rates were provided in aggregate and provider-specific formats (Figure 2). This process was completed via a chart review of all patients identified as having received service in a particular month. Data were then stored in Microsoft Excel spreadsheets. While data analysis can be conducted in Microsoft Excel, data were instead fed into R (The R Foundation, 2013), allowing for a simple method of producing standard descriptive statistics for presentation (see Supplement 1 at the end of this article). R is free, open-source software that offers a great deal of support, flexibility and usability through its integrated development environment (IDE), RStudio (RStudio, Inc., 2015).

Figure 2: Example Provider-Specific Monthly Dashboard using manual data abstraction with Microsoft Excel spreadsheets and R to generate reports
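As an illustration of this workflow, a minimal R sketch follows. It assumes the chart-review spreadsheet has been exported to a CSV with one row per patient, a provider column, and logical ordered/completed columns; these file and column names are illustrative assumptions, not our actual abstraction template.

```r
# Minimal sketch of the monthly summary step: chart-review data with one
# row per patient, a provider column, and logical ordered/completed
# columns (names are illustrative assumptions).
reviews <- read.csv("chart_review_march.csv", stringsAsFactors = FALSE)

screen_cols <- c("colorectal_ordered", "colorectal_completed",
                 "cervical_ordered", "cervical_completed")

# Aggregate (clinic-level) rates
aggregate_rates <- colMeans(reviews[, screen_cols])

# Provider-specific rates, one row per provider
provider_rates <- aggregate(reviews[, screen_cols],
                            by = list(provider = reviews$provider),
                            FUN = mean)

print(round(aggregate_rates, 2))
print(provider_rates)
```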

For large data sets, enterprise data warehouses (EDWs) or data repositories are more practical. These solutions can serve a number of purposes, both clinical and research-related. When used for both research and quality improvement, EDWs or data repositories can facilitate the translation of research findings into practice (Starren, Winter, & Lloyd-Jones, 2015). These systems can also be designed with customizable user interfaces that allow researchers or managers to develop reports targeting the types of patients they want to examine and the information about those patients applicable to their project (Horvath et al., 2014). The ability of end users to choose the data fields they want to query without the assistance of a data manager or analyst is unique to EDWs. Poulose and colleagues described their success with this approach when they used an EDW to examine mortality in patients who undergo percutaneous endoscopic gastrostomy (PEG). They used a combination of administrative, billing and clinical data to better understand which types of patients had increased mortality with PEG placement (Poulose et al., 2013). Mlaver and colleagues described their process for creating a dashboard in a medical intensive care unit focused on best practice quality measures (Mlaver et al., 2017). They designed a process in which the EHR was connected to the dashboard interface and refreshed every seven seconds, allowing near-real-time visualization of clinical data. When engineering the back end of a dashboard using an EDW, it is important to know where each desired data field comes from. For example, is it a click-enabled field or a free-text item? Generally, click-enabled fields with pre-prescribed choices are the most reliable for data capture and subsequent analysis. This is an important concept in data integrity, and it is wise to remember that your dashboard is only as good as the data that go into it. For example, a study examining the validity of EDW data related to the diagnosis of diabetic ketoacidosis (DKA) found that, upon manual review, 47% of patients coded as having DKA did not actually meet diagnostic criteria (Vanderweele et al., 2018). These errors were attributable to clinical coding accuracy, documentation and the EDW query design.
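For teams with warehouse access, a dashboard feed of this kind is often pulled with a scripted query. The sketch below uses the DBI and odbc packages in R; the data source name, table, columns and ICD-10-based case definition are hypothetical placeholders, and the final step reflects the caution above about validating coded data.

```r
# Minimal sketch: pulling a dashboard feed from an enterprise data
# warehouse with DBI/odbc. The DSN, table and column names are
# hypothetical placeholders, not a real warehouse schema.
library(DBI)

con <- dbConnect(odbc::odbc(), dsn = "edw_prod")

# Prefer discrete, click-enabled (coded) fields over free text; here a
# coded ICD-10 diagnosis column stands in for a narrative note.
dka_cases <- dbGetQuery(con, "
  SELECT patient_id, admit_date, dx_code
  FROM encounter_dx
  WHERE dx_code LIKE 'E10.1%' OR dx_code LIKE 'E11.1%'
")

dbDisconnect(con)

# The feed is only as good as the coding behind it: draw a sample for
# manual chart validation before trusting the numbers
# (cf. Vanderweele et al., 2018).
set.seed(42)
audit <- dka_cases[sample(nrow(dka_cases), min(20, nrow(dka_cases))), ]
```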

Another consideration is what clinicians want their dashboard to contain, how it should look, and how best to present the data. Visual design and customization are two of the seven essential criteria for an effective dashboard, based on a Delphi exercise among 42 experts (Karami, Langarizadeh, & Fatehi, 2017). A study in a primary care network found that a majority of providers found it helpful to visualize their data against three reference groups: other providers in the network, other providers outside the network, and national benchmarks (Ward, Morella, Ashburner, & Atlas, 2014). A comprehensive review of dashboard visualization suggested that the features of a dashboard need to be in line with its purpose and should generally be kept to a single page (Yigitbasioglu & Velcu, 2012). The review’s authors suggested that colors and graphs should not be used in ways that complicate or bias the perception of data. A systematic review of data comprehension and use across various data presentation styles found that pictographs and tables were best understood. Bar charts, although most preferred, were less likely to be understood and less likely to guide appropriate decision making (Hildon, Allwood, & Black, 2012). Using the most appropriate style of data display for those who have the ability to act on the data is essential. It is reasonable to pilot different display options and compare benchmarks on the journey to a useful dashboard.
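As a small example of these display principles, the ggplot2 sketch below keeps a single measure on a single panel, uses a neutral fill so that color does not bias perception, and draws a dashed reference line for a benchmark; the provider names, rates and 80% benchmark are invented for illustration.

```r
# Minimal sketch: a single-panel provider comparison against a benchmark.
# Data and the 80% benchmark are invented for illustration.
library(ggplot2)

rates <- data.frame(
  provider  = c("Provider A", "Provider B", "Provider C"),
  completed = c(0.72, 0.85, 0.64)
)

ggplot(rates, aes(x = provider, y = completed)) +
  geom_col(fill = "grey60") +                           # neutral fill
  geom_hline(yintercept = 0.80, linetype = "dashed") +  # benchmark line
  scale_y_continuous(labels = scales::percent, limits = c(0, 1)) +
  labs(title = "Cervical cancer screening completed, by provider",
       x = NULL, y = "Patients screened") +
  theme_minimal()
```

Drawing the benchmark as a reference line rather than an extra bar keeps the comparison visible without adding visual clutter.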

Last, what will you do with the data you display? What systems do you have in place to improve practice when opportunities are presented? An infrastructure to support quality improvement is necessary to act on the data visualized in your dashboard. Data have a story to tell and that story needs to be told in a context that allows organizations and providers to improve patient care.

Citation: Barnum, T. J., Vaez, K., Cesarone, D. & Yingling, C. (Fall, 2019). Your Data Looks Good on a Dashboard. Online Journal of Nursing Informatics (OJNI), 23(3).



Trevor J. Barnum is a graduate student in the Department of Health Systems Science at the University of Illinois-Chicago College of Nursing.

Kelly Vaez is a Clinical Assistant Professor in the Department of Health Systems Science at the University of Illinois-Chicago College of Nursing.

Diane Cesarone is the Director of the Nurse Led Clinics at the University of Illinois-Chicago College of Nursing.

Charles T. Yingling is a Clinical Assistant Professor in the Department of Health Systems Science and Director of the Family Nurse Practitioner Program at the University of Illinois-Chicago College of Nursing.

Adler, R., Hamdan, S., Scanlon, C., & Altman, W. (2018). Quality Measures: How to get them right. Family Practice Management, 25(4), 23-28.

Anderson, D., Zlateva, I., Khatri, K., & Ciaburri, N. (2015). Using health information technology to improve adherence to opioid prescribing guidelines in primary care. Clinical Journal of Pain, 31, 573-579. doi:10.1097/AJP.0000000000000177

Bleustein, C., Rothschild, D., Valen, A., Valaitis, E., Schweitzer, L., & Jones, R. (2014). Wait times, patient satisfaction scores, and the perception of care. American Journal of Managed Care, 20(5), 1-7.

Blumenthal, D., & McGinnis, J. (2015). Measuring vital signs: An IOM report on core metrics for health and health care progress. Journal of the American Medical Association, 313(19), 1901-1902. doi:10.1001/jama.2015.4862

Carver, C., & Scheier, M. (1982). Control theory: A useful conceptual framework for personality-social, clinical, and health psychology. Psychological Bulletin, 92(1), 111-135.

Chazisaeidi, M., Safdari, R., Torabi, M., Mirzaee, M., Farzi, J., & Goodini, A. (2015). Development of performance dashboards in healthcare sector: Key practical issues. Acta Informatica Medica, 23(5), 317-321. doi:10.5455/aim.2015.23.317-321

Clancy, T., & Reed, L. (2016). Big Data, Big Challenges. Journal of Nursing Administration, 46(3), 113-115.

Dowding, D., Randell, R., Gardner, P., Fitzpatrick, G., Dykes, P., Favela, J., . . . Currie, L. (2015). Dashboards for improving patient care: Review of the literature. International Journal of Medical Informatics, 84, 87-100. doi:10.1016/j.ijmedinf.2014.10.001

Fischer, H., Eisert, S., Durfee, M., Moore, S., Steele, A., McCullen, K., & Mackenzie, T. (2011). The impact of tailored diabetes registry report cards on measures of disease control: A nested randomized trial. BMC Medical Informatics and Decision Making, 11(1), 12. doi:10.1186/1472-6947-11-12

Google. (2018). Cloud Business Intelligence Solution.

Hildon, Z., Allwood, D., & Black, N. (2012). Impact of format and content of visual display of data on comprehension, choice and preference: A systematic review. International Journal for Quality in Health Care, 24(1), 55-64. doi:10.1093/intqhc/mzr072

Horvath, M., Rusincovitch, S., Brinson, S., Shang, H., Evans, S., & Ferranti, J. (2014). Modular design, application architecture, and usage of a self-service model for enterprise data delivery: The Duke Enterprise Data Unified Content Explorer (DEDUCE). Journal of Biomedical Informatics, 52, 231-242.

Institute for Healthcare Improvement (IHI). (2018, August 4). How to Improve: Science of Improvement: Testing Changes.


Institute of Medicine (IOM). (2015). Vital Signs: Core Metrics for Health and Health Care Progress. Washington, DC: The National Academies Press.

Karami, M., Langarizadeh, M., & Fatehi, M. (2017). Evaluation of effective dashboards: Key concepts and criteria. The Open Medical Informatics Journal, 11, 52-57. doi:10.2174/1874431101711010052

Microsoft. (2018). Microsoft Power Business Intelligence Software.

Mlaver, E., Schnipper, J., Boxer, R., Breuer, D., Gershanik, E., Dykes, P., . . . Lehmann, L. (2017). User-centered collaborative design and development of an inpatient safety dashboard. The Joint Commission Journal on Quality and Patient Safety, 43, 676-685.

Pageler, N., Longhurst, C., Wood, M., Cornfield, D., Suermondt, J., Sharek, P., & Franzon, D. (2014). Use of electronic medical record-enhanced checklist and electronic dashboard to decrease CLABSIs. Pediatrics, 133, 738-746. doi:10.1542/peds.2013-2249

Poulose, B., Kaiser, J., Beck, W., Jackson, P., Nealon, W., Sharp, K., & Holzman, M. (2013). Disease-based mortality after percutaneous endoscopic gastrostomy: Utility of the enterprise data warehouse. Surgical Endoscopy, 27, 4119-4123.

Roos-Blom, M., Gude, W., De Jonge, E., Spijkstra, J., Van der Veen, S., Dongelmans, D., & De Keizer, N. (2017). Development of a web-based quality dashboard including a toolbox to improve pain management in Dutch intensive care. Studies in Health Technology and Informatics, 235, 584-588. doi:10.3233/978-1-61499-753-5-584


RStudio, Inc. (2015). RStudio: Integrated Development for R. Boston, MA: RStudio Inc.

Starren, J., Winter, A., & Lloyd-Jones, D. (2015). Enabling a Learning Health System through a Unified Enterprise Data Warehouse: The experience of the Northwestern University Clinical and Translational Sciences (NUCATS) Institute. Clinical and Translational Science, 8(4), 269-271. doi:10.1111/cts.12294

Stattin, P., Sandin, F., Sandback, T., Damber, J., Lissbrant, I., Robinson, D., . . . Lambe, M. (2016). Dashboard report on performance on select quality indicators to cancer care providers. Scandinavian Journal of Urology, 50(1), 21-28. doi:10.3109/21681805.2015.1063083

Stifter, J., Yao, Y., Kamran, M., Dunn-Lopez, K., Khokhar, A., Wilkie, D., & Keenan, G. (2015). Using electronic health record (EHR) "Big Data" to examine the influence of nurse continuity on a hospital-acquired never event. Nursing Research, 64(5), 361-371.

Tableau. (2018). Tableau Software.


The Office of the National Coordinator for Health Information Technology (ONC). (2018). Security Risk Assessment Tool. 

The R Foundation. (2013). The R Project for Statistical Computing. Vienna, Austria: The R Foundation.

Vanderweele, J., Pollack, T., Oakes, D., Smyrniotis, C., Illuri, V., Vellanki, P., . . . Wallia, A. (2018). Validation of data from electronic data warehouse in diabetic ketoacidosis: Caution is needed. Journal of Diabetes and Its Complications, 32, 650-654.

Ward, C., Morella, L., Ashburner, J., & Atlas, S. (2014). An interactive, all-payer, multidomain primary care performance dashboard. Journal of Ambulatory Care Management, 37(4), 339-348.

Xie, Z., & Or, C. (2017). Associations between waiting times, service times, and patient satisfaction in an endocrinology outpatient department: A time study and questionnaire survey. Inquiry: The Journal of Health Care Organization, Provision, and Financing, 54, 1-10. doi:10.1177/0046958017739527

Yigitbasioglu, O., & Velcu, O. (2012). A review of dashboards in performance management: Implication for design and research. International Journal of Accounting Information Systems, 13(1), 41-59. doi:10.1016/j.accinf.2011.08.002

Supplement 1: Sample customizable R code for automating data analysis that can be used by those not familiar with statistical programming