
Have your cake and eat it too: Eliminating healthcare data silos

Esteban Rubens

Healthcare suffers from a data silo problem. It's by no means a unique problem; silos are everywhere in IT. But it's still worth examining, given the negative repercussions silos have for patients and providers alike. In a typical healthcare-delivery organization such as a hospital, an urgent-care facility, an imaging center, an ambulatory surgery center, or a medical office, there are dozens to hundreds (even thousands!) of applications running in production.

Here are some common critical (tier 1) applications:

EHR (electronic health record)
EMR (electronic medical record)
HIS (hospital information system)
PACS (picture archiving and communication system)
RIS (radiology information system)
CVIS (cardiovascular information system)
LIS (laboratory information system)
CPOE (computerized physician order entry)
HIE (health information exchange)
PHM (population health management)
RCM (revenue cycle management)
Supply chain management
Analytics
Telehealth
Medication and claims management
VNA (vendor-neutral archive)


And there are many others beyond these critical applications. Because there are so many, IT teams aren't always aware of everything running in their infrastructure. Sometimes the only indication that an application was ever deployed is that it stops working, which in turn causes problems elsewhere. This reminds me of an anecdote I heard from the IT team at a large U.S. integrated delivery network a few years ago. Users started complaining of inaccessible files, but the file-serving stack seemed to be working normally. After time-consuming research that involved following mysterious cables and prodding around dusty storage rooms, the team found an old Novell NetWare server that had been hidden behind drywall during renovations.

With the advent of server virtualization, healthcare IT organizations consolidated the compute side of the infrastructure into server farms powered by hypervisors. Unfortunately, the storage side hasn’t seen much change in terms of consolidation, even though the technology exists to make things better. Each tier 1 application often runs in its own dedicated storage silo—and it’s likely that other applications do, too.

Why do we still have data silos?

Shared storage, whether accessible as blocks, files, or objects, has existed for decades. The reality, however, is that due to nontechnological factors, the scourge of data silos is still with us. There are various reasons for that: application vendors publish conservative resource requirements meant to protect their SLAs; someone decides that a tier 1 application must run in its own silo; different funding sources drive inefficient resource utilization; or, because "that's the way we've always done it," the data for different applications lives in separate silos, often in the same data center.

Why is this a problem? The most obvious answer is that silos add complexity. Having multiple storage platforms adds to the "keeping the lights on" workload for IT, no matter how simple each array might be to operate and maintain. There's also the issue of wasted space: each array has unused capacity that can't be pooled for the benefit of the organization. Multiple arrays mean more rack space used, more power consumed, and more cooling required. They also add to the burden of configuring data protection, whether through backups, snapshots, or other methods, making it more likely that something will fall through the cracks and not be adequately protected. And multiple support contracts not only create administrative overhead but also represent redundant spending that most healthcare institutions can't afford to overlook.

Any organization interested in using clinical data to improve patient care and help clinicians (that is, to support the Quadruple Aim) faces further challenges from data silos. Training deep neural networks requires large amounts of labeled data from many sources; in healthcare, that can mean data from systems such as EHR, PACS, VNA, and PHM. Data scientists already face enough barriers to obtaining and preparing data; they don't need the extra burden of navigating a maze of silos. This problem becomes apparent whenever a healthcare institution decides to start an AI program or stand up a data science team.

The solution: consolidation with NetApp ONTAP Adaptive QoS

Consolidating data silos without compromising performance and availability for critical applications, or affordability for lower-tier applications and archival data, might seem like an intractable problem. But there's a simple solution; it's possible to have your cake and eat it too. With the adaptive quality of service (AQoS) capability in NetApp® ONTAP® data management software, it makes sense to build a healthcare organization's entire infrastructure on a single platform that can scale up (capacity) and out (performance and availability) while enabling all applications to meet even the most stringent SLAs.

In its simplest implementation, QoS keeps applications "behaving well" in terms of storage I/O. In other words, QoS prevents a performance-hungry application from monopolizing I/O resources and slowing down other applications. This seems simple enough, but QoS is typically implemented at the volume level, and each application uses many storage volumes. Furthermore, QoS parameters need to be adjusted whenever a volume is created, changes size, or is moved. In the aggregate, whether in a single storage array or many, a typical healthcare IT team would need to monitor thousands of volumes and make QoS rule changes accordingly. A Gordian knot emerges.
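To make that treadmill concrete, here is a minimal sketch, in Python against ONTAP's REST API, of the housekeeping loop that static, per-volume QoS implies. The endpoint paths and field names reflect my reading of the ONTAP REST API; the 2,048-IOPS-per-TB sizing rule, the credentials, and the pre-existing static-*iops policy names are illustrative assumptions, not a recommended workflow.

```python
import requests

ONTAP = "https://cluster.example.com"  # hypothetical cluster address
AUTH = ("admin", "password")           # placeholder credentials only

def retune_static_qos(iops_per_tb: int = 2048) -> None:
    """Recompute a static IOPS ceiling for every volume from its current size."""
    vols = requests.get(
        f"{ONTAP}/api/storage/volumes",
        params={"fields": "uuid,name,size"},
        auth=AUTH,
        verify=False,
    ).json()["records"]

    for vol in vols:
        # Derive the ceiling from the volume's size in terabytes.
        ceiling = int(vol["size"] / 1e12 * iops_per_tb)
        # Assign a (pre-created, assumed) static policy sized to match.
        # Every volume create, resize, or move lands back in this loop.
        requests.patch(
            f"{ONTAP}/api/storage/volumes/{vol['uuid']}",
            json={"qos": {"policy": {"name": f"static-{ceiling}iops"}}},
            auth=AUTH,
            verify=False,
        )
```

Whether it runs as a script or as a human with a spreadsheet, this loop never goes away: each change to any volume invalidates the previous ceiling. That ongoing toil is exactly what AQoS removes.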

Be like Alexander the Great and slice through this knot of complexity with the adaptive QoS of ONTAP. With AQoS, each storage volume is constantly monitored, and its QoS rules are dynamically adjusted. No matter how many volumes your applications need, or how those volumes change size or location, AQoS maintains the SLAs that each application vendor or developer requires. You can also deploy AQoS with Ansible, automating IT operations on top of the benefits of eliminating data silos. With AQoS, you can finally consolidate all of your organization's application data; simplify operations; save time, space, and money; and improve performance, uptime, and data protection.
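For contrast with the loop above, here is a sketch of the AQoS approach under the same assumptions: a single adaptive policy expressed in IOPS per terabyte, which ONTAP then scales per volume as sizes change, with no retuning loop. The adaptive.expected_iops and adaptive.peak_iops payload fields, the SVM name, and the numbers shown are assumptions to verify against the ONTAP REST API reference for your release.

```python
import requests

ONTAP = "https://cluster.example.com"  # hypothetical cluster address
AUTH = ("admin", "password")           # placeholder credentials only

def create_adaptive_policy(svm: str = "svm_tier1") -> None:
    """Create one adaptive policy; ONTAP rescales it per volume automatically."""
    resp = requests.post(
        f"{ONTAP}/api/storage/qos/policies",
        json={
            "name": "aqos-tier1",
            "svm": {"name": svm},
            "adaptive": {
                "expected_iops": 2048,  # performance floor, per TB of volume
                "peak_iops": 4096,      # performance ceiling, per TB of volume
            },
        },
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()
```

Teams that prefer Ansible can drive the same kind of operation through modules in the netapp.ontap collection, which is one way to fold AQoS into broader IT automation.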

Intrigued? Let us share how NetApp customers accomplish these goals. To explore how AQoS can help your team consolidate storage while improving application performance, contact your NetApp sales team or NetApp partner, or get in touch with us directly.

Esteban Rubens

Esteban joined NetApp to build a Healthcare AI practice that leverages our full portfolio to help create ML-based solutions that improve patient care and reduce provider burnout. Esteban has been in the Healthcare IT industry for 15 years, going from being a storage geek at various startups to spending 12 years as a healthcare-storage geek at FUJIFILM Medical Systems. He's a visible participant in the AI-in-healthcare conversation, speaking and writing at length on the subject. He is particularly interested in the translation of machine learning research into clinical practice and the integration of AI tools into existing workflows. He is a competitive powerlifter in the USAPL federation, so he'll try to sneak in early-morning training wherever he's traveling.
