Introduction: Recognizing the Work Before the Work
The Epic team has been spending a lot of time on the aspects of the project that are most visible to the user community – building the system, deciding which order sets and content will be available in it, and making all of the decisions needed to get to that point.
One thing that has been less visible is what we are going to do with all of the data that exists in our current systems. Our future state depends on having a clear data pathway for the data we already have, and on layering that data together with Epic and our new workflows.
In this regard, the archive system we are implementing, together with our decommissioning and conversion plans, is the bridge that is taking us from our current state to our future state.
Our goal in doing this is to make it easy to find data from our heritage systems while in Epic. Our paradigm is that we always want to maintain user and patient context across these integrations, so that we serve the best interests of our patients and providers. We never want a separate login or a separate patient search window when looking up legacy data from our heritage systems while in Epic. Linking users seamlessly to the archive makes these lookups easy and efficient, boosting both clinical efficiency and the patient experience.
Importantly, there are a lot of component parts that lead up to our ability to do conversions and system archives in the first place. This is especially true for conversions. For example, we have a whole team working on our Enterprise Master Patient Index (EMPI) clean-up. We started with a 30 percent duplication rate across our EMPI; we've recently gotten that down to about 11 percent, and our goal is to get below three percent by the time we go live on Epic. This is a great example of a project that has to precede data conversion – we can't pull all of the data from Cerner, Allscripts or any other downstream system until the EMPI work is done. Once the EMPI is loaded into Epic, we can start working through the downstream systems for the conversion effort.
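The matching tooling behind that EMPI clean-up isn't described here, but a minimal, hypothetical sketch of the idea – grouping records on a normalized name and date of birth, then counting the extras to get a duplication rate – might look like this (the names, fields and sample data are illustrative assumptions, not RWJBH's actual matching rules):

```python
# Illustrative only: a toy duplicate-detection pass over patient records.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class PatientRecord:
    mrn: str          # medical record number from one heritage system
    last_name: str
    first_name: str
    dob: str          # ISO date, e.g. "1984-07-02"

def match_key(rec: PatientRecord) -> tuple:
    """Deterministic blocking key: normalized name plus date of birth."""
    return (rec.last_name.strip().lower(), rec.first_name.strip().lower(), rec.dob)

def duplication_rate(records: list[PatientRecord]) -> float:
    """Share of records that are extra copies of an already-seen patient."""
    groups: dict[tuple, list[PatientRecord]] = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec)
    duplicates = sum(len(g) - 1 for g in groups.values() if len(g) > 1)
    return duplicates / len(records) if records else 0.0

records = [
    PatientRecord("A100", "Smith", "Pat", "1984-07-02"),
    PatientRecord("B217", "SMITH ", "pat", "1984-07-02"),   # same person, second system
    PatientRecord("C309", "Jones", "Lee", "1990-01-15"),
]
print(f"duplication rate: {duplication_rate(records):.0%}")  # -> 33%
```

Production EMPI matching is far more sophisticated than this (probabilistic scoring, address and identifier weighting, manual review queues), but the basic shape – group likely matches, then measure and work down the duplicate count – is the same.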
The end result of this is that when a user looks up results in Epic, that data will simply be there – it will have been discretely converted before we fully go live with Epic. The results will look as if they just came out of the lab.
Heritage Data Management in Three Categories: Archiving, Decommissioning and Converting
In health systems where many hospitals come together, oftentimes many applications perform the same function – for example, three payroll systems or three electronic health records (EHRs). In today's healthcare landscape, this is not the best way to go, both from a cost perspective and in terms of data management. Most importantly, maintaining a collection of disparate, fragmented systems is not what is best for the patient.
Currently, for example, if an RWJBH end user needs to get a patient record from HIM (Health Information Management), HIM needs to go into about 50 different systems. Some of these systems are not very searchable, while some of them are downright ancient. Believe it or not, some of our systems go all the way back to the 1980s.
On the other hand, with Epic, RWJBH is deploying one unified platform that puts the full power of patients' entire medical history into the palm of providers' hands, creating a revolution in clinical efficiency and patient safety. Furthermore, retiring systems generates savings that offset much of the cost of our adoption of Epic.
We are taking TDS, GE Centricity, Allscripts TouchWorks, Cerner PowerWorks/PowerChart, Allscripts SCM and MEDITECH and centralizing all of it into Epic, relieving HIM of the burden of searching records across different systems to assemble a patient's complete medical chart. Furthermore, Epic's Share Everywhere tool lets patients give non-Epic physicians access to their health information, closing the loop and further enhancing the patient experience. All in all, we will be moving over more than 150 different systems on our Epic Together journey.
To this end, our Heritage Data Management project is a key part of our transition to Epic. It is of paramount importance. Our Heritage Data Management initiative will allow us to put clinically relevant legacy patient data and financial data into Epic, from all of our heritage systems. To achieve this, we are doing three things:
- Archiving clinical and financial records
- Decommissioning various applications and systems
- Converting large amounts of data
It is important to note that archiving, decommissioning and converting go hand in hand – none of them can happen without the others. Below, we explain each of the three aspects of our Heritage Data Management project in detail.
Decommissioning is About Reducing Risks and Lowering Costs
Our decommissioning strategy is driven by functional obsolescence, defined as "the reduction of an object's usefulness or desirability because of an outdated design feature that cannot be easily changed." In many cases, including ours, redundant functionality and lack of use in heritage systems arise as a result of merger and acquisition (M&A) activity.
Decommissioning Offsets Much of the Cost of Epic Together
It is important to note that the current cost of maintaining many disparate systems is actually greater than the cost of transitioning to Epic. Financially, decommissioning systems allows us to save money and repurpose those dollars. We've developed a ten-year TCO (Total Cost of Ownership), and while we are making a major investment in Epic, much of that cost will be offset by the savings that decommissioning brings.
Reducing Operational Costs and Security Vulnerabilities at the Same Time
One main driver of our decommissioning strategy is the continued cost of supporting older systems, along with infrastructure costs: hardware and software licensing, ongoing energy and space costs for unretired servers, and maintenance costs for older hardware. Another driver is the fact that vendors are often unable to fix vulnerabilities in these systems, creating security risks (such as a potential data breach).
In decommissioning, we exercise "application priority," meaning we focus first on decommissioning the applications that cost the most and are the riskiest to support. The security aspect is extremely important, because our security team has deemed outdated systems and software to be a significant risk.
Generally speaking, older applications also tend not to be properly maintained, and older server platforms cannot always be properly updated, which creates further risk. Our JBoss application server, for example, will be decommissioned because it runs outdated applications.
It is important to get these systems off of our network and decommission them, as data destruction – and tracking that destruction – is a big part of eliminating vulnerabilities.
While we are decommissioning, remember that we are not losing the data within those systems. All of our data will either be in Epic, or in our archive, Galen. While we are getting rid of systems, we are not getting rid of data. This is a key point. Rightfully, people have taken ownership over each of our systems within their departments; however, there is nothing to be nervous about in terms of data loss.
Finally, we could never decommission systems in the first place without archiving, which leads us to the importance of archiving.
Archiving is About Maintaining Continuity of Patient Care
In reality, it is simply not humanly possible to convert every piece of existing data into Epic. That is why the archive is so important – it allows providers to find all of the information they need in order to treat a patient, even after certain systems and applications have been decommissioned. It provides significant cost savings as well as a common place to store our heritage records from archived systems. With this approach, RWJBH will be able to focus on maintaining one live system (Epic), while still having all of our critical data in the archive so that continuity of care goes uninterrupted.
Leveling Up Our System While Adhering to Regulatory Requirements
Furthermore, both clinical and financial data will exist in the archive system. When decommissioning systems, there is often a regulatory requirement to maintain both clinical and financial data. We are required to follow state data retention periods, and RWJBH also defines its own retention periods that are as long as or longer than the state requirements.
In some cases, data retention periods can be 20 years or longer. Meanwhile, we support internally some applications that are no longer supported by the vendor. Not only does this create risk in terms of a potential breach, it can also result in a financial scenario where we are locked into paying a vendor for 20 years for an application that is no longer supported.
In contrast, with our approach, we are making it easier to maintain our own environment, while staying within regulatory requirements.
Importantly, RWJBH is working with a single archiving vendor covering multiple platforms (i.e., clinical and financial) to make the process as easy as possible – both for IT to do the archiving and for end users to find the clinical and financial heritage data they need. Overall, we have simplified the archiving and decommissioning process to a significant degree.
User-friendliness: Accessing the Archive is an Intuitive Experience
From a user interface (UI) perspective, when clicking the link to the Galen archive within Epic, users will feel as if they never left Epic. Clinicians will be able to launch straight out of Epic and seamlessly find what they need, all in one place. They can choose which system they want to look at, or search by document type (e.g., lab result values). This information can be pulled up quickly and easily, and then the clinician can go back into Hyperspace without a hitch.
In addition, clinicians will remain linked to the same patient when they launch from Epic into the archive. There is a single sign-on into the archive system, ensuring that patient and user context are maintained when linking out to another system. This further ensures continuity of care. This ease of use is powerful, in that it saves clinicians time and energy as they look into the archive for patient data. Users can easily choose whether to look at the entirety of the patient's record or only specific parts of it.
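The exact Epic-to-Galen launch mechanism isn't detailed here, but as a generic, hypothetical sketch of what a context-preserving link-out amounts to – handing the archive the current user, the current patient and a signed token so no second login or patient search is needed – consider the following (the endpoint, parameters and signing scheme are assumptions for illustration, not the actual integration):

```python
# Illustrative only: build a launch URL that carries user and patient context.
import hashlib
import hmac
import time
from urllib.parse import urlencode

ARCHIVE_BASE_URL = "https://archive.example.org/launch"   # hypothetical endpoint
SHARED_SECRET = b"replace-with-a-real-shared-secret"      # shared with the archive

def build_archive_launch_url(user_id: str, patient_mrn: str) -> str:
    """Carry the current user and patient into the archive, so there is no
    second login and no second patient search on the other side."""
    params = {
        "user": user_id,
        "patient": patient_mrn,
        "ts": str(int(time.time())),   # timestamp to limit replay of the link
    }
    payload = urlencode(sorted(params.items())).encode()
    params["sig"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{ARCHIVE_BASE_URL}?{urlencode(params)}"

print(build_archive_launch_url(user_id="jdoe", patient_mrn="123456"))
```

The point of the sketch is simply that context travels with the link: the archive trusts the signed parameters, opens directly on the right patient, and the clinician never re-authenticates or re-searches.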
Conversion is About Patient Care and Provider Satisfaction
There are a few key reasons for doing data conversion – providing continuity of care, assisting with clinical decision support, maintaining physician productivity, protecting patient safety and providing comprehensive patient care. Having legacy clinical data posted properly in Epic helps strengthen physician trust in the new system and eases the transition into the new EMR. Filing this data appropriately requires several associated mapping and build requirements to be in place.
The main difference between conversion and archiving is that conversion is done for a certain number of years of data, and this data is put into Epic directly for clinical decision-making. Archiving, on the other hand, is done to keep all of the data from our heritage systems accessible as legal documentation. Once we are live with Galen (our enterprise archiving solution), it can serve as the legal medical record and be used to view any information from our heritage systems that is not available in Epic.
There are several validation steps within conversion: initial extraction, small-scale validation (which is more on the technical side), large-scale validation and full-scale validation. And extraction is provided in different formats; for some systems, in-house technical employees at RWJBH can do the extraction, while for other systems we work with the vendor to do extraction for us.
Step One: Initial Extraction and Small-Scale Validation
In the first step, conversion analysts and someone from the heritage system verify that the data coming across is correct and identify whether any initial changes need to be made to the extraction format.
Step Two: Large-Scale Validation
After that, we do two rounds of large-scale validation. In this step, clinical validators (i.e., physicians, nurses or whoever has access to the heritage system) further validate the data. They work with the Epic security team to get access to the Epic conversion environment, so that they can compare the two systems and see if anything is missing or needs to be changed.
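As a hypothetical illustration of the kind of technical check that backs up this validation work – comparing what was extracted from a heritage system against what was filed in the conversion environment – a simple reconciliation might look like this (the file and field names are assumptions, not the actual conversion reports):

```python
# Illustrative only: reconcile a heritage extract against a conversion-load report.
import csv

def load_result_ids(path: str) -> set:
    """Read one column of result identifiers from a CSV report."""
    with open(path, newline="") as f:
        return {row["result_id"] for row in csv.DictReader(f)}

def compare_loads(source_csv: str, converted_csv: str) -> None:
    source = load_result_ids(source_csv)
    converted = load_result_ids(converted_csv)
    missing = source - converted      # extracted from the heritage system but never filed
    unexpected = converted - source   # filed in the conversion environment but not extracted
    print(f"source results:    {len(source)}")
    print(f"converted results: {len(converted)}")
    print(f"missing:           {len(missing)}")
    print(f"unexpected:        {len(unexpected)}")

# Hypothetical file names, for illustration:
# compare_loads("heritage_lab_extract.csv", "epic_conversion_report.csv")
```

Checks like this catch count mismatches; the clinical validators then judge whether what did come across actually reads correctly in the chart.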
Step Three: Full Extraction and Production Load
By the third step – full extraction – there should be no changes needed to the extraction format. When working with technical teams and vendors, if too many changes are still needed by the production load step, the process has become too messy to execute. It is necessary to iron out all formatting issues by the time we get to full-scale validation and production load.
We typically like providers to do full-scale validation because, while large-scale validation involves just bits and pieces of a patient's chart (e.g., lab or radiology results), full-scale validation is about making sure that the patient's entire chart looks good as a whole and that nothing is missing. At this point, we know for certain whether any more changes need to be made to the provider's view in Epic, or whether anything else needs tweaking or tuning. Then validation is complete.
After that, we prepare for production load. We do a dry production load to identify how much time we will need to load the data before go-live, especially for the gap loads in cutover planning. In this step, we develop an overall picture of what the conversion involves.
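As a rough, hypothetical illustration of how the dry run feeds cutover planning, the throughput observed during the dry production load can drive the estimate of the full-load and gap-load windows (the numbers below are invented for the example, not actual RWJBH figures):

```python
# Illustrative only: throughput-based estimate of conversion load windows.
def estimated_load_hours(record_count: int, records_per_hour: float) -> float:
    """Simple throughput-based estimate for planning the load window."""
    return record_count / records_per_hour

# Throughput observed during the dry production load (invented numbers):
dry_run_records = 2_000_000
dry_run_hours = 40
throughput = dry_run_records / dry_run_hours        # 50,000 records per hour

full_load = 5_000_000      # records planned for the main conversion load
gap_load = 250_000         # records created between the main load and go-live
print(f"full load: ~{estimated_load_hours(full_load, throughput):.0f} hours")
print(f"gap load:  ~{estimated_load_hours(gap_load, throughput):.0f} hours")
```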
Thank you to Our Clinical Validators
It goes without saying that validation is extremely important for conversion, as this is the data that will be used for patient care. The data coming into Epic needs to be of very high quality. To that point, we very much appreciate when clinicians take an interest in validating the clinical data we are pulling across. We rely on our clinicians to identify any piece of clinical data coming into Epic that is incorrect or needs to be reviewed. This extra work can be time-consuming, but we are very happy to have our clinical validators at RWJBH and have great appreciation for them – they help us significantly with our conversion work.
In Conclusion: This is a Game-changing Moment for RWJBH
It should be simple and easy for patients to get their chart from all of the hospitals and practices they have been to, with the click of a button. In this future state, the entire historical view of the patient is all there, across all systems – that is what Epic Together is all about. And our Heritage Data Management project is getting us there; it is the bridge that is taking us from where we are now to where we need to be.
With Epic, we are giving all of our patients a voice, and helping our providers to thrive. To reiterate, decommissioning is about reducing risks and lowering costs, archiving is about ensuring the continuity of patient care and converting is about patient care and provider satisfaction. The safety of our patients, and the flourishing of our providers, are at the very center of our Heritage Data Management project and Epic Together.