Imagine a health care system that aspires to be among the nation’s elite. It builds new facilities, launches a successful new marketing campaign to attract patients, upgrades its clinical capabilities and adds hundreds of employees and medical providers.
Now imagine that the same rapidly growing system houses its information technology nerve center in a facility designed to meet the needs of hospitals and health care systems in the late 1990s.
One might call that a classic disconnect, and it’s been a real-life dilemma for UCHealth. Over the past year, however, the system has been rewriting that story with a massive physical and electronic migration and information technology upgrade that is slated to wrap up in October.
It’s a tale of both brains and brawn. After many months of preparation, the Information Technology team in February began moving servers from the UCHealth Data Center in Building 500 on the Anschutz Medical Campus to a large facility in the southeast Denver metro area owned by the Zayo Group, a global network provider.
The migration is long overdue, said Brent Starr, director of Information Technology for UCHealth. University of Colorado Hospital moved its Data Center operations into Building 500 in 1997, installing additional power and cooling capacity at that time. The hospital redesigned and beefed up its server architecture in 2010 in preparation for launching the Epic electronic health record system. But for the most part, the space remained a product of the past.
“It has most of the original power sources and all of the original cooling,” Starr said. Most importantly, it was vulnerable to outages. A lone power feed topped a short list of “single points of failure” that could and sometimes did snarl communications.
The Zayo facility, which includes a secure, dedicated 50-rack caged and roofed space for UCHealth’s servers, receives power from two different Xcel substations, said Craig Hollenbaugh, the system’s vice president of Information Technology. That’s part of the overall strategy to bolster the Data Center’s backups for power, cooling, communication, and information, thus guarding against weak links that can bring the system down.
Thus far, nearly 400 servers have made the trip south. Over the next month or so, another 375 servers will follow, while 169 will be decommissioned. When the move is complete, the University of Colorado will take over the vacated Building 500 space, which it owns.
Cloud cover
The move is not all heavy lifting. It also involves server virtualization: that is, using software to divide a single physical server into multiple server environments, Starr said. That work allowed IT to lighten its server fleet considerably, while increasing the available avenues for work to continue when one or more devices go down.
“We’ve designed everything with dual paths to run the workload,” Starr said. The architecture is also scalable, he said, meaning the server hardware can handle two to three times the current capacity as UCHealth grows.
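For readers who want to see the idea in concrete terms, the short sketch below models it in Python. It is a simplified illustration rather than UCHealth’s actual tooling: the host names, capacities and counts are invented, and the point is only that many virtual servers can be packed onto a small pool of physical hosts, each with a primary and a backup path.

```python
# Simplified, hypothetical sketch of the approach described above:
# many virtual servers consolidated onto a few physical hosts, each
# given a primary and a secondary placement so a single host failure
# still leaves a path for the workload. Names and numbers are invented.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int                                # virtual servers this host can run
    vms: list = field(default_factory=list)

def place_with_failover(vm_names, hosts):
    """Give each virtual server a primary host and a different secondary host."""
    placements = {}
    for i, vm in enumerate(vm_names):
        primary = hosts[i % len(hosts)]
        secondary = hosts[(i + 1) % len(hosts)]  # always different hardware
        primary.vms.append(vm)
        placements[vm] = (primary.name, secondary.name)
    return placements

hosts = [Host(f"host-{n}", capacity=50) for n in range(4)]
vms = [f"app-server-{n}" for n in range(60)]
placements = place_with_failover(vms, hosts)

for vm, (primary, secondary) in list(placements.items())[:5]:
    print(f"{vm}: primary={primary}, failover={secondary}")

headroom = sum(h.capacity for h in hosts) / len(vms)
print(f"{len(vms)} virtual servers on {len(hosts)} hosts, {headroom:.1f}x capacity headroom")
```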
Building virtual servers also requires far less downtime than moving the physical versions, Starr added. Packing up a server, transporting it and installing it in a rack means four to eight hours of downtime, he said. By contrast, it takes only about 15 minutes of downtime to copy the data from a server in one location to a server in another.
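A rough back-of-the-envelope comparison shows why that matters at scale. The per-server figures below are the ones Starr cited; the batch size is purely illustrative, not a statement of how many servers were moved each way.

```python
# Back-of-the-envelope downtime comparison using the per-server figures
# quoted above (4-8 hours for a physical move vs. ~15 minutes to copy a
# virtual server). The batch size is illustrative, not UCHealth's actual split.

PHYSICAL_MOVE_HOURS = (4, 8)      # pack, transport, re-rack
VIRTUAL_COPY_MINUTES = 15         # copy data between locations

servers = 100                     # hypothetical batch size

physical_low = servers * PHYSICAL_MOVE_HOURS[0]
physical_high = servers * PHYSICAL_MOVE_HOURS[1]
virtual_total = servers * VIRTUAL_COPY_MINUTES / 60   # in hours

print(f"Physical moves: {physical_low}-{physical_high} hours of downtime")
print(f"Virtual copies: about {virtual_total:.0f} hours of downtime")
```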
The planning and execution of the move point to a specific goal, dubbed the “Five 9s,” Hollenbaugh said: The system’s hardware architecture will be up 99.999 percent of the time, which translates to roughly five minutes of unplanned downtime per year. He added that UCHealth is also beefing up the bandwidth of the wide-area network that links its facilities up and down the Front Range, ensuring that information from the servers moves freely.
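The “Five 9s” figure follows directly from the arithmetic: 99.999 percent availability leaves 0.001 percent of the year for unplanned outages, a little over five minutes. The snippet below works out the downtime budget for a few availability levels.

```python
# Downtime budget implied by an availability target such as "Five 9s."

MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.3%} uptime): "
          f"about {downtime_minutes:.1f} minutes of unplanned downtime per year")
```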
“We’re applying the Five 9s concept not only to our applications but also to our network,” Hollenbaugh said.
Unbeknownst to most, the IT team conducted the first significant server move to Zayo in April, he added. It involved copying the Epic database in Building 500 to hardware with additional connections at the new facility. It required just two hours of downtime, and the Epic servers have not been down a single time since, Hollenbaugh said.
A new farm team
Discussions about expanding the Data Center’s capacity began about a year and a half ago. Remodeling the Building 500 space was not an option, Hollenbaugh said, primarily because of its vulnerability to failures. But even if those problems could somehow have been resolved, simply adding the weight of additional servers would have required reinforcing the floor.
The choice came down to whether to build a new Data Center on campus or to move it to a ready-to-occupy, shared facility. It turned out to be a one-sided contest, Hollenbaugh said.
“From a financial and timing standpoint, co-locating was hands down a better choice,” he said. Simply getting into the construction queue for a new center would have meant as much as a three-year wait, he said, with another 18 to 24 months to build it, at a cost of roughly $11 million. In contrast, the move to the Zayo facility, carried out over a much shorter time frame, cost $2.6 million in capital spending over two years, Hollenbaugh said.
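Laid side by side, the trade-off Hollenbaugh describes looks like this. The figures are the ones he cited; the snippet is only an illustrative way of arranging them, and the co-location timeline is left unspecified, as in his account.

```python
# Side-by-side of the two options described above. Costs and timelines
# are the figures cited in the article; the layout is illustrative.

options = {
    "Build a new data center on campus": {
        "lead_time_months": 36 + 18,       # ~3-year queue plus 18-24 months to build (low end)
        "estimated_cost_usd": 11_000_000,
    },
    "Co-locate at the Zayo facility": {
        "lead_time_months": None,          # "much shorter time frame" -- not specified
        "estimated_cost_usd": 2_600_000,   # capital spending over two years
    },
}

for name, figures in options.items():
    lead = figures["lead_time_months"]
    lead_text = f"~{lead}+ months" if lead else "much shorter (not specified)"
    print(f"{name}: {lead_text}, ${figures['estimated_cost_usd']:,}")
```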
It translated to a triple win, he added. “We will have a state-of-the-art data center with state-of-the-art architecture, and we’ve not had to do any retrofitting.”