Human resources and payroll vendor Kronos has been recovering from a widespread ransomware attack for more than a month, an incident that has created major payroll issues at a number of health systems. Employees and clients are venting frustrations to local media and community boards, begging for answers: Shouldn’t there have been contingencies in place to ensure employees are paid?
And frankly, who’s at fault when ongoing outages drastically impact critical infrastructure?
The answer is a tough pill to swallow: the fault lies both with the vendor, for giving the impression that a system outage was nothing to worry about, and with the healthcare provider organizations that blindly trusted those assertions and never planned for system disruptions, Mac McMillan, CEO and president of CynergisTek, told SC Media.
The attack has been a major concern for a lot of healthcare organizations. Those leaders are asking whether anyone has tips on how to respond to the outage, while expressing dismay at their loss of trust in Kronos and in what they took to be firm assurances.
McMillan explained that he’s been working with clients on the ground, who are asking whether anyone knows the right way to respond to the outage. In short, “For a lot of people, this caught them flat-footed.”
As previously reported, the Dec. 13 cyberattack impacted Kronos’ private cloud platform, which hosts the vendor’s Workforce Central, UKG TeleStaff, Healthcare Extensions, and Banking Scheduling solutions across three separate data centers. Those platforms were rendered inaccessible, and Kronos is still working to restore system availability for clients.
Kronos’ cloud clients have been forced to manually track and estimate employee hours, in addition to issuing employees paper checks. According to Fitch Ratings, healthcare has been hardest hit by the incident, as Kronos is widely used in the sector for human resources and payroll.
Penn Highlands, Care New England, Monument Health, University of Florida Health, OhioHealth, Ascension St. Vincent, and Baptist Health are among the entities whose employees have reported payroll delays, discrepancies, and other paycheck issues.
The ongoing disruptions couldn’t have come at a worse time, as hourly healthcare employees awaited final checks before the holidays and the sector fought the latest COVID-19 wave. McMillan noted that the attack closely followed the Log4j disclosure, but cautioned that any theory connecting the two would be conjecture.
Kronos, healthcare customers share responsibility
It’s easy to place blame in the wake of the attack because, in retrospect, it’s easy to see how some of these issues could have been prevented. But a post-mortem can also be useful for assessing shortcomings, so that other organizations can avoid making similar mistakes.
McMillan made it clear that Kronos likely didn’t intend to mislead anyone. However, the vendor “basically gave all of their customers the impression that they didn't have to worry.” Clients have raised those assurances in their defense for failing to prepare contingencies, pointing to the vendor’s practice of backing up everything in multiple locations.
But a vendor’s promise that a client need not worry because “we're never going to go down” is a thin defense.
Kronos “gave everybody the impression there was nothing to ever worry about,” said McMillan. But “anybody who knows anything about security knows … there is no such thing as a perfect solution. There's no such thing as a solution that won't go down, or that can't be taken down.”
“So you just never, ever, ever promise 100% that nothing's ever going to happen. It's just not reasonable,” he added. However, that in no way means customers were right to accept those statements at face value and do nothing to prepare contingencies for an emergency event.
It’s easy to blame Kronos fully for the incident. But McMillan raised an important point for healthcare customers: “You should never have put all of your data in the cloud and not had a backup strategy.” All security leaders should know there’s no such thing as a perfect solution.
More importantly, “if you're gonna put all your eggs in somebody's basket, then you better at least have really solid downtime procedures, or a manual process. So the minute they go down, you can immediately say, ‘Plan B,’” he explained. Then you’re able to tell every employee this is what we’re going to do until we get back online.
The second rule of security is that if anyone says everything is covered, you need to take a step back and assess. “They’re not overtly lying, but the point is that there’s just so much that is dynamic and unknown. There’s no way you can make that promise,” he explained.
For example, McMillan posed the question: Would you ever tell the board that they don’t need to worry about the network going down because you’ve got the solution? The answer is no; they would know better than to accept that.
“There’s a little bit of responsibility on both sides. One is Kronos; clearly, they thought they had it all covered.” And of course, the blame also falls on “the hospitals or whoever put all of their eggs in that basket without planning for the day when this can happen,” said McMillan.
Setting up a payroll alternative
The failures of all parties involved have left the impacted client organizations scrambling to find alternatives or falling back on manual processes. And for some organizations, the outage has meant apologizing to employees for pay gaps, an untenable position for hourly healthcare employees amid a pandemic and mass staffing shortages.
“This was the worst time for this to happen, which I don't think was an accident, disrupting people's payroll right before the holidays,” said McMillan. If they haven’t already, impacted organizations need an alternative solution because Kronos is manually reactivating each of its thousands of clients.
There hasn’t been an updated timeline yet, outside the predicted “weeks,” which means multiple pay periods will pass before the systems are restarted.
One effective way to maintain payroll is to review the last normal paycheck and cut manual checks based on that model, then continue to pay the employees at that rate until the system is restored. At the end of the outage, the provider should regroup and then make adjustments to fix any discrepancies.
Organizations should also be transparent with employees about the outage, instructing them to keep track of their hours manually and turn them in to their supervisors at the end of each day.
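To make that model concrete, here is a minimal sketch in Python of the “pay at the last-known rate, reconcile later” approach. The `EmployeeLedger` structure, field names, and figures are hypothetical illustrations, not drawn from Kronos or any real payroll system.

```python
from dataclasses import dataclass, field

@dataclass
class EmployeeLedger:
    """Tracks one employee's pay through an outage (hypothetical structure)."""
    name: str
    last_normal_gross: float   # gross pay from the last clean pay period
    last_normal_hours: float   # hours that paycheck covered
    manual_hours: list = field(default_factory=list)  # hours logged by hand
    interim_paid: float = 0.0  # total cut via manual checks during the outage

    def cut_interim_check(self) -> float:
        """Cut a manual check modeled on the last normal paycheck."""
        amount = self.last_normal_gross
        self.interim_paid += amount
        return amount

    def reconcile(self) -> float:
        """After restoration, compare actual hours worked against pay issued.
        Positive result = amount still owed; negative = overpayment to recover."""
        hourly_rate = self.last_normal_gross / self.last_normal_hours
        actual_earned = sum(self.manual_hours) * hourly_rate
        return actual_earned - self.interim_paid

# Example: a nurse paid $2,400 per 80-hour period who actually worked 92 hours
nurse = EmployeeLedger("J. Smith", last_normal_gross=2400.0, last_normal_hours=80.0)
nurse.manual_hours = [12.0] * 7 + [8.0]  # 92 hours logged on paper, entered later
nurse.cut_interim_check()                # interim check at the old rate: $2,400
print(f"Adjustment after restoration: ${nurse.reconcile():,.2f}")  # $360.00 owed
```

The design point matches McMillan’s advice: employees keep receiving checks on schedule during the outage, and the manually logged hours capture enough detail to settle up accurately once systems return.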
“Your employees are not going to be happy, but there’ll be a lot less unhappiness than if they don’t get anything” while the system is down, he explained. “We've had this whole situation happen before with folks who've been hit with ransomware, or other things that have affected their finance systems.”
Those ransomware victims leveraged this type of model, relying on the workforce to manually track hours while pay continued at the prior rate, then settling up any discrepancies at the end of the outage.
“But to say that ‘I'm not going to pay you, but expect you to keep working the next three weeks, and I'm not going to pay you until later’ — that's a formula for really disgruntled employees,” said McMillan.
Rule one of healthcare security: have a contingency plan
For McMillan, the failure to have an adequate solution, or “any plan in place whatsoever for this,” is just not reasonable. What might be worse is an organization that hands all of its data to a cloud vendor without keeping a backup on-prem, secured from outside access.
It’s also important to note that the investigation into the incident is ongoing, he added. The public doesn’t know what’s happening with the Kronos investigation, how those three data centers were modeled, or whether the backups were kept offline. Investigating incidents takes time, and in this case, “because it's so personal, it's hard to accept, but we need to let those guys do their job and not focus so much on the incident as the outcome.”
From an outside perspective, it looks like Kronos was counting on redundancy: “the fact that they didn't expect all three of those data centers scattered across different continents to be infected all at the same time. Unfortunately, they were wrong,” explained McMillan.
The crux of the issue is that hospitals shipped all of their data to Kronos without keeping a copy on their own networks. McMillan stressed that this is a business continuity problem that needs to be examined and addressed at every healthcare organization.
Attackers know that if they can get into an organization’s backups, or disable its ability to reach those data stores, they gain more leverage in their attacks and impair the victim’s ability to recover.
For CyberMDX Chief Technology Officer Motti Sorani, the incident should serve as a warning for provider organizations to have a plan for responding to attacks within their enterprise — or with their vendors. It begins with identifying the risks and the most critical elements within the enterprise that are necessary to maintaining business operations, especially payroll.
For vendors, it means assessing risks “stemming from potential unavailability of data, or unauthorized changes to the data kept on the vendor's cloud,” as well as the potential for threat actors to move into customers’ networks through on-prem biometric time and attendance devices, Sorani added.
Both the vendor and the impacted organizations could have reduced the impact of the outage by keeping a periodic backup of the SaaS data provided to each healthcare delivery organization, Sorani explained. The attack further underscores “the uphill battle that the healthcare industry is facing with regard to security.”
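As an illustration of Sorani’s point, a minimal sketch of such a periodic export follows. The endpoint, token, and paths (`EXPORT_URL`, `API_TOKEN`, `BACKUP_DIR`) are hypothetical placeholders; no real Kronos or UKG API is implied.

```python
import datetime
import hashlib
import json
import pathlib

import requests  # third-party; pip install requests

# Hypothetical values -- substitute your vendor's actual export API and credentials.
EXPORT_URL = "https://vendor.example.com/api/export/payroll"
API_TOKEN = "REPLACE_ME"
BACKUP_DIR = pathlib.Path("/srv/onprem-backups/payroll")  # on-prem, access-restricted

def snapshot_saas_data() -> pathlib.Path:
    """Pull a full export of hosted payroll data and store it on-prem.

    Run on a schedule (e.g., cron once per pay period) so a vendor outage
    never leaves the organization without a usable local copy.
    """
    resp = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    records = resp.json()

    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    out_path = BACKUP_DIR / f"payroll-{stamp}.json"
    out_path.write_text(json.dumps(records, indent=2))

    # Record a checksum so later tampering or corruption is detectable.
    digest = hashlib.sha256(out_path.read_bytes()).hexdigest()
    out_path.with_suffix(".sha256").write_text(digest)
    return out_path

if __name__ == "__main__":
    print(f"Backup written to {snapshot_saas_data()}")
```

The design point is the one both McMillan and Sorani make: the copy lands on storage the vendor’s cloud cannot reach, so a compromise there never leaves the organization without its own data.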
Some victims have warned they intend to cancel their contracts with Kronos over the incident. But if organizations don’t take responsibility for their role in the payroll disruptions, there will be a recurrence at some other point in time — and what then will be the response?
“Who's responsible in the hospital? Nobody wants to hear it, but it's everybody. Everyone is a victim here; the employees aren’t the only victims: it’s also the hospital and Kronos,” McMillan said. “But at the same time, I think the organization has the ultimate responsibility for anything that's going to be put in the cloud. It's critical.”
Unless the data is noncritical, meaning it can be left offline without impacting operations, care, or personnel, organizations need a backup plan for data in the cloud. McMillan added that the plan needs to be studied, trained on, and reviewed.
The accounts shared by the organizations facing these outages reveal that most of these victims hadn’t planned for the outage or put in place the contingencies needed to maintain normal operations and payroll.
“At the end of the day, things happen even to the best of us, because there's no way you can avoid it entirely,” McMillan concluded. “But my personal belief is that you are going to fare better just coming right out and being honest about what happened, and what you're doing, and what you know — also what you don't know.”
“There's three things that keep you out of trouble: honesty, transparency, and accountability — meaning, I tell you what happens and I do it as transparently as I can,” he added. “But I take responsibility, and I fix whatever happened. And it's that last one that trips up a lot of folks.”