Email hacks against the healthcare sector are common, and they are problematic from a compliance perspective because of the reporting requirements they trigger. While the consensus is that email is merely a pivot point for other nefarious activities, that assumption doesn’t hold as much water in highly regulated industries.
Fortified Health Security CEO Dan L. Dodson explains that email is the No. 1 attack vector, mainly through phishing attacks because “the end user is obviously one of, if not the riskiest, point of entry to the organization.”
SC Media examined the Office for Civil Rights breach reporting tool and found more than 122 email-related incidents affecting over 1.33 million patients have been reported to the Department of Health and Human Services in 2022, so far. The tallies range from 500 or 501 patients (typically filed as such to denote an ongoing investigation), to as many as 502,869.
Although some of these breach tallies are notable, what’s more concerning is the length of time between detection and notification of the impacted patients and regulators.
Consider the largest email hack reported this year: Christie Business Holdings reported the compromise of a single email account in April. One of the largest multi-specialty group medical practices in Illinois, Christie Clinic’s notice resembles most email-related reports:
Namely, the date of discovery is often not included, only the date the investigation concluded. For the clinic, the forensics ended on Jan. 27, 2022, even though the monthlong hack occurred six months earlier, between July 14 and Aug. 19, 2021.
Further, the notice was not sent for another three months after the investigation ended. Under the Health Insurance Portability and Accountability Act, covered entities and relevant business associates are required to report data breaches within 60 days of discovery, not at the end of an investigation.
But Christie Clinic is not alone in its forensics and reporting challenges.
To understand why email creates such a compliance headache for providers and how to improve data retention policies, SC Media spoke with Dodson, First Health Advisory Chief Security Officer Will Long, and Impact Advisors Principal Dan Golder.
“Email is the unspoken repository of data that we probably shouldn't be keeping,” said Golder. “We all do it.” Consider a personal email account: “it’s a file system for people.”
The bad habit of leaving data in these accounts also leaves entities ripe for litigation or even enforcement actions by OCR, not to mention the ongoing trend of legal action being taken against providers directly following breach reports.
‘Pivot point’ is moot when health data is involved
In many of these instances, the hacker isn’t necessarily looking for the data contained in these accounts. Their goals are simple: target an individual, entity or government to look for unprotected or semi-protected data with a server hack or social engineering attack.
“Most of the time, hacking is a crime of convenience,” said Golder. “They're not really targeting specifically, they're just looking for somebody that will type in their password.” There are also true social engineering attacks where they target specific individuals or companies, even with phone calls pretending to be tech support.
Upon mention of the email trends, Golder also reviewed the OCR breach reporting tool, noting his shock at the size of these breaches. It’s “staggering” to think of how many people have been impacted. “The risk here is off the charts.”
Threat actors target email accounts by exploiting or stealing weak credentials to gain a broader foothold on the network. Many breach notices even include language that implies these email hacks were designed for fraud attempts or further activities unrelated to personal data.
Most email-related breach notices also contain a major red flag: “However, the evidence could not conclusively determine what, if any, access to the data contained in the accounts was accessed.” Long explained that this stems from the sheer lack of visibility into actions taken within the account, owing to the platforms used or their overall settings.
The actor may reset passwords or dupe staff members into handing over their credentials, then move on to other systems, which could lead to further compromises. The hacker may also phish other internal accounts to launch a ransomware attack or another hack, while having no interest in the data contained in those accounts.
But the privacy team will immediately want to know what protected health information was in the email accounts accessed by the attacker. Long said, “the struggle is that most email systems don’t log which emails were opened or read.”
In less regulated or differently regulated industries, that insight may not be needed. But when a covered entity loses control over any system or account with protected health information, the onus falls squarely on the entity to prove, or disprove, whether PHI was indeed subject to unauthorized access.
A similar reporting conundrum occurs when a device is stolen. If the device’s data is not encrypted, it’s presumed a HIPAA-reportable breach even if it appears the thief was simply after the hardware.
Without evidence supporting that access didn’t occur, HIPAA requires the provider to file a breach report with the OCR. And that’s where providers find themselves in a world of trouble.
The compliance nightmare of email forensics
Without strong data retention policies, a given entity may have tens of thousands of emails within a single account. After an account hack, the privacy team will need to know what data is contained in the account, prompting the security team to extract data for the entire account and run it through a scanning engine to look for PHI and credit card numbers.
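That scanning step is conceptually straightforward, even if the scale is not. The sketch below is illustrative only: it assumes the mailbox has already been exported to plain-text files, and the directory layout, patterns, and thresholds are hypothetical rather than any particular vendor's engine, which would also have to handle attachments, spreadsheets, and far more identifier types.

```python
# Illustrative sketch only: scan exported email text for patterns that *might*
# be PHI or payment data. Real forensic tooling covers many more identifiers
# (names, MRNs, dates of birth) and formats (PST/OST exports, attachments).
import re
from pathlib import Path

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")       # SSN-like pattern
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # candidate card numbers

def luhn_ok(digits: str) -> bool:
    """Luhn checksum to cut false positives on card-like numbers."""
    nums = [int(c) for c in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def scan_export(export_dir: str) -> dict:
    """Return per-file counts of SSN-like and card-like hits."""
    findings = {}
    for path in Path(export_dir).rglob("*.txt"):     # assumes plain-text export
        text = path.read_text(errors="ignore")
        ssns = SSN_RE.findall(text)
        cards = [c for c in CARD_RE.findall(text)
                 if luhn_ok(re.sub(r"\D", "", c))]
        if ssns or cards:
            findings[str(path)] = {"ssn_like": len(ssns), "card_like": len(cards)}
    return findings

if __name__ == "__main__":
    for file, counts in scan_export("./mailbox_export").items():
        print(file, counts)
```

Even a pass like this only flags candidate files; as the experts note below, a human team still has to review what it finds.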
It’s “a huge struggle because most privacy compliance and legal teams take the stance that ‘if they were in there, we have to assume that they saw it because we can't prove they didn't,’” said Long. In some cases, they’ll consider the amount of time the actor was in the account and what they were seeking, then do a risk assessment.
As a result, teams “have to scour” these accounts. In particular, if it’s a doctor’s account that’s been accessed, it can contain hundreds of spreadsheets and emails with PHI scattered throughout. Long asked: when the access lasts for days, weeks, or months, without a log of what was or was not opened, “what can providers do?”
“Most organizations have to default that the 10,000 PHI records represented in all these emails have to be reported as a breach,” he added.
Even though email is not a database, a lot of PHI is shared via spreadsheets, PowerPoints, tables, and other means. After a hack, the files are extracted into a large folder that must be manually sifted through by the forensics team, including removing duplicate records and finding health information.
“It's a huge undertaking. There is no automated tool,” said Long. “One of the side effects that a lot of people don't talk about is the reporting of it.” Security teams might not be worried about that element from a cyber perspective. “But this is a drop-everything-for-a-whole-bunch-of-people project.”
The process of just getting the data “normalized and into a spreadsheet of who you're going to report to” is a huge effort. The teams must also account for different categories of individuals, such as children, parents, or whether a patient is deceased.
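There is no standard tool for that consolidation step, but the shape of the work is familiar: merge the records pulled from many files, normalize them, and collapse duplicates into a single notification list. The sketch below is a rough illustration under assumed conditions; the column names (last_name, first_name, dob) and file layout are hypothetical, and real extractions rarely line up this cleanly.

```python
# Rough illustration: merge candidate PHI records pulled from many extracted
# attachments into one deduplicated notification list. Column names and the
# CSV layout are hypothetical; real forensic exports vary widely.
import csv
from pathlib import Path

def norm(value: str) -> str:
    """Normalize case and whitespace so near-duplicates collapse together."""
    return " ".join(value.split()).upper()

def build_notification_list(csv_dir: str) -> list:
    seen = {}
    for path in Path(csv_dir).glob("*.csv"):
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                key = (norm(row.get("last_name", "")),
                       norm(row.get("first_name", "")),
                       norm(row.get("dob", "")))
                if any(key) and key not in seen:
                    seen[key] = {**row, "source_file": path.name}
    return list(seen.values())

if __name__ == "__main__":
    records = build_notification_list("./extracted_attachments")
    print(f"{len(records)} unique individuals to notify")
```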
“It's unbelievable,” said Long. “I've seen compliance issues and people just pulling their hair out, when all of a sudden we've got hundreds of files and figuring out reporting requirements. That’s why they usually don't make the deadline.”
When “just archiving” the data is not an option
Particularly since email is commonly used as an access point for ransomware attacks and fraud attempts, providers should look to reduce both the attack surface and the amount of data left sitting around as low-hanging fruit.
Improving data retention policies is a strong solution for reducing these headaches, and Long has seen some providers move in that direction. But legal and compliance teams often take different stances on how best to go about it, as some departments want to keep everything no matter the risk.
It’s not always as simple as deleting the old email, especially given the culture shift that would need to occur for entities to be comfortable with it. Golder typically recommends data archiving for older emails, so the information is kept — but safely stored offline. Even in Outlook, the data can be saved offline as a separate file and stored on a CD or a digital drive.
If it’s stored separately, the information is still accessible to providers who may need to revisit past communications with patients or colleagues. Golder added the process is an easier task within a business account than it is for personal email, as many of those platforms make it difficult to archive data.
Providers also need to be very careful about what they choose to keep in the accounts. The concern for Golder is that “there are a lot of email and email discussions that probably, at this stage of the game, shouldn't be in email because it is discoverable.”
Secondly, studies have found that phishing education and training drastically reduce the risk posed by the accounts. Entities should make cybersecurity a prime initiative within the organization because, as Golder puts it, “Despite our best efforts throughout the industry, we still have people that are not security aware enough.”
“Hackers are always trying to stay one step ahead, so in many ways no matter what we do from a global perspective, it really does come down to the individual person being somewhat accountable,” said Golder.
“Consistency matters in this space,” said Dodson, who recommended that entities train employees, at a minimum, on a quarterly basis. Some progressive organizations do so monthly.
Numerous technical options can reduce the risk as well, with some large health systems taking even bigger steps, such as banning the use of personal email on the hospital network. Dodson goes a step further and highly recommends that leadership not give email accounts to workforce members who don’t need them. In doing so, entities can reduce their attack surface.
Better email filtering can also reduce the number of malicious emails reaching a user’s inbox, he added.
Golder added that a hardware security key can ensure that even if a password is stolen, the attacker can’t access the account without the physical key. The tool should be a business imperative for providers insistent that they’ll “continue to maintain email as a file system.”
The hardware key is ideal for stolen-password situations, as well as the ongoing threat of fraudulent phone calls. Even if the password is handed over, it’s useless without the hardware key to complete the required two-factor authentication.
Using a multi-pronged approach, providers can also work to reduce the amount of PHI sent and stored within email accounts.
On the smaller side, PHI just shouldn’t be put into spreadsheets and sent via email, Long explained. For example, some providers send a daily report containing patient information to every workforce member who works with patients, often more than 100 employees. That means the report with PHI now sits in more than 100 email accounts.
Instead, those employees should be directed to that data via a secure link with two-factor authentication.
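One common way to implement that pattern is to email a short-lived link to a report that lives behind the organization’s normal login and MFA, rather than the report itself. The sketch below is an assumption-laden illustration of such a link, not a prescribed design: the signing key, time-to-live, hostname, and URL layout are all hypothetical.

```python
# Sketch of the "link instead of attachment" pattern: email a signed,
# expiring URL to a report kept behind login plus two-factor auth.
# The secret, TTL, hostname, and parameter names here are illustrative.
import hashlib, hmac, time
from urllib.parse import urlencode

SIGNING_KEY = b"replace-with-a-managed-secret"   # hypothetical; keep in a vault

def signed_report_url(report_id: str, ttl_seconds: int = 3600) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{report_id}:{expires}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    query = urlencode({"report": report_id, "exp": expires, "sig": sig})
    # The portal behind this URL still enforces login and two-factor auth;
    # the signature only proves the link itself was not tampered with.
    return f"https://reports.example-hospital.org/view?{query}"

def verify(report_id: str, exp: str, sig: str) -> bool:
    """Server-side check: signature matches and the link has not expired."""
    payload = f"{report_id}:{exp}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and int(exp) > time.time()

if __name__ == "__main__":
    print(signed_report_url("daily-census-report"))
```

The practical benefit is that a compromised mailbox then holds expired pointers rather than the PHI itself.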
Reducing the PHI shared within accounts, shortening how long data is stored, and improving training would together drastically cut the amount of unnecessary PHI sitting in email accounts. But security leaders must work to convince the board of the need to take on another project, one that can bolster compliance.
An entity should weigh its options, but the choice appears to be: either reduce how much data is stored in accounts and for how long, or spend months, or even up to a year, with multiple employees tied up trying to identify the data, all while risking a miss of HIPAA’s 60-day reporting requirement and possible enforcement actions.
“It really comes down to cost-benefit,” said Golder. “Everything is a cost-benefit analysis. If the cost is too high, you don't do it. If the benefit is great, then you will. And for security, most people feel the cost is too high: ‘I don't want to bother with all that stuff…’ When they suddenly get the data breach, then they go, ‘Oh, that cost was pretty high. Maybe I need to do something.’”
“It's just human nature,” he added. “We're not going to do it until it's so painful that we feel we have to.”