There was a noticeable chill in the air at Black Hat and DefCon, due to recent action taken against security researchers, reports Dan Kaplan.
A few months ago, Matthew Green was asked to advise a small team of undergraduate students who were investigating possible security vulnerabilities in a state's toll collection system.
A part-time research associate at the University of Maryland, Green learned that the students found a way to uncover proprietary information about the system by calling up a publicly available web page and entering particular commands into a form. It was the equivalent, he said, of “typing ‘password' into a password field,” and required no hacking or evasion of security controls.
But instead of congratulating them on their discovery and guiding them through the next steps on the project, which was being conducted strictly for academic research purposes, Green had an entirely different reaction.
“My immediate thought was that we have to get an attorney,” he recalled. “How do we keep these kids out of jail?”
Green knew the students he was asked to consult with weren't up to anything nefarious, but that may not have been enough to ensure they avoided the interest of law enforcement. As a result, they stopped working on the project. “Someone could come along and say, ‘We can prosecute them,'” Green said. “It does have a chilling effect. You can do most anything you want, until it involves something, however benign, against a real system. It's very arbitrary, and it's difficult to know where the lines are.”
The concern expressed by the cryptography expert is rapidly becoming the norm in the security research community, a collective of arguably the world's most skilled and indefatigable computer enthusiasts. Because the federal anti-hacking law, known as the Computer Fraud and Abuse Act (CFAA), has recently been interpreted in ways that permit aggressive prosecutions, researchers are significantly limiting, or scrapping altogether, projects into which they have invested months or even years – fearful that they will become the next Aaron Swartz or Andrew “Weev” Auernheimer, and unwilling to join a procession of digital martyrs that is expected only to grow over the next several years. Everyone, it seems, is feeling timid.
In the words of one, the current climate in which to conduct research is “terrifying.” Information security enthusiasts said the nearly 30-year-old CFAA is so broadly worded that a prosecutor who wants to make an example of a researcher easily can: the law, critics have argued, essentially criminalizes normal computer behavior, and being charged doesn't require that someone have breached security controls or accessed something without authorization.
So it should be no surprise that when the ethical hackers, commonly called white hats, converged on Las Vegas last month for Black Hat and DefCon, considered the world's two preeminent security research conferences, there was something of a dark cloud hanging under the bright desert sun. This year has seen a huge number of submissions – Black Hat, for instance, put on a record 110 talks – but many of the presentations didn't go as far as they might have.
Take Brendan O'Connor, a law student at the University of Wisconsin who doubles as the CTO of security consultancy Malice Afterthought. O'Connor presented at Black Hat and DefCon on CreepyDOL, a low-cost system that can mine data from public Wi-Fi traffic to feed “a really nice visualization engine” that profiles specific people based on the websites with which they interact. It's an example of how effortlessly one's privacy can be infringed. The title of his talk was “CreepyDOL: Cheap, Distributed Stalking.”
Wi-Fi traffic publicly reveals which sites users visit, so anyone listening in can, for example, acquire someone's photo from an online dating site or their name from Facebook, O'Connor explained. By physically placing nodes – tiny sensor platforms – around a major city, one can amass a profile of a targeted individual based on their wireless “emanations,” without any need to actually hack their computer.
But this is all theoretical because O'Connor was afraid to do it, even though he said case law has shown that wireless eavesdropping is legal. Instead, he showcased the data he correlated from MAC addresses under his control. He is, of course, very confident the research would scale across a large city and produce the same results, but given the current legal landscape, it was an easy decision to abstain from trying that.
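For a sense of how little machinery passive observation of this sort requires, here is a minimal, hypothetical sketch – not O'Connor's CreepyDOL code – that merely watches the network names nearby devices broadcast in Wi-Fi probe requests and ties them to each device's MAC address. It is written in Python and assumes the third-party scapy library, root privileges, a wireless card already in monitor mode (the interface name “wlan0mon” is an invented placeholder) and, as in O'Connor's own tests, devices the operator controls.

from collections import defaultdict

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Elt, Dot11ProbeReq

# Map each client MAC address to the network names (SSIDs) it has probed for.
profiles = defaultdict(set)

def handle(pkt):
    # Probe requests are broadcast in the clear; they reveal which networks a
    # device has previously joined, tied to its hardware (MAC) address.
    if not pkt.haslayer(Dot11ProbeReq):
        return
    mac = pkt[Dot11].addr2
    elt = pkt.getlayer(Dot11Elt)
    ssid = elt.info.decode(errors="replace") if elt is not None and elt.info else "<broadcast>"
    profiles[mac].add(ssid)
    print(f"{mac} probed for {ssid!r} ({len(profiles[mac])} networks seen so far)")

# Listen passively; nothing is transmitted and no other machine is touched.
sniff(iface="wlan0mon", prn=handle, store=False)

Even a toy listener like this transmits nothing and touches no one else's computer; as O'Connor's experience suggests, the legal uncertainty turns on where such a sensor is pointed, not on how it works.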
“I've had to greatly curtail how much I've tested CreepyDOL because even though there's a great deal of case law saying that it's well within the law, that hasn't seemed to matter to the U.S. government,” O'Connor said.
Unintended consequences
At Black Hat and conferences like it, researchers lately are more reluctant than ever to go the “extra three percent” and explain the real-world applicability of their discovery, O'Connor said. That may mean, for instance, demonstrating a major SCADA system vulnerability but never disclosing that the bug could allow the lights of a skyscraper to be switched off.
“Essentially, prosecutorial misconduct or prosecutorial discretion used to harm has caused this awesome chilling effect,” said O'Connor, who authored an amicus brief that was recently filed and signed by about a dozen other researchers calling for the release of Auernheimer, the 27-year-old researcher and self-described internet troll who took advantage of an AT&T website flaw to expose the email addresses of roughly 120,000 iPad users, including some high-profile people like New York Mayor Michael Bloomberg.
But Auernheimer used no hacking tools and bypassed no security technology to amass the information. Everything was publicly available. He and a colleague merely built a script that expedited the process of collecting the email addresses. Gawker wrote a story about the “hack,” but didn't publish the personal information.
That didn't prevent Auernheimer from being slapped with identity theft and conspiracy charges, with the government arguing that he accessed a protected computer without authorization or exceeded authorized access. He lost his court battle and, in March, was sentenced to 41 months in prison.
When white-hat researchers are prosecuted – even ones with a muddied reputation, as Auernheimer has – they become more reticent about doing their work, despite the fact that it is typically performed for the public good and rarely for profit. Meanwhile, the treatment of digital researchers stands in stark contrast to that of people who, over the years, have evaluated the safety of physical systems such as automobile braking systems or the suspension of a bridge.
But the negative consequences of deterring these research efforts are potentially enormous. The vulnerabilities that would have been publicly exposed never get fixed – which means someone with more malevolent intentions, already operating with a criminal mindset and so unconcerned about prison time, could come along, find the same flaws and sell the information to the highest bidder, at the victims' expense.
“I really can't express how monumentally bad the decision by the FBI to go after Weev was,” said Shane McDougall, a veteran security researcher and principal partner at Tactical Intelligence, a strategy company. “Right now, hackers are the only ones pinging these systems, because security researchers aren't. [The FBI has] really done American consumers a disservice.”
Auernheimer's plight might not have received as much publicity as it did if not for Aaron Swartz committing suicide two months earlier after being hotly pursued by the U.S. attorney's office in Boston. Swartz faced 35 years in prison after being accused of downloading millions of academic journal articles from JSTOR, the subscription-based research repository, which he believed should be lifted out of corporate control and into the public domain – for free. He was slapped with 13 felony counts, 11 of them under the CFAA.
In April, McDougall said, he destroyed – admittedly in a fit of paranoia – all of the code he had developed for a research project known as SchmoozeKit, described as a tool that mines multiple sites for information and can be used offensively or defensively by consumers or governments, even though he believed nothing he had done was illegal.
“It's really not a good time right now to be a security researcher,” admitted McDougall, who was planning to present his work at next year's Chaos Communication Congress in Germany. “I'd rather throw away 4½ months of code rather than do 8½ years in a federal prison.”
He said he likely will rebuild the code if Auernheimer wins his appeal. In the meantime, though, McDougall is a case study for any district attorney wondering whether they have the power to force a security researcher to fold their hand.
Protecting the researchers
Trey Ford, general manager of Black Hat, said the annual conference in Las Vegas supports researchers and wants to help them avoid any trouble. When their talk is accepted, presenters are offered assistance in disclosing any vulnerabilities they may have discovered to the appropriate vendors. In addition, they are encouraged to contact the Electronic Frontier Foundation (EFF), a digital rights advocacy group, for pro bono legal aid.
“We're going to do everything we can to protect a researcher from themselves,” Ford said.
Despite this, researchers have become inured to enraging the manufacturers of the products with which they tinker. Sometimes that has resulted in lawsuits, or even court-levied gag and cease-and-desist orders. Rarely has it resulted in arrest, though in 2001, after presenting at DefCon, Russian encryption expert Dmitry Sklyarov was arrested and charged under the federal Digital Millennium Copyright Act for distributing software that circumvented copy protections on Adobe e-books. The charges eventually were dropped.
Since incidents such as that, security researchers have largely built a more positive relationship with the U.S. government – so much so that feds have been accepted, if not welcomed, at conferences like DefCon. But that all seems to have come crashing down with Swartz's death, the Auernheimer conviction and, to an equal though not entirely related extent, the mass National Security Agency spying apparatus exposed by whistleblower Edward Snowden.
Ford blames the downward trend on case law, which he said makes it easy for prosecutors to mount cases against researchers. That, in turn, shapes public opinion, leaving the average person further confused about the positive role that ethical white-hat hackers play.
“It's so easy to misinterpret what we're doing, and without the right framework of support, we look like we're doing bad things,” Ford said.
Charlie Miller, a regular presenter at security conferences around the world who is best known for his research into Apple devices and car computers, said his peers are especially unnerved about examining the security of public websites and cloud-based services – which are becoming more prevalent and thus more important to test – out of concern that they may unknowingly cross legal boundaries.
“It's extremely dangerous legally now to test the security of any sort of service,” said Miller. “Our job is to call bullsh*t on their stuff, and now there's a huge group of services that we can't really do that to anymore. I wouldn't test anything that I don't own, that I don't have on my own computer.”
Researchers are mixed as to why they have so conspicuously entered the crosshairs of federal investigators. Certainly, murky and outdated laws like the CFAA give prosecutors the justification to launch overzealous cases, they said. The bigger question, however, is why government attorneys don't exhibit more discretion. Some believe prosecutors see security researchers as easy targets for padding a résumé of courtroom victories and appearing tough on “cyber crime.” Still others frame the problem as more entrenched: an ongoing and growing effort to stifle internet freedom at the behest of major corporations and government entities that fear embarrassment.
And even if someone is eventually able to win exoneration, “three years at the sharp and pointy end of an investigation” is no way to live a life, O'Connor said. Swartz was no stranger to this, and his friends and family believe the pain of bearing the weight of a government probe is ultimately what drove him to take his life.
Even researchers not facing a federal indictment will be driven to be more cautious, Ford said. Those settling down with partners and starting families may reconsider their actions. “I think the risk-reward equation that they're acutely aware of, the scales have tipped,” he said.
Miller agreed. “There's always a threat you'll get sued, but it's a whole other story that you may end up in jail,” he said.
Green, who also is an assistant research professor at the Johns Hopkins University's Department of Computer Science, said he no longer considers performing research that could land him in hot water. And that depresses him, considering the critical role that security researchers play in society.
“I think that right now, computer security is a mess, and the one thing we have going for us is a small number of people who make it their entire job to go and find these bugs,” he said. “Maybe they're announcing it in splashy ways, but at least they're announcing it.”
Not everyone is going to play it safe, however, either because they don't know how far is too far or because they are adamant about challenging the status quo.
But even though a CFAA reform bill has been introduced in both the U.S. House and Senate, Ford said the outlook is likely to get worse before it gets better.
“The fact is, [these prosecutions] will ruin several people's lives along the way,” Ford said. “That's just how this works. It's going to be someone from our ranks.”