Basing vulnerability disclosure discussions on ethics is antiquated and naïve at best, considering today's cybersecurity climate.
"Responsible disclosure," championed by industry power players (sage old men and large software vendors), demands that security researchers who discover new vulnerabilities first share them privately with the software vendor (so a patch can be developed), then publicly advertise the impact to persuade people to patch. Researchers must also navigate the entire, often time-consuming disclosure process without receiving or asking for monetary compensation.
In exchange for performing quality assurance for the software vendor and a good deed for the community at large, the only benefits (if any) for the researcher are usually indirect, in the form of notoriety. On the flip side, the reward may instead be negative media attention, community criticism, and uncooperative vendor dialogue. Some researchers object to the "responsible disclosure" philosophy, preferring "full disclosure," which gives everyone access to the same information at the same time. Across the board, though, any researcher seeking financial reward for their time and energy is labeled unprofessional, unethical, or an extortionist, as are the marketplaces trading in undisclosed vulnerabilities (zero-days).
Over the years, many highly skilled researchers have chosen to conduct their work in private. They refrain from public release, or discontinue their efforts entirely, to avoid the drama and hassle, pursuing a steady paycheck rather than an ego boost. Interestingly, an increasing number of researchers are now discovering vulnerabilities and selling them to a third party, who then discloses them to the software vendor. This provides some middle ground: the researcher earns anywhere from a few hundred dollars to five figures, the third party gets to leverage unreleased information for its customer base and increase its notoriety, and the software vendor is given time to patch. Win-win-win.
Unfortunately, anyone relying on the vulnerable software (users, corporations, governments, etc.) does not get access to this information and is therefore unable to protect their systems in the manner they deem appropriate. Customers of the software vendor are at the mercy of the patch development cycle, however long it takes: weeks, months, sometimes years. Furthermore, the vulnerability data itself can be compromised during the knowledge transfer. Who's to say a rogue employee at the software vendor, or at the third party with access to the data, won't quietly release or sell it off? This is another point of contention for advocates of full disclosure.
Unquestionably, zero-day vulnerabilities have an increasing real-world value to many different parties. We should expect more and more researchers to demand and receive payment from governments, software vendors, security vendors, enterprises, or buyers on the black market. It has already happened and will continue. The evolution is underway, and it will become more prevalent in the next few years as it becomes routine for our systems to be compromised using unknown vulnerabilities. This environment will force us to evolve our thinking and mature our offensive and defensive security strategies, fueling the need for third-party patches, subscriptions to unreleased vulnerability information, and general underground industry intelligence. We're already seeing these services offered on the fringe (legally and illegally) and slowly moving toward mainstream acceptance as the business models become better understood. So it's not a matter of if, but when.
Let's look at the environment from another point of view. We no longer bat an eye at 10,000-node botnets, incredible volumes of malware-laced spam, or browser-based exploits launched from popular websites against their visitors. This is in addition to the botnets and machine infections now offered for subscription lease. All of this is alarming, but no longer sensational. What really gets everyone's attention today is the cyberwarfare-like skirmishes breaking out between countries and saturating networks: Russia and Estonia, China and Taiwan, Korea and Japan, Pakistan and India, and the United States and, well, everyone has gotten into the action. Whether these actions are actually government supported is debatable; certainly some probably are. But if they're not, it's only a matter of time.
Today, the governments of the modern world (and their national security apparatus) have a huge stake in the game and are preparing for cyberintelligence-gathering and warfare missions. So it's not far-fetched to believe they all have red teams dedicated to developing their own zero-day vulnerabilities, undetectable rootkits and trojans, field hacking toolkits, and network attack strategies, and to introducing backdoors into open source and commercial projects. As such, governments and defense contractors would certainly be open to purchasing these materials, which represent a significant tactical advantage, to say nothing of keeping them out of enemy hands. Obviously, zero-day vulnerability data purchased this way is unlikely to travel down the responsible or full disclosure path, no matter what industry ethics dictate. See why ethics and morality become irrelevant to the discussion?
Then consider for a moment the effect of the next Slammer or Blaster: a worm based on a zero-day vulnerability sold on the black market, exploiting tens or even hundreds of thousands of machines and disrupting critical infrastructure. Such an event could be a tipping point, forcing or enabling congressional lawmakers to regulate the sale and export of zero-day information, much like the regulations already surrounding encryption. Germany has passed laws restricting the development and distribution of loosely defined hacking tools. That development seems to have impacted the good guys more than anyone else, leading several well-respected security groups to shut down or move their websites beyond the country's jurisdiction.
While ethics, morals, and professionalism should always be fundamental tenets of how professionals conduct themselves, it's irresponsible to design security strategies on the assumption that people will behave ethically. Business owners and software vendors are responsible for the data they protect and the products they sell. They must take the surrounding environment into consideration, understand that it's hostile, and be pragmatic in their approach. Have no expectation that anyone will share vulnerability information ahead of time. Hope they will before going public, but don't depend on it, and frankly, it's hopeless to demand it. Looking at the big picture, the industry is clearly not what it was 10 or 15 years ago, and current mainstream ideas of how things are supposed to function do not prepare us for where we're heading. As someone I met the other day in Portland put it: "Before the good guys address how to win the war, they first have to understand they're at war."
- Jeremiah Grossman, founder and CTO of WhiteHat Security and a co-founder of the Web Application Security Consortium.