Clubhouse has gone from not existing to a $4 billion valuation in about a year. But, as bug bounty guru and Luta Security CEO Katie Moussouris describes in a new blog post, that rapid growth primed the company for a common security pitfall.
Interestingly, that security issue is not tied to vulnerabilities – though Moussouris describes two she disclosed to the burgeoning social media app, both of which have now been patched. Rather, Clubhouse, like many high-growth companies, started a bug bounty program before it had the necessary infrastructure or expertise in place to make it work.
It isn't a rare problem but, Moussouris says, it is an avoidable one. SC Media spoke to her about bug bounties at high-growth companies, with Clubhouse as a case study.
People interested in a more detailed description of the vulnerabilities, or in a video with your cat helping demonstrate them, can get that from your blog. But can you catch folks up?
Katie Moussouris: I joined the app right before I decided to do some hacking. There were some API issues, there was also a separate issue of audio being routed through an audio provider in China, and there were some security breadcrumbs kind of afoot. What I heard through other users in Clubhouse was that one of the things that was supposed to fix some of [the API] issues was having the users log out and log back into the app.
I thought, "That seems strange. Why not force everybody to log out if that's an actual technical fix. I wonder what else is happening." I had a spare iPhone, and before I logged out and logged back in again on my main phone, I decided to just see if it would immediately log you out of one device if you register a second phone, like other apps [usually do]. And [I thought], let me log in on a phone with a fresh installation because that should be the latest version of the app.
I logged in on the second phone, and instead of actually logging me out completely, Clubhouse presented me with the welcome screen while I was still connected on the first phone. So clearly there was something wrong. I did a bunch of experiments and figured out that I could join totally new rooms on the second phone and actually hear audio on both, so I was definitely still in both rooms. And if I had speaker privileges in that first room, even when I left that room using the second phone, I was still able to talk, even though my avatar disappeared [and I was immune to moderator tools].
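The general fix for this class of "ghost user" bug is server-side: revoke every prior session the moment a user signs in on a new device, rather than relying on the client to log itself out. The sketch below is purely illustrative (it is not Clubhouse's implementation, and the names SessionStore, login and require_session are hypothetical), but it shows single-active-session enforcement in miniature.

```python
# Illustrative sketch only: a session store that keeps at most one live
# session token per user, so a second-device login cannot leave a
# "ghost" session connected from the first device.

import secrets
import time


class SessionStore:
    def __init__(self):
        self._sessions = {}       # token -> (user_id, issued_at)
        self._active_token = {}   # user_id -> current token

    def login(self, user_id: str) -> str:
        # Revoke any existing session before issuing a new token.
        old = self._active_token.pop(user_id, None)
        if old is not None:
            self._sessions.pop(old, None)

        token = secrets.token_urlsafe(32)
        self._sessions[token] = (user_id, time.time())
        self._active_token[user_id] = token
        return token

    def require_session(self, token: str) -> str:
        # Called on every API request (join room, speak, etc.).
        if token not in self._sessions:
            raise PermissionError("session revoked or unknown")
        return self._sessions[token][0]


if __name__ == "__main__":
    store = SessionStore()
    phone_1 = store.login("katie")          # first device
    phone_2 = store.login("katie")          # second device revokes the first
    print(store.require_session(phone_2))   # "katie"
    try:
        store.require_session(phone_1)      # old token no longer works
    except PermissionError as err:
        print("first device:", err)
```

Run as a script, the second login invalidates the first token, so API calls from the original device fail instead of leaving a phantom speaker in the room.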
That's pretty significant for an app that makes users promise not to record anything.
I know there are a lot of rooms where human rights advocates and journalists will get on the app and talk, because the app's terms of service say it's not okay to record. A lot of those users had a false sense of security.
But when you tried to report the issue, things started to go wrong. On your blog you attribute that in part to problems many fast-growing companies have. What happened here?
When I talked to the folks at Clubhouse they said they had actually invested in security and had hired penetration testers. But the fact that they had started a private bug bounty before they had filled out their engineering team internally – that told me that they were doing things out of order.
Even though their bug bounty is private, it took me weeks to get ahold of the right point of contact, because they didn't have one on the website. There's hardly anything there; there's maybe a support contact. What I ended up doing, as any researcher would, was Googling 'how to report a security issue to Clubhouse.'
The email address I got back, which I sent the first report to, actually belonged to a different company – a project management company that's also called Clubhouse. And because I found it via Google search, I saw they had a disclosure policy. I thought, all right, I'll just send it. I mean, that was a huge misstep, not on my part but on Clubhouse's part, for not following the ISO standard and making it really obvious how to report a security hole. There was no way to report to them as a general member of the public, and because of their unfortunate name collision with a different company, the bug report ended up in somebody else's inbox. I didn't know until the next day, when that company got back to me.
So I didn't get around to digging and digging to find the right contacts for another several days. Even then, I got an automated response. To get a human, I had to point out that they had a 45-day disclosure deadline that started on the day I first tried to report to them, when I ended up sending it to the wrong folks, because, quite frankly, that is the true window of exposure for their customers. I reported it as soon as I possibly could, but the delay in protecting their users was completely on them in terms of not having a solid way to contact [the company]. That's when I got the first human to come back and say, "Apologies for the delay. We're a small company, we're still building out our team."
How can you generalize that for high-growth startups getting into disclosure or bounties?
Starting out, in terms of building software, you're going to have bugs. Some of those bugs are going to be security bugs. And before you even think of having a bug bounty program, there has to be a clear way for people to contact you to report a security vulnerability in case they stumble across one. That was easily a couple of weeks, if not more, of delays in even getting the bug report to the few engineers that they did have.
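One lightweight way to provide that contact point, offered here as a general illustration rather than anything Clubhouse publishes, is a security.txt file (RFC 9116) served from /.well-known/security.txt; the domain and addresses below are placeholders:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
```

Even a plainly listed security contact on the website would have closed the gap Moussouris describes.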
I know you're a small company, I empathize with being a startup and trying to build. But you are too well funded and too popular with users to really be in the denial stage of the five stages of vulnerability response grief.
Hackers will pay attention to the billion-dollar valuation, not how few engineers you have to solve problems.
Right, and what they told me was it was even fewer people.
When they got back to me they said it was fixed, and I went back in to try and test. What was interesting was you could still join a second room, and still appear to be in more than one room. What they explained was that was a separate issue; that was a cache latency issue in the feed display on the client. They said that you're actually logged out, but the feed takes a little while to catch up.
Then we worked on coordinating the blog.
Were there any other issues to learn from?
I had an outstanding question to them. When they invited me to the private bug bounty program, I said, "Well, you know, I and other serious researchers typically refuse the non-disclosure agreement requirement." But I said, "if it does qualify for a bounty under your program rules, can you please donate it to the Pay Equity Now foundation?"
Fast forward to [when] I showed them my blog; I wanted to give them a nice shout-out for donating my bounty. But they couldn't give me an amount. They said, "Our bug bounty platform hasn't gotten back to us yet with a recommended bounty." This shouldn't be something that takes your bug bounty platform more than an hour to figure out.
Something other companies can learn from is not to think that a bug bounty program, even on one of the major platforms, is going to solve most problems. And, frankly, it's not going to solve the non-disclosure issues if you're sitting on something and a researcher is serious about getting it fixed or warning the public. There are plenty of researchers exactly like me over whom the bounty platforms hold no sway. And I think it's those researchers that you want working with you, because they are the extremely experienced ones. You do not want to alienate them by forcing them onto some arbitrary platform that clearly has some delay problems, considering they couldn't come up with a bounty amount for an issue that's been fixed for a while.