Shared irresponsibilities and the importance of product privacy: Apple vs Microsoft – ESW #365
This week, we've got data security being both funded AND acquired. We discuss Lacework's fall from unicorn status and why rumors that it went to Fortinet for considerably more than Wiz was willing to pay make sense.
Microsoft Recall and Apple Intelligence are the perfect bookends for a conversation about the importance of handling consumer privacy concerns at launch.
How can the Snowflake breach be both one of the biggest breaches ever and, at the same time, not a breach at all (for Snowflake, at least)? It's time to have a conversation about shared responsibilities, and about when the line between CSP and customer needs to shift.
The CSA's AI Resilience Benchmark leaves much to be desired (like, an actual usable benchmark) and Greg Linares tells a wild story about how the first Microsoft Office 2007 vulnerability was discovered.
Finally, the Light Phone III was announced. Do we finally have a usable minimalist, social media detox-friendly phone option? Will Adrian have to buy one to find out?
Announcements
Dive into cybersecurity with CyberRisk Alliance for exclusive insights from RSA Conference 2024. Explore executive interviews with industry leaders, uncovering visionary perspectives on threats and strategies. Delve into curated articles on trends and innovations, equipping yourself with essential knowledge for today's cyber landscape. Visit securityweekly.com/RSAC for expert guidance and inspiration in navigating cybersecurity challenges confidently.
Hosts
- 1. FUNDINGS: 7 rounds of funding totalling $291.5M
A. Cyberhaven Raises $88 Million to Protect Enterprise Data in the AI Economy
$88M Series C led by Adams Street Partners. "Cyberhaven protects the intellectual property that traditional data loss prevention (DLP), insider risk, and data security posture management (DSPM) tools fail to identify and secure—data like source code, product designs, and customer records. With its pioneering data lineage technology and foundational AI model that understands not only content but also context, Cyberhaven is uniquely able to classify any sensitive information, understand when it is at risk, and take action to protect it."
Are we really post-DSPM already? Sheesh, infosec marketing moves fast. Don't tell Tenable (wink)
B. ThreatModeler Raises $60 Million from Invictus Growth Partners
$60M in an institutional round from Invictus Growth Partners. Invictus also backed Binary Defense's last round in 2022.
ThreatModeler's products don't appear to do any threat modeling, however? I'm confused.
C. Greylock Leads $36 Million Financing for Cybersecurity Startup Seven AI
$36M Seed Round led by Greylock. Founding team from Cybereason. Autonomous threat hunting using AI. Name refers to the "Seven Patterns of AI" as outlined by Cognilytica.
- Hyperpersonalization
- Recognition
- Conversation & Human Interaction
- Predictive Analytics & Decisions
- Goal-Driven Systems
- Autonomous Systems
- Patterns & Anomalies
D. Cybersecurity startup SpyCloud secures $35m to combat account takeovers
$35M round led by CIBC Innovation Banking, the investment arm of the Canadian Imperial Bank of Commerce. "SpyCloud specialises in detecting leaked employee login credentials and protecting consumer accounts through its platform."
$30M venture round (last two rounds were Series C in 2021 and 2022) led by Silver Lake Waterman.
F. YesWeHack Raises 26 Million Euros to Accelerate Its Growth and International Expansion
€26M Series C led by Wendel. French bug bounty company that's also well known in French-speaking Canada. Also happens to have Renaud Deraison, the creator of Nessus, on its board of directors.
"500 customers across 40 countries"
G. Stacklet Raises $14.5 Million Series B
$14.5M Series B led by SineWave Ventures, bringing total funding to $36.5M. Stacklet was co-founded by the team behind the CNCF's (Cloud Native Computing Foundation) Cloud Custodian open source project and community.
The project isn't pure play security, but anything focused on observability and governance is going to have a strong security selling point, particularly with cloud. Arguably, most cloud issues are tied to observability and governance anyway.
- 2. ACQUISITIONS: Formstack Acquires Open Raven
- 3. ACQUISITIONS: Fortinet Acquires Lacework in Surprising Move
We don't know the deal amount, but as Fortinet is a public company, I'd be surprised if we don't see it in an 8-K or the next 10-Q. We'll just have to be patient and persistent. In its investor relations material, Fortinet shares the following rationale for the deal:
- Fills gaps for the company, making it "one of the most comprehensive full stack cloud security solutions available"
- Strengthens the company's position in the CNAPP market, and in general in cloud security
- Gives Fortinet access to Lacework's 220+ patents, most of which are related to AI/ML
- 4. ACQUISITIONS: Tenable expands cloud data security capabilities with Eureka Security acquisition – SiliconANGLE
- 5. NEW PRODUCTS: Private Cloud Compute: A new frontier for AI privacy in the cloud
Apple is going to impressive lengths to keep customer data private as they prepare to release their first GenAI features. They list some tough challenges they've chosen to tackle:
- Cloud AI security and privacy guarantees are difficult to verify and enforce
- It's difficult to provide runtime transparency for AI in the cloud
- It's challenging for cloud AI environments to enforce strong limits on privileged access
They list the core requirements of Private Cloud Compute as:
- Stateless computation on personal user data
- Enforceable guarantees
- No privileged runtime access
- Non-targetability
- Verifiable transparency
- 6. DUMPSTER FIRES: After brutal critiques, Microsoft Recall will get these major privacy and security changes
I think it's time to discuss it, but I want to start from a different perspective. Instead of beginning with outrage, with "how could they possibly be this dumb", what if we considered why Microsoft built this?
What customer need are they trying to solve? What are the benefits if they got it right? Could they achieve those goals without a huge security and privacy nightmare?
- 7. BREACHES: The Snowflake Attack May Be Turning Into One of the Largest Data Breaches Ever
Although, it's not really a Snowflake breach. Snowflake is strangely the source of the breach without being breached itself. The portion of the shared responsibility model that got breached is the part the customer owns.
Right?
Is it that clear cut, or is it more complicated than that? We'll discuss.
EDIT: Rich Mogull also has an excellent take on this, over on the Securosis blog.
- 8. ESSAYS: AI is making the internet worse
I'm going to write a counterpoint to this essay, probably over on The Cyber Why.
TL;DR: Jack thinks AI is making the Internet worse because it's breaking how media and journalism work (diverting traffic away from media sites and their sponsored ads). Sorry Jack, but the AI use case of summarizing media exists because no one wants to view those sponsored ads or pay subscription fees. Publishers/media/journalism have been broken for a long time.
- 9. ESSAYS: Cybersecurity is not a market for lemons. It is a market for silver bullets.
- 10. FRAMEWORKS: AI Resilience: Benchmarking AI Governance & Compliance
I was highly disappointed in this. It promises a benchmark, right in the title. However, 90% of the report simply rehashes the history of AI, explains what AI is, and discusses the different types of AI. "AI Resilience" is defined as a system's resistance, resilience, and plasticity. When it finally gets to the benchmark bit, it proposes to grade AI on three attributes, each with a 1-10 scale:
- AI resistance reflects the system's ability to maintain a required minimal performance in the face of intrusion, manipulation, misuse, and abuse.
- AI resilience focuses on the time, capacity, and capability needed to bounce back to the required minimal performance after an incident.
- AI plasticity serves as the system's gauge indicating its tolerance to “make it or break it” and allows quick action in the case of system failure or allows continuously improving AI resilience.
The report goes on to unhelpfully give an example score: "Such a score could look like (for example) 16:5-8-3 representing the sum of the three pillars and each of the three pillars, separately."
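To be fair, the notation itself is simple arithmetic: three pillar scores on the report's 1-10 scale, summed into a total, written as `total:resistance-resilience-plasticity`. A minimal sketch of that formatting (the pillar names and scale are from the report; the function name and validation are mine):

```python
def ai_resilience_score(resistance: int, resilience: int, plasticity: int) -> str:
    """Format a composite score in the report's notation:
    '<total>:<resistance>-<resilience>-<plasticity>', e.g. '16:5-8-3'."""
    for name, value in (("resistance", resistance),
                        ("resilience", resilience),
                        ("plasticity", plasticity)):
        # The report grades each pillar on a 1-10 scale.
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be between 1 and 10, got {value}")
    total = resistance + resilience + plasticity
    return f"{total}:{resistance}-{resilience}-{plasticity}"

print(ai_resilience_score(5, 8, 3))  # the report's example: 16:5-8-3
```

Of course, the formatting was never the hard part; the report gives no method for arriving at the three inputs in the first place.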
But how do I measure the "plasticity" of an AI system? Measuring the time, capacity, and capability of a people-driven process to recover is a measurable thing, but how would these metrics be applied to an AI system? All these measurements seem to imply that the AI system has these attributes, when they seem to describe the measurement of a human-powered process.
I searched for more details on how to go about practically benchmarking an AI system using these parameters, but found nothing.
- 11. STORIES: How the first Microsoft Office 2007 vulnerability was discovered, or how it wasn’t.
I'm not sure if this is a great example of bug hunting, or a terrible one. Can it be both? As Greg Linares gets increasingly drunk, he tells a story that gives some insight into what it was like to be a bug hunter/vulnerability researcher at one of the original big vulnerability vendors, eEye.
If nothing else, it's an entertaining story, and if he does get around to writing that book, I'll probably pre-order it.
- 12. REPORTS: Debunking the “stupid user” myth in cybersecurity
Some interesting research here that found:
- 78% of participants were highly likely to comply with security "nudges" <- hey, that's the name of the company doing the report!
- 67% of people will look for workarounds if you try to block them from accessing applications they want or need
- TL;DR, blocking and punitive approaches don't work (which I think scientific literature has largely already concluded)
- 13. SQUIRREL: The Light Phone
It finally might be somewhat usable? The joke with the previous Light Phones was that they were so bad you'd do anything not to use them, and that's the point of the product.