Lemonade's Jonathan Jaffe on Trading Feedback for Security Technology
Jonathan Jaffe, CISO at Lemonade, has built what he predicts will be "the perfect AI system" using agent orchestration to automate vulnerability management at machine speed, eliminating the developer burden of false positive security alerts. His unconventional approach to security combines lessons learned from practicing law against major tech companies with a systematic strategy for partnering with security startups to access cutting-edge technology years before competitors.
Jonathan tells David a story that shows how even well-intentioned people will exploit systems if they believe they won't get caught or cause harm. That lesson has shaped his approach to insider threat detection and his insistence on skeptical oversight of automated security controls. His team leverages AI agents that automatically analyze GitHub Dependabot vulnerabilities, determine actual exploitability by examining entire code repositories, and either dismiss false positives or generate proof-of-concept explanations for developers.
Topics discussed:
The evolution from traditional security approaches to AI-powered agent orchestration that operates at machine speed to eliminate false positive vulnerability alerts.
Strategic partnerships with security startups as design partners, trading feedback and data for free access to cutting-edge technology while helping shape market-ready products.
Policy-based security enforcement for cloud-native environments that eliminates the need to manage individual pods, containers, or microservices by applying automated compliance checks.
How legal experience litigating against major tech companies provides unique insight into adversarial thinking and the psychology behind insider threats and system exploitation.
Implementation of AI vulnerability management systems that automatically ingest CVEs, analyze code repositories for exploitable methods, and generate proof-of-concept explanations for developers.
Risk management strategies for adopting startup technology by starting small in non-impactful areas and gradually building trust through demonstrated value and reliability.
Transforming security operations from reactive vulnerability patching to proactive automated threat prevention through intelligent agent-based systems.
Key Takeaways:
Implement policy-based security enforcement for cloud environments to automate compliance across all deployments rather than managing individual pods or containers manually.
Partner with security startups as design partners by trading feedback and data for free access to cutting-edge technology while helping them develop market-ready products.
Build AI agent orchestration platforms that automatically ingest GitHub Dependabot CVEs, analyze code repositories for exploitable methods, and dismiss false positive vulnerability alerts.
Begin startup technology adoption in low-risk or non-impactful areas to build trust and demonstrate value before expanding to critical security functions.
Establish relationships with venture capital communities to gain early access to portfolio companies and emerging security technologies before mainstream adoption.
Apply healthy skepticism to security controls by recognizing that even well-intentioned employees may exploit systems if they believe they won't cause harm or get caught.
Focus AI development efforts on automating time-intensive security tasks that typically require many days of manual developer work into machine-speed operations.
Evaluate business risk before pursuing legal or compliance actions by calculating whether the effort justifies the potential outcomes and settlements.
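The Dependabot triage workflow described above — ingest an alert, check whether the flagged methods are actually reachable in the codebase, then dismiss or escalate — can be sketched as follows. This is an illustrative outline only, assuming simplified inputs; the class and function names (`Alert`, `find_usages`, `triage`) are hypothetical and do not reflect Lemonade's actual system, which uses AI agents for the exploitability analysis rather than a plain text search.

```python
# Hypothetical sketch of a Dependabot alert triage step.
# All names here are illustrative, not from the source.
import re
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Alert:
    cve_id: str
    package: str
    vulnerable_symbols: list  # methods/functions the advisory flags


def find_usages(repo_root: Path, symbols: list) -> dict:
    """Search the repository for call sites of the vulnerable symbols."""
    hits = {}
    for path in repo_root.rglob("*.py"):
        text = path.read_text(errors="ignore")
        for sym in symbols:
            # Match "symbol(" to approximate a call site.
            if re.search(rf"\b{re.escape(sym)}\s*\(", text):
                hits.setdefault(sym, []).append(str(path))
    return hits


def triage(alert: Alert, repo_root: Path) -> dict:
    """Dismiss alerts whose vulnerable methods are never called;
    otherwise surface the call sites a developer (or agent) should review."""
    usages = find_usages(repo_root, alert.vulnerable_symbols)
    if not usages:
        return {"cve": alert.cve_id, "verdict": "dismiss",
                "reason": "vulnerable methods not referenced in this codebase"}
    return {"cve": alert.cve_id, "verdict": "escalate", "call_sites": usages}
```

In a real agent orchestration, the `find_usages` step would be replaced by an LLM agent reading the repository to judge exploitability, and the "escalate" branch would generate the proof-of-concept explanation Jonathan describes.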
Listen to more episodes:
Apple
Spotify
YouTube
Website
--------
20:46
Digital Asset Redemption's Steve Baer on Why Half of Ransomware Victims Shouldn't Pay
Most organizations approach ransomware as a technical problem, but Steve Baer, Field CISO at Digital Asset Redemption, has built his career understanding it as fundamentally human. His team's approach highlights why traditional cybersecurity tools fall short against motivated human adversaries and how proactive intelligence gathering can prevent incidents before they occur.
Steve's insights from the ransomware negotiation business challenge conventional wisdom about cyber extortion. Professional negotiators consistently achieve 73-75% reductions in ransom demands through skilled human interaction, while many victims discover their "stolen" data is actually worthless historical information that adversaries misrepresent as current breaches. Digital Asset Redemption's unique position allows them to purchase stolen organizational data on dark markets before public disclosure, effectively preventing incidents rather than merely responding to them.
Topics discussed:
Building human intelligence networks with speakers of different languages who maintain authentic personas and relationships within dark web adversarial communities.
Professional ransomware negotiation techniques that achieve consistent 73-75% reductions in extortion demands through skilled human interaction rather than automated responses.
The reality that less than half of ransomware victims require payment, as many attacks involve worthless historical data misrepresented as current breaches.
Proactive data acquisition strategies that purchase stolen organizational information on dark markets before public disclosure to prevent incident escalation.
Why AI serves as a useful tool for maintaining context and personas but cannot replace human intelligence when countering human adversaries.
Key Takeaways:
Investigate data value before paying ransoms — many attacks involve worthless historical information that adversaries misrepresent as current breaches.
Engage professional negotiators rather than attempting DIY ransomware negotiations, as specialized expertise consistently achieves 73-75% reductions in demands.
Build relationships within the cybersecurity community since the industry remains small and professionals freely share valuable threat intelligence.
Deploy human intelligence networks with diverse language capabilities to gather authentic threat intelligence from adversarial communities.
Assess AI implementation as a useful tool for maintaining context and personas while recognizing human adversaries require human intelligence to counter.
Listen to more episodes:
Apple
Spotify
YouTube
Website
--------
7:22
Cybermindz’s Mark Alba on Military PTSD Protocols to Treat Security Burnout
The cybersecurity industry has talked extensively about burnout, but Mark Alba, Managing Director of Cybermindz, is taking an unprecedented scientific approach to both measuring and treating it. In this special RSA episode, Mark tells David how his team applies military-grade psychological protocols originally developed for PTSD treatment to address the mental health crisis in security operations centers. Rather than relying on anecdotal evidence of team fatigue, they deploy clinical psychologists to measure resilience through validated psychological assessments and deliver interventions that can measurably change how analysts' brains process stress.
Mark walks through their use of the iRest Protocol, a 20-year-old treatment methodology from Walter Reed Hospital that shifts brain activity from amygdala-based fight-or-flight responses to prefrontal cortex logical thinking. Their team of five PhDs works directly within enterprise SOCs to establish baseline psychological metrics and track improvement over time, giving security leaders unprecedented visibility into their team's actual capacity to handle high-stress incident response.
Topics discussed:
Clinical measurement of cybersecurity burnout through validated psychological assessments, including the Maslach Burnout Inventory, sleep indices, and psychological capital evaluations.
Implementation of the iRest Protocol, a military-developed meditative technique used at Walter Reed Hospital for PTSD treatment.
Real-time resilience scoring through the Cybermindz Resilience Index that combines sleep quality, psychological capital, burnout indicators, and stress response metrics.
Research methodology to establish causation versus correlation between psychological state and SOC performance metrics like mean time to respond and incident response rates.
Neuroscience of cybersecurity roles, including how threat intelligence analysts perform optimally at alpha brain wave levels while incident responders need beta wave states.
Strategic staff rotation based on psychological state rather than just skillset, moving analysts between different cognitive roles to optimize both performance and mental health.
Key Takeaways:
Implement clinical burnout measurement using validated tools like the Maslach Burnout Inventory, sleep indices, and psychological capital assessments rather than relying on subjective burnout indicators in your SOC operations.
Deploy psychometric testing within security operations centers to establish baseline resilience metrics before incidents occur, enabling proactive team management strategies.
Establish brainwave optimization protocols by moving threat intelligence analysts to alpha wave states for creative pattern recognition and incident responders to beta wave states for rapid decision-making.
Correlate psychological metrics with traditional SOC performance indicators like mean time to respond and incident response rates to identify causation patterns.
Rotate staff assignments based on real-time psychological capacity assessments rather than just technical skills, optimizing both performance and mental health outcomes.
Measure psychological capital within your security team to understand cognitive capacity for handling high-stress cyber incidents and threat analysis workloads.
Establish post-incident psychological protocols using clinical psychology techniques to prevent long-term burnout and retention issues following major security breaches.
Create predictive analytics models that combine resilience scoring with operational metrics to forecast SOC team performance and proactively address capacity issues.
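The Cybermindz Resilience Index is described above only as a combination of sleep quality, psychological capital, burnout indicators, and stress response metrics. A minimal sketch of such a composite score, assuming normalized 0-1 inputs and equal weights (both assumptions are mine, not from the episode), might look like this:

```python
# Illustrative composite resilience score. Weights, normalization,
# and metric names are hypothetical; the source describes the
# Cybermindz Resilience Index only at a high level.
def resilience_index(sleep_quality, psych_capital, burnout, stress_response,
                     weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four normalized (0-1) metrics into one 0-100 score.
    Burnout and stress response are inverted: higher raw values
    indicate worse state, so they lower the score."""
    components = (sleep_quality, psych_capital, 1 - burnout, 1 - stress_response)
    return round(100 * sum(w * c for w, c in zip(weights, components)), 1)
```

A score tracked per analyst over time would support the baseline-then-improvement measurement and the psychology-based staff rotation the episode describes.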
Listen to more episodes:
Apple
Spotify
YouTube
Website
--------
7:04
GigaOm’s Howard Holton on Why AI Will Be the OS of Security Work
The cybersecurity industry has witnessed numerous technology waves, but AI's integration at RSA 2025 signals something different from past hype cycles. Howard Holton, Chief Technology Officer at GigaOm, observed AI adoption across virtually every vendor booth, yet argues this represents genuine transformation rather than superficial marketing. His analyst perspective, backed by GigaOm's practitioner-focused research approach, reveals why AI will become the foundational operating system of security work rather than just another tool in an already crowded stack.
Howard's insights challenge conventional thinking about human-machine collaboration in security operations. He explains how natural language understanding finally bridges the gap between human instruction variability and machine execution consistency, solving a problem that has limited automation effectiveness for decades. Howard also explores practical applications where AI handles repetitive security tasks that exhaust human analysts, while humans focus on curiosity-driven investigation and strategic analysis that machines cannot replicate.
Topics discussed:
The fundamental differences between AI's practical applicability and blockchain's limited use cases, despite similar initial hype cycles and market positioning across cybersecurity vendors.
How natural language understanding creates breakthrough human-machine collaboration by allowing AI systems to execute consistent tasks regardless of instruction variability from different analysts.
The biological metaphor for human versus machine intelligence, where humans operate as "chaos machines" with independent processes driven by curiosity rather than single-objective optimization.
GigaOm's practitioner-focused approach to security maturity modeling that measures actual organizational capability rather than vendor feature adoption or platform configuration levels.
Why AI will become the operating system of security work, following the evolution from Microsoft Office to SaaS as foundational business operation layers.
The strategic advantage of AI handling hyper-repetitive security processes that traditionally exhaust human analysts, preserving human focus for curiosity-driven investigation.
How enterprise security teams can identify the optimal intersection between AI's computational strengths and human analytical capabilities within their specific organizational contexts and threat landscapes.
Key Takeaways:
Evaluate your security maturity models to ensure they measure organizational capability and adaptability rather than vendor feature adoption or platform configuration levels.
Identify repetitive security processes that exhaust human analysts and prioritize these for AI automation while preserving human focus for curiosity-driven investigation.
Leverage natural language understanding in AI tools to standardize security process execution despite instruction variability from different team members.
Audit your current technology stack to distinguish between genuinely applicable AI solutions and superficial AI marketing similar to the blockchain hype cycle.
Create practitioner-focused assessment criteria when evaluating security vendors to ensure solutions address real-world enterprise implementation challenges.
Develop language-agnostic security procedures that AI systems can interpret consistently regardless of how different analysts explain the same operational requirements.
Listen to more episodes:
Apple
Spotify
YouTube
Website
Welcome to the Future of Threat Intelligence podcast, where we explore the transformative shift from reactive detection to proactive threat management. Join us as we engage with top cybersecurity leaders and practitioners, uncovering strategies that empower organizations to anticipate and neutralize threats before they strike. Each episode is packed with actionable insights, helping you stay ahead of the curve and prepare for the trends and technologies shaping the future.