
Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Vasco Duarte, Agile Coach, Certified Scrum Master, Certified Product Owner
Latest episode

407 episodes


    BONUS: The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception With Daniel Sodickson

    19/2/2026 | 37min
    What if the next leap in AI isn't about thinking, but about seeing? In this episode, Daniel Sodickson—physicist, medical imaging pioneer, and author of "The Future of Seeing"—argues we're on the edge of a vision revolution that will change medicine, technology, and even human perception itself.
    From Napkin Sketch to Parallel Imaging
    "I was doodling literally on a napkin in a piano bar in Boston and came up with a way to get multiple lines at once. I ran to my mentor and said, 'Hey, I have this idea, never mind my paper.' And he said, 'Who are you again? Sure, why not.' And it worked."
     
    Daniel's journey into imaging began with a happy accident. While studying why MRI couldn't capture the beating heart fast enough, he realized the fundamental bottleneck: MRI machines scan one line at a time, like old CRT screens. His insight—imaging in parallel to capture multiple lines simultaneously—revolutionized the field. This connection between natural vision (our eyes capture entire scenes at once) and artificial imaging systems set him on a 29-year journey exploring how we can see what was once invisible.
    Upstream AI: Changing What We Measure
    "Most often when we envision AI, we think of it as this downstream process. We generate our data, make our image, then let AI loose instead of our brains. To me, that's limited. Why aren't we thinking of tasks that AI can do that no human could ever do?"
     
    Daniel introduces a crucial distinction between "downstream" and "upstream" AI. Downstream AI takes existing images and interprets them—essentially competing with human experts. Upstream AI changes the game entirely by redesigning what data we gather in the first place. If we know a machine learning system will process the output, we can build cheaper, more accessible sensors. Imagine monitoring devices built into beds or chairs that don't produce perfect images but can detect whether you've changed since your last comprehensive scan. AI fills in the gaps using learned context about how bodies and signals behave.
    The Power of Context and Memory
    "The world we see is a lie. Two eyes are not nearly enough to figure out exactly where everything is in space. What the brain is doing is using everything it's learned about the world—how light falls on surfaces, how big people are compared to objects—and filling in what's missing."
     
    Our brains don't passively receive images; they actively construct reality using massive amounts of learned context. Daniel argues we can give imaging machines the same superpower. By training AI on temporal patterns—how healthy bodies change over time, what signals precede disease—we create systems with "memory" that can make sophisticated judgments from incomplete data. Today's signal, combined with your history and learned patterns from millions of others, becomes far more informative than any single pristine image could be.
    From Reactive to Proactive Health
    "I've started to wonder why we use these amazing MRI machines only once we already know you're sick. Why do we use them reactively rather than proactively?"
     
    This question drove Daniel to leave academia after 29 years and join Function Health, a company focused on proactive imaging and testing to catch disease before it develops. The vision: a GPS for your health. By combining regular blood panels, MRI scans, and wearable data, AI can monitor whether you look like yourself or have changed in worrisome ways. The goal isn't replacing expert diagnosis but creating an early warning system that surfaces problems while they're still easily treatable.
    Seeing How We See
    "Sometimes when I'm walking along, everything I'm seeing just fades away. And what I see instead is how I'm seeing. I imagine light bouncing off of things and landing in my eye, this buzz of light zipping around as fast as anything in the universe can go."
     
    After decades studying vision, Daniel experiences the world differently. He finds himself deconstructing his own perception—tracing sight lines, marveling at how we've evolved to turn chaos of sensation into spatially organized information. This meta-awareness extends to his work: every new imaging modality has driven scientific discovery, from telescopes enabling the Copernican Revolution to MRI revealing the living body. We're now at another inflection point where AI doesn't just interpret images but transforms our relationship with perception itself.
     
    In this episode, we refer to An Immense World: How Animal Senses Reveal the Hidden Realms Around Us by Ed Yong on animal perception, and A Path Towards Autonomous Machine Intelligence by Yann LeCun on building AI more like the brain.
     
    About Daniel Sodickson
    Daniel K. Sodickson is a physicist in medicine and chief medical scientist at Function Health. Previously at NYU, and a gold medalist and past president of the International Society for Magnetic Resonance in Medicine, he pioneers AI-driven imaging and is author of The Future of Seeing.

    AI Assisted Coding: How Spending 4x More on Code Quality Doubled Development Speed With Eduardo Ferro

    18/2/2026 | 32min
    What happens when you combine nearly 30 years of engineering experience with AI-assisted coding? In this episode, Eduardo Ferro shares his experiments showing that AI doesn't replace good practices—it amplifies them. The result: doubled productivity while spending four times more on code quality.
    Vibe Coding vs Production-Grade AI Development
    "Vibe coding is a flow-driven, curiosity-based way of building software with AI. It's less about meticulously reviewing each line of code, and more about letting the AI steer the process—perfect for quick experiments, side projects, MVPs, and prototypes."
     
    Edu draws a clear distinction between vibe coding and production AI development. Vibe coding is exploration-focused, where you let AI drive while you learn and discover. Production AI coding is goal-focused, with careful planning, spec definition, and identification of edge cases before implementation. Both use small, safe steps and continuous conversation with the AI, but production code demands architectural thinking, security analysis, and sustainability practices. The key insight is that even vibe coding benefits from engineering discipline—as experiments grow, you need sustainable practices to maintain flexibility.
    How AI Doubled My Productivity
    "I was investing four times more in refactoring, cleanup, deleting code, introducing new tests, improving testability, and security analysis than in generating new features. And at the same time, globally, I think I more or less doubled my pace of work."
     
    Edu's two-month experiment with production code revealed a counterintuitive finding: by spending 4x more time on code quality activities—refactoring, cleanup, test improvement, and security analysis—he actually doubled his overall delivery speed. The secret lies in fast feedback loops. With AI, you can implement a feature, run automated code review, analyze security, prioritize improvements, and iterate—all within an hour. What used to be a day's work happens in a single focused session, and the quality improvements compound over time.
    The Positive Spiral of Code Removal
    "We removed code, so we removed all the features that were not being used. And whenever I remove this code, the next step is to automatically try to see, okay, can I simplify the architecture."
     
    One of the most powerful practices Edu discovered is using AI to accelerate code removal. By connecting product analytics to identify unused features, then using AI to quickly remove them, you trigger a positive spiral: removing code makes architecture changes easier, easier architecture changes enable faster feature development, which leads to more opportunities for simplification. This creates a self-reinforcing cycle that humans historically have been reluctant to pursue because removal was as expensive as creation.
    Preparing the System Before Introducing Change
    "What I want to generate is this new functionality—how should I change my system to make it super easy to introduce this one? It's not about making the change, it's about making the change easy."
     
    Edu describes a practice that was previously too expensive: preparing the system before introducing changes. By analyzing architecture decision records, understanding the existing design, and adapting the codebase first, new features become trivial to implement. AI makes this preparation cheap enough to do routinely. The result is systems that evolve cleanly rather than accumulating technical debt with each new feature.
    AI as an Amplifier: The Double-Edged Sword
    "AI is an amplifier. People who already know how to develop software well will continue to develop it well and faster. People who did not know how to develop software well will probably get in trouble much faster than they would otherwise."
     
    Edu's central metaphor is AI as an amplifier—it doesn't replace engineering judgment, it magnifies its presence or absence. Teams with strong practices will see accelerated improvement; teams without them will generate technical debt faster than ever. This has implications beyond individual productivity: the market will be saturated with solutions, making product discovery and distribution channels more important than implementation capability.
     
    In this episode, we refer to Edu's blog post Fast Feedback, Fast Features: My AI Assisted Coding Experiment and Vibe Coding by Gene Kim.
     
    About Eduardo Ferro
    Edu Ferro is Head of Engineering and Data Platform at ClarityAI, with nearly 30 years' experience. He helps teams deliver value through Lean, XP, and DevOps, blending technical depth with product thinking. Recently he explores AI-assisted product development, sharing insights and experiments on his site eferro.net.
     
    You can connect with Edu Ferro on LinkedIn.

    AI Assisted Coding: Stop Building Features, Start Building Systems with AI With Adam Bilišič

    17/2/2026 | 37min
    What separates vibe coding from truly effective AI-assisted development? In this episode, Adam Bilišič shares his framework for mastering AI-augmented coding, walking through five distinct levels that take developers from basic prompting to building autonomous multi-agent systems.
    Vibe Coding vs AI-Augmented Coding: A Critical Distinction
    "The person who is actually creating the app doesn't have to have in-depth overview or understanding of how the app works in the background. They're essentially a manual tester of their own application, but they don't know how the data structure is, what are the best practices, or the security aspects."
     
    Adam draws a clear line between vibe coding and AI-augmented coding. Vibe coding allows non-developers to create functional applications without understanding the underlying architecture—useful for product owners to create visual prototypes or help clients visualize their ideas. 
    AI-augmented coding, however, is what professional software engineers need to master: using AI tools while maintaining full understanding of the system's architecture, security implications, and best practices. The key difference is that augmented coding lets you delegate repetitive work while retaining deep knowledge of what's happening under the hood.
    From Building Features to Building Systems
    "When you start building systems, instead of thinking 'how can I solve this feature,' you are thinking 'how can I create either a skill, command, sub-agent, or other things which these tools offer, to then do this thing consistently again and again without repetition.'"
     
    The fundamental mindset shift in AI-augmented coding is moving from feature-level thinking to systems-level thinking. Rather than treating each task as a one-off prompt, experienced practitioners capture their thinking process into reusable recipes. This includes documenting how to refactor specific components, creating templates for common patterns, and building skills that encode your decision-making process. The goal is translating your coding practices into something the AI can repeatedly execute for any new feature.
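The "reusable recipe" idea above can be sketched as a prompt template: the decision process is written down once and re-rendered for each new task. This is a minimal illustration only; the recipe text, placeholder names, and function are assumptions for the sketch, not Adam's actual tooling.

```python
# Hypothetical sketch: a refactoring "recipe" captured once as a template,
# so the same process can be replayed for any component without
# re-explaining it to the agent each time.
from string import Template

REFACTOR_RECIPE = Template("""\
Refactor the $component component.
1. List its public interface and current callers.
2. Propose the smallest change that enables: $goal
3. Update the tests before changing behaviour.
4. Run the test suite and report any regressions.
""")

def build_recipe_prompt(component: str, goal: str) -> str:
    """Render the stored recipe for one concrete task."""
    return REFACTOR_RECIPE.substitute(component=component, goal=goal)

prompt = build_recipe_prompt("checkout-form", "client-side validation")
print(prompt)
```

The point is not the template mechanism but the habit: once a recipe exists, "how can I solve this feature" becomes "which recipe applies, with which parameters".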
    Context Management: The Critical Skill For Working With AI
    "People have this tendency to install everything they see on Reddit. They never check what is then loaded within the context just when they open the coding agent. You can check it, and suddenly you see 40 or 50% of your context is taken just by MCPs, and you didn't do anything yet."
     
    One of the most overlooked aspects of AI-assisted coding is context management. Adam reveals that many developers unknowingly fill their context window with MCP (Model Context Protocol) tools they don't need for the current task. The solution is strategic use of sub-agents: when your orchestrator calls a front-end sub-agent, it gets access to Playwright for browser testing, while your backend agent doesn't need that context overhead. Understanding how to allocate context across specialized agents dramatically improves results.
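The "check what's loaded in your context" habit can be made concrete with a rough back-of-the-envelope check: estimate how much of the context window the installed tool (MCP) schemas consume before any real prompt is sent. The 4-characters-per-token heuristic and the 200k-token window are rough assumptions for the sketch, not properties of any specific tool.

```python
# Illustrative sketch: estimate the fraction of the context window
# consumed by tool/MCP schemas alone, before any user prompt.
CONTEXT_WINDOW_TOKENS = 200_000  # assumed window size

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def context_spent_on_tools(tool_schemas: list[str]) -> float:
    """Fraction of the context window taken by tool definitions alone."""
    used = sum(estimate_tokens(s) for s in tool_schemas)
    return used / CONTEXT_WINDOW_TOKENS

# Two made-up schema blobs standing in for installed MCP tools.
schemas = ["{...browser tool schema...}" * 500, "{...db tool schema...}" * 300]
print(f"{context_spent_on_tools(schemas):.0%} of context used before any prompt")
```

If that number approaches the 40-50% Adam mentions, pruning tools (or pushing them down into sub-agents) is the obvious first move.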
    The Five Levels of AI-Augmented Coding
    "If you didn't catch up or change your opinion in the last 2-3 years, I would say we are getting to the point where it will be kind of last chance to do so, because the technology is evolving so fast."
     
    Adam outlines a progression from beginner to expert:
     
    Level 1 - Master of Prompts: Learning to write effective prompts, but constantly repeating context about architecture and preferences

    Level 2 - Configuration Expert: Using files like .cursorrules or CLAUDE.md to codify rules the agent should always follow

    Level 3 - Context Master: Understanding how to manage context efficiently, using MCPs strategically, creating markdown files for reusable information

    Level 4 - Automation Master: Creating custom commands, skills, and sub-agents to automate repetitive workflows

    Level 5 - The Orchestrator: Building systems where a main orchestrator delegates to specialized sub-agents, each running in their own context window

    The Power of Specialized Sub-Agents
    "The sub-agent runs in its own context window, so it's not polluted by whatever the orchestrator was doing. The orchestrator needs to give it enough information so it can do its work."
     
    At the highest level, developers create virtual teams of specialized agents. The orchestrator understands which sub-agent to call for front-end work, which for backend, and which for testing. Each agent operates in a clean context, focused on its specific domain. When the tester finds issues, it reports back to the orchestrator, which can spin up the appropriate agent to fix problems. This creates a self-correcting development loop that dramatically increases throughput.
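The orchestrator pattern described above can be sketched as follows. Agent behaviour is stubbed out here; in a real system each `run` call would invoke an LLM with its own clean context, and the class and method names are assumptions for the sketch, not any particular tool's API.

```python
# Minimal sketch of an orchestrator delegating to specialized sub-agents,
# each holding only its own history (a clean, unpolluted context).
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    name: str
    context: list[str] = field(default_factory=list)  # this agent's history only

    def run(self, task: str) -> str:
        self.context.append(task)           # orchestrator history never leaks in
        return f"{self.name} done: {task}"  # stub standing in for an LLM call

class Orchestrator:
    def __init__(self):
        self.agents = {role: SubAgent(role)
                       for role in ("backend", "frontend", "tester")}

    def delegate(self, role: str, task: str) -> str:
        # Pass only what the sub-agent needs, not the orchestrator's history.
        return self.agents[role].run(task)

    def build_and_verify(self, feature: str) -> list[str]:
        log = [self.delegate("backend", f"implement API for {feature}"),
               self.delegate("frontend", f"build UI for {feature}")]
        # Tester reports back; the orchestrator could re-delegate fixes here.
        log.append(self.delegate("tester", f"test {feature}"))
        return log

results = Orchestrator().build_and_verify("login")
```

Each agent's `context` list stays small and domain-specific, which is the whole point of the pattern: the tester never pays for the backend agent's context, and vice versa.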
     
    In this episode, we refer to the Claude Code subreddit and IndyDevDan's YouTube channel for learning resources.
     
    About Adam Bilišič
    Adam Bilišič is a former CTO of a Swiss company with over 12 years of professional experience in software development, primarily working with Swiss clients. He is now the CEO of NodeonLabs, where he focuses on building AI-powered solutions and educating companies on how to effectively use AI tools, coding agents, and how to build their own custom agents.
     
    You can connect with Adam Bilišič on LinkedIn and learn more at nodeonlabs.com. Download his free guide on the five levels of AI-augmented coding at nodeonlabs.com/ai-trainings/ai-augmented-coding#free-guide.

    BONUS: When AI Decisions Go Wrong at Scale—And How to Prevent It With Ran Aroussi

    16/2/2026 | 41min
    We've spent years asking what AI can do. But the next frontier isn't more capability—it's something far less glamorous and far more dangerous if we get it wrong. In this episode, Ran Aroussi shares why observability, transparency, and governance may be the difference between AI that empowers humans and AI that quietly drifts out of alignment.
    The Gap Between Demos and Deployable Systems
    "I watched well-designed agents make perfectly reasonable decisions based on their training, but in a context where the decision was catastrophically wrong. And there was really no way of knowing what had happened until the damage was already there."
     
    Ran's journey from building algorithmic trading systems to creating MUXI, an open framework for production-ready AI agents, revealed a fundamental truth: the skills needed to build impressive AI demos are completely different from those needed to deploy reliable systems at scale. Coming from the AdTech space where he handled billions of ad impressions daily and over a million concurrent users, Ran brings a perspective shaped by real-world production demands. 
    The moment of realization came when he saw that the non-deterministic nature of AI meant that traditional software engineering approaches simply don't apply. While traditional bugs are reproducible, AI systems can produce different results from identical inputs—and that changes everything about how we need to approach deployment.
    Why Leaders Misunderstand Production AI
    "When you chat with ChatGPT, you go there and it pretty much works all the time for you. But when you deploy a system in production, you have users with unimaginable different use cases, different problems, and different ways of phrasing themselves."
     
    The biggest misconception leaders have is assuming that because AI works well in their personal testing, it will work equally well at scale. When you test AI with your own biases and limited imagination for scenarios, you're essentially seeing a curated experience. 
    Real users bring infinite variation: non-native English speakers constructing sentences differently, unexpected use cases, and edge cases no one anticipated. The input space for AI systems is practically infinite because it's language-based, making comprehensive testing impossible.
    Multi-Layered Protection for Production AI
    "You have to put in deterministic filters between the AI and what you get back to the user."
     
    Ran outlines a comprehensive approach to protecting AI systems in production:
     
    Model version locking: Just as you wouldn't randomly upgrade Python versions without testing, lock your AI model versions to ensure consistent behavior

    Guardrails in prompts: Set clear boundaries about what the AI should never do or share

    Deterministic filters: Language firewalls that catch personal information, harmful content, or unexpected outputs before they reach users

    Comprehensive logging: Detailed traces of every decision, tool call, and data flow for debugging and pattern detection

     
    The key insight is that these layers must work together—no single approach provides sufficient protection for production systems.
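One of those layers, the deterministic filter, can be sketched as a regex-based language firewall that sits between the model and the user. The patterns below are illustrative only, nowhere near exhaustive, and not taken from MUXI or any real product.

```python
# Sketch of a deterministic filter layer: block AI output containing
# obvious personal data before it ever reaches the user.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-shaped number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-number-like digit run
]

def firewall(ai_output: str) -> str:
    """Return the output unchanged, or a safe refusal if PII is detected."""
    for pattern in PII_PATTERNS:
        if pattern.search(ai_output):
            return "[blocked: response contained personal data]"
    return ai_output

print(firewall("The forecast is sunny."))
print(firewall("Her email is jane@example.com"))
```

Because the filter is plain deterministic code, it behaves identically on every run, which is exactly what the non-deterministic model layer cannot promise.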
    Observability in Agentic Workflows
    "With agentic AI, you have decision-making, task decomposition, tools that it decided to call, and what data to pass to them. So there's a lot of things that you should at least be able to trace back."
     
    Observability for agentic systems is fundamentally different from traditional LLM observability. When a user asks "What do I have to do today?", the system must determine who is asking, which tools are relevant to their role, what their preferences are, and how to format the response. 
    Each user triggers a completely different dynamic workflow. Ran emphasizes the need for multi-layered access to observability data: engineers need full debugging access with appropriate security clearances, while managers need topic-level views without personal information. The goal is building a knowledge graph of interactions that allows pattern detection and continuous improvement.
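The multi-layered access idea can be sketched as a trace log with two views over the same events: full detail for engineers, topic-level summaries for managers. The record structure and field names are assumptions for illustration, not MUXI's actual schema.

```python
# Sketch of agentic-workflow tracing: every decision and tool call is
# recorded, and different roles see different projections of the trace.
from dataclasses import dataclass
from typing import Optional
import json
import time

@dataclass
class TraceEvent:
    run_id: str
    step: str             # e.g. "decompose", "tool_call", "respond"
    tool: Optional[str]   # which tool was invoked, if any
    topic: str            # coarse label, safe for manager-level views
    detail: str           # full payload, engineer access only
    ts: float

trace: list[TraceEvent] = []

def record(run_id, step, topic, detail, tool=None):
    trace.append(TraceEvent(run_id, step, tool, topic, detail, time.time()))

def manager_view(events):
    """Topic-level view: no payloads, no personal information."""
    return [{"step": e.step, "topic": e.topic} for e in events]

record("r1", "decompose", "daily-plan", "user asked: what do I have to do today?")
record("r1", "tool_call", "daily-plan", "calendar.list(user=123)", tool="calendar")
print(json.dumps(manager_view(trace)))
```

The engineer view would expose `detail` and `tool` under the appropriate security clearance; aggregating `topic` across runs is where the pattern detection Ran describes would begin.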
    Governance as Human-AI Partnership
    "Governance isn't about control—it's about keeping people in the loop so AI amplifies, not replaces, human judgment."
     
    The most powerful reframing in this conversation is viewing governance not as red tape but as a partnership model. Some actions—like answering support tickets—can be fully automated with occasional human review. Others—like approving million-dollar financial transfers—require human confirmation before execution. The key is designing systems where AI can do the preparation work while humans retain decision authority at critical checkpoints. This mirrors how we build trust with human colleagues: through repeated successful interactions over time, gradually expanding autonomy as confidence grows.
    Building Trust Through Incremental Autonomy
    "Working with AI is like working with a new colleague that will back you up during your vacation. You probably haven't known this person for a month. You've probably known them for years. The first time you went on vacation, they had 10 calls with you, and then slowly it got to 'I'm only gonna call you if it's really urgent.'"
     
    The path to trusting AI systems mirrors how we build trust with human colleagues. You don't immediately hand over complete control—you start with frequent check-ins, observe performance, and gradually expand autonomy as confidence builds. This means starting with heavy human-in-the-loop interaction and systematically reducing oversight as the system proves reliable. The goal is reaching a state where you can confidently say "you don't have to ask permission before you do X, but I still want to approve every Y."
     
    In this episode, we refer to Thinking in Systems by Donella Meadows, Designing Machine Learning Systems by Chip Huyen, and Build a Large Language Model (From Scratch) by Sebastian Raschka.
     
    About Ran Aroussi
    Ran Aroussi is the founder of MUXI, an open framework for production-ready AI agents. He is also the co-creator of yfinance (with 10 million downloads monthly) and founder of Tradologics and Automaze. Ran is the author of the forthcoming book Production-Grade Agentic AI: From Brittle Workflows to Deployable Autonomous Systems, also available at productionaibook.com.
     
    You can connect with Ran Aroussi on LinkedIn.

    BONUS: Why Embedding Sales with Engineering in Stealth Mode Changed Everything for Snowflake With Chris Degnan

    14/2/2026 | 26min
    In this episode, we talk about what it really takes to scale go-to-market from zero to billions. We interview Chris Degnan, a builder of one of the most iconic revenue engines in enterprise software at Snowflake. This conversation is grounded in the transformation described in his book Make It Snow—the journey from early-stage chaos to durable, aligned growth.
    Embedding Sales with Engineering While Still in Stealth
    "I don't expect you to sell anything for 2 years. What I really want you to do is get a ton of feedback and get customers to use the product so that when we come out of stealth mode, we have this world-class product."
     
    Chris joined Snowflake when there were zero customers and the company was still in stealth mode. The counterintuitive move of embedding sales next to engineering so early wasn't about driving immediate revenue, it was about understanding product-market fit. Chris's job was to get customers to try the product, use it for free, and break it. And break it they did. This early feedback led to material changes in the product before general availability. The approach helped shape their ideal customer profile (ICP) and gave the engineering team real-world validation that shaped Snowflake's technical direction. In a world where startups are pressured to show revenue immediately, Snowflake's investors took the opposite approach: focus on building a product people cannot live without first.
    Why Sales and Marketing Alignment Is Existential
    "If we're not driving revenue, if the revenue is not growing, then how are we going to be successful? Revenue was king."
     
    When Denise Persson joined as CMO, she shifted the conversation from marketing qualified leads (MQLs) to qualified meetings for the sales team. This simple reframe eliminated the typical friction between sales and marketing. Both leaders shared challenges openly and held each other accountable. When someone in either organization wasn't being respectful to the other team, they addressed it directly. Chris warns founders against creating artificial friction between sales and marketing: "A lot of founders who are engineers think that they want to create this friction between sales and marketing. And that's the opposite instinct you should have." The key insight is treating sales and marketing as a symbiotic system where revenue is the shared north star.
    Coaching Leaders Through Hypergrowth
    "If there's a problem in one of our organizations, if someone comes with a mentality that is not great for us, we're gonna give direct feedback to those people."
     
    Chris and Denise maintained tight alignment at the top level of their organizations through four CEO transitions. Their partnership created a culture of accountability that cascaded through both teams. When either hired senior people who didn't fit the culture, they investigated and addressed it. The coaching approach wasn't about winning by authority—it was about maintaining partnership and shared accountability for results. This required unlearning traditional management approaches that pit departments against each other and instead fostering genuine collaboration.
    Cultural Behaviors That Scale (And Those That Don't)
    "We got dumb and lazy. We forgot about it. And then we decided, hey, we're gonna go get a little bit more fit, and figure out how to go get the new logos again."
     
    Chris describes himself as a "velocity salesperson" with a hyper-focus on new customer acquisition. This focus worked brilliantly during Snowflake's growth phase—land customers, and the high net retention rate would drive expansion. However, as Snowflake prepared to go public, they took their foot off the gas on new logo acquisition, believing not all new logos were equal. This turned out to be a mistake. In his final year at Snowflake, working with CEO Sridhar Ramaswamy, they redesigned the sales team to reinvigorate the new logo acquisition machine. The lesson: the cultural behaviors that fuel early success must be consciously maintained and sometimes redesigned as you scale.
    Keeping the Message Narrow Before Going Platform
    "Eventually, I know you want to be a platform. But having a targeted market when you're initially launching the company, that people are spending money on, makes it easier for your sales team."
     
    Snowflake intentionally positioned itself in the enterprise data warehousing market—a $10-12 billion annual market with 5,000-7,000 enterprise customers—rather than trying to sound "bigger" as a platform play. The strategic advantage was accessing existing budgets. When selling to large enterprises that go through annual planning processes, fitting into an existing budget means sales cycles of 3-6 months instead of 9-18 months. Yes, competition eventually tried to corner Snowflake as "just a cute data warehouse," but by then they had captured significant market share and could stretch their wings into the broader data cloud opportunity.
    Selling Consumption-Based Products to Fixed-Budget Buyers
    "Don't believe anything I say, try it."
     
    One of Snowflake's hardest challenges was explaining their elastic, consumption-based architecture to procurement and legal teams accustomed to fixed budgets. In 2013-2015, many CIOs still believed data would stay in their data centers. Snowflake's model—where customers could spin up a thousand servers for 4 hours, load data, while analysts ran queries without performance impact—seemed impossible. Chris's approach was simple: set up proof of concepts and pilots. Let the technology speak for itself. The shift from fixed resources to elastic architecture required changing not just technology but entire mindsets about how data infrastructure could work.
     
    About Chris Degnan
    Chris Degnan is a builder of one of the most iconic revenue engines in enterprise software. As the first sales hire at Snowflake, he helped scale the company from zero customers to billions in revenue. Chris co-authored Make It Snow: From Zero to Billions with Denise Persson, documenting their journey of building Snowflake's go-to-market organization. Today, Chris advises early-stage startups on building their go-to-market strategies and works with Iconiq Capital, the venture firm that led Snowflake's Series D round.
     
    You can connect with Chris Degnan on LinkedIn and learn more about the book at MakeItSnowBook.com.


About Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Every weekday, Certified Scrum Master, Agile Coach, and business consultant Vasco Duarte interviews Scrum Masters and Agile Coaches from all over the world to bring you actionable advice and new tips and tricks that improve your craft as a Scrum Master, with daily doses of inspiring conversations. Stay tuned for BONUS episodes where we interview Agile gurus and other thought leaders in the business space to bring you the Agile Business perspective you need to succeed as a Scrum Master. Some of the topics we discuss include: Agile Business, Agile Strategy, Retrospectives, Team Motivation, Sprint Planning, Daily Scrum, Sprint Review, Backlog Refinement, Scaling Scrum, Lean Startup, Test-Driven Development (TDD), Behavior-Driven Development (BDD), Paper Prototyping, QA in Scrum, the role of agile managers, servant leadership, agile coaching, and more!