
Machine Learning Street Talk (MLST)

Latest episode

241 episodes

  • Machine Learning Street Talk (MLST)

    The 3 Laws of Knowledge [César Hidalgo]

    27/12/2025 | 1h 37min

    César Hidalgo has spent years trying to answer a deceptively simple question: what is knowledge, and why is it so hard to move around?

    We all have the intuition that knowledge is just... information. Write it down in a book, upload it to GitHub, train an AI on it, done. But César argues that's completely wrong. Knowledge isn't a thing you can copy and paste. It's more like a living organism that needs the right environment, the right people, and constant exercise to survive.

    Guest: César Hidalgo, Director of the Center for Collective Learning

    Four ideas anchor the conversation:
    1. Knowledge Follows Laws (Like Physics)
    2. You Can't Download Expertise
    3. Why Big Companies Fail to Adapt
    4. The "Infinite Alphabet" of Economies

    If you think AI can just "copy" human knowledge, or that development is just about throwing money at poor countries, or that writing things down preserves them forever, this conversation will change your mind. Knowledge is fragile, specific, and collective. It decays fast if you don't use it. (A toy calculation of the learning-curve law discussed here follows the references below.)

    The Infinite Alphabet [César A. Hidalgo]
    https://www.penguin.co.uk/books/458054/the-infinite-alphabet-by-hidalgo-cesar-a/9780241655672
    https://x.com/cesifoti

    ReScript link: https://app.rescript.info/public/share/eaBHbEo9xamwbwpxzcVVm4NQjMh7lsOQKeWwNxmw0JQ

    ---
    TIMESTAMPS:
    00:00:00 The Three Laws of Knowledge
    00:02:28 Rival vs. Non-Rival: The Economics of Ideas
    00:05:43 Why You Can't Just 'Download' Knowledge
    00:08:11 The Detective Novel Analogy
    00:11:54 Collective Learning & Organizational Networks
    00:16:27 Architectural Innovation: Amazon vs. Barnes & Noble
    00:19:15 The First Law: Learning Curves
    00:23:05 The Samuel Slater Story: Treason & Memory
    00:28:31 Physics of Knowledge: Joule's Cannon
    00:32:33 Extensive vs. Intensive Properties
    00:35:45 Knowledge Decay: Ise Temple & Polaroid
    00:41:20 Absorptive Capacity: Sony & Donetsk
    00:47:08 Disruptive Innovation & S-Curves
    00:51:23 Team Size & The Cost of Innovation
    00:57:13 Geography of Knowledge: Vespa's Origin
    01:04:34 Migration, Diversity & 'Planet China'
    01:12:02 Institutions vs. Knowledge: The China Story
    01:21:27 Economic Complexity & The Infinite Alphabet
    01:32:27 Do LLMs Have Knowledge?

    ---
    REFERENCES:
    Book:
    [00:47:45] The Innovator's Dilemma (Christensen) https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244
    [00:55:15] Why Greatness Cannot Be Planned https://amazon.com/dp/3319155237
    [01:35:00] Why Information Grows https://amazon.com/dp/0465048994
    Paper:
    [00:03:15] Endogenous Technological Change (Romer, 1990) https://web.stanford.edu/~klenow/Romer_1990.pdf
    [00:03:30] A Model of Growth Through Creative Destruction (Aghion & Howitt, 1992) https://dash.harvard.edu/server/api/core/bitstreams/7312037d-2b2d-6bd4-e053-0100007fdf3b/content
    [00:14:55] Organizational Learning: From Experience to Knowledge (Argote & Miron-Spektor, 2011) https://www.researchgate.net/publication/228754233_Organizational_Learning_From_Experience_to_Knowledge
    [00:17:05] Architectural Innovation (Henderson & Clark, 1990) https://www.researchgate.net/publication/200465578_Architectural_Innovation_The_Reconfiguration_of_Existing_Product_Technologies_and_the_Failure_of_Established_Firms
    [00:19:45] The Learning Curve Equation (Thurstone, 1916) https://dn790007.ca.archive.org/0/items/learningcurveequ00thurrich/learningcurveequ00thurrich.pdf
    [00:21:30] Factors Affecting the Cost of Airplanes (Wright, 1936) https://pdodds.w3.uvm.edu/research/papers/others/1936/wright1936a.pdf
    [00:52:45] Are Ideas Getting Harder to Find? (Bloom et al.) https://web.stanford.edu/~chadj/IdeaPF.pdf
    [01:33:00] LLMs / Emergence https://arxiv.org/abs/2506.11135
    Person:
    [00:25:30] Samuel Slater https://en.wikipedia.org/wiki/Samuel_Slater
    [00:42:05] Masaru Ibuka (Sony) https://www.sony.com/en/SonyInfo/CorporateInfo/History/SonyHistory/1-02.html
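
    Wright's 1936 result behind "The First Law: Learning Curves" has a one-line form: every doubling of cumulative production multiplies unit cost by a constant factor. A minimal Python sketch of that law (the 80% learning rate and $100 starting cost are illustrative assumptions, not figures from the episode):

    ```python
    import math

    def wright_unit_cost(first_unit_cost: float, n: int, learning_rate: float) -> float:
        """Wright's law: the cost of unit n is C1 * n**b, where
        b = log2(learning_rate). A learning_rate of 0.8 means each
        doubling of cumulative output leaves 80% of the unit cost."""
        b = math.log2(learning_rate)  # negative "progress exponent"
        return first_unit_cost * n ** b

    # Illustrative run: an 80% learning curve starting at $100/unit.
    for n in (1, 2, 4, 8, 16):
        print(f"unit {n:>2}: ${wright_unit_cost(100.0, n, 0.8):.2f}")
    # unit  1: $100.00 / unit  2: $80.00 / unit  4: $64.00 / ...
    ```

    On log-log axes this is a straight line with slope b, which is the standard way Wright's airplane cost data is presented.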

  • Machine Learning Street Talk (MLST)

    "I Desperately Want To Live In The Matrix" - Dr. Mike Israetel

    24/12/2025 | 2h 55min

    This is a lively, no-holds-barred debate about whether AI can truly be intelligent, conscious, or understand anything at all, and what happens when (or if) machines become smarter than us.

    Dr. Mike Israetel is a sports scientist, entrepreneur, and co-founder of RP Strength (a fitness company). He describes himself as a "dilettante" in AI but brings a fascinating outsider's perspective. Also featuring: Jared Feather (IFBB Pro bodybuilder and exercise physiologist).

    The Big Questions:
    1. When is superintelligence coming?
    2. Does AI actually understand anything?
    3. The Simulation Debate (The Spiciest Part)
    4. Will AI kill us all? (The Doomer Debate)
    5. What happens to human jobs and purpose?
    6. Do we need suffering?

    Mike's channel: https://www.youtube.com/channel/UCfQgsKhHjSyRLOp9mnffqVg
    RESCRIPT INTERACTIVE PLAYER: https://app.rescript.info/public/share/GVMUXHCqctPkXH8WcYtufFG7FQcdJew_RL_MLgMKU1U

    ---
    TIMESTAMPS:
    00:00:00 Introduction & Workout Demo
    00:04:15 ASI Timelines & Definitions
    00:10:24 The Embodiment Debate
    00:18:28 Neutrinos & Abstract Knowledge
    00:25:56 Can AI Learn From YouTube?
    00:31:25 Diversity of Intelligence
    00:36:00 AI Slop & Understanding
    00:45:18 The Simulation Argument: Fire & Water
    00:58:36 Consciousness & Zombies
    01:04:30 Do Reasoning Models Actually Reason?
    01:12:00 The Live Learning Problem
    01:19:15 Superintelligence & Benevolence
    01:28:59 What is True Agency?
    01:37:20 Game Theory & The "Kill All Humans" Fallacy
    01:48:05 Regulation & The China Factor
    01:55:52 Mind Uploading & The Future of Love
    02:04:41 Economics of ASI: Will We Be Useless?
    02:13:35 The Matrix & The Value of Suffering
    02:17:30 Transhumanism & Inequality
    02:21:28 Debrief: AI Medical Advice & Final Thoughts

    ---
    REFERENCES:
    Paper:
    [00:10:45] Alchemy and Artificial Intelligence (Dreyfus) https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf
    [00:10:55] The Chinese Room Argument (John Searle) https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
    [00:11:05] The Symbol Grounding Problem (Stevan Harnad) https://arxiv.org/html/cs/9906002
    [00:23:00] Attention Is All You Need https://arxiv.org/abs/1706.03762
    [00:45:00] GPT-4 Technical Report https://arxiv.org/abs/2303.08774
    [01:45:00] Anthropic Agentic Misalignment Paper https://www.anthropic.com/research/agentic-misalignment
    [02:17:45] Retatrutide https://pubmed.ncbi.nlm.nih.gov/37366315/
    Organization:
    [00:15:50] CERN https://home.cern/
    [01:05:00] METR Long Horizon Evaluations https://evaluations.metr.org/
    MLST Episode:
    [00:23:10] MLST: Llion Jones - Inventors' Remorse https://www.youtube.com/watch?v=DtePicx_kFY
    [00:50:30] MLST: Blaise Agüera y Arcas Interview https://www.youtube.com/watch?v=rMSEqJ_4EBk
    [01:10:00] MLST: David Krakauer https://www.youtube.com/watch?v=dY46YsGWMIc
    Event:
    [00:23:40] ARC Prize/Challenge https://arcprize.org/
    Book:
    [00:24:45] The Brain Abstracted https://www.amazon.com/Brain-Abstracted-Simplification-Philosophy-Neuroscience/dp/0262548046
    [00:47:55] Machines Who Think (Pamela McCorduck) https://www.amazon.com/Machines-Who-Think-Artificial-Intelligence/dp/1568812051
    [01:23:15] The Singularity Is Nearer (Ray Kurzweil) https://www.amazon.com/Singularity-Nearer-Ray-Kurzweil-ebook/dp/B08Y6FYJVY
    [01:27:35] A Fire Upon The Deep (Vernor Vinge) https://www.amazon.com/Fire-Upon-Deep-S-F-MASTERWORKS-ebook/dp/B00AVUMIZE/
    [02:04:50] Deep Utopia (Nick Bostrom) https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642
    [02:05:00] Technofeudalism (Yanis Varoufakis) https://www.amazon.com/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1685891241
    Visual Context Needed:
    [00:29:40] AT-AT Walker (Star Wars) https://starwars.fandom.com/wiki/All_Terrain_Armored_Transport
    Person:
    [00:33:15] Andrej Karpathy https://karpathy.ai/
    Video:
    [01:40:00] Mike Israetel vs Liron Shapira AI Doom Debate https://www.youtube.com/watch?v=RaDWSPMdM4o
    Company:
    [02:26:30] Examine.com https://examine.com/

  • Machine Learning Street Talk (MLST)

    Making deep learning perform real algorithms with Category Theory (Andrew Dudzik, Petar Veličković, Taco Cohen, Bruno Gavranović, Paul Lessard)

    22/12/2025 | 43min

    We often think of Large Language Models (LLMs) as all-knowing, but as the team reveals, they still struggle with the logic of a second-grader. Why can't ChatGPT reliably add large numbers? Why does it "hallucinate" the laws of physics? The answer lies in the architecture. This episode explores how Category Theory, an ultra-abstract branch of mathematics, could provide the "periodic table" for neural networks, turning the "alchemy" of modern AI into a rigorous science.

    In this deep-dive exploration, Andrew Dudzik, Petar Veličković, Taco Cohen, Bruno Gavranović, and Paul Lessard join host Tim Scarfe to discuss the fundamental limitations of today's AI and the radical mathematical framework that might fix them.

    TRANSCRIPT: https://app.rescript.info/public/share/LMreunA-BUpgP-2AkuEvxA7BAFuA-VJNAp2Ut4MkMWk

    ---
    Key Insights in This Episode:
    * The "Addition" Problem: Andrew Dudzik explains why LLMs don't actually "know" math; they just recognize patterns. When you change a single digit in a long string of numbers, the pattern breaks because the model lacks the internal "machinery" to perform a simple carry operation.
    * Beyond Alchemy: Deep learning is currently in its "alchemy" phase: we have powerful results, but we lack a unifying theory. Category Theory is proposed as the framework to move AI from trial-and-error to principled engineering. [00:13:49]
    * Algebra with Colors: To make Category Theory accessible, the guests use vivid analogies, like thinking of matrices as magnets with colors that only snap together when the types match. This "partial compositionality" is the secret to building more complex internal reasoning. [00:09:17] (A minimal sketch of this typed composition follows the references below.)
    * Synthetic vs. Analytic Math: Paul Lessard breaks down the philosophical shift needed in AI research: moving from "analytic" math (what things are made of) to "synthetic" math (how things behave and compose). [00:23:41]

    ---
    Why This Matters for AGI:
    If we want AI to solve the world's hardest scientific problems, it can't just be a "stochastic parrot." It needs to internalize the rules of logic and computation. By imbuing neural networks with categorical priors, researchers are attempting to build a future where AI doesn't just predict the next word; it understands the underlying structure of the universe.

    ---
    TIMESTAMPS:
    00:00:00 The Failure of LLM Addition & Physics
    00:01:26 Tool Use vs Intrinsic Model Quality
    00:03:07 Efficiency Gains via Internalization
    00:04:28 Geometric Deep Learning & Equivariance
    00:07:05 Limitations of Group Theory
    00:09:17 Category Theory: Algebra with Colors
    00:11:25 The Systematic Guide of Lego-like Math
    00:13:49 The Alchemy Analogy & Unifying Theory
    00:15:33 Information Destruction & Reasoning
    00:18:00 Pathfinding & Monoids in Computation
    00:20:15 System 2 Reasoning & Error Awareness
    00:23:31 Analytic vs Synthetic Mathematics
    00:25:52 Morphisms & Weight Tying Basics
    00:26:48 2-Categories & Weight Sharing Theory
    00:28:55 Higher Categories & Emergence
    00:31:41 Compositionality & Recursive Folds
    00:34:05 Syntax vs Semantics in Network Design
    00:36:14 Homomorphisms & Multi-Sorted Syntax
    00:39:30 The Carrying Problem & Hopf Fibrations

    Petar Veličković (GDM) https://petar-v.com/
    Paul Lessard https://www.linkedin.com/in/paul-roy-lessard/
    Bruno Gavranović https://www.brunogavranovic.com/
    Andrew Dudzik (GDM) https://www.linkedin.com/in/andrew-dudzik-222789142/

    ---
    REFERENCES:
    Model:
    [00:01:05] Veo https://deepmind.google/models/veo/
    [00:01:10] Genie https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/
    Paper:
    [00:04:30] Geometric Deep Learning Blueprint https://arxiv.org/abs/2104.13478 | https://www.youtube.com/watch?v=bIZB1hIJ4u8
    [00:16:45] AlphaGeometry https://arxiv.org/abs/2401.08312
    [00:16:55] AlphaCode https://arxiv.org/abs/2203.07814
    [00:17:05] FunSearch https://www.nature.com/articles/s41586-023-06924-6
    [00:37:00] Attention Is All You Need https://arxiv.org/abs/1706.03762
    [00:43:00] Categorical Deep Learning https://arxiv.org/abs/2402.15332
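
    The "magnets with colors" analogy from [00:09:17] is exactly the typing rule that makes composition partial: an arrow A -> B only snaps onto an arrow B -> C. A minimal sketch of that partial compositionality (the Morphism class and the token/vector type names are illustrative, not code from the episode):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Morphism:
        """An arrow src -> dst; composition is defined only when types meet."""
        src: str
        dst: str
        name: str

        def __rshift__(self, other: "Morphism") -> "Morphism":
            # The "magnet" rule: colors must match or the pieces don't snap.
            if self.dst != other.src:
                raise TypeError(
                    f"{self.name}: {self.src}->{self.dst} does not compose "
                    f"with {other.name}: {other.src}->{other.dst}"
                )
            return Morphism(self.src, other.dst, f"{other.name} . {self.name}")

    embed  = Morphism("tokens",  "vectors", "embed")    # tokens  -> vectors
    attend = Morphism("vectors", "vectors", "attend")   # vectors -> vectors
    decode = Morphism("vectors", "tokens",  "decode")   # vectors -> tokens

    print((embed >> attend >> decode).name)   # decode . attend . embed
    # (embed >> decode) >> attend raises TypeError: tokens != vectors.
    ```

    Shape checking in any linear algebra library enforces the same gate for matrices; the categorical move is to treat that typing discipline itself as the algebra a network must respect.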

  • Machine Learning Street Talk (MLST)

    Are AI Benchmarks Telling The Full Story? [SPONSORED] (Andrew Gordon and Nora Petrova - Prolific)

    20/12/2025 | 16min

    Is a car that wins a Formula 1 race the best choice for your morning commute? Probably not. In this sponsored deep dive with Prolific, we explore why the same logic applies to artificial intelligence. While models are currently shattering records on technical exams, they often fail the most important test of all: the human experience.

    Why High Benchmark Scores Don't Mean Better AI
    Joining us are Andrew Gordon (Staff Researcher in Behavioral Science) and Nora Petrova (AI Researcher) from Prolific. They reveal the hidden flaws in how we currently rank AI and introduce a more rigorous, "humane" way to measure whether these models are actually helpful, safe, and relatable for real people.

    ---
    Key Insights in This Episode:
    * The F1 Car Analogy: Andrew explains why a model that excels at "Humanity's Last Exam" might be a nightmare for daily use. Technical benchmarks often ignore the nuances of human communication and adaptability.
    * The "Wild West" of AI Safety: As users turn to AI for sensitive topics like mental health, Nora highlights the alarming lack of oversight and the "thin veneer" of safety training, citing recent controversial incidents like Grok-3's "Mecha Hitler."
    * Fixing the "Leaderboard Illusion": The team critiques popular rankings like Chatbot Arena, discussing how anonymous, unstratified voting can lead to biased results and how companies can "game" the system.
    * The Xbox Secret to AI Ranking: Discover how Prolific uses TrueSkill, the same algorithm Microsoft developed for Xbox Live matchmaking, to create a fairer, more statistically sound leaderboard for LLMs. (A simplified version of the update rule is sketched after the references below.)
    * The Personality Gap: Early data from the HUMAINE leaderboard suggests that while AI is getting smarter, it is actually performing worse on metrics like personality, culture, and "sycophancy" (the tendency for models to become annoying people-pleasers).

    ---
    About the HUMAINE Leaderboard:
    Moving beyond simple "A vs. B" testing, the researchers discuss their new framework, which samples participants based on census data (age, ethnicity, political alignment). By using a representative sample of the general public rather than just tech enthusiasts, they are building a standard that reflects the values of the real world.

    Are we building models for benchmarks, or are we building them for humans? It's time to change the scoreboard.

    ReScript link: https://app.rescript.info/public/share/IDqwjY9Q43S22qSgL5EkWGFymJwZ3SVxvrfpgHZLXQc

    ---
    TIMESTAMPS:
    00:00:00 Introduction & The Benchmarking Problem
    00:01:58 The Fractured State of AI Evaluation
    00:03:54 AI Safety & Interpretability
    00:05:45 Bias in Chatbot Arena
    00:06:45 Prolific's Three Pillars Approach
    00:09:01 TrueSkill Ranking & Efficient Sampling
    00:12:04 Census-Based Representative Sampling
    00:13:00 Key Findings: Culture, Personality & Sycophancy

    ---
    REFERENCES:
    Paper:
    [00:00:15] MMLU https://arxiv.org/abs/2009.03300
    [00:05:10] Constitutional AI https://arxiv.org/abs/2212.08073
    [00:06:45] The Leaderboard Illusion https://arxiv.org/abs/2504.20879
    [00:09:41] HUMAINE Framework Paper https://huggingface.co/blog/ProlificAI/humaine-framework
    Company:
    [00:00:30] Prolific https://www.prolific.com
    [00:01:45] Chatbot Arena https://lmarena.ai/
    Person:
    [00:00:35] Andrew Gordon https://www.linkedin.com/in/andrew-gordon-03879919a/
    [00:00:45] Nora Petrova https://www.linkedin.com/in/nora-petrova/
    Algorithm:
    [00:09:01] Microsoft TrueSkill https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/
    Leaderboard:
    [00:09:21] Prolific HUMAINE Leaderboard https://www.prolific.com/humaine
    [00:09:31] HUMAINE HuggingFace Space https://huggingface.co/spaces/ProlificAI/humaine-leaderboard
    [00:10:21] Prolific AI Leaderboard Portal https://www.prolific.com/leaderboard
    Dataset:
    [00:09:51] Prolific Social Reasoning RLHF Dataset https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf
    Organization:
    [00:10:31] MLCommons https://mlcommons.org/
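
    For context on the TrueSkill mention at [00:09:01]: each model's skill is a Gaussian N(mu, sigma^2), and every pairwise human vote shifts both Gaussians. A simplified two-player, win/loss-only version of the update (standard textbook form with the conventional default prior; Prolific's actual configuration isn't specified in the episode):

    ```python
    import math

    def pdf(x: float) -> float:   # standard normal density
        return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

    def cdf(x: float) -> float:   # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

    def trueskill_update(winner, loser, beta=25 / 6):
        """One win/loss update; winner and loser are (mu, sigma) pairs."""
        (mu_w, s_w), (mu_l, s_l) = winner, loser
        c = math.sqrt(2 * beta**2 + s_w**2 + s_l**2)  # total uncertainty
        t = (mu_w - mu_l) / c
        v = pdf(t) / cdf(t)   # mean shift: large for upsets, small for expected wins
        w = v * (v + t)       # variance shrink factor, in (0, 1)
        return (
            (mu_w + s_w**2 / c * v, s_w * math.sqrt(1 - s_w**2 / c**2 * w)),
            (mu_l - s_l**2 / c * v, s_l * math.sqrt(1 - s_l**2 / c**2 * w)),
        )

    # Two unseen models start at the conventional prior mu=25, sigma=25/3.
    a, b = (25.0, 25 / 3), (25.0, 25 / 3)
    a, b = trueskill_update(a, b)   # one vote for model A
    print(a, b)  # A's mean rises, B's falls, and both sigmas shrink
    ```

    Because sigma shrinks fastest where an outcome is most informative, pairing models with overlapping skill Gaussians gives the "efficient sampling" the timestamps mention.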

  • Machine Learning Street Talk (MLST)

    The Mathematical Foundations of Intelligence [Professor Yi Ma]

    13/12/2025 | 1h 39min

    What if everything we think we know about AI understanding is wrong? Is compression the key to intelligence? Or is there something more: a leap from memorization to true abstraction? In this fascinating conversation, we sit down with Professor Yi Ma, world-renowned expert in deep learning, IEEE/ACM Fellow, and author of the groundbreaking new book "Learning Deep Representations of Data Distributions". Professor Ma challenges our assumptions about what large language models actually do, reveals why 3D reconstruction isn't the same as understanding, and presents a unified mathematical theory of intelligence built on just two principles: parsimony and self-consistency.

    **SPONSOR MESSAGES START**
    Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
    cyber•Fund (https://cyber.fund/?utm_source=mlst) is a founder-led investment firm accelerating the cybernetic economy.
    Hiring an SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
    Submit investment deck: https://cyber.fund/contact?utm_source=mlst
    **SPONSOR MESSAGES END**

    Key Insights:
    * LLMs Don't Understand, They Memorize: Language models process text (already compressed human knowledge) using the same mechanism we use to learn from raw data.
    * The Illusion of 3D Vision: Sora, NeRFs, and similar systems that can reconstruct 3D scenes still fail miserably at basic spatial reasoning.
    * "All Roads Lead to Rome": Why adding noise is necessary for discovering structure.
    * Why Gradient Descent Actually Works: Natural optimization landscapes are surprisingly smooth, a "blessing of dimensionality".
    * Transformers from First Principles: Transformer architectures can be mathematically derived from compression principles. (A toy computation of the coding-rate objective behind "parsimony" follows the references below.)

    INTERACTIVE AI TRANSCRIPT PLAYER w/ REFS (ReScript): https://app.rescript.info/public/share/Z-dMPiUhXaeMEcdeU6Bz84GOVsvdcfxU_8Ptu6CTKMQ

    About Professor Yi Ma:
    Yi Ma is the inaugural director of the School of Computing and Data Science at the University of Hong Kong and a visiting professor at UC Berkeley.
    https://people.eecs.berkeley.edu/~yima/
    https://scholar.google.com/citations?user=XqLiBQMAAAAJ&hl=en
    https://x.com/YiMaTweets

    Slides from this conversation: https://www.dropbox.com/scl/fi/sbhbyievw7idup8j06mlr/slides.pdf?rlkey=7ptovemezo8bj8tkhfi393fh9&dl=0

    Related Talks by Professor Ma:
    - Pursuing the Nature of Intelligence (ICLR): https://www.youtube.com/watch?v=LT-F0xSNSjo
    - Earlier talk at Berkeley: https://www.youtube.com/watch?v=TihaCUjyRLM

    ---
    TIMESTAMPS:
    00:00:00 Introduction
    00:02:08 The First Principles Book & Research Vision
    00:05:21 Two Pillars: Parsimony & Consistency
    00:09:50 Evolution vs. Learning: The Compression Mechanism
    00:14:36 LLMs: Memorization Masquerading as Understanding
    00:19:55 The Leap to Abstraction: Empirical vs. Scientific
    00:27:30 Platonism, Deduction & The ARC Challenge
    00:35:57 Specialization & The Cybernetic Legacy
    00:41:23 Deriving Maximum Rate Reduction
    00:48:21 The Illusion of 3D Understanding: Sora & NeRF
    00:54:26 All Roads Lead to Rome: The Role of Noise
    01:00:14 Benign Non-Convexity: Why Optimization Works
    01:06:35 Double Descent & The Myth of Overfitting
    01:14:26 Self-Consistency: Closed-Loop Learning
    01:21:03 Deriving Transformers from First Principles
    01:30:11 Verification & The Kevin Murphy Question
    01:34:11 CRATE vs. ViT: White-Box AI & Conclusion

    ---
    REFERENCES:
    Book:
    [00:03:04] Learning Deep Representations of Data Distributions https://ma-lab-berkeley.github.io/deep-representation-learning-book/
    [00:18:38] A Brief History of Intelligence https://www.amazon.co.uk/BRIEF-HISTORY-INTELLIGEN-HB-Evolution/dp/0008560099
    [00:38:14] Cybernetics https://mitpress.mit.edu/9780262730099/cybernetics/
    Book (Yi Ma):
    [00:03:14] An Invitation to 3-D Vision https://link.springer.com/book/10.1007/978-0-387-21779-6
    <TRUNC> remaining refs on the ReScript link / YT
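
    To make the "parsimony" pillar concrete: the book's central quantity is the coding rate R(Z) = 1/2 log det(I + (d / (n eps^2)) Z Z^T), and the Maximum Coding Rate Reduction (MCR^2) objective discussed at [00:41:23] maximizes the rate of all features minus the rates of the per-class subsets. A small numpy sketch (the random Gaussian features and the eps value are illustrative stand-ins, not the episode's data):

    ```python
    import numpy as np

    def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
        """R(Z) = 1/2 log det(I + d/(n eps^2) Z Z^T) for Z of shape (d, n):
        roughly the bits needed to code n features up to distortion eps."""
        d, n = Z.shape
        _, logdet = np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * (Z @ Z.T))
        return 0.5 * logdet

    def rate_reduction(Z: np.ndarray, labels: np.ndarray) -> float:
        """MCR^2 objective: expand the whole feature set, compress each class."""
        n = Z.shape[1]
        within = sum(
            (np.sum(labels == k) / n) * coding_rate(Z[:, labels == k])
            for k in np.unique(labels)
        )
        return coding_rate(Z) - within

    rng = np.random.default_rng(0)
    d, n = 8, 200
    labels = rng.integers(0, 2, size=n)
    mixed = rng.normal(size=(d, n))                              # classes overlap
    split = mixed + 3.0 * np.outer(rng.normal(size=d), labels)   # classes separate
    print(rate_reduction(mixed, labels) < rate_reduction(split, labels))  # True
    ```

    Roughly speaking, CRATE (compared with ViT at the end of the episode) derives each transformer layer as an incremental optimization step on a sparse variant of this objective.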


About Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Podcast website
