
LessWrong (Curated & Popular)

LessWrong

788 episodes

  • "Schelling Goodness, and Shared Morality as a Goal" by Andrew_Critch

    06/03/2026 | 1h 14min
    Also available in markdown at theMultiplicity.ai/blog/schelling-goodness.

    This post explores a notion I'll call Schelling goodness. Claims of Schelling goodness are not first-order moral verdicts like "X is good" or "X is bad." They are claims about a class of hypothetical coordination games in the sense of Thomas Schelling, where the task being coordinated on is a moral verdict. In each such game, participants aim to give the same response regarding a moral question, by reasoning about what a very diverse population of intelligent beings would converge on, using only broadly shared constraints: common knowledge of the question at hand, and background knowledge from the survival and growth pressures that shape successful civilizations. Unlike many Schelling coordination games, we'll be focused on scenarios with no shared history or knowledge amongst the participants, other than being from successful civilizations.

    Importantly: To say "X is Schelling-good" is not at all the same as saying "X is good". Rather, it will be defined as a claim about what a large class of agents would say, if they were required to choose between saying "X is good" and "X is bad" and aiming for a mutually agreed-upon answer. This distinction is crucial [...]
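
    The coordination game described here is concrete enough to simulate. Below is a minimal toy sketch (not from the post; the noise model and helper names are hypothetical) in which each participant must answer "good" or "bad" and is rewarded only for matching the others, while holding only a noisy estimate of how often the population's first-order verdict is "good". It illustrates the focal-point behavior the outline gestures at: convergence is robust away from ties and fragile near them.

        import random

        def schelling_answer(p_good_estimate):
            # To maximize the chance of matching, a coordinating agent sets
            # aside its own first-order verdict and gives the answer it
            # expects the population to converge on: the majority verdict
            # serves as the focal (Schelling) point.
            return "good" if p_good_estimate > 0.5 else "bad"

        def simulate(n_agents, p_good, noise, seed=0):
            """Fraction of agents giving the modal answer, when each agent
            only has a noisy estimate of the verdict frequency p_good."""
            rng = random.Random(seed)
            answers = [schelling_answer(p_good + rng.gauss(0, noise))
                       for _ in range(n_agents)]
            modal = max(set(answers), key=answers.count)
            return answers.count(modal) / n_agents

        # Far from a tie, coordination is robust even under noisy estimates:
        print(simulate(1000, p_good=0.8, noise=0.1))   # ~1.0: "X is Schelling-good"
        # Near a tie, the focal point is fragile ("Ties are unstable"):
        print(simulate(1000, p_good=0.52, noise=0.1))  # well below 1.0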

    ---

    Outline:

    (01:59) This essay is not very skimmable

    (03:44) Pro tanto morals, "is good", and "is bad"

    (06:39) Part One: The Schelling Participation Effect

    (13:52) What makes it work

    (15:50) The Schelling transformation on questions

    (19:10) Part Two: Schelling morality via the cosmic Schelling population

    (21:12) Scale-invariant adaptations

    (22:54) An example: stealing

    (30:32) Recognition versus endorsement versus adherence

    (31:34) The answer frequencies versus the answer

    (33:59) Ties are rare

    (35:06) Is the cosmic Schelling answer ever knowable with confidence?

    (36:02) Schelling participation effects, revisited

    (38:03) Is this just the mind projection fallacy?

    (39:42) When are cosmic Schelling morals easy to identify?

    (42:59) Scale invariance revisited

    (44:03) A second example: Pareto-positive trade

    (47:45) Harder questions and caveats

    (50:01) Ties are unstable

    (51:43) Isn't this assuming moral realism?

    (53:07) Don't these results depend on the distribution over beings?

    (54:41) What about the is-ought gap?

    (56:29) Tolerance, local variation, and freedom

    (58:25) Terrestrial Schelling-goodness

    (59:42) So what does good mean, again?

    (01:01:08) Implications for AI alignment

    (01:06:15) Conclusion and historical context

    (01:09:16) FAQ

    (01:09:20) Basic misunderstandings

    (01:12:20) More nuanced questions

    ---

    First published:
    February 28th, 2026

    Source:
    https://www.lesswrong.com/posts/TkBCR8XRGw7qmao6z/schelling-goodness-and-shared-morality-as-a-goal

    ---



    Narrated by TYPE III AUDIO.
  • "Maybe there’s a pattern here?" by dynomight

    05/03/2026 | 15min
    1.

    It occurred to me that if I could invent a machine—a gun—which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a large extent supersede the necessity of large armies, and consequently, exposure to battle and disease [would] be greatly diminished.

    Richard Gatling (1861)

    2.

    In 1923, Hermann Oberth published The Rocket to Planetary Spaces, later expanded as Ways to Space Travel. This showed that it was possible to build machines that could leave Earth's atmosphere and reach orbit. He described the general principles of multiple-stage liquid-fueled rockets, solar sails, and even ion drives. He proposed sending humans into space, building space stations and satellites, and travelling to other planets.

    The idea of space travel became popular in Germany. Swept up by these ideas, Johannes Winkler, Max Valier, and Willy Ley formed the Verein für Raumschiffahrt (VfR, Society for Space Travel) in Breslau (now Wrocław, Poland) in 1927. The group rapidly grew to several hundred members. Several served as advisors on Fritz Lang's The Woman in the Moon, and the VfR even began publishing its own journal.

    In 1930, the VfR was granted permission to [...]

    ---

    Outline:

    (00:09) 1.

    (00:36) 2.

    (03:55) 3.

    (06:09) 4.

    (10:33) 5.

    (11:41) 6.

    ---

    First published:
    March 4th, 2026

    Source:
    https://www.lesswrong.com/posts/TjcvjwaDsuea8bmbR/maybe-there-s-a-pattern-here

    ---



    Narrated by TYPE III AUDIO.

  • "OpenAI’s surveillance language has many potential loopholes and they can do better" by Tom Smith

    05/03/2026 | 14min
    (The author is not affiliated with the Department of War or any major AI company.)

    There's a lot of disagreement about the new surveillance language in the OpenAI–Department of War agreement. Some people think it's a significant improvement over the previous language.[1] Others think it patches some issues but still leaves enough loopholes to not make a material difference. Reasonable people disagree about how a court will interpret the language, if push comes to shove.

    But here's something that should be much easier to agree on: the language as written is ambiguous, and OpenAI can do better.

    I don’t think even OpenAI's leadership can be confident about how this language would be interpreted in court, given the wording used and the short amount of time they’ve had to draft it. People with less context and fewer resources will find it even harder to know how all the ambiguities would be resolved.

    Some of the ambiguities seem like they could have been easily clarified despite the small amount of time available, which makes it concerning that they weren't. But more importantly, it should certainly be possible and worthwhile to spend more time on clarifying the language now. Employees are well within [...]

    ---

    Outline:

    (01:27) What the new language says

    (02:46) Ambiguities

    (07:45) Why this isn't unreasonable nit-picking

    (11:04) Some of this would be easy to clarify

    (13:09) OpenAI can do much better

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:
    March 4th, 2026

    Source:
    https://www.lesswrong.com/posts/FSGfzDLFdFtRDADF4/openai-s-surveillance-language-has-many-potential-loopholes

    ---



    Narrated by TYPE III AUDIO.
  • "An Alignment Journal: Coming Soon" by Dan MacKinlay, JessRiedel, Edmund Lau, Daniel Murfet, Scott Aaronson, Jan_Kulveit

    04/03/2026 | 13min
    tl;dr We’re incubating an academic journal for AI alignment: rapid peer review of foundational alignment research that the current publication ecosystem underserves. Key bets: paid attributed review, reviewer-written synthesis abstracts, and targeted automation. Contact us if you’re interested in participating as an author, reviewer, or editor, or if you know someone who might be.

    Experimental Infrastructure for Foundational Alignment Research

    This is the first in a series of “build-in-the-open” updates regarding the incubation of a new peer-reviewed journal dedicated to AI alignment. Later updates will contain much more detail, but we want to put this out soon to draw community participation early. Fill out this form to express your interest in participating as an author, reviewer, editor, developer, manager, or board member, or to recommend someone who might be interested.

    The Core Bet

    Peer review is a crucial public good: it applies scarce researcher time to sorting out which new ideas merit focused attention from the community, but it is undersupplied because individual reviewers are poorly incentivized. Peer review in alignment research is particularly fragmented. While some parts of the alignment research community are served by existing venues, such as journals and ML conferences, there are significant gaps. These gaps arise from a [...]

    ---

    Outline:

    (00:38) Experimental Infrastructure for Foundational Alignment Research

    (01:09) The Core Bet

    (02:27) Operational Design

    (03:56) Scope

    (06:08) Governance

    (06:35) Advisory board

    (09:16) Institutional stewardship

    (10:11) Next steps

    (10:14) Join the founding team

    (11:49) Support us online

    (12:14) Contributors to this document

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    March 3rd, 2026

    Source:
    https://www.lesswrong.com/posts/msnGbm52ZcG3xYcFo/an-alignment-journal-coming-soon

    ---



    Narrated by TYPE III AUDIO.
  • "Frontier AI companies probably can’t leave the US" by Anders Woodruff

    01/03/2026 | 14min
    It's plausible that, over the next few years, US-based frontier AI companies will become very unhappy with the domestic political situation. This could happen as a result of democratic backsliding, weaponization of government power (along the lines of Anthropic's recent dispute with the Department of War), or because of restrictive federal regulations (perhaps including those motivated by concern about catastrophic risk). These companies might want to relocate out of the US.

    However, it would be very easy for the US executive branch to prevent such a relocation, and it likely would. In particular, the executive branch can use existing export controls to prevent companies from moving large numbers of chips, and other legislation to block the financial transactions required for offshoring. Even with the current level of executive attention on AI, it's likely that this relocation would be blocked, and the attention paid to AI will probably increase over time.

    So it seems overall that AI companies are unlikely to be able to leave the country, even if they’d strongly prefer to. This further means that AI companies will be unable to use relocation as a bargaining chip, a tactic they’ve attempted before to prevent regulation.

    Thanks to Alexa Pan [...]

    ---

    Outline:

    (01:34) Frontier companies leaving would be huge news

    (02:59) It would be easy for the US government to prevent AI companies from leaving

    (03:31) The president can block chip exports and transactions

    (05:40) Companies can't get their US assets out against the government's will

    (07:19) Companies can't leave without their US-based assets

    (09:36) Current political will is likely sufficient to prevent the departure of a frontier company

    (13:38) Implications

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:
    February 26th, 2026

    Source:
    https://www.lesswrong.com/posts/4tv4QpqLECTvTyrYt/frontier-ai-companies-probably-can-t-leave-the-us

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.