Inside the Minds Building the Future: Notes from the Progress Conference 2025
“The world hates change, yet it is the only thing that has brought progress.” - Charles Kettering
I attended the Progress Conference in Berkeley, California, for the second consecutive year, focusing on the emerging field of Progress Studies. It was held at the Lighthaven campus, run by a community of effective altruists and rationalist thinkers. The attendees were a thoughtful mix of tech founders, scientists, economists, policy thinkers, government officials, and some superannuated corporate types (me). Keynotes included Sam Altman (OpenAI), Mike Kratsios (director of the US OSTP and science advisor to the President), Blake Scholl (of Boom Supersonic), Jennifer Pahlka (of Recoding America), and more.
First, many thanks to Ben, Jason, Heike, and the Roots of Progress team, along with their sponsors, and to Tyler Cowen, the field’s catalytic spirit. The structure and conversations were excellent. Second, my gratitude to all the attendees who indulged me with their works, ideas, and dreams. I have not named specific people due to the quasi-Chatham House rules, but this cast of characters made the conference fun in a reflective and chatty way. A final observation: SF is booming as the default center of global AI, and Berkeley is its outpost of weirdness and variant thinking.
I attended 10 to 12 talks, gave an update talk on my AMP Program for raising genius math and physics kids, discussed my positive-alignment work, and had intellectually stimulating conversations with 80+ of the 300+ attendees.
Here are some of the most stimulating themes from listening and talking to many brilliant attendees:
AI exponential growth: I start with AI, since it’s central to the modern world. Most people working in the field still felt we are in the exponential stage of AI development and diffusion (and that eventually we will hit walls that make this a sigmoid). Hence, the noise in the news about current LLM reasoning models and agents leveling out is worth ignoring. We have a clear path to AGI, a concept that was nebulous and slightly taboo to discuss for many years, until the researcher community came to accept it. A new AGI evaluation paper also dropped during the conference from a respectable group of researchers, building on the recent GDPval work.
Artificial General Intelligence: AGI, broadly, is an AI system that matches the best human experts (95th percentile or higher) working remotely on a computer, across a wide range of fields and tasks. Almost everyone in the field now thinks we will get AGI in 2 to 10 years (a wide range, but the modal value is 2 to 5 years). This is in line with expert surveys predicting AGI before 2030.
Superintelligence: The much harder question is superintelligence (SI): systems vastly superior to a large group of humans (say 100 or more) across broad areas of work or research. This is a more contested question, though I expect that in 3 to 5 years most researchers will expect it to happen, especially if we can build recursive self-improvement and self-evolving systems. One final strand was recursive self-improvement for hardware - building new chips, data centers, power plants, etc. We need more players in this space, as Google did with its TPU development.
AI superalignment: If we get AGI relatively soon and SIs within 3 to 5 years after, we need them closely aligned to human values, goals, and needs, not rogue or amoral systems with vastly different aims. A few people talked about a research agenda of Positive Alignment, which aims to cultivate LLMs, reasoning models, and agents that are not only safe but actively promote human flourishing in a pluralistic, polycentric, and contextual way, built into how LLMs and agents are trained. This is AI superalignment, and it becomes more important as we get closer to recursive self-improvement and models self-evolving without humans. Six months ago, I’d have called this science fiction. But now it’s an actual field with published research, with early ideas and systems that work. A final thought from one attendee: “What is the prompt you type into a superintelligence before you send it out into the world to be fully autonomous from humans?”
AI-native organizations: As AI takes over more organizational work, there were multiple discussions on how our decrepit 20th-century organizations - often modeled on 17th- and 18th-century institutions, from the East India Company, Royal Society, Massachusetts Bay Company, and Dutch Republic to the US Congress - need to evolve and adapt. We had numerous conversations about AI-native startups and AI-native scientific labs, about what founders and Principal Investigators are doing, and about early experiments in government agencies and in shifting large corporations to make AI central to all workflows and productivity. Add-ons won’t work; we will have to redesign many organizations from the bottom up, from first principles. One idea was to bring a Pirate Party approach to the US Congress and see if we could get representatives to talk to more constituents, then surface more summarized and nuanced views and ideas on an intelligent Pareto frontier of which bills to submit or vote on. A final question: ownership and liability for AI agents - what is the capability and agency threshold for letting one operate, and what if it hires compute capacity from a rogue nation?
Energy mix (nuclear and solar) and datacenters in space: Energy is still the bottleneck of civilization, the limiter of Progress, as primary energy use per person correlates well with GDP per capita. Compute, intelligence, and GDP all scale with total energy throughput. In the near term, many people think natural gas is the practical bridge; in the long term, the fight is between advanced fission/fusion and massive solar. Energy and compute remain the hard limits on our Kardashev level. Almost every country is falling behind on expanding per-capita energy use over time, with the partial exception of China. As demand for intelligence and large-scale compute keeps exploding, we’ll eventually need huge data centers in space, work that companies like Star Cloud have already begun. The open question is: once we’ve pushed flops per watt, what’s the next frontier?
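For the Kardashev framing, here is a minimal sketch using Sagan’s standard interpolation formula; the ~2e13 W figure for humanity’s current primary power use is my own ballpark, not a number from the talks:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's primary power use is roughly 2e13 W (assumed ballpark figure).
print(f"Today:  K ~ {kardashev(2e13):.2f}")   # ~0.73
print(f"Type I: K = {kardashev(1e16):.2f}")   # 1.00, a full planetary civilization
```

The logarithm is why this is a hard limit: each 0.1 step up the scale requires a 10x increase in total energy throughput.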
Longevity and human lifespans and healthspans: We had a bunch of longevity freaks there. I love these people. They pointed out that we have many treatments that extend the lifespan of lab mice and other small animals by 75% or more, or increase their intelligence or strength. We have been afraid to use these in humans, but I hope the science labs and startups working on this can get unblocked, as that would mean lifespans and healthspans of 120 to 200 years (and longer). If this sounds like crazy talk, read Andrew Steele’s book “Ageless” or papers like the one on “Sex-specific longitudinal reversal of aging in old frail mice” (treatment of old frail male mice with OT+A5i resulted in a remarkable 73% life extension from the time of treatment, and a 14% increase in overall median lifespan). I tell my wife there’s a 30-40% chance of making it to age 200, and she thinks I’m a nutter.
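To see how the paper can report both numbers at once, here is the arithmetic with assumed illustrative lifespans; the month values below are made up for the sketch, and only the 73%/14% relationship comes from the paper:

```python
# Assumed illustrative numbers: treatment starts late in life, so a large
# extension of *remaining* lifespan is a much smaller gain in *overall* lifespan.
median_untreated = 30.0  # assumed overall median lifespan, months
treatment_age = 24.3     # assumed age at treatment, months

remaining = median_untreated - treatment_age       # ~5.7 months left at treatment
remaining_treated = remaining * 1.73               # +73% from the time of treatment
median_treated = treatment_age + remaining_treated

gain = (median_treated - median_untreated) / median_untreated
print(f"Overall median lifespan gain: {gain:.0%}")  # ~14%
```

The flip side is encouraging: the later a treatment starts, the more a modest overall-lifespan number understates its effect on the years you actually have left.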
Reskilling, Swiss internships, and returns to college: The massive changes coming from AI will require rapid reskilling at the country-to-city level, ideally paid for by temporary employer- or industry-level automation taxes. One idea was to adopt the Swiss apprenticeship model, where 70% of high school students get paid to work at companies and learn skills, while 30% follow a general curriculum and go to college. See notes here. Some also discussed how the returns to a college degree should fall while the returns to using AI expertly should rise, especially for doing new science and engineering. This applies both to doing new jobs and to doing existing jobs better. Example: the average Silicon Valley engineer’s workflow changed massively in 2025, decreasing the need for SWEs and increasing it for MLEs and senior engineers. This raised the question: how do we teach people to use AI better? You simply get better through use over time - human experimentation and creativity will win out. One practical tip: ask ChatGPT “teach me how to use you for XYZ task.”
Automation, labor markets, and when will Washington DC wake up: There were many conversations on how increasing automation from AI will affect labor markets, and how little we know about this. The common view was that while DC and some in the Tech Right talk about automation, it likely won’t be an issue in the 2026 election and possibly not even in 2028 (though that’s harder to predict).
The Silicon Valley Tech Canon vs the China Tech Canon: I discussed with a few people the most common books and ideas in the Silicon Valley Tech Canon (eg Deutsch, Isaacson, Paul Graham, Blindsight, the Culture series), and one attendee had a great post on the Chinese Tech Canon. The lists overlap by only about 40%, so it’s interesting to see the divergence. What surprised me about the Chinese Canon was the Legalist philosophers like Han Feizi, Chairman Mao (on guerrilla warfare, propaganda, and motivating people), and the novelist Jin Yong (“Every man must read Jin Yong,” according to Jack Ma).
Bottlenecks: A common theme for the policy crowd was how many silly, anti-growth, anti-tech bottlenecks exist across Western societies, put in place by both liberal and conservative governments. The biggest were in energy, housing, job rules, manufacturing, and Baumol-cost-disease goods. There was a general sense that government has been more a hindrance than a help, but that this can be fixed with better, simpler, and fewer laws and regulations. The tone was not to replace governments with more anarchic or libertarian arrangements (though some attendees leaned that way), but to make them more effective and pragmatic, improving state capacity.
Environmental Policy Self-Harm: This wasn’t explicitly stated, but many conversations took the line that environmentalists were well-intentioned but counterproductive to their own goals. Their policies and legacy have done a great deal of harm by being overinclusive and poorly tailored, often leading to serious environmental damage (eg, shutting down nuclear plants led to much more fossil fuel usage and faster climate change). One paper cited was “Prevented mortality and greenhouse gas emissions from historical and projected nuclear power”, which estimates that “taking into account the effects of the Fukushima accident, we find that nuclear power could additionally prevent an average of 420,000-7.04 million deaths and 80-240 GtCO2-eq emissions due to fossil fuels by midcentury”. If you add excess mortality in Europe from heat because energy is too expensive, the number goes even higher.
Evolution of education: AI and the modern techno-industrial complex have made all current K-12 public and private education obsolete, and likely universities too. Some attendees said this outright, others implied it, and a few were working on alternatives similar to, or even more radical than, Alpha School (the reference/starting point for what a new school should look like). The big idea was that we need more personalization, high agency, teamwork, AI collaboration, and the solving of big problems posed by children and adult mentors; hence fewer classrooms and textbooks, and less linear and rote learning. A few attendees had built intriguing alternative schools to experiment with and push these boundaries. One controversial topic was the value of modern research universities, given extreme political partisanship and courses that seemed far behind the edge of science and industry. Should kids take college credits early (starting at age 12-13) and go straight to work and building things at 18, or hang out for a full undergrad and grad school and start life at ages 22 to 30?
How the US federal government uses AI, from insiders: There is no LLM use in the White House due to the Presidential Records Act, though there may be some servers in the Executive Office buildings that staff can go out to use. Most federal agencies are not using AI models; they see them as different from a normal SaaS app. Congress needs LLMs for coordination and information processing the way it once needed electricity, computers, and the internet, and there is still a divide between pre- and post-internet representatives. There is a jagged frontier in the enforcement of existing laws: they work with simple, passive enforcement, but may become highly oppressive and terrible when AIs observe and enforce them. Finally, the US Constitution (Article IV, Section 4) guarantees a “Republican form of government”, which is notably not a full democracy and has anti-democratic elements in the Senate, the executive, and the judiciary. Too much democracy could be a problem, per the Founders, and it’s unclear what space this leaves for AI. It’s likely all branches of government will be heavy AI users in time.
How society works and maintaining civilizational artifacts we take for granted: Progress also depends on citizens’ understanding of the physical systems that sustain modern life. Few Americans know how our sewage systems, electrical grid, roads and highways, internet and telecom, and other infrastructure work, and that’s a problem as these systems age out and need replacing. A common theme was that Asian countries are ahead of the US here (especially China and South Korea, but surprisingly also parts of India), and that much of the US is already a second-world country when it comes to infrastructure.
Books, Privacy, and Propaganda: Interesting snippets from one speaker: “Books have remained while other objects have become obsolete due to tech change - but will there be a new and better way to interact with ideas and books?” “We need a right of privacy for LLM discussions; revealed preferences are that people are already doing this, having sensitive conversations with LLMs, and we need to formally protect them.” [One attendee who built an ecosystem of hyper-private apps had a fully-private LLM]. And finally, this dark point on our susceptibility to LLM influence: “Never believe that propaganda doesn’t work [to fool] you - they just haven’t found the right levers for you.”
Ambition - why it’s important and how to raise it: Many attendees, especially the hyper-frenetic startup founders and renowned scientists, felt that a limiter in their life and work was self-imposed caps on ambition (eg “I can’t be a great founder and a great husband/dad/wife/mom”, or “I can’t work on extending human lifespan to 120 or higher because many of my colleagues at the university will shun or ridicule me”). We discussed genetic treatments to increase intelligence, strength, lifespan, and healthspan, or even resistance to sickness. The healthy root of ambition was to do meaningful things to help others, and to make the best use of our time and energy.
Thoughts on raising ambition in adults: This revolved around taking long-horizon visions and goals (a decade or longer, or an entire lifetime) and relying on sheer persistence and compounding; discounting the views of peers and social proof; treating “work-life balance” as a bad idea - an artificial split that assumes work and life are separate, when work is part of your life and brings joy, especially if it’s your life’s work; getting past simple trade-offs by overlapping activities and allocating energy better; and remembering that your flaws matter less than you think - double down on strengths and do the weak or flawed things less.
Tips on raising ambition in kids: Show them plenty of history, stories, biographies, and sci-fi about impressive people; have them do many projects, internships, and apprenticeships; and expose them to other ambitious people to make them more ambitious (eg David Senra with the Founders podcast).
Field-building: A final theme was that to make progress on many issues, you have to build entirely novel fields, such as quantum energy; accelerated or hyper-education; AGI; superalignment and positive alignment; and the psychology of ambition or increasing agency. Some conversations were about how new fields had active builders who helped put them together (eg Bohr, Fermi, Bethe, and others for quantum mechanics, or from Bardeen to Goodenough for solid-state physics). Every major leap forward, from quantum physics to superalignment, began when small groups decided to found entirely new fields.
Books Discussed or Recommended
Deutsch, The Beginning of Infinity
Penrose, The Road to Reality
Weinberg, Super Thinking [Mental Models]
Gibson, Paper Belt on Fire
Steele, Ageless
Mann, The Wizard and the Prophet
Cowen, Talent
Brand, Maintenance: Of Everything
Labatut, The Maniac [novel about Von Neumann]
Potter, The Origins of Efficiency
Dunkelman, Why Nothing Works
Walker, Why We Sleep
Panda, The Circadian Code
Klein/Thompson, Abundance