6 Comments

Yes to more experiments!! In case of interest, I wrote something similar this week framed as “Treasure Hunting” - also quoting Stewart Brand’s How Buildings Learn :) https://mysticalsilicon.substack.com/p/treasure-hunting

author

Excellent!


Love this! Experiment boldly.

I think it's also important to delineate exactly *what* about the future is important to know. If you knew 20 years ago that the Washington football team was going to change its name in 2022, would that really be beneficial? (Ignoring "I could make a bet they wouldn't" scenarios.) Even knowing ahead of time that a technology will exist is of limited value, since you can't actually use its features yet, and the devil is often in the details of those features. (If you told someone in 2005 that the iPhone would exist in 5 years, they might very well prepare for use cases that it couldn't actually support.)

Cal Newport writes about this, saying that many people when planning for the future dream about the size of their bank account but don't get sufficiently specific about the *lifestyle* they want. The exact amount of money you own is far less relevant to your future happiness than whether you're living near the things that bring you peace and joy.


Sometimes the true demandscape includes resilience in the face of extreme events (like weather, ecosystem collapse, pandemics). Just-in-time cost shaving (as in the Texas grid failure mess), as far as I can see, works against meeting that demand. What kind of economic behavior will help us against sudden black swans, and against the ongoing, grinding deterioration of climate, soil, and species diversity and populations? How do we harness that empirical ingenuity for long-term needs?


If foundational models become the new oracle, governed by the laws and regulations BigTech sets upon itself in a centralized manner, what then? What if the singularity brings about governance by self-governing A.I.?

author

I'm relatively unworried about that, actually. A partial examination of why is here - https://www.strangeloopcanon.com/p/agi-strange-equation
