Thank you for writing this piece. It is hard to argue against the claim that GDP growth and economic growth are the surest ways to reduce "suffering" in the mid-term; 100% agree.
Have you come across any explanation for the apparent decline in self-reported happiness in the countries that are currently experiencing the most growth? I did an analysis recently comparing GDP per capita vs. HDI vs. happiness over the past 10 years and am having trouble making sense of what it means.
https://www.notion.so/henriquecruz/Happiness-Report-75cda1301f244aa9807d1767f8ac29a4
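[Editor's note: as a rough illustration of the kind of comparison described in the comment above, here is a minimal sketch in Python. It assumes three locally saved CSV extracts, e.g. World Bank GDP per capita, UNDP HDI, and World Happiness Report "life ladder" scores; the file names and column names are hypothetical, not from the original analysis.]

```python
# Rough sketch: compare GDP per capita, HDI, and self-reported happiness over a decade.
# Assumes hypothetical CSV extracts with columns: country, year, <metric>.
import pandas as pd

gdp = pd.read_csv("gdp_per_capita.csv")    # columns: country, year, gdp_per_capita
hdi = pd.read_csv("hdi.csv")               # columns: country, year, hdi
happiness = pd.read_csv("happiness.csv")   # columns: country, year, life_ladder

# Merge into one panel and keep the most recent ten years of data.
panel = gdp.merge(hdi, on=["country", "year"]).merge(happiness, on=["country", "year"])
panel = panel[panel["year"] >= panel["year"].max() - 9]

# Cross-sectional correlations between the three measures.
print(panel[["gdp_per_capita", "hdi", "life_ladder"]].corr())

# Per-country change over the window: did the fastest growers get happier?
first = panel.sort_values("year").groupby("country").first()
last = panel.sort_values("year").groupby("country").last()
change = pd.DataFrame({
    "gdp_growth_pct": (last["gdp_per_capita"] / first["gdp_per_capita"] - 1) * 100,
    "happiness_change": last["life_ladder"] - first["life_ladder"],
})
print(change.sort_values("gdp_growth_pct", ascending=False).head(10))
print(change["gdp_growth_pct"].corr(change["happiness_change"]))
```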
Thanks! And yes, I looked at it a while back - https://www.strangeloopcanon.com/p/why-is-everyone-so-damn-happy
Thanks, will read!
Some thoughts on the post (from a 'recently came across EA and have been reading up on it' person):
1. Diversification of worldviews is needed among EA orgs, as noted by Holden Karnofsky (openphilanthropy.org/research/worldview-diversification). Charity recommendations are great for people donating small amounts: they won't spend time researching better options and just want to see their money well spent. But there should be more diversity of projects for people donating large sums.
2. EA provides a framework for how NGOs pitch for funding, and it creates a culture of analyzing impact and tractability. The same holds for startups pitching to investors by showcasing market demand and potential.
3. A lot of EA practice is about finding the best use of small amounts of capital. I think as capital increases, orgs with the EA philosophy will fund riskier ventures that can result in much higher returns.
I think encouraging diversity of projects while keeping a tractability-focused approach will lead to less optimal outcomes than an experimentation-focused market approach, which necessarily leads to much greater fragmentation of decision-making.
Agreed. There are some areas where the market and philanthropy try to tackle the same problem, and there the market is more innovative.
But EA focuses on activities where there is little to no financial gain (at least in the short term), where the market doesn't operate - like AI risk prevention or bednet distribution. EA can do tremendous good in these areas.
Thanks for the shoutout, Rohit! Having spent a bit more time among the EA community, I deeply agree with the critique that much of EA functions as "McKinsey for NGOs" - too much analysis, not enough iteration.
It sounds like impact certs (e.g. as described by Scott Alexander in https://astralcodexten.substack.com/p/impact-markets-the-annoying-details) would line up with your mainline proposal for a solution? Impact certs (equity in NGOs) act exactly as a prediction market for how impactful a particular project or org will be. Manifold is looking to launch some kind of impact cert ecosystem before the end of the year - if you (or others) are interested in investing/helping out, please reach out to austin@manifold.markets!
It's part of it! With my usual caveats about market liquidity :-) My perspective, though, is that this isn't a secondary-market or coordination problem, which can be solved through financial instruments, but rather a primary-market (like VC) problem where de novo innovation is what's missing.
But my broadest view is exactly that finding the limits of prediction markets (or impact certs) is best done by doing it, not by pontificating. Kudos to Manifold for that (and I've sent your one-pager to a few folks).
I've spent more time thinking about AI xrisk than about EA in general. But of course they're closely related, as AI xrisk is one of the causes embraced by EA. It's my understanding that EA didn't start out with a focus on long-termism. That emerged.
The problem, as your title indicates, is that we're dealing with radical uncertainty. In the case of AI xrisk the fundamental problem is that we don't know how to think about AGI in terms of mechanisms, as opposed to FOOM-like magic. The AI xrisk people respond by building elaborate predictive contraptions around something where meaningful quantitative reasoning is impossible. You're arguing that the EA folks are doing this as well.
Why?
At some point it seems to me that the mechanisms of community have overwhelmed the objectives the community was created to address. So now those objectives function as a reason for engaging in this elaborate ritual intellection. The community is now more engaged in elaborating its rituals than in dealing with the world. How does that happen and why?
We've got community orientation (CO) and reality orientation (RO). CO should be subordinate to RO and should serve it. What has happened is that RO has become subordinate to CO. Put your old McKinsey hat on: How do you measure the CO and RO of a group and plot their evolution over time? What's going on at the tipping point where CO surpasses RO? I think that happened in the AI xrisk space at about the time Bostrom published Superintelligence.
Science fiction: Back in 1989 Ted Turner created the Turner Tomorrow Fellowship for a work of fiction "offering creative and positive solutions to global problems." It was only awarded once, in 1991. https://en.wikipedia.org/wiki/Turner_Tomorrow_Fellowship_Award
This is fascinating!
Thank you for writing a piece that both made me think and solidified my opinion on a complex subject. For someone who's only touched on the larger issues surrounding altruism in the past, this piece was enlightening. Well done!
Thank you so much Brent, that’s so kind of you to say