Discussion about this post

Bill Benzon

"...but it’s so incredibly important to point out that this is a destruction myth. It’s apocalypticism crowdwriting its own biography."

YES! It's not simply the belief, but all the activity devoted to predicting when AGI will happen: the surveys and studies with their Rube Goldberg reasoning over order-of-magnitude guesstimates. This is epistemic theatre. And millions of dollars are being poured into this activity. This is cargo cult behavior. There may not be a Jim Jones, or a jungle compound, much less toxic fruit punch, but this is a high-tech millennial cult. And it's all being conducted in the name of rationality.

Arbituram

I'm not sure what the argument here is.

People have predicted bad things/apocalypses in the past, they didn't happen, so AI is fine?

The core argument of those who are concerned about AI isn't that "something could go very wrong", it's that:

1) Alignment is extremely difficult, even in principle (for which there are many, many extensive arguments, not least from MIRI)

2) We have no reliable method of telling when we cross the threshold at which it could become dangerous, by which point it would be too late for troubleshooting

The above doesn't seem to have any specific counterarguments to those concerns.

I'm not even personally an AGI guy (for what it's worth, my donations go to global poverty/health), but the arguments are much stronger than you present them, and they're worth addressing directly.
