Consider the Asilomar Conference on Recombinant DNA back in 1975. At the time, recombinant DNA was in roughly the position that AGI is today: lots of possibilities, not a lot of knowledge about the actual capabilities.
One of the key agreements was an understanding of what sorts of research could occur where; in particular, what sort of biosafety protections were needed for what sort of recombinant DNA research. (And, indeed, what sorts of recombinant DNA research should be done at all.)
I believe that something like the Asilomar Conference should be held today about AI (and AGI).
Damn it, should've added Asilomar to the conversation!
https://en.m.wikipedia.org/wiki/Asilomar_Conference_on_Beneficial_AI
Individual humans may not behave as blind watchmakers, but humanity as a whole? I'd argue that even our cultural and technological history looks a lot more like evolution than intelligent design. Sure, our exploration is largely guided by our interests, but the discoveries we make are almost never what we expect.
That said, it's highly probable we'll see AGI development go the way of genetic engineering and nuclear bombs—tightly regulated and far more difficult than anyone can predict. Of course, that's not to say the risk is zero, but as with genetics and nuclear physics, the risk may be worth the reward, and we might just be lucky enough to manage it.
So the argument is: superintelligence is possible in theory, and a superintelligence would in theory be very scary, so we should devote many resources to this problem no matter how unlikely it is?
I don't think the proponents would say "many" resources, rather "more" resources.
I would have thought determining the correct level of resources depends on the likelihood of bad outcomes, not only on the fact that they are possible.
And on knowing where you'd deploy the resources, which needs an idea not just of likelihood but of the path dependency of the technology.
Well done!
That was a great piece, very well written! It's the first time I've heard of The Blind Watchmaker; I didn't know such a book existed.
Yes, it's one of Richard Dawkins' best!
I just checked it out. On the reading list now!
Thank you for this.
Is that Hieronymus Bosch? In image and text?
Inspiration, yes, but I made it with Midjourney.