So the argument is that: superintelligence is in theory possible, and a superintelligence is in theory very scary, so we should devote many resources to this problem no matter how unlikely?
I don't think the proponents would say "many" resources, but rather "more" resources.
I would have thought that determining the correct level of resources depends on the likelihood of bad outcomes, not merely on the fact that they are possible.
And on knowing where you'd deploy those resources, which requires an idea not just of the likelihood but of the path dependency of the technology.
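To make the disagreement concrete, here is a minimal expected-loss sketch (my own illustration, with made-up numbers, not anything the proponents have claimed): the same severity of outcome implies very different resource levels depending on the probability you assign to it.

```python
# Illustrative sketch only, using arbitrary hypothetical numbers.
# Expected loss = probability of the bad outcome times its severity,
# so "possible" alone doesn't pin down how much to spend on prevention.

def expected_loss(p_bad_outcome: float, severity: float) -> float:
    """Return the probability-weighted severity of the bad outcome."""
    return p_bad_outcome * severity

severity = 1e12  # arbitrary units of harm, purely for illustration
for p in (1e-2, 1e-6, 1e-12):
    print(f"p={p:g}: expected loss = {expected_loss(p, severity):g}")
```

Under this framing the argument over "how unlikely" is exactly an argument over which expected loss you are buying down, which is why possibility alone doesn't settle the resourcing question.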