A couple of important edge cases here. One is reacting swiftly to extremely high-cost situations, like an apparent nuke launch or an apparent AI foom. Quickness of reaction would be demanded, yet not very helpful for producing the best outcome. Predictive knowledge has far, far more value - in support of avoiding the need for reaction at all. And that is what sensible people are advocating.
The second case is that signal detection dominates in some reaction situations. Warming is an example where clear signals abound: the oceans have clearly heated up, methane is rising, species and populations are disappearing, fires and floods, temperature records. But we are doing nothing. The quick-reaction/fast-money faction is ignoring the signal and using every defense mechanism in the book to prevent anyone (not just themselves) from meaningfully responding to it.
Also, I'm not sure we're doing nothing re climate change. Environmental movements aside, just the effort in solar and wind is an enormous boost, as are reforestation, EVs, and so much more.
Even in the situations you've laid out, like nukes or doom, prediction is pretty useless compared to a constant effort at enabling defense. We don't rely on predictions of whether North Korea will nuke us; we actively work to ensure they won't, and constantly error-correct our actions in accordance with the changing environment.
You are right. Instead of "predictive knowledge" I should have just said "knowledge." We have for centuries been at least trying to manage international relations with ongoing, iterative, knowledge-based assessment (which always involves prediction, as in the form of scenarios) and reaction. Knowledge gained ahead of prediction time is vital. AI doom is without (?) historical precedent, but we have some relevant knowledge and could gain a lot more, yet a vocal and influential faction advocates doing nothing unless we are forced to react some day. I think brain theories of predictive processing and free-energy minimization make a better model for society's decision-making needs than a dichotomy between prediction and reaction time. Brain-wise, we are always predicting inputs (based on prior knowledge) before we get them, in order to react in ways that minimize the prediction error the next time. And this happens on multiple hierarchical levels of perception/cognition with different time constants. So we deal with both fast and slow changes with nested cognition/action loops.
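To make the nested-loop picture concrete, here is a toy sketch, my own illustration rather than Friston's formal free-energy mathematics; the function name and learning rates are made up for the example. Two levels each predict their input and correct themselves at different time constants:

```python
# Toy sketch of nested prediction/reaction loops (illustrative only, not
# Friston's actual free-energy formulation): each level predicts its input,
# measures the prediction error, and corrects itself at its own rate, so
# fast fluctuations are absorbed low in the hierarchy and slow trends high up.

def predictive_loop(signal, fast_rate=0.5, slow_rate=0.05):
    fast_pred, slow_pred = 0.0, 0.0
    history = []
    for x in signal:
        fast_error = x - fast_pred            # low-level prediction error
        fast_pred += fast_rate * fast_error   # quick correction
        slow_error = fast_pred - slow_pred    # high level predicts the low level
        slow_pred += slow_rate * slow_error   # slow correction
        history.append((fast_pred, slow_pred))
    return history

# A step change in the input: the fast level locks on within a few steps,
# while the slow level converges only gradually.
hist = predictive_loop([0.0] * 20 + [1.0] * 80)
print(f"fast={hist[-1][0]:.2f}, slow={hist[-1][1]:.2f}")
```

The point of the two rates is the one above: the same error-minimizing machinery handles both fast and slow change, just at different levels.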
> AI doom is without (?) historical precedent, but we have some relevant knowledge and could gain a lot more, yet a vocal and influential faction advocates doing nothing unless we are forced to react some day.
I hear you, though it's one area where I'd emphasise building the ability to react fast over predicting better. For one thing, we've sucked at predictions here consistently for a very long time, and I don't see why that would change. For another, every path ahead seems incredibly complex, which makes predictions fail in the crucial details in any case. I'd much rather watch as we develop it and learn to react quickly, rather than, e.g., think up arbitrary benchmarks beyond which things might be dangerous, or try to predict through analogies, which adds error on top of error.
I appreciate your point. And I’m not trying to have the last word here, but I do want to clarify mine. Unlike many disaster situations where we are the only side with agency, with AI lots of thoughtful people predict that we could be royally screwed before we even know it. One would need negative reaction time to fix that. That said, it is true that the people who game it out haven’t found that either prevention (such as boxing) or reactivity (like plug-pulling) seems foolproof. So, yes, I guess we should be working it from both sides, as you say. I think there is a philosophical issue here about the connection between prevention, which in a causal sense can rarely, if ever, be proved to have happened, and prediction, which might also be said to be quite impossible in practice. So, you wrote a very thought-provoking article.
Fires and floods are not increasing. Fire is worsening only in places with bad forestry management; overall, it is not.
Yes! Lots of resonance with Kevin Kelly's Pro-Actionary Principle (via Max More): https://kk.org/thetechnium/the-pro-actiona/
Every viable proposal for AI alignment will need "better tools for anticipation, better tools for ceaseless monitoring and testing, better tools for determining and ranking risks, better tools for remediation of harm done, and better tools and techniques for redirecting technologies as they grow."
I think about reacting vs. predicting in a similar way to how I think about systems vs. goals. In a complex world, goals make more sense within a system context and are helpful only for short, straightforward things.
Similarly, predicting might make more sense within a reacting mindset, so that predictions can be funneled into a chain reaction of action.
But again, from an evolutionary perspective, our brain always chooses the less costly process. In a way, prediction is a goal, and the ability to pivot when one needs to is a system.
Useful for attempting to live a peaceful life, too!
Indeed
Now that I think about it, Friston's free-energy theory that I mentioned above would be a poster child for how the brain tries to maintain a peaceful life for us.
The issue here is that your chosen examples (startups, trading) generally have relatively low momentum, so it's feasible to be flexible in reaction to things and still have your response land in time for it to matter.
This is not true for many other fields, or for large governments or organisations. The ability to react can't be open-ended if you have thousands of moving parts; whatever caused the reaction would be long gone by the time you could feasibly respond. What planning and prediction do is allow you to winnow down to a few higher-probability scenarios, then put you in a position to respond quickly when those scenarios apply. You don't have the luxury of waiting around to see what happens before starting to plan, and if you don't try to predict, you can't plan meaningfully.
I think the past few years, from Microsoft (like many companies) moving fast, to governments gearing up on vaccines and lockdowns, to the Fed acting fast, all show that we're occasionally capable of fast reaction. Not to say they didn't think about the world at all beforehand, but it was better to do this than to spend more effort on predictions.
adapting to a dynamic landscape rather than just having some belief (however accurate)
Might I interest you in a similar idea from Cedric over @ commoncog: "When Action Beats Prediction"?
The idea here is that there's a place for forecasting and backwards induction, but sometimes taking action generates or uncovers information, and then you tinker and iterate forward.
link: https://commoncog.com/when-action-beats-prediction/
Oh love Cedric, thanks!
You know who's good at predictions?
The guys who assemble China's Five-Year Plans haven't missed a prediction in 50 years.
Being planners, they love to start planning long before anything happens, and they're drawing up a grand plan for the People's Republic's first centenary, a generation hence, in 2049.
They predict that, by the centenary, they will be the richest society on earth with the lowest Gini coefficient on earth.
What's not to like?
I can think of a few things not to like :)
Name one.
You won't miss your prediction if you force people to give you numbers that match the prediction.
We now have 70 years of predictions:
https://en.wikipedia.org/wiki/Five-year_plans_of_China
And we can compare them to their outcomes.
Which prediction, in your opinion, was fulfilled by falsifying numbers?
One – of the hundreds listed – is sufficient.
Agree overall that preparation to respond quickly is perhaps more important than trying to predict in several of the domains you outline. However, I do not see this argument as universal across all domains. There are domains like weather forecasting, for example, where we have gotten better at predictions (albeit over shorter time horizons), and this has been very helpful TOGETHER with better preparation to save lives, etc. In other domains (politics, stock markets, AI, etc.) predictions may be useful, but as one of several ways to "figure out" what's going on - and this is useful when forecasters make their assumptions explicit. You may have seen the results of the Existential Risk Persuasion Tournament (XPT) by Karger & Tetlock - some interesting insights there. So I would adjust your lead to "The (limited) case for using predictions smartly - together with better preparation".
I agree with you on weather; it is indeed a circumstance where we have gotten better at short-term predictions while creating models and learning how best to react fast to incoming data. Extinction forecasting, on the other hand, is almost exactly on the other side of the equation: it is barely useful, and we would probably do much better if we learnt fast-reaction methodologies for various types of problems, including asteroid defence, as opposed to trying to predict whether the chance of complicated things happening in the distant future is 1 in 6 or 1 in 5.
Let me push back - the actual prediction in extinction forecasting is irrelevant (whether 1 in 6 or 1 in 5). However, in a world where narratives dominate the discourse - where 'influencers' of all stripes make wild claims - I think it is useful to have a way to lay out the assumptions & arguments that inform the predictions; this allows us to separate pure nonsense from the plausible. And good discussions on the plausible can inform appropriate action. You might argue that this can be done without making probabilistic forecasts - I agree. But I see benefit in forecasting if some folks prefer it as a way to make their argument.