To succeed in a domain that violates your intuitions, you need to be able to turn them off the way a pilot does when flying through clouds.
Paul Graham
I
Last week I wrote about zeitgeist farming as an overlooked strategy for investing. By definition, this is an illegible strategy, because trying to win more by systematically analysing factors would've also led to the fad-trades.
When we look at questions like "should we blindly invest in all IPOs", we get one answer. If it's "should I invest blindly in all tech IPOs", it's different. If it's "should I invest in all hot software IPOs", it differs again.
And as you go down that chain of thought, your ability to create an exact strategy fades away. You're in the weeds of judgement calls and illegible feelings. As the man said, it becomes like pornography: you know it when you see it.
But this raised a question in my mind: how do we determine the optimal amount to be wrong?
For instance, VCs have a paradox they need to deal with. They have to know that most of their gains will come from a few winners (the power law effect), while also building conviction in each individual investment.
Almost every action we take in life is judged on whether we got it right. If you make an investment, you tend to root for the company. Every time you predict something will happen, and it happens, you get that beautiful jolt of endorphins. You're right, you're a genius.
But success in the portfolio mindset requires something different. It requires you to worry if you're getting too many things right.
It's actually the opposite of how most public figures engage with the world. Even VCs, who are normally good at putting their "risk hats" on, suck at this. Paul Graham, who arguably made his career out of recognising that there's a power law and designed a firm explicitly around it, said:
If we ever got to the point where 100% of the startups we funded were able to raise money after Demo Day, it would almost certainly mean we were being too conservative. ... We can afford to take at least 10x as much risk as Demo Day investors. And since risk is usually proportionate to reward, if you can afford to take more risk you should. ... Which means that even if we're generous to ourselves and assume that YC can on average triple a startup's expected value, we'd be taking the right amount of risk if only 30% of the startups were able to raise significant funding after Demo Day.
And the thought that only 30% of their startups seem "fundable" is terrifying. While going through YC has become a self-fulfilling prophecy for startups, and the batch sizes have ballooned, the ratio is still most definitely well above this threshold.
He called this Black Swan Farming. One of the methods by which YC decided to solve for this was to have batches of applicants and a simple investment strategy.
What this tells you is the power of not choosing, and of being willing to be wrong. All YC wanted to do was screen out the companies that clearly wouldn't be stellar, and whatever came out of the rest would work in their favour.
Just like how there was a vast chasm of varying investment styles until SoftBank and Tiger decided to do basically the same thing at the later, growth stages.
Neither YC nor Tiger are afraid of being wrong.
It's extraordinarily unintuitive to intentionally choose a strategy where you try to get more things wrong. You have to take riskier bets, which means a larger chance of failure, consistently. When you see "sure things" you have to look at them with a skeptical eye, and when you see options that seem like long shots, those have to seem more attractive.
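To make the arithmetic concrete, here's a minimal sketch of that logic. The hit rates and payoff multiples below are made-up assumptions purely for illustration, not anyone's actual portfolio economics:

```python
import numpy as np

# Illustrative only: hit rates and payoff multiples are made-up assumptions,
# not anyone's actual portfolio economics.
rng = np.random.default_rng(0)
n_bets, n_trials = 100, 10_000

def portfolio_return(hit_rate, win_multiple):
    """Total return of 100 equal $1 bets (losers return 0), simulated n_trials times."""
    wins = rng.random((n_trials, n_bets)) < hit_rate
    return (wins * win_multiple).sum(axis=1)

safe = portfolio_return(hit_rate=0.80, win_multiple=1.3)       # right 80% of the time
longshot = portfolio_return(hit_rate=0.10, win_multiple=20.0)  # wrong 90% of the time

print(f"safe:      mean {safe.mean():.0f}, 5th percentile {np.percentile(safe, 5):.0f}")
print(f"long-shot: mean {longshot.mean():.0f}, 5th percentile {np.percentile(longshot, 5):.0f}")
# On a $100 stake the long-shot portfolio averages ~$200 back despite losing
# 90% of its bets; the "safe" portfolio averages ~$104 despite winning 80%.
```

The long-shot portfolio roughly doubles its stake on average while the "safe" one barely clears breakeven, which is the whole case for being comfortable losing most of your bets.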
It's the same in public markets. Research analysts, highly paid and occasionally well regarded, missed their earnings calls 81% of the time and their revenue calls 79% of the time. In fact, they are hilariously wrong on a regular basis, but it hasn't dented their appeal all that much.
Does this occur elsewhere? Let's cast a glance at what until recently seemed like the most significant event to hit us in a century.
Nouriel Roubini, known as Dr. Doom, made his major claim to fame by calling the housing market crisis correctly from 2005 up until 2008. (Also, he has about as awesome a moniker as an economist can get. If you work in the dismal science and get called Dr. Doom, surely you've won academia.)
This was brilliant, because he not only realised the exact problem that existed, but also identified its shape and the major causal factors.
You look at history, you look at political data, you look at models, you look at comparisons. This crisis is not a black swan event - a random outcome from a random distribution. This case is a build-up of vulnerabilities over time that will increase and provoke a crisis. There were tens of different signals that would eventually lead to a tipping point. The fact that there would be a crisis was totally obvious to me.
But there's a hitch in this brilliant and trenchant analysis: Roubini has been calling doom essentially forever. He did it before the crisis, kept doing it through the crisis, and continues to do it now, after the crisis. For example, here is his view on why we're in for a decade of hard recession.
There’s going to be a painful process of deleveraging, both by the corporate sector and the housing sector. They have to be spending less, saving more, and doing less investment.
So, shall we update our model of how much we should believe Roubini? If you believe in surprise as a component of information, as well you should per Shannon, then there's little information here.
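For the curious, a minimal sketch of that idea in Shannon's terms, using the standard definition of self-information (surprisal):

```latex
% Self-information ("surprisal") of an event x with probability p(x):
%   I(x) = -\log_2 p(x) \quad \text{bits}
% If a forecaster predicts doom essentially every year, then for that forecaster
% p(\text{doom call}) \approx 1, and so
I(\text{doom call}) = -\log_2 p(\text{doom call}) \approx -\log_2 1 = 0 \ \text{bits}
```

The prediction itself carries almost no information; only the rare years when he doesn't predict doom would tell you much.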
When asked about Roubini, the economist Anirvan Banerjee told the New York Times, "Even a stopped clock is right twice a day."
So should we really be giving him that much credit for prescience?
Even taking into account Dr. Roubini's explanation that he called the specifics of the crisis, there does seem to be a consistent strategy here: be a perennial critic, identify ways in which tail events can happen, and if you do it often enough you get called an oracle.
II
But clearly there is a difference. Being a contrarian works best in arenas where there are extraordinarily asymmetric payoffs. Early stage startup investing is one of them. At any stage, if you win by having a portfolio, then maximising each individual investment is highly counterproductive. Is this unique? I don't think so. In the world of creator economies, media darlings, books that become inexplicably popular and fidget spinners, we seem to be surrounded by the effects of a power law.
But maximising total winnings rather than hit rate is a difficult lesson to apply in life.
The rationalist community, a community with which I have a read-only identification and about which I therefore slightly shudder to write, often has this discussion about why they don't win more. The potential factors given range from denying that it's true, to blaming an over-reliance on a particular type of rationality that leads them to overweight the outside view, to winning in this fashion not being the prime directive.
And so I decided to check whether they had appropriately identified the craziness about to hit the markets last March and taken corrective action. Trawling through Less Wrong and SSC and Putanumonit, there seem to be at least a few people who shorted the S&P (or equivalent) and did rather well!
Still, considering the effort the group put into preparing for the pandemic, well ahead of the world, even ensuring they helped spread the message to others as much as possible, the number of "I shorted and made 3x my money in a few weeks" stories seems surprisingly small. I counted fewer than 10 across the blogs, including comments.
(If I’m wildly wrong about this, which is probable, and they are dotted everywhere, quietly basking in their victory, please do let me know! Another reason I’m not confident they exist is that Robin asked a very similar question again, assuming that if you’re confident in your prediction of the future you’d use it to bet on the markets.)
After all, the question isn't just whether some people identified the problem and made bank. The question is whether they did so at a rate above what you'd expect, considering that others also did so without being steeped in rationalist, Bayesian modes of thinking. (Fwiw, I sold some holdings to move into cash, and bought very heavily into tech at the bottom.)
Eliezer, for instance, had the clarity to recognise that a major pandemic was coming, and to notice that the markets weren't pricing it in. Clearly an anomaly. And it was an anomaly because the posts on Less Wrong seemed full of folks preparing for the pandemic and asking the rest of the world to follow suit!
If the belief was strong enough to stock up on food and implement behaviour changes early, as it clearly was, why did they also rationalise away the market's insouciance?
It seems that despite understanding the asymmetric returns, and despite seeing the risk-reward relationship askew, taking an action that is likely to be wrong is difficult for almost everyone. As one commenter who did make bank asked:
What do people at hedge funds even do all day? You would at least have a few people working full-time to think about COVID, right? And they can't all get it wrong? Is it really that important to be nimble?
III
This is also the problem with punditry in general. When we look to see whom to trust, one of the key tenets of any half-decent epistemology, the smartest amongst us focus on whether they got their previous calls right.
And this is sensible. After all, it's better than choosing randomly, and choosing on the basis of who seems reliable has proven unwieldy. But predicting correctly is not enough, since you can also choose what to predict.
There's a difference between getting the hard things more right than others and getting more things right. It's the difference between maximising expected value (the actual gain from your bets) and maximising correctness (say, minimising log loss on your predictions).
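To see how the two scoring rules can disagree, here's a toy comparison; all the probabilities and odds are assumptions picked for illustration. Imagine a tail event the market prices at 5% but that is actually 20% likely. The consensus forecaster dismisses it; the contrarian bets a dollar on it every time.

```python
import math

# Toy comparison of "being right" vs "making money". All numbers are
# assumptions for illustration only.
p_market, p_true = 0.05, 0.20                    # market prices the tail at 5%; true odds are 20%
payoff_if_right = (1 - p_market) / p_market      # a $1 bet at 5% odds pays $19 profit

# Forecaster C ("consensus"): says 95% "no tail event", never bets.
# Forecaster R ("contrarian"): says 95% "tail event", bets $1 on it every time.
hit_rate_C = 1 - p_true                          # right 80% of the time
hit_rate_R = p_true                              # right only 20% of the time

log_loss_C = -(p_true * math.log(0.05) + (1 - p_true) * math.log(0.95))
log_loss_R = -(p_true * math.log(0.95) + (1 - p_true) * math.log(0.05))

ev_C = 0.0                                       # never bets, never gains
ev_R = p_true * payoff_if_right - (1 - p_true) * 1.0

print(f"C: hit rate {hit_rate_C:.0%}, log loss {log_loss_C:.2f}, EV per bet ${ev_C:+.2f}")
print(f"R: hit rate {hit_rate_R:.0%}, log loss {log_loss_R:.2f}, EV per bet ${ev_R:+.2f}")
```

The contrarian looks worse on every correctness metric, hit rate and log loss alike, yet expects to make about $3 per dollar staked. If you pick whom to follow by hit rate, you follow the wrong forecaster.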
Would you rather follow an expert who got a few large things right, or someone who got most items right most of the time? How would you even distinguish the expert who got the hard things right more often?
In almost all cases it's better to fight the incentives to maximise hit rate and try to maximise EV instead. Even in cases you're going to sound silly, it seems worthwhile to be a Roubini rather than, say, Jim Cramer.
The world used to reward you very nicely for having opinions well aligned with the “smart” majority. You could get a great education, a great career and in all likelihood a great life. But these days all of those have become less reliable. Those strategies have decayed.
What remains is a much starker realisation that we do inherently live in a power law world. And in a power law world, getting the few big things right is far more important than being right more often, even at the cost of being like Dr Doom. We should all try to be wrong more often.
Nice one - needed this sequel. The observation that you shouldn't focus on hit rate isn't novel, but the facts that past success at predicting "hard to predict things" doesn't guarantee future success, and that quantifying what a "hard prediction" is in the first place is very difficult, add some new complexity. Not to mention that we're trying to maximize expected gain, not hit rate per se. Like Soros said, "It's not whether you're right or wrong that's important, but how much money you make when you're right and how much you lose when you're wrong."
Fwiw, I analyzed past predictions of doom by Burry (who definitely got more attention than Roubini because of Bale, I guess :p), and also the track record of Cramer... What I didn't do is compare the hit-rate of just predictions of doom by multiple experts on the same topics - I'm not sure that would even be helpful in future scenarios, considering that true surprises may stump all the existing experts, but it's something to think about...
The comments in the rationality blogs make sense in hindsight as well, but alas, how do we know what to look for in times of crisis?
"In almost all cases it's better to fight the incentives to maximise hit rate and try to maximise EV instead. Even in cases you're going to sound silly, it seems worthwhile to be a Roubini rather than, say, Jim Cramer."
This seems to assume that in almost all cases there are extraordinarily asymmetric payoffs *in the right direction*. For instance, shorting the market has an unlimited potential downside. So in this case being wrong more often is bad. If you are a doctor, an engineer, or an airplane pilot, the same applies. I'm not sure we can say, then, that we should be wrong more often in almost all cases, but only that we should identify extraordinarily asymmetric *positive* payoffs and then be wrong more often.
Excellent article otherwise.