On Scott contra Weyl
How successful technocracy is not just about the policies but also about their implementability and legibility
I
Scott Alexander recently wrote a review of Weyl's essay against technocracy. I've been interested in, though not necessarily in agreement with, Weyl's work since he wrote Radical Markets, so I was excited. But unlike several of Scott's other posts, this one had both bad examples to elucidate his central points and a few strawmen, which made reading the original and the review slightly discombobulating.
Let me start with the points and conclusions I wholeheartedly agree with.
1. We should have more technocrats around in decision-making roles, as a counterweight to the number of purely populist decision makers.
2. Regardless, we should absolutely have more objective, evidence-based policy discussions and create formal mechanisms to assess them.
3. We should make a better faith effort to figure out who those technocrats are, so we know who's trying to make points 1 and 2 happen.
4. Technocracy, like all other -cracies, requires easy and understandable mechanisms of control and editing, so that power doesn't get entrenched.
Now, the essay argues that several of the normal conceptions of technocratic ignominy come from (a) inaccurate representations of technocracy, or (b) foibles of non-technocratic decision-making mechanisms. His examples leave a massive amount to be desired, though. In fact you can almost use some of those same examples to show why technocratic policies can have problems.
School desegregation: There was a significant groundswell that underpinned this movement, through protest and activism and more than a little violence. There was significant technocratic and evidence-based disagreement on whether the policy was right or not. The legal mandate helped, but the implementation, e.g., via busing, was atrocious. Also, the decision was not a technocratic one in terms of cost-benefit assessments, but a normative decision that equality was important.
Interstate highway system: The process, as Scott writes, took public outcry into account and amended its routes and policies at multiple junctures. This is a case of a successful technocratic policy that was implementable in real life and later took public opinion into account to improve the implementation. It's perhaps what a success story should look like.
Climate change: Cap and trade was a technocratic failure rather than a success. It has been canon for thirty years, and while it has strong arguments in its favour, it has also faced impossible obstacles to successful design and implementation. It's been pretty dead in the US and EU. While it was market-friendly and therefore easier for technocrats to accept, it failed the real-world test.
Coronavirus lockdowns: This being the most recent cut, it's also the rawest. For one thing, nobody could quite agree on what lockdowns meant, whom they should affect, how people should behave, and what other policies should accompany a pure lockdown. In fact, if you look at the Covid success stories of Taiwan or New Zealand, they prized clarity and simplicity over more complex technocratic implementation of (seemingly) better rules. That's why they won.
But there's also something interesting about these examples. They all have easily visible outcomes.
All policies can have problems. Surely we should still try to get policies that are better rather than worse? So let's consider what a successful technocratic policy has to do:
1. It has to be right. This means gathering evidence, doing math-based cost-benefit analyses, being objective, and trying to get to the right answer.
2. It has to produce solutions that are tractable. There is no point getting to a "right" answer if it's impossible to actually accomplish in the real world.
3. It has to produce solutions that are legible. You have to make solutions that you can explain to others. If there's one thing the UK Covid response has taught us, it's the immense gap between policy creation and policy communication.
What this suggests is two things. One is that there was a high degree of technocratic disagreement on the "right" policy or course of action. This holds true in politics, in economics, and in the social sciences. It also holds true in more empirical sciences; in short, anywhere there's no easy way to do falsifiable experimentation. Even in medicine, after decades of evidence gathering, with about as clear-cut data as exists, Eliezer writes about central line infections and nutrient fluid formulae for infants.
This is not a new problem.
The second is that when we choose technocratic governance in areas where the outcomes aren't easily predictable or where we can't run experiments, we're forced to do what Weyl says we do anyway, which is engage in semi-good-faith dialogue to try and get to a policy through some admixture of rhetoric and logic. We don't have a choice but to bring in the humanities, adversarial thinking and, yes, our biases in figuring out whom to trust.
II
So the question becomes less about whether we want to be more technocratic in itself, using the reading that it's akin to being a good amateur scientist who follows the evidence and is a good Bayesian, and more about whether we end up with people who use their expertise to paper over bad thinking and their own agendas.
Technocracy, like most loaded words, has multiple meanings. It's ok to think that it means experts doing expert analyses by painstakingly getting the right data, then modelling it out, and figuring out what should be done. Let's call this "good technocracy". It can also mean that experts apply their expert judgements and do what they want. That's "bad technocracy". Surgeons refusing to wash their hands would be one of the literally hundreds of relevant examples here.
A broader reading of Weyl suggests that, apart from some misunderstandings about the rationalist community, his perspective is weirdly ... kind of sensible?
His thesis is that too high a reliance on specific methodological approaches leads us towards blindness to the full colour of reality. This seems true. So we should err on the side of simplicity and flexibility as our key criterion when choosing a methodology.
This shouldn't apply to physics or medicine, where we should "technocrat" our way all the way (though it clearly does), but in areas where we can't easily tell if someone's a crackpot or a technocrat, perhaps the job that needs doing is slightly different. That's where the pejorative connotation comes from: when people treat social systems the same way they would treat a physical system.
Is the answer to try and be even more objective? Sure, that's an answer. But while we're at it we should probably also wish for all of us to be kinder, gentler and sweeter on Twitter. Defining a beautiful end state is nice, but it doesn't help solve the problem that the world doesn't look like that.
The question is whether the gap in our decision making today is because we have too few people who aim to make their decisions rationally, by taking into account facts about the world and analysing whether their actions would have their desired impact.
The argument that would hold is slightly different.
We try to tame complexity by having people specialise in different things and then debate to see what we should do. In smaller and simpler cases we could experiment, but that's not feasible at the social level.
Doing this in a methodical fashion can be helpful, but it can also be a barrier to legibility and executability. This means that even when a plan of action is "better" with respect to achieving its objectives, it might not be easily implementable or understandable.
All systems survive on the basis of trust, and destroying legibility through complexity destroys that trust. This might not be an immediate death knell, but surely destroys the information transmission mechanisms in place within the system. It frays until it snaps.
Along with the ambition to get answers right, or at least less wrong, we would also have to hold clarity and simplicity as virtues to aspire to. Without them we end up with highly specialised processes like this vaccination schedule and process, which (let's grant) is theoretically optimal but practically impossible to implement and not understandable by most experts, let alone a layperson.
III
I like to look at this as an equation:
(potential to provide help through policy X) * (implementability of policy X) * (continued compliance with policy X)
True rationality in its holistic Yoda-esque glory requires you to be aware of all three parts.
We spend our time debating the first variable, which policy we should implement, without taking time to consider that variables 2 and 3 are arguably just as important. Variable 2 tells us if the plan can survive contact with reality. Variable 3 tells us if it's likely to get destroyed by societal feedback!
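To make that concrete with made-up numbers (purely illustrative, not drawn from either essay): a clever policy scoring 0.9 on potential help but 0.3 on implementability and 0.5 on continued compliance multiplies out to 0.9 × 0.3 × 0.5 ≈ 0.14, while a cruder policy scoring 0.6 × 0.8 × 0.8 ≈ 0.38 does more than twice as well. Because the terms multiply, a near-zero on any one of them sinks the whole product.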
In the society that we live in, relatively fair and free as it is, one of the core aims is to ensure that the rule is legitimate. And a way to ensure that is to make sure that if a rule is deemed contrary to the community's interests, it can be revoked or overruled.
Scott tries to lay out a few axes along which "technocracy" can be nailed down. On the mechanistic vs judgement axis, where the disagreement seems most concentrated, the ultimate thesis seems to be that it's really hard to know where to draw the line.
And so it is.
In Weyl's words:
none of this should suggest that technical insights are of no value. Anyone who has followed my work knows the (partially) technical provenance of many of the social designs I advocate. I am also deeply impressed by technical work, in technical fields like data science and humane design, that seeks to develop new approaches to data analysis and metrics for design success. In fact, I believe that the only chance we have of saving our current political economy from the oppression of capitalism and the nation-state runs through substantial advances of a technical sort that can provide us new systems of democratic input, value accounting and social imagination and experimentation. Work by designers on these projects are among the activities I most admire today.
Where I part ways with his thesis is that his solutions are themselves in the "kumbayah" camp. He suggests that technocrats should make sure they elicit from the average citizen the critical information needed to make designs succeed.
Really? I know that governments around the world have tried this, but the average citizen doesn't want to be bothered by questions to which they have no real answers. Suggestion boxes are used liberally in the UK, but effective they are not!
This is not entirely a criticism. Weyl's work seems to be to try and answer this hard question of how we can design a system that's sensible and achieves its goals well while also remaining flexible to change and open to ideas. Any easy answer one makes here is liable to sound trite. We can ask for education, understanding history, learning humanities, but all of those are stand-ins for the fact that we want people to develop judgement.
It is analogous to believing that "more education is what's needed to help people think better", missing the fact that several of those who disagree are equally well educated. Yes, it would be helpful to get insights directly from the citizenry to help plug information gaps, but saying that it should happen is not the same as saying it will happen.
We can hope for it as an occurrence that will help, but we should plan for the event that the hope falls through. It's likely that the future looks like the past in terms of general citizen involvement in policy making and analysis. Because it's boring!
When an individual researcher tries to popularise her work, she tries to ensure legibility. After all, if you can't explain why your work is great, why would you get a grant? Sometimes the legibility is only needed for a small subsection of the elite populace, but you still need the idea to be made legible to them.
But in an institutional context the drive is towards increased complexity. Despite the billions of dollars spent trying to make government more user-friendly, it's arguable whether there's been much more than a Red Queen's race here.
Even voting, which we extol as democratic societies, is impossibly complex. The amazing part of the fight over the 2020 US Presidential Election was that the public got to see just how many byzantine and arcane steps exist between Joe B casting his ballot and it actually changing the collective democratic outcome.
And when Weyl suggests that design must aim for both fidelity and legibility, he's not actually saying something new. We would all like policies that are simultaneously better formalised in correspondence with the world, and easily understandable. But that's a pipe dream. There's almost no design in the world that accomplishes this goal today. Trust in the experts becomes a stand-in for this goal.
Objective decision making is absolutely the right thing to do. The trouble is that objectivity by itself is not a helpful goal since it's often tautological. We try to solve for our lack of objectivity through social tools.
We try to solve for it by creating adversarial decision-making engines (like law courts, but also the design of the US Constitution). But that's just one amongst several available alternatives, including the inquisitorial systems of continental Europe, hybrid models like the ICC, and so on.
When we should create mechanised solutions to solve something versus when we should use human judgement to analyse it will remain a hard problem. Deciding what action we must take to make things better requires an understanding of where we stand, an idea of where we want to go, and a sense of what options are available to us.
The problem is that saying "we need more objective solutions" is one of those things that seems absolutely correct if you're on my side of the aisle. But it's just another rallying cry for the powerful, and it pushes an expansion of expertise that's completely unwarranted. It's so vague as to become practically useless.
The good, solid, evidence-based, Bayesian experts are a priori completely indistinguishable from the bad experts who make strong pronouncements in areas where they have zero knowledge. For god's sake, CNBC is still a thing!
So if the answer to the question "how do we make systems work better" is "we should get more experts", then that answer runs headlong into the problems above. Professionalism and expertise have been a mainstay of most successful systems since the ancient Romans, and probably a good deal before. Those are essential ingredients. But they aren't enough.
The counter to evidence-based rational decision making is not irrational decisions made by non-experts. We're not about to turn our entire financial system over to a forum to vote on. It's the introduction of some epistemic humility. And the way to bring that about is to spread the recognition that our seemingly rational processes are outcomes of our internal biases, that our systems need more transparency and legibility, and that our systems need far more flexibility. The way to bring those about would be to court those virtues in the first place.
Especially in areas where its results cannot be meaningfully falsified, or where there's significant value leakage, the technocracy has to incorporate simplicity as a virtue too, and not just hide behind its technocratic prowess. It's easy to slip from "as experts we think this is the right course of action" to "shut up and listen, we know better". That slippage is the core issue. Our systems are only getting more interlinked, needing the intervention of many more experts than ever before, in a bewildering array of domains. We'll be faced with this question forevermore.