
Thanks for the kind words! Re: the application to AI, the point is that we're jumping ahead to regulating it without knowing what, how, or why we're regulating, as if the process itself will give us the answers.


Thanks for the response! Hm, interesting. I do think that the why (e.g. gen AI causing an influx of false information) and the what (e.g. ChatGPT, deepfakes, etc.) often seem to be defined, but the how is a bit nebulous, at least to public knowledge. I wonder if regulation in this space just has the nature of making more ill-defined trade-offs compared to something classic like the seat belt or runoff waste, simply because it's much more complex and multi-disciplinary. But I don't know if that should dissuade action; perhaps we have to accept that it will by nature be uncertain, but trying to track it and do something may be better than letting it run completely unchecked. If some unknown form of energy spawned out of a lab, for example, I think a lot of people's priors would be to regulate it so we could understand it more, even if we didn't understand everything about it.


Fascinating debate - I'll just add a tiny thought: the act of making something regulate-able, a public concern rather than a curio concern, is an action in its own right. The "how" doesn't always matter yet. I mean, the USA was built on the strand of philosophy that humans everywhere are entitled to pursue happiness -- legislate that! :) ha! But having, in principle, the notion that AI is a public concern of great potential global societal impact, and that therefore the companies that trade in it are not free to do whatever they want, is an important something.


We can't regulate something at that high a remove. It's not just that the tradeoffs are ill-defined; they're not known at all. Even with deepfakes, which we know are an issue, what can you actually do?

- Control generation: we can't, as that would require building a global panopticon at minimum.

- Dissuade dissemination: we already do, but if people share false information we can't stop them.

- Criminalise sharing: that's too draconian, and unenforceable.

- Set liabilities: that would chill tech development across the board, including detection, with far worse outcomes.

- Create detection methods: that's a technological question, not a regulatory one.

There's no regulatory silver bullet for these questions.


I do still think there are regulatory decisions that can be enforced and that would create somewhat measurable trade-offs or improvements. A small example from the EU AI Act is the requirement that if content is made using gen AI, that fact must be disclosed. Yes, this would not be perfectly effective, but that is a problem with a lot of regulation, and in my mind it is not a good argument against enforcing it now as best we can while we work on better methods of implementation. Even a tenet like the ban on the development of facial recognition software passes the test of being tractable in terms of a what, how, and why. There are definite trade-offs being made there, but at least the trade-offs are defined. It is a nebulous landscape, for sure. But my priors do actually lie with the idea that the tech industry, which I don't see as fundamentally different with regard to externalities from something like the oil industry, doesn't really have the incentive to regulate itself in accordance with the overall social good.
