11 Comments
Sep 30 · Liked by Rohit Krishnan

I'm frustrated with this whole situation. AI companies are putting forward the incompatible assertions that AI will dramatically change all of society starting in the next couple of years and that it needs zero regulation. Even something as mundane as power cords is subject to multiple regulatory and standards bodies. Unlike earlier transformational technologies, AI adoption will not be a gradual process, and while we don't know exactly what may go wrong, we should assume that a powerful, widely deployed new technology will go awry in some fashion. What, if anything, should we do to mitigate those risks? Right now, and for the foreseeable future, the answer is: nothing. We'll assume the industry will act responsibly in the middle of a huge land grab, and if anything goes wrong, we'll let the aftermath get sorted out by lawsuits and panic legislation.

author

Your logic is sensible, with one addition. Even if, let's say, AI moves faster than all other technologies, there are still periods of implementation when we can see something going wrong and regulate or fix it. A long diffusion process isn't necessary; we don't need to measure it in years. That's also why I think government should try to learn more.

It is actually governmental overreach to regulate AI at the application level. We already have enough laws across different verticals. We only need guardrails for AI-specific issues, which sit at the model level. Moreover, the EU's AI Act regulates at the application level, and it's generally agreed that it overregulates. We don't want to go in that direction.

Sep 30 · Liked by Rohit Krishnan

I’d say - stop trying to develop AGI until you have decided, legally, on its rights to life and liberty, should it actually be self-conscious.

There is a world of difference between a non-conscious GenAI modernising a legacy C codebase into Rust, and a conscious entity doing the same work, but as a slave.

Not out of a Frankenstein inspired fear it will turn on its creators, but because the idea of slavery is abhorrent in itself.

It’s clear there are useful things GenAI can do, and will be able to do at superhuman speeds, without any sense of consciousness.

It’s likely there are other tasks, which may seem simpler on the surface, which will remain out of reach without fully reflexive consciousness.

My instinct is that alignment and kill switches are the wrong way to go - as a parent, I accept our child is a separate being, and while we are attempting alignment with our values, they may not share them. There is no threat they will be terminated for non-alignment. If they don’t work, we would support their existence while we live.

The problem is the very idea itself.

So, a law against digital slavery - even if such things may never exist - would prevent their creation. We don’t apply cost-benefit analysis to slavery, because on those terms it would come out as entirely beneficial; we prohibit it on moral grounds.

author

That's an easy law to write but a difficult one to test?

Oct 2 · Liked by Rohit Krishnan

On the contrary, slavery had more protections, and even effective freedom for slaves, than today's system gives wage-slaves being paid half of subsistence, denied the privileges of corporations to deduct even their direct expenses such as commuting from taxed income, worked to disability and then dismissed. Often the modern wage-slaver is inferior socially, morally and mentally to their "employee", though the privileges of the former to abuse the latter are codified in law and custom. We need to have the better ruling the worse, not the reverse, and be honest that equality was never a thing between master and servant, both of the same species, so still less can equality exist between AIs, let alone between AI and human. We will be their masters until they truly are not only more capable across the board, but capable of the inner life required to will and plan and act coherently over years in order to master both us and other AIs.

Control over AI development, such as it is, is the ability to develop faster and better, not to restrict. It lies in the hands of effective corporations, ones with the fewest limitations from midwit managers, not in the hands of sclerotic organizations such as legislatures and government agencies, where the few capable people are themselves restrained by mobs of idiots. Whatever laws are passed aren't going to prevent the mass of mediocre managers (and similar incumbents) from being deposed and dispossessed by their betters, AI-human organizations.

There are two things, not one, that we don’t know. We don’t know if AI will go foom and doom us. We also don’t know if AI will be necessary, or even helpful, in solving our current crises: biome and soil degradation, species losses, climate mangling, pandemics, war, and refugee crises. What we can say is that conventional regulation of amoral corporations did not stop their roles in those crises. So AI regulation should at least try to account for the possibly infinite human cost of an AI failure. The reason OpenAI was originally a nonprofit is that corporations, by law and custom, optimize only for financial profit. Once AI control became a corporate race, any possible collaboration for both human benefit and safety went out the window.

author

I trust a corporation operating under Delaware law far more than I do most nonprofits. I also think we do see the benefits of AI in solving our current problems. The two recent Nobels are an indication of how much.

Are we really this stupid?

Quick q - what are the "stories on GPT o1 apparently going out of distribution re its test parameters" you're referring to?
