I started coding on machines with 1K of memory. By the end of the 80s - I was writing code that was directly manipulating disk sectors (not saving files, but actual blocks of data at specific locations on a spinning disk). Programming languages had relatively little in the way of standard libraries or components, and even for the same language, code wasn’t particularly portable between different computer platforms.
By the early 90s - things had progressed a lot - you could open a socket on an ISDN line to a remote server, and only had to code everything else on top (the handshaking and message exchange). The desktop OS war had a winner, and it wasn’t IBM OS/2. Unix was standardising, with relatively high portability of C code between different Unix platforms.
Then we got Java, HTTP, XML, SSL - and connecting two systems would take days rather than weeks.
15 years ago - it took me minutes to write some JavaScript to make an AJAX call to an external service and display the results as a data grid.
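(For anyone who wasn’t writing that kind of thing at the time, here’s a rough modern sketch of what I mean - using fetch rather than the XMLHttpRequest of the day, with a made-up endpoint and generic field handling, purely for illustration.)

```typescript
// Minimal sketch: fetch JSON from a (hypothetical) service and render it as a plain table.
// The endpoint URL is invented for illustration; swap in a real API.
async function loadGrid(): Promise<void> {
  const response = await fetch("https://api.example.com/orders");
  const rows: Array<Record<string, unknown>> = await response.json();

  const table = document.createElement("table");
  for (const row of rows) {
    const tr = document.createElement("tr");
    for (const value of Object.values(row)) {
      const td = document.createElement("td");
      td.textContent = String(value); // each field becomes one cell
      tr.appendChild(td);
    }
    table.appendChild(tr);
  }
  document.body.appendChild(table);
}

void loadGrid();
```

The point isn’t the particular API - it’s that the plumbing (transport, parsing, rendering) is all provided, and the ‘development’ left to do is a few lines of glue.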
And so it keeps accelerating - authentication is standardised, API documentation has standardised around OpenAPI in a way that WSDL never quite achieved.
Where I’m going with this - at each stage, the barriers to software development have been reduced - it requires less technical knowledge.
What can be done for $10,000 has massively changed, and that in turn has created a massive market for software development.
If you go back even further, to the first uses of commercial computing, only the very largest enterprises could see any point in investing in computer-based automation.
But when adding an automatic telephony agent is something that you pay for as a service, even a mom-and-pop store can have one as a plugin on their Shopify store.
Easier, cheaper development creates a larger market for development. It results in businesses refreshing front-end design for reasons of fashion, because it’s not going to cost $20 million.
What I’ve not seen yet is ‘the business’ actually wanting to do the testing and fault analysis.
If we get to the point where humans write no code - and we already write incredibly little code to produce megabytes of machine language - then ‘developer’ is still the role of the person who builds and QAs the system. It requires a kind of system thinking that is different to marketing or finance. In a small business, one person may wear all those hats - but in any larger business, there will always be specialisation in roles.
(And in some ways, this might be a return to earlier stages of IT. The notion of the full-time tester is relatively new, as is the division between frontend and backend engineer. Or the backend engineer who doesn’t ‘do databases’. That kind of hyper-specialisation is under threat.)
Or put another way - when I write code, I am already 100x, maybe 1000x more productive than when I started. And yet the backlog of change requests is still there.
You remove a pain point in a business flow - and then something else will be the point where people spend 10% of their time.
So good Rohit. And the key to the future is the consumer, not the vibe producer. Humans are so picky and fickle. We like this today, that tomorrow. Wait, maybe I don't even like your post any more. Oh yes, I do. So holding attention matters. 'Don't buy slop' is a great motto for the late 2020s. Run marathons. Read Voltaire. Tend your garden.
Vibe coding is just the start—it’s the first clear articulation of a broader paradigm shift already underway.
Now that vibe is an input, work is increasingly defined by the ability to articulate intent and let the system generate the rest. This isn’t just a UX phenomenon—it’s a labor shift. The same principle that’s turning prompts into code will turn prompts into marketing plans, contracts, dashboards, onboarding flows, even full startups.
We’re witnessing the Software-as-a-Service to Employee-as-a-Service transformation: instead of hiring teams, you manage fleets of agents; instead of building tools, you orchestrate behaviors. This will displace huge portions of the workforce—not hypothetically, but structurally—because it doesn’t require mass layoffs. It just erodes the need to hire in the first place.
The VP-of-AI to AI-VP shift is another canary. When AI agents are running strategy loops, not just writing snippets or summarizing docs, that’s not augmentation—it’s substitution. And it’s coming for all high-leverage roles.
The unfortunate twist? Bots don’t pay taxes. So as more cognitive labor gets absorbed by models, the economic foundation of public systems starts to rot. The workforce won’t just evolve—it’ll collapse unless we build new value pipelines, compensation logic, and policy frameworks fast enough to catch the fall.
Wholeheartedly agree. It’s sad that this is even a question. I think the AGI debate (like most debates) is a way for people to have an excuse to not learn how to do more for themselves?
... jobs are ALWAYS changing as aspirations change, capabilities change & so forth. AI will be an accelerator & playing with it, learning to become more productive with it, experimenting with new careers - so many possibilities.
If you haven’t read it already, I recommend the short story “The Evolution of Human Science” by Ted Chiang in his book “Stories of Your Life and Others”! It’s a super short & relevant read, and you can read it for free online too.
> And yet, consultants have grown as an industry. Most jobs are like this.
Jobs have still changed dramatically. They remained in some capacity, because there was value to human input at a higher level of abstraction, while the lower levels were delegated to automation. But the ceiling of our own individual capacity is not increasing (at least not fast enough). What do you imagine a human can contribute in this environment?
I don't think that's true. What a human can do has increased dramatically, enhanced by technology, and I see no reason that would come to a crashing halt in the immediate future.
I am specifically referring to immediate physical and cognitive skills. We are very much the same as we were hundreds of years ago. The mental capacity for patterns and abstractions is the same. The information throughput is limited by the brain. This can be somewhat alleviated by using AI itself to simplify information and present it to humans, but at that point, why bother bottlenecking the process at human information-processing power, when their input is minimally valuable?
I think it will absolutely happen, as it has happened before, but the question is whether this means we will find new avenues for work, i.e. discover new bottlenecks. I think yes.
Fair enough, but this does not seem likely to me. No one seems to be able to meaningfully describe such a future in a coherent fashion. Yet, we are hurtling towards a future with very well-defined threats and no meaningful countermeasures. This positive messaging around labour thus comes across as very disingenuous.
It has always been true that the future is not coherent. It's literally an impossible ask to have clarity about the future in that fashion. And currently I see threats, sure, but also opportunities, and there is no particular reason to believe that the threat dominates. There is of course the likelihood that labour will get disrupted - I say as much about current trends - but I don't see this as a nihilistic situation.
We can't always rely on historical precedents for predictions. We should probably be analysing the current situation for what it is. Do those perceived opportunities outweigh the threats, in your conception?