28 Comments
Nov 18 · Liked by Rohit Krishnan

Rohit, your analysis raises important questions, but early studies of AI augmentation may be missing crucial nuances. A few key considerations:

1. Learning curves matter immensely. Most current studies involve users with limited exposure to these tools - like evaluating a craftsperson's satisfaction with power tools after just a few days. Meaningful collaboration patterns take time to develop.

2. The pace of change is unprecedented. Studies from even 6 months ago may not reflect current capabilities or best practices. Both the technology and usage patterns are evolving rapidly.

3. Structure and boundaries make a difference. The most successful human-AI collaborations I've observed maintain clear roles that preserve meaningful human agency while leveraging AI capabilities. Like any symbiotic relationship in nature, each partner needs to maintain its identity while contributing unique strengths.

Rather than automating human creativity, the goal should be augmenting it through thoughtful integration. This requires understanding both human cognition and AI capabilities - something that will take time to optimize as both the technology and our collaborative patterns evolve.

The decreased satisfaction reported may reflect poor implementation rather than inherent limitations. As we develop more sophisticated approaches to human-AI collaboration, we'll likely find ways to enhance rather than diminish human creativity and fulfillment.

Nov 19 · Liked by Rohit Krishnan

Yes. As someone who has spent hundreds of hours with ChatGPT, the most valuable thing it has provided is ideas. Specifically, I have some extremely unusual health problems, into which hundreds of hours of doctor time have provided little insight. With the right prompting, o1-preview can generate dozens of hypotheses. ChatGPT's breadth of surface knowledge combined with its looser cognitive filters means it occasionally generates useful ideas that I've never seen before. Of course, if I couldn't easily test out each idea, it would be useless.


An interesting hypothetical experiment: if a society had universal basic income, so that work wasn’t needed for survival (but was necessary for “perks” or “comforts” or luxuries), and this society was also richly infused with AI support, what jobs would people just stop doing?

What jobs would people find too dispiriting or pointless if they didn’t need the money for basic needs? And what jobs would we keep doing, even if we didn’t have to, even if the pay was trivial “play money,” simply because the work is intrinsically interesting?

I think your examples above suggest some clues!

author

Excellent thought. Reminds me of "basic" as they called it in The Expanse series.


The truth of the modern economy is that many humans perform jobs they’d prefer not to do, and now we don’t really need humans to do them.

I see three possible futures:

An era of human flourishing, as individuals are freed from “working to live”

An era of extreme discontent, as individuals contend with a sense of deep meaninglessness

A maintenance of the status quo - new meaningless jobs are invented to replace current meaningless jobs, and the system continues more or less without change


The three possibilities you list strike me as more or less likely depending on class / social power: the rich flourish, the "middle" is discontented, the poor experience churn and personal uncertainty (as usual), and on the whole it's still the status quo.


> My dad was a banker for four decades and he was mostly the master of his fate, which is untrue about most retail bankers today except maybe Jamie Dimon.

I know the Dimon point is partly a joke but it is a reminder that we're not cogs in a machine run by anonymous forces, we are cogs in a machine that is ultimately run by some humans, for the benefit of those humans. One way to think about the AI tool in the study is not as a way to help the individual scientists but as a way for their bosses to commodify some piece of the scientist's work through standardization and automation. And the tools in the study weren't even fancy LLMs, which are even more confusing because they mimic human speech!

Nov 19 · Liked by Rohit Krishnan

Yes, there’s a good history of economists looking at alienation of labour - interestingly, Adam Smith, who is often considered Marx’s polar opposite, had similar concerns about the wellbeing of workers - although he was generally pro-automation.

250 years ago, the majority of work was closer to that of the Amazon warehouse worker than to some mythical period where the majority of people worked in hand crafts.

The majority of mill workers were formerly agricultural labourers, not weavers - although the weavers certainly had something to say about being replaced.

author

Absolutely true. And yes, the broader point holds: it's a polarisation in how the power dynamic shifts, management gets easier, and that too creates a rift in expectations about what a job should or ought to be.

Nov 19 · Liked by Rohit Krishnan

Thanks for this! My questions: what is fun? Why can't AI make it fun? I'm working on/with AI to push strategically so that the 'fun parts of the job' you reference - 'coming up with ideas' - are actually more productive. And challenging. I think we get collectively stuck regarding AI as a doing thing - I was a strategy professor for years and focus on analytic/critical thinking. Generative AI can bring dimensionality to thinking with two-way conversations - the part we forget in talking about 'conversational' AI (particularly agents, as we've found - we have agents, knowledge synthesis, and decision support together) is that it doesn't depend on human prompting at every turn. We are focusing on the serve and return loop, meaning that the goals drive the conversation rather than clunky step-by-step prompts. We get it to hyper-personalize, to play devil's advocate, etc. We can make it tougher, not easier - and spark thinking - that is fun in my world.

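A minimal sketch of what such a goal-driven, agent-led loop might look like - purely hypothetical, not the commenter's actual system; the call_model helper below is a stand-in for whatever LLM API you use:

```python
# Hypothetical sketch: a goal-driven "serve and return" loop in which the
# agent, not the user, decides what to ask next, guided by a standing goal
# rather than step-by-step human prompts.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's API here."""
    return f"[model response to: {prompt[:60]}...]"


def serve_and_return(goal: str, max_turns: int = 5) -> list:
    transcript = []
    for _ in range(max_turns):
        # Serve: the agent generates the next devil's-advocate question itself.
        challenge = call_model(
            f"Goal: {goal}\nTranscript so far: {transcript}\n"
            "Ask the single hardest question that moves this goal forward."
        )
        # Return: the human answers (here, simply read from stdin).
        reply = input(f"\nAgent asks: {challenge}\nYour answer: ")
        transcript.append((challenge, reply))
        # The agent, not the human, judges whether the goal has been met.
        verdict = call_model(
            f"Goal: {goal}\nTranscript: {transcript}\n"
            "Reply YES if the goal is satisfied, otherwise NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break
    return transcript


if __name__ == "__main__":
    serve_and_return("Stress-test a go-to-market strategy for a new product")
```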
author

I think it's possible to have fun with it, I love ideating with LLMs and learning, but if I become a pure implementor I can see how that'd be demoralizing.

Nov 19 · Liked by Rohit Krishnan

Agree. Though to me gen AI gives us a new medium with more control in this area than I currently see people getting excited about... To me, we have a Rosetta stone to talk to the data and resources (dynamic and static) being brought in - 39% more materials - what are they doing with them? Ideating is a part, but it is only one aspect of discovery to distillation. I'm using AI to activate and combine strategic frameworks in new ways - again, fun in my world :)


I thought the paper was really well done but the devil is in the caveats. I just don’t think papers like this have demonstrated automation of important kinds of idea generation yet, because it hasn’t quite gotten the performance it needs to have. https://substack.com/@manjarinarayan/note/c-76359765?utm_source=notes-share-action&r=50pac


Thank you for this, Rohit - thought provoking, and also a confirmation, of sorts.

I've been looking at education VERY broadly (that is, home-schooling, alternative, apprenticeships and even autodidacts like myself) for a few decades now. Creativity also (as studies, they pair rather well). The weird but, in my experience, invariable correlation between difficulty in mastery and satisfaction applies everywhere! Business chops, technical skills, creative brilliance.

Even as a lowly soda-jerk, I discovered that one can mop a floor (or clean a public bathroom full of syringes) in either a slovenly or an expert manner. That is - even when our minds and ambitions are grossly out of scale with our employment, using ourselves FULLY for any task just plain feels better (nicer floor, and safer, too).

I'm also anecdotally convinced that when we phone it in (even for tasks which are objectively 'beneath us') we train ourselves for half-effort as a too-easy option - which goes with the hard truth that no one else can demand sustained excellence from us, so it remains unavailable to those who forever delay that self-requirement.

I keep hoping for a culture wide return to the humane and imperfect - the life-filled.

Quilts and bake-sales - more paintings on our walls, by people we know!

Wonder if that might be an unanticipated upside of some of the unavoidable economic down-side we all seem to be in for. (The eighties were brutal for starting workers - so many cycles of recession and layoff - but boy did we ever work hard on our long-arc vocations, as a result - gotta make that suffering romantic somehow, right?)

¯\_(ツ)_/¯

Nov 19 · Liked by Rohit Krishnan

"the reason the developers got upskilled is that a hard part of their job, of knowing where to focus and what to do, got better automated. This isn’t the same as the materials scientists finding new ideas to research, but also, it kind of is?

Maybe the answer is that it depends on your comparative advantage, and takes away the harder part of the job, which is knowing what to do. Instead of what seems harder, which is *doing* the thing."

This is interesting - the part of any job that I find most difficult and anxiety-inducing is the "what to do" or "what to do next" aspect, e.g. the blank page. I hate it! This is why the current consumer gen AI services have been so helpful for me. I get easily stuck and it's nice to have something that points me in a direction, even if I decide to go the completely opposite way.

Nov 19 · Liked by Rohit Krishnan

Great essay, up until you went and did the typical Bay Area thing of concluding that maybe humanity is all secretly capable of being polymaths so actually it really could all work out ok.

author

Haha I did move to the Bay Area, and am not immune to dreaming.

Nov 19 · Liked by Rohit Krishnan

Buck their dreams

Nov 19 · Liked by Rohit Krishnan

Thanks for these thoughts Rohit.

I think many people take the most satisfaction in (1) human interaction and (2) using their bodies for something useful, and those are the tasks that AI will probably do last, so overall we may hope for things to get better.

For the cognitively inclined, I think GPT-4 gives much pleasure in just chatting about deep stuff, providing a faster, broader, but more predictable silicon-based complement to our looser and less predictable carbon-based thoughts. C just has a fuzzier band gap than Si, so my guess is this is likely to remain so.

So I think all will be well, but of course, it's anybody's guess right now.


This is a very good observation. This happens with writing: I'm offered a book deal, all I have to do is give a few details, and the AI will do the rest. Our voice will be a long drone.

Nov 19 · edited Nov 19 · Liked by Rohit Krishnan

Thanks Rohit - the MIT research is interesting. I have a related hunch - for AI to expand more quickly, the technology has to focus not just on the process but also on the individual using it. Because you and I, unlike machines, don’t respond to a standard set of instructions in the same way each time. I think behaviour scientists can play an important role here. Think of what the best sports coaches do - spotting & enhancing individual strengths & cultivating a winner's mindset.

Nov 19 · Liked by Rohit Krishnan

Yeah, I recently showed a relative of mine how good Midjourney is at designing architecture. He's a retired architect. While he was certainly impressed, he also said, "but doing that was the fun part."

Nov 19 · Liked by Rohit Krishnan

Thanks for sharing, Rohit. Glad I came across your profile / Substack. I think this is a valuable extension to this paper:

https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

author

Thank you - yes, I know Ethan's paper; it's a really good one.


I might be missing something, but I see an age-old "job satisfaction" vs "quality of life" tradeoff curve here, except with fewer points around the middle now

author

The question is why the middle is squeezed


I don't know if you're looking for feedback, Rohit, but here it goes: I found a few mistakes, like extra words and such, in the article, and also the middle portion was a bit unclear to me. Maybe the editing could be more thorough.

author

Always happy with feedback.
