I’m curious about what the improvement curve on LLMs looks like when compared to the scaling of investment. True, there is still lots of progress to be made, but the timeframe of the original boom also saw a multi-order-of-magnitude increase in the amount of money being spent on research.
Most businesses offering LLM technology operate at a loss. At the moment the net effect of the LLM boom on the economy is obviously positive, but only because it boosts market sentiment, not because it yields productivity at anywhere near the scale of the dollars put in.
For the progress of LLM technology to continue at the current rate, it will eventually become necessary for it to justify itself by an increase in economic productivity. Replacing call centres and content farms won’t cut it.
Rohit, you’re showing us examples of how this equation can change. If LLMs are able to assist with cutting-edge research, that is a step toward positive feedback loops of innovation with AI at the center. It is even a step toward the dream/nightmare scenario a few people have warned about for decades: AI spearheading research into itself, improving its own capabilities, raising its own capital, buying its own infrastructure.
Up to now I’ve been bearish on two things: the ability of LLMs to meaningfully improve the human condition (and not just act as a band-aid for the escalating suffering caused by an absurd, Byzantine rules-driven order the average human being increasingly lacks the brain-power to navigate); and the ability of LLMs to destroy us all. I’m happy to see I might be wrong about the first one.
Exponential investment with linear improvement is the human condition, and that eventually provides enough value to be worth it, as it always has.