> I’ve written before about the tasks that LLMs can’t do. The answer to many of those tasks was to use LLMs in a loop to do harder tasks. We can’t do that if each LLM call costs $7, but when they cost $0.0007, a lot more will get unlocked.
Spot on, and why price reduction (AND latency improvement) matters.
Reflection-based prompts and other AI output-revision flows have been both too expensive and too slow for most use cases. That's starting to change, and I'm excited to see the ramifications.
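For concreteness, here's a minimal sketch of what I mean by a reflection loop: draft, critique, revise, repeat. `call_llm` is a hypothetical stand-in for whatever completion API you use, not any particular library:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real completion call (wire up your provider here)."""
    raise NotImplementedError

def reflect(task: str, rounds: int = 2) -> str:
    # Initial draft: one LLM call.
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        # Ask the model to find concrete problems with its own output.
        critique = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List concrete problems with the draft."
        )
        # Ask it to rewrite the draft addressing that critique.
        draft = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing the problems listed above."
        )
    return draft
```

Each task costs 1 + 2×rounds calls instead of one, which is exactly why per-call price and latency dominate whether this pattern is viable.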