Regarding AI companies hiring poets: who says the humanities skills are dead?
There's something quite funny about shape rotators (metaphorically) taking over the world, only for the end result to be a bunch of AI systems you get the best results out of by having good wordcel skills (i.e., prompt engineering).
Great collection of links and thoughts. I would suggest we don't truly understand what it means for people to "understand law", so it's not surprising we don't know what it means for an AI to do so either. We "know it when we see it," but we can't spell it out.
Indeed! It's also why we have large, complicated existing systems for interpreting and creating law as we go along.
I love the acknowledgment of biology as a vast untapped well of discovery. We've hardly begun to understand the insights we could achieve in this field. I think we're on the verge of a golden age of biotech (finally!).
Seems worth considering whether we should limit ourselves to creating artificial intelligence that mimics human intelligence. For instance, if passing the bar exam is a reliable indicator of a human's ability to practice law, but a machine's success on the same exam is not, what other factors are being measured in humans beyond simply getting the correct answers? This should lead us to question what human intelligence is and what it isn't. As we increasingly pair humans and machines on tasks like writing, programming, and playing chess (with more domains likely to come), can we take this a step further and specifically design machine intelligences that further enhance humans?
> Can we take this a step further and specifically design machine intelligences that further enhance humans?
Arguably we have
Do you have specific examples where this was the goal, or do you mean we've done so without necessarily meaning to? The chess example comes to mind. I wouldn't consider LLMs to be designed with the intention of applying a different type of intelligence than humans possess.
I don't think it was the intention, but I'm also not sure it would've turned out better if we'd done it with that intention; it just works better with us in the loop.