It appears no one is seriously working on sensors, particularly at scale. Your hands have thousands of temperature, texture, and pressure sensors, and synthesizing the data they provide is crucial for manipulating the variety of objects found in unstructured or semi-structured environments. (If you've experienced numbness from cold or carpal tunnel, you will know this.)
The easy stuff, vision, has been done--possibly well enough. Time to move on to the real challenges.
Why is that, do you think?
Well, why did researchers focus on problems like mathematical theorem provers, chess and Go as being short-cuts to artificial intelligence for so long?
Sensory information (especially from tactile senses) seems to exist in a kind of blind spot for computer scientists and engineers. It doesn't seem like an interesting problem, I guess. Except for vision.
But I believe taking massively parallel information from thousands of sensors and figuring out what's going on is fundamental to getting flexible, adaptable robots of the kind you're wishing for--robots that can pick up the toys, fold the laundry, wash the windows, and make you a cup of tea the way you'd make it. Actually, even that last task alone requires temperature, texture, and pressure sensing across gripping, lifting, pulling, pushing, and twisting movements.
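To make the fusion problem concrete, here's a toy Python sketch of what "thousands of readings in, one grip decision out" might look like. Every array shape and threshold is invented for illustration; a real controller would be far more involved.

```python
import numpy as np

# Hypothetical 32x32 fingertip arrays: pressure (kPa), temperature (°C),
# and a crude texture proxy (high-frequency vibration energy, 0-1).
rng = np.random.default_rng(0)
pressure = rng.uniform(0, 50, (32, 32))
temperature = rng.uniform(20, 60, (32, 32))
texture = rng.uniform(0, 1, (32, 32))

def grip_adjustment(pressure, temperature, texture,
                    contact_kpa=5.0, hot_c=55.0):
    """Collapse ~3,000 raw readings into a single control decision."""
    if temperature.max() > hot_c:
        return "release"                # e.g. the too-hot cup of tea
    contact = pressure > contact_kpa
    if contact.mean() < 0.05:           # almost no contact area: losing grip
        return "tighten"
    if texture[contact].std() > 0.3:    # uneven micro-vibration: incipient slip
        return "tighten"
    return "hold"

print(grip_adjustment(pressure, temperature, texture))
```

Even this cartoon version has to combine all three modalities before it can pick an action, which is the point: the hard part is the synthesis, not any single sensor.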
We are indeed surrounded by specialised "robots", but the true paradigm shift comes if/when we engineer ones with high mobility *and* the ability to generalise to new tasks. I think the latter trait will take more time to get right than the former, but eventually we should be able to free up people's time just as other household items were the unsung heroes of productivity gains in past decades. Alphabet seems to be on this pursuit with Everyday Robots, one of their X spinoffs. And it looks like the transformer paradigm that has enabled systems like ChatGPT might be the key to helping robots significantly improve their ability to learn new tasks.
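For a sense of what that transformer paradigm looks like in robotics, here's a heavily simplified sketch in the spirit of Google's RT-1 (linked elsewhere in this thread). All shapes, vocabulary sizes, and the discretisation scheme are placeholders, not the actual RT-1 architecture:

```python
import torch
import torch.nn as nn

class ToyRobotTransformer(nn.Module):
    """Treat a robot episode as a token sequence; predict the next action."""
    def __init__(self, n_action_bins=256, d_model=128):
        super().__init__()
        self.img_proj = nn.Linear(512, d_model)  # pretend 512-d image features
        self.txt_proj = nn.Linear(512, d_model)  # pretend 512-d instruction embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, n_action_bins)  # discretised actions

    def forward(self, image_feats, instruction_feat):
        # Instruction token first, then one token per camera frame.
        tokens = torch.cat([self.txt_proj(instruction_feat),
                            self.img_proj(image_feats)], dim=1)
        h = self.encoder(tokens)
        return self.action_head(h[:, -1])  # logits over action bins

model = ToyRobotTransformer()
logits = model(torch.randn(1, 6, 512), torch.randn(1, 1, 512))
print(logits.shape)  # torch.Size([1, 256])
```

The appeal is that "fold the laundry" and "make tea" become the same kind of sequence-prediction problem, so data from one task can help with another.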
Agree on the need for generalisation, which is why I think they'd do well to start from bounded tasks rather than trying to make fully generalisable and mobile bots from the outset.
Seems that these are some of the same problems as with children. Hard to train and expensive. :)
Seriously, the solutions may have some similarity: stretch out the training and payment periods, buying more capabilities as the unit learns to use its existing ones. Maybe in 16-18 years it would be capable of driving a car.
To stretch the analogy a tad: that's why we tried streamlining things by creating an education system - it's more cost-effective.
Yes, I can see different owners choosing different ways to rear their robots: DIY, in-home third-party trainers, various combinations of on-site and off-site. Of course, it helps that DNA highly standardizes the untrained model child. Another point might be to make the basic robot model pretty incapable of doing harm: soft materials, low ability to exert force during the earliest parts of the training.
Also, we could harness evolution, gradually adding functions to the Roomba, say.
Thank you! I think we're mitigating and solving some of the concerns you expressed in your article. Nice to see there is recognition of these critical AI production problems!
It's essential if we're to stop worrying about fixing everything pre-emptively and instead find an amount of error we're willing to accept. I guess we could think of it as distributed reinforcement learning, but that's also not quite right. My highly immature hypothesis is that there's some way of embedding the learning within the models themselves that might require a different way of thinking about it.
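One concrete version of "an amount of error we're willing to accept" is a confidence gate with a human escalation path, roughly what human-in-the-loop services offer. A minimal sketch; the threshold and the reviewer function are placeholders, not any real API:

```python
training_queue = []  # corrections to fold back into the next training run

def ask_human_reviewer(prediction):
    # Placeholder for a real human-in-the-loop service; here it just
    # simulates a corrected label coming back.
    return {"label": "corrected-" + prediction["label"]}

def resolve(prediction, confidence, threshold=0.9):
    """Act on the model's output when confident; escalate otherwise."""
    if confidence >= threshold:
        return prediction             # the accepted error budget lives here
    corrected = ask_human_reviewer(prediction)
    training_queue.append(corrected)  # correction feeds future retraining
    return corrected

print(resolve({"label": "pallet"}, confidence=0.95))  # acted on directly
print(resolve({"label": "pallet"}, confidence=0.42))  # escalated to a human
print(len(training_queue))                            # 1 correction queued
```

The "embedding the learning within models" part would be whatever closes the loop from that queue back into the model, which is exactly the unsolved bit.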
Maybe Gary Marcus has some suggestions you agree with? https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-klein-interviews-gary-marcus.html
Love this post and the excerpt below. Have you seen what we do at SparkAI?
"And turns out that this is incredibly hard. There is no way you can handle all edge cases when the training is pretty specific, and rooted in biomechanics. To be successful, they have to work relatively autonomously, navigate its surroundings, make autonomous decisions, and be able to actually handle things like a baby blanket or a hot cup of coffee."
I hadn't, but that's very cool!
See SOTA robots here https://ai.googleblog.com/2022/12/rt-1-robotics-transformer-for-real.html?m=1
I also wrote about the problem here: https://sergey.substack.com/p/general-purpose-robots (it mostly holds up).
Bots are ubiquitous; they just aren't the kind you're looking for.
Grrrrrrr
Where does the company Figure.ai, well, figure, into all of this? Do they have a solid plan, or are they just hype? I don't know enough to know either way.
Unsure, to be honest, but I don't think humanoid bots are the easy first step before specialised ones for the kitchen or cleaning, etc.