I'd love to read what you write on this topic - bookmarked these ones to read this weekend! My personal hypothesis is that deep learning is pretty great, but we'll have to bring symbolic AI back and incorporate it to increase the ability to do few-shot or zero-shot learning. It's an exciting space.
David Ferrucci of Elemental Cognition (formerly head of IBM's Watson project) agrees with you on that. FWIW I worked with him a bit when he was a graduate student at RPI, but not enough so I would claim him as my student. You might want to read what I have to say about GPT-3, GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2, https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_2
I think it represents a significant advance, and we need to try to understand what it is doing; I offer some preliminary thoughts in that paper. But I think such techniques will bottom out sooner or later.