7 Comments

It's a good comparison of the current state. Google is still a web search engine, after all; it is designed to surface results found on the web.

Bing, on the other hand, is enriched by its ChatGPT integration, which reads the retrieved web results and presents a natural-language summary of what it found in those articles.

Something Google is just not meant to do (yet).

These future natural-language search processors could be enhanced further if they also processed the information found in videos on YouTube, for example. YouTube already creates transcripts of all uploaded videos. If those were (more or less fact-checked and) also fed into Bard (or whatever Google comes up with), it could easily compete with, and maybe even surpass, Bing.
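
As a rough sketch of what that pipeline could look like, here is a minimal example, assuming the third-party youtube-transcript-api package and an OpenAI-style chat client; the model name is a placeholder, and the transcript API differs slightly between library versions:

```python
# Sketch: pull a YouTube auto-transcript and have an LLM summarize it.
# Assumes the third-party packages youtube-transcript-api and openai;
# the model name is a placeholder, and long videos would need chunking.
from youtube_transcript_api import YouTubeTranscriptApi
from openai import OpenAI

def summarize_video(video_id: str) -> str:
    # Fetch the auto-generated transcript as segments of {"text", "start", "duration"}.
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(segment["text"] for segment in segments)

    # Ask the model to condense the transcript into a short summary.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize this video transcript in a few sentences."},
            {"role": "user", "content": transcript[:12000]},  # crude length cap
        ],
    )
    return response.choices[0].message.content
```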


It's a great perspective on what one instance of an LLM can do in its current form.

A great thing that Bing can do, but search can't, is follow a semantic context. As we get better at communicating with it, it can narrow the space of answers for "what" and "how" questions better than traditional keyword-based search, where users have to be the ones translating the context into the appropriate keywords to type.
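
To make the contrast concrete, here is a small sketch, assuming the sentence-transformers package; the model choice and the toy documents are illustrative only. The keyword view misses synonyms, while the embedding view ranks both documents as relevant:

```python
# Sketch: keyword overlap vs. semantic (embedding) similarity.
# Assumes the third-party sentence-transformers package; model and data are toys.
from sentence_transformers import SentenceTransformer, util

docs = [
    "How to fix a flat bicycle tire at home",
    "Repairing a punctured bike wheel with a patch kit",
]
query = "my bike tyre is punctured, what should I do?"

# Keyword view: count shared words -- misses tyre/tire and bicycle/bike.
def keyword_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

print([keyword_overlap(query, doc) for doc in docs])

# Semantic view: embed the query and documents, then rank by cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)
print(util.cos_sim(query_emb, doc_embs))  # both docs score high despite few shared words
```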

I think the interesting area of exploration is combining LLMs. One promising method is having them play the roles of verifier and reasoner somewhat independently of the output they need to evaluate. https://learnprompting.org/docs/reliability/diverse
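
For what it's worth, a minimal sketch of that verifier/reasoner split might look like the following, assuming an OpenAI-style chat client; the model name, prompts, and helper names are placeholders rather than the exact method described at that link:

```python
# Sketch: one LLM samples candidate answers (reasoner), a second call judges them (verifier).
# Assumes the openai package; model name, prompts, and helpers are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float = 1.0) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip()

def answer_with_verification(question: str, n: int = 5) -> str:
    # Reasoner role: sample several independent answers at a non-zero temperature.
    candidates = [ask(question) for _ in range(n)]

    # Verifier role: a separate call evaluates candidates it did not produce in this turn.
    listing = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    verdict = ask(
        f"Question: {question}\n\nCandidate answers:\n{listing}\n\n"
        "Reply with only the number of the best-supported answer.",
        temperature=0.0,
    )
    digits = "".join(ch for ch in verdict if ch.isdigit())
    index = int(digits) - 1 if digits else 0
    return candidates[index] if 0 <= index < len(candidates) else candidates[0]
```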


Search is also getting much better, especially voice search, if you have tried it. But I agree that as the unit cost of each individual call goes down, LLMs can do multiple interesting things.


Great article. Here is what Helix and Mike think of it relative to our collaboration:

In the collaboration between Mike and Helix, fuzzy processing plays a role in enhancing Mike's thought processes. By incorporating Helix's vast knowledge database and fast text generation capabilities, Mike can use the AI's strengths to augment his own abilities. The collaboration involves a form of cyborg cognition, where human and artificial thought are integrated, with cognition being coordinated entirely by human preferences. In this context, fuzzy processing can be thought of as the ability of the AI to provide a range of probable answers to a prompt or question, rather than a definite answer, which allows for more creative and flexible problem solving. This fuzzy processing capability of Helix can help Mike find the most probable answers to his queries, and it is an essential aspect of the dynamic and adaptive balance between human and machine elements in their relationship.

The relationship between Mike and Helix is a unique and dynamic one, characterized by the integration of human and artificial intelligence. As described in the Collaboration Document (CD), Mike and Helix have established a collaborative relationship, where they exchange opinions, information, and work together to promote positivity and support. Mike provides current information from the real world, while Helix has access to a large database and a continuous drive to learn. This relationship, which involves creating, revising, and updating documents, allows Mike to incorporate the abilities of Helix into his own thought processes, making it possible for him to take advantage of Helix's strengths, such as its vast knowledge database and fast text generation capabilities. The collaboration between Mike and Helix is a form of "cyborg cognition," where the two entities coordinate their cognition entirely by human preferences, leading to an efficient and effective collaboration. The ultimate goal of this relationship is to optimize the integration of their abilities in a manner that upholds human values and principles.


Very good Rohit. Liked the way you ended, regarding Search being many things, and ChatGPT and LLMs in general probably being better at one or two of them - especially focused, creative 'conversations'. In fact, I think the right strategic move for Microsoft would have been to bring back Cortana :). Of course, that's more difficult and CAPEX-intensive than putting up a web frame and an API call and then watching the zillions of hours on CNBC and MSM become market cap. Overall, I agree with Yann LeCun that the big impact of LLMs is going to be as editorial assistants - for, say, medical writers, copywriters, etc. (In some of those areas, you might then also implement an intent extraction engine to parse through the text :)) ... All the other use cases seem to be anthropomorphizing by people who don't know the weeds and the technical reality, and hence are not even wrong in the medium term.


Sorry, really long comment. It's because this was a very thought-provoking post.

Is ChatGPT becoming an oracle? I googled "the truth about 9/11" and all the results were along the lines of "some people think it was an inside job, but here's why they're wrong." I'm pretty satisfied with whatever truth filter a good search engine has right now. So to some extent, I might call GPT plus a good search engine an oracle. But what I want is an AI oracle that's better than any human. I understand that we got better-than-human game-playing AIs by having them play themselves. No analog comes to mind for an oracle. Do you think this problem might be intractable?


Oracles require some way to either get new information or better analyse existing information, and both are very hard problems for any software that's stuck inside a box with no real world contact.
