9 Comments

>We don’t know if it’s sufficient to get the leaps of imagination like that of Einstein and the General Theory of Relativity or Shannon’s information theory, but it’s definitely enough to still speed up our rate of discovery tremendously!

I've recently been thinking a lot about LLMs doing science. It seems that they already learn various concepts. These are used to condense the meaning of text in their context window (at various intermediate levels in their architecture), as well as to generate text. Scientific insights are concepts that have a) not yet been directly articulated and b) succinctly explain the observed world (or, in this case, the process that produces the text LLMs observe). Of course this definition also covers obvious things like embodiment (which are often not articulated), but it suffices for now.

As an example, let's take Freud's model of the human psyche as Id, Ego and Super-Ego. For our purposes it is true (or at least useful). I wonder if a sufficiently advanced LLM could infer something like this just from reading all of the written word produced up to 1856, the year Freud was born. The idea is that this underlying concept would be useful for correctly predicting the next word in the training text.

Now imagine we can statistically define inferred scientific insights in an LLM. Maybe they are nodes or algorithms that are broadly used in many different contexts. The technical definition of these concepts is not so important; let's take it as a given that they can be statistically described.

With a definition, one could automate identification and extraction of these concepts (a toy sketch of what that might look like follows below). One could also recursively develop them (e.g., find training data that would be especially useful). This is essentially a system that finds the most explanatory ideas that have not yet been articulated. Science!
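To make that a bit more concrete, here is a minimal sketch, entirely my own and with every name in it hypothetical: assume we have already extracted per-example activations for a set of latent features (say, from a sparse-autoencoder-style decomposition of an LLM's hidden states), grouped by text domain. A feature that fires with similar strength across many unrelated domains is a candidate "cross-domain concept" worth a closer look; one that fires in a single domain probably just encodes topic vocabulary.

```python
# Toy sketch with synthetic data: rank hypothetical latent features by how
# broadly they are used across text domains. Nothing here is from the post;
# the feature activations would in practice come from some interpretability
# method applied to an LLM, which is assumed rather than implemented.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: mean activation of each feature on text from each domain.
domains = ["archeology", "linguistics", "neuroscience", "folklore"]
n_features = 1000
activations = rng.random((n_features, len(domains)))  # shape: (features, domains)

def breadth_score(acts: np.ndarray) -> np.ndarray:
    """Entropy of each feature's activation distribution over domains.

    High entropy = the feature is used roughly evenly everywhere (broad);
    low entropy = the feature is concentrated in one domain (narrow).
    """
    p = acts / acts.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

scores = breadth_score(activations)
top = np.argsort(scores)[::-1][:10]  # ten broadest candidate "concepts"
for idx in top:
    print(f"feature {idx:4d}  breadth={scores[idx]:.3f}  per-domain={np.round(activations[idx], 2)}")
```

Entropy over domains is only one crude breadth measure, of course; the point is just that once such concepts can be statistically described, ranking and extracting candidates becomes ordinary data analysis.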

This could be particularly powerful for areas of research that are highly interdisciplinary, such as AI and the evolution of consciousness. There are likely concepts that explain observations in archeology, linguistics, neuroscience and the types of stories we tell. It's just that nobody can be an expert in all of those areas and see how one model explains disparate holes in our understanding.


Indeed. If we can automate even the rote analyses, that has far-reaching implications, even if conceptual leaps à la Einstein remain limited to human beings for the time being.


I think the leaps are going to be one more thing that is easy for computers. Easier than doing the dishes.


LLMs doing philosophy seems at least as interesting and plausibly more valuable. For example: check out science's take on (and causal involvement in) the various wars we have going on here on Planet Earth.


You should publish the entire piece (potentially + other select previous posts) as a book to, if nothing else, document how you’re thinking about AI at this point in the cycle/buildout. Maybe do a selective Substack pre-release to take advantage of the feedback that appears to be surprisingly constructive. I’d buy a copy.


[Minor, nitpicky feedback] Looking forward to reading this in its entirety. But I just noticed early on you say that current AI "Can use some proscribed tools...". I think you mean "prescribed", not "proscribed". The latter is a synonym for prohibited or forbidden.


Yes I definitely did. Thank you!


What sort of 'system' do you use to 'maintain' references? This is a complete word salad:

===

34. Wang, Alex, et al. “Loopy: A Neural Program Synthesis Framework with Loop Invariants.” arXiv preprint arXiv:2305.08848, 2023.

35. Brown, Tom B., et al. “GPT-4: Language Models are Few-Shot Learners.” arXiv preprint arXiv:2305.05644, 2023.

===

Both papers do not exist under those titles or arXiv document numbers ...


Tried scripting to move from hyperlinks to citations but fucked up. Mea culpa. I'll fix it.
