Please treat this as an open thread! Do share whatever you’ve found interesting lately or been thinking about, or ask any questions you might have.
Also: I wrote a book on AI development, exploring its history and its future. Order it here, ideally in triplicate!
Strange Loop Canon has had a few posts since the last symposium. Here are some of them:
Rest, a case for sabbaticals, looking at the idea of undirected exploration as essential to true creativity, and how we’ve effectively sacrificed this at the altar of efficiency. Here are also some of the highlights from the comments on it!
The case against prediction, on the pitfalls of relying too much on predictions, as opposed to building the capability to react faster. This means focusing more on building up the ability to see the world as it is, and arguing against decision-making structures that delay everything!
The history of innovation, looking at data to identify trends and themes, especially the twin burdens of “ideas are harder to find” and “emergence of new themes”. I’ve long been fascinated by how innovation happens and the roadblocks that stop this from happening more, and a data-driven look into it was a personal odyssey. One, incidentally, that was highly enjoyable!
The misnomer of Big Tech, where I look at the fact that while we have a name for this group of companies, they are actually all dramatically different in what they do! They’re societal automations, and consequently tough to compare, which also makes it very hard to figure out how to regulate them. Because if you don’t quite know what they all do, how do you regulate them as a group? You can’t. You have to go case by case.
- writes about the problem of pharmacokinetics
I’ve long found it fascinating that almost everything in biology is basically fuzzy. You don’t quite know why this drug works quite this way, nor do you really know how it will work for you. It’s like being in a giant invisible scatterplot and hoping you’re standing somewhere close to the regression line. He explains what this particular problem is about.
Pharmacokinetics is the study of everything that happens to a drug when you put it in your body. So, if you’ve ever asked questions like “Why does my Advil take a few hours to work?” or “Why do I have to take a Claritin every 12 hours?” or even “Why does asparagus make my pee smell funny?”, well, those are all pharmacokinetic questions.
And talks about how the biggest recent breakthrough in obesity drugs, the miracle drug semaglutide, is effectively someone solving the problem of pharmacokinetics - making it work within the body for a sufficient period of time (days, not minutes) before it is excreted. These problems are analysed by giving the drug to a statistical sample of people, analysing whatever results you manage to get, and plotting a sensible distribution. Which is roughly accurate at a population level, but a long way from precise!
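To make that “invisible scatterplot” a bit more concrete, here’s a minimal sketch in Python, using made-up numbers rather than real semaglutide parameters: simulate simple exponential drug elimination for many hypothetical patients with different half-lives, and compare the tidy population mean to the individual spread.

```python
import numpy as np

# Toy one-compartment model: concentration decays exponentially, with a
# half-life that varies from person to person. All numbers are illustrative,
# not real pharmacokinetic parameters for any drug.
rng = np.random.default_rng(0)

n_patients = 1000
dose = 100.0                                              # arbitrary concentration units
half_lives = rng.lognormal(np.log(24), 0.4, n_patients)   # hours, log-normal spread
k = np.log(2) / half_lives                                # per-patient elimination rates

t = np.arange(0, 97)                                      # hours after the dose
conc = dose * np.exp(-np.outer(k, t))                     # one row per patient

pop_mean = conc.mean(axis=0)
p10, p90 = np.percentile(conc, [10, 90], axis=0)

# The mean curve looks smooth, but individuals scatter widely around it --
# the regression line versus the cloud of points.
print(f"Concentration at 48h: mean {pop_mean[48]:.1f}, "
      f"10th-90th percentile {p10[48]:.1f}-{p90[48]:.1f}")
```

The log-normal spread of half-lives is just a common simplifying assumption for between-person variability; the point is only that a population summary can look precise while any given individual sits far from it.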
There ought to be a better-funded group to analyse this stuff, even from existing public data, because the value added would be extraordinary!
- writes about the fundamental extrapolation error in the world of AI, where we look at what is the case and extrapolate to what could be.
The interesting part here to me is that as humans we’re unable to parse various forms of intelligence in any meaningful sense. If we see GPT-4 answer law exam questions, we think it understands law, and even if we don’t quite think it, we act as if it does.
And it’s a weird problem that we’ve never had to face up to. Until now!
AI companies are hiring poets, novelists and writers
I don’t want to say I told you so, but I told you so. It was seen as revolutionary that you could use the outputs of GPT-4 to fine-tune and retrain other LLMs, like Llama variants or GPT-3.5 finetunes. But then, we use our own (biological) neural nets to train new LLMs too, which makes this business model inevitable.
It’s not just annotation; the work is to craft high-quality training data. It’s actually a non-trivial problem, now that we’ve started learning how much quality matters!
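As a rough illustration of what crafting training data can involve, here’s a minimal sketch of hand-written instruction-tuning examples serialised to JSONL, a format commonly used for fine-tuning. The field names and file name are my own illustrative choices, not any particular provider’s schema.

```python
import json

# Hypothetical examples a hired writer might craft by hand: the value is in
# the quality of the prompts and responses, not in any clever tooling.
examples = [
    {
        "prompt": "Rewrite this sentence as a line of iambic pentameter: "
                  "'The model finished training overnight.'",
        "response": "The model finished training through the night.",
    },
    {
        "prompt": "Explain what a metaphor is to a ten-year-old, in two sentences.",
        "response": "A metaphor says one thing is another to show what they share. "
                    "Saying 'the classroom was a zoo' doesn't mean there were "
                    "animals; it means it was loud and chaotic.",
    },
]

# JSONL: one JSON object per line, easy to stream into a fine-tuning job.
with open("writer_finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```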
Maybe this will give the striking writers in Hollywood some respite.
Speaking of poets, we have acrostics in poetry from the ancient world
The essay starts out as pretty boilerplate academic writing, but trust me, it’s actually interesting!
Ten years ago, one of the most disruptive events in my intellectual life occurred at a dinner party at my house. My friend Richard Thomas, who had just given a talk at Baylor University, mentioned that a student of his had discovered an ‘Isaiah acrostic’ in Vergil’s Georgics, a 1st-century BCE poem ostensibly about farming but really about life and the universe. This remark simultaneously opened the door to two phenomena in ancient Greek and Latin poetry that I had not really thought about, despite a lifelong career in Classics: acrostics and Judaism.
Apart from learning a ton about ancient poetry, what stood out to me is that it in effect documents Straussian writing before Strauss, and that it seems a revelation to academics that human beings who wrote things down might have used allusions and puzzles within their words.
I don’t know if this is childlike naivete or adultlike naivete, but it says something interesting about our conception of what human beings were like in the long past. Personally I’ve thought that they were much like us today, just not as blessed.
A separate and more speculative line of thought makes me think of the bouba-kiki psychological experiment, where some words seem rounded and others sharper. Maybe some phrases evoke certain images, a callback to the prehistory of writing where pictograms were the cutting edge. A cheat code to the human psyche that still exists somewhere within us.
- writes about the almost impossible complexity within biology
If we can achieve this level of precision for aeronautical and electrical engineering, why does the design process for biology remain so empirical? To start, we don’t know what everything in a cell does. In 2016, researchers from the J. Craig Venter Institute published the first description of a minimal bacterial genome containing only the genes that are essential for life. Out of the 473 genes in this stripped-down genome, we don’t know what 149 of them do.
I find this fascinating! There was a period in my childhood when the idea that we knew more about outer space than our own oceans threw me for a loop.
Biology is much the same. We’re surrounded by data that we can’t easily read or understand, and we’re surrounded by the outcomes of that data interacting, which we can’t easily see or model. This means all our models are like small parts of an invisible elephant we can’t ever even imagine, much less conceptualise.
Elliot writes about a few recent papers which suggest testing many interactions massively in parallel, and how we could build a “physics engine” for biology to make this happen. Still speculative, but very interesting, and arguably one of the core questions of our era.
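As a back-of-the-envelope illustration (mine, not from the piece) of why massively parallel testing is appealing, consider how quickly the number of possible interactions grows even within that 473-gene minimal genome:

```python
from math import comb

# How many gene-gene interactions could exist among 473 genes?
genes = 473
pairs = comb(genes, 2)     # two-gene combinations
triples = comb(genes, 3)   # three-gene combinations

print(f"Pairwise combinations:  {pairs:,}")     # 111,628
print(f"Three-way combinations: {triples:,}")   # 17,525,596
```

Even the pairwise set is far beyond one-experiment-at-a-time testing, which is why pooled, massively parallel assays and predictive models look like the only way through.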
A phenomenal 2009 essay from three Googlers, The Unreasonable Effectiveness of Data. It’s incredible the extent to which this came true afterwards!
As always, if there are great reads, books or essays or otherwise, that Strange Loop readers would like, please do share them in the comments!
Regarding AI companies hiring poets: who says the humanities skills are dead?
There's something quite funny about shape rotators taking over the world (metaphorically), with the end result being a bunch of AI systems that you get the best results out of by having good wordcel skills (i.e., prompt engineering).
Great collection of links and thoughts. I would suggest we don't truly understand what it means for people to "understand law", so it's not surprising we don't know what it means for an AI to do so either. We "know it when we see it" but we can't spell it out.