The tantalizing prospect of artificial intelligence ... has provoked both fear and excitement. We do not know whether the creation of intelligent machines is possible, but many believe that it is inevitable.
Stephen Hawking
I do not wish to comment on the work; if it does not speak for itself, it is a failure.
George Orwell
I
Prompt: What could a future utopia in 2070 look like?
In a future utopia, people would be living in harmony with each other and the environment. There would be no poverty or war, and everyone would have access to education, healthcare, and clean energy. Solar panels would line the roofs of homes and buildings, providing power for everything from transportation to electronics. Communities would be close-knit, with neighbors working together to help each other out. People would spend their leisure time pursuing their passions, whether it’s playing music or exploring new hobbies. The world would be a beautiful place, with green spaces and natural resources protected so that future generations can enjoy them as well.
To make this utopia come about we need to harness both our compassion and our intelligence. We need to have the intelligence to develop systems that allow people to have their basic needs met, for instance we need to have systems that provide food and shelter for everyone. We need to have the compassion to care for other people. We need to have the compassion to care for the environment. We need to have the intelligence to develop technology that does not harm the environment. We need to have the compassion to care for animals. We need to have the intelligence to develop systems that do not harm animals.
In order to make this utopia a reality we need to work together. We need to have the intelligence to develop systems that allow us to work together. We need to have the compassion to care for each other. We need to have the intelligence to develop systems that allow us to care for each other.
Some people might say that intelligence is necessary for creating and sustaining a utopia, while others might say that a utopia is only possible if everyone is intelligent. It is worth noting, however, that many utopias described in literature and philosophy are populated by people of above- average intelligence. This is likely because intelligent people are better able to understand and solve complex problems, which is necessary for creating a perfect society.
There has also been much debate surrounding the nature of intelligence, both artificial and natural. On one hand, some believe that intelligence is purely a product of nurture, while others contend that it is primarily innate. There are pros and cons to each perspective, but ultimately it seems that the truth lies somewhere in the middle.
One argument for artificial intelligence being solely based on nurture is the fact that machines can only be as intelligent as their programmers. This means that if humans create a flawed or biased algorithm, the AI will reflect these same shortcomings. Similarly, if an AI is not given enough data to work with, its learning will be limited. Thus, it could be argued that since humans are imperfect and limited in their knowledge, any machine they create will also inherit these limitations.
On the other hand, there is evidence to suggest that certain forms of intelligence are indeed innate. For example, studies have shown that infants as young as six months old already have rudimentary numerical skills (Dehaene & Brannon 2006). This suggests that at least some aspects of intelligence do not require extensive experience or training to develop – they are instead hardwired into our brains from birth.
Furthermore, research on savants – individuals with extraordinary abilities despite having little formal education – provides further support for the role of innateness in intelligence (Treffert 2009). Savants often display “islands of genius” which cannot be explained by exposure or training; rather they seem to result from an abnormal brain development which gives rise to exceptional abilities.
Autism, then, may be said to be a “disorder of genius.” Several studies have been done which support this notion based on an analysis of the abilities and behaviors demonstrated by autistic patients versus neurotypical controls. Many of these studies were carried out prior to the technological advances that allowed for scanning of brains in live persons; newer research reveals marked structural differences between how autists and controls think. (See Louisa Grandin from Autism Speaks below.) The following traits are some of the signs associated with savant behavior:
Preoccupation with repetitive actions or interests – Obsessive behavioral patterns such as hand flapping frequently occur during episodes known as “stimming” when autistic subjects experience too much stimulation in their environment given their low level threshold for processing sensory input relative to most people
Retentiveness toward factual or cultural information – Subjects often display superior memory retention skills related to rote learning
Surplus mechanical ability
Superior mathematical ability
Expertise at speed counting
Fast computation
Visual-spatial perception
Synaesthesia
Reduced comprehension capacity
Reduced emotional recall
Classifying the varieties of intelligence associated with humans vs that demonstrated by AI is difficult because there is no agreed-upon definition of either "intelligence" or "human." Some people might say that any form of intelligence demonstrated by AI is not truly human, while others might argue that any example of intelligent behaviour demonstrates some degree of humanity. In general, however, most experts would agree that there are certain types of intelligence that are more commonly associated with humans than with AI. These include abilities like natural language processing, creative problem solving, and empathy.
The term savant is generally used to describe people with extraordinary abilities in specific areas, such as mathematics or memory. AI can be said to possess similar abilities, but on a much broader scale.
Our conniptions around the word intelligence is what makes us go in circles. We try to convince ourselves that we’re smart because intelligence is a good thing, right? But then the thought of being called ‘smart’ isn’t desired if it makes us seem arrogant or conceited in front of others. So how do you decide between pride and insecurity when it comes to intelligence/being smart without feeling guilty about any sort of answer you have?
This question can be difficult to answer because there is no definite line between pride and insecurity when it comes to intelligence. On one hand, being proud of your intelligence can be seen as a positive trait since it shows confidence in your abilities. However, too much pride in your intellect can come across as arrogant and conceited. Meanwhile, feeling insecure about your intelligence can lead to feelings of inferiority and self-doubt. Ultimately, the key is to strike a balance between the two extremes and feel comfortable with whatever level of intelligence you have.
II
The first thing to note is that this essay is ... fine! If it were written by a fifteen-year-old me, I’d be pretty impressed. Sure, there isn’t much personality beyond a sort of papier-mâché version of good-boy-caught-in-a-computer, but it’s still pretty impressive in both its ideas and its general grammatical execution.
(I’m reasonably convinced that if I sent it through another AI focused on writing styles, like P. G. Wodehouse or whoever, it could perhaps do a half-decent facsimile. But from my basic experiments thus far, this still has a ways to go.)
The second thing I noticed is that it’s extremely keen to say when it doesn’t know something! It’s actually pretty hard to get GPT to answer the thornier type of questions we like to grapple with. It’s much happier extrapolating a bit from existing sources, as long as the extrapolations remain sufficiently anodyne.
This is probably partly a response to things like Microsoft’s chatbot becoming awfully racist awfully quickly, and I’d allow that it’s necessary in extreme cases like that. However, it does make your explorations of the system’s capabilities feel a lot more bounded, like you’re surfing the web with SafeSearch turned up to 11.
The third and most problematic point, as of now, is that because the system is, in essence, a perfect data-crunching, synthesising and output machine, it doesn’t have a core belief about anything. Not only that, it doesn’t even understand the concept of having or needing one.
This is mainly an issue because most of the time when we read things online, whether that’s essays or papers, a big part of what we love is getting closer to the source of the ideas themselves. With GPT we don’t get to know what Scott Alexander thinks, or Tyler Cowen thinks, or Bryan Caplan thinks, and are instead left looking at a piece that stands de novo, without any contextual clues.
Essays are not just words arranged on a page. They are windows into how someone thinks. In reading them you realise something akin to a revelation, a firing of neurons giving a feeling of connection to another human, because in some deeper ways you actually know them!
As Bertrand Russell said:
A style is not good unless it is an intimate and almost involuntary expression of the personality of the writer, and then only if the writer's personality is worth expressing.
And when we don’t have contextual clues to tell us how best to understand what’s written, if we become pure textualists (with apologies to my Supreme Court readership), we lose a large part of what makes writing meaningful.
If you take the maxims of how to write an essay, as laid down by the usual writing gurus, you’d see that GPT satisfies them with flying colours. Russell again:
There are some simple maxims – not perhaps quite so simple as those which my brother-in-law Logan Pearsall Smith offered me – which I think might be commended to writers of expository prose. First: never use a long word if a short word will do. Second: if you want to make a statement with a great many qualifications, put some of the qualifications in separate sentences. Third: do not let the beginning of your sentence lead the reader to an expectation which is contradicted by the end.
Right now GPT feels like an eerily well-behaved schoolboy who is pretty good at exams. Give it any topic or question, and it parrots a reply really well. If you teach it to do basic algorithms, like reversing a word, it does so (see the sketch below). It writes sensible prose. For now at least we have a personality dodge, in that it doesn’t seem to be able to (or want to, for specific versions of want) understand vast swathes of human experience, or be much opinionated at all. Until this too gets resolved, may many more essayists bloom!
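For anyone who wants to poke at this themselves, here is a minimal sketch of that kind of prompt, assuming the current openai Python package (1.x) and an OPENAI_API_KEY in the environment; the model name and the prompt are illustrative stand-ins, not the exact setup used for this essay.

# A minimal sketch, not the author's actual setup: ask a chat model to
# reverse a word, the sort of "basic algorithm" test described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "user", "content": "Reverse the word 'essayist' letter by letter."}
    ],
    temperature=0,  # keep the reply as close to deterministic as the API allows
)

print(response.choices[0].message.content)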
III
Is this likely to make creative work, or indeed all fellow Strange Loops, something to relegate to machines? There are areas where we’ve effectively been superseded by machines but where we still find pleasure and reward in taking part.
Christopher Hitchens, no slouch when it came to essay writing, said:
If you can talk, you can write. You have to be careful to keep your speech as immaculate as possible. That’s what I’m most afraid of. I’m terrified of losing my voice. Writing is something I do for a living, all right — it’s my livelihood. But it’s also my life. I couldn’t live without it.
The identification of what he has as a “voice”, one with a particular connection to the audience, to other humans, is critical. His essays aren’t a medium that provides a facsimile of insight to the reader, but a connection between Hitchens, in all his vertiginous, complex personality, compressed into the form of an idea and its associated offshoots, and a particular reader.
Or here, where he uses Orwell to emphasise the point that how you think is so much more important than what you think.
But what Orwell illustrates, by his commitment to language as the partner of truth, is that ‘views’ do not really count; that it matters not what you think, but how you think.
And if that is true, if writing is one of the art forms made to help us connect person to person, to connect my mental imagery and exploration of the crazily high-dimensional idea landscape to that of you, the reader, then me being an understandable entity with a personality and a sense of humour and an existence is vital.
I am not the amalgam of the things I can write, but I am that which can write the things only I can write.
In many ways the problems that folks like Gary Marcus call attention to, or the semantic apocalypse Erik Hoel wrote about, are examples of our worry that AI, sufficiently advanced, will take away the meaning of being human. If the things that come closest to expressing who we are, our art, our poetry, our creations, can be facsimiled so easily, then what value is our existence!
And the answer is that our existence has always been meaningful because of the ways in which we get to stretch ourselves and reach the heights we set for ourselves, rather than because of a purely competitive spirit that ties our effort to winning a particular race.
As the avatar of an AI says in the Culture book Look to Windward:
Some people take days, sweat buckets, endure pain and cold and risk injury and – in some cases – permanent death to achieve the summit of a mountain only to discover there a party of their peers freshly arrived by aircraft and enjoying a light picnic.
The point, of course, is that the people who spent days and sweated buckets could also have taken an aircraft to the summit if all they’d wanted was to absorb the view. It is the struggle that they crave. The sense of achievement is produced by the route to and from the peak, not by the peak itself. It is just the fold between the pages.
The joy in writing, or indeed anything creative, is the joy of bringing about something from nothing and using that something to tell the world about you. That’s the next yardstick we await. That’s our final frontier.
I’ve talked about AI as an idiot savant, but the idiocy is also a sort of forgetfulness, a lack of persistence of personality amidst the dizzying array of creative outputs it’s asked to make.
George Orwell, in the quote that started this essay, wrote about the need to write himself out of his essays. One should “struggle to efface one’s own personality” while writing, he said. And as we all know, so many decades later, his essays are enjoyable exactly because he failed at this objective. Who he is is an integral reason why we enjoy what he writes, even if who he is is known only through the cumulative output of what he wrote!
The echoes of our excellence will still stand the test of time, as they’re the template on which new sets of creativity are built, and as parts of that excellence get overtaken in ability over the next few years we will have to dig much deeper to find out where our love of human creativity actually lies.
And when it happens, as Oscar Wilde said all too well:
The public is wonderfully tolerant. It forgives everything except genius.
i don't think the issue is "AI lacking personality".
that essay is generic because "generic" is what one would expect as a response to a high-school-essay-like prompt. from our twitter interactions it looks like your opinion changed in the meantime, but just in case: https://lumpenspace.substack.com/p/how-come-gpts-dont-ask-for-clarifying