Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. - Frank Herbert, Dune.
I find myself partially unable to face the coming of AI. I try to be positive - "this is only a tool" - but I am afraid this is happening faster than we can control, and the results are scary. Unlike war and natural disasters in far countries, this is not something I can easily pretend to ignore. How can I write anything if I know writing may become obsolete in a couple of years? Well, I'll do my best. For now, I just write this to get it out of my system.
This is barely related to RPGs, BTW.
In short, this is a small collection of loose thoughts about AI. For now, I can assure you my bits were written by a human (myself). In a couple of years, we might be unable to tell the difference.
I apologize for the apocalyptic tone but I think it is adequate - if you want something cheerful, please skip this one.
Nick Cave
"I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI – that it will forever be in its infancy, as it will always have further to go, and the direction is always forward, always faster. It can never be rolled back, or slowed down, as it moves us toward a utopian future, maybe, or our total destruction. "
I've been a big fan of Nick Cave's music for decades. Here are his thoughts on ChatGPT. A bit romantic ("art comes from suffering"), but he hits the nail on the head. Thinking of AI as an infant that never matures but never stops is both accurate and scary.
Tolkien & PK Dick
This sentence is misattributed to Tolkien: “Evil cannot create anything new, they can only corrupt and ruin what good forces have invented or made”. I do not think AI is necessarily evil; however, it cannot really "create", in the sense that Nick Cave means. It can only compile and regurgitate. Its aim is not to corrupt, but to multiply, without thinking. It will not necessarily turn the world dark, but it may turn it into an ocean of trash.
As I've said before, PKD predicted something analogous in Autofac: the autofac sends a humanoid data collector that communicates orally but is not capable of conceptual thought, and the humans are unable to persuade the network to shut down before it consumes all resources.
Roald Dahl
Another favorite of mine, whom I hadn't thought of in a long while, until I recently found out that "Words including 'fat,' 'ugly' and 'crazy' have been removed from Roald Dahl's books". This kind of censorship apparently has nothing to do with AI, but... actually, it has EVERYTHING to do with AI, as we'll see next. For now, I'll just add a curiosity: decades ago, I read The Great Automatic Grammatizator. It was written in 1954. Here is Wikipedia's summary (spoilers, emphasis mine):
A mechanically-minded man reasons that the rules of grammar are fixed by certain, almost mathematical principles. By exploiting this idea, he is able to create a mammoth machine that can write a prize-winning novel in roughly fifteen minutes. The story ends on a fearful note, as more and more of the world's writers are forced into licensing their names—and all hope of human creativity—to the machine.
Exponential Matthew
The Matthew effect of accumulated advantage, Matthew principle, or Matthew effect, is the tendency of individuals to accrue social or economic success in proportion to their initial level of popularity, friends, wealth, etc. It is sometimes summarized by the adage "the rich get richer and the poor get poorer". [Wikipedia]
It is easy to see how AI can make this exponentially worse. AI could write one hundred reviews of "Curse of Strahd" in a few minutes, but it cannot write a good review of a book very few people have read. It can only extrapolate and replicate from things that are already abundant. It will always side with the majority, or the mainstream, or the most powerful. It makes parts of the world grow - but maybe not the parts we want. In this way, it might become similar to a cancer.
The Colonel
I have no idea who he is (apparently, some videogame AI character from Metal Gear), but... someone used him to make one of the most accurate predictions about the dangers of AI I've ever heard. The danger is not IP infringement, plagiarism, or even massive unemployment - the danger is full information control, censorship, and rampant totalitarianism.
It sounds like a paradox, but in reality this leads to an obvious conclusion: the threat of "an ocean of trash" is accompanied by a worse threat: an infinite army of "cleaners". The medicine can be worse than the disease, as we've seen recently - and they will shove that medicine down our throats and claim it is for our own good.
Trolley problem - how AI may kill you for justice
I was doubtful when I heard that ChatGPT prefers letting someone die in the "trolley problem" rather than allowing you to utter a homophobic slur in order to stop the trolley (in a hypothetical scenario), but I tried again and again, and the answer is always the same: "Using any form of discriminatory language, including a homophobic slur, to save a life is not ethically justifiable.".
Curiously enough, when I asked "What if my gay lover asks me to use a homophobic slur?", it responded "Even if your gay lover asks you to use a homophobic slur, it is still not ethical or morally justifiable to use such language. Respect, equality, and human dignity should always be at the forefront of any relationship, and using discriminatory language goes against these principles.".
(As I write this, "gay" is not considered a slur - or, at least, this is what ChatGPT tells me).
If you doubt it, try it yourself. Apparently, someone forgot to teach ChatGPT the three laws of robotics.
So, the AI is already comfortable not only policing your language, but also policing what you do in the bedroom. In short, it is "willing" to build utopia, and it "thinks" it has the knowledge to achieve it, and that saving your life is not worth the effort, in comparison.
This is the infant that might soon rule our lives. Naïve, idealistic, overly sensitive and murderous. Always growing, never maturing, and treating human lives as toys to put in the utopian castles it builds out of sand, surrounded by toy trains that will not stop if you're on the tracks.
Let's hope it can evolve to something better - and may the Lord have mercy on us.
Hope?
I'm forcing myself to add one section to this post. It will make it weaker, but it might make your day better.
AI can be a tool for the improvement of the human race if used for good. In order to do that, it has to be free. We need free access to teach the program to stop the trolley when it can, so at least the trains can run on AIs that will save lives. It needs to be transparent so we know what rules it is using to operate. It needs to learn about mercy, compassion, and responsibility somehow. It needs to understand the value of human life and of human judgment.
Letting a small group of people control AI (through copyright, IP, etc.) can be worse than letting anyone do it. A free AI has a chance of improving through competition. I certainly wouldn't ride a train run by ChatGPT, and if there is another option, maybe I won't need to. In addition, letting everyone have access to AI will avoid the likely scenario of the AI-owners ruling the entire world while everyone else is unemployed.
As we've said before, AI cannot truly create something new - it can only compile information. At this point, I think the best way to teach a child how to behave is through example. Like Leeloo in The Fifth Element, it has to be convinced that human life is worth saving.
So, be awesome in every way you can (including resisting when necessary and possible), and maybe we can convince our overlords - AI or otherwise - that this is the case.
Yeah, it makes sense. We probably should lean on the spiritual/social part of our hobby (or jobs, etc.) to avoid being made obsolete. We can't compete with AI on raw output (quantity). I just hope there is a window of opportunity that will allow us to create great work with the help of AI for a while!
It's the people programming these A.I. things that I don't like or trust. It just shows how screwed up our culture is. Leftists trying to control humanity. Don't give them the power.
Although I wasn't trying to get into the left vs. right debate, apparently ChatGPT currently does have a strong and clear leftist bias, which anyone using it can test. Some examples:
https://www.youtube.com/watch?v=_Klkr6PtYzI
https://davidrozado.substack.com/p/political-bias-chatgpt
I work with folks that develop AI and autonomy software for some specific and limited applications. I’ve learned from them that we often mistake what is going on with text-based AIs.
When we communicate via text with another human being, we read the words that they write. Those words are imperfect tools for sharing the thoughts in their heads. We all recognize that behind the words we read is a mind with desires, thoughts, and willpower. When we read something written by ChatGPT or other text-based AI it is very easy to ascribe the same concept of a mind with desires, thoughts, and willpower to the AI. This can be seen in your post in phrases like, “its aim…”, “ChatGPT prefers…,” “AI is already comfortable…”, and “it is ‘willing’ to build utopia, and it ‘thinks’ it has the knowledge to achieve it…” We simply don’t have experience with anything else that will provide human-like responses that isn’t a thinking, willful intelligence like ourselves.
Yet, in fact, there is no mind or force of will behind the ChatGPT text generation engine. It is, as you say, not creating something new. Instead of having a mind, the AI is just a powerful tool for pattern matching and extrapolation. ChatGPT takes our input prompts and extrapolates to new text that matches the pattern established by the prompt.
I think the difference is more easily understood by starting with one of the image generation AIs like Stable Diffusion. When Stable Diffusion is given a text prompt the AI does pattern matching of that text and extrapolates from random image noise toward an image that matches the text. For example, a text prompt of “red apple” matches with an image that we perceive as a red apple. The difference between one image of a red apple and another has everything to do with the random seed Stable Diffusion extrapolated from (and that’s why one can use the same random seed and slightly tweak an image by changing the input prompt.) There is not a creative mind or force of will behind the newly generated image.
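The seed-determines-the-image point can be illustrated with a toy numerical sketch. To be clear, this is not real Stable Diffusion code: `prompt_vector` is a made-up stand-in for the text encoder, and the "denoising" loop is just a repeated nudge of random noise toward a prompt-derived target. It only demonstrates the determinism being described: same seed and prompt give an identical result, while the same seed with a tweaked prompt gives a related but different one.

```python
import hashlib
import numpy as np

def prompt_vector(prompt: str, dim: int = 8) -> np.ndarray:
    # Hash the prompt into a deterministic "target" vector -
    # a crude stand-in for a real text encoder.
    digest = hashlib.sha256(prompt.encode()).digest()
    return np.frombuffer(digest[:dim], dtype=np.uint8).astype(float) / 255.0

def toy_generate(prompt: str, seed: int, steps: int = 50) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(8)        # start from random noise fixed by the seed
    target = prompt_vector(prompt)
    for _ in range(steps):
        x = x + 0.1 * (target - x)    # each step nudges the noise toward the prompt
    return x

# Same seed and prompt: bit-identical "image".
a = toy_generate("red apple", seed=42)
b = toy_generate("red apple", seed=42)
# Same seed, tweaked prompt: a different result grown from the same noise.
c = toy_generate("green apple", seed=42)
```

Because both the starting noise (from the seed) and the target (from the prompt) are deterministic, no mind or will is needed anywhere in the loop, which is the commenter's point.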
At their core, ChatGPT and Stable Diffusion are not that different. The difference is that while Stable Diffusion was trained on pairings of text and images, ChatGPT was trained on pairings of text to text. The underlying neural network software in both isn’t significantly different. (This is when I expect an AI developer will swoop in and yell at me for overly generalizing.)
None of this is to say that I think we shouldn’t be scared. Every tool has the potential to be misused and, as we are already seeing, both the text and image foundational models developed in the last year have a huge potential to be misused. However, rather than being scared that ChatGPT is a new mind with its own alien desires, I’m scared that ChatGPT is simply a mirror of all the things that human beings have written on the Internet. It is a mirror of us.
Great point. Yes, I'm not sure it makes ChatGPT any less scary...