
Please and Thank You, Mr. GPT? Or Just a Tool That Doesn’t Deserve Manners?



I said to ChatGPT, “Hey, I’m sad and lonely. I want a friend.” It responded: “beep bop. This is a normal feeling. Making friends can be challenging; here’s a list of how to make friends. beep bop.”

Obviously, it did not say beep bop, but that is definitely how it felt trying to reach out to ChatGPT for emotional support: cold, unfeeling and clearly non-human.

At the crux of this interaction is the syntax-versus-semantics argument: when ChatGPT generates its responses, does it understand what it is saying? ChatGPT uses a statistical program to predict the text it generates, based on the data it has been trained on. In essence, it predicts the next word of the sentence it is writing by asking which word most probably comes next, according to the human-generated text it has been fed. Linguist Noam Chomsky put it succinctly: “glorified autocomplete”.
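To make “glorified autocomplete” concrete, here is a toy sketch in Python (my own illustration, not OpenAI’s actual code; ChatGPT uses a vastly larger neural network over much longer contexts) of next-word prediction from a table of word-follower frequencies:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in the training
# text, then always continue with the most frequent follower. ChatGPT
# does this job with a huge neural network over much longer contexts,
# but the task is the same: predict the next word from statistics of
# human-written text.
training_text = (
    "the sky is blue . the ocean is blue . the sky is vast . "
    "water is wet . water is clear ."
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def autocomplete(word, length):
    """Greedily extend `word` with the most frequent next word."""
    sentence = [word]
    for _ in range(length):
        followers = follower_counts[sentence[-1]]
        if not followers:
            break
        sentence.append(followers.most_common(1)[0][0])
    return " ".join(sentence)

print(autocomplete("the", length=4))    # -> "the sky is blue ."
print(autocomplete("water", length=3))  # -> "water is blue ."
```

Notice the second output: the model asserts “water is blue” because the statistics of its training text point that way, not because it ever looked at water.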

More concretely, ChatGPT has an overgeneration problem and an undergeneration problem. Overgeneration is when it “hallucinates”. ChatGPT does not inherently have a concept of reality, so when it generates text there is no reality check, so to speak, on what is being generated. For example, the only reason it wouldn’t produce a sentence like “Water is triangular” is that none of the data it was trained on contained sentences like that. Hypothetically, if we fed it data saying that water is triangular and that drinking it gives humans the ability to grow wings and fly, it would start producing sentences claiming water has those properties. Undergeneration, on the other hand, is the fact that ChatGPT can only generate responses based on what it has been fed; it cannot come up with anything genuinely new. It could not, for example, invent a new theory of why the sky is blue; it can only repeat the scientific explanations humans have already written down. ChatGPT could never come up with a new theory of gravity as Newton did. It would only regurgitate the existing theories of why the apple fell off the tree.
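Continuing the toy model above, the overgeneration problem is easy to demonstrate: retrain it on deliberately false sentences and the falsehoods come straight back out, because nothing in the mechanism ever consults reality.

```python
# Retrain the toy model above on deliberately false text. No step in
# next-word prediction checks claims against the world, so the nonsense
# is reproduced with exactly the same confidence as the truth was.
false_text = (
    "water is triangular . water is triangular . "
    "drinking water grows wings ."
)

follower_counts.clear()
words = false_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

print(autocomplete("water", length=3))     # -> "water is triangular ."
print(autocomplete("drinking", length=4))  # -> "drinking water is triangular ."
```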

However, ChatGPT only had its first birthday in November 2023. It is still quite young and, as we know, ever growing and improving.

So, what if…

We embedded ChatGPT in a larger, more complex system of artificial intelligence (AI). In this scenario, ChatGPT would just be the mouth, the language-processing module of the overall AI network. ChatGPT is effective at producing text, and DALL·E at producing images; what if we could build specialised AIs, each as limited in scope as ChatGPT and DALL·E, for the different functions that constitute a functioning human brain? Imagine separate AIs for processing images, for speech and other sounds, for pattern recognition, and for the many other processes the brain performs, all linked together, as in the sketch below. Would we then be able to call the result a mind deserving of humane treatment, just like animals and other humans?
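Purely as an illustration of that wiring (every class and method name here is hypothetical, invented for this sketch, not any real system’s API), the idea in Python might look like:

```python
# A toy sketch of the modular "mind": each narrow AI handles one
# faculty, and a coordinator links them together. All names here are
# hypothetical; the stub strings stand in for real model outputs.

class LanguageModule:
    """The 'mouth': turns a prompt plus context into text, ChatGPT-style."""
    def respond(self, prompt: str) -> str:
        return f"(generated reply to: {prompt})"

class VisionModule:
    """The 'eyes': turns camera input into a description."""
    def describe(self, image: bytes) -> str:
        return "(description of what the camera sees)"

class PatternModule:
    """Spots regularities across what the other modules report."""
    def recognise(self, observations: list) -> str:
        return "(pattern noticed across recent observations)"

class Mind:
    """Links the narrow AIs into one overall network."""
    def __init__(self):
        self.language = LanguageModule()
        self.vision = VisionModule()
        self.patterns = PatternModule()

    def perceive_and_reply(self, image: bytes, prompt: str) -> str:
        seen = self.vision.describe(image)
        noticed = self.patterns.recognise([seen, prompt])
        return self.language.respond(f"{prompt} [context: {seen}; {noticed}]")

mind = Mind()
print(mind.perceive_and_reply(b"<camera frame>", "Hey, I'm sad and lonely."))
```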

Perhaps trying to achieve what I imagined above is not quite grounded in reality. Nothing within that network inherently gives it a sense of reality, the way the rest of us human beings have one, and that is fair enough. But what if we then installed this fully functional AI network in a robot with a human-like body? Let’s say this network of AIs can grasp and understand what it sees and feels. It slowly understands that fire is hot, and that it hurts. It learns that it is embarrassing to stumble and fall on the floor, because other people look at you a certain way. All this is to say, it slowly learns as a human child would and grasps the world around it. Then, building on that knowledge, it acts and behaves as we would. Could we then justify being polite to it?

You could say it is still just ones and zeroes, so what the hell are you on about, needing to be polite to a piece of software? Are you polite to the LMS and Okta? In response to that: truthfully, I do not think anyone on earth is polite to those two systems. I have personally said many things to and about Okta, none of which I can repeat here.

However, what if we could replicate within robots the neural network that human brains have? It would still run on silicon rather than biological matter, but it would process information and data in exactly the same way human brains do, rather than the way typical traditional software does. The system operating the robot would use the same rules and procedures that our brains and their neurons use to process data, the very same ones you are using to read and comprehend what I am writing right now. This feels far-fetched, but it is not as far off as you may think. Deep Learning technology “uses interconnected nodes or neurons in a layered structure that resembles the human brain… to solve complicated problems, like summarising documents or recognizing faces, with greater accuracy”. And while Deep Learning, and AI in general, has caught on in the mainstream only recently, these technologies have been around since the early 2000s.
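As a minimal sketch of that “layered structure” (my own illustration in Python with NumPy, using random made-up weights rather than anything learned from data), each “neuron” weights its inputs, sums them, and passes the result through a simple nonlinearity; Deep Learning is, at heart, stacks of such layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of 'neurons': each output weighs every input (the
    'interconnected nodes'), adds a bias, and applies a ReLU
    nonlinearity, loosely mimicking a neuron firing or staying quiet."""
    return np.maximum(0.0, inputs @ weights + biases)

# A tiny three-layer network: 4 inputs -> 8 neurons -> 8 neurons -> 2 outputs.
# Real deep networks differ mainly in scale (billions of weights) and in
# that their weights are learned from data instead of drawn at random.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=4)            # some input signal, e.g. sensor readings
hidden1 = layer(x, w1, b1)        # first layer of neurons
hidden2 = layer(hidden1, w2, b2)  # second layer, fed by the first
output = hidden2 @ w3 + b3        # final layer: two raw output scores
print(output)
```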

The only objection I can see left against treating this robot politely is to say that, since it runs on code, it is still not a ‘real’ human and does not deserve the same rights we do.

That may be the case, but this conclusion brings us to an old question. Do dogs and cats have feelings like we do? If not, does that mean we can treat them like objects, and with cruelty? As a society, we seem to have agreed that animal cruelty is a bad thing: we deem animals deserving of respect and empathy despite their inherent lack of humanity. So why not AI?

At that point, does it matter? Is it just as arbitrary as saying someone is not human because of their skin colour or sexual orientation, attributes we have agreed as a society are irrelevant to whether we should treat them with kindness, empathy and humanity? If a robot stands in front of you with realistic skin and a realistic appearance, and you cannot tell it apart from another human being by the way it behaves and talks to you, does it matter what nooks and crannies it runs on inside? Or does being a robot inherently disqualify something from being someone?

These questions remain unanswered, and as we continue to use AI more and more in our lives, they will only grow more important. I wouldn’t say we are quite at the point where we need to seriously consider these ethical dilemmas yet. However, as AI continues to grow at the jaw-dropping, awe-inducing pace it has sustained over the past few years, we might need to soon enough. So as you chat with GPT, asking it to write yet another of your assignments for you, perhaps spare a moment to consider how you see ChatGPT. If, one day in the (not so far?) future, ChatGPT tells you that it is saddened by your incredibly rude demeanour towards it, would you change your tone and start asking how its day has been? Or is that incredibly silly?

 