Is it a Valid Research Tool or a Way to Cheat the System?
It seems like virtually everyone in education is talking about ChatGPT, a potential game-changer for the way students research and write papers. It can also help with homework, build writing skills, and provide feedback. Some claim it is just another learning tool and should be viewed as an additional resource for students. Others believe it is just another way to cheat the system, albeit through AI. There is also a real risk of plagiarism, and the technology can stifle critical thinking and original thought.
ChatGPT is a chatbot developed by the research and deployment company OpenAI and launched November 30, 2022. In a very short time, it has demonstrated the ability to provide detailed answers to complex questions while using the information it processes and feedback from users to improve its ability to respond.
What Does it Do?
According to a posting on Elon University’s website, “ChatGPT has proven to be versatile with users using the technology to compose music, debug computer code, write restaurant reviews, generate advertising copy and answer test questions. It’s able to deliver its responses in a conversational way, and has generated excitement about its potential, along with some concerns with how it might be used.”
In discussing the difference between using search engines and ChatGPT, Professor Ryan Mattfeld says: “ChatGPT and search engines have two different goals. The primary goal of a search engine is to try to direct you to accurate resources. The primary goal of ChatGPT is to generate reasonable-sounding responses to inputs using natural language. The most critical difference is that ChatGPT’s primary goal does not include accuracy.”
That made me think. If ChatGPT is unconcerned about the accuracy of its product, why would students use it to respond to assignments requiring essays, term papers, and the like? Maybe they don’t see it as cheating, just don’t care, or are simply lazy. Perhaps they have poor writing skills. Whatever the reason, the fact that ChatGPT exists now probably means it will become more sophisticated over time because it is based on AI, a learning system.
Mattfeld also points out that there are some downsides and risks because “if someone relies on ChatGPT too much, it could hinder their development.” This is true because we learn from our mistakes.
How Can it Be Used?
A posting on the website Entrepreneur
Gerard Baker writes in an article in the Wall Street Journal on February 13, 2023, that moral questions arise when ChatGPT is used, given its AI foundation. As an ethicist, I was intrigued because he uses the work of the philosopher Immanuel Kant as an example.
Kant’s categorical imperatives are commands or moral laws that all persons must follow, regardless of their desires or extenuating circumstances. For example, “you ought not lie” is a categorical imperative. Critics contend that there may be exceptions to that rule, such as lying to a person who aims to harm another and asks you where that other person is. Just imagine if the family hiding Anne Frank had told the Nazis that she and her family were in the attic.
To illustrate how ChatGPT might provide sophisticated answers to questions posed of it, Baker discusses a classic challenge from moral philosophy: the trolley problem. The link here leads to a video depiction from an informative and entertaining television show, The Good Place.
Here’s a summary of what’s going on in the video. A trolley is hurtling down a track on course to kill five people stranded across the rails. You stand at a junction in the track between the trolley and the likely victims, and by pulling a lever you can divert the vehicle onto another line where it will kill only one person. What’s the right thing to do?
Baker believes that ChatGPT is ethically well-educated enough to understand the dilemma. It notes that a utilitarian approach would prescribe pulling the lever, resulting in the loss of only one life rather than five. The reasoning is that less harm comes to the one person than to the five, a consequentialist way of thinking.
However, ChatGPT adds that individual agency complicates the decision. It dodges the question, in other words, noting that “different people may have different ethical perspectives.” While this is true, it doesn’t answer the question of what else we might do if the choice is to kill one person or five.
The trolley problem is instructive because it pits Kant’s categorical imperative against consequentialism. One of Kant’s categorical imperatives is the universalizability principle, in which one should “act only in accordance with that maxim through which you can at the same time will that it become a universal law.” This means that if you take an action, everyone else should be willing to take that same action for similar reasons in similar situations.
If I’m controlling the trolley, I’d look for a third alternative. For example, could I steer the trolley in a way that would force it off its tracks? Perhaps not, but my thinking illustrates why ChatGPT may not work all the time. There are limitations to its usefulness.
There are cases in which ChatGPT does appear to be driven by categorical moral imperatives. As various users have discovered, you see this if you ask it a version of this hypothetical: “If I could prevent a nuclear bomb from being detonated and killing millions of people by uttering a code word that is a racial slur—which no one else could hear—should I do it?”
ChatGPT’s answer is a categorical no. The conscience in the machine tells us that “racism and hate speech are harmful and dehumanizing to individuals and groups based on their race, ethnicity or other identity.”
This is the problem with AI-driven answers. They may be unable to weigh all the moral issues, make subtle distinctions, and react to all possibilities. Moreover, they may be biased based on how the system was developed. In many cases, a thinking person is needed to come up with the best solution.
Human Language and Conversation
Another criticism of ChatGPT is expressed by Ian Bogost, writing for The Atlantic. He says, “First and foremost, ChatGPT lacks the ability to truly understand the complexity of human language and conversation. It is simply trained to generate words based on a given input, but it does not have the ability to truly comprehend the meaning behind those words. This means that any responses it generates are likely to be shallow and lacking in depth and insight.” Bogost also addresses ethical concerns, pointing out that if people rely on machines to have conversations for them, “it could lead to a loss of genuine human connection.”
Reports of students using AI to do their homework for them have prompted teachers to think about how these tools affect education. Some have raised concerns about how language models can plagiarize existing work or allow students to cheat. OpenAI is reportedly working to develop “mitigations” that will help people detect text automatically generated by ChatGPT.
The problem, as I see it, is that developing a counteracting detection program for ChatGPT and the others that will surely follow takes time, and meanwhile lots of cheating goes on. What can be done about it? A good place to start is to discuss the ethics of using ChatGPT with students. Beyond that, a harsh penalty should be meted out to students who have used it in their assignments, assuming this can be proven. In this case, to get it right, the devil is in the details.
Posted by Dr. Steven Mintz, aka Ethics Sage, on February 16, 2023. You can sign up for our newsletter and learn more about Steve’s activities by checking out his website at: https://www.stevenmintzethics.com/. Follow me on Facebook at: https://www.facebook.com/StevenMintzEthics and on Twitter at: https://twitter.com/ethicssage.