OpenAI’s ChatGPT is the most advanced chatbot yet: it can take an inquiry and produce a very readable, impressive summary of facts, deductions, and conclusions that passes in many circumstances as original work by a student.
Is the use of ChatGPT ethical in an academic setting?
We all have gut reactions to this question. Some insist that it is unethical: it makes cheating too easy, and because it produces a somewhat different answer each time the same question is posed, it is difficult to detect. Others insist that chatbots are just the next step in technology, making us more efficient at getting information by distilling billions of pieces of information and presenting them to us in accessible, summary form.
With my background in technology, I always want to see technology advance and believe there is no limit to its capabilities. As a business ethics instructor, though, I try to look at the unintended consequences of advancing to tools that free us from practicing some skill. For instance, we learn how to add and subtract, but few of us use that skill outside of school for all but the simplest problems we encounter. Surely, the cashier (the live-person type) is lost if the register doesn’t work. This gets more complicated as business owners depend on those registers to keep inventory and monitor customer buying patterns for future sales, other tasks humans no longer do. Navigation apps have freed us from reading maps, to the detriment of knowing where we are geographically, and many times not even the general direction. We rarely get to laugh at the old “men don’t ask for directions” joke anymore: not only do all of us just let navigation tell us where to go, but the sense of general direction has been lost. Instead, we numbly follow the voice directions without that image of the area in our heads.
But are these just skills that are no longer needed by our species? Our cell phones, after all, can do many things better, faster, and more accurately than we can. And this purportedly frees us up to do more important or creative things with our lives, or to relax more and enjoy a higher quality of life.
Is the use of chatbots just another one of these? Shall we first learn how to research a question, read sources, and produce reports on our findings in school, only to never use those skills again and leave all of that to an AI program? In the math model, we learn how to manually add and subtract only to set that aside and use calculators, which allows for more complex tasks without the burden of remedial necessities. Can we first have students create original works, then let our chatbots create them later? At this moment, we can see limitations in the technology: it can go wrong, pick up unreliable or even patently wrong material, and present it as fact. It will continually improve, but its lack of judgment, independent critical thinking, and creativity will, for the foreseeable future, relegate these chatbots to finding facts (or fiction) and distilling what they find into a human-consumable form. We must evaluate the output and judge whether it is true, reliable, and useful for the question we are trying to answer.
We have arguably lost the skill of using a library to research a topic. We Google it. Then we do the source evaluation, deduction, and synthesis to come up with original thought. Our grandparents might think that the loss of that skill is a crime. How could we not know how to look things up using indexes, encyclopedias, countless books, and periodicals? But you and I are comfortable doing that research at home, with Google and other sites by our side. And we might pride ourselves on our ability to get to the truth (or, “truth enough”).
What, then, is the role of schools and universities? And what can be said about the ethics of using AI instead of original work developed in our own brains? Let’s approach the question in a values-based framework where we consider the issue, the alternatives, the stakeholders, and the values.
The issue is that AI chatbots can produce papers on any topic, automatically researched and somewhat uniquely worded, in a way that circumvents detection. The alternatives include doing nothing (allowing chatbots to be used freely), recognizing the technology and putting constraints on its use, or banning it completely. Our primary stakeholders in academia are the students, the professors, the hiring companies (this is key!), and the donors.
Values: Take, as an example, BU’s mission statement. We are “committed to educating students to be reflective, resourceful individuals ready to live, adapt, and lead in an interconnected world. Boston University is committed to generating new knowledge to benefit society.”
Clearly, just as with cash registers and map navigation tools, AI assistance is here to stay. Closing our eyes to it or banning it outright does not educate students to “live, adapt and lead.” And because the nature of chatbots is the synthesis of existing knowledge, they do nothing on their own toward “generating new knowledge to benefit society”; that part of the mission still falls to us and our students. Alternative three, the outright ban, is out. Once we admit that our students will (and should) interact with AI systems, it becomes incumbent on us as educators to teach them the usefulness and limitations of the technology, and to ensure they know how to write without leaving responsibility for a good assignment to the chatbot. We teach how to add and subtract. We teach geography and how to read a map. So we need to teach how to formulate a paper, research the topic, and produce original content. Once students have mastered that, we also need to teach them how to use AI to enhance their work product. We expect a doctor to know how to diagnose a patient, but all of us also expect the doctor to take advantage of AI systems to improve their accuracy when possible.
It is necessary for us, as educators, to embrace the technology and to design assignments that eliminate (or at least reduce) the possibility of using chatbots while we are teaching a skill, then to allow and encourage their use when they can improve students’ work product and teach them how to use such tools in a workplace setting.
Let’s not be horrified and shocked by these newfound capabilities. Let’s put our energies, instead, into discovering how best to use them to society’s benefit and arm our students with the power to use them for good.
The answer, then, is yes: ChatGPT’s use is ethical in the context of our university’s mission. Let’s stop debating that and accept this inevitable future. It is now incumbent on us to find, teach, and encourage uses of it that improve society.