The rise of artificial intelligence is sparking discussion among educators about how to proceed in the classroom — work with AI or against it?
ChatGPT, the newest trending AI chatbot, has restarted the controversy over AI in education. Reaching 1 million users in its first five days, ChatGPT is a language model designed by OpenAI and pretrained on text, which students have been using to generate essays, annotations and answers for take-home exams. Students have been using ChatGPT for literature, history and writing code, often with shocking accuracy. Educators like Barbara Ashmore, director and professor of instruction at the Teacher Development Center, believe artificial intelligence can have dangerous repercussions if used purely to earn a decent letter grade.
“If they’re worried about a grade, that’s not gonna help them get it,” Ashmore said. “It’ll help them meet the deadline, but the quality of the paper isn’t gonna be there. Yes, the grade can be important, especially if you want to go to graduate school, or get into medical school or law school or whatever your career goals are. They usually require a [high] GPA, but that GPA is empty if you haven’t done the work to learn. You haven’t done the work to learn how to think.”
Ashmore has been talking with other educators, recognizing that advancing technology will require instructors to enrich assignments to distinguish genuine responses from ChatGPT’s output. Ashmore suggested that creating more specific grading parameters and providing more one-on-one interactions will diminish students’ ability to submit automated answers while also expanding the curriculum.
“It’ll make assignments a little richer, maybe a little deeper [and] require more thought and critical thinking,” Ashmore said.
UTD has said that the use of ChatGPT or any similar AI program in lieu of genuine student work will be considered plagiarism and a violation of the Student Code of Conduct. Specifically labeled under academic dishonesty, the disciplinary committee plans to treat first-time offenders with a warning and a zero on the assignment, with further punishments to be assessed as cases occur. Shaquelle Massey, director of the office of community standards and conduct, wants to take an educational approach to the discussion of AI technology and student misuse.
“The way that I relate to students cheating is essentially [an] umbrella,” Massey said. “There are different forms of cheating or different stems as I will put it. And so the stems could be plagiarism, fabrication, collaboration, collusion — but cheating is essentially that catch-all. So using AI would be subject to our academic integrity policy, and it would be in the form of cheating.”
OpenAI released its latest classifier on Jan. 31 to help educators distinguish between human- and machine-generated essays. However, the classifier correctly identifies AI-written text only 26% of the time.
On the other hand, some educators believe that AI can be a beneficial tool in the future of education. Not only can it serve as an assistive tool for students to create more enriching material, but it can also help educators engage students.
While educators and disciplinary committees warily prepare for the wave of students who will be using resources like ChatGPT for their essays and reports, AI’s use is not completely in opposition to education. It may open a transitional period for new methods of teaching in the future.
“We don’t know how artificial intelligence is going to develop and mature,” Ashmore said. “I suspect it’s going to become very sophisticated, and it will become a learning tool. That’s what a calculator is: it’s a learning tool. You want to get to the concepts and the theories in math to be able to apply them and use them in new and unique ways. You don’t want to be held up by the arithmetic of mathematics. Let’s use that information. Let’s apply it. Let’s synthesize it. Let’s evaluate it and maybe create something new with it. And that’s what we do at university.”