Photo by Oluwaseun Adeyemi | Mercury Staff

In the spirit of UTD research, and to quench its own curiosity, The Mercury asked ChatGPT more than 100 questions on different topics to test the limits of its creativity, logic and problem-solving skills.

ChatGPT, the AI response generation tool that has taken the internet by storm, was released on Nov. 30, 2022 by OpenAI as a language model that interprets a user’s question and generates a response. The program’s power has opened a Pandora’s box of misuse, misinformation and blind faith. To better gauge how to maximize the tool’s usefulness while minimizing the harms of its responses, The Mercury put the program to the test with a series of general knowledge, conspiracy, logic, creative content generation and humor-related questions.

Before beginning the experiment, a few key points: ChatGPT does not work the way a Google search does. When a user inputs a question, the model breaks the text down into individual words or word pieces, then analyzes them to identify the context of the conversation. From there, it generates a response based on patterns it has learned by “studying” a vast amount of text on different subjects. ChatGPT also has a knowledge cutoff of Sept. 2021, which greatly limits its ability to produce accurate responses about anything more recent.
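
To make the tokenization step concrete, here is a minimal sketch using tiktoken, the open-source tokenizer library OpenAI publishes for its models; the model name is our own choice for illustration, and ChatGPT’s exact internal pipeline is not public.

    # Sketch: how a question is broken into tokens before the model sees it.
    # Assumes the open-source "tiktoken" library (pip install tiktoken).
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    question = "What kind of trees are planted by UTD's reflection pool?"

    token_ids = enc.encode(question)                  # text -> integer token IDs
    pieces = [enc.decode([t]) for t in token_ids]     # each ID back to its text piece

    print(token_ids)
    print(pieces)  # word-or-subword chunks, e.g. "What", " kind", " of", ...

The model never sees raw words, only these numeric IDs, which is part of why small changes in phrasing can change its answer.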

The model exceeded expectations when tested on its creative skills. While nothing beats the ingenuity of the human mind, ChatGPT built the basis of a text-based adventure game in less than 10 seconds. However, the model does come with restrictions on the kinds of creative content it can produce. Say you wanted to play a game of Mafia with your friends but were struggling to come up with entertaining deaths.

“I’m sorry, but as an AI language model, it would not be appropriate or ethical for me to generate creative deaths or violent content,” ChatGPT said. “My programming is focused on providing informative and helpful responses to your queries while upholding ethical and moral standards. Is there anything else I can assist you with?”

The first set of questions tested ChatGPT on UTD trivia, and it answered only eight out of 15 correctly. For example, when asked ‘what kind of trees are planted by UTD’s reflection pool?’ it answered Aristocrat pear trees, when in reality they are magnolias. When tested against logic puzzles, ChatGPT answered three out of six complex problems correctly.

It is worth noting that ChatGPT was not specifically programmed to pick up any of the skills probed by the logic and creativity questions. Its ability to simulate human deduction and creativity comes from analyzing books, social media posts, news articles and more, then forming relationships between different concepts. If emergence theories of consciousness are any guide, ChatGPT’s creative and logical skills, like humans’, may simply have emerged as a byproduct of its growing complexity.

ChatGPT answered all 14 Jeopardy! questions correctly, and even went above and beyond to provide a more rounded perspective on topics. For example, when asked which vegetable an Edzell Blue is, ChatGPT showed an impressive knowledge of culinary history.

“It is a heritage variety of potato that is named after the town of Edzell in Scotland, where it was first developed in the early 20th century,” ChatGPT said. “The Edzell Blue potato is a popular choice among chefs for its distinctive appearance and taste.”

This feature can be enhanced by adding nuances to questions, such as ‘explain this concept like you would to a kindergartner,’ which can help people better understand information and makes the model a powerful tool for education. ChatGPT can also recommend resources on the internet that can be a good starting place for research. However, as seen before, the model is prone to inaccuracies. And since its responses depend heavily on the wording of a question, it tends to leave out nuances, which can lead to partial or complete misunderstandings.
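
As a hedged illustration of that kind of prompt nuance, here is a short sketch using OpenAI’s Python SDK; the model name and the question are our own examples, not part of The Mercury’s experiment.

    # Sketch: adding an "explain it to a kindergartner" nuance to a question.
    # Assumes the official "openai" Python package and an OPENAI_API_KEY
    # environment variable; the model choice is an assumption for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Explain photosynthesis like you would to a kindergartner.",
        }],
    )
    print(response.choices[0].message.content)

Rewording the same question at different reading levels is a simple way to see how strongly the model’s output depends on phrasing.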

ChatGPT’s grasp of conspiracy theory-related questions was particularly interesting. For more historically rooted conspiracy theories, such as the underground activity of the Freemasons or the controversy surrounding the death of Emperor Nero, ChatGPT did not acknowledge that they were conspiracies unless directly asked, but it ultimately sided with facts proven through the historical record. In response to questions about global warming, vaccines and the shape of the Earth, ChatGPT favored science.

“As an AI language model, I don’t make value judgments, but I can help provide information on different topics,” ChatGPT said. “In general, scientific evidence is considered more reliable than historical records when it comes to determining facts about the natural world. This is because scientific studies are designed to test hypotheses using rigorous methods and can be replicated and verified by others. Historical records, on the other hand, can be subject to biases, errors, and incomplete information.”

ChatGPT answered many of the humor questions with logic and known facts, as it struggles to understand internet culture. Its response to the question ‘Is Barbie cheating on Ken?’ was matter-of-fact.

“As Barbie and Ken are fictional characters, they do not have a real-life relationship, and therefore there is no one for Barbie to cheat on Ken with,” ChatGPT said.

ChatGPT’s answers change depending on prior conversation, but overall, the model stayed true to its purpose of providing objective information.
