If You’re Avoiding Artificial Intelligence, You’re Part of the Problem

 


Generative AI Makes a Good Rubber Duck. Photo © Armando Are


A conversation from a recent conference stands out in my mind. The speaker was an app developer and fellow usability expert. When I explained that I research conversational AI, she said, “I hate that stuff. It’s evil. It will be the death of critical thinking and creativity.”
 

I hear this a lot, as I've been working on AI since 2019, first as an anthropologist and then as a practitioner who became obsessed with conversation design. I'm no programmer or engineer, although I happily work with them as part of the Cullen College of Engineering. However, I can attest with some confidence that I understand how mainstream consumer-level AI works. I can spot a ChatGPT-generated passage of text quickly, as it uses predictable formatting and grammar. I can tell you how secure your data is (briefly: never enter anything confidential into a large language model interface). I can tell you the likelihood of a human reading or listening to your input.

I can also tell you that you need to familiarize yourself with the strengths and weaknesses of tools like OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and Anthropic's Claude, because there isn't a subject among the diverse areas we research and teach that hasn't been touched by them. As you have likely lamented, your students already use these tools to tackle your coursework and will ultimately draw upon them throughout their careers. You owe it to your students to ensure they're using them well rather than, as my colleague feared, as a replacement for critical thinking and creativity.

A good start is walking them through how it (I'll use ChatGPT as an example) answers your class's core questions and prompts, then building a group discussion out of what it gets wrong and right. (On a more cynical note, this also shows your class that you know what answers GPT will produce. Most students don't think to ask GPT to rephrase its default responses, and its language will become familiar to you as you encounter it.)

But also show your students that GPT is a great tool to think with, primarily because of the sheer volume of written human work it draws upon. In other words, it can be an effective brainstorming buddy. In the tech industry, it's not uncommon to call the process of bouncing ideas off an inanimate object "rubber ducking." In those scenarios, as with GPT, it doesn't necessarily matter whether your conversation partner gives you accurate feedback (or any feedback) so long as the process helps stimulate ideas. ChatGPT can be a decent rubber duck.

I have assigned my share of long-form written assignments over the years and taught writing for a while. Most of my students experienced stress about starting their essays and struggled with overall organization. Over the years, I tried several approaches, from advising them to write their introductions and conclusions last to requiring them to submit outlines, but the anxiety remained common.   

ChatGPT can indeed help with writing anxiety in a way that is legitimate and constructive. Letting students see one potential approach to their topic can ease anxiety about what theirs should look like. Some of you may object that I'm being utopian here and that students will merely copy GPT's output rather than be inspired by it. Yes, some will. But if it's clear you're familiar with these tools, and you help your students learn to use them in an assistive capacity (just as technology already assists us in so much of our daily workflow), they will be less likely to do so. Let me reassure you that the technology for identifying generative AI plagiarism has improved dramatically in the past few years and will continue improving. But you may recall the 2023 story about a professor who failed most of his class because he asked ChatGPT whether they'd used GPT to write their essays. The takeaway: if you don't understand how generative AI works, you're at the mercy of tools like Turnitin when assessing whether your students have used it.

Beyond writing, what about students in quantitative fields who generate solutions to problems using ChatGPT but don’t understand how the math or the code works? This is where I look at you in friendly exasperation, dear colleague. Why are you asking them questions they can answer and get credit for without understanding? (I’m sincerely wondering.) This is where we teachers need to engage our powers of critical thinking and creativity, adapt our material to the modern age of computing, and pose questions that require our students to reflect on their understanding. Remember “show your work” from K-12 math classes? Let’s do “show your work” for everything. Incidentally, this is most effective if you check what the technology is capable of relative to your assessment materials. For example, generative AI is currently ineffective at responding to prompts about the contents of many texts. It depends on the text, of course, and if an individual source has been available for a long time, then GPT and Gemini can probably handle your quiz question. But please check.  

I know these gaps in our knowledge don't feel good to academics, who are used to being subject matter experts and are irritated to find themselves (ourselves) out of our depth about something that is now part of every class we teach. But again, we owe it to the students we're training to approach this technology with the same curiosity that made most of us stellar students, even if we resentfully believe we shouldn't have to. This, incidentally, is the same feeling a student may have about your homework assignment, which helps explain why they'd take a shortcut.

One of our main jobs is teaching students how to think, and that is still urgently our mandate. However, many don't know how to use AI constructively; instead, they use it to save time, speeding through work they see no benefit in understanding. Our job is to teach them to read generative AI output with a critical eye and to be prepared to edit that output extensively.

Societies with high internet use will continue to see more prompt-generated content in every format: images, audio, and text. We best serve our students by teaching them how to recognize, respond to, and use prompt-generated content in their future careers. Instructors should understand sycophancy and hallucination in AI and teach students about both. They should also teach them to scrutinize the language and check the references proposed by tools like GPT; because of how these models work, their citations are often amalgamations of several sources rather than anything that exists in the world.

As a viral article on LinkedIn asserted, "AI Won't Replace Your Job, but Someone Who Understands AI Will." Before generative AI and large language models, people still reproduced unoriginal thoughts; for example, although I wrote this piece myself, I am making an argument frequently made by others and merely adapting it to a pedagogical forum. Most of us already operate like generative AI in our daily work (whether we use the tools or not), synthesizing what we have learned from others and creating new content from that information. Regardless of how technologically sophisticated we feel ourselves to be, we must lead with something other than fear and resentment and train our students to use GPT as a starting point, not the final product.