More on AI

Winter term began on January 3 at Knox – coincidentally, the day I tested positive for my first Covid infection. My students have been diligently working asynchronously on a number of fronts since then, including reading several articles about generative A.I. After each article, students have filled out reflections in Google Forms, reflections I had hoped would become the basis of a discussion once I returned to campus. Since I haven’t yet been able to do that, I wrote up an explanation today of why I was having them read so much about A.I. in a course about the history of gender and sexuality in the United States. It strikes me that this may become my syllabus statement on A.I. in the future, so I’m sharing it here in the spirit of all of us crowdsourcing multiple ways to respond to the challenges of the new generative-A.I. environment.


Why We’re Thinking About AI (Winter 2024)

It’s been about a year since ChatGPT launched and began to make an impact on higher ed. Since then, there’s been sustained conversation among educators about “what to do about it.”

Here’s the issue: generative AI apps like ChatGPT scrape the internet for people’s words and add them to an enormous body of training data that the apps draw on to answer users’ questions. Imagine you’re looking at a website and have the ability to copy every single word on every single webpage in seconds. That’s scraping. When machines scrape the internet for words, they simply gather as many as they can – they have no consciousness with which to evaluate what they’re capturing. This is why generative AI apps need human moderators to go through the scraped material and weed out offensive and violent content. Nor do the machines that scrape up words ask the original authors for permission to take them – they simply copy them. Many people argue this is stealing (1). And when a user asks an app like ChatGPT a question and gets a response generated from all the scraped information the machine has gathered, the response is in ChatGPT’s words, not the user’s. Using the words of ChatGPT or other generative AI apps and passing them off as your own is plagiarism.

Because of this, many educators are banning the use of ChatGPT and apps like it as a matter of academic integrity.

I do think that integrity is on the line here, but I think the ethics we need to think about when using generative AI are even bigger than the question of whose words belong to whom. That’s why I’ve asked you to read about the people who moderate the content that ends up in generative AI databases, and to think about the global politics and economic pressures involved. That’s why we’ve read about the environmental impact of generative AI. That’s why we’ll read about what large language models like ChatGPT actually do to give you an answer to a question. I want you to think clearly and holistically about what this technology means for the world in which we live.

Ultimately I can’t stop you from using generative AI if you choose to. But I do want any decision you make to be a considered and informed one.


(1) Indeed, the New York Times is suing both OpenAI and Microsoft for this reason in the Federal District Court in Manhattan. See Michael M. Grynbaum and Ryan Mac, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” New York Times, December 27, 2023.
