AI Thoughts

Hi all. 

As you know, the whole world is sort of crazy right now. We have a very divisive political system, people are struggling to make rent and buy food, and, meanwhile, whole industries are being destabilized by AI. The education world is finding itself between two forces in tension: capitalist markets embracing AI in their continuous movement towards efficiency and profit, and an ivory-tower romantic vision of academia as a place of learning and human inquiry. Here are some thoughts about it, with a caveat that, while I’ve been a professor for over a decade, I’m just some guy you know. (And, for some of you, I’m just some guy, you know?)

These thoughts are mostly addressed to students because I consistently get asked by them about the ethics of AI use and whether and how its existence will impact their futures, casting a long shadow across whatever degree they receive after paying tens of thousands of dollars for the privilege. So I’ll start by addressing some ethical concerns and then move into some guidelines on how I think AI could or should be used in coursework. When used as a way to increase learning, such as offloading routine tedium so you can focus on tricky problems, it can be very powerful.

(This is a snapshot of the AI Thoughts Google doc taken Nov 23, 2025. Hop over there and comment if you want! I’ll update this until I feel like it’s good.)

Ethical Concerns

Environmental

I’m at Appalachian State University now, and I’m finding its relationship with AI sort of inconsistent. The whole campus has recently decided to emphasize climate literacy, adding a climate studies designation in the course catalog to courses that qualify (or were tweaked so they could qualify). All students must take at least one climate-literacy-designated course as part of their general education. Yet at App State, if you walk through the student union or log into the uni’s LMS, you’re immediately blasted with ads or announcements for Gemini, Copilot, and NotebookLM. We have a dedicated subdomain for AI on the web. There’s very little information about how AI use is contributing to climate change and adding more stressors on limited resources, which often disproportionately affects marginalized communities. So here are some good articles to read about it and to get a refresher for this concern (in order by date):

It seems to me that, at the very least, we ought to be having conversations about whether or how we should be using AI within a larger context. Even if we deem it totally fine for particular tasks in academic work, do we still choose to do it if we know it contributes to our climate crisis? This, of course, raises the question of how much of an impact it has and whether the contributions from a single individual matter much (a classic social dilemma).

It also seems to me that a lot of the problem is growth for growth’s sake, not meeting an actual need of the masses. It’s like if I went car shopping for just something to run errands with, and the dealer forced me into a package deal where the car I drove off the lot was routinely replaced by newer and newer monster trucks and supercars. To a dealer with a more-is-better mindset, this seems like a great service, but to me, well…I’m not sure I need a supercar to go get groceries.

Artistic Intellectual Property and Industry Destabilization

Another issue with AI is whether human creators are getting proper credit and compensation for their work. It sort of sucks to have a particular style that you’ve been honing for decades and to have an AI app take a few minutes to mimic it. Sure, humans are influenced by the artists who came before them, but there’s a massive time difference that just makes things seem different now. And the AI is doing it through some sort of algorithmic logic rather than an emotional, meaning-making human one.

Actors (both live and voice-over), authors, visual artists, audio artists (Is This What We Want?), and coders have all had beef with AI companies’ use of their data to generate content. It certainly changes creative and coding industries, adding to the massive anxiety college students already face as they near graduation. Many of my friends and former students in design or illustration are just outright against any use of AI, and I understand their reasoning. Yet I also understand a nonprofit’s desire to find or create an image to throw on a flyer. (And there are definitely interesting things being co-created with AI.)

I don’t really have answers here, but I do have specific practices I tell my students to follow (see below, but basically: credit artists if you use their stuff; compensate when you should, especially for profit; be transparent with AI use; and try to use it only when you can’t find original work to suit your needs).

Learning and Growth

Two main issues here: 1) On the ground, we’re concerned about whether students are using AI to cheat. 2) We’re also concerned about whether our students are learning and whether AI is shortchanging them of a robust educational experience that prepares them for the world. But, to be honest, these concerns are symptoms of a bigger issue: whether our academic institutions, as part of a neoliberal system (Teen Vogue has been a remarkable source of information on this, especially since the pandemic. RIP Teen Vogue.), truly serve our students well. Indeed, maybe the biggest concern for education is how AI jeopardizes the whole grades-and-numbers-focused system. In the long run, this might actually be a good thing, as it may force us to reconceive how higher ed should be structured.

Okay, so let me explain, since I guess this isn’t obvious to many students. Grades are bullshit. (And I don’t mean in the Frankfurt sense. I mean grades are a symbol for something that has been compromised.) If you were to ask people why they’re in college, one of the most frequent answers is so they can get a good job afterward. This purpose for higher education is driven by economic models that need productive workers to provide goods and services that generate profit. Most often, this means entering a system where individual humans aren’t really valued, just their labor, and only insofar as it leads to increased wealth for shareholders. In order to efficiently vet new workers, we’ve devised a rating system that is supposed to signify aptitude and expertise, but it mostly just rates students’ ability to be rated. In an ideal world, sure, an A means something substantial and speaks to a student’s learning and growth. In a cynical world (and boy oh boy, it really feels like we’re in a cynical world these days), an A means you got an A. The simulacrum of a grade is all that matters now, so no wonder we have an issue with academic dishonesty. Instead of addressing the conditions that encourage shortcutting to the A, we try to add systemic ways to surveil students so they play by the rules of the bullshit system. It’s all hand-waving; the only thing that truly matters in our hypercapitalist world is that someone is productive after they graduate. Why do you think hustle culture exists? It’s all messaging intended to cajole younger generations into enrolling, unquestioningly or forcibly, as cogs in the machine. (Cf. McKenzie Wark (2014). Losing is fun. In Walz & Deterding (Eds.), The gameful world.)

Those in education, of course, have known that we’ve been providing students a skewed service for at least a century; we’ve all read Dewey and Freire. We know that the best learning happens when students are driven by interest and are given the space, time, and resources to explore and create. But even while many educators have been doing the best they can for decades, the system in which they find themselves forces a grading/exam approach to education and crowds out real attention to students. To me, using AI to write a paper to get an A just seems like an extension of all these issues that have existed for decades. Cheating seems like a rational decision from someone trying to participate in the existing system. I still think heavy penalties should be levied against those who cheat willfully. Ironically, their act of cheating means they’ve agreed to be part of the system, and the system does not allow cheating (even though it encourages it). <- I know. Wtf did I just write?

We’re all a part of this, whether cognizant of it or not. That said, most of us and many students are here for a different reason: learning and growth as human beings. So, yeah, I’d say the biggest concern on college campuses these days is whether using AI shortchanges students’ learning. (Sometimes, I wonder if the biggest concern is actually how to curb students’ AI use to get that A, for all the reasons above, but I’ll let my optimism prevail here…)

Tl;dr: The logic is that students who use AI to do the cognitive or creative work on their assignments, like writing a paper, coding an app, or editing a video, are not learning the material. They don’t deserve the A, sure, but the main worry is that they are also sacrificing their own learning and growth.

This is getting long, so I’ll just link to some good articles:

Guidelines

Let’s start with this:

Using AI to do your work and pass it off as your own is 100% academic dishonesty.

I know the system is stupid; I just spent a whole bunch of words saying that. But forget all that. Focus on you as a whole human. As a human, you’ve got your whole life ahead of you, and I want to impress upon you that a good life is one where you become better and better at dealing with challenges, one where you’re constantly learning and growing, and one where you’re helping others to also learn how to deal with challenges.

If I see a student cheat, whether through AI or not, I am saddened. I’m not angry. I’m sad because I care about them, and they’re shortchanging themselves of the human experience and of an opportunity for growth. 

Anyway, it might help to think about what you care about and what you’re trying to learn. It also might help to think about ways to map out kinds of learning. Go look at this site, as one framework to use, and see if it makes you think of your learning in different ways: XQ Competency Navigator

With that in mind, here are some guidelines:

Do not use AI to shortcut the point of the assignment.

This seems pretty straightforward to me. Here are some questions:

  • Is the assignment to write an analysis paper on a particular topic? Then make sure you’re doing the analysis, not the AI.
  • Is the assignment to practice grammar and punctuation (like in a language course, maybe, or an intro writing course)? Then don’t use AI to do that for you.
  • Is the assignment to learn routines in coding? Then don’t ask AI to do that for you.
  • Is the assignment to paint a landscape? Then don’t ask AI to do that for you.

It may help to realize that the times when you have the potential for maximal learning are the ones where you feel discomfort and confusion. Using AI to get out of those situations is stunting your personal growth. Furthermore, if you’ve heard me (and a whole bunch of others) talk about games, you know that games are fun because you figure out how to deal with deliberately obtuse situations. This is essentially what learning is. You’re figuring out the patterns in a space that is initially unclear to you, and that pattern recognition is actually fun.

So think about whether you’re using AI to take away from that opportunity for fun. Why would you do that to yourself? (Other than time; sometimes you run out of time. If that’s what’s going on, I’d argue that that’s a completely different issue you should be working on. The solution is better time management, not using AI to complete the tasks. You’re using AI on the wrong thing. Ask AI to help you with time management.)

Think of ways AI can be used to actually help you reach the learning goals.

Writing is thinking. But sometimes AI can help you refine. For analysis writing, I usually create a bullet list of topics or points I want to make. Then I write a sentence or two for each one and continue to expand it from there, cutting and pasting if the arguments need reordering to make sense. The figures below show this process.

Figures 1-3: Screenshots of my process for writing this, captured sequentially over time. First, the outline. Second, the outline with an introduction written on top. Third, starting the expansion of one of the outline bullet items.

If you’re like me in this way, after the bullet list and initial expansion, maybe it makes sense to use AI to ask, “Hey, does this make sense?” (I haven’t asked AI for help yet, other than spellcheck. I might later have AI check argument flow.)

When you’ve finished writing your paper, ask AI to roast it. The idea here is that first you had to think about it, about how to create a logical argument that you thought made sense, and in doing so, you exercised that critical thinking that is often the point of writing the paper. Then have AI tear it apart, let yourself feel humbled, and come up with counterarguments to its critiques.

For a design project, it might make sense to have AI help you iterate on ideas, but you’re the one coming up with the ideas that you want it to sketch out, so you can see if they work. 

Look up cognitive load theory (isn’t it Swell…er?). The idea is that we have a limited space for working memory, and it helps with math and other problems to write stuff down so you don’t have to remember everything everywhere all at once. (Though, do remember to always be kind.) You can refer to the piece of paper when you need to remember something, like a constant or variable value. Once you free up your cognitive space, you can use it for the heavy thinking: the critical, logic-making, connection-building sorts of thoughts. For your academic work, if you use AI, you want it to help you remember things (such as creating a table of figures to refer to or retaining a library of sources that you’ve given it) so that you can focus on the thinking parts of the work.
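
To make the offloading idea concrete in code, here’s a trivial sketch of my own (the store scenario and all the numbers are made up for illustration): the lookup table plays the role of the piece of paper holding the facts, so your head only has to hold the logic.

```javascript
// The "piece of paper": facts live in one named place so you don't
// have to hold them in working memory. (Hypothetical values.)
const FACTS = {
  taxRate: 0.07,
  memberDiscount: 0.10,
};

// The "thinking": the logic stays readable because it refers to names,
// not memorized numbers.
function checkoutTotal(subtotal, isMember) {
  const discounted = isMember
    ? subtotal * (1 - FACTS.memberDiscount)
    : subtotal;
  return discounted * (1 + FACTS.taxRate);
}

console.log(checkoutTotal(100, true)); // ~96.3
```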

Here are more thoughts that might help you think through this:

Check AI for bias.

Most AI models give answers based on lots and lots of data found online and then try to present their best guess for an answer to a prompt. (Some AI models draw upon stuff you thought was private, but it might be worth considering everything you’ve ever done near a phone or on a computer as NOT private.) One problem with this is that AI can end up silencing minority voices or experiences. If 100 people wrote about something, and 90 of them said one thing, 9 of them said something slightly different, and 1 person said something completely different, the AI is likely to tell you what the 90 folks said. This might be okay, but it might also unintentionally leave out important and equally valid sorts of answers. (This is a problem with numbers-based, big data research in general; it tends to marginalize the outliers.) Education, at least in the US, is supposed to be for everyone, and many educators try their damnedest to make sure minority voices are included, not excluded. So ask yourself whether the way you’re using AI could be prone to bias and whether the types of biases it could be prone to are okay. Maybe disclose this stuff as you use AI.
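
Here’s a toy sketch of that flattening (my own illustration; real models are far more complicated than a majority vote, but the tendency is similar): a system that reports only the most common answer in its data will never surface the 1-in-100 voice.

```javascript
// 100 hypothetical writers: 90 say one thing, 9 say something slightly
// different, 1 says something completely different.
const answers = [
  ...Array(90).fill("the majority view"),
  ...Array(9).fill("a slightly different view"),
  ...Array(1).fill("a completely different view"),
];

// Tally how often each answer appears.
const counts = {};
for (const a of answers) counts[a] = (counts[a] || 0) + 1;

// A "best guess" that just picks the most frequent answer.
const bestGuess = Object.entries(counts).sort((x, y) => y[1] - x[1])[0][0];

console.log(counts);
console.log(bestGuess); // always "the majority view"; the outlier vanishes
```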

Use AI art only if you can’t find original work.

Do an image search first and see if existing work is out there that would suit your needs. Then make sure to credit the work you find. (This, of course, assumes you’re using the art to enhance your work and that the art itself is not the assignment!) (I find it interesting to think about how manually finding images online is basically the same first step that an AI model does before it creates derivative art.)

Vibe coding?

This past summer, I had a long conversation with Gemini to help me create an app that had been sitting in my head for about 15 years. I know I could do it on my own with a lot of trial and error, but I’m not really a coder. I mean, I understand how coding logic works, and I came up with various functions and routines, but I just don’t know JavaScript well enough to be able to actually write them without a lot of syntax errors. So, I asked Gemini to help me with that. It produced code that didn’t work or needed to be tweaked. We troubleshot together, mostly me suggesting edits and tweaking things in VS Code. When we fixed something, Gemini would weirdly also change other parts that were working, breaking them in the process. It’s sort of a dumb partner in coding who remembers proper syntax really well but isn’t that great at the logic parts. But we finally got there with a lot of hand-holding and final revisions from me. I don’t know if Copilot would’ve been better. I suppose this is an example of vibe coding. I’m not sure this anecdote belongs in this document, since I think the issue for students is that a lot of coding assignments are deliberately designed so you do learn the language well enough not to make syntax errors. I’m not in CS, so I don’t know best practices for it, but I would guess that if AI use is allowed, you would at least have to comment liberally about how AI was used.
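
If I were commenting that liberally, it might look something like this (a hypothetical sketch; the function, its bug history, and the disclosure format are all made up for illustration, so check what your professor actually wants):

```javascript
// AI DISCLOSURE: I described the scoring rules to Gemini in plain
// English and it drafted this function. I fixed a bug it introduced
// (it reset `total` inside the loop) and renamed the variables myself.
function scoreRound(rolls) {
  let total = 0;
  for (const roll of rolls) {
    // AI DISCLOSURE: this bonus rule was my own logic; the AI only
    // supplied the loop syntax I couldn't remember.
    total += roll === 6 ? 10 : roll;
  }
  return total;
}

console.log(scoreRound([2, 6, 3])); // 2 + 10 + 3 = 15
```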

Use AI as a tutor.

Having a conversation with an AI model can be extremely helpful in exploring a topic. As briefly mentioned earlier, ask for help in organizing your week so that you can stay on top of your deadlines and other obligations. But also ask it to explain things that you still don’t get after genuinely trying. Some of the texts you have to read are dense, and academics can sometimes be terrible writers (this piece included!). Read and jot down questions. Ask AI to explain the parts that are unclear. Ask AI to help plan out a series of lessons on a topic you’re interested in but are having a hard time accessing at your college or university. Use it to help you learn and build capacity to do, not to do for you. And use it as an extended, personalized tutor outside of formalized higher ed after you graduate. You’re on a lifelong path of learning.

Also, decide for yourself, each time you use AI, whether that use is ethical.

AI use is pervasive, inevitable, and unavoidable in day-to-day internet life, but actively asking an AI platform to do something needs to be measured against your own values and ethical code, keeping in mind the concerns listed in the first part of this piece.

Some disciplines have specific AI tools that make a lot of sense to use. Most of these use brute computational force to get through tedious tasks, such as folding proteins.

When using AI, become good at it.

There are a whole bunch of articles online on how to use AI in ways that enhance your work, retain your voice, don’t do the creative and cognitive stuff for you, etc. There are also tricks to prompting, and specific tools with nuances suited to specific sorts of tasks. It’s too much for me to cover here, but good use of AI is sometimes very time-consuming as you refine your prompts and keep iterating on the AI output. (That vibe coding example above took about a week.) Also, don’t be afraid to then pop the image into Photoshop or whatever and keep working on it yourself, or to edit AI text heavily so it retains your voice.

Double-check AI stuff. It hallucinates all the damn time, especially by making up fake references or confidently asserting something that isn’t true. So you actually have to fact-check it. This isn’t that much of a problem if you’re using AI to just help you refine your own ideas and/or find new avenues for your work, but you can’t rely on how it cites its sources unless you’re using a specific AI platform that is careful about information validity and provenance.

Disclose how you use AI.

Each professor will likely have their own AI policies for the course you’re taking. Read their syllabi! Beyond that, I think you should disclose with sufficient detail how you used AI. This is pretty good practice in general, similar to how you should be citing the artwork and photos you grabbed from the web to put into your slides or whatever. (Oh, you’re not doing that?? You should be!)
