Using AI Appropriately
Humans do not stand alone. In programming, as in other arts, we do better work and learn more when we communicate and cooperate. Sometimes, however, people copy the work of others as a way to avoid their own work; such plagiarism is unhelpful and unethical. The other extreme, however, leaves you working without the benefit of other people. Finding the right middle path is one of the things students must learn in their work.
This document focuses on using Artificial Intelligence (AI) tools.
For guidance on getting help from other people, see the working with people guide.
For guidance on getting help from online sources, see the working with resources guide.
Policy
Unless specifically instructed to do so as part of a learning activity, you should not use any form of generative AI to write code for any assignment. Turning in code generated by an AI represents a violation of the academic integrity policy.
This policy exists to protect your long-term interests (to develop a set of skills and knowledge) from short-term temptation (‘I need to get this assignment turned in and I need to leave for work in an hour.’).
Generative AI as a Software Engineering Tool
Generative AI is reshaping how programmers work. Recent developments have made AI remarkably capable of writing computer code. An AI can rapidly generate code for solving small, well-defined problems, especially if the solution follows a common pattern.
However, AI is not perfect. The code it generates will contain bugs, and it struggles to produce high-quality code for problems involving complex or novel applications of programming.
An experienced human programmer can use AI to write programs more rapidly than they could do so unassisted. The developer can rely on the AI to generate basic boilerplate code while they focus on tasks the AI is less capable of doing: deciding on the problems to solve, picking the strategies to use to solve them, verifying that solutions are correct, debugging complex issues, and working with other humans.
Generative AI as a Learning Tool
The strengths and weaknesses of generative AI, when it comes to writing code, make it dangerous to rely on in an early CS class. What it can do well—generate chunks of code to solve well-defined problems—is part of what you are trying to learn. And doing much of what the AI can't do—like deciding whether a chunk of code is correct or not—relies on experience you won't have yet.
An AI chatbot is like an advanced student who is really eager to show off what they know and isn't at all worried about what you actually understand. ‘Sure, let me show you how to do that...’ It is not capable of identifying what you do and do not understand. It is not going to help you identify ways to test your knowledge and assumptions. It just wants to be ‘helpful’ and give you the answer to your problem.
However, in a CS course, your real goal is never actually to solve a particular problem. Your real goal is to develop the knowledge and skill required to produce that solution.
Seeing vs Doing
Seeing examples of how code works and common strategies for solving problems is an important first step in learning what is possible and developing a mental toolbox.
However, watching someone else solve a problem is very different from solving it yourself. Learning professionals use something called Bloom's taxonomy to rank the complexity of learning activities. Understand is a fairly low-level task; Apply, Evaluate, and Create are much more complex. Those higher levels are where you need to develop your programming proficiency.
The only way you will reach that level of skill is by practicing it. Looking at a solution generated by someone or something else robs you of the opportunity to practice applying and creating; it reduces what you are doing to mere understanding.
You should recognize that the danger to your long-term goals exists even if you use AI with the best intentions. ‘Just’ asking how to solve a problem so you can ‘see how it is done’ is going to rob whatever you are working on of much of its potential learning—even if you don't directly copy and paste code generated by the AI. This is like studying a walkthrough of a game before you play it yourself for the first time; even if you go on to play it, you've lost the opportunity to experience it fresh for yourself.
Are there acceptable ways to use AI in class?
A good heuristic is to think of an AI as a person, and ask whether your use of it would be acceptable when working with a human. This extends not only to the concern that they might try to do your work for you, making it harder for you to learn, but even to the concern that they might plagiarize from you if you let them see your work.
Questions that would be acceptable to ask a human, such as help interpreting a compiler message, or how to use a language feature, are probably safe.
A great use of AI is to ask it to explain a code sample you have been provided—using it to help develop understanding at the stage of learning where that is your goal. AI can provide line-by-line descriptions of exactly what a code sample is doing. When you ask an AI to explain code, though, it will not be able to judge your understanding the way a human tutor might, so you need to take responsibility for asking for clarification where you need it. The AI will be infinitely willing to answer those questions!
Another good use of AI is to generate example questions for testing yourself, like a study buddy with flashcards. This can be a good way to prepare for a quiz or exam. Just be aware that the AI doesn't really know the answers either, so again it is important to double-check the results.
Endnote
The role of AI in computer science—and society in general—is rapidly changing. If you think of a way to use AI that isn't addressed in this guide, or if you're not sure how it might apply, talk to your professor. Not only do we want to help you learn, we're curious about ways we might not have thought of to use AI tools!