I hate LLMs so much. Now, every time I read student writing, I have to wonder if it's "normal overwrought" or "LLM bullshit." You can make educated guesses, but the reasoning behind this is really no better than what the LLM does with tokens (on top of any internalized biases I have), so of course I don't say anything (unless there is a guaranteed giveaway, like "as a language model").
No one describes their algorithm as "efficiently doing [intermediate step]" unless they're describing it to a general, non-technical audience
what a coincidence
and yet it keeps appearing in my students' writing. It's exhausting.
Edit: I really can't overemphasize how exhausting it is. Students will send you a direct message in MS Teams that was obviously written with an LLM. We used to get
my algorithm checks if an array is already sorted by going through it one by one and seeing if every element is smaller than the next element
which is non-technical and could use a pass, but is succinct, clear, and correct. Now, we get^1^
In order to determine if an array is sorted, we must first iterate through the array. In order to iterate through the array, we create a looping variable `i` initialized to `0`. At each step of the loop, we check if `i` is less than `n - 1`. If so, we then check if the element at index `i` is less than or equal to the element at index `i + 1`. If not, we output `False`. Otherwise, we increment `i` and repeat. If the loop finishes successfully, we output `True`.
and I'm fucking tired. Like, use your own fucking voice, please! I want to hear your voice in your writing. PLEASE.
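For the record, everything that paragraph labors to say fits in a few lines of actual code. A minimal sketch in Python, written by me (the name `is_sorted` and the non-decreasing convention are my choices, not anything from a student):

```python
def is_sorted(arr):
    """Return True if arr is in non-decreasing order."""
    # Compare each element with its successor, exactly as the quote describes.
    for i in range(len(arr) - 1):
        if arr[i] > arr[i + 1]:
            return False
    return True
```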
1: I made up the example out of whole cloth because I haven't determined whether there are any LLMs I can use ethically. It gets the point across, but I suspect it's only half the length of what ChatGPT would output.