Check out my first post here to read more about the namesake of this newsletter and make a copy of the input/output tracking sheet if you so desire. Sorry for typos.
Upcoming class (recorded if you can’t attend live)
How to Write a Short Humor Piece 5/17 1-3pm ET
Do you want to write funny in 750 words or less? Do you have something to say on a topic you know well? Do you crave fun, need an outlet for your creativity, and hunger for more bylines? Then, short humor is for you.
This two-hour class will teach a repeatable process for writing a short humor piece. You’ll learn how to brainstorm, find a structure, craft a strong comedic premise and title, and use the two-list system of joke writing to fast draft a humor piece. Come ready to write and leave with a new piece and a set process to write even more in the future.
Let’s do it. Let’s talk about ChatGPT and writing.
Here are some ways I’ve heard people mention using ChatGPT in the past month:
To write performance reviews of their direct reports (imagine working a job you hate to pay your ever-rising bills only to find out that it was ChatGPT who said you need to “work harder to meet baseline expectations of social standards in the office workplace.”)
To find comps for their novel-in-progress
To come up with titles for a humor piece
To create a progressive overload lifting plan
To respond to emails from friends
The main thing each of these people stressed is that by using ChatGPT, they saved time. And sure, that’s a positive. But what did they trade for that time?
[DRAMATIC VOICE] Their own brain.
A manager should struggle to do performance reviews, because it forces them to think through each of their reports’ work in the context of the organization’s goals and lay out next steps for both. It should be a challenge to find great comp titles for your novel, because it means you have to do research into your genre, the current marketplace, other writers’ voices, and compare them to your own. And if you don’t want to respond to your friends yourself, then…find new friends? Leave society? I don’t know what to do with that one.
The thinking and the challenge of choosing what to say is THE POINT. Those moments in writing when you grapple with what comes next, when you confront two elements that don’t make sense or flow naturally from one to the other, when you aren’t sure how to proceed—that’s the part your brain is supposed to do to make you a better writer, not a large language model built on the back of other people’s work.
And research is part of thinking. I get bummed out when students ask me to break down a website’s entire tone for them, along with topics and pieces that do well there. The research is part of internalizing a site’s voice and audience so that you can come up with ideas and write drafts that could fit there. It’s a key stepping stone to becoming a better writer and getting published and finding your voice.
In order to get good at something hard, something that will truly differentiate you from other people, there’s going to be a long time where it’s hard. People I consult with know I love to talk about the stages of competence, and I’m back on my bullshit now:
If you use ChatGPT to write, you will never move beyond unconscious incompetence, or, at best, conscious incompetence.
Think about that: you’re on your deathbed and you think, “I’m so glad I gained conscious incompetence as a writer.”
Does that feel good??
Since ChatGPT is built on the (stolen) work of other writers, it will never create anything truly new. This is what all these CEOs who want to fire their copywriters in favor of ChatGPT are missing. When the entire system is built on preexisting work, you will never get something unique. All the slop will sound the same. If you outsource your thinking and titles and ideas to ChatGPT, you can’t infuse that ineffable “you” quality into a piece of writing that makes it compelling.
I used ChatGPT for a humor piece exactly one time. I wrote a piece about a jacked romantasy elf and needed an “elf-esque” name for the magical realm. I asked for “names of magic elf kingdoms.” It gave me the option Lúthien's Hollow.
Now, I liked that, especially the little accent. It worked and I didn’t need to think about it. I popped it in the piece. But when I got feedback from a writer I admire and he specifically called out liking that name, I felt sick. Because I hadn’t come up with it myself.
It had seemed like a decent shortcut at the time, a way to focus on the piece’s unique structure and voice and skip the deep dive into Elven languages and conventions that would have taken…well, about ten minutes. Yeah, I only saved ten minutes.
I deleted Lúthien’s Hollow and came up with my own damn Elven Realm name. The experience made me feel ashamed and slightly uneasy, like I had put my hand a bit too close to something corrosive.
When I teach classes on how to write comedy and satire, I lay out a repeatable process for students to follow. But there are always small sections I can’t pattern exactly, because THAT’S WHERE THE UNIQUE THINKING HAPPENS.
And if you consistently skip the hard part, the part where you’re training your brain to make connections and exaggerate and see patterns, then you are never going to be a good writer. Period.
I’m not even going to get into the other objections to ChatGPT (environmental, copyright, etc.) because I’m not very well-versed on them, and frankly, they matter less to me than the threat of training myself to be a little more stupid, day by day.
Look, if you work a bullshit job (as defined by David Graeber) and have inane busywork to do that ChatGPT makes easier so you can steal time at work to create things you care about…OK, more acceptable. But you still may be losing an essential element of what makes you you, what makes your thoughts collide and form in a very specific way. Be careful. There are many other ways to steal time at work.
Not to be too YA-dystopia about it all, but you are voluntarily granting use of your brain to capitalist organizations and the government. You’re putting yourself to sleep. The point of being alive and creating is to make connections and have random thoughts and misunderstand things in a funny way and experience mind sparks that create a new idea where previously there was nothing. It’s to excite yourself so much you have to write out a bunch of jokes that moment. It’s getting notes back from people that say, “how the hell did you even think of this are you ok???”
Why do you want to contract all that out to a website? Do you hate the thought of having a rich interior life?
Writers write, yes, but before that, writers think, even if it’s about something as minor as the right Elven Realm name for a humor piece. You cannot write without thinking. Or, you can, but you won’t write anything worth reading.
It’s hard to get good at hard things. Sorry!
Honestly, thinking of all the college kids using ChatGPT to write papers makes me wish the world was flat so I could walk off the side of it. Let’s end on a fun note.
Here are two recent humor pieces and one classic piece that ChatGPT would kill to write:
“Bagels, Ranked” by Josh Lieb. ChatGPT would just rank bagels literally. It would never come up with logic like, “9. BLUEBERRY: O.K., you’ve been alive for a thousand years. You were cursed by God after stepping on a butterfly or something. You’ve seen multiple generations of your descendants grow up and live and die, painfully. You watched Rome burn. You made love to Mona Lisa. You killed Kennedy. There is nothing in this world your jaded senses haven’t experienced and become weary of. Finally, you’ve come to this.”
“We Are Mark Wahlberg’s Personal Trainers, and We’re Pretty Much Just Messing With Him at This Point” by Seth Rubin. THE SPECIFICS IN HERE. ChatGPT could NEVER be this way: “Mark Wahlberg is training. He learns ancient burpee wisdom from Adidas scholars and drinks smoothies the color of living manatee skin. Eight times a year, he releases a new movie where he plays a dad who snaps men’s necks in empty rooms inside shopping malls.”
And one of my all-time favorite humor pieces by Mike Sacks, where an office reply-all goes places no one but him could ever imagine. This comes right at the beginning of the piece and it only heightens from here: “Whoops!”
To: All Staff
11:01 AM
Subject: re: what the fuck?!
Wow. Today just ain’t my day! I’ve been told that I have more “explaining” to do, re: “the realm of the imaginary.” So here goes: I probably should have told you that for the past two years, give or take a few months, I’ve imagined myself as a talking horse and that, as this talking horse, I’ve ruled a fantasy kingdom populated by you guys, my co-workers. The 27 images I included in the first e-mail are, in fact, Photoshop montages, not actual photos. Carry on!
What are your angriest/meanest/strongest thoughts on ChatGPT and writing?
A fun mention of a four-year-old humor piece:
A 2021 piece I co-wrote with Taylor Kay Phillips for The New Yorker’s “Shouts & Murmurs” was mentioned again today, aka REAL ID FOR REAL THIS TIME:
The piece is called “Wireless Printers and Other Myths:”
Fuck Hollywood execs, as a gift for my first Mother’s Day this weekend I’m going to see Sinners by myself!!! “Hollywood Execs Fear Ryan Coogler’s Sinners Deal ‘Could End the Studio System’”
Do me a favor—if you like these newsletters, hit the heart button! This helps more people in the Substack Network find it.
ABOUT ME: My name is Caitlin Kunkel and I’m a writer, teacher, and pizza scientist. My second book, INSIDE JOKES: A COMEDY AND CREATIVITY GUIDE FOR ALL WRITERS, co-written with Elissa Bassist, will be out in January 2026.
I read that NY Mag piece this morning, and it describes so much of what I feel like I’ve been trying to contend with while teaching this semester. You captured a lot of my thoughts well in this piece. Hard to not feel like we are just sliding toward the Idiocracy future.
Couldn't agree with you more, Caitlin, especially after seeing a recent study that found a negative correlation between frequent use of AI and critical thinking skills!
I also really wish people would stop using it for research and suggesting other people do as well because it makes shit up constantly. If I remember correctly, the latest model of ChatGPT hallucinates answers more than 30% of the time? It's insane to me that people continue to trust it as a source of information. Wikipedia is a far more reliable source than this thing now, which is wild.
To end my rant: I saw a post on Bluesky a couple months back from someone who said they had asked ChatGPT to write their grandmother's obituary for them. It was around the time that I had also lost my grandmother and my family had asked me to write the obituary, which I did. Seeing that post just made me sick to my stomach. I could never pass on something that meaningful and emblematic of humanity to a machine, and I don't know if I'll ever understand the impulse people seem to have to do exactly that.