A study out of MIT on how ChatGPT is eating our brains made the rounds this week and last. First came the folks arguing that the study showed ChatGPT destroys critical thinking. Second came the folks rightly pointing out the limits of the study, including its small sample size. Third came the folks pointing out that with any new technology it’s not about whether you use it but about how you use it.*
*AI Disclosure: I used perplexity.ai to find additional commentary on the MIT study. It did a great job of finding real articles and organizing them by theme, saving me lots of time throwing random queries at Google and getting irrelevant results. One (of many) examples of AI-for-the-win.
In general, the study seems to show what we all know: untrained use of (any) technology doesn’t result in great outcomes. As I’ve argued before, if you don’t teach people how to drive cars but then let them loose behind the wheel you’ll end up with quite a lot of harm. Generative AI is the same way.
Using AI Like an Intern
Rather than rotting my brain, AI has facilitated my writing and podcasting workflows in pretty radical ways. The key, I’m finding, is to think about each use-case in 4D (scroll down to jump to the 4D framework I use).
Generally speaking, my use cases boil down to a pretty simple rule: if I would be comfortable outsourcing this particular thing to an intern, I’m probably comfortable outsourcing it to AI too.
Both interns and AI require supervision and both require fact-checking, but there’s not a lot of moral difference between the two (and there might even be a moral argument for AI as I lay out below).
Below are four use-cases for AI in my own writing workflow. You may have more and I would love to hear them.
1. Custom GPT + Voice Memos = Draft Magic
First, I finally paid for a ChatGPT Plus subscription—$20/month—and created a custom GPT trained on my Radical Moderation book manuscript (about 250 pages). (I’ve since switched to Claude; more on that below.) Eventually, I’ll add more of my favorite Substack posts too, but the book alone provides a solid baseline.
With this in place, I can use voice memos on my commute or while doing laundry or wherever else I happen to be to capture my thoughts—like this post—and then I run the transcript through Claude to clean it up. Doing my thinking while driving or doing yard work allows me to save my mental energy for editing and revision later. It also allows me to reclaim some unproductive and non-transferable time in my day (in this case, my commute), which gives me more time to do things like relax or play with my kids.
This shift has saved me at least 10 hours a week. Instead of using that time to write rough drafts, I’m using it to revise and wordsmith when my brain is fresh. Drafting used to burn me out before I even got to the editing phase. Now, I still spend a lot of time revising the AI-assisted draft, but I have a solid head start.
That said, even with my voice embedded, ChatGPT still flattens my style a bit. It adds weird jokes or little internet-y asides that don’t feel like me. So I still have to revise it. But the foundation is there.
I updated this post after returning from a conference where a colleague recommended Claude over ChatGPT for writers and researchers. He was absolutely right. Claude produces better outcomes and less flattening, so that’s my new go-to for transcribing voice memos into Substack drafts. I have Claude Pro, and the rest of what’s above applies: create a Project (Claude’s version of a custom GPT) with your own content and run with it.
2. Suggesting Improvements
Claude also does a good job at suggesting improvements to make arguments stronger. I typically don’t agree with all of them, which would be the case with any beta reader I asked for advice. But having a beta reader on demand in your back pocket is a pretty powerful tool for writers who need immediate feedback and who don’t have a paid research assistant or developmental editor waiting in the wings 24/7.
And while I haven’t tried it yet, according to colleagues it’s good at copyediting as well.
3. Podcast Workflow Win
I’ve been using Descript to edit the podcast We Made This Political that Lura Forcum and I co-host. This allows me to edit both audio and video relatively painlessly (there’s still admittedly some pain).
Not only can the AI in Descript auto-level volume, it can also align audio and video, apply built-in audio and video templates, and auto-generate show notes, time stamps, and chapters. It’s incredibly helpful. It saves me hours of work that I can now devote to writing or idea generation or hanging out with my family. I have very few concerns about farming this work out, though I always check its work just like I would check the work of a podcast producer (at least for the first couple of episodes). Since these changes don’t affect the creative content at all, I outsource as much as I can to the AI and grudgingly do the rest.
Where I Don’t Use It (Yet)
I don’t use generative AI for creative work or idea generation. I also wouldn’t use it to write a full draft of something that wasn’t already a full transcript. Doing so would cross a moral line for me in terms of owning my own ideas and making sure they’re authentically mine in a meaningful way.
I also don’t really like it (yet) for crafting content for social media. There’s a cadence and rhythm it produces that feels kind of artificial to me. Since drafting a quick LinkedIn post doesn’t take me very long I haven’t found as much use here as I thought I would. I don’t have moral issues here the way I do with using it to draft from scratch; it’s just a use-case that hasn’t worked that well for me so far.
In general, I wouldn’t use AI for anything I wouldn’t feel comfortable asking a research assistant to do. In the same way that it would feel inauthentic and dishonest to ask a research assistant to draft an entire essay for me on a specific topic, I wouldn’t use generative AI for that either. Nor would I ask a research assistant (or AI) to generate a personal thank-you or a heartfelt email to a family member.
But on the other hand, anything I would feel comfortable delegating or farming out to another human being I’m perfectly comfortable farming out to AI. And of course I would check both human and AI work in either case, because of course either can make mistakes.
Some Additional Thoughts on Equity
I’ll add one point that I don’t see many people talking about: there’s a huge gulf between the haves and have-nots in a range of knowledge work spaces. The academic who has teaching assistants to help with grading and graduate assistants who help chase down citations is in a completely different workload space compared to the academic who has to grade everything by hand and who has to look up every citation for every published work until their eyes bleed (ask me how I know).
As one of the latter academics, I’ve written two books where I had to do the bulk of the checking, revising, editing, and re-checking by myself. Even something like fixing errors in my Zotero citations fell to me. For my last book this meant manually checking over 600 citations. The time I spent doing all this wasn’t real intellectual work. It was administrative work that took me away from my family and prevented me from generating new ideas or following up on existing ones. It was exhausting and demotivating and totally unnecessary. But my university didn’t have research funding to support that work, so it fell on me.
This workload gap is true for other kinds of authors, not just academics. People who can afford interns or staff members or TAs can offload the administrative work onto those folks. People who can’t have to make do.
Being able to use generative AI tools like an intern helps (at least minimally) bridge the equity gap between those who have developmental editors and PR firms and research assistants and those of us who do not.
Thinking About AI in 4D
When I think about a particular use-case for generative AI (or any other part of my workflow), I can use my radically moderate four-dimensional framework to help me assess how I can and should proceed.
1D: Individual Values
Does this specific AI use-case align with my own values? How would I feel about the final product? Proud? Or mildly ashamed?
2D: Social Networks and Community Norms
Would I feel good talking about this use of AI to a person I respect? What are the norms in my intellectual space? Can I legitimately defend this particular use-case to my intellectual/creative community? Why or why not?
3D: Pits and Tradeoffs
What are the pits (aka dangers) I want to avoid? Whether the danger is intellectual laziness, sounding inauthentic, or introducing errors, tradeoffs exist. They’ll be different for different kinds of AI and different use-cases. Some tradeoffs can be mitigated with sufficient care and attention. Others maybe not.
4D: Time
What would using this tool mean for me and my creative processes over the next year? Over the next 5 years? Are there critical skills that will likely rust, or am I really just saving time? What opportunities would this use-case open up for me? How might it interact with my other goals over the long term? Can I create a feedback loop where I can check back in on how a particular use-case is working? (This is what I did with social media, and I ultimately decided it wasn’t working for me, at least not yet.)
Once I’ve run through these questions I have a better idea of whether a particular AI tool or use-case works for me in my particular landscape. Where I’ve landed depends on what I’m using it for: I have no issues whatsoever with allowing AI to generate clips from our podcast episodes or having it transcribe voice memos. I wouldn’t use it for research without a lot of oversight, and I wouldn’t use it much at all for idea generation.
AI Resources I Use and Like
Finally, here are the AI resources, as of June 2025, that I use and like. I’m sure this will change quickly as new tools come out, but these are the ones I reach for almost daily:
Claude for cleaning up transcripts, suggesting revisions, and helping organize very early drafts of ideas. I’ve only scratched the surface with Research mode, but so far it’s been a really powerful tool for finding sources and deepening my analyses.
Perplexity.ai for AI plus internet search ability. It’s a great first step for most research, especially when you need a quick lay of the land.
Elicit for academic research, including finding trends in research over time. This looks like it’s gotten even better since the last time I used it, which is great.
Descript for audio and video editing for the podcast, including creating clips and other promotional material to share on social media.
ChatGPT for daily, family, and personal stuff. I use this less now that I have Claude, but I still find it useful for suggesting a recipe with a certain amount of protein or drafting a minimalist packing list for a family of five for vacation or suggesting start-up and shut-down workflows to help me remember to check things off my lists. It can also mimic a personal online shopper, though Perplexity.ai can do this too.
Your Turn
Let me know what you think! If you’re a writer using AI in some other brilliant way—please tell me! I love hearing how others are experimenting. And if you think I’m wrong about any of the above, I’m all ears. Given how new this tech is, we’ll all be working through a lot of these moral questions together.
If you’ve got thoughts, questions, or disagreements, drop them in the comments below. As always, if you like what you read please subscribe or share it with someone who might find it helpful.