AI and Moral Panics: Navigating New Tech in 4D
At my university, as at many others, there’s been a lot of conversation lately about the use of AI by students. I won’t pretend that I don’t have concerns about this technology, particularly the way college students are likely to use it, but I also think that society tends to have a bit of a "moral panic" whenever a new technology emerges. It's like the reaction to cars or telephones when they were first introduced. New technologies often dramatically change our lives, and society’s initial reaction often falls into a polarized binary between euphoria and terror.
Over time, though, our expectations settle into the middle. We figure out what the trade-offs are and how to capitalize on the promises and avoid the pitfalls. This process is iterative and involves a lot of trial and error, but we are actually pretty good at it, if we give ourselves enough time. In general, we don’t give ourselves enough credit for managing it. Sure, there are bumps along the way, and no, it hasn’t always been smooth or painless. But we do in fact learn from the past and figure out ways to manage risk and extract benefits.
Models of Innovation and Adaptation
When cars became widespread, they brought a new wave of car accidents and pollution, but they also saved lives by enabling quick transportation to hospitals, expanded social connections, provided greater economic mobility and changed social and political landscapes. These are the kinds of trade-offs that come with any significant technological shift.
With cars, we figured out some important things along the way.
First, we adapted the tech itself to make it less dangerous: seatbelts, crumple zones, airbags, and now driver assistance and driverless cars.
Second, we adapted or created rules around the technology’s use: where you can drive, how fast, and how skillfully, rules around who can use it (licensed drivers), and so on.
Did those two sets of adaptations completely eliminate the dangers of cars? Of course not.
Would we be better off with more shared-use cars, buses, and more high-speed rail? Of course.
But people are more free, more mobile, have greater economic opportunities, better access to healthcare, and much greater ability to explore their shared landscapes because we have these metal things that take us from one place to another.
More to the point: we didn’t have any choice but to adapt. Given that cars rolled out in a liberal democratic society with no central controls over new innovations (and thank goodness, to be honest), everyone participated in a grand and ongoing experiment about the tech itself and how to use it.
AI, Moral Adaptation and the Amish
AI will follow a similar trajectory. One option is to panic. We could ban all AI use in academic settings, as some of my colleagues suggest. But as many have pointed out, including those who work closely with AI, this approach isn’t really practical. We can’t ban AI any more than we could have banned cars when they first came out.
Banning AI also isn’t sound pedagogy. It’s a bit like trying to ban calculators; they have their place in some classrooms but not in others. What we need to do instead is think critically about where and how AI can be useful.
As always with these things, there’s a radically moderate approach that can help us make these difficult decisions in more thoughtful and intentional ways.
Oddly enough, the Amish give us a nice framework for how to think about new technologies.
Far from eschewing new tech across the board, the Amish have a series of decision-rules around how and when they use what kinds of technologies. The basic decision-making framework revolves around whether the tech helps pull the community together or is more likely to fragment it.
For the Amish, cars fall into the “fragment” category and most communities do not allow members to drive cars, though some will allow outsiders to drive Amish members around, as we found when we hired Amish roofers a few years back.
Phones fall into the “sometimes useful for community wellbeing” category, so most Amish communities do not allow them in the home but allow their occasional use for business or other communal purposes.
Each Amish community makes its own decisions about each tech, starting with their own particular value structure and then working out to investigate likely harms and likely benefits.
The Amish decision matrix ends up being very close to the radically moderate framework for decision-making I play with a lot here.
Because radical moderation starts local, decision rules will start with a close look at your local context. You need to know where you’re located on your own social, moral and political landscape, what challenges you’re facing, and what your values are. It means looking at what’s actually happening in a particular environment.
Your foundational decision-rule will probably not revolve around community cohesion like the Amish rule does, but it helps to identify a similar rule that works for you. It might differ depending on the tech. My decisions around social media in my home life are different from my decisions around AI or other tech in the classroom because the social context is so different. But in all those cases, you want to drill down to what’s the most important thing for you, in this particular context, before you can assess whether any given tool will help or hurt.
When thinking about AI use in a classroom setting, instructors can ask: what is the goal of a particular course or assignment? What skills are we trying to cultivate in students?
In a creative writing class, it might make sense to limit the use of AI for drafting but allow it for brainstorming or revisions. In a political science class like the ones I teach, an instructor might decide that students can use AI research tools like Elicit to find sources but that generative AI tools like ChatGPT can only be used for revising drafts, given their tendency to hallucinate sources. Some faculty I know use various AI tools to show students how algorithmic bias works, while others help students see how to tailor their own queries to get better AI outputs.
Each faculty member will need to assess their specific learning outcomes and tailor use-cases accordingly. This means that a single faculty member may have different AI policies in different courses precisely because their course goals are different in each one. And that’s a good thing! Variation doesn’t mean inconsistency. Sometimes it actually means that we’ve figured out how to clearly apply our values to different contexts in a principled way.
One advantage of this approach is that it forces faculty to get clear right off the bat about what they’re trying to get students to learn. This is not something I was good at as a young professor back in the day. It’s also a good practice outside the classroom in our daily lives.
Radically Moderate Tech Decision-Making: Amish 2.0
We know that most of us can’t sequester ourselves from the modern world in a community like the Amish. But we can all make Amish-like decisions about what tech we allow into our homes and our classrooms. We still have a lot of hyperlocal decision power and we should use it.
In the weeks to come I’ll provide a more detailed radically moderate framework and flesh out how this all relates to living in 4D. But the six steps below, broken into pre- and post-decision, provide a good start for anyone struggling with when and how to incorporate new technologies into their homes, their workflow, or their teaching:
Start local: know what’s going on on the ground. What are your values and goals in this situation? What are you hoping to get out of any new tech? What specific use cases make sense?
Understand the social/political context: Who are the stakeholders in your shared landscape? Who is affected by your decision? Who isn’t? What local norms are at play and are they helpful or harmful to your overall goals? How might this tool interact with other tools already in place?
Identify the positive use-cases: What specific friction point would this particular technology address? What could it make easier? How could it help align your decision making with your big picture values (#1)? What would successful use look like?
Identify, avoid and escape pits: What are the drawbacks to its use? What kinds of errors or side-effects do you want to avoid? How might its use conflict with your big picture values, and is there a reasonable way to avoid that conflict?
Create a feedback loop: Once you make a decision, go back in a week or two (or whatever timeframe makes sense for this particular decision) and assess. Check to see how the use is going. Is it living up to its promises? Are there unexpected friction points? Crucially, for most decisions, you need to keep these feedback loops open, so plan on coming back to this decision frequently for check-ins. Don’t put new tech on autopilot or it will start making decisions for you (ask me how I know…).
Accept and tolerate variation: Not everyone starts with the same #1. People’s #2s will differ because their needs differ. And their #3s will likely differ from their neighbors’ because people assess tradeoffs differently. We need to allow different people and solutions to co-exist with AI just like with everything else. This is why I (along with most sensible observers) oppose university-wide AI policies and instead encourage local, course-specific policies that take into account the uses people really need this tech for.
Rad Mod AI in Action
I recently experimented with using ChatGPT to help me with my own blogging process. On my commute, I dictated a rough idea for a blog post and later used ChatGPT to create a detailed outline from that recording. I wasn’t thrilled with the result when I asked it to generate a full draft—it didn’t sound like me. But when I used it to produce an outline, it was great.
It clarified my thoughts and gave me a clear structure to work from, turning my rambling car thoughts into a coherent outline. This way, I could use my Saturday to flesh out the draft without spending precious time at home brainstorming or organizing my thoughts. It was a great example of how AI can save time on routine tasks while leaving the creative work to me. This use-case also helps me maximize time with my family, which is another important value for me.
When it comes to new technologies, you have to start by asking fundamental questions about your own values and what you’re trying to achieve. Where are you in your life or your work? What problem are you addressing? Once you have a clear understanding of that, you can think about whether a particular AI use case will help you achieve your goals. Will it save time on important projects or help develop skills? And equally, what are the costs? Is there a risk of losing a skill, like writing or critical thinking?
For me, I’m cautious about letting AI do too much of my writing because I need more practice at clear writing, not less. But I’m all for using AI to save time, like turning dictation into a detailed outline that I can build on. This isn’t a one-time decision either—our relationship with technology evolves.
My own views on social media have changed over time; there were periods when I stepped back, and now I focus my efforts on LinkedIn for professional interactions and Facebook for personal connections. I made these changes based on info from periodic feedback loops, which in my case were occasional (usually quarterly) check-ins about how I felt about social media usage.
The same can apply to AI. It’s not about making a single, fixed decision on how we use it. It’s about creating a decision-making framework that helps us adapt as we learn more about what AI can do and what its impact is.
So yes, this post was partially crafted with the help of ChatGPT, using its outline capabilities to transform my commute thoughts into a detailed blog post outline. I think it worked pretty well, though I’ll know more about potential issues once I’ve played with it for a while.
What about you? What technologies have you made intentional decisions about? What framework did you use to do it? How are you using AI in your work or teaching? What’s been helpful, and what hasn’t? Are there use cases you’ve found particularly beneficial—or ones that didn’t work out as you hoped? Leave a comment: I’d love to hear what you think!
And, as always, subscribe and share if you like what you read!
Hi Lauren. I love the organized decision heuristic you present! Your example of the Amish makes me wonder what the role of community is in deciding when/if/how to use AI. For example, how can one weigh the effects of AI on things like trust (key to community)? Or, in an academic department, what effect does teaching students to use AI in a 200-level course have on how they learn when they get to a 400-level course? I’d be interested in thinking about these community effects (externalities?) that new technologies can have. One more thought - you are exactly right that it’s important to keep feedback loops open - soon enough, AI will be just another tool for learning (like calculators).