On the Reasonable Use of AI
Is there a "dumb phone" approach to AI use?
What follows grew out of documents and presentations on the responsible use of AI that I prepare as part of my routine work. It began with a question that popped into my head after reading an essay last week on the move toward "dumb phones." The content does not reflect the views or policies of my employer, the University of Missouri System. It is basically a summation of what I am trying in my own life and of some thoughts arising from that. I begin by setting out some principles, then discuss a few trends related to them, follow up with an explanation of each principle, and close with a brief conclusion. This is very different from my usual posts.
Principles of Reasonable AI Use
Maintain AI as a tool, not as a relationship.
Choose to use AI; don't let AI choose for you.
Find balance.
Consider cognitive, moral, and ethical consequences in decisions about when to use or not use AI.
Be deliberate about sequencing.
Make a list of AI functions you find useful and ones you find detrimental.
Turn off automatic AI search.
Limit AI apps and AI options in your operating systems.
Cultivate AI blackouts.
Find alternatives to AI that make sense for you.
Trends
Like most of us, I am sensitive to trends in our society and culture. Each of us has antennae tuned to different things for different reasons. AI trends, especially around Generative AI (GAI), have occupied a lot of my working hours, some of my leisure time, and much of my thoughts over the past two years. Another group of trends that has intrigued me a lot, especially since about this time in 2024, includes digital minimalism, everyday carry, emphasis on analog tools, and various "slow" movements. These create political, ethical, and moral debate, as well as pragmatic questions of work and personal life. In reading, viewing, listening, and doing, I have been exploring their implications.
The idea of "responsible use" often comes up in regard to both these trends - certainly for GAI and often for digital minimalism. There are serious moral and ethical issues around the use of all of our digital and many of our analog technologies. There are also cognitive ones. Many would argue there are no ways of using things like GAI or social media responsibly. Some people are in positions where they can reject AI and most digital technologies. Some of us can avoid them entirely in our personal lives but not in our work. Many of us do not want to but understand the cons as well as the pros of the digital lifestyle. All of this can also get deeply intertwined in politics and religion as well.
A couple of days ago, I ran into a post by Brendon Holder called Dumbphones Unstacked. I read it twice; it goes into some depth on Gen Z's return to dumb phones. (I prefer the two-word spelling.) Later in the afternoon, I wrote down the question: "Is there a 'dumb phone' approach to AI?" The next morning, I began writing out the "principles" outlined here. It took another day for me to see that I was not asking a moral, ethical, or cognitive question per se, nor one that required a specific political or religious answer (though you may not agree), and that I was skirting the question of whether there is any responsible use of AI at all. Instead, I realized that this is about what is reasonable in our lives under current conditions.
Let me put this in a different context. Many people consider our car culture immoral and irresponsible. In many respects, they are correct. I live in a community in the American Midwest that is built for cars (though the city of Columbia, Missouri, has made concerted attempts to make things more convenient for bikes and runs a number of buses). I grew up with the car culture of the 60s and 70s. I also have health problems that somewhat limit my mobility (enough to make walking or standing for long periods excruciating, but not enough to qualify as a disability). Mostly, though, I am responsible in some uses of my car but not others, and I find some changes that may be responsible unreasonable for other reasons. Call me a hypocrite if you wish, but this seems to be the way most of us deal with the complete absurdity of 21st-century life.
So, I tried to boil this down to a series of principles or questions to help each of us reasonably balance GAI use in our lives and work and tailor that use in ways that are specific to each of us. The first four items on the list are more abstract and general and do include moral, ethical, and cognitive aspects. The lower six are more targeted and more practical. In the rest of this essay, I want to explain each of them as I conceive them.
Explanations
Maintain AI as a tool, not as a relationship.
This originated from some things I have read or heard recently, but especially from a remark that Brent Anders of the American University of Armenia and the Sorovel YouTube channel made in one of Bryan Alexander's excellent Future Trends Forum sessions (March 21, 2025). In talking with his students, Brent noted that some of them referred to AI chatbots as being among their best friends, and that he knows many people who spend six to seven hours a day chatting with bots. It is easy to start treating GAI as if it were human, even when we know it is not; I slipped into this occasionally when I first started using chatbots. There is a school of thought that one should be polite to them to get the best results and to keep from coarsening our interactions with real humans. I understand that, but it is necessary to maintain a distance and a stance towards bots that is different from the one we take towards a human or an animal.
Any reasonable use of AI requires accepting that it is a tool, not a conscious or sentient being.
Choose to use AI; don't let AI choose for you.
Be deliberate about what AIs you use and how you use them. In the past year, and especially in the last few months, the major makers of operating systems have added more and more AI features and left them on by default. Apps also have more AI features. It has become a selling point. We are being indoctrinated or acclimatized to using AI all of the time and for everything. We need a form of mindfulness to use AI reasonably, let alone responsibly. We are constantly bombarded by AI suggestions and recommendations. It is easy to slip into accepting them uncritically and unthinkingly. Consider which AI tools and foundation models suit you, and be mindful in your use of them. Many of the principles below follow from this.
Find balance.
Each of us must find a balance between what we feel is reasonable and unreasonable, responsible or irresponsible, and simply decide what is useful in our work and leisure. This applies to everything, but here it applies to AI use. Maybe AI is useful to you for polishing your emails and other written work. I have a friend whose native language is not English; though she has little difficulty understanding English and speaks it fluently, she has difficulty writing in a way that she finds acceptable, so she sometimes uses AI to "Americanize" her writing. I do not do that, as I am a native speaker and want to maintain my dialect and the idiosyncrasies of my style. At the moment, AI is mainly useful to me for finding aspects of topics I may have missed and for finding good sources. In the past, when I was less concerned about ethical issues, I used it for creating clip art for presentations, backgrounds for Zoom meetings, and images for personal enjoyment. As I became more aware of and interested in the ethics of AI, I cut back on that severely.
Consider cognitive, moral, and ethical consequences in decisions about when to use or not use AI.
As my explanation of the previous point suggests, we still need to consider more philosophical and psychological aspects of AI when we think about using it reasonably. Your views of the morality and ethics of AI may differ significantly from mine. They may also vary a great deal by use. If a professor forbids the use of AI on a paper or project and the student uses AI, that is an important ethical issue, and the student is clearly in the wrong. On the other hand, if your boss strongly suggests you use AI more to improve performance, well, there are clearly ethical issues there, but it opens the door to using AI in many different ways, and you had probably better try some of them. Hopefully, you will find at least some of them reasonable.
The ethical and moral issues may be more or less important to you depending on your inclinations and situation. The cognitive effects are just beginning to be explored philosophically and psychologically. This is something that you will, for now, have to consider on your own. You may want to read up on it or listen to talks and interviews on the subject, but in this context, I suggest the important point is to observe how AI affects you cognitively and emotionally. (This also goes back to the first point about maintaining AI as a tool and not treating it as a friend or companion.)
Be deliberate about sequencing.
That is, when working, decide at what stages of a task or project AI is appropriate and most useful. It may be important to interleave AI use with your own thoughts and work. This is one of the practical applications of the principle of finding balance. For instance, I refrain from using AI for initial brainstorming. I prefer to sit down and write out the main points that need to be covered in any activity. Allowing AI to do that feels cognitively dangerous to me. Generally, I do not use AI to flesh out those points, but at some point, I may want to see if there are things I am missing. Using a tool like Perplexity (and yes, I am very aware of some of the company's unethical behaviors) often works better than a regular web search, or works well in conjunction with conventional web searches. This is particularly true for finding sources, but more on that below.
It may be that you prefer to involve AI in brainstorming from the start. I am not criticizing you. It just isn't for me. Again, this is part of finding your balance with AI. I just think you should do it deliberately and with your eyes wide open to both the limitations of the AI tools you use and possible consequences.
Make a list of AI functions you find useful and ones you find detrimental.
This goes hand in hand with sequencing. Having a list, at least in your head, should make a lot of the other choices easier. I am including this one provisionally: I think it is likely useful, and though I have not deliberately tried it before, I find I have been making such a list recently. It is reflected in some of the previous points. It looks something like this:
Expanding a topic.
Difficult searches.
Intermediate searches for sources.
Occasional use of AI to OCR my handwriting (this sometimes works well and sometimes not).
Images to reinforce points in presentations (doing less of this, as noted above).
Occasional deliberate use of AI in Grammarly to fix awkward sentences but not larger pieces of text.
For testing AI detectors (part of my job).
This is incomplete but reflects the main uses I find useful.
Some of the ones I find detrimental are:
Basic brainstorming.
Rough drafts (these require too much rewriting and fact-checking).
Extensive rewriting.
Deep research.
Reasoning.
These are going to vary by individual, but having some of this in your head, if not written out, is useful.
Turn off automatic AI search.
This is something I have been doing for the past month or so, and I find it eliminates a lot of bad information and lets me get to sources more easily. I have stopped using Google as my primary search engine. For searches, I now have a rough hierarchy that runs from easiest to most difficult:
Search the web with DuckDuckGo without AI, or search through notes on my computer and devices.
Search with Bing or Google.
Search with Perplexity (with either Web or Academic searches turned on depending on the question).
Do a Semantic Scholar search.
Search library databases.
This is part of being deliberate about using AI. I also find it produces fewer false starts and trails.
Limit AI apps and AI options in your operating systems.
This is absolutely central to being deliberate, sequencing, and finding balance. I have, for instance, turned all of the Apple Intelligence features off on my phone, iPads, and Macs. I do not find them useful. I also have specific features off in various apps, but know how to turn them on selectively if I do want them. Likewise, I am careful about which and how many AI apps I add to my devices and their settings. The idea is to customize the AI setup to your needs so you are not constantly bombarded with AI suggestions and interruptions. A colleague recently noted that his new Android phone had Samsung's, Google's, and Microsoft's AIs fighting with one another. I have heard similar things from others. That would be much too much for me.
Cultivate AI blackouts.
This is pretty simple. It is the same idea as taking breaks or holidays from social media, just with AI. You might decide not to use AI at certain times of day, on a given day, or in specific kinds of situations. Giving yourself a break from AI is a good way to maintain the necessary perspective.
Find alternatives to AI that make sense for you.
Finally, remember how you did things without AI or look for new ways of doing things that do not include it. The alternatives might just be a way to give yourself a break, too. They might allow you to flex cognitive muscles that feel weak. In some cases, they might just be better ways of doing things. These might be as simple as working through the sources a Wikipedia article uses, working from books, articles you have saved, or notes you have taken. Turning to a library database might be another. For writing, spending time with pen and paper or switching to a stripped-down writing tool on your computer may help. Sometimes, you need a minor change of modality to be creative.
Final Thoughts
This post is much more prescriptive in tone than my usual writing. I do not want to tell anyone what to do, and I am not judging. I am passing on things that seem to be working well for me, trying to find a way to integrate AI into my life while protecting my mental skills and abilities, and hopefully helping others think through the place of AI in their own lives.
Generative AI came at most of us with too much speed and too much hype. The psychological, ethical, moral, economic, political, cultural, environmental, and religious issues surrounding it are a lot to deal with. My own opinions on some of them fluctuate more than I would care to admit. I am not sure we are capable of using most of our technologies, let alone something as new and strange as AI, responsibly, so I am setting a somewhat lower bar: using it reasonably, rationally, and deliberately, and balancing it with other ways of doing things.
If this sparked anything for you, please comment and let me know. I would love to hear more about how others deal with AI.