This is the last of four parts.
Part 1: Seeing Through
Part 2: Seeing Around
Part 3: Seeing, and Not Seeing, the Whole
If you have already stuck with the first three parts, thank you. If you are just now coming across these posts, I hope they are enjoyable and useful to you.
In previous parts, I have shared some of the ways of seeing AI I find useful. I have given an in-depth example of how I do so in my work as an instructional technologist, and I have also suggested some relatively abstract methods. One perspective I did not explore, because I simply lack the expertise and experience, is that of an expert in the field. For that, I have to rely on others.
One of the most frustrating things in talking to people about AI, or really about anything, is how often one runs into fixed points of view. Sometimes people have not had much experience, and that is fine and to be expected. I encourage them to try different things with an AI, and to try the same thing with at least two or three different ones. Other times a person may not want to reveal ambivalent or unpopular views, whether to avoid criticism or for other reasons. One-on-one, they can usually be drawn out; it is a matter of trust. The most frustrating are the true believers, for or against AI, who only want to see things in one way. Too often that is a case of ideology, ego, self-interest, or all three.
I do not mind the inconsistency that holding multiple perspectives and ways of viewing allows. I think it is one of the reasons friends encouraged me to start this blog. If this is true in general, it is much truer for something as new and ill-defined as consumer-grade AI tools. For me, while trying to help others navigate the changes we have experienced in the last two years and will continue to see, this multiplicity is absolutely necessary. It does not keep me from making mistakes, but I believe it does help me correct the ones I can, and understand the trade-offs in the decisions we make.
Let me finish with a fictional example to reinforce my point. If you have read my posts consistently, you will know I mention William Gibson novels at the drop of a hat. I will not apologize for that; I enjoy them and find them useful. Gibson has been writing speculative fiction that involves AI for 40 years. It is not realistic, but I have found the stories of the AIs Wintermute and Neuromancer, Rei Toei, The Aunties, and Eunice (UNISS) compelling. They have become my fictional touchstones for AI, along with the cinematic HAL and Colossus. I will not go into depth here, though at some point I want to write a long essay on Gibson's AI characters. For now, I want to talk a little about the AGI, maybe ASI, Eunice from the 2020 novel Agency.
The striking thing for the present purpose is the number of ways Eunice, the AI, is perceived: literally, in how she manifests for the different characters in various situations, but also in how others view her existence and potential.
The first of these takes us back to what I wrote about AI as a hyperobject. Eunice is about as non-local as one can get. Almost from the beginning, she is distributed across supercomputers in different locations. She is introduced to us through a pair of glasses with an earbud. (Gibson was writing this not long after Google Glass was so heavily hyped, and he had already dealt quite a bit with AR technologies in 1993's Virtual Light and 2007's Spook Country.) What Verity, the character hired to test her, sees is what Eunice sees, but Verity quickly realizes that Eunice processes that view as data: various forms of crosshairs appear on the display, picking out drones, reading license plates, and performing biometric scans of other people.
Sometimes Eunice interacts through voice, other times through text, occasionally as an avatar, and even as an almost ghostly presence during a period when she comes under heavy attack. As I have begun to deal with some AIs, particularly ChatGPT, multi-modally, carrying on voice conversations and feeding it pictures from my phone camera or screenshots, this helps me make more sense of the experience. Of course, when ChatGPT answers me in its Juniper voice, I know I am not dealing with anything that actually understands me, as Eunice understands Verity, but it clearly influences the experience. As more and more capabilities like this are rolled out, having some clear references is important, at least to me.

The other aspect is how others perceive Eunice. For Verity, she quickly becomes a beloved friend, companion, confidant, and fixer. For Stets, probably the only nice rich person in any of Gibson's novels, she becomes an advisor as he, in turn, becomes her ally. To Cursion, the company that hired Verity, she is both a promise of immense profit and power and a deadly threat. To Lowbeer, a figure from the future with unlimited access to the best AIs of her time, Eunice is both an anomaly and a means of guiding Verity's timeline down a different, less destructive path than her own.
No two characters perceive Eunice the same way. They cannot, any more than any two people have identical experiences, feelings, or opinions about any of today's AIs. If we are to navigate through this period of our history, we need this sort of variety. We need to cultivate it in our institutions, our societies, and ourselves.
Too many today have motives to limit our ways of seeing, not just of AI, but of our polycrisis and our possibilities. Cultivate all the different ways you see the world or any of its aspects, reflect on them, and share them. Without that multiplicity, without reflection, the likelihood grows that AI, the climate catastrophe, or any of a multitude of problems will narrow the prospects of our survival or, if we do survive, diminish the quality and character of our lives.