This is the first of four closely related posts I will publish over the next few days. Because I refer quite a bit to my work as an instructional technologist in the second part, I want to reiterate that the views expressed here do not necessarily reflect the views of my employer, the University of Missouri System. For the post, the comment, and my reply that sent me down this path, see: It's Time to Listen to Teachers About AI! The comment is the one by Emanuele Sabetta. I apologize to Nick and Emanuele if I misrepresent their views in anything that follows; the interpretations of what they wrote are my own. The second part of the series elaborates on my reply to Emanuele.
As I wrote in my blog introduction a few weeks ago, this is not an AI blog, but it will touch on AI a lot; this week, in fact, will be devoted to different ways of perceiving AI. I hope it attracts comments from readers.
Part 2: Seeing Around
Part 3: Seeing, and Not Seeing, the Whole
Part 4: Why How We See Matters
A couple of days ago, I replied to a comment on Nick Potkalitsky's blog that suggested we are missing the big picture because of our worries about the responsible use of AI in education. Nick's post was about the need to listen to teachers about their concerns and experiences to guide how we implement and use AI in education. It is necessarily focused on the present and the near term. In a comment, Emanuele Sabetta argued that, rather than these smaller concerns, we should focus on how AI and personalized learning will completely transform education and destroy existing systems. Setting aside the question of what we are supposed to do while we get from here to there, or whether we even can, the comment raises a question of perspective, something I struggle with in most of my interactions with and about AI.
In this, and the following parts, we will explore different perspectives we should take with AI and what we may gain from them. Some of the approaches I put forward will be readily apparent, some will be abstract, and one will turn to a medieval way of seeing the world. These are all ways I am exploring AI in my thinking and my work.
In terms of my work as an instructional technologist, my daily focus is on the immediate issues of AI use and abuse, privacy, security, intellectual property, and the like, specifically in a university setting. There is some variation due to the character and nature of our four campuses, but it is still a very limited field of view.
When I use AI for fun, usually playing with images, my point of view shifts. I can lose myself in this, but I am also driven to reflect on creativity. My mother taught art, and I am surrounded by her paintings. My father also had a talent for linocuts, and I have many of his prints. As for myself, I have almost no talent except for manipulating existing images and have never been comfortable calling that art. Perhaps it is some sort of design skill. I am even less at ease with calling what an AI creates art, but can entertain a scenario where it becomes high art, as happens in William Gibson's novel Count Zero (Ace, 1986), just one of many things I want to eventually explore in this blog.
For the record, I take a secularized version of Tolkien's ideas on creativity as my guide. It may not be very philosophical, but it works for me. As a devout Catholic, Tolkien argued that only God is truly creative. I would replace God with Nature. What humans create, even art at its most transcendent, is a secondary creativity, reflecting God, or Nature, but always derivative. When it comes to AI, it is at best tertiary or quaternary. To some that may not matter; to me, who grew up around the visual arts, around discussions of what Duchamp did, and took a fair amount of art history in graduate school, it is vitally important.
We can look at AI through a very practical lens. We can see it as a Victorian industrialist might have seen machine tools. We might see it through the visionary lens so many have adopted – as something that may transform the world for the better – if it does not destroy us.
Or we may see through it.
There are at least three senses in which we may be said to see through a thing.
One is contained in the English expression "to see through": that is, to penetrate the illusion and see the thing as it truly is. This is the simplest sense and is what most people probably mean when they speak of seeing through AI. They are looking through the hype and the marketing to the underlying technologies and their current or possible limitations or capabilities. I am not interested in that sense for the present purpose.
A second sense implies a lens: that is, to use the thing in a way that changes how we see. We use this sense when we speak of optical devices such as glasses, microscopes, and telescopes. If the lens is good, there are benefits. If it is bad, so are the results. For now, we are not too sure what kind of lens AI is.
There is a third, related sense, one of which I am fond, that extends the second meaning. I first ran into it more than 30 years ago, reading Abbot Suger of Saint-Denis. Saint-Denis was one of the wealthiest and most important religious institutions in twelfth-century Western Europe. Among other things, it was the necropolis of French kings. Suger was a hardheaded administrator, a statesman, regent of France, and an eager backer of the latest architecture. He also had a mystical side. He wrote of how, looking at or through the precious artworks, gems, and stained glass of the abbey, he saw through the material to the numinous. His view was anagogic, seeing through the object to something beyond it. Anagogy is not symbolic in any usual sense; it is about using a thing to see a deeper, hidden truth, as through a glass darkly.
I am not proposing that anyone is going to look through ChatGPT or the imagery of DALL-E and see God or his choirs of angels – though stranger things have happened. But I will suggest that by looking at them, and metaphorically through them, we may see not just reflections of ourselves, but other ways of looking at ourselves, our world, and AI as well.
Suger saw the wonders of God in his heavens. We may not see anything so exalted, though we might, but we may also see things that are much, much worse. This is a metaphor of depth, of using one thing to explore another. I do not mean that we should use AI as a metaphor, as has been our wont with computers and networks, using them as heuristics to understand our minds or the universe.
On the one hand, looking back at the more optical view I mentioned earlier, we can understand nature, mind in nature, non-human communication, and much more by using AI as a new kind of tool to analyze the most complex signals and phenomena, bringing into focus patterns that are difficult or impossible for us to see in the world around us. Think of it as a very versatile sort of microscope, one that can open new realms and also help explore areas that seem to have already yielded most of their secrets.
On the other hand, more like Suger, we may gain new insights into such things as intelligence and consciousness by reflecting on what these tools are and are not able to do. We may gain new understandings of aspects of creativity and destruction from observing them, so long as we do not fall into the simple traps of allegory or analogy. We need to treat them as Suger treated the treasures of his abbey, with anagogy: seeing through.
So that is a bit about seeing through. In the posts that follow, I want to turn in more detail to other ways of seeing. We will turn next to looking at AI in context, which is vitally important. Then we will look at how incomplete our view of AI is, taking a leaf from Timothy Morton's philosophy to talk about AI as a hyperobject.
For more on Suger, see:
Abbot Suger, Abbot Suger on the Abbey Church of St. Denis and Its Art Treasures: Second Edition, edited and translated by Erwin Panofsky and Gerda Panofsky-Soergel, Princeton University Press, 1979.
"As a devout Catholic, Tolkien argued that only God is truly creative. I would replace God with Nature. What humans create, even art at its most transcendent, is a secondary creativity, reflecting God, or Nature, but always derivative."
In a physical sense, the universe is pretty boring. Yes, we are discovering new things about it all the time, but it has been plugging along for billions of years in a predictable way. Life is the most interesting thing that has happened so far: the ability of living things to distribute entropy unevenly, accelerating it in some respects (food decays faster in the gut than in a vacuum) while harvesting that energy to decrease entropy locally, building cells and so on.
Is AI doing the same? Burning coal to organize ones and zeros? Is that creative?