This is the second of four parts.
Part 1: Seeing Through
Part 3: Seeing, and Not Seeing, the Whole
Part 4: Why How We See Matters
If we look less at the AI and more at how it is situated, we get different sorts of views. These range from very short-term to extremely long-term. They may be panoramic or microscopic. There are many contexts in which we can see AI.
How we approach them depends on our fundamental orientation to the world. This is not just a matter of left/right, religious/agnostic/atheist, spiritual/materialist, or socialist/capitalist. Qualities of character - openness, wonderment, empathy, and so on - are all important to how you see AI. It matters too whether your approach to reality is primarily mediated by economics, social thought, an orientation to culture, politics, psychology, etc. If you are an adult, your profession matters as well. If you are a child, then it is your parents' professions. All of these things affect the context that each of us finds important.
I think it best to give you these notes on my orientation and background before proceeding. I cannot look at the big picture of AI, nor the close-up view, the way you do. We may see things similarly, but never identically.
For me, I see it in the context of higher education, of culture, of mind. The political, military, and intelligence aspects are of immense importance in my worldview. I like to situate things historically. So much of history is in the details - the beliefs and modes of thinking found in individuals and whole cultures, and how those interact. I have never been that interested in great historical forces.
All of which is to say that I view AI from several angles, at different levels of detail and over different timeframes, with decades of pondering how psychology, culture, and technology interact. I was trained in a history department, not a Science and Technology Studies program, so I have borrowed ideas and developed some of my own, and I have been at this, formally or informally, for about 40 years.
Now that we have that out of the way, let's think about the big picture a bit.
How big is the big picture? Among some of those who drive AI development and support it, the big picture is very long-term, millions of years, and spans the galaxy, but it tends to be fuzzy about the here and now. Their expansive focus on the future is often unmatched by a broad focus on the present. Sometimes this is true too of those who believe there will be one particular outcome in a few years. They may well be right, but we cannot be certain, and how we get from here to there matters, at least for those of us who live through it. It also will determine the details of that future, of what happens after us.
One problem with big pictures is that you need to zoom in on individual topics to see how they may play out. I will work with the area of most immediate concern to me. Since this is directly related to my work, let me reiterate that this reflects my experience and opinions, not necessarily those of my employer.
Let us suppose, for the sake of argument, that within five years we have true AGI, and that it will be so cheap and efficient in its use of resources that it can educate everyone in the world up to the level of an Oxford graduate. Say that in 10 years, universities have fully faded away. Except for the timeline, that was the premise, if I understand it correctly, of the comment Emanuele made on Nick's blog that sent me in this direction (see the previous post for details). I am not previously acquainted with Emanuele and apologize if I have misrepresented their thoughts.
It could be a glorious vision. Maybe we should strive for it. Whether I zoom in on it and think about how we can achieve it and how it works, or zoom out and project forward other developments in society, I find a lot of concerns. God, or the devil, depending on whom you ask, is in the details. Let me start from my perspective as an instructional technologist, my basic job for the last 25 years and counting.
If we stick to universities, primarily North American ones, for the moment, it is important to understand just how complex the technological landscape is. To introduce a new application, let us say a personalized AI tutor, takes time, money, and effort. It has to be vetted for security, privacy, accessibility, and support needs. Someone has to figure out how it will be paid for and where that money will come from. The university may have to put out a request for proposals and may have to accept the least expensive contract, which may or may not be the best product. Then we have to figure out how to deploy it, support it, work with the vendor to maintain it, and train people to use it. And actually understanding how all of these applications work together, along with their subcontractors, is hugely complex, both technically and legally.
Of course, we already have AI-powered tools that provide personalized feedback. Some we may pilot; others may be part of a textbook and fall partially under an existing agreement. (Do not think of textbooks as books: they are complex collections of databases, text, images, multimedia, automated quizzes – sometimes automated proctoring – homework systems, and other features. They are software.) One thing we will have to deal with in this transition is a multiplicity of personalized AI tutors. One possibility is that the big textbook companies will use copyright and intellectual property law to force the use of their AI tutors if you want to use their content. The way the market for AI tutors plays out may depend on the maneuvers of lawyers, legislators, and judges.
Of course, we do need to think in terms of markets. Those markets will not work in the manner of traditional ones, but more in the way of cloud services. If you want to go into this in depth, see Yanis Varoufakis, Technofeudalism: What Killed Capitalism (Melville House, 2023). These markets will help determine how the personalized tutors develop their features and their content.
That last takes us into the realm of politics, nationalism, and religion. There is a good chance that different countries will want their own tutors. Certainly, the US and China will become quite territorial. As the new Cold War develops, getting other countries to accept the tutors of one nation or another will be a matter of great import given their potential to indoctrinate entire populations.
We have already seen concerns within the US about the ideology of AI from both ends of the political spectrum. It is difficult to control the content, but adding different guard rails can shape the output to a significant degree. Congressional and state hearings on the content safeguards built into these tutors may become important politically. Religious groups will also get involved, even if only for the homeschoolers. They will need tutors that reflect their beliefs.
We are now in the territory of class, race, ethnicity, gender, sexuality, and several other hot-button issues of conflict. Regardless of your position, this should concern you - and it will. How do we deal with this? In the short term, we will have (in America) a combination of states and individual school districts making decisions for their communities, parents making decisions for their children, and universities weighing all of the issues I have already outlined. Unless someone produces an AGI that is infinitely malleable and can be set up to teach children about abortion, slavery, the Holocaust, communism, etc., in a way acceptable to the authorities, to parents, and to activists, we will have to have many different AGI tutors.
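To make the guard-rail point a bit more concrete, here is a deliberately simplified sketch of what "many different AGI tutors" might reduce to in practice: one underlying model, wrapped in different policy layers by whoever controls the deployment. Everything in it - the POLICIES table, apply_guardrails, the community labels - is hypothetical and illustrative, not any real vendor's API.

```python
# A deliberately toy sketch: one underlying tutor model, many policy layers.
# All names here are hypothetical, for illustration only.

POLICIES = {
    "state_district": {
        "system_prompt": "Present contested topics using the state-approved framing.",
        "blocked_topics": ["topic_x"],
    },
    "homeschool_coop": {
        "system_prompt": "Frame answers consistently with this community's stated values.",
        "blocked_topics": [],
    },
}

def apply_guardrails(policy_id: str, student_question: str) -> str:
    """Wrap a student's question in whichever guard rails the deployer has chosen."""
    policy = POLICIES[policy_id]
    # Refuse outright if the question touches a topic this deployer has blocked.
    for topic in policy["blocked_topics"]:
        if topic in student_question.lower():
            return "This topic is not available in your tutor."
    # Otherwise, the same underlying model receives a differently framed prompt.
    return f"{policy['system_prompt']}\n\nStudent: {student_question}"

# The same question is framed, or refused, differently depending on the deployer.
print(apply_guardrails("state_district", "Tell me about topic_x."))
print(apply_guardrails("homeschool_coop", "Tell me about topic_x."))
```

The code is trivial; the politics are not. The same student question gets framed, or refused, differently depending on which layer sits in front of the model, and that layer is exactly what the hearings, the lawyers, and the homeschoolers would be fighting over.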
Maybe we will have one or two that are acceptable, one oriented to the US power bloc and another to the Chinese (perhaps with a handful of other countries having their own – say Russia, India, Iran, South Africa, and Brazil). Will everyone have equal access? Or will some features be free and others paid? Will we maintain economic equality in this way?
And if they are being used by everyone, every learner from infancy to adulthood, who will have access to all of that data? Who will use that data? How will they use it? Surveillance and control become unimaginably more powerful when we are dealing with AI, and especially with a persuasive AGI that can use that data to become even more persuasive, or to raise alarms about the behavior of its users. We have to think about things like social credit systems, very detailed, individualized surveillance, and what all of this really means for society.
We can begin to see the complexity of the transition, and I have not even touched on things like job loss, the dissolution of institutions, who will care for children during the day if there are no schools and the parents still have jobs outside the home, or the issues with self-motivation that many students experienced during the pandemic.
We might get to the point where everyone does learn through AI in a personalized fashion, able to take them to the level of an Oxford graduate. But getting there would be hard, and it has to be seen in this broader context. There is no quick technological fix. Technology always interacts with culture, society, economics, politics, and our individual psychologies. There is no getting around that.
I will leave things there for this part. I will continue to look at context, but will shift perspective in the next section.
"If we stick to universities, primarily North American ones, for the moment, it is important to understand just how complex the technological landscape is. To introduce new applications, let us say a personalized AI tutor, takes time, money, and effort. It has to be vetted for security, privacy, accessibility, and support needs. "
And actually understanding how all these apps work together, their subcontractors, etc., is hugely complex technically and legally.