Continuing Our Conversation with Robert Geraci
DRAFT
Robert Geraci studies magic.
But there are no card tricks or wizards involved in Geraci’s research. Geraci focuses on religion, science, and technology — things that are magical to him because they all give meaning to the world.
This is how Madison Rossi introduces him in a profile in The Chautauquan Daily. I can't do any better.
“It’s the magic that says, ‘Hey, this world actually is really meaningful and special,’ ” she goes on to quote him. “This world is more than just a place that I’m walking through; it’s better than that. I’m not just a passerby here. This is actually something really fabulous to be a part of.”
It was Robert who got me thinking that Ayudha Puja would be a great day on which Unitarian Universalists every year might pause to honor the ingenuity of humankind as it's reflected in our tools and machines and to reflect on our partnership with them, especially as we see it emerging in the brain-computer interface, the BCI that's been rapidly evolving for the last 50 years and is now proliferating in the form of smart helmets and smartcaps.
I'll propose this to my own UU congregation in Saratoga Springs on Ayudha Puja 2023. For our program, I'm borrowing the title of one of Robert's early papers, "Religion for Robots."
When I step into the pulpit on October 22, 2023, I want to report that I've talked to a lot of experts and I've come up with a pretty good consensus about how we're likely to be interacting with AI in ten years and the issues we'll be confronting.
I had three big questions for Robert. Our conversation went this way.
DF: What does the next step in human evolution look like to you?
RG: We're going to have to go out on a limb here: let's pretend like we're not going to destroy ourselves through climate change and species extinction. In all honesty, these worry me greatly.
DF: Agreed. Let's imagine where BCI will be in a best-case ten-year scenario in which no global catastrophe has derailed civilization and the CHIPS and Science Act is delivering all of the results we thought would be possible in this relatively short time frame.
RG: Fine. Evolution has to be both mental and physical. By evolving mentally, we'll become better able to live in sustainable relationships. That will counter our fears.
In evolving mentally, we'll widen our networks of empathy beyond ourselves and our immediate desires and we'll become more attuned to long-term understandings of what we and others need.
We'll go from short-term thinking to long-term thinking through both cultural progress -- better ways of talking about ourselves and the world -- and our physical evolution.
No selection pressures can influence our genetics quickly enough to get us where we need to be through natural selection, so now we're in a relationship with evolution. Natural selection will allow the survival of humanity only if we can collaborate with it. We'll stay alive by adjusting ourselves. We'll adapt our genetics directly, beginning with the need to solve illness, but expanding to make ourselves more efficient.
We'll develop technologies that generate energy more efficiently than our current "burning stuff" approach. When we have greater energy and lower long-term cost -- that is, little or no effect on the environment -- we'll develop computing technologies that meaningfully change our mental and physical state.
Technologies like the smartphone already impinge on human beings and our condition. As the BCI evolves and increases our ability to access technology, we'll have a greater opportunity to intervene in the world productively. (By "increased access," I'm referring to a technology that lets us just think about what we want to read and have the result appear on our head-up display, or "HUD.")
[Editor's note: If you're new to HUDs, you can see them manifesting on car windshields and in industrial smart glasses.]
So physically our next step will be to interface with the world differently. We'll do it through a HUD that gives us immediate intellectual access to much of the world's information. We'll get real-time advice on the costs of what we're up to, and we'll find ways to think long-term because we'll see that's in our own interests -- not least because genetic intervention will likely lead to longer lifespans.
DF: What role do you see AI playing in human evolution?
RG: AI gives us a chance to evaluate decisions faster and with more variables to consider. Think about the moment my HUD (through glasses, contacts, or even implants) can tell me what the 97-percent-likely outcome of a particular decision will be. That's going to be a powerful corrective to human behavior. That's one thing I'd like to see.
The other role is to support individual autonomy. Through big data analysis, we're currently using AI largely as a form of surveillance. We can also use it to confuse and confound reality, as in deepfakes. We need to reconfigure our outlook. Rather than focus on how we can exert power over other people, we need to examine how we can exercise our own autonomy against corporate, government, or individual desire to control us. We need to ask: How can I use AI to defend myself against others' misinformation, against efforts to surveil and dominate me, and even against violence?
AI can help us make informed, long-term decisions, and it can help us defend individual autonomy against those who wish to manipulate us for commercial or political gain.
DF: How realistic is my ten-year time frame? Do you see us getting there by 2033?
RG: Maybe Ray Kurzweil is correct, and I'm simply failing to see an exponential curve in things like BCI technologies. But I certainly do not see such a curve, and I do not believe unfettered exponential growth has ever been, or will ever be, a thing.
So 2033 feels kinda soon to me, even though I love the number 3. But give it a bit more time, and I do think the kinds of HUD technologies I hope to see will become plausible.
It's unclear how much physical change we'll need. Do we need cybernetic eyes, or can we find a technology that uses holograms or some other means to project information for our eyes to see? Early on, a pair of glasses will suffice. Even a smartphone or a pair of HUD glasses is an interface between your brain and the computer; it's just not a direct one.
So we'll move toward direct interface, eventually putting the computing inside us; a computer in your thigh is a lot harder to leave in a taxi than a computer in your pocket. Hopefully, that's going to help us make better decisions.