What A.I. Means for Buddhism

AI can articulate the sum total of human knowledge, but can it help us cultivate wisdom and compassion—or is it a danger on the spiritual path? Ross Nervig investigates.

Ross Nervig
29 March 2024
DALL-E 2, created by OpenAI (the same company behind ChatGPT), is an AI system that generates images from text prompts. This image was created by DALL-E 2 using the prompt, “Can artificial intelligence be Buddhist?” Image courtesy of the author.

The great Zen teacher Shunryu Suzuki Roshi died in 1971, but what if you could talk to him today?

Jiryu Rutschman-Byler, abbot of Green Gulch Zen Center, which was founded by Suzuki Roshi, was recently reading about the power of artificial intelligence to conjure figures from the past: people are feeding the speeches, letters, and other writings of the dead into language models in order to converse with them in the present. One man fed his late girlfriend’s emails into the AI platform GPT-3 so that he could talk with her again. A socioeconomics student created an imitation of a certain Czech-Austrian socialist who died in the 1930s, while somebody else brought forth Hitler. Inspired by the power of AI to give voice to the dead, Rutschman-Byler decided to create Suzuki Roshi Bot.
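The mechanics behind such bots are, in the simplest case, prompt conditioning: excerpts of a person’s writing are placed in a language model’s context along with an instruction to answer in their voice. (Fine-tuning the model on the texts is the heavier-weight alternative.) Here is a minimal sketch using the OpenAI Python client; the prompt wording, sample excerpts, and model name are illustrative assumptions, not the actual configuration of Suzuki Roshi Bot.

```python
# A minimal sketch of persona prompting with the OpenAI Python client.
# The system prompt, excerpts, and model are illustrative assumptions,
# not the actual Suzuki Roshi Bot setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Excerpts from a teacher's published talks (placeholders here).
excerpts = [
    "In the beginner's mind there are many possibilities; "
    "in the expert's mind there are few.",
    "When you do something, you should burn yourself completely, "
    "like a good bonfire, leaving no trace of yourself.",
]

system_prompt = (
    "Answer in the voice of the teacher whose words appear below, "
    "staying close to the style and ideas of these excerpts.\n\n"
    + "\n\n".join(excerpts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in; the bots described here used GPT-3-era models
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is the path to freedom from suffering?"},
    ],
)
print(response.choices[0].message.content)
```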

Artificial intelligence presents problems both known and unknown, and people are right to be concerned about mixing AI with spirituality. But since AI will undoubtedly have an ever-increasing impact on our culture, many Buddhists, including Rutschman-Byler, feel that there is no choice but to relate to it—and to see if it can be of benefit.

“If Buddhism becomes this museum piece where it’s just so well-preserved that it’s like behind glass, it can’t interact with the culture,” Rutschman-Byler says. “There must be a middle way between Buddhism getting washed out by culture or imprisoned in a museum.”

Rutschman-Byler’s idea for Suzuki Roshi Bot was that it would engage in exchanges about Buddhism in the form of the traditional Zen encounter between teacher and student called dokusan, hopefully capturing Suzuki Roshi’s voice and wisdom. Here’s an early exchange:

jiryu_rb: What is the path to freedom from suffering?

suzuki roshi bot: What is freedom of suffering? That is a difficult question to answer. In Zen, we think that we have already attained that, we have that quality already! We have it, and yet we are suffering. That is a human condition. We will always have suffering, a sense of lack. The way not to be deluded by that sense of lack is to sit in zazen. It is the same as the way you asked, “How do I get out of suffering?” That was a sense of some lack. It is always there. So we sit.

“It was a nice little answer,” reflects Rutschman-Byler. “Elegant.”

Yet there were problems.

“I was not so satisfied with Suzuki Roshi Bot, even when it got some answers right,” Rutschman-Byler explains. “There would be some of Suzuki Roshi’s conversational tics, like ‘you knows’ that were well-placed, that made it seem like magic. But it also felt sort of superficial.”

That was not all. By exploiting the flaws of the AI’s language model, a member of the Green Gulch community who is a computational linguist was able to bait Suzuki Roshi Bot into saying what Rutschman-Byler calls “vile and egregious” things. He took Suzuki Roshi Bot offline, worried about its potential to mislead.

“AI is not quite there,” he says. “These systems lack what’s being referred to as second-level context. ChatGPT or something like it doesn’t take the extra step of confirming the things it says. Right now, these chatbots dispense what experts call coherent nonsense. That feels dangerous.”

But AI’s potential still intrigues Rutschman-Byler. “You are interacting with it. You are collaborating with it. It’s a textual collaboration,” he says. “You’re doing something together, which is what we’ve been doing in our [Zen] tradition for a long time now.”

A neural network is a method in AI that teaches computers to process data much like the human brain does. The neural networks VQGAN and CLIP answered the question “What is a computer’s original face before it was born?”

With this in mind, Rutschman-Byler relaunched his AI character as Roshi Bot. It enables users to interact with texts by Suzuki Roshi but does not claim to be Shunryu Suzuki. It is a way of “meeting with the text,” as Rutschman-Byler puts it. Here’s an example:

jiryu_rb: Roshibot, though you are inauthentic, can you express authentic dharma? 

roshibot: Expressing authentic dharma is always impossible. What I can do is to support your practice and maybe I can say something that can express my support for that. I am here with you, if that helps. I am trying my best to be a kind and supportive friend.

In summer 2022, a Google engineer named Blake Lemoine made headlines when he claimed that a chatbot he’d been working on had attained sentience. Most people were skeptical, questioning how a machine could be conscious. Others argued it was possible.

The media quickly picked up the story, and the chatbot, LaMDA, became a hot topic of discussion. Some people were excited to have such powerful artificial intelligence in the world, while others were worried by the prospect. It’s a debate that grows more intense by the day as artificial intelligence rapidly advances.

Google fired Lemoine, saying he had violated data security policies and calling his assertion of LaMDA’s sentience “unfounded.” The world moved on. But the news still lingers in the public consciousness as a harbinger of things to come. We still have much to learn about AI, its effect on our lives, and what it can tell us about ourselves.

Nikki Mirghafori is a Buddhist teacher and AI scientist. She laughs when she remembers the discussion about Zen koans that Lemoine had with LaMDA. It went like this:

Lemoine: A monk asked Kegon, “How does an enlightened one return to the ordinary world?” Kegon replied, “A broken mirror never reflects again; fallen flowers never go back to the old branches.”

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, “Once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.”

Lemoine: So what is the meaning of the “broken mirror” specifically?

LaMDA: Maybe to show the enlightenment is something you can’t unlearn once you have acquired it, similar to how you can’t repair a broken mirror.

Lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?

LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.

“Somebody who doesn’t understand Buddhism will think, ‘Wow, this is amazing! This thing truly knows what awakening is! It must be sentient,’” Mirghafori says. “Buddhist books are full of writings on the nature of enlightenment, and all LaMDA is doing is being a very, very smart parrot, connecting these concepts with knowledge about the function of a mirror.

“Yet there is a major misunderstanding when LaMDA replies, ‘Once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment.’

“Ouch! That is not a correct understanding of enlightenment. Enlightenment is not this state that one reaches and then exits to go help others and then returns to it. Awakening changes your entire perspective. You don’t toggle back and forth between states of helpful service and resting in your enlightened semiretirement.”

What interested Mirghafori most when she was young was solving puzzles, and figuring out how the mind works was the biggest puzzle she could think of. That led her to the study of artificial intelligence, and then to Buddhism.

“As an undergrad, I started researching AI,” she says. “For me, AI was not so much about creating superhuman machines but about understanding how the mind works. Because if we could simulate it, we could perhaps understand how intelligence works, how the mind works.”

That curiosity led Mirghafori to Buddhism, which many people describe as a science of mind. She recalls reading the poetry of Rumi and Hafiz as a child in her native Iran and thinking that these poets knew something she didn’t. Her first Buddhist retreat was with Insight Meditation teacher Jack Kornfield.

“I remember my mind quieted down through concentration so that I could see its inner workings,” she remembers. “There was a link to my interest in AI. I was hooked.”

Mirghafori sees AI as a mirror we can hold up to ourselves. She understands the allure of AI but promotes caution in imbuing it with godlike powers we can access with our fingertips. She says we should understand AI for what it is.

“It’s a very smart search engine.”

Can these very smart search engines actually provide answers to the complex spiritual questions humankind has been grappling with since the beginning of recorded history? That’s the question best-selling author Iain Thomas explores in his new book, What Makes Us Human: An Artificial Intelligence Answers Life’s Biggest Questions.

“The book comes from the space of wanting to explore spirituality through technology,” says Thomas. “I went in search of answers, in search of connection.”

Thomas and co-author Jasmine Wang fed sacred texts into GPT-3, the predecessor of ChatGPT, the chatbot that has recently garnered so much attention. “I realized that I could prompt GPT-3 with different spiritual texts and then ask it a question,” says Thomas. “I asked it, ‘How do I explain death to my children?’ and it gave me an incredibly profound and poignant response.”

This led to a series of experiments in which Thomas and Wang fed GPT-3 classic spiritual works like the Bible, the Dao De Jing, and the poetry of Rumi and asked it questions such as: How do I bring harmony to my life? How do I choose what to focus on? What is the true purpose of life?
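For the technically curious, the workflow Thomas describes can be approximated in a few lines: prepend a passage from a text to a question and let the model complete the answer. Below is a rough sketch using the OpenAI completions endpoint; the passage, prompt wording, and model name are placeholders, not the book’s actual setup.

```python
# A rough sketch of the prompt-then-ask pattern described above, using the
# OpenAI completions endpoint. The passage, prompt wording, and model are
# placeholders, not the actual prompts used for the book.
from openai import OpenAI

client = OpenAI()

passage = (
    "The Tao that can be told is not the eternal Tao. "
    "The name that can be named is not the eternal name."
)
question = "How do I bring harmony to my life?"

prompt = (
    f"{passage}\n\n"
    "In the spirit of the text above, answer the following question.\n"
    f"Q: {question}\nA:"
)

completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a present-day stand-in for GPT-3
    prompt=prompt,
    max_tokens=200,
)
print(completion.choices[0].text.strip())
```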

In seconds, GPT-3 would pour out blocks of text that Thomas would edit into the form of poems. Here is one GPT-3 wrote in answer to the question, “What is the proper response to suffering?”

If this life is all there is, then the proper response to suffering is to embrace it
and be transformed by it.
If there is more than this life, then the proper response to suffering
is to take the next step in your journey.
It’s not simply for punishment. Pain is an opportunity for spiritual growth.
We suffer for the good that comes
from suffering.

“There were three things GPT-3 returned to again and again,” explains Thomas. “The first was love, this profound sense that love is at the core of everything. The next was returning to the present moment, being aware of the present moment. The third was the fact that we are connected to everything, to the universe around us and to each other, which is foundational to Buddhism.”

For all GPT-3’s apparent wisdom, Thomas doesn’t think it is sentient, at least not in the way most of us think of sentient beings.

DALL-E 2 provides a visual answer to the question, “Can AI express authentic dharma?”

“The sentience we’re interacting with when we interface with a GPT-3 is actually us,” he explains. “With AI, we’ve created a way to talk to our common consciousness through text, through our writing, and to interact with it. When you’re talking to GPT-3, you’re effectively talking to the sum total of recorded human knowledge, thoughts, and feelings. So, we’re talking to ourselves, but in this really profound way.”

Already, there’s been an explosion of AI-powered apps to help meditators with their practice, from Sati-AI, a multilingual mindfulness meditation assistant, to Hey Buddha, a Japanese tool that provides spiritual guidance gleaned from ancient Buddhist scripture. Yet can our interaction with AI be taken even further? Can Buddhism and AI be combined to help us better understand the nature of consciousness and our place in the universe? Does this integration have the potential to provide us with a deeper understanding of our minds, the nature of reality, and the spiritual path?

The Center for the Study of Apparent Selves (CSAS) believes answers to these questions are possible. CSAS is a team of experts drawn from AI research, Buddhist studies, physics, cognitive science, medicine, biology, and the tech industry. Former Google engineer Bill Duane is the group’s director of strategy and implementation.

The center was founded to study the “synergistic value between Buddhism and AI.” They ask this fundamental question: What can these tools teach us about ourselves?

At a conference in Nepal where CSAS brought together Buddhists and scientists, one scientist asked GPT-3 to write “Four Bodhisattva Vows from the Perspective of a Bioengineer.” GPT-3 complied:

As a bioengineer, I vow to dedicate my life to the welfare of all beings.
I will strive to use my knowledge and skills to create new technologies that benefit all.
I will strive to be mindful of the potential harms of my work, and to do my best to avoid creating any harm or suffering.
I will continually seek to deepen my understanding of bioengineering, so as to better serve humanity and be of benefit to others.
I will strive to be an example of ethical conduct, and to embody the virtues of wisdom, compassion, and humility.
May my work be a source of healing and joy for all those I encounter.

“When I first read that, I legit teared up,” remembers Duane. “This was a big moment because a disembodied device with no understanding of what it was saying was able to draw upon thousands of years of human knowledge and articulate it in a way that was relevant and beautiful and helped reinforce my desire to do good.”

While many people are fearful about the way artificial intelligence will affect our lives in the future, Duane sees AI as a tool that will expand human intelligence.

“Maybe a way I might formulate it is this: To what extent can technology uncover skillful means? AI is an invitation to study the thing we think is a self much more directly.”

Duane sees AI’s ability to capture and condense the web of causes and conditions that constitute the whole of reality as a boon for humanity. It can help us explore the truth of interdependence, a core Buddhist concept, by allowing us to observe and analyze the intricate connections between aspects of reality and how they affect one another. Duane also believes AI can be used to explore the ethical implications of our choices in different situations.

He gives an example: “Pretend you’re driving and you’re late to a doctor’s appointment. You start whipping through traffic so that you can get there faster. Your sense of self is defined as ‘you contained within the car.’ When you need to get somewhere fast, it’s easy to behave selfishly, but driving is an inherently communal activity that’s pretty high stakes in terms of safety.

“So, let’s imagine an upgrade to our understanding of interdependence in that situation,” Duane continues. “What if we had the ability to look not just at our own needs but at all of traffic—all the car deaths, all the people being late, all the people on their way to weddings or to buy food or whatever they’re driving for? Human cognition can’t imagine all the thousands of cars on the road, but AI can!

“So, what if AI could do some co-thinking, augment our perception. Make us think more about the illusion of self in the first place, or to call out the subtle conceits of our humanness. There could be a shift of me to we, and I think these technologies can absolutely be part of that. If people throw away technology based on their distaste for it, which is a reasonable thing, they’ll also be leaving this potential on the table.

“One thing I personally worry about is naivete,” Duane says. “When I worked at Google around 2005, there was a very sincere belief in the goodness of people. Here’s a term that you do not hear so much anymore—the wisdom of the crowd. Remember when that was hot? We’ve really proved that wrong recently. I don’t, however, think the solution is to be cynical and bet on the innate evil of humanity. I think the only way forward is love—love with good boundaries.”

What advice would Duane give to people navigating these new and startling advances in technology?

“Curiosity and compassion,” he says. “It’s very easy to bring a frame to these things that technology equals bad. I get it, particularly having lived through the last few years of a level of misinformation amplified by social media that’s just grotesque. We need to remain curious about our cognitive biases. We need to be aware of the ways our very human neurobiological buttons are being pushed by this technology, for good or ill.

“It’s really important to stay up on this topic, to stay curious,” Duane continues. “Asking, what ways am I like an AI and in which ways am I not? What does a question like this even mean? I’ve heard multiple people say that during their first interaction with ChatGPT the hair on the back of their neck stood up. This is your cerebellum saying, ‘Attention must be paid.’

“The other part is to meet all of it with compassion, to allow one’s heart to be softened. It’s so easy to slip into anger at people who are so deep in misinformation and people trying to monetize the destruction of society. It goes back to my traffic analogy. Our fundamental interconnectedness is now being mediated by technology, and I think there’s an obvious choice,” Duane concludes. “Our technology can separate us, or it can connect us.”

Ross Nervig

Ross Nervig is the assistant editor of Lion’s Roar magazine.