How should we take responsibility for the problems brought about by AI?—from “Self-Punishment” to Response-ability
From an interview published in the Japanese web magazine Levtech LAB (PART 2)
This article is an English translation of an article published in the Japanese web media Levtech Lab.
October 23, 2025
Buddhist monk Shoukei Matsumoto moves fluidly between two seemingly opposite worlds—Buddhism and business—offering new perspectives on the challenges of modern society. In the first part of this interview, we explored his view that the anxiety many people feel—“AI might take our jobs”—stems from what Buddhism calls attachment.
But that is not the only kind of anxiety we feel toward AI.
At present, AI still contains many “black box” elements. It is difficult to fully understand how it works, what it generates, or how it will continue to evolve. How, then, should we handle something that carries such unknowability within it? And for companies that use AI—which can never be completely controlled—how should they take responsibility if something unexpected occurs?
In response to these fundamental questions, Shoukei says that the key lies in letting go of the desire for perfection and reconsidering the meaning of “responsibility.” What does he mean by that? In this second part, we delve deeper into his reflections on how we can relate to AI—an uncertain presence—and on what “responsibility” should mean for companies and engineers in the age of AI.
Letting go of “Perfection” and accepting “Uncertainty”
— In the first part of this interview, you explained that the anxiety engineers feel—the fear that AI’s evolution might take away their jobs—has its roots in what Buddhism calls attachment. That makes sense, but I also feel that the “unknowability” of AI—especially in the case of generative AI, which still has many black-box elements—can itself be a source of fear and unease. How should we face such an incomprehensible presence?
Shoukei:
Our fear and anxiety toward “not knowing” may, at their root, come from a kind of perfectionism—the desire to understand and control everything.
The pursuit of perfection is endless. It is a painful path, one that constantly keeps us side by side with a sense of lack—the feeling that “it’s still not enough.”
That is why, rather than trying to perfectly understand and control everything in this world, including AI, I believe it is becoming increasingly important for us to let go of the idea of perfection itself.
Earlier, I mentioned that the cause of suffering lies in attachment. But this does not mean we must abandon all forms of commitment or passion. Rather, when our attachment takes the form of an obsession with perfection, it inevitably leads to suffering. In other words, to hold commitment while being free from the idea of perfection—this is one of the timeless themes that Buddhism has long sought to explore.
— Letting go of perfection, you say?
Shoukei:
Yes. In Buddhism, there is a phrase hakkushō (literally, “eight or nine out of ten is nearly complete”), which teaches that instead of striving for one hundred percent, we should aim for eighty or ninety. One hundred percent, after all, is nothing more than a kind of fantasy.
Let me explain with an example of cleaning.
As a monk, I’ve made cleaning one of the central themes of my practice. Here at Komyoji Temple in Tokyo, we hold an event called Temple Morning about once a month, where everyone gathers in the morning to clean together.
When you think of cleaning a temple, you might imagine polishing every corner until it shines—a practice that seems to pursue perfection.
But in reality, there is no such thing as perfect cleaning. If you tried to polish every single grain of the wooden floor, there would be no end to it.
And the moment you finish sweeping beautifully, dust will already begin to fall again.
A perfect state cannot be maintained, not even for a single moment.
— So, when it comes to working with AI, do you mean that the pursuit of “perfection” itself may actually cause us more suffering?
Shoukei:
Yes, that’s how I see it.
As is often pointed out about the “black-box” nature of generative AI, we do not yet fully understand how the vast number of parameters interact and lead to a specific output in the generative process.
The more AI-related technologies advance, the more pronounced this tendency will likely become. In other words, it is becoming increasingly unrealistic for humans to “perfectly understand and control technology,” as we once believed possible.
If we continue to cling to the idea of perfect “understanding” or “control,” I believe we will lose sight of how we should truly relate to technology.
After all, the Bayesian statistics applied in machine learning are characterized by their ability to improve estimation and prediction accuracy as new data are incorporated. To put it simply, rather than aiming for one hundred percent accuracy in the first calculation, they work by allowing for a certain degree of uncertainty—absorbing various pieces of information and flexibly adjusting predictions and inferences to move closer to a more reliable answer.
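The updating process described here can be sketched with a toy example. This is only a general illustration of Bayesian updating, not a description of any particular AI system; the coin-flip scenario, the function names, and the numbers are all assumptions introduced purely for the sake of the example.

```python
# A minimal illustration of Bayesian updating: estimating the bias of a
# coin, starting from a uniform Beta(1, 1) prior. Each new observation
# revises the estimate; the answer is never "final," only more reliable.

def update(alpha, beta, observation):
    """Update a Beta(alpha, beta) posterior with one flip (1 = heads, 0 = tails)."""
    return alpha + observation, beta + (1 - observation)

def posterior_mean(alpha, beta):
    """Current best estimate of the coin's probability of landing heads."""
    return alpha / (alpha + beta)

# Start from complete uncertainty: Beta(1, 1) has mean 0.5.
alpha, beta = 1, 1
for flip in [1, 1, 0, 1, 1, 0, 1, 1]:  # data arriving one piece at a time
    alpha, beta = update(alpha, beta, flip)
    # After each observation, the posterior shifts; nothing is recomputed
    # from scratch, and no single step claims 100% certainty.

print(posterior_mean(alpha, beta))  # 6 heads, 2 tails -> 7/10 = 0.7
```

Note how the estimate is never declared finished: the same `update` step can absorb the next observation whenever it arrives, which is the "looseness" the passage above attributes to this way of thinking.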
The very philosophy on which AI’s development has been built is, from the outset, incompatible with perfectionism. At its core, I think there exists something closer to the spirit of hakkushō—a kind of looseness or flexibility that allows for imperfection.
This shift in mindset—letting go of the pursuit of perfection—is equally important in the world of business. Of course, this does not mean doing careless work. Rather, in an age of rapid and constant change, we must recognize that the “perfect” state we often seek to maintain is merely an illusion. To move forward, we need to let go of our desire to control everything and open ourselves to new ways of thinking.
— I understand the idea that we should let go of the illusion of “perfect control.”
But then, what kind of relationship should we build with AI instead?
Shoukei:
Trying to “perfectly control” something according to human will and desire is, in itself, an unnatural act—one that inevitably involves strain. The path we must take from here, I believe, is that of being with—in other words, coexistence. But what does it actually mean to “coexist” with AI? The clue, I think, lies—perhaps unexpectedly—within the values that we Japanese have long cultivated throughout our history.
Three Attitudes Toward AI — Confrontation, Utilization, and Coexistence
— The values that we Japanese have long cultivated, you say?
Shoukei:
Yes. But before we get to that, let me clarify one premise. It seems that the way people relate to AI differs greatly depending on their country and cultural background.
Recently, I had an interesting conversation with a German philosopher, Markus Gabriel, with whom I’ve become quite close. He pointed out that “human attitudes toward AI can be roughly divided into three types.”
The first is the stance of viewing humans and AI as being in a relationship of opposition. In simple terms, this is the idea that, ultimately—just like in the movie The Terminator—it will become a battle over which side, humans or machines, holds power and superiority.
The second is a view that, while less extreme, still sees AI merely as a tool to be used by humans. In this view, AI is acceptable only insofar as it serves human purposes, with humans remaining firmly in control as the masters. Some countries and regions are moving rapidly to regulate AI, and such efforts may be rooted in this way of thinking.
And the third attitude is coexistence. Markus described this as the “Japanese attitude.”
He said that the symbolic embodiment of this view is Doraemon.
— Doraemon?
Shoukei:
Yes. In the world of the Japanese animated series Doraemon, humans and the AI robot Doraemon coexist. Living together with Doraemon, Nobita is not afraid of him. They help each other as imperfect beings, and when there is only one dorayaki (a Japanese red bean pancake) left, they share it equally. There is no hierarchy between human and robot—only a relationship of mutual recognition and coexistence. Markus pointed out that this represents a way of relating to AI or AI robots that differs from the value systems of “conflict” or “utilization.” Such values are not unique to Doraemon; they can also be seen in many other Japanese animated works, such as Astro Boy (Tetsuwan Atomu).
I believe the background to this lies in the animistic worldview that runs deeply through Japanese culture. As expressed in the ancient phrase Yaoyorozu no Kami—“eight million gods”—there is a belief that spirits or deities dwell in all things throughout nature. Japanese Buddhism, too, has a philosophy—“mountains, rivers, grasses, and trees all possess Buddha-nature.” It teaches that everything in the natural world holds the potential to become Buddha.
Within such a worldview, humans are not seen as special beings, but merely as one of the many elements that make up the world. Humans, animals, plants—and even AI and robots—are neither above nor below one another. They simply exist together. I believe that this Japanese sense of coexistence holds an important hint for how we might live alongside AI in the future.
“Responsibility” is not about making amends through sacrifice or punishment, but about continuing to respond.
— I understand very well the need for a shift in perspective—accepting the uncertainty of AI and exploring ways of coexistence. However, in the real world of business, if a service that utilizes AI causes some kind of problem, the developer or company will inevitably be held “responsible.” How do you view this issue of responsibility?
Shoukei:
“Responsibility” (normally translated as “SEKININ” [責任] in Japanese)... That Japanese word itself may be somewhat problematic.
I sometimes work as a translator, and in Japanese, “SEKININ” is used as the translation of the English word “responsibility”. But I feel that when first translated into Japanese, the nuance of the original English term may have been lost—or at least shifted slightly.
— So, does the meaning of “responsibility” differ from the Japanese word “SEKININ”?
Shoukei:
In Japanese, “SEKININ” carries a strong nuance of “being in a position to make amends through sacrifice,” whereas it is often pointed out that the root of the English word “responsibility” carries the connotation of response.
This distinction is discussed in detail by philosopher Koichiro Kokubun and pediatrician Shinichiro Kumagaya in their coauthored book The Genesis of ‘Responsibility’: Middle Voice and Participant Research (Shinyosha, 2020).
The Japanese word “SEKININ” (責任) is written with the characters for “to bear blame” (責) and “to be entrusted” (任). From this combination of characters, one naturally gets a negative impression—of being blamed or held accountable when something goes wrong. In fact, in Japan, the expression “to take SEKININ” often carries a self-punitive nuance: resigning from one’s position or even, in older times, “cutting one’s belly” (seppuku) as an act of atonement.
However, considering the etymology of “responsibility” in English, it can also be understood as “response-ability,” meaning “the capacity to respond.”
To borrow the words of philosopher Masaki Ichinose, this can be understood as “the ought-to-respond-ness”—perhaps a bit of a difficult phrase, but it means being in a position where one must respond to a given situation. Put more simply, I see responsibility not so much as a kind of “ability” but as a sense that one cannot help but respond.
Therefore, the idea of “taking responsibility” through self-punishment or atonement under the Japanese concept of “SEKININ” feels quite different from what responsibility originally implies.
And from the standpoint of problem-solving, I believe it is this response—the act of continuing to respond—that is far more important.
— Could you explain that in a little more detail?
Shoukei:
For example, if you see an elderly woman crouched down on the side of the road, you naturally ask, “Are you all right?” That is not because you feel a sense of duty or obligation that you “should” do so, but because the situation itself calls forth a natural response. It is precisely this spontaneous act of responding that I believe represents the original meaning of responsibility.
— So in other words, the true meaning of responsibility is not limited to “making amends for the past,” as the Japanese word “SEKININ” often implies, but is closer to an ongoing attitude of “continuing to respond to the situation here and now”?
Shoukei:
Exactly. And I believe that this English sense of responsibility—as the ability to continue responding—is essential in the age of AI.
As I mentioned earlier, there is an inherent “unknowability” in AI; we can never predict its behavior with 100% certainty. For services built upon such uncertainty, the traditional Japanese notion of “taking SEKININ” by cutting one’s belly or punishing oneself can no longer function effectively.
What will be required of future developers and companies is not to “take responsibility” by resigning or shutting down a service when something goes wrong, but rather to continue responding—to stay in dialogue. This means deeply understanding the uncertainty of AI technologies, and continually explaining their nature and risks to users and to society.
And when problems do occur, what truly matters is maintaining a relationship in which one can keep responding. Of course, this relationship must not become one of constraint or control. At times, it is equally important to create space—to allow for pause and margin—so that we can look more deeply into what is happening.
These, I believe, are what responsibility should mean in the age ahead—a new way of understanding “SEKININ”, grounded in the act of continuing to respond.
Toward an era where everyone is asked to develop “technological attunement”
— Listening to you, I feel that those who work with AI in business will need not only technical expertise, but also a deeper understanding of the ethics, social context, and user literacy surrounding it—and a willingness to stay in dialogue.
Shoukei:
That’s exactly right. The way responsibility is understood in the old paradigm will differ greatly from how it must be understood in the coming society, where we live and work together with AI in many different forms.
To keep pace with this civilizational shift, companies and technologists will need to pay attention not only to the technology itself, but also to the ethical and social consensus that surrounds it—and, most importantly, to be able to speak about their technologies and services in their own words. That, I believe, is where new value will emerge for future enterprises and for engineers as interpreters of technology.
— So you mean the very role of engineers will expand in the years ahead?
Shoukei:
Yes, that’s right. Moreover, I think it’s not only professional engineers, but everyone, in their own positions, who will be required to develop what we might call a kind of technological attunement.
This doesn’t necessarily mean that everyone needs to learn programming skills. Rather, as we build a society that coexists with AI, we will all need to understand—at least as a basic form of literacy—the fundamental ways of thinking and the characteristics that underpin technology itself.
When that happens, the social presence of engineers—the specialists who deeply understand both the light and shadow of these technologies—will become immensely significant. Within that, I believe, lies a new kind of fascination in this work, and possibilities far greater than we’ve ever imagined.
Original Media: Levtech Lab
Original Article (in Japanese)
【PART1】https://levtech.jp/media/article/interview/detail_730/
【PART2】https://levtech.jp/media/article/interview/detail_744/
Credits:
Interview and Text by: Ryotaro Washio
Edited by: Imajin Tamura