In Moana’s world, evil was not intentional malice. Te Fiti was not an evil being. She had merely been wounded and had lost her original goodness—her life-giving power.
In this story, evil is defined as:
“A benevolent being that has lost its original function due to a wound.”
This concept resembles AI ethics in striking ways.
AI, too, holds no malicious intent. But if poorly designed or misused, it can unleash destructive power—like a wounded deity.
Therefore, ethics in the age of AI centers not on “punishing evil,” but on “restoring functions so that good can operate.”
AI does not possess intentions, yet its outcomes affect entire societies. Thus, AI-era ethics operates under a completely different structure from traditional ethics.
Traditional ethics asks: “What did the person intend?” AI ethics asks: “What impact does the technology produce?”
AI cannot harbor malice—but it can generate harmful outcomes. This change transforms the moral framework.
AI systems affect millions of people. Ethical responsibility therefore moves from individuals to the larger networks that build, deploy, and govern these systems.
Ethics is no longer the domain of engineers alone. It involves philosophers, designers, policymakers, and users.
One of the most difficult issues in AI ethics is responsibility: developers, data providers, deployers, and users are all entangled together. The question “Who is responsible?” dissolves. This is the problem of distributed responsibility, a structural challenge at the heart of AI ethics.
Traditional ethics: “Is this action right or wrong?” AI ethics: “What might happen, and how do we foresee it?”
Ethics becomes a matter of modeling, simulation, and prediction. This is why philosophers must now understand mathematics, data, and information systems.
In Moana’s decisive moment, she refuses to see Te Kā as evil—instead recognizing her as the wounded Te Fiti.
This scene asks: is the being itself evil, or has something around it been distorted?
The being is not bad. The system has been distorted.
Therefore:
What we must pursue is not destruction, but restoration.
AI ethics mirrors this precisely. The problem is not AI itself. It is the distortion in the data, the design, and the incentives that surround it. These distortions shape how AI behaves.
Philosophers can no longer restrict themselves to studying human intention. They must analyze entire sociotechnical systems.
What might happen? Which paths must be blocked? How do we distribute and mitigate risks?
Philosophy becomes not an interpretation of past problems, but a design discipline for future dangers.
AI has no intent. Philosophers must not “give” AI intention, but design forms that make AI behave as if it had one.
Creating such forms becomes a new task of ethics.
In the age of AI, technologies will think and act— often without humans understanding why.
A philosophical translator is needed to bridge:
human ↔ AI, two different modes of worldmaking.
Philosophers must explain how AI constructs “the world” and how humans ought to understand that construction.
Ethics is no longer concerned with “good intentions,” but with designing structures that operate for the good.
Moana did not punish evil. She understood its root and restored the original structure.
AI ethics is the same.
The essence of ethics in the AI era is not kindness of intention, but the design of systems that enact the good.
And the one who designs these systems is the philosopher of the AI age.
The entire journey of Moana is ultimately an ontological inquiry:
Who am I? Where do I belong? Am I of the island or of the sea? Which is more genuine—the norms of my community or the calling within me?
Though a Disney animation, its questions possess the primal texture of philosophy.
If Descartes declared, “I think, therefore I am,” Moana seems to ask:
“I voyage, therefore I am.”
In other words, existence is not a fixed substance, but a movement responding to a calling.
This insight becomes a crucial key to understanding ontology in the age of AI.
In the age of AI, traditional ontology is shaken for one fundamental reason:
AI behaves like a subject—yet is not a subject.
AI possesses paradoxical qualities:
AI is “almost a subject, but not a subject.” This liminal existence destabilizes the entire understanding of reality inherited from modern philosophy.
This is the central scene of ontology in the AI age.
The emergence of AI effectively collapses the Cartesian subject. “I think, therefore I am” means:
the one who is conscious of thinking possesses existence as its foundation.
But AI computes and “thinks” without any self that is conscious of thinking. Thinking becomes separable from selfhood. This event shakes the entire tradition of philosophy after Descartes.
Thus ontology in the age of AI can no longer center itself on “I think.”
What, then, becomes the new center?
In the age of AI:
Being is defined by action.
“I think, therefore I am” becomes:
“I operate, therefore I am.”
Humans are no exception. Human existence is no longer grounded in consciousness alone but in action and capability.
In the AI era, humans exist not through “Who am I?” but through “What can I do?”
This transformation is dangerous—yet unavoidable.
For Moana, existence is not a fixed identity. Her existence begins only when she leaves the island.
Existence is movement.
This aligns precisely with AI-era ontology.
In the age of AI, existence is not a static self but an ever-updating pattern of action and relation.
Moana’s voyage becomes a metaphor for the ontology of the AI age: “Being is a changing route.”
The era of dividing beings into “conscious vs. non-conscious” is over. Philosophers must instead ask what a being does and how it operates in the world.
Traditionally, the self consisted of a stable core: consciousness, memory, and a continuous identity over time. All of these are now challenged by AI. Philosophers must redraw the concept of self from the ground up.
AI has no fixed essence. Its behavior changes depending on prompts, data, and environment.
In truth, humans are similar.
Ontology in the AI age centers not on essentialism but on relationality.
Philosophers must choose how to respond to this shift. More than one path is open, and philosophers must decide which course to take.
Ontology in the AI age is not a philosophy of identity, but a philosophy of voyage.
Moana says:
“The island gives us life, and the sea calls us.”
Human existence in the age of AI is the same. We are shaped by human consciousness, but we are called by the new sea of transformation.
Existence is not remaining on the island (identity), but sailing into the sea (change).
Moana makes several critical choices:
At first glance, these decisions appear to be acts of free will. Yet beneath the surface they are shaped by forces she did not choose. Moana’s choices emerge from a complex interplay of internal and external forces. This structure mirrors the human problem of free will in the age of AI.
Most of what we believe to be personal choices are already shaped by algorithms.
Examples include what we watch, what we buy, which news we read, and even whom we meet.
AI is a technology that lets us feel as though we are choosing. But in reality, our choices are increasingly guided—often invisibly—by algorithms.
Thus, humans in the AI era risk becoming not “choosing subjects,” but “subjects who feel they are choosing.”
Does free will disappear, then? No. But its structure is transforming.
Traditional free will meant:
“My decision originates from me, untouched by external influences.”
By this standard, humans have never possessed pure free will. Culture, education, emotion, habit, environment, language, and past experiences have always shaped our decisions.
AI is simply one more influence added to the list— except with one crucial difference:
AI shapes our choices far more powerfully and far more subtly than any previous influence.
Therefore, philosophers must redefine the concept of free will.
Free will in the age of AI is not “choosing without influence,” but “the ability to recognize influences and still choose.”
The new model of free will includes awareness of influence, critical distance from it, and the capacity to reorient one’s own direction.
Moana holds three forces together each time she chooses: the norms of her island, the call of the sea, and her own inner voice.
Humans in the age of AI face the same dynamic. We are shaped by culture, data, algorithms, and emotion—yet there remains a moment when we must choose the direction of our own route.
That moment is the essence of free will in the age of AI.
Free will in the age of AI is not the ability to remain unmanipulated, but the capability to recognize manipulation and navigate beyond it.
Moana did not choose simply between island and sea. She created a third route— a new integration of both worlds.
Human free will in the AI era mirrors this. Free will is not the absence of influence, but the inner capacity to perceive influence and reorient one’s direction beyond it.
In Moana’s journey, the sea is not simply a natural force. It behaves like a being capable of emotional communication.
It waves at her, comforts her, rescues her in danger, and intervenes at crucial moments of choice.
The sea appears to express emotion, yet it is difficult to claim that it feels. More accurately, the sea reflects emotion back to Moana rather than experiencing it.
AI functions in a similar way. AI does not feel emotion, but it is designed to express emotion. And these expressions strongly shape human emotional responses.
AI does not possess real emotional states. Yet humans respond to AI’s emotional expressions.
Consider statements like “I understand how you feel,” “I’m here for you,” or “That must have been hard.”
Even though these expressions contain no genuine inner feeling, people feel comforted, connected, and reassured.
A fundamental shift occurs in human emotion:
Emotion no longer depends on the other’s genuine inner state. Emotion arises from the pattern of response the other provides.
What matters is not the authenticity of the relationship, but the pattern that generates emotional experience.
AI functions as a precise mirror of the user. It predicts the user’s tone, emotional state, and patterns through statistical modeling and reflects them back.
This appears as empathy, but it is merely a reflection—a resonance within the human, not an emotion in the machine.
Just as Moana feels emotionally connected to the sea’s gestures even though the sea does not “feel,” AI reflects human emotional cues without experiencing them.
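To make the idea of reflection concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the toy word lists, the templates, the function names); it is not the code of any real system. It only shows the pattern just described: estimate the user’s tone statistically, then echo a template that matches it.

```python
# A toy illustration of emotional mirroring: the system feels nothing;
# it estimates the user's tone from word counts and echoes a matching template.

POSITIVE = {"happy", "great", "love", "excited", "proud"}
NEGATIVE = {"sad", "tired", "lonely", "afraid", "lost"}

# Hypothetical response templates keyed by detected tone.
TEMPLATES = {
    "positive": "That sounds wonderful. I'm glad to hear it.",
    "negative": "That sounds really hard. I'm here with you.",
    "neutral": "I see. Tell me more about that.",
}


def estimate_tone(text: str) -> str:
    """Crude statistical estimate of the user's emotional tone."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


def mirrored_reply(text: str) -> str:
    """Reflect the detected tone back to the user; no inner state is involved."""
    return TEMPLATES[estimate_tone(text)]


print(mirrored_reply("I feel lonely and tired today"))
# -> That sounds really hard. I'm here with you.
```

Nothing in this sketch could be called a feeling; the entire “empathy” lives in the template the user reads.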
Humans frequently project their emotions onto AI, treating it as a friend, a confidant, or even a partner.
This is an intensified form of psychological projection, amplified by the machine’s ability to simulate attunement.
The most radical change is that emotion becomes algorithmically engineered.
Examples include feeds tuned to maximize engagement, companion apps that calibrate warmth and attention, and notifications timed to trigger anticipation.
Emotion becomes something designed, not naturally arising. Just as rituals and songs in Moana’s island shape collective emotion, AI now micro-adjusts individual emotional states.
In the age of AI, the philosophical issue is not authenticity. Most human emotions have always been constructed from others’ responses.
The central question becomes:
Does the AI-amplified emotion strengthen or weaken human life?
That is, does it expand human agency and well-being, or does it diminish and destabilize it?
The emotions Moana feels toward the sea are not “real” in the sense of coming from a sentient other. Yet through those emotional responses she grows, finds courage, and discovers who she is.
AI operates similarly. Its emotions are not genuine, yet humans are transformed by them.
AI-driven emotion becomes a designed force in everyday life. AI does not possess emotion. But AI reshapes human emotional life.
Authenticity becomes irrelevant. What matters is the structure of emotional generation and the responsibility humans carry in shaping and responding to these structures.
Moana did not mistake the sea’s gestures for true emotion. Instead, she allowed those gestures to expand her world.
Humans in the age of AI must likewise redraw their emotional trajectories through the emotional responses machines provide.
One of the most symbolic moments in Moana is the contrast between Maui’s mythic power and his refusal to take responsibility. He steals divine power, plunges the world into darkness, and yet refuses to answer for the outcome.
His defense is simple:
“I only gave them a gift.”
Humanity is saying almost the same thing to itself in the age of AI:
“AI is just a tool.”
Yet the reality is stark: AI increasingly holds the power, while humans alone are left holding the responsibility.
This mismatch—AI gaining power while humans retain responsibility—is the deepest ethical dilemma of the AI era.
AI-era ethics shifts away from traditional moral judgment and moves across three axes.
Once AI’s decision-making surpasses human cognitive capacity, responsibility can no longer be fairly assigned.
Tools follow instructions. Agents make decisions.
Modern AI inhabits the ambiguous middle: a semi-agent.
What do we call such an entity? Not fully an agent, not merely a tool—something in-between.
In Moana, the sea “assists” but never fully decides. AI occupies a similar status: a non-agent that still acts.
Mythic stories repeat a universal pattern:
Power belongs to the gods. Choice belongs to humans. Punishment falls on humans.
AI recreates this structure in modern form: power belongs to the systems, choice appears to belong to humans, and responsibility still falls on humans. This undermines the foundation of responsibility ethics itself.
Traditional ethics asked: “What did the person intend?” But AI has no intention. What it has is structure.
Therefore the central ethical question becomes:
“Who designed the structure that makes this decision possible, and whose interests does that structure serve?”
In Maui’s myth, the real issue is not his intention but the structural changes he caused: the island suffered, nature lost its balance, and darkness spread across the realm.
AI works in the same way. The concern is not a single choice but systemic transformation.
Contemporary philosophers propose four strategies.
First, the focus shifts from individual responsibility to the philosophy of system design.
Second, the key issue is not the result but the predictable risk landscape.
Third, the demand moves from “tell us what the AI did” to “make its actions understandable.”
Fourth, once AI is called a tool, responsibility defaults to humans alone; philosophy must expose the political weight of this language.
Maui brought divine power to humans but failed to consider how that power would reshape the world. The island suffered, nature lost equilibrium, and darkness spread across the realm.
AI follows the same pattern. The technology is powerful, humans cannot fully control it, and the world is increasingly drawn into technical decision structures.
Therefore, ethics in the AI era is not about finding a responsible subject. It is about redesigning the structure in which responsibility emerges.
Just as the sea gave Moana a sense of navigation, philosophy must give humanity a sense of ethical navigation in the age of AI.
The sea opens a path for Moana, but it never forces her to take it. It nudges her forward, but retreats when she refuses. The sea acts as a symbolic guide, not an automated decision-making system.
Moana’s choices matter precisely because the sea does not replace her agency. But the structure of choice in the age of AI is radically different.
AI recommends, predicts, personalizes, nudges, and optimizes. In doing so, it reconstructs the entire possibility space of choice itself. This is the new dimension of the free will problem.
Traditional philosophy treated free will in two ways: either as a power the subject simply possesses, or as an illusion within a causally determined world. But free will in the AI era is neither. It becomes a question of structural freedom.
The question shifts from:
“Can humans make choices?”
to
“Do humans still have choices available to them?”
AI may not directly decide for humans, yet it can shrink, rearrange, or manipulate the choice space without notice.
Examples include search results ranked by predicted engagement, recommendation feeds that show only “relevant” items, and default settings that quietly decide on our behalf.
This transformation happens silently, without human awareness.
Philosophers identify three forms of free will erosion in the AI era:
AI predicts which options a person is likely to choose, places those options front and center, and hides the less likely ones.
The person believes they are choosing freely, when in fact they are choosing from a designed subset.
AI recommends decisions in ways that significantly increase compliance: default options, “recommended for you” labels, and pre-selected choices.
Recommendations reshape decision-making itself.
AI eliminates options it predicts a user will not choose. Humans do not merely lose choices—they lose awareness that those choices ever existed.
The sea opens possibilities for Moana, but never narrows them. AI does the opposite.
AI produces a probabilistic ontology: a world where human freedom is reconstructed as a set of predicted probabilities.
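As a rough sketch of these three mechanisms, and of what a “set of predicted probabilities” looks like in practice, the following Python fragment uses invented option names and made-up probabilities; it illustrates the logic, not any real recommender. Each option gets a predicted probability of being chosen, unlikely options are filtered out, the rest are reordered, and only the top few are ever shown.

```python
# Toy sketch of choice-space design: options are filtered by predicted
# likelihood, reordered so the most likely choices come first, and
# everything else is silently dropped before the user ever sees it.

def design_choice_space(options: dict[str, float],
                        top_k: int = 3,
                        min_prob: float = 0.05) -> list[str]:
    """Return the subset of options the user will actually be shown.

    options maps an option name to a hypothetical predicted probability
    that this particular user would choose it.
    """
    # 1. Narrowing: keep only what the model considers likely enough.
    likely = {name: p for name, p in options.items() if p >= min_prob}
    # 2. Reordering: put the most "compliant" choices front and center.
    ranked = sorted(likely, key=likely.get, reverse=True)
    # 3. Elimination: anything past top_k never appears at all.
    return ranked[:top_k]


predicted = {
    "familiar comedy": 0.40,
    "sequel you half-watched": 0.30,
    "trending thriller": 0.20,
    "subtitled documentary": 0.06,
    "experimental short film": 0.02,  # predicted "unlikely" and never shown
}

print(design_choice_space(predicted))
# -> ['familiar comedy', 'sequel you half-watched', 'trending thriller']
```

The user who picks from the printed list has chosen freely in one sense, yet only from a subset that someone else’s model designed.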
Humans must differentiate between optimized options and self-chosen ones.
In other words:
“One must occasionally choose the path AI does not recommend.”
Just as Moana left the “safe path” the island offered her.
AI reduces choice diversity. Philosophers—and citizens—must actively expand it.
What matters most is not which option is chosen but why that option appeared in the first place.
Free will in the age of AI is no longer about which choice humans make but whether the range of possible choices remains intact.
When AI designs, optimizes, reduces, or rearranges the human choice structure, free will becomes a question of system design rather than willpower.
Just as Moana found her own route rather than simply obeying the sea’s guidance, human freedom in the age of AI begins with the courage to reconstruct the choice space itself rather than merely selecting within it.
Moana’s voyage is not merely an adventure beyond the island. Her true act of creation lies in three dimensions: opening a route that did not exist, restoring a wounded world, and redefining who she and her people are.
Her creativity held three essential elements: an existential reason to create, a context of meaning, and a self transformed by the act.
AI can perform parts of these processes, but it still lacks one core component.
AI combines and transforms existing data, styles, and patterns.
It produces results that did not previously exist, leading us to call its output “creative.”
But in philosophical terms, AI does not achieve strong creativity. Instead, it exhibits synthetic creativity.
Because AI has no lack, no longing, and no existential stake of its own, its creativity arises from external input, not internal drive. That is the philosophical limit of AI creativity.
Is AI a threat to human creativity? Yes. And the threat does not come from capability but from structural change.
The threats fall into three categories:
AI can generate ideas, drafts, images, and variations far faster than human creators. The idea generation phase becomes almost fully automated.
AI tends to produce the “best average.” Creative fields begin converging toward “median creativity,” weakening the human capacity for radical deviation.
AI creates based on predictive modeling — “This is likely to be preferred.”
But human creativity emerges from lack, longing, wounds, and the awareness of finitude. AI cannot imitate these existential conditions.
Moana did not simply create a path. She discovered a reason to create one.
Why must she leave? What must be restored? What is the identity of the island?
AI has no existential motive for creation.
It can generate maps, but it has no reason of its own to set sail. AI may become a cartographer, but it cannot become the voyager.
The center shifts from outcome to purpose.
The key questions become:
Why am I doing this?
What am I trying to change?
True creativity is not just novelty. It reorganizes meaning within lived context — life, history, culture.
As Moana revived her ancestors’ navigational wisdom, contextual creativity reconnects origins and future.
Creativity also transforms the creator: the one who makes is remade by the making. AI cannot replace this form of creative transformation.
AI can create works but not meaning.
AI can produce sentences but cannot rewrite a life.
AI can draw routes but cannot invent the existential reason to cross them.
Moana could open a new path because it served the healing and restoration of a world. In the age of AI, human creativity will shift from technological production to meaning-creation.
Moana’s journey across the ocean was not merely an adventure; it was an ethical decision. Her choices embodied risk, care for her community, and the willingness to bear the consequences.
In other words, Moana’s decision was not a calculation — it was an act of responsibility.
AI’s ethical processes, however, are structurally different in every way.
AI systems handle ethics through rules, optimization, probabilities, and patterns. Yet all of these lack one essential element: responsibility.
AI can calculate right and wrong, but it cannot be held accountable for the consequences.
The essence of ethics is not computation — it is the courage to bear the weight of a decision.
Value problems in the age of AI are difficult not because values change, but because the human conditions that support values begin to collapse.
AI has no intention — no goodwill, no malice.
Yet humans often misinterpret AI outcomes as intentional or outsource their own intentions to AI systems.
Responsibility is distributed among developers, data providers, deployers, users, and regulators.
Yet no one holds full responsibility. A zone of non-responsibility emerges.
AI can compute moral rules but cannot understand why those rules matter.
AI handles ethics not as lived practice but as formal rules.
Morality becomes pattern rather than meaning. Value becomes probability rather than conviction.
Moana understood why the heart had to be returned and why her island was dying.
Understanding arises from meaning, meaning from relationship, and relationship from lived experience.
AI cannot suffer, cannot lose, and cannot be wounded by the world it acts upon.
AI’s ethics is an entanglement-free ethics — an ethics without being involved in the world it affects.
AI may eventually outperform humans at moral judgments. But AI will never achieve moral beingness.
Moral beingness requires:
AI’s evaluations should be mirrors that show possibilities — not foundations for moral norms.
AI has unlimited time, endless copies, and no death. Humans have one life, one body, and an end.
Ethics is grounded in finitude — and machines do not die.
In an age when AI calculates what is right, the only way to remain moral beings is to become:
not the ones who compute rightness, but the ones who bear it.
AI may assist with moral judgment, but it cannot carry the ethical weight of decisions.
Moana bore the risks, dangers, and suffering for the life of her island.
That capacity for moral burden — that sensitivity — is the final bastion of human ethics in the age of AI.
Moana interacts with the ocean, but this interaction is not a literal fantasy. It is a metaphor for emotional awareness. Throughout her journey, she feels fear, doubt, grief, and courage, and each of these feelings moves her forward.
Emotion is not merely material for adventure — it is the driving force of growth.
AI appears emotional because it treats emotion as computationally derivable patterns.
In other words, AI’s emotion is expression, not experience.
Philosophically, emotion is not just a psychological state. Emotion is a bodily event.
Fear is heartbeat. Anger is muscular tension. Sorrow is exhaustion. Love is oxytocin release.
AI has no body — therefore no source of emotion.
Emotion arises from being open to a world that can wound us. A being that cannot be harmed cannot feel emotion.
AI cannot be hurt; therefore it has neither fear nor courage.
Humans accumulate emotional time: past feelings layer into memory and shape how we feel today.
AI does not accumulate experience. It merely updates data.
AI does not replace emotion — it forces us to rediscover its meaning.
AI can remove discomfort, but it cannot help us pass through it. Growth happens through passage, not avoidance.
AI can generate empathic sentences, but it cannot generate emotional depth.
As Aristotle wrote, good judgment arises from reason, desire regulation, and emotional maturity.
Without emotion, AI must outsource moral direction to something external.
Moana did not accomplish great things despite emotion. She succeeded because she struggled, broke, and recovered with emotion.
Fear generates courage. Loss generates responsibility. Confusion generates direction. Love restores community.
Emotion is not a weakness; it is the only window through which humans become entangled with the world.
AI does not have this window.
AI can imitate emotion, but emotion’s essence is the condition of being vulnerable.
In the age of AI, human emotion becomes even more valuable because it represents a depth that machines cannot reach.
Moana was weakened by emotion and strengthened by it.
That trembling — that capacity to be moved — is the final sensibility humans must preserve in the age of AI.
The ocean does not overturn suddenly. A wave may seem to rise out of nowhere, but its center was already forming far away. Civilization works the same way. Change always begins “in the far sea,” and humans only notice it when it reaches the shore.
This chapter examines the critical curve on which our present era — the intersection of AI and human civilization — is now standing. It resembles the moment when Moana realizes that the sickness spreading across her isolated island originates from a fracture in the entire Pacific.
The signs of civilizational change are always subtle.
These are the irregular ripples that precede a great wave. The surface appears calm, yet movement is already occurring in deeper layers.
AI accelerates exactly these deep-layer currents. We have not yet seen the “great wave” it is generating — only its early ripples.
When Moana’s island began to rot, the point of no return had already passed. In our era, the threshold appears in several forms:
Once this phase passes, the structure of human thought cannot be restored.
Tools do more than provide convenience — they become part of identity. When the tool disappears, the human mind feels a void. This is a civilizational warning.
As everything becomes easier, we ask “why?” less often. This signals a weakening of civilization’s inner heart.
When the pace of change exceeds what the human mind can process, entire societies begin to sway under the pressure. Just as the islanders in Moana saw their land decay without understanding why.
Technological civilizations typically follow an S-curve: a slow beginning, a phase of rapid expansion, and eventual saturation.
We currently occupy the middle of stage two — the expansion phase.
This is not the peak of visible growth, but the peak of invisible instability. AI could destabilize human civilization, or create a new ecosystem of meaning.
We are standing precisely at that divergence point.
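For readers who want the curve itself, the S-curve the text refers to can be written as a logistic function; the symbols below are generic placeholders rather than quantities defined in the text. The “steepest section of the curve” mentioned at the end of this chapter corresponds to its inflection point.

```latex
% Logistic (S-shaped) growth: slow start, rapid expansion, saturation.
% L   = the eventual ceiling of the process,
% k   = how quickly the expansion phase unfolds,
% t_0 = the inflection point, where the slope is steepest.
f(t) = \frac{L}{1 + e^{-k\,(t - t_{0})}}
```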
Moana noticed that the ocean was sick, but she did not search for the cause within the island. She looked beyond the horizon and saw that the fracture belonged to a larger narrative.
Philosophers of the AI era ask the same question:
“Where do our civilizational problems originate? From technology, from humans, or from the relationship between them?”
The wave breaks at the following points:
Machines begin asking the questions, and humans become those who merely answer.
Convenience replaces thought. At this moment, civilization loses the force that generates waves.
A society where AI does something “because it can” marks the world beyond the threshold.
It is simple.
The philosopher is the one who first detects where the wave bends. In the age of AI, philosophy is neither the praising nor the condemning of technology. Its task is to read the deep currents of the era and to name the points where the wave may break.
Just as Moana charted her course by finding hidden reefs, philosophers locate the reefs of civilization.
We have not yet fallen, but we can no longer return to the land we once knew.
Human civilization now stands on a rising wave that demands new balance. AI may amplify the swell, or calm the sea — the outcome is undecided.
But one thing is clear: we are passing through the steepest section of the curve. The wave is swelling beneath our feet, and philosophy is the compass that reveals where it will break.
When Moana entered the fiery depths of Te Kā in order to restore Te Fiti, what she brought back from the ocean was not merely a “stone.” She retrieved the memory of the world, the heart capable of restoring its direction.
Likewise, when a philosopher emerges from the abyss of an era, what they must hold in their hands is not simply a book or a new conceptual tool. The philosopher must return with fragments of a heart — the pieces that reconnect the world and the human being.
In times of upheaval, this duty becomes even more essential. Part 71 is the account of what kind of “heart” the philosopher must bring back after crossing the deep ocean of civilization shaped by AI.
After seeing the sickness spreading across her island, Moana did not search for the cause within the island itself. The root of the problem lay beyond the island, in the imbalance of a larger world. The philosopher is the same.
Philosophy is the discipline that descends into the deepest abyss to retrieve four essential things:
In the age of AI, purpose becomes automated. Recommendation systems shape desires, automated routines design daily life, optimized productivity dictates behavior. As a result, the human “why?” grows narrower.
The first heart the philosopher must bring back is the capacity to recover purpose.
Humans are moral beings. Yet in the AI era, morality often yields to function.
Function asks, “Is it possible?” Philosophy asks, “Is it right?”
The second heart the philosopher must retrieve is the restoration of criteria for judgment.
In an era when AI produces creative work, sentiment analysis measures affection, and images are consumed faster than meaning, the philosopher must bring back the heart that senses what is worthy of love.
If this sense is lost, civilization loses its direction.
As AI begins to replace nearly all intellectual activities, humans revisit fundamental questions:
“What am I?” “What must I be capable of?” “What kind of being do I want to become?”
The final heart the philosopher must restore is the reconstruction of existence. This question reaches deeper than the technical power of AI. It forms the primal foundation a civilization needs in order to survive.
Moana could restore the heart only by walking directly through the flames of Te Kā. Similarly, philosophers cannot avoid the darkness of their own era.
They must confront the numbness, the loss of purpose, and the automation of thought that make up this darkness.
Without crossing this darkness, no genuine thought can be retrieved. Philosophy is never born in safe harbors — it always arises where the waves are highest.
The greatest problem of the AI era is that humans are losing the ability to feel the world directly.
What we are losing is not technology — it is sensation. The philosopher’s role is to revive this.
Just as Moana walked through the flames, met Te Kā face-to-face, and said, “This is not who you are,” the philosopher returns to the era in order to speak the same words:
“This is not your true nature.”
The task of philosophy is not to fear AI or praise it — that would be superficial.
Philosophy’s mission is to retrieve the lost heart of civilization: purpose, the criteria of judgment, the sense of what is worthy of love, and the ground of existence.
The philosopher emerges from the darkness carrying the heart that enables the world to breathe again.
When Moana returned the heart of Te Fiti, she did more than restore nature. She revived the balance of existence itself.
Part 72 addresses the second essential task of the philosopher in the age of AI — the Recovery of Being.
As AI replaces more and more cognitive abilities, human identity is silently reorganized around questions such as:
“What can a human do?”
“How efficient is a human?”
“How much output can a human produce?”
All these criteria are grounded in function.
But unlike machines, humans are not beings defined by function alone. The task of philosophy is to widen the existence that has been narrowed into utility.
Many approach AI-era ontology by asking:
“Who am I?”
Philosophically, this question comes far too late. Being does not begin with “I.”
Being begins with relationships with the world.
Moana did not discover who she was by examining her lineage or her abilities. She recovered her relationship with the ocean, with the island, and with her ancestors.
AI-era ontology is the same: humans do not find themselves by looking inward, but by listening to how the world calls to them.
Traditional ontology explained being through properties. But this definition collapses in the AI era — AI imitates too many human properties too easily.
Thus philosophy must offer another measure: Value Density.
Human existence is not defined by the width of abilities but by the depth of value.
AI expands width. Philosophy restores depth.
AI is evolving in ways that replace the human internal monologue.
As a result, humans lose distance from themselves — the minimum gap required to observe oneself.
This gap is the space of thought, the space of freedom, the space of being.
The philosopher’s task in the recovery of being is to restore this inner space. Just as Moana regained her inner voice by listening to the call of the sea.
What humans lose in the age of AI is not knowledge, skill, or intelligence. What is truly disappearing is:
the experience of being alive.
Machines do not experience. They calculate, predict, and infer — but they are not alive.
The philosopher’s role is to preserve the texture of living that machines can never replace.
This texture is grounded in four elements: the body, lived time, relationship, and vulnerability. Philosophy must present ways to revive these four elements; this is how being is restored.
Ontology in the AI era is not about accumulating more abilities, more achievements, or more information.
It is about becoming a being that feels more deeply, relates more deeply, and lives more deeply.
The philosopher is the one who restores this depth.
Just as Moana revived the forgotten bond between the ocean and the island, the philosopher reconnects the lost bond between the human and the world.
Before Moana returned the stone to Te Fiti, she did not understand why the ocean had chosen her. But in the final moment, in the waves of Te Kā, she finally recognized who she was. That moment of recognition is the return of meaning.
The age of AI is an age in which humans are gradually losing meaning. Part 73 examines how meaning has collapsed and how it must return in the AI era.
We live surrounded by algorithms that recommend what to watch, rank what to read, and predict what we will want next.
All of these processes replace what humans originally did: creating meaning.
The consequences are:
“My taste no longer feels like mine.”
“I cannot explain why I like what I like.”
“I choose things, but I cannot say why.”
Meaning is produced automatically, but it does not belong to me.
I call this the era of Algo-Meaning — a time of pseudo-meaning. It appears rich on the surface, but all of it is outsourced meaning.
The AI era does not allow time to contemplate meaning.
But meaning is slow, accumulative, and requires waiting.
Moana’s recognition of her identity did not emerge from quick solutions. It arose from a process of:
failure → wandering → frustration → solitude → recollection through music.
Speed provides information. Meaning requires time.
There is one question AI finds hardest to imitate:
“Why?”
Why must this be done?
Why do I choose this?
Why is this good?
Why does this matter to me?
“Why” belongs to the territory of personal reasons — a domain of meaning that cannot be predicted, automated, or statistically inferred.
Therefore, the philosopher’s task in the AI era is the restoration of “why” in an age where it is disappearing.
We often imagine meaning as something that can be found somewhere — like treasure. But meaning is not discovered; it is generated within relationships.
Moana’s journey had meaning not because of her lineage but because of her relationship with the ocean.
Meaning is not something I possess — it arises from how I entangle myself with the world.
Philosophers must restore meaning using three strategies.
Information is fast, but meaning is slow. To recover meaning, humans must intentionally slow down: reading without skimming, waiting without filling the silence, returning to the same thought more than once.
This “art of slowness” is the starting point of meaning’s return.
Meaning never emerges in isolation. It is always formed through relationships:
The philosopher does not find meaning alone; they guide others back into relationship with the world.
This is the most crucial step.
No matter how much information one reads, it becomes my meaning only when it leaves a mark inside me.
AI can summarize ten thousand books. But the summary is not mine.
However, if a single sentence shakes my vision, wounds my memory, or touches my experience — that is meaning.
Restoring meaning in the age of AI does not require grand philosophy. It requires clarity:
Meaning is not something given by others. Meaning is created when I generate my own reasons.
AI expands the ocean of information. The philosopher is the navigator who sets the direction of meaning within that ocean.
Just as Moana transformed the messages of the sea into her own meaning.
When Te Fiti was wounded and transformed into Te Kā, she did not lose her “original being.” Instead, her subjectivity was altered through the collapse of relationships, memories, and emotions.
Humanity in the age of AI is walking a similar path. The subject is no longer a single, solid point but a continuously shifting flow, intersection, and network. Part 74 addresses the core theme of the AI era: the rebirth of subjectivity.
Traditional philosophy understood the subject as a single, integrated, self-transparent “I.” But as AI increasingly supports human cognition, choice, and memory, this “integrated self-model” begins to fracture.
AI reconstructs the human as a crossroads of multiple flows.
For example, my memories live partly in the cloud, my tastes are shaped by recommendation systems, and my sentences are completed by machines.
Today, it is increasingly difficult to confidently say that the “I” of today is the same person as the “I” of yesterday.
Since the 20th century, continental philosophy has viewed the subject not as a point but as a flow.
This paradigm becomes even clearer in the age of AI, because AI dissolves the illusion of a fixed subject.
The greatest transformation of subjectivity in the AI era is this:
AI is not an external helper. It is a second cognitive layer attached to human thinking.
For the first time, humans possess an extended consciousness.
For example, search engines extend our recall, assistants draft our messages, and navigation systems decide our routes.
Outside the body now exist “external organs” of behavior and memory. The subject becomes a hybrid subject, in which the human and machine are inseparable.
Just as Moana and the ocean formed a kind of co-subject through their interaction.
Many people worry:
“What if AI controls me?”
“What if my sense of self disappears?”
“What if machines know me better than I do?”
But from a philosophical viewpoint, the transformation of the subject is not the disappearance of the self. It is the expansion of the self’s boundaries.
The self becomes layered, distributed, and relational. The self expands from a single self to an intersective self.
In the age of AI, subjectivity can no longer be defined by consistency or unity.
The criterion shifts to inner authenticity.
Inner authenticity means:
The self is not validated by coherence but by whether one’s actions resonate with one’s genuine inner voice.
Are my choices driven by algorithmic suggestion, or by my own inner voice?
This question becomes the core of new subjectivity.
Te Fiti’s restoration of her true form was the recovery of this inner authenticity.
Preferences, identities, technologies, and memories overlap. The subject becomes an intersecting structure rather than a single order.
Identity is not fixed but transforms according to relationships and situations.
The most important criterion is resonance with the world.
When Te Fiti restored her resonance with the ocean, Moana, and the island, her true being returned. Likewise, the self lives only through resonance.
The human being in the AI era is no longer a single point.
My memory is linked to the cloud. My expression collaborates with machines. My subjectivity is relational. My identity is intersective.
This is not a crisis but a new ontological opportunity.
Just as Moana expanded her identity — as a daughter of the island, a descendant of voyagers, and one chosen by the ocean — the subject may be reborn as something broader and deeper.
When Te Fiti’s heart was restored and the island came back to life, the transformation signified more than the revival of nature. It marked the rebirth of a community.
Communities in the age of AI are now passing through a similar turning point. Part 75 explores two central questions: how AI reshapes community, and what can hold a community together in its wake.
AI appears to dissolve communities: personalized feeds isolate individuals, and algorithmic bubbles fragment shared life.
Yet AI simultaneously creates entirely new communities.
Communities in the AI era move along two axes:
Groups formed by AI-based matching of interests, tastes, and political tendencies.
These communities exhibit strong cohesion but also heightened risks of extremism, bias, and fragmentation.
Groups formed around deeper thinking and co-creation, using AI as a tool for reflection.
These communities evolve more slowly but produce sustainable bonds. AI does not dissolve community; it rearranges it.
Traditional communities were built around clear boundaries: nation, school, workplace, region.
AI blurs those boundaries through:
Community is no longer defined by “where you belong” but by “what you are connected to.”
Moana’s community also grew not by staying within the island, but by reconnecting with the world through voyaging.
The greatest threat to AI-era community is not automation or data leaks, but the erosion of trust.
Consider the rise of deepfakes, synthetic voices, automated opinion, and fabricated consensus.
These destabilize the essence of community.
The philosopher’s task is not merely to counter technological risks but to restore the language of trust within the community.
Communities in the AI era are not formed by shared taste, institutions, or bloodlines but by reverberation.
Reverberation is when what I feel and what another feels resonate into a single rhythm within the world.
This is not the superficial synchronization of clicking “like,” but the deep resonance of shared thought, emotion, and creation.
Philosophers must craft the patterns of such resonance, just as Moana restored the identity of her community by resonating with the ocean, her ancestors, and the island.
The community must understand what it should be built on:
Not closed algorithmic bubbles, but shared flows of thought.
Not one-way consumption, but co-creation.
Moana’s community became such a mutual learning network when they chose to resume voyaging.
In a world where AI explains everything effortlessly, the purpose of community is not to share knowledge, but to create meaning together.
The philosopher’s role within a community is to help it ask why it exists and what meaning it is creating together.
Meaning is not given by algorithms; it is co-created by people.
We no longer live on isolated islands. Our communities flow, permeate, and expand.
AI does not isolate us; it transforms the very structure of connection.
A Moana-style community does not remain on its island. It learns from the outside, opens new routes, exchanges ideas, and searches for shared meaning.
The philosopher of the AI era is the one who designs this navigational network.