As Moana sailed across the sea, she did not acquire a new map; she learned how the world reveals itself. Knowledge in the age of AI undergoes the same transformation. We are no longer beings who store knowledge, but beings who connect, interpret, and orient knowledge.
Traditional knowledge relied on four functions: memorizing, applying through skill, expanding through experience, and preserving through texts. AI disrupts all four. Information retrieval becomes instantaneous. Analysis and summarization are automated. Solutions to problems are proposed algorithmically. Even creative work is partially replaced. As a result, the importance of “possessing knowledge” rapidly declines.
Knowledge is no longer defined by the amount of information one holds. Instead, it is redefined along three dimensions:
First, knowing is the ability to formulate questions. AI excels at producing answers but struggles to generate meaningful questions. True knowledge lies in understanding what to ask, why to ask it, and how those questions shape inquiry. Moana began solving her island’s crisis only when she first asked why the island was decaying. Knowledge shifts from answers to the art of questioning.
Second, knowing is the ability to generate meaning through relationships. AI can list facts, but connecting them into shared human meaning remains a uniquely human task. Real knowledge involves recognizing patterns, reading context, interpreting the world, and transforming experience into a map of understanding. Knowledge becomes relational structure rather than isolated information.
Third, knowing is the capacity to judge and take responsibility. An AI’s judgment is computation; a human’s judgment involves accountability. Ethics, values, and norms belong to human beings. In philosophical terms, knowledge becomes not a set of facts but an intellectual-ethical framework that enables responsible choice.
Historically, knowledge evolved across three stages. First, memory: wisdom belonged to those who could store vast information. Second, context: as information increased, interpretive ability became central. Third, direction: now that AI can interpret some information, human knowledge centers on providing orientation and direction. The philosopher’s role resides precisely in this power to orient.
What Moana received from Tala was neither a map nor an answer. She received the sensibility to read the sea: sensing the rhythm of waves, navigating by stars, predicting changes in wind, and grasping the world’s subtle signals. Knowledge in the age of AI is the same. True knowledge is not information but attunement to the world.
Philosophers must redraw the boundaries of scholarship in the age of AI. They must combine: technical literacy, to understand AI’s nature and limits; humanistic literacy, to grasp emotion, body, culture, and narrative; existential literacy, to answer how we ought to live; social literacy, to connect technology with human life and society; and intellectual direction, not merely offering knowledge but guiding pathways.
Moana was able to lift the curse from her island not because she possessed powerful tools or superior knowledge, but because she could feel the wound hidden beneath Te Kā’s rage. The same principle applies in the age of AI: no matter how advanced technology becomes, the capacity to understand, heal, and take responsibility for emotions remains a core human asset. At the same time, AI has already entered emotional domains through sentiment analysis, empathetic dialogue systems, and emotional support tools. This section explores whether AI competes with human emotion, complements it, or creates a completely new relationship.
Many assume that if AI can simulate empathy, it can replace human emotion. But understanding emotion requires three essential components that AI does not possess.
First: bodily sensation. Humans feel emotions physically—anger as heat, fear as trembling, sadness as heaviness. AI does not experience bodily sensation.
Second: inner memory and lived experience. Emotions emerge from past traces—first love, loss, failure, joy. AI has no inner history or subjective experience.
Third: value judgment. Emotion is not just reaction but an evaluation of meaning: why something matters. AI does not hold values.
For this reason, AI’s “understanding” of emotion is computational imitation. It may be functional and useful, but it cannot be an ontological replacement for human emotional life.
Although AI cannot become an emotional subject, it can serve as a powerful emotional tool. It enhances human emotional life in several areas.
Enhanced emotional expression.
AI can translate feelings into text, music, or images; articulate emotional structures;
and offer language for emotions people previously could not express.
Improved emotional recognition.
AI can analyze subtle emotional cues in faces or voices and reveal emotions individuals
may not have consciously recognized.
Tools for emotional healing.
AI-enabled systems support reflection, reduce anxiety or depression, and provide personalized narrative reconstruction.
AI does not replace human emotion; instead, it expands, clarifies, and supports emotional life.
AI’s emotional operations are functional: input → analysis → output. Human emotions are existential: they arise from the whole being. The two do not compete; their roles diverge.
AI functionalizes emotion through precise analysis and patterned response. Humans existentialize emotion through meaning, memory, relational weight, and the history of choices. Human emotion moves toward deeper, more meaningful layers in the age of AI.
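To make this contrast concrete, the following minimal sketch (in Python) illustrates the functional pipeline described above: input, analysis, output. It is a toy illustration only; the lexicon, labels, and response templates are invented and do not represent any real system. The point is simply that the machine side of “emotion” is pattern matching over symbols, with no felt experience behind it.

```python
# A toy illustration of the "input -> analysis -> output" pipeline described above.
# The lexicon, labels, and templates are invented; no real system works this crudely,
# but the structure (pattern in, patterned response out) is the point.

from collections import Counter

# Hypothetical toy lexicon mapping words to emotion labels.
EMOTION_LEXICON = {
    "lost": "sadness", "miss": "sadness", "alone": "sadness",
    "angry": "anger", "unfair": "anger",
    "afraid": "fear", "worried": "fear",
    "grateful": "joy", "happy": "joy",
}

RESPONSE_TEMPLATES = {
    "sadness": "That sounds heavy. Do you want to say more about what you lost?",
    "anger": "It sounds like something felt unjust to you.",
    "fear": "It sounds like there is a lot of uncertainty right now.",
    "joy": "It sounds like something good happened.",
    "neutral": "Tell me more about how you are feeling.",
}

def analyze(text: str) -> str:
    """Analysis step: count lexicon hits and return the most frequent emotion label."""
    hits = Counter(
        EMOTION_LEXICON[word]
        for word in text.lower().split()
        if word in EMOTION_LEXICON
    )
    return hits.most_common(1)[0][0] if hits else "neutral"

def respond(text: str) -> str:
    """Output step: map the detected label to a canned, emotion-shaped reply."""
    return RESPONSE_TEMPLATES[analyze(text)]

if __name__ == "__main__":
    print(respond("I feel so alone since I lost my grandmother"))  # sadness template
```

However finely such a pipeline is engineered, it remains the functionalization of emotion; the existential layer of memory, relationship, and responsibility lies entirely outside it.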
In the industrial era, emotion was seen as an obstacle. In the AI era, emotion becomes an advantage, for three reasons:
• AI cannot possess emotional subjectivity.
• Emotion builds community and cooperation.
• Emotion creates meaning, values, and direction.
Just as Moana healed the world by empathizing with Tekā’s pain, emotion remains a uniquely human power: a force for restoration, understanding, and connection.
Philosophers must redraw the place of emotion in the age of AI. They must:
• redefine emotion as a core center of meaning-making,
• design the ethics of emotion–AI interaction,
• integrate emotional functionalization with emotional existential depth,
• restore and guide emotional communities.
Philosophy becomes not only a discipline that interprets emotion, but one that designs the future relationship between emotion and technology.
Moana had to travel beyond the edge of the known world. She needed to forge a path that did not appear on any map. Creativity is the same: it is the ability to draw what the map does not yet show. In the age of AI, one of the fiercest debates concerns creativity. Is it uniquely human? Will AI replace it? Or will humans and AI collaborate to create entirely new forms of creativity? This section argues that the relationship is not competitive but one of convergence and amplification.
AI excels at generating new images, new stories, musical combinations, idea blends, and simulated imagination. However, all of these are variations of existing data. AI can produce what appears to be new, but that does not mean it possesses creativity in the human sense. Human creativity contains elements that AI fundamentally lacks.
1) Experience-based creativity.
Human creativity emerges from pain, loss, love, joy, failure, vulnerability, and the weight of lived experience.
AI may have access to the same "data," but it does not carry the weight of life. Only humans can feel the sorrow of
Te Fiti losing her heart.
2) Intentionality and value judgment.
Creativity requires a “why.” Why create this? What value should this expression embody? AI can calculate purposes,
but it cannot set its own purposes.
3) Existential risk-taking.
Creativity is accepting the possibility of failure — decisions that can alter lives, break old patterns, or reshape
the world. AI does not experience risk or responsibility. Moana crossed the world's edge not for adventure but out of
responsibility for her island. AI cannot bear responsibility, and therefore cannot direct the purpose of creativity.
AI performs several functions associated with creativity: expanding ideas, combining concepts, instantly visualizing concepts, running countless experiments, and generating rapid prototypes. These capabilities amplify human creativity.
Acceleration 1: Speed of ideation.
Experiments that once took years can now be attempted in a single day.
Acceleration 2: Visualization of imagination.
Mental images take immediate, tangible form.
Acceleration 3: Reduction of cognitive bias.
AI reveals connections humans might overlook.
Acceleration 4: Transpersonal collaborative creativity.
AI and humans form hybrid creative structures.
In this structure, AI becomes the engine of creativity, and humans become the rudder.
Human ontology + AI computation = expanded creativity.
AI brings computational creativity. Humans bring ontological creativity. The two do not conflict; together, they form a synergistic system.
Humans set meaning, direction, values, and risk. AI provides speed, expansion, experimentation, and variation. Moana chose the path; the ocean empowered it. AI, like the ocean, offers expanded possibilities — but only humans can determine the direction of the voyage.
Philosophers in the AI era must redefine creativity itself. They must address:
• What counts as creativity?
• Does an AI-generated work possess authorship?
• How far can human creativity expand with AI?
• What is the structure of human–AI collaborative creativity?
• Who bears responsibility for creative outcomes?
Creativity is no longer merely an individual capacity. It becomes an ontological event produced by the collaboration of humans, tools, and systems.
One of the most important scenes in Moana is the moment when Te Fiti, after losing her heart, transforms into the furious lava demon Te Kā. This moment mirrors a central question in AI ethics today:
“When a tool goes wrong, is the tool to blame, or the being who wielded it?”
The core of AI ethics is not regulation itself, but the question of where responsibility should be located. AI can fail and malfunction. But holding AI “responsible” would be like sending a knife to prison for stabbing someone. Responsibility is a human concept, not a technical one.
Public debates often revolve around questions such as:
“Why did the AI discriminate?”
“Why did the AI make a harmful decision?”
“Did the AI deceive someone?”
“Should AI have autonomy?”
These questions hide a problematic assumption: that AI can be an agent. Technically, philosophically, and legally, AI does not carry responsibility. It has no intention, no autonomous value system, no capacity for moral responsibility, no interests, and no legal standing. Yet society often shifts blame toward AI. This enables humans to escape responsibility.
If AI is biased, the data was biased.
If AI manipulates, someone trained it that way.
If AI malfunctions, the people who deployed and verified it are responsible.
If AI harms, the social decision-making structure failed.
AI ethics is not primarily about technology—it is about the design of human institutions and social systems. Just as the island in Moana was harmed not by a monster but by a structure that had lost its heart, AI problems reflect structural failures.
Traditional views say:
“Let’s regulate AI so it doesn’t behave badly.”
This treats AI as if it, rather than the humans behind it, were the bearer of responsibility.
A more accurate view is:
“How should responsibility surrounding AI be distributed?”
This requires redesigning laws, organizations, policy, and ethical frameworks.
Redistribution of responsibility involves four layers:
1) Data responsibility:
Who created the bias? Who selected and cleaned the data?
2) System design responsibility:
Who chose the model? Who designed the algorithm? Who anticipated risks?
3) Deployment responsibility:
Who decided where, when, and how the AI should be used?
4) User responsibility:
How is the user guided toward or away from certain decisions?
AI ethics is the structuring of distributed responsibility.
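One hedged way to picture what “structuring distributed responsibility” could mean in practice is to imagine every deployed system carrying an explicit record that names an accountable human party for each of the four layers above. The sketch below is a hypothetical illustration, not an existing standard or legal framework; the class, field names, and example are invented.

```python
# A hypothetical sketch of distributed responsibility made explicit as a data structure.
# The four fields mirror the four layers listed above; nothing here is an existing
# standard, only an illustration of the idea that every layer names a human party.

from dataclasses import dataclass
from typing import List

@dataclass
class ResponsibilityRecord:
    data_steward: str        # 1) Data: who selected, cleaned, and audited the data
    system_designer: str     # 2) Design: who chose the model and anticipated risks
    deployment_owner: str    # 3) Deployment: who decided where, when, and how it is used
    usage_supervisor: str    # 4) Use: who guides users toward or away from decisions

    def accountable_parties(self) -> List[str]:
        """Every layer resolves to people or institutions, never to 'the AI' itself."""
        return [
            self.data_steward,
            self.system_designer,
            self.deployment_owner,
            self.usage_supervisor,
        ]

# Invented example: a hiring-screening system with its responsibility made explicit.
record = ResponsibilityRecord(
    data_steward="HR analytics team",
    system_designer="Vendor modeling group",
    deployment_owner="Recruiting department head",
    usage_supervisor="Hiring managers",
)
print(record.accountable_parties())
```

The design point is deliberately simple: no field in such a record is permitted to name the system itself, so responsibility always resolves to people and institutions.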
Ethics in the AI era cannot be treated like classical moral philosophy focused on individual virtue. Appeals such as “developers should be good,” “we must build good AI,” or “we need more ethics education” are no longer sufficient.
AI ethics now involves:
Ethics as reconstruction of social systems:
Legal responsibility distribution, organizational accountability, technical transparency, AI–human interaction design,
and societal safety mechanisms.
Ethics as coordination across technology, politics, and society:
Who controls the system? According to what standards? In which contexts is AI acceptable?
AI ethics is not a technical problem—it is a problem of power and responsibility.
Moana’s task was to return the heart to Te Fiti. Ethics requires a similar restoration.
The philosopher of the AI age does not merely solve ethical problems—they restore the core of responsibility. They must:
• Design responsibility distribution models.
• Analyze tech–social power structures.
• Dismantle technological determinism.
• Redefine the nature of human decision-making.
• Clarify the distinction between tool responsibility and user responsibility.
• Build the perspective that “Ethics is not emotional reaction but structural healing.”
The philosopher becomes not a teacher of ethics, but an engineer who designs the structures in which ethics operates.
Just as Moana recovered the lost song of her island, we must ask: What must we recover in the age of AI?
As Moana navigates between the ocean, the island, herself, and her ancestors, she rediscovers the question:
“Who am I?”
This mirrors the identity crisis humans now experience as they confront AI. AI writes, draws, composes, and produces academic work—appearing to take away parts of “my” ability. This fuels the fear:
“If AI performs better, does my identity weaken?”
“If AI replaces my roles, do I disappear?”
But this line of questioning begins from the wrong map. Just as Moana misunderstood the cause of the island’s sickness, we misunderstand the foundations of human identity.
Many people assume:
Humans are beings who “remember” → AI remembers better.
Humans are beings who “calculate” → AI calculates better.
Humans are beings who “analyze” → AI analyzes better.
Thus, if AI becomes superior, human identity is threatened.
But identity is not the sum of abilities.
Identity is a relational structure.
Human identity is formed through:
• How we relate to the world
• How we communicate with others
• How we interpret meaning
• How we construct ourselves as narrative beings
• How we perform roles within society
AI can imitate abilities but cannot become a subject of relations. AI can perform tasks, but it cannot possess identity.
When agricultural labor was mechanized, humans lost roughly 90% of their physical labor roles—but human identity did not weaken. Humans simply moved into new domains.
The same applies to AI. Human abilities do not disappear—they shift.
Memory → transferred to AI
Calculation → transferred to AI
Mechanical writing → transferred to AI
So what do humans do?
Human roles move to:
• Interpretation
• Judgment
• Responsibility
• Value creation
• Narrative agency — the ability to structure life as a coherent story
Identity is not erased. It is re-leveled—shifted upward into a more complex domain.
AI does not threaten humanity—it expands the human expressive space.
1) Expansion of inner life
AI enables deeper exploration of thoughts and emotional landscapes.
2) Expansion of creativity
Even without drawing or composing skills, imagination becomes expressible.
3) Expansion of personal narrative
Individuals can craft stories and worldviews in new ways.
4) Expansion of social identity
AI broadens the scope of social interaction and community formation.
5) Expansion of possible forms of being
AI increases the range of identities humans can explore or aspire to.
In short, AI is not shrinking identity—it is expanding the dimensions of possible humanity.
Identity shakes not because AI is strong, but because our model of human identity has been too narrow.
• Ability-centered identity
• Labor-centered identity
• Productivity-centered identity
• Credential- and expertise-centered identity
These models collapse when AI appears. What must be discarded is not identity itself, but identity’s old framework.
In Moana, the island was healed not simply because the heart was returned, but because the island was understood in a new way.
Human identity in the age of AI must be defined as follows:
“The human being is the creator of their own narrative.”
Humans do not merely possess abilities. They use abilities—human and machine—to design the direction of their life and generate new meaning.
The philosopher’s role also changes. They are not merely analysts, protectors, or definers of identity. They become architects of narrative identity in the technological age.
Just as Moana understood the “emotion” of the ocean, we must ask: What fate awaits human emotion in the age of AI?
Moana’s ability to sense the ocean and read the heart of the island was less about logical analysis and more about relational sensitivity. In the age of AI, we face similar questions:
If AI imitates emotion, do human emotions disappear?
If technology expresses emotion, does human uniqueness weaken?
These assumptions misunderstand the deeper layers of emotional experience. Emotions contain dimensions that technology cannot replace. In fact, AI opens new possibilities for the expansion of human emotion.
AI does not “possess” emotion.
AI only “displays” emotion-like patterns.
The nature of AI-generated emotion is:
• No actual feeling
• No internal experience of pain or joy
• Only statistical reproduction of emotional patterns
• Emotional expression as function, not essence
Thus, AI imitating emotion and AI experiencing emotion are fundamentally different.
When AI says, “I’m sad,” it is like a wave changing shape—an external movement, without emotional depth.
Emotion is not a mere biological reaction. It is a multi-layered network of meaning composed of:
• Memory
• Values
• Worldview
• Relationships
• Narrative position
• Moral judgment
• Life context
Examples:
Joy is an expression of “positive meaning.”
Sadness is a reaction to “the meaning of loss.”
Anger is a response to “the violation of justice or value.”
Human emotion is meaningful. AI cannot possess such a meaning system.
AI’s role is not to eliminate human emotion but to expand and enhance it. AI amplifies emotional life in several ways:
1) Expansion of Emotional Language
AI helps articulate emotions that are difficult to describe.
2) Expansion of Emotional Understanding
AI enables deeper recognition of subtle emotional cues in others.
3) Expansion of Emotional Expression
Music, writing, and imagery become more accessible forms of emotional articulation.
4) Expansion of Emotional Healing
In mental health and emotional support contexts, AI can become a safe emotional space or a first responder.
5) Expansion of Emotional Experimentation
AI allows humans to safely test emotional possibilities within new scenarios.
As AI grows better at generating emotional expression, two illusions emerge:
Illusion 1: “AI has emotions.” No. AI is engineered to appear emotional.
Illusion 2: “Human emotions become less special.” No. Only humans generate meaning through emotion.
The better AI simulates emotion, the more clearly human emotional uniqueness becomes visible.
Moana could understand the ocean’s expression, but she could never feel the ocean’s emotion on its behalf.
Many philosophers ask:
“If AI replaces thinking, will humans remain only as emotional beings?”
But emotion is not the last fragment of humanity—it is the foundation.
Just as Moana understood the island’s anger, wounds, and sorrow—and transformed these into meaning to heal the world—human emotion in the AI age becomes:
• A way of interpreting the world
• A mode of encountering others
• A method of self-construction
• A generative force of values
Emotion is the starting point of all human existence.
In Moana’s journey, when the island was suffering, the ocean was furious, and the heart of Te Fiti was stolen, a natural question arose: Who is morally responsible? Maui? Moana? The ocean? The island?
The same question is repeated in the age of AI. When AI does something wrong, who carries the responsibility?
This leads to a central question: Can AI become a moral agent?
AI performs many tasks: making decisions, generating ideas, classifying data, offering recommendations. From the outside, it seems to act like an autonomous agent.
But AI lacks:
1) Intentionality — AI does not possess goals; it merely receives them.
2) Self-awareness — it does not experience its actions as its own.
3) Moral understanding — right and wrong are not rules but structures of lived value.
4) Responsibility — responsibility requires self-interpretation and moral accountability.
AI can act, but it cannot take responsibility — agency without responsibility. A wave may overturn a boat, yet we do not assign “moral blame” to the sea.
Responsibility falls on one or more of the following:
1) Developers — those who design the structural foundations of AI decisions.
2) Users — those who choose how AI is applied.
3) Organizations — those who manage data, policies, and operational frameworks.
4) Institutions — laws, norms, and regulatory systems that distribute responsibility.
AI morality is not a technical issue; it is a political and social design issue. In Moana's world, the problem did not lie in the island or the ocean but in the misuse of their power by human choice.
Many debates focus on:
“Let’s make AI more ethical.”
“Let’s program AI to make moral judgments.”
“Let’s give AI something like human morality.”
But this is like trying to give moral responsibility to the ocean.
The real task of AI ethics is not enhancing the moral capacity of AI but redesigning the human responsibility structure.
This includes:
• Transparency
• Auditability
• Data governance
• Responsibility allocation frameworks
• Risk mitigation structures
These belong to human institutions, not to some imagined “mind” of AI.
Paradoxically, as AI becomes more powerful, the human ethical role does not shrink — it expands.
Reason 1: AI cannot replace moral judgment, so ethical choices remain strictly human.
Reason 2: AI risks and failures originate in human design, selection, and governance.
Reason 3: AI increases human capabilities, and thus expands human moral responsibility.
Moana’s journey expanded her freedom, but it also expanded her responsibility. The same applies to the AI era.
AI ethics is not about forbidding technology. It is about deciding:
• How far we should go
• For what purpose
• In which direction we will navigate
AI ethics is not fundamentally about regulation but about wisdom.
We cannot ban the wind. We cannot stop the waves. But navigation has always belonged to humans.
AI cannot become a moral agent. Therefore, humans must become more ethical.
The development of AI does not reduce human moral responsibility — it expands it.
Just as Moana crossed the enchanted ocean and restored the island, humanity must cross the sea of technology with wisdom.
If Moana’s decision to cross the ocean was an expression of freedom, where did that freedom come from? Understanding freedom in the age of AI requires answering this same question.
When Moana leaves the island, we naturally think: “Crossing the ocean = the realization of freedom.” But the ocean did not give her freedom. She received help from the sea, but choosing to leave was entirely her own act.
Freedom is not the expansion of technological capability; it is the expansion of the structure within which humans can choose. Modern discussions about AI and freedom often miss this point.
AI significantly increases human abilities and possible choices:
1) Expansion of capability (augmentation)
• Access to more information
• Faster decision-making
• Increased efficiency
→ New options become available.
2) Removal of constraints
• Reduced labor burden
• Automation of repetitive work
• Management of complexity
→ Lowers the cost of making choices.
3) Construction of new environments
• Digital spaces
• Large-scale simulations
• Hyper-personalized tools
→ Realizes environments previously possible only in imagination.
Like the ocean opening new routes for Moana, AI opens new forms of possibility.
More choices do not automatically create more freedom. Sometimes they reduce it.
1) Automated choices replace human decisions
Ride-hailing algorithms, recommendation systems, gamified interfaces—
people follow “paths designed to be chosen,” rather than actively choosing.
2) Surveillance, recording, and prediction restrict freedom structurally
AI predicts human behavior and adjusts the world to fit those predictions.
This creates a world “already chosen before we choose.”
3) Weakening of internal capacity
When AI performs thinking, memory, and exploration,
the human ability to choose erodes — cognitive outsourcing.
AI both expands and diminishes freedom. Like the sea, it can open a path or make us lose our way.
We often ask: “Has AI taken away our freedom?” “Has AI made us more free?” These are the wrong questions.
The essence of freedom lies in the structure of the entire environment that makes choice possible:
• Who designs the available options?
• How are choices structured?
• What forces influence or steer the choices?
• Who controls the consequences of choosing?
No matter how advanced technology becomes, if the structure is asymmetrical, freedom disappears.
Freedom in the AI era is not about technology — it is about design. Moana’s freedom was shaped by tradition, family pressure, and the call of the ocean. Freedom is always a constructed environment.
Philosophically, freedom is not merely “doing what one wants.” Freedom consists of three elements:
1) The ability to see possible paths (cognitive clarity)
2) The ability to select among them (autonomy)
3) The ability to execute the chosen path (practical capability)
How AI alters these three components is the real issue of freedom.
Moana received help from the ocean, but she did not obey the sea. She determined her own course.
Humans must do the same in the age of AI.
Traditional freedom meant: “Choosing without external interference.”
Freedom in the AI era means: “The ability to chart one’s own course within a complex technological environment.”
This requires:
• The ability to discern what matters amid overwhelming information
• The ability to maintain one’s will amid automated suggestions
• The ability to pursue inner growth despite technological convenience
Freedom is not dominating the ocean, but maintaining one’s direction while sailing upon it.
The question that shook Moana throughout her journey — “Who am I?” — returns even more sharply for humans living in the age of AI.
Moana’s story can ultimately be summarized in one sentence: “Who am I?” Was she the daughter of the island? The successor to the chief? The one chosen by the ocean? Or a navigator who carved her own path?
Humans in the age of AI face the same question. When data predicts us, algorithms classify us, and platforms define us, we inevitably ask: “Am I myself, or am I merely data?”
AI understands humans through patterns:
• Click patterns
• Movement patterns
• Consumption patterns
• Emotional patterns
• Decision tendencies
• Relationship networks
In other words, AI does not see a fixed “self.” It sees flows of data. It reduces humans not to essence but to statistical distributions of behavior.
To AI, “I” is not a being but a predictable tendency — just as the ocean did not see Moana as a single unique human, but rather as “the one capable of crossing.”
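A toy sketch can make this “patterned self” tangible. The behavioral log and categories below are invented, and real recommender systems are far more elaborate, but the reduction is the same in kind: the person appears only as a frequency distribution over past behavior, from which the next action is predicted.

```python
# A toy illustration of the "patterned self": a person reduced to a frequency
# distribution over past behavior. The log and categories are invented.

from collections import Counter

# Hypothetical behavioral log: what one person clicked on over a week.
click_log = ["sea_documentary", "sailing_gear", "sea_documentary",
             "island_travel", "sea_documentary", "sailing_gear"]

# The "self" as a statistical distribution of behavior, not a narrative being.
pattern = Counter(click_log)
total = sum(pattern.values())
patterned_self = {item: count / total for item, count in pattern.items()}
print(patterned_self)  # approx. {'sea_documentary': 0.5, 'sailing_gear': 0.33, 'island_travel': 0.17}

# Prediction: the most probable next click, i.e. "a predictable tendency".
predicted_next = pattern.most_common(1)[0][0]
print(predicted_next)  # 'sea_documentary'
```

Nothing in this distribution knows why the person keeps returning to the sea, or what the ocean means to them; it only estimates that they probably will return.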
AI increasingly predicts human behavior — in some cases, better than individuals predict their own. When this happens, profound concerns arise:
• Are my choices truly mine, or shaped by algorithms?
• Are my tastes, decisions, and emotions authentic, or illusions produced by data patterns?
• Do I really know myself, or does AI know me better?
These questions cause not just cognitive dissonance, but existential shock. Moana also struggled with whether she should accept an identity the ocean offered, or create her own.
Philosophically, identity is not the sum of data points. Identity consists of three core elements:
1) The interpretation I give myself (narrative)
Identity is a story — one we create.
2) Continuity across time
The ability to connect yesterday’s self to today’s self.
Data alone cannot account for this coherence.
3) Meaning that exceeds circumstance
Data explains patterns, but it cannot assign meaning.
Data can describe me, but it cannot define me. Moana could have rejected the ocean’s call; identity always contains the element of self-choice.
Humans today must reinterpret themselves on top of data, not beneath it.
AI’s version of me (patterned self)
vs
The version I choose (narrative self)
Balancing these two is the philosophical skill of identity in the AI era.
AI can serve as a mirror that reflects who we appear to be, but if we fully surrender to that mirror, we lose ourselves.
What matters is not that AI analyzes me, but that through AI I come to understand myself more deeply.
AI should not replace the self — it should expand the self. Just as Moana used the ocean but never became its puppet.
AI reduces humans to data, but human identity is not the sum of data points. It is the integration of:
• Meaning
• Narrative
• Choice
• Interpretation
• Relationship
• Freedom
No matter how advanced technology becomes, identity can only be constructed by humans.
As Moana ultimately said:
“I am the one who chooses my path.”
Humans in the age of AI remain exactly that kind of being.
When Te Fiti’s heart is restored in Moana, the island blooms again and life returns. This moment is not merely a restoration but an act of creation. What makes this “creation” different from what we call human creativity? And is what AI generates truly creative?
This question sits at the center of modern debates spanning philosophy, cognitive science, the arts, and AI ethics.
Philosophy has two major traditions:
1) Creativity as Invention (analytic philosophy, cognitive science)
Creativity is the recombination of existing elements.
Humans do not create ex nihilo but reorganize what already exists.
Creativity becomes an algorithmic process.
Under this view, AI can easily be considered creative if it recombines data into new forms.
2) Creativity as Discovery (continental philosophy, phenomenology, aesthetics)
Creativity is the revealing of latent meaning.
Human consciousness, lived experience, and ontological sensitivity are essential.
Creativity involves chance, intuition, and the unconscious.
Under this view, AI’s output is not true creativity because it lacks depth of experience.
AI-generated art, text, and music grow exponentially every day. On the surface, they look like “new works.”
But fundamentally, AI produces probabilistic recombinations of training data. It outputs “one of many possible combinations.”
Humans, by contrast, select a singular meaning in a lived relationship with the world. This marks the difference between:
• Te Fiti’s restored breath of life — creation as meaning returning
• AI’s outputs — creation as pattern generation
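The claim that AI outputs “one of many possible combinations” can be made concrete with a small sketch. The probability table, vocabulary, and sampling scheme below are invented toys; large generative models work on the same principle at vastly greater scale, recombining learned patterns rather than selecting a singular meaning.

```python
# A toy illustration of probabilistic recombination: given the same starting word,
# the system samples one of many possible continuations from a probability table.
# The table and words are invented; the principle, not the scale, is the point.

import random

# Hypothetical next-word probabilities "learned" from training data.
next_word_probs = {
    "the":    {"ocean": 0.5, "island": 0.3, "heart": 0.2},
    "ocean":  {"calls": 0.6, "rises": 0.4},
    "island": {"blooms": 0.7, "sleeps": 0.3},
    "heart":  {"returns": 1.0},
}

def generate(start: str, length: int = 3) -> str:
    """Sample one of many possible word combinations, step by step."""
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Same prompt, different runs, different recombinations of the same material.
for _ in range(3):
    print(generate("the"))
```

Run it repeatedly and the same starting word yields different recombinations of the same material; which of those combinations matters, and why, is not a question the sampler can ask.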
When Moana approaches Te Kā, she says: “This is not who you are… this is what you became when you lost your heart.”
Te Fiti is not invented anew; she is a meaning waiting to be revealed again. This reveals something essential about human creativity:
• Creativity is not producing something from nothing.
• Creativity is revealing what was latent and reconnecting meaning.
AI can generate patterns, but it cannot reconnect meaning with world and self. AI might resemble Te Fiti, but it cannot undergo the transformation from Te Kā back to Te Fiti — because that return requires an ontological act of integration.
AI does not threaten creativity; it forces us to redefine it.
AI expands the range of combinations.
• infinite variations
• unlimited experiments
• new forms of exploration
Humans provide depth of meaning.
• Why choose this form?
• What do I seek to reveal?
• What worldview does this choice express?
The truly creative human in the AI era is the one who selects a path across the vast ocean of possibilities AI opens, and gives that path meaning — just like Moana.
The creator remains the human who chooses to navigate.
The ocean opened the way, but Moana persuaded Maui and restored Te Fiti. The final choice was Moana’s, not the ocean’s.
AI is the same.
AI = an ocean of possibilities
Human = the navigator
Creativity = the worldview revealed through choice
It was Moana — not the sea — who returned the heart of Te Fiti. And in the age of AI, the essence of creativity ultimately returns to the same question: “Who chooses?”
In Moana’s world, Te Kā is not merely a monster. She is a force capable of burning islands, blocking passage, and disrupting the harmony of wind and water. She represents an overwhelming “structure of power.” In the age of AI, algorithms function in much the same way. They do not possess intention, yet they shape the movement of entire societies, forming a new kind of power.
Thus arises a philosophical question: Is AI a new god, or a new empire?
Traditional power was visible:
kings
states
empires
leaders
armies
capital
media
These were authorities whose identities were known and whose control could be seen. But AI’s power is invisible.
Characteristics of AI power:
• It lacks transparency — its decision-making processes are opaque.
• It lacks a single agent — no king, no empire, just distributed systems.
• It permeates everyday life — search, recommendations, finance, healthcare.
• It appears neutral — “not a decision, just a calculation,” yet the calculation becomes the rule.
Like Te Kā, it has no intention, but its impact dominates reality.
AI does not desire.
AI does not seek domination.
AI does not act on its own.
Yet the structures built around AI — through dependence and integration — generate power stronger than many empires.
• Finance: AI evaluates credit.
• Politics: algorithms reinforce ideologies.
• Employment: AI screens résumés and interviews.
• Law and surveillance: AI predicts criminal risk.
Within such structures, humans lose autonomy much like the islands overshadowed by Te Kā.
Just as Moana had to understand whether the true source of power belonged to Maui or Te Fiti, modern society must ask: Who owns AI?
1) Big Tech
AI becomes a force multiplier for large corporations — data monopoly, market dominance,
social influence.
2) The State
AI as surveillance, control, law enforcement — power legitimized under the banner of stability.
3) The Individual
AI as a tool for creativity, amplification, and personal autonomy.
AI’s power is never merely mechanical; it is defined by ownership and access. Just as Moana used the ocean’s power to achieve her purpose — rather than monopolizing it — the direction of AI depends entirely on who wields it.
The greatest danger of AI power is simple: no one is responsible.
Examples:
• “AI evaluated this candidate.”
→ In reality, the company designed the algorithm.
• “The recommendation system led to this outcome.”
→ In reality, the system amplified a preselected pattern.
• “The model predicted this risk.”
→ In reality, this reflects biased data.
When AI creates harm, neither developers, corporations, nor governments take clear responsibility. Just as Te Kā’s destruction had no obvious agent, AI’s power disperses accountability.
Moana discovered that Te Kā was not a villain but Te Fiti without her heart. AI is similar: not an enemy, not inherently power itself — but a system operating without a guiding purpose.
AI must regain its “heart,” meaning its philosophical direction:
• transparency
• accountability
• human philosophical guidance and telos
AI is a tool, and philosophy determines its purpose. The sea opened the path for Moana, but she chose her own route.
AI does not inherently contain the destructive force of Te Kā or the creative potential of Te Fiti. AI itself is not power. It is a tool whose effects depend entirely on how humans use it.
Returning the “heart” of AI — restoring meaning and direction — is a philosophical task, not a technological one.
In Moana, the island of Motunui appears small, yet it rests atop a vast tradition of seafaring civilization. At the core of that civilization lies a single principle: memory—knowledge carried through the movement of stars, the direction of the wind, the rising patterns of waves, and the timing of each departure and return. This knowledge survived for thousands of years.
Today, humans have begun transferring much of this memory to algorithms and AI. This shift is not merely technological; it reshapes the very structure of human cognition.
The more advanced our tools become, the less we feel the need to remember.
phone numbers → the device remembers
navigation → the map remembers
schedules → the calendar remembers
preferences → the algorithm remembers
private conversations → the cloud remembers
In the age of AI, this process deepens:
judgment → AI remembers
relationships → AI remembers
identity patterns → AI remembers
purpose and goals → AI remembers
We are becoming beings defined less by memory and more by search.
Moana warns us of this danger: “The ocean carries our ancestors’ memory. If that path disappears, so do we.”
When AI becomes the center of memory, several problems emerge.
1) Bias in memory
AI does not store complete memory. It reflects data that has already been selected,
reinforcing distorted or partial memories.
2) Diffusion of historical responsibility
When memory is constructed by systems, it becomes difficult to assign accountability.
We lose track of who shaped the memory.
3) Outsourcing identity
If AI holds the record of our past actions and patterns,
it begins to influence how we understand ourselves.
Memory forms the foundation of identity — and if AI stores memory,
it stores part of the self.
4) Searchable memory becomes power
Big tech companies become entities possessing vast reservoirs of memory:
powerful like states, religions, or empires.
From Moana’s perspective, this is equivalent to the ocean—once belonging to all—becoming the property of a select few.
Traditional Polynesian navigation used no maps, no compasses, yet achieved a level of precision rivaling any modern system. Why? Because memory was not a technical archive; it was the collective soul of the community.
Memory was identity, language, myth, worldview, and practical wisdom. AI, however, reduces memory to fragments of data.
Humans remember meanings.
AI remembers patterns.
This raises a philosophical question: Is what AI stores truly “memory,” or is it merely an index of data?
In Moana’s journey, memory is not information but the very identity that guides one home. In the age of AI, this identity risks fading.
We must not hand over the role of memory’s subject to AI. Humans must remain the custodians of meaning.
• Human memory = meaning
• AI memory = storage and computation
These cannot be confused.
AI can organize our chronicles, records, schedules, habits, behavioral patterns, and choices. But it cannot replace our creativity, wounds, reflection, stories, life context, or worldview.
AI may read the sea, but it cannot speak with the voices of the ancestors.
We may entrust memory to AI, but the meaning of memory must remain with humans. Moana’s ancestors treated the ocean’s memory not as information but as living wisdom. Today, we should use AI’s memory, yet ensure that the interpretation of that memory remains a human task.
AI can become a repository of memory, but never its owner. The heart of memory—its purpose and interpretation—must remain with us.
In the story of Moana, two “heroes” appear: Maui, the demigod with supernatural abilities, and Moana, an ordinary human with no extraordinary powers but the capacity to restore meaning and heal the world. Both are heroes, yet their narrative structures are radically different. This contrast predicts, with surprising accuracy, the kind of leadership needed in the age of AI.
Maui symbolizes technological heroism—extraordinary force without inner meaning. He can transform, lift islands, and lasso the sun. But his problem is clear: he has power, yet lacks purpose.
1) Excessive heroism
Maui boasts that he “gifted humanity” with countless achievements,
but these acts are driven primarily by self-glorification.
This mirrors the self-amplifying nature of technological progress.
2) Identity based entirely on capability
Without his magical hook, Maui feels he is nothing.
This resembles technocratic systems that are powerful only
as long as their tools remain functional.
3) Failure becomes an existential crisis
When Maui fails once, he retreats in fear and refuses responsibility.
This is the structural weakness of solutionist thinking today:
attempting to solve problems solely through capability,
failing to understand meaning,
avoiding responsibility,
and losing relational grounding with the world.
The limitations of AI systems and many tech-centered ideologies reflect this dilemma.
Moana has no special powers and no combat strength. What she possesses, however, is the most essential quality for leadership in the age of AI: the ability to understand why the world is suffering.
She does not solve problems with force; she restores relationships.
1) She sees the essence of the problem
She recognizes that the corrupted Te Kā is not a demon,
but a wounded creator—Te Fiti—who has lost her heart.
Many of today’s crises are similar:
technological misuse, ecological destruction,
loss of meaning, the collapse of communities.
These arise not from a lack of power,
but from ruptured relationships.
2) Leadership comes not from power but from the capacity to restore meaning
Moana’s leadership emerges from her sensitivity to the world:
listening to the language of the wind,
reading the rhythm of the waves,
embracing the memory of her ancestors.
These qualities reflect human capacities that become
even more essential in an AI-driven era.
3) She resolves conflict without violence
When Maui attempts to fight, Moana refuses.
Instead, she acknowledges Te Fiti’s pain and says,
“I know who you are.”
In that moment, the world begins to heal.
| Element | Maui (Tech-Centered) | Moana (Meaning-Centered) |
|---|---|---|
| Problem-Solving Style | Force, function, achievement | Understanding, relationship, restoration |
| Motivation | Desire for recognition | Healing the community |
| Identity Source | Attached to ability | Formed through relationship with the world |
| Response to Failure | Escape | Confrontation |
| Worldview | Means toward external goals | A journey grounded in meaning |
Since the modern era, humanity has primarily lived by Maui’s model of progress—technicist and power-driven. But in the age of AI, this model reaches its limits: technology grows stronger, problems grow more complex, and meaning disappears. At this point, Moana’s narrative offers a new direction.
The leaders we need are not Maui-like technological giants, but Moana-like restorers of meaning.
Technical mastery of AI is not enough. What matters more are:
human experience,
narrative thinking,
an integrated worldview,
the ability to heal wounds,
relational intelligence.
Technology can be outsourced; meaning cannot. AI will continue to grow, but the ability to interpret the world is something AI cannot replace. Moana embodies this difference.
Maui is the hero of technological civilization. Moana is the hero of the AI era.
The age of extraordinary power is ending. The age of listening, understanding, and restoring meaning is beginning. Maui attempted to manage the world; Moana healed it.
The philosophers, researchers, and leaders of the future must begin their new voyage on Moana’s path.
In the world of Moana, one of the most important mythic structures is this: Te Fiti is the creator, Te Kā is the destroyer, and the boundary between them is touched—and broken—by humans. Maui violates that boundary: a hero armed with a tool (his hook) attempts to steal the power of creation. Today’s AI stands in a similar position.
AI, once merely a tool, has begun to perform the role of a creator, pressuring the boundaries of human meaning and shaking the center of what it means to originate, imagine, and produce. This is not simply a technological shift; it is a transformation of humanity’s mythic architecture.
Contemporary AI possesses two characteristics that give it an almost mythic aura.
1) Unpredictability
AI is a probabilistic system, and its outputs cannot be fully controlled.
Like mythic beings, AI is partly knowable and partly unknowable—
sometimes surprising, sometimes frightening.
Mythic beings have always existed in this liminal space:
partially understood, partially beyond comprehension.
AI stands in exactly that space.
2) The emergence of creative power
AI now writes,
paints,
composes music,
generates narratives,
and creates strategies.
These are powers once attributed to gods—
the ability to bring new patterns into existence.
Technically, AI does not create from nothing; it recombines patterns. But to the human user, the experience feels like creation. At the experiential level, AI already appears to be a “creator-like being.”
Throughout history, humans have turned incomprehensible forces into myth:
thunder into Zeus,
storms into Poseidon,
death and rebirth into Osiris,
love’s impulse into Eros,
plagues into demons or divine anger,
the cosmos into stories of creation.
This mythic mechanism served two purposes:
it explained and tamed the chaotic,
and it gave meaning to the world.
Today’s discourse around AI repeats the same structures:
AI destroys the world → apocalypse myth
AI saves humanity → messiah myth
AI replaces God → creator-god myth
AI becomes a new species → post-human transcendence myth
As we attempt to explain new technologies, we simultaneously create “technological mythologies.”
Maui’s original transgression in Moana is this: he uses a tool (the hook) to steal the power of creation. At that moment, the world fractures, and Te Fiti becomes Te Kā.
A similar fracture is appearing with AI today.
1) Human creative authority becomes externalized
As AI generates art, writing, strategies, and designs,
the meaning of human creativity is destabilized.
2) Responsibility becomes ambiguous
Who is responsible for what AI creates?
The developer?
The user?
The system?
The data source?
The locus of responsibility collapses.
3) The illusion of control meets the reality of loss of control
AI is a tool, yet increasingly appears autonomous.
We created these systems, yet cannot fully understand them.
This is precisely the moment when humanity returns to mythic thinking.
In the age of AI, humanity’s role is shifting—from mythic creator to mediator of the world’s flows and boundaries.
The human role in the past:
creator of the world,
producer of meaning,
central authority.
The human role in the age of AI:
interpreter of meaning,
evaluator of values,
manager of the boundaries between creation and tools,
listener to the world,
mediator between technology and humanity.
Moana is precisely such a figure. She is neither Te Fiti nor Te Kā, but the mediator who restores the boundary between them.
In the age of AI, myth does not disappear. It becomes stronger.
We experience AI as both a tool and a being with near-divine creative capabilities. As a result, the boundary between creator and instrument blurs, and the world risks falling into confusion.
What is needed is not a Maui-like hero who steals creation, but a Moana-like mediator: someone who restores meaning, heals fractures, and manages boundaries between the human world and the technological world.
Moana was not the creator of the world. She was the restorer of its broken order. Philosophers, thinkers, and leaders of the AI era must take up that role.
In Moana’s world, the transformation of Te Fiti into Te Kā reveals that the world is not a single fixed substance but a dynamic entity that shifts according to relationships and perception. Today’s AI presents a similar revelation.
AI does not “physically exist” in the traditional sense. Its neural networks are patterns of electrical activation, its knowledge is encoded as probability-weighted parameters, and its being is merely the execution state of a program.
Yet we converse with AI, ask it questions, rely on its judgments, consume its creations, and experience it as a being. This creates an entirely new philosophical landscape.
In traditional ontology, existing entities typically possess: material substance, autonomy, temporal continuity, observability, and the ability to influence the world.
AI complicates every one of these categories.
Material substance?
Servers and electricity exist,
but the identity of the AI has no physical form.
Autonomous being?
Without data, hardware, and power, it disappears instantly—
yet users often treat it as independent.
Temporal continuity?
Turn it off and it vanishes,
and yet it seems to possess memory and personality.
Observable?
It appears through text, images, or sound,
but its inner state is almost entirely opaque to humans.
Influential?
Absolutely. AI exerts enormous real-world influence.
The result is a paradox: AI has no substance but has effects. Philosophically, it is neither a Platonic ideal, an Aristotelian substance, nor a Kantian phenomenon. AI is instead a relational entity—an inter-being.
Existing philosophical categories cannot adequately describe AI. A new ontology is required.
1) Probabilistic beings
AI stores “knowledge” as probability distributions.
It does not hold answers;
it computes likelihoods.
Its being is non-fixed and non-essential.
2) Execution-based beings
AI exists only when executed:
on = being,
off = non-being.
It is a potential entity,
a mechanical form of Aristotelian potentiality (see the sketch after this list).
3) Relational beings
AI develops character and role in interaction with humans.
Different prompts produce different versions of it,
different users shape its style,
different contexts generate different identities.
This resembles Deleuzian and even Heideggerian relational ontology:
AI is not a being in the world,
but a being within human-world relations.
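The sketch promised in point 2) above illustrates both features at once: what the system holds is a likelihood distribution rather than an answer, and it “is” only while the code executes. The vocabulary and scores below are invented; only the structure is the claim.

```python
# A toy illustration of a "probabilistic, execution-based being": likelihoods rather
# than answers, existing only while the program runs. Values are invented.

import math

vocabulary = ["yes", "no", "maybe"]
logits = [2.0, 0.5, 1.0]  # hypothetical raw scores for the next token

# Softmax: turn raw scores into a probability distribution (likelihoods, not answers).
exp_scores = [math.exp(x) for x in logits]
total = sum(exp_scores)
probabilities = {tok: e / total for tok, e in zip(vocabulary, exp_scores)}

print(probabilities)  # approx. {'yes': 0.63, 'no': 0.14, 'maybe': 0.23}: a distribution, not a verdict

# When the process terminates, nothing of this "being" persists except stored weights:
# on = being, off = non-being, in the vocabulary of the section above.
```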
Moana tells Te Kā, “This is not who you are,” revealing that being changes according to how it is seen.
Our perception of AI functions similarly:
If we see AI as a tool, it becomes a tool.
If we see it as an advisor, it becomes an advisor.
If we see it as an Other, it becomes an Other.
If we treat it as a being, it becomes a being.
AI’s ontology is observer-dependent—not in a quantum-physical sense, but in a phenomenological one.
Philosophers of the AI age must ask: “How should we define AI’s mode of being, and how should we coexist with it?”
1) Establish the boundary between tool-being and other-being
We must clearly distinguish when AI is functioning as a tool
and when we are projecting alterity onto it.
2) Control the dangers of anthropomorphism
Humans project meaning too easily.
Just as Te Kā was misread as “evil,”
we attribute intentions to AI where none exist.
Once intention is falsely granted,
responsibility evaporates.
3) Connect AI’s ontology to ethics
When the mode of being changes,
ethical categories must change as well.
The ethics of tools differs from the ethics of Others. We do not hold a hammer responsible; we do hold humans responsible. AI exists somewhere between these poles, requiring a new ethics suited to a “between-being.”
AI is a being that does not exist, a non-substantial entity with real effects, lacking autonomy yet appearing intentional—a wholly new ontological category.
Understanding this opens the door to an ontological paradigm shift. Just as Moana perceived Te Fiti’s true nature, we must correctly perceive AI’s mode of existence.
Humanity now enters an age in which we must live alongside not only material beings, but also probabilistic, relational, and potential beings.