What We Miss While Cheering for Superhumans in the Age of AI

With the advent of artificial intelligence and robots like 'Shimon' (Hoffman & Weinberg, 2010), a robot that plays the marimba and composes music, or 'CIMON', which helps reduce astronauts' stress (https://en.wikipedia.org/wiki/Cimon_(robot)), people are responding with great enthusiasm. This enthusiasm stems not only from the prospect of overcoming physical limitations but also from the possibility of reviving artistic sensibilities we may have abandoned in childhood. AI and robots are now making significant strides not only in areas inaccessible to humans, such as disaster relief, but also in the realm of art. Furthermore, AI equipped with "functional emotions" (Anthropic, 2026) appears fully prepared to interact and empathize with humans. Yet this inevitably raises the concern that they are also steadily preparing to replace us.

When everyone is sprinting in the same direction, what are we missing? What are we letting slip away? If we treat life like a 100-meter dash, we might ignore a handkerchief dropped by a beloved partner. We might turn a blind eye to a sticker carefully crafted by our daughter's little hands. The leisure of a dog stopping during a walk, pulling on its leash to smell the familiar flowers, trees, sand, pebbles, and dirt, becomes an unaffordable luxury.

In this era of superintelligence, when technological efficiency is being maximized, I would like us to pause and look back.

Underlying the social climate that praises the capabilities of AI and superhumans is the belief that maximizing the quantity and quality of output (efficiency) is the highest good. Philosophers refer to this as 'utilitarianism' and 'instrumental rationality'. If an object (an AI) performs the same functions as a human (e.g., composition, reasoning), it is regarded as having equivalent intelligence; in the philosophy of mind, this position is called 'functionalism'. However, a formal mode of thinking that grasps objects solely in terms of utility and function ignores or suppresses the intrinsic potential of humans and objects, reducing them to mere means. The contemporary philosopher Andrew Feenberg is well known for his pointed critique of this phenomenon (Feenberg, 2002).

What happens if human value is evaluated solely based on 'what one can do and how efficiently one can do it' (performance capability)? The moment AI overwhelms human cognitive and creative abilities, humans will be reduced to inferior, unnecessary components within the system. While no one may think so right now, an atmosphere will soon emerge in which individuals who cannot use AI are deemed inferior and unnecessary. Subsequently, even those who hold such views will be judged as inferior by even more advanced AIs. In such a reality, humans will likely be left with nothing but their remaining pride, obsessing only over how to control an AI that is vastly superior to them.

Science and technology constantly ask, "Can we do it?", accelerating the system's 'reinforcing loop'. We must therefore pause to consider the role of a 'balancing loop' that can check technology's headlong, unguided acceleration.
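The loop vocabulary here comes from systems dynamics. As a minimal sketch (the growth rate, damping constant, and goal value below are arbitrary illustrations, not drawn from the essay), a reinforcing loop compounds a stock without limit, while adding a balancing loop feeds the gap to a goal back into the system and keeps growth bounded:

```python
# Toy systems-dynamics sketch: a reinforcing loop alone grows exponentially,
# while an added balancing loop bounds the stock near an equilibrium.
# All constants are illustrative, not taken from the essay.

def simulate(steps, growth=0.1, damping=0.0, goal=100.0, stock=1.0):
    """Advance a single stock: a reinforcing term (growth * stock)
    plus an optional balancing term pulling it toward `goal`."""
    for _ in range(steps):
        stock += growth * stock            # reinforcing loop: more begets more
        stock += damping * (goal - stock)  # balancing loop: gap-closing feedback
    return stock

runaway = simulate(100)               # reinforcing only: exponential blow-up
checked = simulate(100, damping=0.3)  # balancing loop holds it near equilibrium
```

With `damping=0.3` the stock settles near an equilibrium somewhat above the goal (the reinforcing term keeps pushing), which is itself a classic systems-dynamics point: a balancing loop does not erase growth, it contains it.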

To guard against the 'human alienation' hidden behind the mask of efficiency, shouldn't we treat humans not as means but as ends in themselves, each possessing an intrinsic telos? This line of thought leads us to Immanuel Kant (1724–1804), who argued that we must regard human beings as ends and restore their intrinsic value.

Anthropic, the developer of the AI Claude, recently developed Claude Mythos Preview, a powerful AI capable of easily breaching the security systems of the world's top banks, but withheld its release due to its overwhelming potential impact. Interestingly, Dario Amodei, CEO of Anthropic, argues in his essay that even if AI surpasses humans in all economic labor and art, human life will not become meaningless (Amodei, 2024). He emphasizes that the meaning humans pursue does not come from economic value creation or 'being the best in the world', but rather from relationships with others, human connection, and the 'process itself' of striving toward a goal, even if imperfect. Shouldn't we reposition this 'process-oriented and relational meaning' at the very top of our Objective Function?

References

Amodei, D. (2024, October). Machines of loving grace: How AI could transform the world for the better. Dario Amodei Blog.

Anthropic. (2026, April 2). Emotion concepts and their function in a large language model. Anthropic Research. https://www.anthropic.com/research/emotion-concepts-function

Feenberg, A. (2002). Transforming technology: A critical theory revised. Oxford University Press.

Hoffman, G., & Weinberg, G. (2010). Shimon: An interactive improvisational robotic marimba player. In CHI '10 Extended Abstracts on Human Factors in Computing Systems (pp. 3097–3102). ACM.

Kant, I. (2002). Groundwork for the metaphysics of morals (A. Wood, Trans.). Yale University Press. (Original work published 1785)
