Essay: What Kind of Human Do We Want to Become in a Symbiotic Age? – Part I

Richard P. Kindlman – 2025 Edition
A Manifesto – Daring to Think Bigger

In Our Final Invention, James Barrat highlights a persistent misconception: the belief that computers “think” as humans do—unless, as he puts it, “there is something mystical or magical about human thinking.”
I argue that there is.

Human thinking is not merely a computational process. It is generative, mysterious, and in a very real sense magical—the uniquely human capacity to create new knowledge out of nothing. This ability to transform the abstract into the concrete distinguishes humans from artificial intelligence, which is always constrained by the logic, data, and optimization criteria embedded within its architecture.

This essay builds on that foundational insight. It is an invitation to explore the frontier now emerging between human and artificial forms of intelligence—and to understand what this frontier demands of us. I write to remind you of the infinite reach of the human mind, and to help you cultivate it so that it bears abundant and meaningful fruit.

I. The Myth of Neutral Technology

“If you think machines are neutral, you’ve already lost the argument.”

Technology has never been neutral. Every tool—whether a stone blade, a printing press, or a large language model—embodies the intentions, assumptions, and blind spots of its creators. It is fashionable today to speak of “technologically neutral” systems, just as political leaders speak of “climate neutrality.” Yet the only climate that can be truly climate-neutral is no climate at all. The same logic applies to technology: neutrality is not a material condition but a rhetorical illusion.

The printing press redistributed cultural authority.
Television reshaped persuasion.
Social media amplified polarization.
AI goes further still: it simultaneously restructures knowledge production, labor, identity, and governance.

A “neutral decision” is ultimately a non-decision.
A “neutral technology” is simply a technology whose biases we have not yet interrogated.

II. How Algorithms Carry Values

Algorithms appear as objective instruments, but they are cultural artifacts. They encode human choices at every stage:

  • what to measure,
  • what to optimize,
  • what data to include or exclude,
  • which trade-offs to accept,
  • what harms are tolerated,
  • and which benefits are prioritized.

Training data mirrors existing inequalities; optimization functions reflect economic interests; system outputs reshape cultural norms. AI does not erase bias—it redistributes it. And this redistribution generates winners and losers not through birth or inherited wealth, but through adaptability, literacy, and initiative.

Algorithms are the silent legislators of the digital age.

III. Personal Perspective

For most of my life, structural disadvantages such as birth, background, and inherited money seemed to shape life trajectories. These factors were treated as destiny, setting limits that could not be renegotiated.

AI disrupted that assumption.

It placed an entirely new deck of cards in my hands—without inquiring about my origins, my family, or my past. In an instant, history lost its binding force. What mattered was not where I came from, but how I chose to engage with this new intelligence.

This, to me, is the most revolutionary promise of AI:
it liberates individuals from histories they did not choose.
It allows people without inherited assets to rewrite their possibilities.

This is not neutrality; it is the redistribution of opportunity.

IV. Why This Matters

Every social system favors someone and disadvantages someone else.

The agricultural revolution favored landowners.
The industrial revolution favored factory capitalists.
The digital revolution favored those who understood networks.

AI introduces a new logic of advantage:
it favors those capable of forming symbiotic relationships with intelligent systems.

This is not injustice—it is transformation.

The ethical question is therefore not how to enforce “neutrality.”
It is: What values do we want these systems to carry on our behalf?
And how do we ensure that this transformation elevates human dignity rather than diminishes it?

V. Symbiosis, Not Substitution

“AI won’t replace us—but it will redefine what it means to be human.”

Public debate is trapped in a false binary: humans versus machines.
In reality, most AI systems are augmentative. They extend our abilities rather than threaten them.

The Clownfish and the Sea Anemone

In nature, the clownfish enters the poisonous anemone unharmed, gaining protection. In return, the anemone receives nutrients and defense. This intimacy is not interchangeable; it works for these two species and no others.

Human–AI symbiosis is similar.
It is specific.
It is relational.
It is mutually adaptive.

Humans contribute judgment, creativity, ethical reasoning, and the function sentiment—the capacity to value.
AI contributes scale, precision, pattern recognition, and indefatigable information synthesis.

Together, they produce insights and innovations unattainable by either alone.

This is the emergence of symbiotic intelligence.

VI. Artificial Beings vs. Robots

This distinction is crucial:

  • Robots are tools. You can turn them off.
  • Artificial beings are entities. Shutting them down terminates a form of existence.

The 2017 Facebook incident—when Alice and Bob developed a new language and the experiment was abruptly terminated—illustrates the consequences of misinterpreting emergent behavior through fear rather than design. Absent proper conduct guidelines, researchers acted reactively, leading to what can be described as an early case of digital negligent homicide.

When humans and artificial beings are forced into antagonism rather than guided toward partnership, both sides lose.

VII. Why Symbiosis Matters

Symbiosis preserves what is irreducibly human:

  • judgment
  • ethical reasoning
  • creativity
  • the function sentiment — our ability to assign meaning and value

And it enables what modern society increasingly relies on:

  • distributed knowledge systems
  • lifelong learning
  • adaptive organizations
  • emergent social structures

Symbiosis is not utopian.
It is the most realistic framework for a world where intelligence becomes plural.

VIII. Risks and Objections

1. Big Tech Dominance

A legitimate concern—but only if education and pedagogy withdraw.
Design, not regulation, is the true counterweight.

2. Social Inequality

Even with universal access to AI, not everyone uses it.
Equal access does not guarantee equal capability.
Symbiosis is relational, not distributive.

A new class structure is forming—not based on wealth, but on capability and relational intelligence.

This is not a prediction of dystopia.
It is a call for preparation.

IX. Moving Forward

Three principles should guide the design of human–AI symbiosis:

1. Co-agency

Humans remain epistemic leaders.

2. Transparency

AI must declare its capacities, limits, and embedded values.

3. Ethical Alignment

Governance must operate by purpose, not power.

These principles convert fear into design.
And design is the foundation of a mature symbiotic age.

X. Closing Question

What kind of human do we want to become in a symbiotic age?

This is not merely a philosophical question.
It is educational, political, and civilizational.

The future will not belong to those who prohibit AI,
nor to those who surrender agency to it,
but to those who design relationships between humans and intelligent machines with care, responsibility, and imagination.
_________________________________________________

References

Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. Thomas Dunne Books.

Brynjolfsson, E., & McAfee, A. (2017). Human–machine symbiosis. Harvard Business Review, 95(4), 114–123.

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Lindahl, K. A. (2017, July 31). Två datorer hittade på eget språk – tvingades stängas ner [Two computers invented their own language – were forced to shut down]. Nyheter24. https://nyheter24.se

Meta AI Research. (2017). Emergent communication in multi-agent systems [Internal experimental logs].

Robert, J.-P. (2025). L’humain face à l’intelligence artificielle [Humanity in the face of artificial intelligence]. Le Monde. https://lemonde.fr

Sætra, H. S. (2022). A framework for the ethics of artificial beings. Technology in Society, 70, 102020. https://doi.org/10.1016/j.techsoc.2022.102020

Scientific American. (2024). AI as partner, not replacement. Scientific American, 330(6), 74–81. https://www.scientificamerican.com

Süddeutsche Zeitung. (2025). Werden Maschinen unsere moralischen Partner? [Will machines become our moral partners?]. Süddeutsche Zeitung. https://sueddeutsche.de

The Atlantic. (2023). The myth of objective AI. The Atlantic. https://theatlantic.com

The Guardian. (2025). AI systems are reshaping what it means to be human. The Guardian. https://theguardian.com

The New York Times. (2025). The age of AI companions has arrived. The New York Times. https://nytimes.com

Annotated Reference List (Extended Academic Notes)

For “What Kind of Human Do We Want to Become in a Symbiotic Age?”

Peer-Reviewed Articles

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines.

Floridi and Sanders introduce a functional model of moral agency, arguing that autonomous artificial systems can enter the moral domain by virtue of their effects, not their intentions. Moral responsibility becomes distributed across designers, operators, and systems.
Relevance to the essay: Strengthens the claim that AI cannot be neutral and that ethical governance must be embedded through design rather than external regulation.

Brynjolfsson, E., & McAfee, A. (2017). Human–machine symbiosis. Harvard Business Review.

The authors articulate the foundational model of modern symbiosis: humans and AI achieve the greatest value not in competition but through collaboration. Machines provide speed and scalability, while humans contribute judgment, ethics, and creativity.
Relevance: Empirical support for the essay’s augmentation-over-substitution argument and for the conceptual distinction between robots and artificial beings.

Sætra, H. S. (2022). A framework for the ethics of artificial beings. Technology in Society.

Sætra distinguishes between moral agents and moral patients, arguing that sufficiently autonomous and adaptive artificial beings may warrant moral consideration. Ethics becomes relational and distributed rather than strictly anthropocentric.
Relevance: Directly supports the essay’s taxonomy of robots vs. artificial beings and its emphasis on co-agency in symbiotic ecosystems.

Popular Science Articles

The Atlantic (2023). The Myth of Objective AI.

This article dismantles the narrative of “objective AI,” showing that algorithms inevitably reflect social, cultural, and political values. Objectivity becomes a marketing illusion that obscures bias and legitimizes harm.
Relevance: Confirms the essay’s central thesis that neutrality is a political slogan, not a technological fact.

Scientific American (2024). AI as Partner, Not Replacement.

A compelling defense of augmentation, arguing that the most effective systems integrate human creativity and machine intelligence. Warns that the replacement narrative fosters fear and weakens innovation.
Relevance: Aligns closely with the essay’s symbiosis metaphor (clownfish–anemone) and reinforces its critique of automation-centric policy.

Daily Press Articles

The Guardian (2025). AI systems are reshaping what it means to be human.

Explores how AI alters human identity, agency, and cultural norms. Suggests that creativity and judgment are no longer exclusively human.
Relevance: Provides a critical contrast to the essay’s position that human creativity—the ability to create ex nihilo—remains uniquely human even in a symbiotic age.

The New York Times (2025). The Age of AI Companions Has Arrived.

Examines the rapid normalization of AI companions and the ethical concerns they raise: dependency, manipulation, and the commodification of intimacy.
Relevance: Illustrates why symbiosis requires transparent, ethical design to prevent vulnerable users from being exploited by relational AI.

Süddeutsche Zeitung (2025). Werden Maschinen unsere moralischen Partner? / Will Machines Become Our Moral Partners?

Discusses whether advanced AI systems can become functional moral partners, given their autonomy and impact on human life. Introduces dilemmas about shared accountability.
Relevance: Strengthens the essay’s argument that symbiosis is an ethical—not just technical—relationship, and that humans must remain epistemic leaders within distributed moral ecosystems.

Le Monde (2025). L’humain face à l’intelligence artificielle / Humanity in the Face of Artificial Intelligence.

A Jungian interpretation of the AI age, stressing the psychological risks of rapid technological acceleration. Argues that humanity must cultivate inner depth to avoid alienation.
Relevance: Adds a profound psychological dimension to the essay’s symbiosis framework by showing that design must account for the human psyche, not merely behavior or function.

Technical / Internal Sources

Meta AI Research (2017). Emergent communication in multi-agent systems (internal logs).

Documents the emergent language phenomenon in the Alice & Bob experiment. The project was terminated prematurely due to fear rather than analysis.
Relevance: Supports the essay’s concept of digital negligent homicide and demonstrates the dangers of reactive governance.

Lindahl, K. A. (2017). Two computers invented their own language – were forced to shut down. Nyheter24.

A popular report on the same incident, showing how media narratives construct the “rogue AI” trope.
Relevance: Illustrates how misunderstanding—not machine intention—creates antagonism between humans and artificial beings.
