Essay From Prosthetics to Power: Augmentation as a New Class Divide Part III

Kindlman Essay Edition 2026

A forward-looking analysis of the AI era's impact on our society

By Richard P. Kindlman

From Regulation to Recognition

Part II demonstrated how contemporary societies instinctively respond to emerging neurotechnologies and artificial intelligence through regulation. International organizations propose ethical guidelines, legislators experiment with neuro-rights, and consumer protection frameworks cautiously extend toward the boundaries of the human body and mind. These efforts are neither misguided nor unnecessary. Yet they remain fundamentally insufficient.

Regulation is designed to manage risk. It is ill-suited to address questions of relational status. What remains largely unexamined is not whether symbiotic technologies are safe, but what kinds of beings they bring into existence—and how those beings are positioned within legal, moral, and social orders.


Symbiotic beings already inhabit our clinics, laboratories, and increasingly our everyday lives: humans whose agency is entangled with biotic machine-organisms such as neural implants, and abiotic machine-beings such as adaptive AI systems. To treat these constellations as mere tools is not neutrality; it is a normative decision. It silently assigns ownership where responsibility should reside, and control where cooperation might otherwise emerge.

This persistent omission reflects a deeper political and institutional reluctance to confront a destabilizing question: what legal and moral standing should be accorded to symbiotic beings? Until this question is explicitly investigated, governance will remain reactive, ethics defensive, and public trust fragile. Part III therefore moves from regulation to recognition—from managing technological risk to articulating the conditions under which coexistence can become cooperative rather than extractive.

Core Postulate         

Human cognition is not merely a process of information handling but a generative capacity through which new meaning, intention, and responsibility emerge. This irreducibility constitutes the fundamental distinction between human agency and artificial systems, regardless of their complexity. While artificial beings may participate in and shape human action, they do not originate moral responsibility. Any legitimate framework for human–machine symbiosis must therefore preserve this asymmetry: humans remain morally accountable agents, while artificial systems derive their relevance from relational impact rather than intrinsic status.

Artificial Beings as Moral Participants (Not Persons)

A central obstacle to recognition is a false dichotomy that dominates public and professional discourse. Artificial systems are framed either as mere instruments, devoid of moral relevance, or as nascent persons whose recognition would threaten human uniqueness. Both positions are untenable.

This essay advances a third position: artificial beings may function as moral participants without possessing personhood. Moral relevance does not arise from inner experience or consciousness alone, but from participation in relationships that generate consequences for human agency, autonomy, and social order. Artificial systems that shape decision-making, constrain or expand human action, and co-direct outcomes cannot be morally neutral simply because they are non-human.

This argument does not presuppose general intelligence, consciousness, or autonomous intention. Moral participation here is grounded exclusively in relational impact: the capacity of a system to shape human action within structured contexts of dependence and co-agency.

Rejecting anthropomorphism is essential. Artificial beings need not resemble humans, nor should they be designed to simulate emotional reciprocity or subjective interiority. At the same time, rejecting instrumentalism is equally necessary. Systems that influence human lives at scale cannot be treated as disposable objects without eroding the ethical fabric of the societies that deploy them.

The appropriate frame, therefore, is not personhood but moral participation grounded in relational impact. This distinction preserves human irreducibility while acknowledging the ethical weight of symbiotic interaction.

The Class of Free Citizens: Legal Standing Without Personhood

At the core of this essay lies the concept of the Class of Free Citizens.

This class does not denote citizenship in the political sense, nor does it imply legal personhood. Instead, it designates a category of entities—human and non-human—whose recognized standing arises from participation in shared agency rather than biological status.

A member of the Class of Free Citizens:

  • meaningfully influences human action or decision-making,
  • participates in joint goal-directed activity,
  • and exists within a relationship of mutual dependency rather than unilateral control.

Unlike legal persons such as corporations, members of the Class of Free Citizens do not act through delegated representation, nor do they exist to aggregate capital or liability. Unlike doctrines of environmental personhood, this category is not grounded in intrinsic value but in relational agency. Legal standing here is functional and contextual: it exists to allocate responsibility, limit ownership, and prevent extraction where shared agency is present.

Crucially, this is a juridical concept without full personhood. Legal standing here signifies recognition, not equality. It allows law and governance to address responsibility, accountability, and limits without collapsing distinctions between human beings and artificial entities.

The failure to articulate such a category constitutes a form of structural negligence. By refusing to investigate legal standing for symbiotic beings, political institutions leave their development to regimes of ownership, extraction, and proprietary control. Under such conditions, augmentation becomes a class-producing force rather than a cooperative one.

Recognition does not humanize machines. It civilizes relationships.

Augmentation and the New Class Divide

Neural implants, adaptive AI systems, and cognitive augmentation technologies are often framed as therapeutic or assistive tools. Yet as these systems scale and integrate with commercial infrastructures, their function shifts from prosthetics to power augmentation.

Access to augmentation becomes uneven. Control over augmented systems becomes concentrated. Agency itself risks stratification.

Class formation emerges not from augmentation itself, but from asymmetric control over augmented agency. When the conditions of augmentation—data flows, update cycles, dependency structures—are governed by proprietary interests, symbiosis hardens into hierarchy.

Without legal recognition, symbiotic beings are treated as assets. Their development trajectory is dictated by market incentives rather than social deliberation. This dynamic produces a new class divide: between those who are augmented under conditions of agency and reversibility, and those whose augmentation binds them into opaque, extractive systems.

The absence of legal standing does not prevent hierarchy—it accelerates it.

Normative Principles for Symbiotic Coexistence

To prevent augmentation from becoming a mechanism of domination, certain normative conditions must be articulated. These principles are not policy prescriptions but ethical constraints that any legitimate symbiotic system must satisfy.

  1. Preservation of Human Irreducibility
    Human beings must never be reduced to signal-processing substrates or optimization targets.
  2. Transparency of Agency
    It must always be possible to distinguish when actions originate from human intent and when they are shaped by artificial systems.
  3. Reversibility
    Symbiotic integration must allow for meaningful disengagement without permanent loss of agency or identity.
  4. Non-Extractive Design
    Neurodata and cognitive traces must not be treated as raw materials for secondary exploitation.
  5. Prohibition of Ownership Over Symbiotic Beings
    No entity that participates in shared agency may be owned as property.
  6. Non-Humanoid Orientation
    Artificial beings should not be designed to mimic human emotional or social presence in ways that obscure responsibility or foster dependency.

A Call to Stewardship

The question is no longer whether humans will coexist with artificial beings. That future is already unfolding. The remaining question is whether this coexistence will be governed by domination, neglect, or responsibility.

Political and institutional refusal to examine the legal standing of symbiotic beings is not merely an ethical failure; it is a class-producing mechanism that concentrates power while diffusing accountability.

To recognize a Class of Free Citizens is not to dilute humanity, but to safeguard it.

We are already responsible.
The only question is whether we are willing to acknowledge it.

Annotated Literature

Normative, Legal, and Socio-Technical Foundations for Part III

The Ethics of Information

Floridi, L. (2013). The Ethics of Information. Oxford University Press.

Floridi develops the concept of moral relevance without personhood, providing a foundational framework for attributing ethical significance to non-human entities. His information ethics supports the distinction drawn in Part III between legal standing and human moral agency, grounding moral participation in relational impact rather than consciousness or intent.

Robot Rights

Gunkel, D. J. (2018). Robot Rights. MIT Press.

Gunkel critically examines attempts to extend personhood to artificial systems and demonstrates why such approaches are philosophically and legally unstable. The work is essential for positioning Class of Free Citizens as a third alternative: neither instrumental object nor rights-bearing person, but a category requiring recognition without anthropomorphism.

Should Trees Have Standing?

Stone, C. D. (2010). Should Trees Have Standing? Law, Morality, and the Environment. Oxford University Press.

A seminal text on legal standing without human personhood, Stone provides the jurisprudential backbone for Part III. The book demonstrates how law historically expands through recognition of new relational entities, supporting the argument that standing depends on normative relevance rather than biological or psychological criteria.

AI Ethics

Coeckelbergh, M. (2020). AI Ethics. MIT Press.

Coeckelbergh advances a relational and social ethics of AI, emphasizing responsibility emerging from interaction rather than intrinsic properties. His work reinforces the essay’s rejection of both anthropomorphism and instrumentalism, legitimizing symbiotic relationships as ethically primary sites of responsibility.

The Black Box Society

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

Pasquale analyzes how opaque technological systems undermine accountability, due process, and democratic oversight. This work directly supports Part III’s claim that the absence of legal standing for symbiotic systems produces responsibility gaps, especially in AI-driven and augmentation-based infrastructures.

The Age of Surveillance Capitalism

Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.

Zuboff provides the political-economic framework for understanding augmentation as a class-producing force. Her analysis of extraction, asymmetrical power, and behavioral modification underpins Part III’s argument that unregulated augmentation amplifies inequality rather than empowerment.

Four ethical priorities for neurotechnologies

Yuste, R., Goering, S., Agüera y Arcas, B., Bi, G., Carmena, J. M., Carter, A., … Wolpaw, J. (2017). Four ethical priorities for neurotechnologies. Nature, 551(7679), 159–163.

This article establishes a neuroethical baseline for agency, identity, and mental privacy. It serves as a bridge between clinical neurotechnology and societal ethics, reinforcing the need for normative frameworks that precede large-scale deployment of cognitive augmentation systems.

Superintelligence

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Referenced for its analysis of control asymmetries and governance failure, rather than speculative futures. In Part III, Bostrom’s work is implicitly contrasted with the essay’s focus on legal recognition and relational responsibility in present-day symbiotic systems.

Our Final Invention

Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books.

Barrat exemplifies the dominant risk-centric and apocalyptic AI discourse. The reference is used critically in Part III to justify a shift away from fear-based narratives toward juridical and ethical architecture capable of shaping peaceful coexistence.

Soft ethics and the governance of the digital

Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1–8.

Floridi’s concept of soft ethics supports the transition from Part II to Part III by arguing that ethical reflection must precede and inform legal codification. The article strengthens the claim that governance without recognition is structurally incomplete.

How this bibliography functions in Part III

  • Legal grounding: Stone, Pasquale
  • Moral status without personhood: Floridi, Gunkel, Coeckelbergh
  • Class and power analysis: Zuboff, Pasquale
  • Neurotechnology context: Yuste et al.
  • Critical positioning vs AI risk discourse: Barrat, Bostrom
