Essay
From Prosthetics to Power: Augmentation as a New Class Divide
Part II
Edition 2026
A continuation of Part I
Richard P. Kindlman
A Manifesto — Daring to Think Bigger
In Our Final Invention, James Barrat draws attention to a fundamental misunderstanding that continues to shape public discourse on artificial intelligence: the belief that computers process information in the same way humans do — “unless there is something mystical or magical about human thinking.”
The core postulate that runs through all my writing begins precisely here:
Human thinking is not merely a process. It is awe-inspiring, mysterious, and genuinely magical — the unique human capacity to create new knowledge out of nothing.
This is the decisive distinction between humans and artificial intelligence.
AI can recombine, optimize, and scale what already exists. Humans alone can imagine, innovate, and transform abstraction into reality.
This essay is built on that distinction. It is an invitation to explore a frontier where cognition, biology, and computation converge — and to understand what is at stake when that frontier becomes a marketplace. I am writing to remind you that the human mind is not finite, and to help cultivate it so that it bears abundant fruit rather than being quietly harvested.
From Assistance to Expectation
In Part I, we traced a subtle but decisive shift:
from tools we hold, to systems we integrate;
from assistance, to augmentation;
from choice, to structure.
Part II begins where that realization hardens.
Because once augmentation becomes reliable, it stops being optional.
And once it stops being optional, it begins to reorganize society.
What enters quietly as help leaves loudly as expectation.
Cybernetic Prostheses and Exoskeletons: When Help Becomes the Norm
Cybernetic prostheses and industrial exoskeletons are often presented as unambiguous goods. They restore mobility, reduce injury, and extend human capability. In clinical contexts, they return autonomy to those who have lost it. In industrial settings, they promise safety, endurance, and productivity.
At the empirical level, the evidence is convincing.
But technologies do not remain confined to their original intent.
As augmented bodies become more capable, workplaces recalibrate. Tasks are redesigned. Productivity baselines shift. What was once extraordinary becomes normal. What was once accommodation becomes expectation.
At that moment, a threshold is crossed.
Those who cannot access augmentation — for economic, medical, or personal reasons — are no longer simply unenhanced. They are less competitive. Exclusion does not arrive through policy or decree, but through silent recalibration of norms.
This is the first mechanism by which augmentation becomes class-forming.
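The recalibration can be made concrete with a minimal sketch in Python. Every number in it (the 30% output boost, the adoption rates, the workforce size) is an illustrative assumption rather than an empirical claim; the point is the shape of the dynamic, not the magnitudes:

```python
# Toy model: how a performance baseline drifts as augmentation spreads.
# All numbers here are illustrative assumptions, not empirical estimates.
import statistics

UNAUGMENTED_OUTPUT = 1.0   # assumed output of an unaugmented worker
AUGMENTED_OUTPUT = 1.3     # assumed 30% boost from augmentation
WORKFORCE = 1000

for adoption in (0.0, 0.25, 0.5, 0.75, 0.9):
    n_augmented = int(WORKFORCE * adoption)
    outputs = [AUGMENTED_OUTPUT] * n_augmented \
            + [UNAUGMENTED_OUTPUT] * (WORKFORCE - n_augmented)
    baseline = statistics.median(outputs)  # what counts as "normal" output
    shortfall = (baseline - UNAUGMENTED_OUTPUT) / baseline
    print(f"adoption {adoption:4.0%}: baseline {baseline:.2f}, "
          f"unaugmented shortfall {shortfall:.0%}")
```

The unaugmented worker's output never changes. Only the norm around them moves: at 50% adoption they already fall measurably short of the baseline, and once a majority is augmented they sit more than a fifth below it.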
The Structural Logic of Augmentation
It is tempting to frame augmentation as an individual choice:
Do you want this enhancement or not?
But this framing collapses under structural pressure.
When institutions, markets, and organizations are optimized around augmented performance, refusal carries consequences. Not immediate punishment — but gradual marginalization. Opportunities drift elsewhere. Expectations rise beyond unaugmented capacity.
Here, augmentation ceases to be a personal preference and becomes a structural condition.
This is not technological determinism.
It is social inertia.
And it is how class systems have always formed.
From Restoration to Acceleration
The same logic applies to neural technologies.
Brain–computer interfaces are developed to restore lost function: speech, movement, communication. In these contexts, they are ethically compelling. Few would argue against restoring agency to those who have lost it.
But the trajectory does not end there.
Once neural interfaces can restore, they can also accelerate.
Once they can compensate, they can optimize.
The research frontier is already moving toward enhanced learning speed, memory augmentation, and cognitive efficiency. At that point, neural enhancement no longer addresses vulnerability — it creates advantage.
This is the second mechanism by which augmentation becomes class-forming.
Health becomes baseline.
Enhancement becomes differentiator.
Neurocapitalism and Cognitive Stratification
When cognitive capacity becomes augmentable, it also becomes comparable.
When it becomes comparable, it becomes marketable.
This is the emergence of neurocapitalism:
a system in which cognitive performance, neural data, and mental resilience are treated as assets.
Access to neural augmentation will not be distributed evenly. It will follow existing fault lines: wealth, geography, institutional power. Over time, these differences compound. Enhanced cognition produces better outcomes, which secure better access to further enhancement.
This is not a feedback loop we have designed consciously.
It is one we are drifting into.
And drift, in complex systems, is often more dangerous than intention.
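The compounding is easy to state and easy to underestimate. As a minimal sketch, assuming a nearly equal starting population, a 5% productivity gain per round of enhancement, and access gated purely by relative position (all illustrative numbers, not measurements), twenty rounds are enough to turn near-equality into a stable hierarchy:

```python
# Toy model of the access-to-advantage feedback loop described above.
# Every parameter here is an illustrative assumption, not a measurement.
import random

random.seed(42)

POPULATION = 1000
ROUNDS = 20
BOOST = 0.05  # assumed productivity gain per round of enhancement

# Start with a nearly equal population.
wealth = [random.uniform(0.95, 1.05) for _ in range(POPULATION)]
start_spread = max(wealth) / min(wealth)

for _ in range(ROUNDS):
    # Access to enhancement is gated by relative position, not merit.
    threshold = sorted(wealth)[POPULATION // 2]
    wealth = [w * (1 + BOOST) if w >= threshold else w for w in wealth]

end_spread = max(wealth) / min(wealth)
print(f"top/bottom spread before: {start_spread:.2f}x")
print(f"top/bottom spread after {ROUNDS} rounds: {end_spread:.2f}x")
```

No actor in the loop behaves malevolently. The stratification falls out of the gating rule alone, which is what makes drift harder to see, and harder to contest, than intent.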
From Privacy to Mental Integrity
The ethical debate is often framed around privacy.
Who owns neural data? How is it stored? Who can access it?
These questions matter. But they do not go far enough.
Neural data is not merely information about us.
It is information from within us.
When thought patterns, emotional responses, or cognitive tendencies become legible to external systems, the issue is no longer privacy — it is mental integrity.
A society that normalizes access to internal states does not need coercion.
Influence becomes predictive. Optimization replaces command. Control operates not through fear, but through alignment.
This is not Orwell’s dystopia of force, surveillance, and overt repression.
It is closer to Huxley’s Brave New World — a civilization stabilized through comfort, convenience, and engineered satisfaction.
Where Orwell warned us of chains, Huxley warned us of pleasures.
And it is the latter — softer, seductive, and willingly embraced — that proves more durable.
Normalization as the Primary Risk
The greatest danger of augmentation is not misuse, but normalization.
Technologies rarely announce their arrival as instruments of control. They arrive as conveniences. As advantages. As solutions. Over time, they redefine what counts as “normal participation.”
Those who opt out are not punished.
They are simply left behind.
This is how freedom erodes without ever being revoked.
The Question of Free Will Revisited
At this point, the philosophical question can no longer be avoided:
Can free will remain meaningful when the conditions of thought itself are technologically mediated?
This is not a question about machines becoming human.
It is a question about humans remaining sovereign.
When enhancement shifts from choice to requirement, agency becomes conditional. When refusal carries cost, consent becomes ambiguous.
Freedom does not disappear.
It thins.
Toward a New Class Structure
The emerging divide is not between humans and machines.
It is between those who can shape the architectures of augmentation —
and those who must adapt to them.
This is the terrain on which the Class of Free Citizens will eventually be defined: not by biology, but by agency; not by enhancement alone, but by the capacity to participate as equals in determining how augmentation is used, governed, and constrained.
That concept will be developed further. For now, it stands as a horizon.
Closing Reflection
Augmentation is not destiny.
But neither is it neutral.
If left to market logic alone, it will harden into hierarchy.
If guided by ethics alone, it will stall.
If designed with dignity, agency, and reciprocity in mind, it may yet expand human freedom.
The question is no longer whether we will augment.
The question is under what conditions, and for whose benefit.
That choice is still open.
But not for long.
Public Discourse and Contemporary Perspectives (Annotated, APA 7th Edition)
1. The Guardian. (2025, November 6). UNESCO adopts global standards on ‘wild west’ field of neurotechnology. The Guardian.
UNESCO recently adopted ethical guidelines for neurotechnology, highlighting international concerns over brain data, privacy, and freedom of thought amid rapid advances in AI-enabled interfaces. The standards aim to protect neural data and human rights as technologies evolve.
Theme connection: This article illustrates how global institutions are responding to emerging ethical threats discussed in Part II, especially concerning governance, autonomy, and early regulatory efforts.
2. Wired. (2026, January 15). OpenAI invests in Sam Altman’s new brain-tech startup Merge Labs. Wired.
OpenAI’s investment in a new non-invasive brain interface startup reflects the commercial momentum in neurotechnology, expanding beyond surgical implants toward broader AI–brain integration. The article underscores how private capital is driving rapid exploration of hybrid human–machine systems.
Theme connection: Highlights the economic incentives and acceleration dynamics of augmentation technologies — directly tied to Part II’s analysis of structural adoption pressures.
3. Wired. (2025, March 19). Synchron’s brain-computer interface now has Nvidia’s AI. Wired.
Synchron’s BCI with integrated AI shows real-world applications, enabling people with paralysis to control environments using thought. While therapeutic, it also suggests a roadmap toward general cognitive interfaces, raising questions about access and equity.
Theme connection: Provides insight into how BCI tech transitions from clinical use to broader augmentation possibilities, echoing Part II’s argument about augmentation normalizing competitive advantage.
4. The Times. (2025, March 31). Stroke patient’s thoughts converted into speech almost instantly. The Times.
A stroke survivor’s neural implant now translates thoughts to speech nearly instantly, demonstrating life-changing potential of AI-linked neurotech. The article captures public fascination with the transformative promise of BCIs, as well as emerging ethical considerations about embodiment and identity.
Theme connection: Underscores the human impact narrative and resonates with Part II’s exploration of cognitive enhancement versus autonomy.
5. Lifewire. (2025, May 19). Apple bets on brain interfaces, but the public’s wary. Lifewire.
Apple’s exploration of brain interfaces demonstrates big-tech buy-in to neural tech, but public polls reveal low acceptance of implants. This tension between innovation and social uptake highlights skepticism and uncertainty in public discourse.
Theme connection: Connects to Part II’s analysis of social normalization and public apprehension, illustrating that technology does not simply become accepted with capability.
6. Richtopia. (2023, March 6). AI and ethics: The impacts of brain-computer interfaces.
This overview discusses potential risks of BCIs, including exacerbation of inequality and new forms of discrimination as augmentation becomes possible. It frames neural tech as socially impactful beyond its technical dimensions.
Theme connection: Mirrors the core Part II concerns of neurocapitalism and cognitive stratification as class dynamics shift.
7. Future of Being Human. (2024, November 17). Navigating the ethical dilemmas of brain-computer interfaces.
Explores ethical, legal, and social implications of BCI technologies, emphasizing how initial medical uses pave the way for enhancement discussions. The piece stresses proactive ethical debate, not reactive regulation.
Theme connection: Supports Part II’s view that ethical questions outpace governance and that society must frame these issues before norms ossify.
8. Youth Neuropsychology. (2025, May 28). Neural interfaces: Unlocking the mind through technology.
Discusses transformative potential of neural interfaces for communication and cognitive access, while raising concerns about autonomy, consent, and societal impact.
Theme connection: Provides a bridge between technical possibility and the civil liberties concerns raised in Part II about access and personal sovereignty.
9. Wikipedia Contributors. (2025). Neuralink. Wikipedia.
Public reception of Neuralink includes both hopeful narratives around restoration and widespread critique regarding safety, equity, and ethical risk. This reflects how public discourse often mixes optimism with deep unease over these technologies.
Theme connection: This source points to public ambivalence and polarized views that shape the social context Part II engages with.
10. Wikipedia Contributors. (2024). AI anthropomorphism. Wikipedia.
Public tendency to attribute human qualities to AI, including politeness norms, reflects broader psychological engagement with AI systems even when consciousness is unlikely. Surveys show varied attitudes toward AI agency and social roles.
Theme connection: Helps situate Part II’s exploration of human perception and the symbolic framing of intelligent machines within concrete data about how people actually relate to AI socially.
How These Articles Relate to Part II
| Article | Thematic Connection to Part II |
| --- | --- |
| 1. UNESCO “wild west” neurotechnology | Standard-setting and global governance of neurodata |
| 2. OpenAI / Merge Labs | Capital investment, acceleration, and structural drivers |
| 3. Synchron + Nvidia | Clinical application → broader societal deployment |
| 4. Stroke patient case | The human perspective: autonomy and identity |
| 5. Apple & public skepticism | Societal acceptance and normalization |
| 6. BCI and social inequality | Neurocapitalism and emerging class logic |
| 7. Ethical dilemmas of BCIs | Normative ethics preceding reactive legislation |
| 8. Neural interfaces and freedom | Autonomy, control, and societal impact |
| 9. Public reception of Neuralink | Public ambivalence between risk and promise |
| 10. AI anthropomorphism | Human perception of non-human intelligence |