The Invisible Power of Algorithms
On Systems Without a Face
For decades we imagined the technological future as a collection of spectacular machines: robots with human voices, flying cars, artificial intelligences conversing like people. While we waited for that aesthetic, another transformation, much quieter, had already taken place. The great novelty of our time is not the machines that talk, but the systems that classify. Systems without body or face that decide what information we see, what opportunities we are offered, what risks we appear to represent, and what portion of the world we are allowed to perceive.
Many of these systems work extraordinarily well. AlphaFold predicted the structures of more than two hundred million proteins in months, a feat that human structural biology took half a century to even approach. AlphaZero discovered strategies in chess and Go that no human had imagined. Fraud-detection algorithms prevent massive losses, and medical-imaging systems equal or surpass specialists in multiple pathologies. To deny this efficacy is as ideological as to celebrate it without nuance.
The counterfactual matters. The point of comparison is not an ideally just world, but the previous world of arbitrary human decisions, full of nepotism, prejudice, fatigue, and class bias. A well-designed algorithm, at the very least, does not get hungry. Algorithms amplify human ills, yes, but they can also reduce them. The relevant question is not whether they "discriminate," but against what baseline they are measured and which human decision they replace.
Frozen Correlations
That said, the real problems exist. The first is that algorithms tend to freeze past correlations. They detect real patterns (cultural, socioeconomic, historical, or biological), but they do not distinguish correlation from causation. The Optum case is by now a classic: a hospital management system used historical healthcare spending as a proxy for medical need. Statistically, it worked. Causally, it was unjust. Poor patients spent less because they had worse access, not because they were healthier. No one inscribed ideology there; the designers simply confused the proxy with the reality it stood for.
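A toy simulation makes the mechanism concrete. Everything here is invented for illustration (the group labels, the size of the access gap, the enrollment cutoff); it is a sketch of the proxy failure, not a reconstruction of the actual Optum system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: two groups with identical distributions of true
# medical need, but group B has worse access to care and spends less.
group_b = rng.random(n) < 0.5
need = rng.normal(size=n)                               # true health need
access = np.where(group_b, 0.5, 1.0)                    # access multiplier
spending = access * need + rng.normal(0, 0.3, size=n)   # observed proxy

# A model trained to predict spending would, once well fit, effectively
# rank patients by spending, so we skip the training step here.
enrolled = spending >= np.quantile(spending, 0.97)      # arbitrary cutoff

# Among the truly neediest patients, who actually gets enrolled?
neediest = need >= np.quantile(need, 0.97)
for name, mask in [("A", ~group_b), ("B", group_b)]:
    covered = (enrolled & neediest & mask).sum() / (neediest & mask).sum()
    print(f"group {name}: {covered:.1%} of its neediest patients enrolled")
```

By construction, need is identically distributed in both groups; the enrollment gap is produced entirely by the proxy.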
Not all domains are alike. In computational chemistry, protein folding, or weather forecasting, the reference truth is physical and independent of the observer. By contrast, in criminal justice, credit, hiring, or content moderation, the ground truth itself is contaminated by prior human decisions. Conflating the two cases is the symmetric error of naive optimism and indiscriminate critique.
The Impossibility of Fairness
The second problem runs deeper: the mathematical incompatibility between different notions of fairness. The COMPAS case illustrates this almost didactically. ProPublica denounced unequal false-positive rates for Black defendants. The company replied that the system was well-calibrated across groups. Both claims were correct. When base rates differ between groups, formal definitions of "fairness" (demographic parity, equal error rates, calibration) cannot be satisfied simultaneously. Every implementation necessarily chooses who carries more of the errors. That choice is not technical. Nor is it automatically conspiratorial. It is moral, with winners and losers.
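The incompatibility is not an empirical accident but an arithmetic identity. For any binary classifier, the false-positive rate is pinned down by the base rate p, the positive predictive value, and the false-negative rate: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). A few lines make the consequence visible; the rates below are invented for illustration, not COMPAS's actual figures:

```python
def fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False-positive rate implied by prevalence, PPV, and FNR.

    Derivation: TP = (1 - FNR) * positives and FP = TP * (1 - PPV) / PPV,
    so FPR = FP / negatives = p/(1-p) * (1-PPV)/PPV * (1-FNR).
    """
    p = prevalence
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# The same classifier, equally calibrated on both groups (same PPV) and
# with the same miss rate (same FNR), applied where base rates differ:
for group, base_rate in [("A", 0.3), ("B", 0.5)]:
    print(f"group {group}: FPR = {fpr(base_rate, ppv=0.6, fnr=0.3):.1%}")
# group A: FPR = 20.0%
# group B: FPR = 46.7%
```

Holding calibration fixed forces unequal false-positive rates; equalizing the false-positive rates instead would break calibration. No threshold escapes the identity.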
Mathematics does not neutralize values; it conceals them. But we should not fall into the opposite trap either: declaring "biased" any result that contradicts strong egalitarian premises, without examining whether it reflects reality or an artifact of the system.
Feedback Loops and Opacity
A graver risk follows. Algorithms do not only observe the world; sometimes they manufacture it. Feedback loops can turn vicious (extreme recommendations on YouTube, predictive policing that generates the very arrests it later cites as confirmation). Not all loops are bad, however. Some antifraud and quality-control systems improve progressively.
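The predictive-policing loop can be closed in a dozen lines. The setup below is deliberately artificial (two districts with identical true crime rates, patrols steered toward whichever district has more recorded arrests), but it shows how a one-arrest difference becomes self-confirming:

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1   # identical in both districts by construction
arrests = [11, 10]      # a one-arrest difference in the historical record

for _ in range(20):
    # "Predict" crime from past arrests and send most patrols
    # to the district that looks worse on paper.
    hot = 0 if arrests[0] >= arrests[1] else 1
    patrols = [20, 20]
    patrols[hot] = 80
    # Arrests can only happen where officers are present.
    for d in range(2):
        arrests[d] += sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(patrols[d]))

print(arrests)
# Both districts had the same true crime rate, yet the record now shows a
# wide gap, and the next round of predictions will cite it as confirmation.
```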
The most structural asymmetry is auditability. We can be judged by systems that neither we nor, often, their own operators fully understand. Here the comparison with human opacity is valid but incomplete. A human being can be interrogated, shamed, or disqualified. A model cannot. Algorithmic opacity is structural, and it tends to exempt everyone from responsibility.
The concentration of algorithmic power in a handful of corporations aggravates all of this. Heavier regulation, however, tends to consolidate the incumbents. The more effective response is usually greater openness, competition, and plurality of models, though surgical, evidence-based regulation has its place.
The Real Risk
The real risk is not a rebel AI, but a silent and opaque administration of human life: teachers, doctors, judges, and recruiters who end up signing off on decisions they no longer fully control.
It is worth projecting four plausible future scenarios. The first is the mute administocracy: more automation, nominal human override, gradual loss of the institutional capacity to explain its own decisions. The second is epistemic correction: real auditability standards, causal validation, external red teaming. The third is cognitive fragmentation: regionally and culturally diverse models, where plurality is gained and epistemic unity is lost. The fourth is the atrophy of judgment: generations that no longer know how to decide without algorithmic assistance, where the critical faculty becomes a vestigial skill. We will probably live with some mixture of all four. The question is which way we want to tip the balance.
Who Bears Responsibility
Let us acknowledge an uncomfortable limit. It is unrealistic to expect that most of society will soon acquire a technical culture sufficient to evaluate these systems, when even the experts themselves cannot reach consensus. Mass technical literacy will not arrive in time. Transferring that burden onto the citizen is, in itself, a sophisticated form of evasion. The heavier moral burden must fall on those who design, deploy, and operate these systems. Asymmetric power demands asymmetric responsibility.
Sensible responses follow five key principles, distributed across two planes of responsibility.
What falls on those who design and deploy:
- Domain restriction: not everything should be automated (serious criminal sentencing, child custody, denial of critical treatments, and so on).
- Asymmetric prudence: deliberately bias the system toward the less harmful error (see the sketch after this list).
- Effective transparency and a genuine right of appeal.
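What asymmetric prudence means operationally can be stated in a few lines. For a calibrated score, the cost-minimizing rule flags a case only when the predicted probability exceeds cost_fp / (cost_fp + cost_fn); the fraud-screening framing and the cost ratio below are invented for illustration:

```python
def decision_threshold(cost_fp: float, cost_fn: float) -> float:
    """Probability threshold that minimizes expected cost: flagging costs
    (1 - p) * cost_fp in error, not flagging costs p * cost_fn, so flag
    only when p > cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# Invented costs: wrongly freezing an account (false positive) is judged
# ten times more harmful than letting one fraudulent charge through.
t = decision_threshold(cost_fp=10.0, cost_fn=1.0)
print(f"flag only when P(fraud) > {t:.0%}")   # flag only when P(fraud) > 91%

def decide(p_fraud: float) -> str:
    # The deliberate bias: when in doubt, make the less harmful error.
    return "freeze account" if p_fraud > t else "let through, review later"
```

The point is not the arithmetic but who sets the costs: the asymmetry is a moral choice made explicit, not a parameter the model can learn on its own.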
What falls on institutions, markets, and society:
- Openness and competition: open models, and the reduction of barriers that protect the incumbents.
- Clear separation of roles: the algorithm as support, never as a shield of responsibility. Internal technical dissent must be protected.
A technically ignorant society does not resist the systems that administer it; it obeys them without knowing it. But that ignorance is an additional reason to build with care, not an excuse to build without it. A power that cannot be seen, that cannot be understood, and that has no identifiable author is, historically, the hardest to revoke. All the more reason for those who design these systems to accept limits now, before dismantling them later becomes impossible.