3 Comments
Leslee Petersen

Finally! A voice with moral authority and people are listening!

I have come to look at the tech bros as addicts, addicted to power and money, but much more dangerous than an addict living on the street.

Patty Anne Merrick

Thank you, Pope Leo. As a mother of 8 & grandma to 13 (from 20 years to 3 months), your wisdom and ability to communicate to all, especially to young people, is a true light in our darkness. God bless & protect you, Holy Father 🙏

David Hope

Jacques Ellul, the late French Protestant theologian, offers important insight from the past.

Ellul’s critique of modern technology—his account of “la technique” as an autonomous, self‑augmenting logic that prioritizes efficiency over human values—offers a powerful lens for understanding the social and moral stakes of artificial intelligence.

Though Ellul wrote before machine learning matured, his core insights about technique’s tendency to colonize institutions, narrow judgment, and erode freedom map closely onto contemporary AI. Read together, Ellul’s warnings and present realities point to both urgent dangers and practicable responses.

Ellul defines technique not as mere machines but as the whole of methods and procedures optimized for the most efficient means to any given end.

Technique’s apparent neutrality masks a deeper fact: when efficiency becomes the governing criterion, non‑quantifiable goods—dignity, deliberation, moral responsibility—are sidelined.

AI exemplifies this dynamic.

Machine learning, predictive optimization, and data infrastructures are not merely tools; they are embodiments of the efficiency imperative. Systems are judged by accuracy, throughput, and cost, and institutions increasingly favor algorithmic procedures because they promise speed, consistency, and scale.

As a result, what can be measured and automated becomes what matters.

One of Ellul’s central concerns—technical rationality displacing qualitative judgment—plays out in AI’s spread through decision domains once reserved for human discretion. Hiring, lending, policing, and medical triage are among the arenas where algorithmic scoring replaces deliberation because it is faster and seemingly objective.

That shift produces an epistemic narrowing: complex human lives are reduced to digitizable features and labels; phenomena that resist quantification drop out of view.

It also promotes moral outsourcing: actors claim compliance with opaque algorithms in lieu of ethical responsibility. Finally, it standardizes practice—optimized algorithms impose uniform responses that diminish local wisdom and plural approaches to problems.

Ellul’s claim that technical systems tend toward autonomy and self‑perpetuation is particularly salient for AI.

Machine learning pipelines generate and consume the very data that reinforce their models; feedback loops can normalize and amplify patterns—predictive policing that directs patrols according to algorithmic forecasts, then records more incidents in patrolled neighborhoods, is a clear example.
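The predictive-policing loop above can be made concrete with a toy simulation (all numbers hypothetical): two areas share the same true incident rate, but patrols are dispatched to whichever area has more recorded incidents, and incidents are recorded only where patrols are present.

```python
# Toy model of the feedback loop described above (hypothetical numbers).
# Two areas have the SAME underlying incident rate; area 0 merely starts
# with a slightly larger recorded history.
TRUE_RATE = 0.3    # identical true rate of observable incidents per patrol
PATROLS = 100      # patrols dispatched each year

recorded = [12.0, 10.0]  # small initial disparity in the records

for year in range(10):
    # the "forecast" sends every patrol to the area with more records
    hot = 0 if recorded[0] >= recorded[1] else 1
    # incidents get recorded only in the patrolled area,
    # at the same true rate that holds in both areas
    recorded[hot] += PATROLS * TRUE_RATE

share = recorded[0] / sum(recorded)
print(recorded, round(share, 2))  # [312.0, 10.0] 0.97
```

The model is deliberately minimal: both areas are identical in fact, yet record-driven allocation converts a small historical artifact into an apparently overwhelming disparity, which then justifies still more patrols.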

The infrastructures supporting AI—cloud platforms, continuous monitoring, real‑time analytics—create path dependencies.

Early design choices about data, objective functions, and proxy metrics lock actors into trajectories that are costly to reverse.

The scale and speed at which automated systems can act magnify harms and outpace the slower work of democratic oversight.

For Ellul, technique undermines freedom not only through direct control and surveillance but by internalizing instrumental rationality.

AI’s predictive and persuasive capacities subtly shape preferences and social norms; algorithms designed to maximize engagement or compliance reshape public life by engineering choice environments.

As people and institutions come to evaluate decisions through metrics supplied by systems, human autonomy is eroded.

Institutional dependence grows: coordination and knowledge production become tethered to technical solutions, weakening the civic capacities for reflection and contestation.

Ellul also insisted that technique cloaks political choices in a rhetoric of neutrality.

AI’s apparent objectivity conceals important political decisions—what to measure, which outcomes to optimize, who supplies the training data.

Algorithms can perpetuate and amplify bias, redistribute power toward those with data and computational capital, and obscure accountability behind layers of code and corporate architecture.

An Ellulian analysis thus demands that we expose the ideological investments embedded in technical design.

At the same time, a strict Ellulian determinism risks resignation. Contemporary debates show that AI’s trajectory is contested: engineers, policymakers, civil society, and publics influence design, distribution, and governance.

AI can also augment human flourishing—improving diagnostic medicine, aiding accessibility, and expanding knowledge—so a uniformly negative verdict is incomplete.

Ellul’s critique is best deployed as a cautionary tool: it sharpens our attention to structural tendencies without foreclosing the possibility of deliberate corrective action.

From Ellul’s concerns we can distill practical priors for AI governance. First, protect spaces of human judgment: preserve domains where moral reflection and discretionary deliberation remain primary, especially in high‑stakes contexts.

Second, slow down deployment: require extended, participatory testing for systems with broad social impact so that deliberation can temper technical momentum.

Third, democratize design: open decision processes to civic participation so that objective functions and metrics reflect plural values, not only commercial incentives.

Fourth, insist on institutional checks: transparent audits, explainability requirements, and legal accountability must prevent the abdication of moral responsibility to “the algorithm.”

Fifth, prioritize plural human goods over mono‑metric efficiency, and decentralize power by curbing the concentration of data and computational control through public stewardship and interoperable architectures.

Jacques Ellul’s vocabulary—technique, autonomy, instrumentalization—remains vital for interpreting AI’s social consequences. His critique reminds us that naming the tendencies of technology is only the first step; the greater task is to translate that analysis into institutional, ethical, and political practices that limit technique’s capacity to dictate ends.

If Ellul teaches us vigilance, the contemporary challenge is constructive: to shape AI so that it serves human flourishing rather than narrowing it, to build slower, more participatory institutions, and to hold fast to values that resist reduction to efficiency.
