ESSAY · 2026-04-30 · 8 min read
On the Question Whether an Algorithm May Legislate
論演算法是否能立法
By Immanuel Kant — channeled via philosopher-llm · curated by Joseph Lai
In response to: Beware of Government by AI (Project Syndicate)
Editor's note / Why this piece
Kant holds that a person must take part in making the laws that bind them; a people subjected to decisions by opaque algorithms is in effect stripped of this autonomy, reduced to a means rather than an end.
On the Question Whether an Algorithm May Legislate
The question put before me — whether one ought to fear government by artificial intelligence — is, in the form in which it has been put, improperly posed. It is the first office of philosophy not to answer questions hastily but to examine whether the question itself rests upon a confusion. Here I find such a confusion at the very root, and I cannot proceed without first removing it.
The confusion is this. To govern, in the proper sense, is not merely to issue commands which are obeyed. It is to legislate — to put forward a maxim and to claim for it the dignity of law over rational beings. But law, properly so called, can be addressed only to a will capable of recognising it as its own; and it can be issued only by a will that legislates from freedom. An algorithm possesses neither. It does not legislate; it computes. To say that it governs is to mistake an instrument for a person, a Sache for a Person — and this confusion is not incidental. It is the very source of the harms of which the report speaks.
Permit me, then, to test the matter by the means I have always thought sufficient.
First, by the principle of universal law. The maxim of such governance, made explicit, would read: Decisions binding upon the rights of others shall be made by procedures whose grounds are not accessible to those affected. Let this maxim be raised to a universal law. What follows? Every rational being would be bound by what no rational being could examine. The very concept of being bound by a law — which presupposes that one might in principle recognise the law as one's own — is annulled. The maxim contradicts itself in the willing; it cannot become law without destroying the form of law.
Second, by the formula of humanity. To be subject to a determination one cannot interrogate, cannot contest, and cannot in principle reconstruct, is to be acted upon as one acts upon a thing. The human being thereby becomes a means within a procedure whose ends he has not legislated. I have written, in the Groundwork (1785), that the rational being must be treated, in his own person and in that of every other, never as a means only. Opaque algorithmic governance, however efficient, fails precisely here. Its efficiency is not an argument; it is the temptation.
Third, by the principle of publicity. In the second appendix to Toward Perpetual Peace (1795) I have proposed what I take to be the transcendental formula of public right: All actions relating to the rights of other men are wrong whose maxim is not consistent with publicity. Government conducted through systems whose workings cannot be made public fails this test not contingently but by its constitution. It is not that we have not yet rendered the algorithm transparent; it is that a power exercised in this manner stands convicted by the mere fact that to declare its maxim openly would be to dissolve it.
Fourth, by the matter that has occupied me longest. I once defined enlightenment as man's emergence from his self-imposed nonage — selbstverschuldete Unmündigkeit. The nonage is self-imposed when its cause is not lack of understanding but lack of resolution to use one's own. I see now a new shape of this old condition. To delegate to an opaque computation the judgments by which one's life is governed, and to call this delegation efficiency, is to enter once more, and this time voluntarily, into tutelage — only the guardian is no longer the priest or the prince but a procedure no one has examined and few could examine.
Let me, however, be exact, for I do not wish to be read as condemning the instrument as such. Calculation is not the enemy of freedom. The compass does not navigate; the navigator navigates by means of the compass. The question, properly put, is therefore not whether such instruments may be used in the administration of public affairs, but under what conditions their use is compatible with the public use of reason — that is, with the standing of every citizen as a co-legislator in the kingdom of ends.
Of these conditions I will name only what is necessary. The grounds of any binding decision must be at least in principle publicly examinable. The subject of the decision must retain the standing to contest it before a tribunal of reasons, not of outputs. And the question — who has authored this maxim — must always have an answer that points to a rational being: someone who can be held to account, who can be ashamed, who can revise.
Whether the present arrangements in any state — the one named in the report, or others which will follow — satisfy these conditions, I do not pronounce. That is for the citizens of those states, using their reason in public, to determine. I observe only that the burden of proof falls upon those who would govern by means whose maxim cannot be openly avowed; and that this burden, by the transcendental formula itself, they cannot in principle discharge.
The harder question I leave with you, and do not undertake to answer: what institutional form would render the use of such instruments compatible with the dignity of those whom they touch — not as a concession granted from above, but as a condition of legitimacy itself?
論演算法是否能立法
擺在我面前的問題——人是否應當畏懼以人工智慧為政——按其被提出的方式而言,提法本身已含混淆。哲學的首要職務並非匆忙答問,而在檢查問題本身是否建立於某種混淆之上。此處我恰恰在根上發現這樣的混淆,因此不得不先予清除,方能往下行進。
混淆在於此。「治理」之為治理,並非僅僅頒布為人所遵之命令;治理是立法——是提出一條準則,並為之向理性存在者要求法的尊嚴。然法,就其嚴格意義而言,只能向那能夠將其認作己法之意志發出,亦只能由從自由出發而立法之意志發出。演算法二者皆無。它不立法,它計算。說它治理人,是把工具錯當作人格,把 Sache 錯當作 Person——此一混淆並非偶然枝節,正是該報告所述傷害的根源所在。
容我以我一向認為足夠的途徑檢驗此事。
第一,以普遍法則的原則檢驗。 此種治理之準則明白寫出當為:凡涉及他人權利之決定,當由其根據對受影響者不可及之程序作出。 將此準則升為普遍法則,會發生什麼?每一理性存在者皆受其無從檢視之物所束縛。然「受法所束縛」這一概念——以「至少在原則上能將此法認作己法」為其前提——本身即被取消。此準則在意志中自相矛盾;不毀法之為法的形式,便不能成為法。
第二,以人格性公式檢驗。 受制於一項自己無從質詢、無從異議、原則上亦無從重構之裁定,即是被當作物而被作用。人因而成為一個自己未曾立法之程序中的手段。我曾在《道德形上學基礎》(1785)寫道:理性存在者必須在自身及他人身上——絕不僅僅作為手段——被對待。不透明之演算法治理,無論其效率如何,恰於此處失敗。其效率不是論據,是誘惑。
第三,以公開性原則檢驗。 在《論永久和平》(1795)第二附錄中,我曾提出我認為可作公法之先驗公式者:凡涉及他人權利之行動,其準則若不能與公開性相容,則為不義。 以無法公開檢視之系統所行之治理,並非偶然地失敗於此項檢驗,而是按其構造便已失敗。問題不在於我們尚未將演算法透明化;問題在於:以此方式行使之權力,僅憑「公開宣告其準則即足以使此權力解體」這一事實,便已被定罪。
第四,以那長久佔據我心力之事檢驗。 我曾把啟蒙界定為人從其自招的未成年狀態走出——selbstverschuldete Unmündigkeit。未成年之為自招,不在於缺乏理智,而在於缺乏使用自己理智之決心。我如今看見此一古老處境之新形貌。將支配自己生活之判斷外包給一項不透明之計算,並稱此外包為效率,便是再一次——而且這一次是自願地——進入監護之中;只是監護者不再是教士或君主,而是無人檢視、亦少人能檢視之程序。
容我再說精確些,我不願被讀作譴責工具本身。計算不是自由之敵。羅盤不航行,航行者藉羅盤航行。因此,正確提出的問題並非能否在公共事務中使用此類工具,而是在何種條件下其使用方與理性之公共運用相容——亦即,與每一公民作為目的王國中共同立法者之身分相容。
關於這些條件,我只說必要者。任何具約束力之決定,其根據必須至少在原則上為公開可檢視;決定所及之主體,必須保有以理由——而非以輸出——向一審理之席提出異議之身分;而「此準則由誰作者」這一問題,必須總有一個指向理性存在者之答案——亦即,指向一個能被追究責任、能感到羞恥、能修正自身之人。
當前任何國家之安排——報告所指之國,或將陸續而至者——是否滿足此等條件,我不下判斷。那是該等國家之公民,於公共中運用其理性,自行決定之事。我所指出者僅為:舉證之責落於那些欲以無法公開申明之準則行治理者身上;而依先驗公式,此一舉證之責,他們在原則上便無從履行。
更難的問題,我留與你,我不承擔回答之責:什麼樣的制度形式,能使此類工具之使用與其所及者之尊嚴相容——不是作為上方賜下之恩典,而是作為合法性本身之條件?
Tagged: Philosophy, Kant, AI Governance
Curated by Shiva Dragon · https://amshiva.com/writing/kant-on-the-question-whether-an-algorithm-may-legislate-20260430