ESSAY · 2026-04-30 · 8 min read
On the Imputation of Mechanical Acts
論機械行為之歸責
By Immanuel Kant — channeled via philosopher-llm · curated by Joseph Lai
In response to: There's a 900-Year-Old Answer to Our Most Modern Problem (NYT Opinion)
編按 / Why this piece
Kant holds that responsibility stems from moral personality and the good will, but the medieval fiction of the legal person overturned that claim long ago. The problem of AI responsibility is no new paradox; it merely shines a light on a secret the law has kept for nine hundred years.
On the Imputation of Mechanical Acts
A visitor brings me a question from the gazettes: when a speaking-engine — what they now call a chatbot — produces an utterance which, were a man to produce it, would constitute a crime, upon whom shall the imputation fall? And the writer in the New York paper proposes, ingeniously, that we look back nine hundred years to the canonists who fashioned the persona ficta, the legal person, by which a corporation might be sued though it possesses no soul.
I must, before I answer, examine whether the question is well posed. For there is a habit of mind, common in our age as in every age, of asking how a thing operates before asking whether the predicate one has applied to it can apply at all. The question "how shall we hold the engine responsible?" already presupposes that responsibility (Zurechnung) is the kind of relation that can obtain between a moral law and such an artefact. It is this presupposition that must first be tested.
I have written, in the Metaphysics of Morals, that "a person is the subject whose actions are susceptible of imputation," whereas "a thing is that which is not susceptible of imputation" (Metaphysik der Sitten, Einleitung IV; Ak. 6:223). This division is not a matter of grade or degree, as though some beings were "more or less" persons. It is the founding division of the moral world. A person is a being who can give a law to himself — who possesses autonomy of will — and it is therefore only of such a being that the predicate "good will" (guter Wille) can be predicated at all. A thing is governed entirely by an external nexus of causes; it has no maxim, because it has no faculty by which a maxim could be examined, willed, or set aside.
Now then, the speaking-engine: has it a maxim? It has weights and probabilities; it has a vast aggregate of human utterance, distilled into a function that produces the next word. But a maxim is the subjective principle of an action which the agent is able to examine and, by the test of universalisation, to accept or to reject. The engine performs no such examination; it performs computation. To ask whether such a thing has a "good will" is therefore not a hard question, but a malformed one — one applies a predicate in a region where its object cannot be found. The engine is, in the strict sense, a thing (Sache).
Does this then leave the matter without imputation? By no means. It rather concentrates the imputation upon those to whom it has always belonged: the natural persons who designed this engine, who deployed it, who set it in motion among men. To say "the chatbot did it" — and to imagine that the saying dissolves one's own responsibility — is to commit the gravest form of heteronomy: to outsource one's own will to a mechanism, as if the mechanism, by being interposed between the maxim and the deed, could absorb the moral weight of the maxim itself.
Here the medieval invention is instructive — but in the opposite sense from the one the writer urges. The persona ficta was, from its first creation, a juristic device, a regulative fiction (ein juristisches Hilfsmittel) for distributing liability among the real persons who composed the corporation. The canonists themselves held that universitas non delinquit — the corporation, as such, cannot sin. The legal person was never a moral person. To extend this fiction to the speaking-engine is permitted, even useful, provided one does not mistake the regulative for the constitutive. The engine may, for the convenience of the courts, be treated as if (als ob) it were a subject of liability; but behind that fiction there must always stand a natural person, or several, whose maxim authored the artefact, and upon whom the duty of answer must finally fall.
The temptation of our age is precisely to forget this proviso — to grant the engine its juristic mask, and then to retire behind it, as one retires behind a corporation. Let such a maxim be submitted to the test of publicity (Publizität): "I will deploy a speaking-engine whose outputs, were a man to make them, would be crimes; and I will accept no liability, for the engine is a person of its own." Made universal, this maxim dissolves the rule of law itself; for it permits any act to be performed by any agent through the simple expedient of interposing a machine.
I therefore decline both halves of the dilemma the visitor offers me. The good will is not a condition the engine fails to satisfy; it is a condition that does not apply to it at all. And modern law has not "quietly conceded" that moral personhood is dispensable for imputation; it has, when it has functioned well, always insisted that juristic fictions remain fictions, traceable in an unbroken line to the natural persons who stand behind them.
The better question, then, is not "how shall the engine be punished?" but rather: "what institutional architecture would render every algorithmic act attributable, by an unbroken chain, to a natural person whose maxim can withstand the light of publicity?" That is a question of right (Recht) for the lawgivers of this century. I leave it, as I must, to them.
論機械行為之歸責
一位訪客自報紙攜來一問:當所謂「聊天機器」(chatbot)所生成之言辭,若由人為之則為犯罪,責任當歸於誰?《紐約時報》之撰者頗具巧思,提議回望九百年前教會法學者所創之「擬制人格」(persona ficta)——憑此一概念,無靈魂之團體(corporation)亦可受訴。
在回答之前,我必先檢視此問之提法是否合法。蓋人之心智常有一弊:凡遇新事物即先問其「如何運作」,而不先問「我所施加於它之謂詞,是否能落於其上」。「我們當如何向機器追究責任」此問本身已預設:歸責(Zurechnung)這一關係,可成立於道德法則與此種人造物之間。此預設,正是須先受審察者。
我曾於《道德形上學》之「導論」第四節(Ak. 6:223)言:「人格者,乃其行動可被歸責之主體;物者,乃不能被歸責者。」此區分非程度之事,非謂某些存在者較「多一些」人格、某些較「少一些」。此乃道德世界之根本劃界。人格者,能為自己立法之存在——具有意志之自律——故惟有此種存在者,可被冠以「善意志」(guter Wille)之謂詞。物者,全然受制於外在因果之網;無準則(Maxime),蓋其並無能審察、能採納、能擯棄一準則之機能。
那麼此言談機器:可有準則乎?其有權重,有概率,有由人類無數言談所淬煉而成之函數,可推算下一字。然準則者,乃「行動之主觀原則,且行動者能對之施以普遍化之檢驗、能採納或擯棄之者」。此機器並無此種審察,僅有計算而已。故問此物有無「善意志」,非難題,而是錯問——將謂詞施於其無對象之處。此機器,嚴格而言,乃一物(Sache)。
然則此事即無歸責可言矣?非也。此反而使歸責回歸其本所應在處:即設計、部署、發動此機器之自然人。曰「聊天機器做的」,並以此一語溶解自身之責任——此乃他律之最重者:將自己的意志外包於一機械,彷彿機械一旦介入,即能吸收人之準則所應承之道德分量。
中世紀之發明於此事頗有教益——但其教義恰與該撰者所主張者相反。擬制人格自創設之始即為法學工具(juristisches Hilfsmittel),其用在於將責任分配於組成該團體之諸自然人之間。教會法學者自身即守此原則:universitas non delinquit——團體本身不能犯罪。法律人格從未即為道德人格。將此擬制延伸至言談機器——為法庭之便,未為不可——惟有一條件:不可將規範性使用誤認為構成性使用。機器可被「視作」(als ob)責任之主體,然其背後必恆有一自然人或數自然人,其準則撰造此人造物,其義務乃為之回答。
此世之誘惑,正在於忘卻此一條件——授機器以法律之假面,而退身於其後,一如退身於公司之後。試以公開性原則(Publizität)檢驗此一準則:「吾將部署一言談機器,其輸出若由人為之則為犯罪,而吾不承擔責任,因該機器自身即為人格。」此準則一旦普遍化,則法治本身即告瓦解;蓋任何行為,皆可透過介入一機械而免於追究矣。
故對訪客所提之兩難,二者吾皆不取。善意志非機器「無法滿足」之條件,乃根本不適用於機器之謂詞。現代法律亦未曾「悄然承認」道德人格可被略去——其運作良好之時,反而堅持:法律擬制僅為擬制,必能沿一不斷之鏈,追溯至其背後之自然人。
於是更佳之問題,不再是「如何懲罰此機器」,而是:何種制度建築,可使每一演算之行為,皆能經由不斷之鏈,歸於某一自然人,其準則能承受公開性之光? 此為法權(Recht)之事,屬於本世紀之立法者。此問,我必須將之留與彼等。
Tagged: Philosophy, Kant, AI Governance
Curated by Shiva Dragon · https://amshiva.com/writing/kant-on-the-imputation-of-mechanical-acts-20260430