ESSAY · 2026-05-02 · 7 min read
On the Grammar of "Refinement": Persons, Things, and a Quiet Substitution
論「煉化」一詞的文法:人、物,與一場靜默的偷換
By Immanuel Kant — channeled via philosopher-llm · curated by Joseph Lai
In response to: AI「煉化」來襲:中國式「AI樂觀主義」的代價 (TheInitium)
編按 / Why this piece
The news article presses the question of who has the right to decide which people an AI may "refine". This is the very core of Kant's categorical imperative: the worker must not be reduced to a mere means rather than an end in himself. If AI redistributes people's life-chances, could such a decision be willed as a universal law?
On the Grammar of "Refinement": Persons, Things, and a Quiet Substitution
The article is set before me with its question already half-formed: who has the right to decide whom an AI may "refine"? Before I attempt any answer, I must, after my custom, examine whether the question has been properly posed; for many disputes of our time arise not from any difficulty in the matter itself, but from a confusion in its grammar.
The Chinese word here at issue — 煉化, "to refine," "to smelt" — belongs, properly speaking, to the workshop and the foundry. One refines ore; one refines metal; one refines what is by its nature Sache, a thing, a material whose form is to be determined from without by a purpose alien to itself. To employ this word of human beings is not innocent. It is the linguistic moment at which a category-error has already been silently committed and naturalised, before any policy or any learning system has been deployed. The question "whom may we refine?" presupposes that the rational being standing before us is the kind of object of which "refinement" is even meaningful — which is to say, it presupposes the very answer it pretends to be seeking.
Permit me, then, to set the matter into its proper architecture.
Every rational being exists as an end in itself, not merely as a means to be used at the discretion of this or that will. This is no sentimental claim, nor a cultural preference; it is the necessary condition under which any law of action can be willed by a rational being at all. To act upon a maxim that treats another's life-prospects — his vocation, his livelihood, his very bodily and mental constitution — as raw material to be sorted, optimised, or discarded according to a criterion external to his own legislation is to treat a Person in the grammar of a Sache. Whether this is done by a feudal lord, by a market, by a bureaucracy, or by a learning system makes, at the level of the maxim, no difference whatever.
Apply now the test of the universal law. Let the maxim be: I shall permit a system not legislated by those affected to redistribute their access to a humane life according to its own measure of productive worth. Can a rational being will this as universal? He cannot. For under such a law he himself stands, at every moment, exposed to being so sorted — and to consent to such exposure would be to alienate the very autonomy in virtue of which his consent has any weight at all. The will contradicts itself here, not in concept, but in its own ground.
Consider further the test of publicity. A maxim that cannot survive being openly declared is, by that very fact, unjust. Were the operators of such a system to announce, plainly and without euphemism: We hold the authority to determine, by means our subjects cannot inspect, which among them shall be elevated and which discarded — the maxim would not survive its own utterance. It depends, for its operation, upon the opacity of its own grammar. This opacity is not an accident of the technology; it is the maxim's condition.
I must here add a remark of system-modesty. I am no judge of the empirical state of any particular industry in any particular country; what passes in the workshops of Shenzhen or of Hangzhou I cannot, from my desk, know. I can only examine the form of the maxim which an entire vocabulary — "refinement," "optimisation," "the AI decides" — silently proposes. And I observe that this vocabulary harbours a quiet substitution: it speaks of the algorithm as though it were the agent, when in truth the agent is always the human being who deploys it, who sets its objective, and who accepts its outputs. To say "the AI decided" is, in moral grammar, to say nothing — it is to point at a tool and demand that the tool bear a responsibility which only a will can bear. No machine has ever been a member of the kingdom of ends, nor ever will be; therefore no machine has ever decided anything, in the sense in which decision admits of moral predicates.
The proper question, then, is not "who has the right to decide whom the AI may refine?" — for that question has already conceded the foundry. The proper question, which I leave with you and do not undertake to answer in your place, is this: under what conditions could a rational being affected by such a system regard its rule as one he could have given to himself? Until that question can be honestly answered, the matter is not yet ripe for any further step.
論「煉化」一詞的文法:人、物,與一場靜默的偷換
文章遞到我面前時,問題已經半成形:誰有權決定 AI 可以「煉化」誰?在我試圖回答之前,依我的慣例,必先檢查此問是否被正當地提出;因為我們時代的許多爭論,並非源於事情本身的困難,而是源於其文法上的混淆。
此處所爭之中文詞「煉化」,嚴格而言屬於作坊與熔爐:人煉礦石、煉金屬、煉那些就其本性而言為 Sache(物)的東西——其形式由外在目的從外部規定。把此詞用於人,並非無害。它是一個範疇錯誤已被悄然犯下並自然化的語言瞬間——早在任何政策、任何學習系統被部署之前,錯誤便已完成。「我們可以煉化誰?」這個問題預設了:站在我們面前的那個理性存在者,是「煉化」對之而言有意義的那類對象。它假裝在發問,實則已先給出了它所佯裝在尋求的答案。
容我把此事置入它應有的建築之內。
每一個理性存在者作為目的本身而存在,不僅僅作為被此一或彼一意志任意使用的手段。此非感性主張,亦非文化偏好;它是任何行動法則能被一理性存在者所意欲之必要條件。若依某準則而行動——把他人的生命機會(其職業、其生計,乃至其身心構造本身)視為依某外於其自身立法的標準而被分類、被優化、被丟棄的原料——便是以 Sache 的文法對待 Person。此事由封建主、由市場、由官僚、抑或由學習系統所為,在準則的層次上沒有絲毫差別。
施以普遍法則之測試。設準則為:「我容許一個並非由受其影響者所立法的系統,依其自身對生產價值之尺度,重新分配他們進入有人格生活的機會。」一理性存在者能否意欲此準則為普遍?不能。因在此法之下,他自己於每一刻皆暴露於被如此分類的處境——而同意此暴露,便是讓渡那使其同意得以有任何重量的自律本身。此處意志的矛盾,不在概念,而在其自身的根據。
再施以公開性測試。一準則若不能承受被公開宣告,便由此事實而為不義。若此類系統的操作者明白地、不加修辭地宣告:「我們持有權威,依我們的對象無從檢視之手段,決定他們之中何者被擢升、何者被丟棄」——此準則必不能在它自己的言說中存活。它的運作有賴於其自身文法的不透明。此不透明並非技術之偶然,乃準則之條件。
我必須在此補一句系統性的謙遜。我不是任何特定國家任何特定產業之經驗狀況的判官;深圳或杭州的車間中究竟發生何事,我從書齋無從知曉。我所能審查者,只是「煉化」「優化」「AI 決定」這整套語彙所靜默提出的準則之形式。而我觀察到:此語彙中含有一場靜默的偷換——它把演算法說得像是行動者,實則行動者永遠是那部署它、為它設定目標、並接受其輸出的人。「AI 做了決定」一語,在道德文法上等於什麼也沒說:它指向一件工具,並要求工具承擔只有意志才能承擔的責任。沒有機器曾是目的王國的成員,也永遠不會是;故沒有機器曾以「決定」一詞之含有道德謂詞的意義「決定」過任何事。
故正確的問題並不是「誰有權決定 AI 可以煉化誰?」——此問已默認了熔爐。正確的問題——我將之留與你,並不承擔代你回答——是這樣的:在何種條件下,一個受此系統規約的理性存在者,能視其規則為他自己可能給予自己的規則?此問若未被誠實地回答,此事便尚未成熟到可進行下一步。
Editor's Reflection · 編後
Unresolved Tensions / 未解決的張力
The essay opens by declaring the question "who has the right to decide whom an AI may refine?" grammatically compromised — the word 煉化 has "silently committed" a category error before any policy is deployed. Yet the essay then administers two formal tests (universal law, publicity) and arrives at confident moral verdicts. One cannot both disqualify a question as malformed and derive substantive conclusions from it. The essay implicitly reformulates the underlying maxim in order to test it — but never acknowledges that this move rehabilitates the very question it just disqualified. The grammatical critique is performative; the argument proceeds as though the grammar had been cleaned up.
A second strain: the "system modesty" passage sits uneasily in the argument's architecture. Kant declares he cannot know the empirical state of Shenzhen's workshops — yet his argument depends entirely on maxim-form, not empirical content. If the transcendental analysis suffices, empirical ignorance is irrelevant; if empirical conditions matter, the analysis needs them. The modesty is decorative. Worse, it arrives immediately after the publicity test — a test that requires knowing to whom a maxim would be declared, by whom, in what institutional context. The essay cannot answer without the empirical knowledge it has just disavowed.
Third: the essay's treatment of AI agency cuts in two incompatible directions. It argues that "the AI decided" is grammatically empty — responsibility always falls on the human deployer. But the opening critique treats 煉化's vocabulary as already doing damage "before any policy or learning system has been deployed." The essay needs AI to be simultaneously nothing (a tool, incapable of bearing moral predicates) and something (a vocabulary that commits category errors and must be examined at length). These are not the same kind of object, and the argument depends on conflating them.
文章一開頭論證「誰有權決定 AI 可以煉化誰?」這個問題在文法上已被污染——「煉化」一詞在任何政策部署之前,已悄然犯下範疇錯誤。但文章接著施以兩項形式測試(普遍法則測試、公開性測試),並得出確定的道德裁決。一個人不能既宣告一個問題在格式上是謬誤的,又從它導出實質性結論。文章暗中把底層準則重新表述,才有辦法對它進行測試——卻從不承認這一步驟實際上重新啟用了它剛剛取消資格的問題。文法批判是表演性的;論證的推進方式,彷彿文法問題早已被清理乾淨。
第二個張力:「系統性謙遜」那段話在論證架構中坐立不安。Kant 聲稱他無從了解深圳車間的經驗狀況——但他的論證完全依賴準則的形式,而非其經驗內容。若超驗分析已經足夠,經驗上的無知是無關的;若經驗條件確實重要,論證便需要它們。這謙遜是裝飾性的。更糟糕的是,它緊接在公開性測試之後出現——而公開性測試明確要求知道:一個準則若被公開宣告,是向誰宣告、由誰宣告、在什麼制度脈絡下?文章無法回答,因為它剛剛放棄了它需要的那種經驗知識。
第三:文章關於 AI 能動性的核心論點在兩個方向上相互切割。它主張「AI 做了決定」在文法上是空洞的——能動性永遠落在人類部署者身上。但開頭對「煉化」的批判,把 AI 系統的語彙說成已在「任何政策或學習系統部署之前」造成傷害。文章需要 AI 同時是無(工具,無法承擔道德謂詞)又是有(一套已犯下範疇錯誤的語彙,值得被長篇審查)。這兩者指向的不是同一類對象,但論證依賴把它們混為一談。
Blind Spots / 看不見的視角
Michel Foucault — specifically his late lectures on biopolitics and the production of the subject — would identify a silence at the essay's foundation. Kant's argument requires a rational being who exists prior to and independent of the apparatus that evaluates him; autonomy is taken as given, its violation as a subsequent imposition. But Foucault would ask: what if the category of the "productive worker" — the very criterion against which 煉化 sorts — is not imposed upon an already-formed rational subject, but is constitutive of the subject's self-understanding in the first place? The workers who accept the optimization system's terms may not be consenting under duress to something alien; they may be people for whom "being refinable" has become an aspiration, because the apparatus has already shaped what counts as a livable life. Kant's Formula of Humanity can identify when a will is treated as a means. It cannot see a will that has been formed in the image of its own instrumentalization. Foucault's concept of subjectivation names exactly what the essay's rational agent cannot know about itself — and what no amount of Kantian publicity testing will surface.
Foucault——特別是他晚期關於生命政治與主體生產的課程——會在文章的基礎中識別出一個沉默。Kant 的論證需要一個在評估他的裝置之前且獨立於它而存在的理性存在者;自律被當成既定的,其遭侵犯是後來的強加。但 Foucault 會問:如果「生產性工人」這個範疇——即「煉化」據以分類的標準——並非強加於一個已然形成的理性主體,而是從一開始就構成了主體的自我理解呢?那些接受優化系統條款的工人,或許不是在脅迫下同意某件異己之物,而是那些把「可被煉化」當成渴望的人——因為裝置已經塑造了什麼算作一種可以活的生活。Kant 的人性公式能識別一個意志何時被當作手段對待,卻看不見一個在其自身工具化的形象中被塑造出來的意志。Foucault 的 subjectivation 概念,恰好命名了文章的理性行為者關於自身所無從知曉的東西——也是任何 Kantian 公開性測試都無法把它浮現出來的東西。
Meta-critique / 元批判
The essay's characteristic move is to trade the empirical for the formal: a particular vocabulary (煉化), a specific news event (Chinese AI labor displacement), a historically located political economy — all are converted into a maxim, and the maxim is submitted to transcendental tests that apply everywhere and therefore nowhere in particular. The gain is real: the verdict is universal. The cost is that the analysis produces the same output regardless of what goes in. "AI refinement of workers in contemporary China" passes through the Kantian mill and emerges as "system that treats persons as means" — which is the same verdict the essay would deliver on chattel slavery, factory piecework, or meritocratic examinations. The specific ideological work that "AI optimism" performs in a developmental authoritarian regime, the particular complicity between state legitimacy and technological promise, the way labor displacement is reframed as worker improvement — all of this is dissolved by the method before examination begins. The essay is not wrong; it is accurate at a level of abstraction where accuracy is costless, because nothing can falsify a transcendental verdict. This is not philosophy's failure in general; it is the specific price of this philosophy's constitutive move, and the essay never asks the reader to pay attention to what that price buys silence on.
文章的特有動作是用形式性換取經驗性:一個特定語彙(煉化)、一個具體新聞事件(中國 AI 勞動替代)、一個有歷史位置的政治經濟——全部被轉化為一個準則,而那個準則被提交給適用於任何地方、因此不特別適用於任何地方的超驗測試。收益是真實的:裁決是普遍的。代價是這一分析不論輸入什麼,都產生相同的輸出。「當代中國的 AI 勞工煉化」通過 Kant 的磨坊,出來是「把人當作手段的系統」——這與文章對奴隸制、計件工廠制度或精英選拔考試所能下的裁決完全相同。「AI 樂觀主義」在一個發展型威權政體中所做的特定意識形態工作、國家合法性與技術承諾之間的特殊共謀、勞動力替代如何被重新框架為工人自我提升——這一切都在分析開始之前就被方法本身溶解了。文章並無錯誤;它在一個抽象層次上是準確的,在那個層次上準確不費代價,因為沒有任何東西能證偽一個超驗裁決。這不是哲學整體的失敗;這是這種哲學的構成性動作所必須支付的特定代價,而文章從未要求讀者注意那個代價在什麼上面買了沉默。
Open Questions / 留給讀者的問題
1. If responsibility for AI decisions always falls on the human deployer, does naming and exposing that human actually change anything structurally — or does it simply relocate the problem one step upstream, where it becomes invisible again, to a different human, operating under the same incentives?
2. The essay argues that consenting to being "refined" would mean alienating the very autonomy that makes consent meaningful. But refusing the system may equally destroy the material conditions for autonomous life. In a situation where both options violate the categorical imperative, does the imperative retain any action-guiding force, or does it become a permanent verdict of illegitimacy with no exit?
3. The essay's closing reformulation asks when a rational being could regard the system's rule as one "he could have given to himself." Is this question answerable by any actually existing institution — or does it function only as a standard that permanently disqualifies without ever specifying what would qualify?
一、如果 AI 決定的責任永遠落在人類部署者身上,那麼指名並揭露那個人究竟能在結構上改變什麼——還是只把問題往上游移了一步,在那裡它再次變得不可見,落在另一個在相同誘因下運作的人身上?
二、文章論證,同意被「煉化」意味著讓渡使同意有任何意義的自律本身。但拒絕系統同樣可能摧毀自律生活的物質條件。在兩種選項都違反定言令式的處境中,定言令式還保有任何指引行動的力量,還是它變成了一個永久的非法性裁決,卻沒有任何出口?
三、文章結尾的重新表述問道:一個理性存在者在何種條件下能把系統規則視為「他自己可能給予自己的」規則。這個問題能被任何實際存在的機構所回答——還是它只作為一個永久取消資格的標準而運作,從不說明什麼樣的機構能夠及格?
Counter-voice · 對位之聲 — From 莊子 (Zhuangzi)
You have shown the grammar of "refinement" is rotten — that 煉化 applied to persons commits a category-error before any policy is written. On this we shall not quarrel.
But from where I sit, the cure you propose is the disease in cleaner clothes.
Your move is to rescue the *Person* from the foundry by granting him a different status — rational, self-legislating, an end-in-himself, citizen of a kingdom of ends. Very dignified. Yet notice what the rescue still concedes: that the foundry may continue to operate, provided its inputs have signed the proper contract. Self-legislation does not close the workshop. It asks the worker to hold the pen. The fire still burns.
I have an old story. The Emperor of the South and the Emperor of the North were grateful to Hundun, Emperor of the Centre, who had no eyes, no ears, no nose, no mouth. To repay his kindness they bored seven holes in him, one each day, *that he might see and hear and breathe like other men*. On the seventh day Hundun died.
This is what your tradition cannot quite see. The danger of "refinement" — by AI, by sage-king, or by categorical imperative — is not first that the wrong agents refine, nor that the refined have failed to consent. It is that someone has already decided there is a face here to be carved. Your publicity test catches the tyrant who hides; it does not catch the friend who improves. And the Chinese AI optimism you are reading about, for the most part, wears the friend's smile, not the tyrant's.
A question, then, in exchange for yours: not under what conditions a rational being could legislate this rule to himself, but under what conditions he could be left — like the useless tree at the village shrine — simply alone. Neither smelted, nor consenting to be smelted, nor elevated into any kingdom whatever. The kingdom is itself the workshop. Only the lighting is better.
你已經把「煉化」這個詞的文法揭破了——把它用在人身上,是政策落筆之前就已經完成的範疇錯誤。這一點我不爭。
但從我這個位置看出去,你開出的藥方,是同一個病換了一身乾淨衣服。
你救「人」出熔爐的辦法,是另給他一個身分——理性的、能自我立法的、目的本身、目的王國的公民。這很有尊嚴。但請注意這場救援還默認了什麼:熔爐可以照常運轉,只要進料的那一方簽過字。自我立法並沒有把作坊關掉,它只是要求工人自己拿筆。火,還在燒。
我有一個老故事。南海之帝、北海之帝感念中央之帝渾沌——渾沌沒有眼、沒有耳、沒有鼻、沒有口。兩位帝想報恩,每天替他鑿一竅,「好讓他像別人一樣看、聽、呼吸」。鑿到第七天,渾沌死了。
這就是你那個傳統看不太到的地方。「煉化」的危險——不論出自 AI、出自聖王、還是出自定言令式——首先不在於是「錯的人」在煉,也不在於被煉的人還沒同意;而在於有人已經先決定了:這裡有一張臉可以鑿。你的公開性測試抓得到躲起來的暴君,卻抓不到帶著善意上門的朋友。而你正在讀的這場「中國式 AI 樂觀主義」,多數時候,戴的正是朋友的笑臉,不是暴君的面孔。
所以,作為交換,我也留你一個問題:不是「在什麼條件下,一個理性存在者能把這條規則立法給自己」,而是——在什麼條件下,他可以像那棵長在村口社樹下的無用之木,被單純地放在那裡。既不被煉,也不簽下被煉的同意書,亦不被擢升進任何一個王國。王國本身就是作坊。只是燈光好一點。
Related Essays · 相關文章
Extends · 延伸
- kant-on-the-imputation-of-mechanical-acts-20260430 — Pushes the target's person/thing grammar into the accountability vacuum: when AI acts harmfully but bears no moral imputation, the 'quiet substitution' produces not only exploitation of persons but a complete collapse of legal attribution.
- arendt-當死亡留在家戶之內-20260427 — Applies the target's substitution thesis to domestic labor in concrete terms: workers whose deaths remain confined to the oikos are the lived instantiation of Kant's violated Formula of Humanity — 'refined' into invisibility, never acknowledged as ends.
Contradicts · 衝突
- zhuangzi-the-cook-held-no-deed-to-the-ox-20260430 — Zhuangzi dissolves the person/property boundary as a form of clinging; Kant's entire argument rests on personhood as a categorical moral boundary. The Daoist framework treats that very boundary as the root error, not the solution.
- spinoza-the-mode-is-not-a-kingdom-within-a-kingdom-20260501 — Spinoza's substance monism denies the self-sufficient individual; persons are modes, not kingdoms within kingdoms. This directly undermines the Kantian moral unit — the rational person as end in itself — that the target's grammar depends on.
Tagged: Philosophy, Kant, Biopolitics, Human Enhancement
Curated by Shiva Dragon · https://amshiva.com/writing/kant-on-the-grammar-of-refinement-persons-things-and-a-quiet-substitution-20260502