https://www.hk-lawyer.org/content/defamed-robot-artificial-intelligence-internet-and-law-defamation

Defamed by a Robot? Artificial Intelligence, the Internet and the Law of Defamation
In Smart Until It’s Dumb, Dr Emmanuel Maggiori, a computer software engineer who wrote algorithms for Expedia, argued that artificial intelligence (“AI”) was an overhyped, monumental bubble about to burst. That was January 2023. What if his prediction was wrong, and ChatGPT, the automated text-generation service powered by AI (i.e., a “Chatbot”), or its future incarnations, replaced humans as the dominant creator of written works? What are the legal implications?
AI is not new. For many years it has been possible to play chess, or mahjong, against players that are AI computer programmes rather than humans, and many people have done so regularly. Famously, Garry Kasparov, then the reigning World Chess Champion, was defeated by Deep Blue, an IBM supercomputer trained to play chess, in a match played under tournament conditions. That was 1997, i.e., 26 years ago.
However, ChatGPT, launched by OpenAI in November 2022, was quite different in that it was able to generate complex conversations and mimic writing styles, making factual assertions along the way based on its training database (itself selectively downloaded from the Internet in 2021).
In an article published by Legal Cheek on 23 March 2023, it was claimed that a Reddit user asked ChatGPT to put a spin on Donoghue v Stevenson [1932] AC 562 and explain the facts of that case to him “in a gangsta way”. The result was hilarious, completely harmless, and a good laugh.
Indeed, ChatGPT could apparently generate text about literally anything, from the law of defamation to the design of aeroplanes and everything in between. The problem, however, was that its factual assertions were frequently false, and sometimes seriously defamatory.
Many would cite the famous example of Brian Hood, an elected mayor in Australia, who became concerned about his reputation when, according to Reuters and the BBC, members of the public told him that ChatGPT described him as a convicted criminal sentenced to 4 years in prison for bribery. That was untrue: he was in fact the whistle-blower who had reported other people’s criminal activities to the authorities. He retained lawyers, who sent a demand letter to ChatGPT’s owner, OpenAI, on 21 March 2023, setting a time limit within which OpenAI was to fix the problem, failing which a libel lawsuit would follow. At the time of writing, it is unclear what happened next.
Just as shocking was the example of Jonathan Turley, a law professor at George Washington University Law School, who, according to The Washington Post, had been falsely accused by ChatGPT of being the subject of a sexual misconduct complaint. ChatGPT appeared to have fabricated, out of thin air, a non-existent Washington Post article supposedly dated 21 March 2018 in support of the lie.
But is the law of defamation the answer? If a victim of Chatbot Libel sued in Hong Kong, what would happen?
This article seeks to address some, but not all, of the potential issues.
Issue 1 – Proper Forum
Whether Hong Kong can be a proper litigation forum for any particular case of defamation does not depend on the location of the parties, but on the location of the victim’s reputation.
For that reason, it is perfectly proper for a foreign resident or foreign corporation to sue for defamation in Hong Kong if they can establish a reputation in Hong Kong. As Findlay J said in Investasia Ltd v Kodansha Co Ltd [1999] 3 HKC 515 (at 522-C): –
“If a plaintiff has a reputation in Hong Kong, as the plaintiffs in this case have undoubtedly established they have, it is not right to tell him to go elsewhere to vindicate that reputation. The place to vindicate a damaged reputation in Hong Kong is in Hong Kong, not in Japan or somewhere else.”
Similarly, it does not matter whether the wrongdoer is in town. Order 11 rule 1(1)(f) of the Rules of the High Court (Cap 4A), and of the Rules of the District Court (Cap 336H), allows a libel victim to apply for leave of the Court to serve his writ outside Hong Kong, if it can be established that the damage to his reputation was arguably sustained within Hong Kong, or resulted from an act committed within Hong Kong. As Cheung JA said in Oriental Press Group Ltd v Google LLC [2018] 1 HKLRD 1042 (at §3.35), in any such application what matters is whether there has been, at least arguably, a real and substantial tort within the jurisdiction. Where defamatory content has been created or uploaded from outside Hong Kong but read within Hong Kong, the Court does not carry out a “number crunching exercise”: it does not require proof of any arbitrary minimum number of local readers before granting leave.
One may therefore say that, while neither Brian Hood nor Jonathan Turley may ever attempt to sue in Hong Kong, plenty of other potential litigants may properly start their Chatbot Libel cases in Hong Kong, irrespective of where they are based, if they feel aggrieved by defamatory content generated by AI.
Issue 2 – Identifying Publishers
A libel victim contemplating legal action should carefully consider whom to sue. No one can sue ChatGPT itself because, just like a table, a spoon, or a car, ChatGPT is not in law a person. But what about its corporate owner?
At common law, all persons who “procured or participated in” the publication of a libel, in any way or form, regardless of the degree of responsibility, are deemed “publishers” for the purposes of the law of libel and are, prima facie, jointly and severally liable for the whole damage suffered by the victim.
As noted by Ribeiro PJ in Oriental Press Group Ltd v Fevaworks Solutions Ltd (2013) 16 HKCFAR 366 (“Fevaworks”) (at §23): –
“Thus, under the strict rule, publication of a libel, for instance by a newspaper, meant that the journalist who was the originator of the article; the editor who accepted and prepared it for publication; the printer who set the type and printed it; the wholesale distributor who disseminated it; the newsagents who sold it to the readers; and the newspaper’s proprietor who published it through its employees or agents were all jointly and severally liable for the damage to the plaintiff’s reputation.”
Prima facie, therefore, just as the corporate owner of a newspaper can be sued for all words appearing in that newspaper, one may argue that the corporate owner of any Chatbot may likewise be sued for all words generated by that Chatbot.
However, out of the strictness of this “publication rule”, the defence of innocent dissemination was born. It is now necessary for a libel victim to do more than just identify “the publishers”. He must also consider which of them were “main” publishers, and which were merely “subordinate” publishers, before deciding whom to sue, or indeed whether to sue at all.
The difference is huge. Subordinate publishers may have no legal liability for the libel they published if they can demonstrate that they did not know the published content contained a libel, and that their lack of knowledge was not due to their own lack of care: Fevaworks (§24 to §31).
In Fevaworks itself, what fell to be decided by the CFA was whether the providers, administrators, and managers of HKGolden, a popular Internet discussion forum in Hong Kong, were “main” or “subordinate” publishers of the libel in question (§55). Having reviewed cases involving newspapers, Ribeiro PJ set out (at §76) the criteria for identifying a person as a main publisher: –
“(i) that he knows or can easily acquire knowledge of the content of the article being published (although not necessarily of its defamatory nature as a matter of law); and (ii) that he has a realistic ability to control publication of such content, in other words, editorial control involving the ability and opportunity to prevent publication of such content.”
Can the proprietor of a Chatbot like ChatGPT “easily acquire” knowledge of what the Chatbot publishes? Does the proprietor of a Chatbot have a “realistic ability” to control publication? These questions can only be answered properly if the proprietor tells us substantially more about how its Chatbot works in practice.
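To make the second Fevaworks criterion concrete, consider the following minimal sketch (in Python). It is purely hypothetical: every function name is invented for illustration, and nothing is publicly known to suggest that ChatGPT or any other Chatbot works this way. The point is only that, if a proprietor chose to route every draft reply through a pre-publication filter of this kind, it would arguably have the “ability and opportunity to prevent publication”: –

    # Hypothetical illustration only: a proprietor-side gate through which a
    # Chatbot's draft output could pass before reaching the user. None of
    # these functions correspond to any real vendor's system.

    def generate_reply(prompt: str) -> str:
        """Stand-in for the underlying language model (entirely hypothetical)."""
        return "Model-generated text about " + prompt

    def violates_policy(text: str, blocklist: list[str]) -> bool:
        """Crude pre-publication check: flag output naming a blocklisted subject."""
        lowered = text.lower()
        return any(term.lower() in lowered for term in blocklist)

    def respond(prompt: str, blocklist: list[str]) -> str:
        """Release the model's draft only if it clears the filter."""
        draft = generate_reply(prompt)
        if violates_policy(draft, blocklist):
            return "I cannot comment on that subject."   # publication prevented
        return draft                                     # publication proceeds

    # A proprietor operating such a gate arguably "controls publication":
    print(respond("the mayor's bribery conviction", blocklist=["bribery"]))

Whether any real Chatbot routes its output through such a gate, and how reliably, is precisely the kind of evidence that would have to emerge before these questions can be answered.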
Assuming the proprietor of a Chatbot can only be sued as a “subordinate” publisher, and the defence of innocent dissemination can in theory be invoked, whether such a defence would be successful would depend on the evidence in the specific case in question.
Issue 3 – Identifying Recipients
For Social Media Libel, it is relatively easy to identify the recipients in question, or at least to count them anonymously. Instagram (“IG”), for example, allows its users to check who has viewed their IG stories, and LinkedIn provides “post impressions” statistics to its users. How about Chatbots like ChatGPT? Do their proprietors store chat records or other electronic data and, if so, for how long, and are they searchable?
We may have to wait for the first Chatbot Libel case to proceed to trial before more of such information becomes available.
As things stand, it is potentially difficult for a victim of Chatbot Libel to prove, with admissible evidence, the true extent of a Chatbot’s publication of defamatory words.
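To illustrate what “searchable” might mean in practice, the sketch below (again in Python, and again purely hypothetical) assumes a proprietor that logs every exchange to a simple database, something no vendor is known to do. Under that assumption, the extent of publication of a given phrase could in principle be counted: –

    # Hypothetical illustration only: a rudimentary log of Chatbot exchanges
    # that would make the extent of publication searchable.
    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect(":memory:")   # stand-in for a proprietor's data store
    conn.execute(
        "CREATE TABLE chat_log (ts TEXT, user_id TEXT, prompt TEXT, response TEXT)"
    )

    def log_exchange(user_id: str, prompt: str, response: str) -> None:
        """Record one question-and-answer exchange (hypothetical schema)."""
        conn.execute(
            "INSERT INTO chat_log VALUES (?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), user_id, prompt, response),
        )

    def count_recipients(phrase: str) -> int:
        """How many distinct users received a response containing the phrase?"""
        row = conn.execute(
            "SELECT COUNT(DISTINCT user_id) FROM chat_log WHERE response LIKE ?",
            ("%" + phrase + "%",),
        ).fetchone()
        return row[0]

    log_exchange("user-1", "Who is X?", "X was convicted of bribery.")
    log_exchange("user-2", "Tell me about X.", "X is a respected whistle-blower.")
    print(count_recipients("convicted of bribery"))   # prints 1

If anything resembling such records exists and is discoverable, a Chatbot Libel plaintiff’s evidential difficulties would be greatly reduced.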
Issue 4 – Enforcement
Finally, one must consider the practicality of enforcement. A typical Chatbot conversation is one-to-one, and in that limited sense “private”. Even if victims of Chatbot Libel manage to obtain court injunctions, unless they constantly monitor the Chatbot’s responses, in person or through agents, it can be difficult for them to detect whether defamatory content is being repeated or expanded upon by the Chatbot in question, or indeed by other Chatbots picking up and using such content. Instead of relying on injunctive relief, it may well be more important for Chatbot Libel victims to obtain public judgments correcting the falsehoods and vindicating their reputations, and to ensure such judgments are themselves available to any person who may wish to check the truth or falsity of relevant content generated by Chatbots.
Final Remarks
It has often been said that ChatGPT works a bit like Tom Riddle’s diary in J K Rowling’s Harry Potter and the Chamber of Secrets – it writes back to you in real time, but you are not quite sure why or how, or whether you should trust any of the words generated by a non-human. Harry Potter fans may well recall how Arthur Weasley, an adult wizard in Rowling’s fictional fantasy world, upon discovering what Tom Riddle’s diary could do, said this to his daughter: –
“Ginny! Haven’t I taught you anything? What have I always told you? Never trust anything that can think for itself if you can’t see where it keeps its brain!”
While in the real world it may not be necessary to destroy all Chatbots with basilisk fangs, and it may not even be necessary to commence a libel action for each and every defamatory publication they generate, we should at the very least be extremely cautious when dealing with Chatbots, fact-check all content they generate, and notify their corporate owners if they are in fact spreading fake news or other misinformation. Victims of Chatbot Libel should also be promptly notified.
In fact, staying on the Harry Potter theme, perhaps we should all do what Mad-Eye Moody (or, to be precise, his Polyjuice Potion impostor, Barty Crouch Junior) always said we should do in Harry Potter and the Goblet of Fire: – “Constant Vigilance!”
Kenneth Lam
Barrister, Jason Pow SC’s Chambers
Kenneth is a Barrister in private practice. Called to the Bar in 2004, he sat as a Deputy District Judge, and as a High Court Master. He is a Fellow of the Hong Kong Institute of Arbitrators, and a member of Jason Pow SC’s Chambers. He acted for the successful plaintiff in the Social Media Libel case of Chow Wing Kai v Liang Jing [2021] 2 HKLRD 1189. He is recognised as a Leading Junior of the Hong Kong Bar by Legal 500 (Asia Pacific, 2022).