Beware the "weaponisation" of social media

Disinformation, propagated on the internet, influenced last year’s US presidential election. The degree of influence is impossible to gauge with precision, of course, but there is no denying the vast scale of malicious attempts by the Internet Research Agency — a Kremlin-linked troll farm — to sway US public opinion. Revelations on that subject in US congressional hearings this week should give pause to anyone who cares about democracy.

Among these was Facebook’s acknowledgment that 150m Americans, including Instagram users, may have viewed at least one fake news post originating with the Russian agency, which took out a total of 3,000 paid ads. That figure says much about the evolution of the media landscape. A few platforms can now reach audiences of previously unimaginable size.

In the 13 years since then-undergraduate Mark Zuckerberg launched Facebook as a college networking site, the company has grown to become the largest global distributor of news, both real and fake. That ascent comes with responsibility. Social media platforms on this scale, for all the good they can do, can be weaponised — in some cases by hostile state actors.

The tech titans have insisted that they are neutral platforms, with no role as arbiters of truth or social acceptability. They are rightly wary of drawing accusations of bias. At the same time, though, Facebook and others have tacitly acknowledged that they have a role in policing content by striking out posts that promote terrorism and crimes such as child pornography. The ambiguity they have nurtured — that they can be both neutral and upstanding — is becoming increasingly untenable.

It is a matter of public interest that the big platforms become more transparent and that clearer standards are in place concerning the flagging and removal of destructive content — be it slanderous, criminal, or designed to subvert democracy.

The problem may have become globally understood with the 2016 US presidential election, but it did not start there. Ukraine’s government said this week that it warned Facebook in 2015 that Russia was conducting disinformation campaigns on its platform. That should have been a wake-up call, and prompted a much faster response.

The solution is not to subject platform companies to the same standards publishers face: that would destroy much of the value that they offer to society (while wrecking their businesses). But allowing Facebook and its peer companies to determine their responsibilities to the public is not acceptable, either. To start, when the platforms receive direct payment for political advertising, there is no reason they should not be held to the same standard as publishers. They should be as transparent about the funding of such advertising as other media.

Unpaid content presents more difficult questions. In the US, internet companies still benefit from the blanket protection provided by the very broadly worded section 230 of the Communications Decency Act, which has been interpreted as relieving them of all responsibility for content that appears on their sites.

The act’s protection is too sweeping and needs sharpening. In particular, it should set a reasonable standard for platforms’ responsibility to remove malicious content once they have been made aware of it.

As for the questions of what constitutes malice, and who decides: the answer is the same authority that, in democratic societies, has always made decisions about what is acceptable communication in the public square — the elected representatives of the people.
