Shanghai Jiao Tong University Develops AI to Identify Criminals Through Facial Recognition

The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction.

This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.

In a paper titled "Automated Inference on Criminality using Face Images," two Shanghai Jiao Tong University researchers say they fed "facial images of 1,856 real persons" into computers and found "some structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle."

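The paper itself does not include code, but the "structural features" it names reduce to a handful of distances and angles that can be computed once facial landmarks are available. The sketch below is purely illustrative: the landmark names, the chosen points, and the formulas (in particular the lip-curvature proxy) are assumptions made for this example, not the authors' actual implementation.

```python
import numpy as np

def eye_inner_corner_distance(landmarks):
    """Euclidean distance between the two inner eye corners.

    `landmarks` is assumed to be a dict of named (x, y) points from any
    facial-landmark detector; the key names here are hypothetical.
    """
    left = np.asarray(landmarks["left_eye_inner"], dtype=float)
    right = np.asarray(landmarks["right_eye_inner"], dtype=float)
    return np.linalg.norm(left - right)

def nose_mouth_angle(landmarks):
    """Angle in degrees at the nose tip, spanned by the two mouth corners."""
    nose = np.asarray(landmarks["nose_tip"], dtype=float)
    v1 = np.asarray(landmarks["mouth_left"], dtype=float) - nose
    v2 = np.asarray(landmarks["mouth_right"], dtype=float) - nose
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def lip_curvature(landmarks):
    """Rough curvature proxy: vertical offset of the upper lip's midpoint
    from the line joining the mouth corners, normalised by mouth width."""
    left = np.asarray(landmarks["mouth_left"], dtype=float)
    right = np.asarray(landmarks["mouth_right"], dtype=float)
    mid = np.asarray(landmarks["upper_lip_mid"], dtype=float)
    chord_mid = (left + right) / 2.0
    return (chord_mid[1] - mid[1]) / np.linalg.norm(right - left)

def feature_vector(landmarks):
    """Stack the three measurements the paper mentions into one vector."""
    return np.array([
        lip_curvature(landmarks),
        eye_inner_corner_distance(landmarks),
        nose_mouth_angle(landmarks),
    ])
```

Any off-the-shelf landmark detector can supply the input points; the point of the sketch is only that such features amount to a few geometric measurements, nothing more exotic.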

They conclude that "all classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic."

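A comparison behind a claim like "all classifiers perform consistently well" is typically run as a cross-validated benchmark over several standard models. The snippet below is a hedged sketch rather than the paper's protocol: the data is synthetic, and the particular estimators and the five-fold setup are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row of geometric features per face (e.g. the output
# of feature_vector() above) and a synthetic 0/1 label per face.
rng = np.random.default_rng(0)
X = rng.normal(size=(1856, 3))      # 1,856 faces, 3 geometric features
y = rng.integers(0, 2, size=1856)   # random labels, for illustration only

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm_rbf": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    # Scale the features, then estimate accuracy with 5-fold cross-validation.
    pipeline = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On random labels like these the scores sit around chance; the snippet only shows the shape of such a comparison, not the paper's reported numbers.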

In the 1920s and 1930s, the Belgians, in their role as occupying power, put together a national program to try to identify individuals' ethnic identity through phrenology, an abortive attempt to create an ethnicity scale based on measurable physical features such as height, nose width and weight.

The study contains virtually no discussion of why there is a "historical controversy" over this kind of analysis — namely, that it was debunked hundreds of years ago.

Rather, the authors trot out another discredited argument to support their main claims: that computers can't be racist, because they're computers.

Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggage, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc.

Besides the advantage of objectivity, sophisticated algorithms based on machine learning may discover very delicate and elusive nuances in facial characteristics and structures that correlate to innate personal traits.
