Artificial Intelligence and Nuclear Weapons: Which Is More Dangerous?

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence.

The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, these machines may not take long to spiral out of control.

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”

Artificial-intelligence proponents argue that these things would never happen and that programmers are going to build safeguards. But let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said artificial intelligence is “potentially more dangerous than nukes.” And Stephen Hawking, one of the smartest people on earth, wrote that successful A.I. “would be the biggest event in human history. Unfortunately, it might also be the last.” There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.

Two main problems with artificial intelligence lead people like Mr. Musk and Mr. Hawking to worry. The first, more near-future fear, is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.

The second, which is a longer way off, is that once we build systems that are as intelligent as humans, these intelligent machines will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control as the rate of growth and expansion of machines would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.” “So when there is something smarter than us on the planet, it will rule over us on the planet.”

What makes it harder to comprehend is that we don’t actually know what superintelligent machines will look or act like. “Can a submarine swim? Yes, but it doesn’t swim like a fish,” Mr. Barrat said. “Does an airplane fly? Yes, but not like a bird. Artificial intelligence won’t be like us, but it will be the ultimate intellectual version of us.”

Perhaps the scariest setting is how these technologies will be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.

Bonnie Docherty, a lecturer on law at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence — which is already underway — is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place now before we get to a point where machines are killing people on the battlefield.

“If this type of technology is not stopped now, it will lead to an arms race,” said Ms. Docherty, who has written several reports on the dangers of killer robots. “If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”

So how do we ensure that all these doomsday situations don’t come to fruition? In some instances, we likely won’t be able to stop them.

But we can hinder some of the potential chaos by following the lead of Google. Earlier this year when the search-engine giant acquired DeepMind, a neuroscience-inspired artificial intelligence company based in London, the two companies put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.

Demis Hassabis, founder and chief executive of DeepMind, said in a video interview that anyone building artificial intelligence, including governments and companies, should do the same thing. “They should definitely be thinking about the ethical consequences of what they do,” Dr. Hassabis said. “Way ahead of time.”
