Robots are not just taking human jobs, they are starting to hand them out, too

Robots are not just taking people’s jobs away, they are beginning to hand them out, too.

Go to any recruitment industry event and you will find the air is thick with terms like machine learning, big data and predictive analytics.

The argument for using these tools in recruitment is simple.

Robo-recruiters can sift through thousands of job candidates far more efficiently than humans.

They can also do it more fairly.

Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous.

Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns.

Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview.
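
To make the mechanics concrete, here is a minimal sketch of the kind of screening model the thesis describes: a classifier trained to imitate past invitation decisions and scored on held-out applications. The file name applications.csv and its columns are hypothetical stand-ins for real data, and this is not Lee's actual pipeline.

```python
# A minimal sketch of the kind of screening model described above, not
# Colin Lee's actual pipeline. "applications.csv" and its columns are
# hypothetical stand-ins for real application data (assumed numeric).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# One row per past application; the recruiters' original decision
# ("invited") is the label the model learns to imitate.
apps = pd.read_csv("applications.csv")
X = apps[["age", "years_experience", "education_level", "distance_km"]]
y = apps["invited"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Accuracy" here means agreement with past human decisions, including
# whatever biases shaped those decisions.
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```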

The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding human error and unconscious bias.

But a model like this would absorb any human biases at work in the original recruitment decisions.

For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful.
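
A fully synthetic toy run, not drawn from the research itself, shows how directly such a model absorbs the pattern: train it on historical decisions that favoured mid-career candidates and it reproduces that preference.

```python
# Fully synthetic demonstration that a model trained on biased
# historical decisions reproduces the bias. Nothing here comes from
# the real study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 10_000
age = rng.integers(18, 66, n)    # applicants aged 18-65
skill = rng.normal(0.0, 1.0, n)

# Imagine past recruiters favoured mid-career candidates: the label
# depends on age even for equally skilled applicants.
invited = ((skill > 0) & (age >= 26) & (age <= 49)).astype(int)

model = RandomForestClassifier(random_state=0)
model.fit(np.column_stack([age, skill]), invited)

# Two equally skilled candidates, aged 30 and 60: the model has learned
# to screen the older one out, because that is what its labels did.
probs = model.predict_proba([[30, 1.0], [60, 1.0]])[:, 1]
print(f"invite probability at 30: {probs[0]:.2f}; at 60: {probs[1]:.2f}")
```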

You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate.

Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them.

Even then, algorithms can wind up discriminating.

In a paper published this year, academics Solon Barocas and Andrew Selbst use the example of an employer who wants to select those candidates most likely to stay for the long term.

If the historical data show women tend to stay in jobs for a significantly shorter time than men (possibly because they leave when they have children), the algorithm will probably discriminate against them on the basis of attributes that are a reliable proxy for gender.
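
One practical audit for such proxies, sketched below against the same hypothetical applications.csv as earlier, is to drop the protected attribute and test whether the remaining features can reconstruct it; if they can, a screening model can still discriminate through them.

```python
# A sketch of a proxy audit on the same hypothetical applications.csv:
# drop the protected attribute, then test whether the remaining features
# can reconstruct it. High recoverability means proxies remain.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

apps = pd.read_csv("applications.csv")
features = apps.drop(columns=["gender", "invited"])
is_female = (apps["gender"] == "female").astype(int)  # hypothetical coding

auc = cross_val_score(RandomForestClassifier(random_state=0),
                      features, is_female, cv=5, scoring="roc_auc").mean()

# AUC near 0.5: gender is essentially not recoverable. Well above 0.5:
# features such as postcode or career history still encode gender, and
# a screening model can discriminate through them.
print(f"gender recoverable from remaining features: AUC {auc:.2f}")
```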

Or how about the distance a candidate lives from the office? That might well be a good predictor of attendance or longevity at the company; but it could also inadvertently discriminate against some groups, since neighbourhoods can have different ethnic or age profiles.

These scenarios raise the tricky question of whether it is wrong to discriminate even when it is rational and unintended. This is murky legal territory.

In the US, the doctrine of disparate impact outlaws ostensibly neutral employment practices that disproportionately harm protected classes, even if the employer does not intend to discriminate.
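
The doctrine comes with a commonly used quantitative screen, the EEOC's "four-fifths rule": a selection rate for a protected group below 80 per cent of the most-selected group's rate is generally treated as evidence of adverse impact. The short sketch below computes the ratio for an invented hiring outcome.

```python
# The usual screen under disparate impact doctrine is the EEOC
# "four-fifths rule": a protected group's selection rate below 80% of
# the most-selected group's rate is taken as evidence of adverse impact.
def impact_ratio(selected_a: int, total_a: int,
                 selected_b: int, total_b: int) -> float:
    """Ratio of group A's selection rate to group B's."""
    return (selected_a / total_a) / (selected_b / total_b)

# Invented example: an algorithm invites 40 of 200 women (20%) and
# 90 of 300 men (30%) to interview.
ratio = impact_ratio(40, 200, 90, 300)
print(f"impact ratio: {ratio:.2f}")   # 0.67, below the 0.8 threshold
```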

But employers can successfully defend themselves if they can prove there is a strong business case for what they are doing.

If the intention of the algorithm is simply to recruit the best people for the job, that may be a good enough defence.

Still, it is clear that employers who want a more diverse workforce cannot assume that all they need to do is turn over recruitment to a computer.

If that is what they want, they will need to use data more imaginatively.

Instead of taking their own company culture as a given and looking for the candidates statistically most likely to prosper within it, for example, they could seek out data about where (and in which circumstances) a more diverse set of workers thrive.

Machine learning will not propel your workforce into the future if the only thing it learns from is your past.
