Business

Bartleby

Machine Learnings

How do employees and customers feel about artificial intelligence?

If you ask something of Chatgpt, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong.

It is a bit like talking to an economist.

The questions raised by technologies like Chatgpt yield much more tentative answers.

But they are ones that managers ought to start asking.

One issue is how to deal with employees’ concerns about job security.

Worries are natural.

An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another.

Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance.

So does creating a sense of agency:

research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

Whether people really need to understand what is going on inside an AI is less clear.

Intuitively, being able to follow an algorithm’s reasoning should trump being unable to.

But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.

Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores.

Some used a model whose logic could be interpreted; others used a model that was more of a black box.

Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions.

Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it.

The credentials of those behind an AI matter.

How people respond differently to humans and to algorithms is a burgeoning area of research.

In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person.

They found that people reacted the same when they were being rejected.

But they felt less positively about an organisation when they were approved by an algorithm rather than a human.

The reason?

People are good at explaining away unfavourable decisions, whoever makes them.

It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine.

People want to feel special, not reduced to a data point.

In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own.

They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants.

Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants.

Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.

Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight.

The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too.

They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities.

But people with a higher body mass index did not do as well with a human coach as those who weighed less.

The authors speculate that heavier people might be more embarrassed by interacting with another person.

The picture that emerges from such research is messy.

It is also dynamic: just as technologies evolve, so will attitudes.

But it is crystal-clear on one thing.

The impact of Chatgpt and other AIs will depend not just on what they can do, but also on how they make people feel.
