Business
Bartleby
Machine Learnings
How do employees and customers feel about artificial intelligence?
If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong.
It is a bit like talking to an economist.
The questions raised by technologies like ChatGPT yield much more tentative answers.
But they are ones that managers ought to start asking.
One issue is how to deal with employees’ concerns about job security.
Worries are natural.
An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another.
Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance.
So does creating a sense of agency:
research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.
Whether people really need to understand what is going on inside an AI is less clear.
Intuitively, being able to follow an algorithm’s reasoning should trump being unable to.
But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.
Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores.
Some used a model whose logic could be interpreted; others used a model that was more of a black box.
Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions.
Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it.
The credentials of those behind an AI matter.
The different ways in which people respond to humans and to algorithms are a burgeoning area of research.
In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person.
They found that people reacted the same when they were being rejected.
But they felt less positively about an organisation when they were approved by an algorithm rather than a human.
The reason?
People are good at explaining away unfavourable decisions, whoever makes them.
It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine.
People want to feel special, not reduced to a data point.
In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own.
They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants.
Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants.
Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.
Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight.
The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too.
They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities.
But people with a higher body mass index did not do as well with a human coach as those who weighed less.
The authors speculate that heavier people might be more embarrassed by interacting with another person.
The picture that emerges from such research is messy.
It is also dynamic: just as technologies evolve, so will attitudes.
But it is crystal-clear on one thing.
The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.