

VOA Learning English (Translation + Subtitles + Notes): Research Aims to Give Robots Human-Like Social Skills

Source: Kekenet English | Editor: aimee
  



Research Aims to Give Robots Human-Like Social Skills
One argument for why robots will never fully measure up to people is that they lack human-like social skills.
But researchers are experimenting with new methods to give robots social skills to better interact with humans. Two new studies provide evidence of progress in this kind of research.
One experiment was carried out by researchers from the Massachusetts Institute of Technology, MIT. The team developed a machine learning system for self-driving vehicles that is designed to learn the social characteristics of other drivers.
The researchers studied driving situations to learn how other drivers on the road were likely to behave. Since not all human drivers act the same way, the data was meant to teach the driverless car to avoid dangerous situations.
The researchers say the technology uses tools borrowed from the field of social psychology. In this experiment, scientists created a system that attempted to decide whether a person's driving style is more selfish or selfless. In road tests, self-driving vehicles equipped with the system improved their ability to predict what other drivers would do by up to 25 percent.
In one test, the self-driving car was observed making a left-hand turn. The study found the system could cause the vehicle to wait before making the turn if it predicted the oncoming drivers acted selfishly and might be unsafe. But when oncoming vehicles were judged to be selfless, the self-driving car could make the turn without delay because it saw less risk of unsafe behavior.
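The left-turn behavior described above amounts to a simple decision rule: estimate how selfish the oncoming driver is, then wait if the risk looks high. The sketch below is purely illustrative; the scoring function, threshold, and names are invented for demonstration and are not part of the actual MIT system.

```python
# Toy illustration of the left-turn decision the article describes:
# a self-driving car waits when the oncoming driver is judged selfish
# (riskier) and turns when the driver is judged selfless (cooperative).
# The 0.5 threshold is an arbitrary assumption for this sketch.

def decide_left_turn(selfishness_score: float, threshold: float = 0.5) -> str:
    """Return 'wait' for a driver judged selfish, 'turn' otherwise.

    selfishness_score: a hypothetical estimate in [0, 1] of how
    selfishly the oncoming driver behaves (1.0 = most selfish).
    """
    if selfishness_score > threshold:
        return "wait"   # oncoming driver may act unsafely; delay the turn
    return "turn"       # cooperative driver; less risk in turning now


print(decide_left_turn(0.8))  # selfish oncoming driver
print(decide_left_turn(0.2))  # selfless oncoming driver
```

In the real system, the score would come from a learned model observing the other car's driving behavior rather than being supplied directly.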
Wilko Schwarting is the lead writer of a report describing the research. He told MIT News that any robot working with or operating around humans needs to be able to effectively learn their intentions to better understand their behavior.
"People's tendencies to be collaborative or competitive often spill over into how they behave as drivers," Schwarting said. He added that the MIT experiments sought to understand whether a system could be trained to measure and predict such behaviors.
The system was designed to understand the right behaviors to use in different driving situations. For example, even the most selfless driver should know that quick and decisive action is sometimes needed to avoid danger, the researchers noted.
The MIT team plans to expand its research model to include other things that a self-driving vehicle might need to deal with. These include predictions about people walking around traffic, as well as bicycles and other things found in driving environments.
The researchers say they believe the technology could also be used in vehicles with human drivers. It could act as a warning system against other drivers judged to be behaving aggressively.

[Photo: a humanoid robot]
Another social experiment involved a game competition between humans and a robot. Researchers from Carnegie Mellon University tested whether a robot's "trash talk" would affect humans playing in a game against the machine. To "trash talk" is to talk about someone in a negative or insulting way, usually to get them to make a mistake.
A humanoid robot, named Pepper, was programmed to say things to a human opponent like "I have to say you are a terrible player." Another robot statement was, "Over the course of the game, your playing has become confused."
The study involved each of the humans playing a game with the robot 35 different times. The game was called Guards and Treasures which is used to study decision making. The study found that players criticized by the robot generally performed worse in the games than humans receiving praise.
One of the lead researchers was Fei Fang, an assistant professor at Carnegie Mellon's Institute for Software Research. She said in a news release the study represents a departure from most human-robot experiments. "This is one of the first studies of human-robot interaction in an environment where they are not cooperating," Fang said.
The research suggests that humanoid robots have the ability to affect people socially just as humans do. Fang said this ability could become more important in the future when machines and humans are expected to interact regularly.
"We can expect home assistants to be cooperative," she said. "But in situations such as online shopping, they may not have the same goals as we do."
I'm Bryan Lynn.


Key Vocabulary

social ['səuʃəl] — adj. social; n. social gathering
expand [iks'pænd] — v. to increase, elaborate, expand, swell
affect [ə'fekt] — vt. to influence, act on, move
revive [ri'vaiv] — vt. to revive, restore, recall, awaken
departure [di'pɑ:tʃə] — n. leaving, setting out, divergence
fang [fæŋ] — n. fang, sharp tooth
release [ri'li:s] — n. release, transfer, issue; vt. to release, let go
avoid [ə'vɔid] — vt. to avoid, escape
statement ['steitmənt] — n. statement, declaration
vehicle ['vi:ikl] — n. vehicle, means of transport; means, medium