2017年12月英語六級閱讀真題及答案 第2套 仔細(xì)閱讀2篇


Passage One
Questions 46 to 50 are based on the following passage.

In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams "Save her! Save her!" the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah's 11 percent. The robot's decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?
Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where the order would conflict with law 1; and 3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov's robots—they don't have to think, judge, or value. They don't have to like humans or believe that hurting them is wrong or bad. They simply don't do it.
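Because each law yields only to the laws above it, the three laws amount to a fixed priority check rather than moral reasoning. As a minimal Python sketch of that idea (the Action fields and the permitted function are hypothetical illustrations, not from Asimov or the passage):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate robot action (hypothetical model, for illustration only)."""
    harms_human: bool     # would this action injure a human?
    allows_harm: bool     # would it let a human come to harm through inaction?
    disobeys_order: bool  # does it contradict an order from a human?
    endangers_self: bool  # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    """Check an action against the three laws in strict priority order."""
    if action.harms_human or action.allows_harm:
        return False  # Law 1: inviolable, checked first
    if action.disobeys_order:
        return False  # Law 2: obedience, overridden only by Law 1
    if action.endangers_self:
        return False  # Law 3: self-preservation, overridden by Laws 1 and 2
    return True       # no thinking, judging, or valuing: just rule checks

# A robot ordered to hurt someone: Law 1 blocks it before Law 2 is consulted.
print(permitted(Action(harms_human=True, allows_harm=False,
                       disobeys_order=False, endangers_self=False)))  # False
```

The passage's point is visible in the structure: the robot never weighs values; it simply falls through a fixed sequence of prohibitions.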
The robot who rescues Spooner's life in I, Robot follows Asimov's zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what's in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.
Whether it's possible to program a robot with safeguards such as Asimov's laws is debatable. A word such as "harm" is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov's fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It's doubtful that a computer program can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (替身) called "H-bots" from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both "die." The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what's best for humanity, especially if it can't calculate survival odds?
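The failure the experiment exposes, one rescuer facing two victims with no ranking criterion, can be stated as a short decision rule. Below is a hedged Python sketch (the function name, signature, and test values are invented for illustration and are not the Bristol lab's actual program); it also reproduces the movie robot's odds-based choice from the opening paragraph:

```python
def choose_rescue(imperiled: list[str],
                  survival_odds: dict[str, float] | None = None) -> str | None:
    """Pick which endangered proxy to save (hypothetical model).

    With one victim the choice is forced. With several, the rule needs a
    ranking criterion, such as the survival odds the movie's robot used
    (45 percent for Spooner vs. 11 percent for Sarah). Without one, it
    has no basis for a decision, mirroring the robot that choked and
    let both H-bots "die."
    """
    if len(imperiled) == 1:
        return imperiled[0]        # a single victim forces the choice
    if survival_odds:
        # rank by estimated chance of survival, highest first
        return max(imperiled, key=lambda h: survival_odds.get(h, 0.0))
    return None                    # no criterion at all: the robot freezes

print(choose_rescue(["H-bot-1"]))                       # H-bot-1
print(choose_rescue(["H-bot-1", "H-bot-2"]))            # None (deadlock)
print(choose_rescue(["Spooner", "Sarah"],
                    {"Spooner": 0.45, "Sarah": 0.11}))  # Spooner
```

The sketch makes the passage's closing question concrete: once survival odds cannot be calculated, the ranking step disappears and only a moral criterion could break the tie.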
46. What question does the example in the movie raise?
A) Whether robots can reach better decisions.
B) Whether robots follow Asimov's zeroth law.
C) How robots may make bad judgments.
D) How robots should be programmed.
47. What does the author think of Asimov's three laws of robotics?
A) They are apparently divorced from reality.
B) They did not follow the coding system of robotics.
C) They laid a solid foundation for robotics.
D) They did not take moral issues into consideration.
48. What does the author say about Asimov's robots?
A) They know what is good or bad for human beings.
B) They are programmed not to hurt human beings.
C) They perform duties in their owners' best interest.
D) They stop working when a moral issue is involved.
49. What does the author want to say by mentioning the word "harm" in Asimov's laws?
A) Abstract concepts are hard to program.
B) It is hard for robots to make decisions.
C) Robots may do harm in certain situations.
D) Asimov's laws use too many vague terms.
50. What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
A) Robots can be made as intelligent as human beings some day.
B) Robots can have moral issues encoded into their programs.
C) Robots can have trouble making decisions in complex scenarios.
D) Robots can be programmed to perceive potential perils.

Key Vocabulary

security [si'kju:riti]  n. safety, protective measure, guarantee, pledge; bond, securities
potential [pə'tenʃəl]  adj. possible, potential; n. potential, capability
widespread ['waidspred]  adj. widely distributed, widespread, prevalent
capacity [kə'pæsiti]  n. ability, capacity, volume; qualification, position; adj. (attributive) filled to capacity
unprecedented [ʌn'presidəntid]  adj. unprecedented
complicated ['kɔmplikeitid]  adj. complicated, hard to understand; past tense and past participle of the verb complicate
current ['kʌrənt]  n. current (of water, air, or electricity), trend; adj. circulating, current
embrace [im'breis]  v. embrace, include, encircle, accept, espouse; n. embrace
emergence [i'mə:dʒəns]  n. emergence, appearance
calculate ['kælkjuleit]  v. calculate, estimate, compute, plan, reckon